pax_global_header00006660000000000000000000000064146261774700014530gustar00rootroot0000000000000052 comment=2b21de55a609e1fe93c53ada95ed16f2a6777d3c Azure-WALinuxAgent-2b21de5/000077500000000000000000000000001462617747000155305ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/.gitattributes000066400000000000000000000047261462617747000204340ustar00rootroot00000000000000############################################################################### # Set default behavior to automatically normalize line endings. ############################################################################### * text=auto ############################################################################### # Set default behavior for command prompt diff. # # This is need for earlier builds of msysgit that does not have it on by # default for csharp files. # Note: This is only used by command line ############################################################################### #*.cs diff=csharp ############################################################################### # Set the merge driver for project and solution files # # Merging from the command prompt will add diff markers to the files if there # are conflicts (Merging from VS is not affected by the settings below, in VS # the diff markers are never inserted). Diff markers may cause the following # file extensions to fail to load in VS. An alternative would be to treat # these files as binary and thus will always conflict and require user # intervention with every merge. To do so, just uncomment the entries below ############################################################################### #*.sln merge=binary #*.csproj merge=binary #*.vbproj merge=binary #*.vcxproj merge=binary #*.vcproj merge=binary #*.dbproj merge=binary #*.fsproj merge=binary #*.lsproj merge=binary #*.wixproj merge=binary #*.modelproj merge=binary #*.sqlproj merge=binary #*.wwaproj merge=binary ############################################################################### # behavior for image files # # image files are treated as binary by default. ############################################################################### #*.jpg binary #*.png binary #*.gif binary ############################################################################### # diff behavior for common document formats # # Convert binary document formats to text before diffing them. This feature # is only available from the command line. Turn it on by uncommenting the # entries below. ############################################################################### #*.doc diff=astextplain #*.DOC diff=astextplain #*.docx diff=astextplain #*.DOCX diff=astextplain #*.dot diff=astextplain #*.DOT diff=astextplain #*.pdf diff=astextplain #*.PDF diff=astextplain #*.rtf diff=astextplain #*.RTF diff=astextplain Azure-WALinuxAgent-2b21de5/.github/000077500000000000000000000000001462617747000170705ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/.github/CONTRIBUTING.md000066400000000000000000000105271462617747000213260ustar00rootroot00000000000000# Contributing to Linux Guest Agent First, thank you for contributing to WALinuxAgent repository! ## Basics If you would like to become an active contributor to this project, please follow the instructions provided in [Microsoft Azure Projects Contribution Guidelines](http://azure.github.io/guidelines/). 
## Table of Contents [Before starting](#before-starting) - [Github basics](#github-basics) - [Code of Conduct](#code-of-conduct) [Making Changes](#making-changes) - [Pull Requests](#pull-requests) - [Pull Request Guidelines](#pull-request-guidelines) - [Cleaning up commits](#cleaning-up-commits) - [General guidelines](#general-guidelines) - [Testing guidelines](#testing-guidelines) ## Before starting ### Github basics #### GitHub workflow If you don't have experience with Git and Github, some of the terminology and process can be confusing. [Here's a guide to understanding Github](https://guides.github.com/introduction/flow/). #### Forking the Azure/Guest-Configuration-Extension repository Unless you are working with multiple contributors on the same file, we ask that you fork the repository and submit your Pull Request from there. [Here's a guide to forks in Github](https://guides.github.com/activities/forking/). ### Code of Conduct This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. ## Making Changes ### Pull Requests You can find all of the pull requests that have been opened in the [Pull Request](https://github.com/Azure/Guest-Configuration-Extension/pulls) section of the repository. To open your own pull request, click [here](https://github.com/Azure/WALinuxAgent/compare). When creating a pull request, keep the following in mind: - Make sure you are pointing to the fork and branch that your changes were made in - Choose the correct branch you want your pull request to be merged into - The pull request template that is provided **should be filled out**; this is not something that should just be deleted or ignored when the pull request is created - Deleting or ignoring this template will elongate the time it takes for your pull request to be reviewed ### Pull Request Guidelines A pull request template will automatically be included as a part of your PR. Please fill out the checklist as specified. Pull requests **will not be reviewed** unless they include a properly completed checklist. #### Cleaning up Commits If you are thinking about making a large change, **break up the change into small, logical, testable chunks, and organize your pull requests accordingly**. Often when a pull request is created with a large number of files changed and/or a large number of lines of code added and/or removed, GitHub will have a difficult time opening up the changes on their site. This forces the Azure Guest-Configuration-Extension team to use separate software to do a code review on the pull request. If you find yourself creating a pull request and are unable to see all the changes on GitHub, we recommend **splitting the pull request into multiple pull requests that are able to be reviewed on GitHub**. If splitting up the pull request is not an option, we recommend **creating individual commits for different parts of the pull request, which can be reviewed individually on GitHub**. For more information on cleaning up the commits in a pull request, such as how to rebase, squash, and cherry-pick, click [here](https://github.com/Azure/azure-powershell/blob/dev/documentation/cleaning-up-commits.md). #### General guidelines The following guidelines must be followed in **EVERY** pull request that is opened. 
- Title of the pull request is clear and informative - There are a small number of commits that each have an informative message - A description of the changes the pull request makes is included, and a reference to the issue being resolved, if the change address any - All files have the Microsoft copyright header #### Testing Guidelines The following guidelines must be followed in **EVERY** pull request that is opened. - Pull request includes test coverage for the included changesAzure-WALinuxAgent-2b21de5/.github/ISSUE_TEMPLATE/000077500000000000000000000000001462617747000212535ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/.github/ISSUE_TEMPLATE/bug_report.md000066400000000000000000000016351462617747000237520ustar00rootroot00000000000000--- name: Bug report about: Create a report to help us improve title: "[BUG] Bug Title" --- **Describe the bug: A clear and concise description of what the bug is.** Note: Please add some context which would help us understand the problem better 1. Section of the log where the error occurs. 2. Serial console output 3. Steps to reproduce the behavior. **Distro and WALinuxAgent details (please complete the following information):** - Distro and Version: [e.g. Ubuntu 16.04] - WALinuxAgent version [e.g. 2.2.40, you can copy the output of `waagent --version`, more info [here](https://github.com/Azure/WALinuxAgent/wiki/FAQ#what-does-goal-state-agent-mean-in-waagent---version-output) ] **Additional context** Add any other context about the problem here. **Log file attached** If possible, please provide the full /var/log/waagent.log file to help us understand the problem better and get the context of the issue. Azure-WALinuxAgent-2b21de5/.github/PULL_REQUEST_TEMPLATE.md000066400000000000000000000022151462617747000226710ustar00rootroot00000000000000 ## Description Issue # --- ### PR information - [ ] The title of the PR is clear and informative. - [ ] There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For information on cleaning up the commits in your pull request, [see this page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md). - [ ] If applicable, the PR references the bug/issue that it fixes in the description. 
- [ ] New Unit tests were added for the changes made ### Quality of Code and Contribution Guidelines - [ ] I have read the [contribution guidelines](https://github.com/Azure/WALinuxAgent/blob/master/.github/CONTRIBUTING.md).Azure-WALinuxAgent-2b21de5/.github/codecov.yml000066400000000000000000000000461462617747000212350ustar00rootroot00000000000000github_checks: annotations: false Azure-WALinuxAgent-2b21de5/.github/workflows/000077500000000000000000000000001462617747000211255ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/.github/workflows/ci_pr.yml000066400000000000000000000102431462617747000227440ustar00rootroot00000000000000name: CI Unit tests on: push: branches: [ "*" ] pull_request: branches: [ "*" ] workflow_dispatch: jobs: test-python-2_6-and-3_4-versions: strategy: fail-fast: false matrix: include: - python-version: 2.6 - python-version: 3.4 name: "Python ${{ matrix.python-version }} Unit Tests" runs-on: ubuntu-20.04 container: image: ubuntu:16.04 volumes: - /home/waagent:/home/waagent defaults: run: shell: bash -l {0} env: NOSEOPTS: "--verbose" steps: - uses: actions/checkout@v3 - name: Install Python ${{ matrix.python-version }} run: | apt-get update apt-get install -y curl bzip2 sudo python3 curl https://dcrdata.blob.core.windows.net/python/python-${{ matrix.python-version }}.tar.bz2 -o python-${{ matrix.python-version }}.tar.bz2 sudo tar xjvf python-${{ matrix.python-version }}.tar.bz2 --directory / - name: Test with nosetests run: | if [[ ${{ matrix.python-version }} == 2.6 ]]; then source /home/waagent/virtualenv/python2.6.9/bin/activate else source /home/waagent/virtualenv/python3.4.8/bin/activate fi ./ci/nosetests.sh exit $? test-python-2_7: strategy: fail-fast: false name: "Python 2.7 Unit Tests" runs-on: ubuntu-20.04 defaults: run: shell: bash -l {0} env: NOSEOPTS: "--verbose" steps: - uses: actions/checkout@v3 - name: Install Python 2.7 run: | apt-get update apt-get install -y curl bzip2 sudo curl https://dcrdata.blob.core.windows.net/python/python-2.7.tar.bz2 -o python-2.7.tar.bz2 sudo tar xjvf python-2.7.tar.bz2 --directory / - name: Test with nosetests run: | source /home/waagent/virtualenv/python2.7.16/bin/activate ./ci/nosetests.sh exit $? 
test-current-python-versions: strategy: fail-fast: false matrix: include: - python-version: 3.5 PYLINTOPTS: "--rcfile=ci/3.6.pylintrc --ignore=tests_e2e,makepkg.py" - python-version: 3.6 PYLINTOPTS: "--rcfile=ci/3.6.pylintrc --ignore=tests_e2e" - python-version: 3.7 PYLINTOPTS: "--rcfile=ci/3.6.pylintrc --ignore=tests_e2e" - python-version: 3.8 PYLINTOPTS: "--rcfile=ci/3.6.pylintrc --ignore=tests_e2e" - python-version: 3.9 PYLINTOPTS: "--rcfile=ci/3.6.pylintrc" additional-nose-opts: "--with-coverage --cover-erase --cover-inclusive --cover-branches --cover-package=azurelinuxagent" name: "Python ${{ matrix.python-version }} Unit Tests" runs-on: ubuntu-20.04 env: PYLINTOPTS: ${{ matrix.PYLINTOPTS }} PYLINTFILES: "azurelinuxagent setup.py makepkg.py tests tests_e2e" NOSEOPTS: "--with-timer ${{ matrix.additional-nose-opts }}" PYTHON_VERSION: ${{ matrix.python-version }} steps: - name: Checkout WALinuxAgent repo uses: actions/checkout@v3 - name: Setup Python ${{ matrix.python-version }} uses: actions/setup-python@v4 with: python-version: ${{ matrix.python-version }} - name: Install dependencies id: install-dependencies run: | sudo env "PATH=$PATH" python -m pip install --upgrade pip sudo env "PATH=$PATH" pip install -r requirements.txt sudo env "PATH=$PATH" pip install -r test-requirements.txt - name: Run pylint run: | pylint $PYLINTOPTS --jobs=0 $PYLINTFILES - name: Test with nosetests if: success() || (failure() && steps.install-dependencies.outcome == 'success') run: | ./ci/nosetests.sh exit $? - name: Compile Coverage if: matrix.python-version == 3.9 run: | echo looking for coverage files : ls -alh | grep -i coverage sudo env "PATH=$PATH" coverage combine coverage.*.data sudo env "PATH=$PATH" coverage xml sudo env "PATH=$PATH" coverage report - name: Upload Coverage if: matrix.python-version == 3.9 uses: codecov/codecov-action@v3 with: file: ./coverage.xml Azure-WALinuxAgent-2b21de5/.gitignore000066400000000000000000000020461462617747000175220ustar00rootroot00000000000000# Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class # Virtualenv py3env/ # C extensions *.so # Distribution / packaging .Python env/ build/ develop-eggs/ dist/ downloads/ eggs/ parts/ sdist/ var/ *.egg-info/ .installed.cfg *.egg # PyCharm .idea/ .idea_modules/ # PyInstaller # Usually these files are written by a python script from a template # before PyInstaller builds the exe, so as to inject date/other infos into it. *.manifest *.spec # Installer logs pip-log.txt pip-delete-this-directory.txt # Unit test / coverage reports htmlcov/ .tox/ .coverage .cache nosetests.xml coverage.xml # Translations *.mo *.pot # Django stuff: *.log # Sphinx documentation docs/_build/ # PyBuilder target/ waagentc *.pyproj *.sln *.suo waagentc bin/waagent2.0c # rope project .ropeproject/ # mac osx specific files .DS_Store ### VirtualEnv template # Virtualenv # http://iamzed.com/2009/05/07/a-primer-on-virtualenv/ .Python pyvenv.cfg .venv pip-selfcheck.json # virtualenv venv/ ENV/ # dotenv .env # pyenv .python-version .vscode/Azure-WALinuxAgent-2b21de5/CODEOWNERS000066400000000000000000000012661462617747000171300ustar00rootroot000000000000001 # See https://help.github.com/articles/about-codeowners/ # for more info about CODEOWNERS file # It uses the same pattern rule for gitignore file # https://git-scm.com/docs/gitignore#_pattern_format # Provisioning Agent # The Azure Linux Provisioning team is interested in getting notifications # when there are requests for changes in the provisioning agent. 
For any # questions, please feel free to reach out to thstring@microsoft.com. /azurelinuxagent/pa/ @trstringer @anhvoms /tests/pa/ @trstringer @anhvoms # # RDMA # /azurelinuxagent/common/rdma.py @longlimsft /azurelinuxagent/pa/rdma/ @longlimsft # # Linux Agent team # * @narrieta @ZhidongPeng @nagworld9 @maddieford @gabstamsft Azure-WALinuxAgent-2b21de5/LICENSE.txt000066400000000000000000000261301462617747000173550ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2016 Microsoft Corporation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Azure-WALinuxAgent-2b21de5/MAINTENANCE.md000066400000000000000000000013131462617747000175720ustar00rootroot00000000000000## Microsoft Azure Linux Agent Maintenance Guide ### Version rules * Production releases are public * Test releases are for internal use * Production versions use only [major].[minor].[revision] * Test versions use [major].[minor].[revision].[build] * Test a.b.c.0 is equivalent to Prod a.b.c * Publishing to Production requires incrementing the revision and dropping the build number * We do not use pre-release labels on any builds ### Version updates * The version of the agent can be found at https://github.com/Azure/WALinuxAgent/blob/master/azurelinuxagent/common/version.py#L53 assigned to AGENT_VERSION * Update the version here and send for PR before declaring a release via GitHub Azure-WALinuxAgent-2b21de5/MANIFEST000066400000000000000000000005701462617747000166630ustar00rootroot00000000000000# file GENERATED by distutils, do NOT edit README setup.py bin/waagent config/waagent.conf config/waagent.logrotate test/test_logger.py walinuxagent/__init__.py walinuxagent/agent.py walinuxagent/conf.py walinuxagent/envmonitor.py walinuxagent/extension.py walinuxagent/install.py walinuxagent/logger.py walinuxagent/protocol.py walinuxagent/provision.py walinuxagent/util.py Azure-WALinuxAgent-2b21de5/MANIFEST.in000066400000000000000000000001141462617747000172620ustar00rootroot00000000000000recursive-include bin * recursive-include init * recursive-include config * Azure-WALinuxAgent-2b21de5/NOTICE000066400000000000000000000002411462617747000164310ustar00rootroot00000000000000Microsoft Azure Linux Agent Copyright 2012 Microsoft Corporation This product includes software developed at Microsoft Corporation (http://www.microsoft.com/). Azure-WALinuxAgent-2b21de5/README.md000066400000000000000000000601351462617747000170140ustar00rootroot00000000000000 # Microsoft Azure Linux Agent ## Linux distributions support Our daily automation tests most of the [Linux distributions supported by Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros); the Agent can be used on other distributions as well, but development, testing and support for those are done by the open source community. Testing is done using the develop branch, which can be unstable. For a stable build please use the master branch instead. [![CodeCov](https://codecov.io/gh/Azure/WALinuxAgent/branch/develop/graph/badge.svg)](https://codecov.io/gh/Azure/WALinuxAgent/branch/develop) ## Introduction The Microsoft Azure Linux Agent (waagent) manages Linux provisioning and VM interaction with the Azure Fabric Controller. 
It provides the following functionality for Linux IaaS deployments: * Image Provisioning * Creation of a user account * Configuring SSH authentication types * Deployment of SSH public keys and key pairs * Setting the host name * Publishing the host name to the platform DNS * Reporting SSH host key fingerprint to the platform * Resource Disk Management * Formatting and mounting the resource disk * Configuring swap space * Networking * Manages routes to improve compatibility with platform DHCP servers * Ensures the stability of the network interface name * Kernel * Configure virtual NUMA (disable for kernel <2.6.37) * Configure SCSI timeouts for the root device (which could be remote) * Diagnostics * Console redirection to the serial port * SCVMM Deployments * Detect and bootstrap the VMM agent for Linux when running in a System Center Virtual Machine Manager 2012R2 environment * VM Extension * Inject component authored by Microsoft and Partners into Linux VM (IaaS) to enable software and configuration automation * VM Extension reference implementation on [GitHub](https://github.com/Azure/azure-linux-extensions) ## Communication The information flow from the platform to the agent occurs via two channels: * A boot-time attached DVD for IaaS deployments. This DVD includes an OVF-compliant configuration file that includes all provisioning information other than the actual SSH keypairs. * A TCP endpoint exposing a REST API used to obtain deployment and topology configuration. ### HTTP Proxy The Agent will use an HTTP proxy if provided via the `http_proxy` (for `http` requests) or `https_proxy` (for `https` requests) environment variables. Due to limitations of Python, the agent *does not* support HTTP proxies requiring authentication. Similarly, the Agent will bypass the proxy if the environment variable `no_proxy` is set. Note that the way to define those environment variables for the Agent service varies across different distros. For distros that use systemd, a common approach is to use Environment or EnvironmentFile in the [Service] section of the service definition, for example using an override or a drop-in file (see "systemctl edit" for overrides). Example ```bash # cat /etc/systemd/system/walinuxagent.service.d/http-proxy.conf [Service] Environment="http_proxy=http://proxy.example.com:80/" Environment="https_proxy=http://proxy.example.com:80/" # ``` The Agent passes its environment to the VM Extensions it executes, including `http_proxy` and `https_proxy`, so defining a proxy for the Agent will also define it for the VM Extensions. The [`HttpProxy.Host` and `HttpProxy.Port`](#httpproxyhost-httpproxyport) configuration variables, if used, override the environment settings. Note that this configuration variables are local to the Agent process and are not passed to VM Extensions. ## Requirements The following systems have been tested and are known to work with the Azure Linux Agent. Please note that this list may differ from the official list of supported systems on the Microsoft Azure Platform as described [here](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros). Waagent depends on some system packages in order to function properly: * Python 2.6+ * OpenSSL 1.0+ * OpenSSH 5.3+ * Filesystem utilities: sfdisk, fdisk, mkfs, parted * Password tools: chpasswd, sudo * Text processing tools: sed, grep * Network tools: ip-route ## Installation Installing via your distribution's package repository is the only method that is supported. 
You can install from source for more advanced options, such as installing to a custom location or creating custom images. Installing from source, though, may override customizations done to the Agent by your distribution, and is meant only for advanced users. We provide very limited support for this method. To install from source, you can use **setuptools**: ```bash sudo python setup.py install --register-service ``` For Python 3, use: ```bash sudo python3 setup.py install --register-service ``` You can view more installation options by running: ```bash sudo python setup.py install --help ``` The agent's log file is kept at `/var/log/waagent.log`. Lastly, you can also customize your own RPM or DEB packages using the configuration samples provided in the deb and rpm sections below. This method is also meant for advanced users and we provide very limited support for it. ## Upgrade Upgrading via your distribution's package repository or using automatic updates are the only supported methods. More information can be found here: [Update Linux Agent](https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/update-linux-agent) To upgrade the Agent from source, you can use **setuptools**. Upgrading from source is meant for advanced users and we provide very limited support for it. ```bash sudo python setup.py install --force ``` Restart waagent service,for most of linux distributions: ```bash sudo service waagent restart ``` For Ubuntu, use: ```bash sudo service walinuxagent restart ``` For CoreOS, use: ```bash sudo systemctl restart waagent ``` ## Command line options ### Flags `-verbose`: Increase verbosity of specified command `-force`: Skip interactive confirmation for some commands ### Commands `-help`: Lists the supported commands and flags. `-deprovision`: Attempt to clean the system and make it suitable for re-provisioning, by deleting the following: * All SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in the configuration file) * Nameserver configuration in /etc/resolv.conf * Root password from /etc/shadow (if Provisioning.DeleteRootPassword is 'y' in the configuration file) * Cached DHCP client leases * Resets host name to localhost.localdomain **WARNING!** Deprovision does not guarantee that the image is cleared of all sensitive information and suitable for redistribution. `-deprovision+user`: Performs everything under deprovision (above) and also deletes the last provisioned user account and associated data. `-version`: Displays the version of waagent `-serialconsole`: Configures GRUB to mark ttyS0 (the first serial port) as the boot console. This ensures that kernel bootup logs are sent to the serial port and made available for debugging. `-daemon`: Run waagent as a daemon to manage interaction with the platform. This argument is specified to waagent in the waagent init script. `-start`: Run waagent as a background process `-collect-logs [-full]`: Runs the log collector utility that collects relevant agent logs for debugging and stores them in the agent folder on disk. Exact location will be shown when run. Use flag `-full` for more exhaustive log collection. ## Configuration A configuration file (/etc/waagent.conf) controls the actions of waagent. Blank lines and lines whose first character is a `#` are ignored (end-of-line comments are *not* supported). 
A sample configuration file is shown below: ```yml Extensions.Enabled=y Extensions.GoalStatePeriod=6 Provisioning.Agent=auto Provisioning.DeleteRootPassword=n Provisioning.RegenerateSshHostKeyPair=y Provisioning.SshHostKeyPairType=rsa Provisioning.MonitorHostName=y Provisioning.DecodeCustomData=n Provisioning.ExecuteCustomData=n Provisioning.PasswordCryptId=6 Provisioning.PasswordCryptSaltLength=10 ResourceDisk.Format=y ResourceDisk.Filesystem=ext4 ResourceDisk.MountPoint=/mnt/resource ResourceDisk.MountOptions=None ResourceDisk.EnableSwap=n ResourceDisk.EnableSwapEncryption=n ResourceDisk.SwapSizeMB=0 Logs.Verbose=n Logs.Collect=y Logs.CollectPeriod=3600 OS.AllowHTTP=n OS.RootDeviceScsiTimeout=300 OS.EnableFIPS=n OS.OpensslPath=None OS.SshClientAliveInterval=180 OS.SshDir=/etc/ssh HttpProxy.Host=None HttpProxy.Port=None ``` The various configuration options are described in detail below. Configuration options are of three types : Boolean, String or Integer. The Boolean configuration options can be specified as "y" or "n". The special keyword "None" may be used for some string type configuration entries as detailed below. ### Configuration File Options #### __Extensions.Enabled__ _Type: Boolean_ _Default: y_ This allows the user to enable or disable the extension handling functionality in the agent. Valid values are "y" or "n". If extension handling is disabled, the goal state will still be processed and VM status is still reported, but only every 5 minutes. Extension config within the goal state will be ignored. Note that functionality such as password reset, ssh key updates and backups depend on extensions. Only disable this if you do not need extensions at all. _Note_: disabling extensions in this manner is not the same as running completely without the agent. In order to do that, the `provisionVMAgent` flag must be set at provisioning time, via whichever API is being used. We will provide more details on this on our wiki when it is generally available. #### __Extensions.WaitForCloudInit__ _Type: Boolean_ _Default: n_ Waits for cloud-init to complete (cloud-init status --wait) before executing VM extensions. Both cloud-init and VM extensions are common ways to customize a VM during initial deployment. By default, the agent will start executing extensions while cloud-init may still be in the 'config' stage and won't wait for the 'final' stage to complete. Cloud-init and extensions may execute operations that conflict with each other (for example, both of them may try to install packages). Setting this option to 'y' ensures that VM extensions are executed only after cloud-init has completed all its stages. Note that using this option requires creating a custom image with the value of this option set to 'y', in order to ensure that the wait is performed during the initial deployment of the VM. #### __Extensions.WaitForCloudInitTimeout__ _Type: Integer_ _Default: 3600_ Timeout in seconds for the Agent to wait on cloud-init. If the timeout elapses, the Agent will continue executing VM extensions. See Extensions.WaitForCloudInit for more details. #### __Extensions.GoalStatePeriod__ _Type: Integer_ _Default: 6_ How often to poll for new goal states (in seconds) and report the status of the VM and extensions. Goal states describe the desired state of the extensions on the VM. _Note_: setting up this parameter to more than a few minutes can make the state of the VM be reported as unresponsive/unavailable on the Azure portal. 
Also, this setting affects how fast the agent starts executing extensions. #### __AutoUpdate.UpdateToLatestVersion__ _Type: Boolean_ _Default: y_ Enables auto-update of the Extension Handler. The Extension Handler is responsible for managing extensions and reporting VM status. The core functionality of the agent is contained in the Extension Handler, and we encourage users to enable this option in order to maintain an up to date version. When this option is enabled, the Agent will install new versions when they become available. When disabled, the Agent will not install any new versions, but it will use the most recent version already installed on the VM. _Notes_: 1. This option was added on version 2.10.0.8 of the Agent. For previous versions, see AutoUpdate.Enabled. 2. If both options are specified in waagent.conf, AutoUpdate.UpdateToLatestVersion overrides the value set for AutoUpdate.Enabled. 3. Changing config option requires a service restart to pick up the updated setting. For more information on the agent version, see our [FAQ](https://github.com/Azure/WALinuxAgent/wiki/FAQ#what-does-goal-state-agent-mean-in-waagent---version-output).
For more information on the agent update, see our [FAQ](https://github.com/Azure/WALinuxAgent/wiki/FAQ#how-auto-update-works-for-extension-handler).
For more information on the AutoUpdate.UpdateToLatestVersion vs AutoUpdate.Enabled, see our [FAQ](https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion).
#### __AutoUpdate.Enabled__ _Type: Boolean_ _Default: y_ Enables auto-update of the Extension Handler. This flag is supported for legacy reasons and we strongly recommend using AutoUpdate.UpdateToLatestVersion instead. The difference between these 2 flags is that, when set to 'n', AutoUpdate.Enabled will use the version of the Extension Handler that is pre-installed on the image, while AutoUpdate.UpdateToLatestVersion will use the most recent version that has already been installed on the VM (via auto-update). On most distros the default value is 'y'. #### __Provisioning.Agent__ _Type: String_ _Default: auto_ Choose which provisioning agent to use (or allow waagent to figure it out by specifying "auto"). Possible options are "auto" (default), "waagent", "cloud-init", or "disabled". #### __Provisioning.Enabled__ (*removed in 2.2.45*) _Type: Boolean_ _Default: y_ This allows the user to enable or disable the provisioning functionality in the agent. Valid values are "y" or "n". If provisioning is disabled, SSH host and user keys in the image are preserved and any configuration specified in the Azure provisioning API is ignored. _Note_: This configuration option has been removed and has no effect. waagent now auto-detects cloud-init as a provisioning agent (with an option to override with `Provisioning.Agent`). #### __Provisioning.MonitorHostName__ _Type: Boolean_ _Default: n_ Monitor host name changes and publish changes via DHCP requests. #### __Provisioning.MonitorHostNamePeriod__ _Type: Integer_ _Default: 30_ How often to monitor host name changes (in seconds). This setting is ignored if MonitorHostName is not set. #### __Provisioning.UseCloudInit__ _Type: Boolean_ _Default: n_ This options enables / disables support for provisioning by means of cloud-init. When true ("y"), the agent will wait for cloud-init to complete before installing extensions and processing the latest goal state. _Provisioning.Enabled_ must be disabled ("n") for this option to have an effect. Setting _Provisioning.Enabled_ to true ("y") overrides this option and runs the built-in agent provisioning code. _Note_: This configuration option has been removed and has no effect. waagent now auto-detects cloud-init as a provisioning agent (with an option to override with `Provisioning.Agent`). #### __Provisioning.DeleteRootPassword__ _Type: Boolean_ _Default: n_ If set, the root password in the /etc/shadow file is erased during the provisioning process. #### __Provisioning.RegenerateSshHostKeyPair__ _Type: Boolean_ _Default: y_ If set, all SSH host key pairs (ecdsa, dsa and rsa) are deleted during the provisioning process from /etc/ssh/. And a single fresh key pair is generated. The encryption type for the fresh key pair is configurable by the Provisioning.SshHostKeyPairType entry. Please note that some distributions will re-create SSH key pairs for any missing encryption types when the SSH daemon is restarted (for example, upon a reboot). #### __Provisioning.SshHostKeyPairType__ _Type: String_ _Default: rsa_ This can be set to an encryption algorithm type that is supported by the SSH daemon on the VM. The typically supported values are "rsa", "dsa" and "ecdsa". Note that "putty.exe" on Windows does not support "ecdsa". So, if you intend to use putty.exe on Windows to connect to a Linux deployment, please use "rsa" or "dsa". 
#### __Provisioning.MonitorHostName__ _Type: Boolean_ _Default: y_ If set, waagent will monitor the Linux VM for hostname changes (as returned by the "hostname" command) and automatically update the networking configuration in the image to reflect the change. In order to push the name change to the DNS servers, networking will be restarted in the VM. This will result in brief loss of Internet connectivity. #### __Provisioning.DecodeCustomData__ _Type: Boolean_ _Default: n_ If set, waagent will decode CustomData from Base64. #### __Provisioning.ExecuteCustomData__ _Type: Boolean_ _Default: n_ If set, waagent will execute CustomData after provisioning. #### __Provisioning.PasswordCryptId__ _Type: String_ _Default: 6_ Algorithm used by crypt when generating password hash. * 1 - MD5 * 2a - Blowfish * 5 - SHA-256 * 6 - SHA-512 #### __Provisioning.PasswordCryptSaltLength__ _Type: String_ _Default: 10_ Length of random salt used when generating password hash. #### __ResourceDisk.Format__ _Type: Boolean_ _Default: y_ If set, the resource disk provided by the platform will be formatted and mounted by waagent if the filesystem type requested by the user in "ResourceDisk.Filesystem" is anything other than "ntfs". A single partition of type Linux (83) will be made available on the disk. Note that this partition will not be formatted if it can be successfully mounted. #### __ResourceDisk.Filesystem__ _Type: String_ _Default: ext4_ This specifies the filesystem type for the resource disk. Supported values vary by Linux distribution. If the string is X, then mkfs.X should be present on the Linux image. SLES 11 images should typically use 'ext3'. BSD images should use 'ufs2' here. #### __ResourceDisk.MountPoint__ _Type: String_ _Default: /mnt/resource_ This specifies the path at which the resource disk is mounted. #### __ResourceDisk.MountOptions__ _Type: String_ _Default: None_ Specifies disk mount options to be passed to the mount -o command. This is a comma separated list of values, ex. 'nodev,nosuid'. See mount(8) for details. #### __ResourceDisk.EnableSwap__ _Type: Boolean_ _Default: n_ If set, a swap file (/swapfile) is created on the resource disk and added to the system swap space. #### __ResourceDisk.EnableSwapEncryption__ _Type: Boolean_ _Default: n_ If set, the swap file (/swapfile) is mounted as an encrypted filesystem (flag supported only on FreeBSD.) #### __ResourceDisk.SwapSizeMB__ _Type: Integer_ _Default: 0_ The size of the swap file in megabytes. #### __Logs.Verbose__ _Type: Boolean_ _Default: n_ If set, log verbosity is boosted. Waagent logs to /var/log/waagent.log and leverages the system logrotate functionality to rotate logs. #### __Logs.Collect__ _Type: Boolean_ _Default: y_ If set, agent logs will be periodically collected and uploaded to a secure location for improved supportability. NOTE: This feature relies on the agent's resource usage features (cgroups); this flag will not take effect on any distro not supported. #### __Logs.CollectPeriod__ _Type: Integer_ _Default: 3600_ This configures how frequently to collect and upload logs. Default is each hour. NOTE: This only takes effect if the Logs.Collect option is enabled. #### __OS.AllowHTTP__ _Type: Boolean_ _Default: n_ If SSL support is not compiled into Python, the agent will fail all HTTPS requests. You can set this option to 'y' to make the agent fall-back to HTTP, instead of failing the requests. NOTE: Allowing HTTP may unintentionally expose secure data. 
#### __OS.EnableRDMA__ _Type: Boolean_ _Default: n_ If set, the agent will attempt to install and then load an RDMA kernel driver that matches the version of the firmware on the underlying hardware. #### __OS.EnableFIPS__ _Type: Boolean_ _Default: n_ If set, the agent will emit into the environment "OPENSSL_FIPS=1" when executing OpenSSL commands. This signals OpenSSL to use any installed FIPS-compliant libraries. Note that the agent itself has no FIPS-specific code. _If no FIPS-compliant certificates are installed, then enabling this option will cause all OpenSSL commands to fail._ #### __OS.MonitorDhcpClientRestartPeriod__ _Type: Integer_ _Default: 30_ The agent monitor restarts of the DHCP client and restores network rules when it happens. This setting determines how often (in seconds) to monitor for restarts. #### __OS.RootDeviceScsiTimeout__ _Type: Integer_ _Default: 300_ This configures the SCSI timeout in seconds on the root device. If not set, the system defaults are used. #### __OS.RootDeviceScsiTimeoutPeriod__ _Type: Integer_ _Default: 30_ How often to set the SCSI timeout on the root device (in seconds). This setting is ignored if RootDeviceScsiTimeout is not set. #### __OS.OpensslPath__ _Type: String_ _Default: None_ This can be used to specify an alternate path for the openssl binary to use for cryptographic operations. #### __OS.RemovePersistentNetRulesPeriod__ _Type: Integer_ _Default: 30_ How often to remove the udev rules for persistent network interface names (75-persistent-net-generator.rules and /etc/udev/rules.d/70-persistent-net.rules) (in seconds) #### __OS.SshClientAliveInterval__ _Type: Integer_ _Default: 180_ This values sets the number of seconds the agent uses for the SSH ClientAliveInterval configuration option. #### __OS.SshDir__ _Type: String_ _Default: `/etc/ssh`_ This option can be used to override the normal location of the SSH configuration directory. #### __HttpProxy.Host, HttpProxy.Port__ _Type: String_ _Default: None_ If set, the agent will use this proxy server for HTTP/HTTPS requests. These values *will* override the `http_proxy` or `https_proxy` environment variables. Lastly, `HttpProxy.Host` is required (if to be used) and `HttpProxy.Port` is optional. #### __CGroups.EnforceLimits__ _Type: Boolean_ _Default: y_ If set, the agent will attempt to set cgroups limits for cpu and memory for the agent process itself as well as extension processes. See the wiki for further details on this. #### __CGroups.Excluded__ _Type: String_ _Default: customscript,runcommand_ The list of extensions which will be excluded from cgroups limits. This should be comma separated. ### Telemetry WALinuxAgent collects usage data and sends it to Microsoft to help improve our products and services. The data collected is used to track service health and assist with Azure support requests. Data collected does not include any personally identifiable information. Read our [privacy statement](http://go.microsoft.com/fwlink/?LinkId=521839) to learn more. WALinuxAgent does not support disabling telemetry at this time. WALinuxAgent must be removed to disable telemetry collection. If you need this feature, please open an issue in GitHub and explain your requirement. ### Appendix We do not maintain packaging information in this repo but some samples are shown below as a reference. See the downstream distribution repositories for officially maintained packaging. 
#### deb packages The official Ubuntu WALinuxAgent package can be found [here](https://launchpad.net/ubuntu/+source/walinuxagent). Run once: 1. Install required packages ```bash sudo apt-get -y install ubuntu-dev-tools pbuilder python-all debhelper ``` 2. Create the pbuilder environment ```bash sudo pbuilder create --debootstrapopts --variant=buildd ``` 3. Obtain `waagent.dsc` from a downstream package repo To compile the package, from the top-most directory: 1. Build the source package ```bash dpkg-buildpackage -S ``` 2. Build the package ```bash sudo pbuilder build waagent.dsc ``` 3. Fetch the built package, usually from `/var/cache/pbuilder/result` #### rpm packages The instructions below describe how to build an rpm package. 1. Install setuptools ```bash curl https://bootstrap.pypa.io/ez_setup.py -o - | python ``` 2. The following command will build the binary and source RPMs: ```bash python setup.py bdist_rpm ``` ----- This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. Azure-WALinuxAgent-2b21de5/SECURITY.md000066400000000000000000000053051462617747000173240ustar00rootroot00000000000000 ## Security Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. ## Reporting Security Issues **Please do not report security vulnerabilities through public GitHub issues.** Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) 
* Full paths of source file(s) related to the manifestation of the issue * The location of the affected source code (tag/branch/commit or direct URL) * Any special configuration required to reproduce the issue * Step-by-step instructions to reproduce the issue * Proof-of-concept or exploit code (if possible) * Impact of the issue, including how an attacker might exploit the issue This information will help us triage your report more quickly. If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. ## Preferred Languages We prefer all communications to be in English. ## Policy Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). Azure-WALinuxAgent-2b21de5/__main__.py000066400000000000000000000012521462617747000176220ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.agent as agent agent.main() Azure-WALinuxAgent-2b21de5/azurelinuxagent/000077500000000000000000000000001462617747000207555ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/__init__.py000066400000000000000000000011651462617747000230710ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/azurelinuxagent/agent.py000066400000000000000000000425251462617747000224350ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # """ Module agent """ from __future__ import print_function import os import re import subprocess import sys import threading from azurelinuxagent.ga import logcollector, cgroupconfigurator from azurelinuxagent.ga.cgroup import AGENT_LOG_COLLECTOR, CpuCgroup, MemoryCgroup from azurelinuxagent.ga.cgroupapi import SystemdCgroupsApi import azurelinuxagent.common.conf as conf import azurelinuxagent.common.event as event import azurelinuxagent.common.logger as logger from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.logcollector import LogCollector, OUTPUT_RESULTS_FILE_PATH from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import fileutil, textutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.utils.networkutil import AddFirewallRules from azurelinuxagent.common.version import AGENT_NAME, AGENT_LONG_VERSION, AGENT_VERSION, \ DISTRO_NAME, DISTRO_VERSION, \ PY_VERSION_MAJOR, PY_VERSION_MINOR, \ PY_VERSION_MICRO, GOAL_STATE_AGENT_VERSION, \ get_daemon_version, set_daemon_version from azurelinuxagent.ga.collect_logs import CollectLogsHandler, get_log_collector_monitor_handler from azurelinuxagent.pa.provision.default import ProvisionHandler class AgentCommands(object): """ This is the list of all commands that the Linux Guest Agent supports """ DeprovisionUser = "deprovision+user" Deprovision = "deprovision" Daemon = "daemon" Start = "start" RegisterService = "register-service" RunExthandlers = "run-exthandlers" Version = "version" ShowConfig = "show-configuration" Help = "help" CollectLogs = "collect-logs" SetupFirewall = "setup-firewall" Provision = "provision" class Agent(object): def __init__(self, verbose, conf_file_path=None): """ Initialize agent running environment. """ self.conf_file_path = conf_file_path self.osutil = get_osutil() # Init stdout log level = logger.LogLevel.VERBOSE if verbose else logger.LogLevel.INFO logger.add_logger_appender(logger.AppenderType.STDOUT, level) # Init config conf_file_path = self.conf_file_path \ if self.conf_file_path is not None \ else self.osutil.get_agent_conf_file_path() conf.load_conf_from_file(conf_file_path) # Init log verbose = verbose or conf.get_logs_verbose() level = logger.LogLevel.VERBOSE if verbose else logger.LogLevel.INFO logger.add_logger_appender(logger.AppenderType.FILE, level, path=conf.get_agent_log_file()) # echo the log to /dev/console if the machine will be provisioned if conf.get_logs_console() and not ProvisionHandler.is_provisioned(): self.__add_console_appender(level) if event.send_logs_to_telemetry(): logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=event.add_log_event) ext_log_dir = conf.get_ext_log_dir() try: if os.path.isfile(ext_log_dir): raise Exception("{0} is a file".format(ext_log_dir)) if not os.path.isdir(ext_log_dir): fileutil.mkdir(ext_log_dir, mode=0o755, owner=self.osutil.get_root_username()) except Exception as e: logger.error( "Exception occurred while creating extension " "log directory {0}: {1}".format(ext_log_dir, e)) # Init event reporter # Note that the reporter is not fully initialized here yet. Some telemetry fields are filled with data # originating from the goal state or IMDS, which requires a WireProtocol instance. Once a protocol # has been established, those fields must be explicitly initialized using # initialize_event_logger_vminfo_common_parameters(). 
Any events created before that initialization # will contain dummy values on those fields. event.init_event_status(conf.get_lib_dir()) event_dir = os.path.join(conf.get_lib_dir(), event.EVENTS_DIRECTORY) event.init_event_logger(event_dir) event.enable_unhandled_err_dump("WALA") def __add_console_appender(self, level): logger.add_logger_appender(logger.AppenderType.CONSOLE, level, path="/dev/console") def daemon(self): """ Run agent daemon """ set_daemon_version(AGENT_VERSION) logger.set_prefix("Daemon") threading.current_thread().setName("Daemon") child_args = None \ if self.conf_file_path is None \ else "-configuration-path:{0}".format(self.conf_file_path) from azurelinuxagent.daemon import get_daemon_handler daemon_handler = get_daemon_handler() daemon_handler.run(child_args=child_args) def provision(self): """ Run provision command """ from azurelinuxagent.pa.provision import get_provision_handler provision_handler = get_provision_handler() provision_handler.run() def deprovision(self, force=False, deluser=False): """ Run deprovision command """ from azurelinuxagent.pa.deprovision import get_deprovision_handler deprovision_handler = get_deprovision_handler() deprovision_handler.run(force=force, deluser=deluser) def register_service(self): """ Register agent as a service """ print("Register {0} service".format(AGENT_NAME)) self.osutil.register_agent_service() print("Stop {0} service".format(AGENT_NAME)) self.osutil.stop_agent_service() print("Start {0} service".format(AGENT_NAME)) self.osutil.start_agent_service() def run_exthandlers(self, debug=False): """ Run the update and extension handler """ logger.set_prefix("ExtHandler") threading.current_thread().setName("ExtHandler") # # Agents < 2.2.53 used to echo the log to the console. Since the extension handler could have been started by # one of those daemons, output a message indicating that output to the console will stop, otherwise users # may think that the agent died if they noticed that output to the console stops abruptly. # # Feel free to remove this code if telemetry shows there are no more agents <= 2.2.53 in the field. # if conf.get_logs_console() and get_daemon_version() < FlexibleVersion("2.2.53"): self.__add_console_appender(logger.LogLevel.INFO) try: logger.info(u"The agent will now check for updates and then will process extensions. 
Output to /dev/console will be suspended during those operations.") finally: logger.disable_console_output() from azurelinuxagent.ga.update import get_update_handler update_handler = get_update_handler() update_handler.run(debug) def show_configuration(self): configuration = conf.get_configuration() for k in sorted(configuration.keys()): print("{0} = {1}".format(k, configuration[k])) def collect_logs(self, is_full_mode): logger.set_prefix("LogCollector") if is_full_mode: logger.info("Running log collector mode full") else: logger.info("Running log collector mode normal") # Check the cgroups unit log_collector_monitor = None cgroups_api = SystemdCgroupsApi() cpu_cgroup_path, memory_cgroup_path = cgroups_api.get_process_cgroup_paths("self") if CollectLogsHandler.is_enabled_monitor_cgroups_check(): cpu_slice_matches = (cgroupconfigurator.LOGCOLLECTOR_SLICE in cpu_cgroup_path) memory_slice_matches = (cgroupconfigurator.LOGCOLLECTOR_SLICE in memory_cgroup_path) if not cpu_slice_matches or not memory_slice_matches: logger.info("The Log Collector process is not in the proper cgroups:") if not cpu_slice_matches: logger.info("\tunexpected cpu slice") if not memory_slice_matches: logger.info("\tunexpected memory slice") sys.exit(logcollector.INVALID_CGROUPS_ERRCODE) def initialize_cgroups_tracking(cpu_cgroup_path, memory_cgroup_path): cpu_cgroup = CpuCgroup(AGENT_LOG_COLLECTOR, cpu_cgroup_path) msg = "Started tracking cpu cgroup {0}".format(cpu_cgroup) logger.info(msg) cpu_cgroup.initialize_cpu_usage() memory_cgroup = MemoryCgroup(AGENT_LOG_COLLECTOR, memory_cgroup_path) msg = "Started tracking memory cgroup {0}".format(memory_cgroup) logger.info(msg) return [cpu_cgroup, memory_cgroup] try: log_collector = LogCollector(is_full_mode) # Running log collector resource(CPU, Memory) monitoring only if agent starts the log collector. # If Log collector start by any other means, then it will not be monitored. if CollectLogsHandler.is_enabled_monitor_cgroups_check(): tracked_cgroups = initialize_cgroups_tracking(cpu_cgroup_path, memory_cgroup_path) log_collector_monitor = get_log_collector_monitor_handler(tracked_cgroups) log_collector_monitor.run() archive = log_collector.collect_logs_and_get_archive() logger.info("Log collection successfully completed. Archive can be found at {0} " "and detailed log output can be found at {1}".format(archive, OUTPUT_RESULTS_FILE_PATH)) except Exception as e: logger.error("Log collection completed unsuccessfully. Error: {0}".format(ustr(e))) logger.info("Detailed log output can be found at {0}".format(OUTPUT_RESULTS_FILE_PATH)) sys.exit(1) finally: if log_collector_monitor is not None: log_collector_monitor.stop() @staticmethod def setup_firewall(firewall_metadata): print("Setting up firewall for the WALinux Agent with args: {0}".format(firewall_metadata)) try: AddFirewallRules.add_iptables_rules(firewall_metadata['wait'], firewall_metadata['dst_ip'], firewall_metadata['uid']) print("Successfully set the firewall rules") except Exception as error: print("Unable to add firewall rules. Error: {0}".format(ustr(error))) sys.exit(1) def main(args=None): """ Parse command line arguments, exit with usage() on error. 
Invoke different methods according to different command """ if args is None: args = [] if len(args) <= 0: args = sys.argv[1:] command, force, verbose, debug, conf_file_path, log_collector_full_mode, firewall_metadata = parse_args(args) if command == AgentCommands.Version: version() elif command == AgentCommands.Help: print(usage()) elif command == AgentCommands.Start: start(conf_file_path=conf_file_path) else: try: agent = Agent(verbose, conf_file_path=conf_file_path) if command == AgentCommands.DeprovisionUser: agent.deprovision(force, deluser=True) elif command == AgentCommands.Deprovision: agent.deprovision(force, deluser=False) elif command == AgentCommands.Provision: agent.provision() elif command == AgentCommands.RegisterService: agent.register_service() elif command == AgentCommands.Daemon: agent.daemon() elif command == AgentCommands.RunExthandlers: agent.run_exthandlers(debug) elif command == AgentCommands.ShowConfig: agent.show_configuration() elif command == AgentCommands.CollectLogs: agent.collect_logs(log_collector_full_mode) elif command == AgentCommands.SetupFirewall: agent.setup_firewall(firewall_metadata) except Exception as e: logger.error(u"Failed to run '{0}': {1}", command, textutil.format_exception(e)) def parse_args(sys_args): """ Parse command line arguments """ cmd = AgentCommands.Help force = False verbose = False debug = False conf_file_path = None log_collector_full_mode = False firewall_metadata = { "dst_ip": None, "uid": None, "wait": "" } regex_cmd_format = "^([-/]*){0}" for arg in sys_args: if arg == "": # Don't parse an empty parameter continue m = re.match("^(?:[-/]*)configuration-path:([\w/\.\-_]+)", arg) # pylint: disable=W1401 if not m is None: conf_file_path = m.group(1) if not os.path.exists(conf_file_path): print("Error: Configuration file {0} does not exist".format( conf_file_path), file=sys.stderr) print(usage()) sys.exit(1) elif re.match("^([-/]*)deprovision\\+user", arg): cmd = AgentCommands.DeprovisionUser elif re.match(regex_cmd_format.format(AgentCommands.Deprovision), arg): cmd = AgentCommands.Deprovision elif re.match(regex_cmd_format.format(AgentCommands.Daemon), arg): cmd = AgentCommands.Daemon elif re.match(regex_cmd_format.format(AgentCommands.Start), arg): cmd = AgentCommands.Start elif re.match(regex_cmd_format.format(AgentCommands.RegisterService), arg): cmd = AgentCommands.RegisterService elif re.match(regex_cmd_format.format(AgentCommands.RunExthandlers), arg): cmd = AgentCommands.RunExthandlers elif re.match(regex_cmd_format.format(AgentCommands.Version), arg): cmd = AgentCommands.Version elif re.match(regex_cmd_format.format("verbose"), arg): verbose = True elif re.match(regex_cmd_format.format("debug"), arg): debug = True elif re.match(regex_cmd_format.format("force"), arg): force = True elif re.match(regex_cmd_format.format(AgentCommands.ShowConfig), arg): cmd = AgentCommands.ShowConfig elif re.match("^([-/]*)(help|usage|\\?)", arg): cmd = AgentCommands.Help elif re.match(regex_cmd_format.format(AgentCommands.CollectLogs), arg): cmd = AgentCommands.CollectLogs elif re.match(regex_cmd_format.format("full"), arg): log_collector_full_mode = True elif re.match(regex_cmd_format.format(AgentCommands.SetupFirewall), arg): cmd = AgentCommands.SetupFirewall elif re.match(regex_cmd_format.format("dst_ip=(?P<dst_ip>[\\d.]{7,})"), arg): firewall_metadata['dst_ip'] = re.match(regex_cmd_format.format("dst_ip=(?P<dst_ip>[\\d.]{7,})"), arg).group( 'dst_ip') elif re.match(regex_cmd_format.format("uid=(?P<uid>[\\d]+)"), arg): firewall_metadata['uid'] = re.match(regex_cmd_format.format("uid=(?P<uid>[\\d]+)"), arg).group('uid') elif re.match(regex_cmd_format.format("(w|wait)$"), arg): firewall_metadata['wait'] = "-w" else: cmd = AgentCommands.Help break return cmd, force, verbose, debug, conf_file_path, log_collector_full_mode, firewall_metadata def version(): """ Show agent version """ print(("{0} running on {1} {2}".format(AGENT_LONG_VERSION, DISTRO_NAME, DISTRO_VERSION))) print("Python: {0}.{1}.{2}".format(PY_VERSION_MAJOR, PY_VERSION_MINOR, PY_VERSION_MICRO)) print("Goal state agent: {0}".format(GOAL_STATE_AGENT_VERSION)) def usage(): """ Return agent usage message """ s = "\n" s += ("usage: {0} [-verbose] [-force] [-help] " "-configuration-path:<path to configuration file> " "-deprovision[+user]|-register-service|-version|-daemon|-start|" "-run-exthandlers|-show-configuration|-collect-logs [-full]|-setup-firewall [-dst_ip=<IP> -uid=<uid> [-w/--wait]]" "").format(sys.argv[0]) s += "\n" return s def start(conf_file_path=None): """ Start agent daemon in a background process and set stdout/stderr to /dev/null """ args = [sys.argv[0], '-daemon'] if conf_file_path is not None: args.append('-configuration-path:{0}'.format(conf_file_path)) with open(os.devnull, 'w') as devnull: subprocess.Popen(args, stdout=devnull, stderr=devnull) if __name__ == '__main__': main() Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/000077500000000000000000000000001462617747000222455ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/AgentGlobals.py000066400000000000000000000023221462617747000251600ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ class AgentGlobals(object): """ This class is used for setting AgentGlobals which can be used all throughout the Agent. """ GUID_ZERO = "00000000-0000-0000-0000-000000000000" # # Some modules (e.g. telemetry) require an up-to-date container ID. We update this variable each time we # fetch the goal state. # _container_id = GUID_ZERO @staticmethod def get_container_id(): return AgentGlobals._container_id @staticmethod def update_container_id(container_id): AgentGlobals._container_id = container_id Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/__init__.py000066400000000000000000000011661462617747000243620ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/agent_supported_feature.py000066400000000000000000000126431462617747000275430ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common import conf class SupportedFeatureNames(object): """ Enum defining the feature names for all features that the agent supports """ MultiConfig = "MultipleExtensionsPerHandler" ExtensionTelemetryPipeline = "ExtensionTelemetryPipeline" FastTrack = "FastTrack" GAVersioningGovernance = "VersioningGovernance" # Guest Agent Versioning class AgentSupportedFeature(object): """ Interface for defining all features that the Linux Guest Agent supports, and for reporting whether they are supported back to CRP """ def __init__(self, name, version="1.0", supported=False): self.__name = name self.__version = version self.__supported = supported @property def name(self): return self.__name @property def version(self): return self.__version @property def is_supported(self): return self.__supported class _MultiConfigFeature(AgentSupportedFeature): __NAME = SupportedFeatureNames.MultiConfig __VERSION = "1.0" __SUPPORTED = True def __init__(self): super(_MultiConfigFeature, self).__init__(name=_MultiConfigFeature.__NAME, version=_MultiConfigFeature.__VERSION, supported=_MultiConfigFeature.__SUPPORTED) class _ETPFeature(AgentSupportedFeature): __NAME = SupportedFeatureNames.ExtensionTelemetryPipeline __VERSION = "1.0" __SUPPORTED = True def __init__(self): super(_ETPFeature, self).__init__(name=self.__NAME, version=self.__VERSION, supported=self.__SUPPORTED) class _GAVersioningGovernanceFeature(AgentSupportedFeature): """ CRP drives the RSM update if the agent reports that it supports RSM upgrades via this flag; otherwise CRP falls back to the largest version. The agent does not report this supported-feature flag if auto-update is disabled, or if an old version of the agent that does not understand GA versioning is running. Note: Windows in particular needs this flag to report to CRP that the GA does not support the updates, so Linux adopted the same flag to have a common solution.
""" __NAME = SupportedFeatureNames.GAVersioningGovernance __VERSION = "1.0" __SUPPORTED = conf.get_auto_update_to_latest_version() def __init__(self): super(_GAVersioningGovernanceFeature, self).__init__(name=self.__NAME, version=self.__VERSION, supported=self.__SUPPORTED) # This is the list of features that Agent supports and we advertise to CRP __CRP_ADVERTISED_FEATURES = { SupportedFeatureNames.MultiConfig: _MultiConfigFeature(), SupportedFeatureNames.GAVersioningGovernance: _GAVersioningGovernanceFeature() } # This is the list of features that Agent supports and we advertise to Extensions __EXTENSION_ADVERTISED_FEATURES = { SupportedFeatureNames.ExtensionTelemetryPipeline: _ETPFeature() } def get_supported_feature_by_name(feature_name): if feature_name in __CRP_ADVERTISED_FEATURES: return __CRP_ADVERTISED_FEATURES[feature_name] if feature_name in __EXTENSION_ADVERTISED_FEATURES: return __EXTENSION_ADVERTISED_FEATURES[feature_name] raise NotImplementedError("Feature with Name: {0} not found".format(feature_name)) def get_agent_supported_features_list_for_crp(): """ List of features that the GuestAgent currently supports (like FastTrack, MultiConfig, etc). We need to send this list as part of Status reporting to inform CRP of all the features the agent supports. :return: Dict containing all CRP supported features with the key as their names and the AgentFeature object as the value if they are supported by the Agent Eg: { MultipleExtensionsPerHandler: _MultiConfigFeature() } """ return dict((name, feature) for name, feature in __CRP_ADVERTISED_FEATURES.items() if feature.is_supported) def get_agent_supported_features_list_for_extensions(): """ List of features that the GuestAgent currently supports (like Extension Telemetry Pipeline, etc) needed by Extensions. We need to send this list as environment variables when calling extension commands to inform Extensions of all the features the agent supports. :return: Dict containing all Extension supported features with the key as their names and the AgentFeature object as the value if the feature is supported by the Agent. Eg: { CRPSupportedFeatureNames.ExtensionTelemetryPipeline: _ETPFeature() } """ return dict((name, feature) for name, feature in __EXTENSION_ADVERTISED_FEATURES.items() if feature.is_supported) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/conf.py000066400000000000000000000525331462617747000235540ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # """ Module conf loads and parses configuration file """ # pylint: disable=W0105 import os import os.path from azurelinuxagent.common.utils.fileutil import read_file #pylint: disable=R0401 from azurelinuxagent.common.exception import AgentConfigError DISABLE_AGENT_FILE = 'disable_agent' class ConfigurationProvider(object): """ Parse and store key:values in /etc/waagent.conf. 
""" def __init__(self): self.values = dict() def load(self, content): if not content: raise AgentConfigError("Can't not parse empty configuration") for line in content.split('\n'): if not line.startswith("#") and "=" in line: parts = line.split('=', 1) if len(parts) < 2: continue key = parts[0].strip() value = parts[1].split('#')[0].strip("\" ").strip() self.values[key] = value if value != "None" else None @staticmethod def _get_default(default): if hasattr(default, '__call__'): return default() return default def get(self, key, default_value): """ Retrieves a string parameter by key and returns its value. If not found returns the default value, or if the default value is a callable returns the result of invoking the callable. """ val = self.values.get(key) return val if val is not None else self._get_default(default_value) def get_switch(self, key, default_value): """ Retrieves a switch parameter by key and returns its value as a boolean. If not found returns the default value, or if the default value is a callable returns the result of invoking the callable. """ val = self.values.get(key) if val is not None and val.lower() == 'y': return True elif val is not None and val.lower() == 'n': return False return self._get_default(default_value) def get_int(self, key, default_value): """ Retrieves an int parameter by key and returns its value. If not found returns the default value, or if the default value is a callable returns the result of invoking the callable. """ try: return int(self.values.get(key)) except TypeError: return self._get_default(default_value) except ValueError: return self._get_default(default_value) def is_present(self, key): """ Returns True if the given flag present in the configuration file, False otherwise. """ return self.values.get(key) is not None __conf__ = ConfigurationProvider() def load_conf_from_file(conf_file_path, conf=__conf__): """ Load conf file from: conf_file_path """ if os.path.isfile(conf_file_path) == False: raise AgentConfigError(("Missing configuration in {0}" "").format(conf_file_path)) try: content = read_file(conf_file_path) conf.load(content) except IOError as err: raise AgentConfigError(("Failed to load conf file:{0}, {1}" "").format(conf_file_path, err)) __SWITCH_OPTIONS__ = { "OS.AllowHTTP": False, "OS.EnableFirewall": False, "OS.EnableFIPS": False, "OS.EnableRDMA": False, "OS.UpdateRdmaDriver": False, "OS.CheckRdmaDriver": False, "Logs.Verbose": False, "Logs.Console": True, "Logs.Collect": True, "Extensions.Enabled": True, "Extensions.WaitForCloudInit": False, "Provisioning.AllowResetSysUser": False, "Provisioning.RegenerateSshHostKeyPair": False, "Provisioning.DeleteRootPassword": False, "Provisioning.DecodeCustomData": False, "Provisioning.ExecuteCustomData": False, "Provisioning.MonitorHostName": False, "DetectScvmmEnv": False, "ResourceDisk.Format": False, "ResourceDisk.EnableSwap": False, "ResourceDisk.EnableSwapEncryption": False, "AutoUpdate.Enabled": True, "AutoUpdate.UpdateToLatestVersion": True, "EnableOverProvisioning": True, # # "Debug" options are experimental and may be removed in later # versions of the Agent. 
# "Debug.CgroupLogMetrics": False, "Debug.CgroupDisableOnProcessCheckFailure": True, "Debug.CgroupDisableOnQuotaCheckFailure": True, "Debug.EnableAgentMemoryUsageCheck": False, "Debug.EnableFastTrack": True, "Debug.EnableGAVersioning": True } __STRING_OPTIONS__ = { "Lib.Dir": "/var/lib/waagent", "DVD.MountPoint": "/mnt/cdrom/secure", "Pid.File": "/var/run/waagent.pid", "Extension.LogDir": "/var/log/azure", "OS.OpensslPath": "/usr/bin/openssl", "OS.SshDir": "/etc/ssh", "OS.HomeDir": "/home", "OS.PasswordPath": "/etc/shadow", "OS.SudoersDir": "/etc/sudoers.d", "OS.RootDeviceScsiTimeout": None, "Provisioning.Agent": "auto", "Provisioning.SshHostKeyPairType": "rsa", "Provisioning.PasswordCryptId": "6", "HttpProxy.Host": None, "ResourceDisk.MountPoint": "/mnt/resource", "ResourceDisk.MountOptions": None, "ResourceDisk.Filesystem": "ext3", "AutoUpdate.GAFamily": "Prod", "Debug.CgroupMonitorExpiryTime": "2022-03-31", "Debug.CgroupMonitorExtensionName": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", } __INTEGER_OPTIONS__ = { "Extensions.GoalStatePeriod": 6, "Extensions.InitialGoalStatePeriod": 6, "Extensions.WaitForCloudInitTimeout": 3600, "OS.EnableFirewallPeriod": 300, "OS.RemovePersistentNetRulesPeriod": 30, "OS.RootDeviceScsiTimeoutPeriod": 30, "OS.MonitorDhcpClientRestartPeriod": 30, "OS.SshClientAliveInterval": 180, "Provisioning.MonitorHostNamePeriod": 30, "Provisioning.PasswordCryptSaltLength": 10, "HttpProxy.Port": None, "ResourceDisk.SwapSizeMB": 0, "Autoupdate.Frequency": 3600, "Logs.CollectPeriod": 3600, # # "Debug" options are experimental and may be removed in later # versions of the Agent. # "Debug.CgroupCheckPeriod": 300, "Debug.AgentCpuQuota": 50, "Debug.AgentCpuThrottledTimeThreshold": 120, "Debug.AgentMemoryQuota": 30 * 1024 ** 2, "Debug.EtpCollectionPeriod": 300, "Debug.AutoUpdateHotfixFrequency": 14400, "Debug.AutoUpdateNormalFrequency": 86400, "Debug.FirewallRulesLogPeriod": 86400 } def get_configuration(conf=__conf__): options = {} for option in __SWITCH_OPTIONS__: options[option] = conf.get_switch(option, __SWITCH_OPTIONS__[option]) for option in __STRING_OPTIONS__: options[option] = conf.get(option, __STRING_OPTIONS__[option]) for option in __INTEGER_OPTIONS__: options[option] = conf.get_int(option, __INTEGER_OPTIONS__[option]) return options def get_default_value(option): if option in __STRING_OPTIONS__: return __STRING_OPTIONS__[option] raise ValueError("{0} is not a valid configuration parameter.".format(option)) def get_int_default_value(option): if option in __INTEGER_OPTIONS__: return int(__INTEGER_OPTIONS__[option]) raise ValueError("{0} is not a valid configuration parameter.".format(option)) def get_switch_default_value(option): if option in __SWITCH_OPTIONS__: return __SWITCH_OPTIONS__[option] raise ValueError("{0} is not a valid configuration parameter.".format(option)) def is_present(key, conf=__conf__): """ Returns True if the given flag present in the configuration file, False otherwise. 
""" return conf.is_present(key) def enable_firewall(conf=__conf__): return conf.get_switch("OS.EnableFirewall", False) def get_enable_firewall_period(conf=__conf__): return conf.get_int("OS.EnableFirewallPeriod", 300) def get_remove_persistent_net_rules_period(conf=__conf__): return conf.get_int("OS.RemovePersistentNetRulesPeriod", 30) def get_monitor_dhcp_client_restart_period(conf=__conf__): return conf.get_int("OS.MonitorDhcpClientRestartPeriod", 30) def enable_rdma(conf=__conf__): return conf.get_switch("OS.EnableRDMA", False) or \ conf.get_switch("OS.UpdateRdmaDriver", False) or \ conf.get_switch("OS.CheckRdmaDriver", False) def enable_rdma_update(conf=__conf__): return conf.get_switch("OS.UpdateRdmaDriver", False) def enable_check_rdma_driver(conf=__conf__): return conf.get_switch("OS.CheckRdmaDriver", True) def get_logs_verbose(conf=__conf__): return conf.get_switch("Logs.Verbose", False) def get_logs_console(conf=__conf__): return conf.get_switch("Logs.Console", True) def get_collect_logs(conf=__conf__): return conf.get_switch("Logs.Collect", True) def get_collect_logs_period(conf=__conf__): return conf.get_int("Logs.CollectPeriod", 3600) def get_lib_dir(conf=__conf__): return conf.get("Lib.Dir", "/var/lib/waagent") def get_published_hostname(conf=__conf__): # Some applications rely on this file; do not remove this setting return os.path.join(get_lib_dir(conf), 'published_hostname') def get_dvd_mount_point(conf=__conf__): return conf.get("DVD.MountPoint", "/mnt/cdrom/secure") def get_agent_pid_file_path(conf=__conf__): return conf.get("Pid.File", "/var/run/waagent.pid") def get_ext_log_dir(conf=__conf__): return conf.get("Extension.LogDir", "/var/log/azure") def get_agent_log_file(): return "/var/log/waagent.log" def get_fips_enabled(conf=__conf__): return conf.get_switch("OS.EnableFIPS", False) def get_openssl_cmd(conf=__conf__): return conf.get("OS.OpensslPath", "/usr/bin/openssl") def get_ssh_client_alive_interval(conf=__conf__): return conf.get("OS.SshClientAliveInterval", 180) def get_ssh_dir(conf=__conf__): return conf.get("OS.SshDir", "/etc/ssh") def get_home_dir(conf=__conf__): return conf.get("OS.HomeDir", "/home") def get_passwd_file_path(conf=__conf__): return conf.get("OS.PasswordPath", "/etc/shadow") def get_sudoers_dir(conf=__conf__): return conf.get("OS.SudoersDir", "/etc/sudoers.d") def get_sshd_conf_file_path(conf=__conf__): return os.path.join(get_ssh_dir(conf), "sshd_config") def get_ssh_key_glob(conf=__conf__): return os.path.join(get_ssh_dir(conf), 'ssh_host_*key*') def get_ssh_key_private_path(conf=__conf__): return os.path.join(get_ssh_dir(conf), 'ssh_host_{0}_key'.format(get_ssh_host_keypair_type(conf))) def get_ssh_key_public_path(conf=__conf__): return os.path.join(get_ssh_dir(conf), 'ssh_host_{0}_key.pub'.format(get_ssh_host_keypair_type(conf))) def get_root_device_scsi_timeout(conf=__conf__): return conf.get("OS.RootDeviceScsiTimeout", None) def get_root_device_scsi_timeout_period(conf=__conf__): return conf.get_int("OS.RootDeviceScsiTimeoutPeriod", 30) def get_ssh_host_keypair_type(conf=__conf__): keypair_type = conf.get("Provisioning.SshHostKeyPairType", "rsa") if keypair_type == "auto": ''' auto generates all supported key types and returns the rsa thumbprint as the default. 
''' return "rsa" return keypair_type def get_ssh_host_keypair_mode(conf=__conf__): return conf.get("Provisioning.SshHostKeyPairType", "rsa") def get_extensions_enabled(conf=__conf__): return conf.get_switch("Extensions.Enabled", True) def get_wait_for_cloud_init(conf=__conf__): return conf.get_switch("Extensions.WaitForCloudInit", False) def get_wait_for_cloud_init_timeout(conf=__conf__): return conf.get_switch("Extensions.WaitForCloudInitTimeout", 3600) def get_goal_state_period(conf=__conf__): return conf.get_int("Extensions.GoalStatePeriod", 6) def get_initial_goal_state_period(conf=__conf__): return conf.get_int("Extensions.InitialGoalStatePeriod", default_value=lambda: get_goal_state_period(conf=conf)) def get_allow_reset_sys_user(conf=__conf__): return conf.get_switch("Provisioning.AllowResetSysUser", False) def get_regenerate_ssh_host_key(conf=__conf__): return conf.get_switch("Provisioning.RegenerateSshHostKeyPair", False) def get_delete_root_password(conf=__conf__): return conf.get_switch("Provisioning.DeleteRootPassword", False) def get_decode_customdata(conf=__conf__): return conf.get_switch("Provisioning.DecodeCustomData", False) def get_execute_customdata(conf=__conf__): return conf.get_switch("Provisioning.ExecuteCustomData", False) def get_password_cryptid(conf=__conf__): return conf.get("Provisioning.PasswordCryptId", "6") def get_provisioning_agent(conf=__conf__): return conf.get("Provisioning.Agent", "auto") def get_provision_enabled(conf=__conf__): """ Provisioning (as far as waagent is concerned) is enabled if the agent is set to either 'auto' or 'waagent'. This wraps logic that was introduced for flexible provisioning agent configuration and detection. It replaces the older boolean setting that turned provisioning on or off. """ return get_provisioning_agent(conf) in ("auto", "waagent") def get_password_crypt_salt_len(conf=__conf__): return conf.get_int("Provisioning.PasswordCryptSaltLength", 10) def get_monitor_hostname(conf=__conf__): return conf.get_switch("Provisioning.MonitorHostName", False) def get_monitor_hostname_period(conf=__conf__): return conf.get_int("Provisioning.MonitorHostNamePeriod", 30) def get_httpproxy_host(conf=__conf__): return conf.get("HttpProxy.Host", None) def get_httpproxy_port(conf=__conf__): return conf.get_int("HttpProxy.Port", None) def get_detect_scvmm_env(conf=__conf__): return conf.get_switch("DetectScvmmEnv", False) def get_resourcedisk_format(conf=__conf__): return conf.get_switch("ResourceDisk.Format", False) def get_resourcedisk_enable_swap(conf=__conf__): return conf.get_switch("ResourceDisk.EnableSwap", False) def get_resourcedisk_enable_swap_encryption(conf=__conf__): return conf.get_switch("ResourceDisk.EnableSwapEncryption", False) def get_resourcedisk_mountpoint(conf=__conf__): return conf.get("ResourceDisk.MountPoint", "/mnt/resource") def get_resourcedisk_mountoptions(conf=__conf__): return conf.get("ResourceDisk.MountOptions", None) def get_resourcedisk_filesystem(conf=__conf__): return conf.get("ResourceDisk.Filesystem", "ext3") def get_resourcedisk_swap_size_mb(conf=__conf__): return conf.get_int("ResourceDisk.SwapSizeMB", 0) def get_autoupdate_gafamily(conf=__conf__): return conf.get("AutoUpdate.GAFamily", "Prod") def get_autoupdate_enabled(conf=__conf__): return conf.get_switch("AutoUpdate.Enabled", True) def get_autoupdate_frequency(conf=__conf__): return conf.get_int("Autoupdate.Frequency", 3600) def get_enable_overprovisioning(conf=__conf__): return conf.get_switch("EnableOverProvisioning", True) def get_allow_http(conf=__conf__): return conf.get_switch("OS.AllowHTTP", False) def get_disable_agent_file_path(conf=__conf__): return os.path.join(get_lib_dir(conf), DISABLE_AGENT_FILE) def get_cgroups_enabled(conf=__conf__): return conf.get_switch("CGroups.Enabled", True) def get_monitor_network_configuration_changes(conf=__conf__): return conf.get_switch("Monitor.NetworkConfigurationChanges", False) def get_auto_update_to_latest_version(conf=__conf__): """ If set to True, the agent will update to the latest version. NOTE: When both flags are turned on, AutoUpdate.Enabled and AutoUpdate.UpdateToLatestVersion have the same meaning: update to the latest version. When turned off, AutoUpdate.Enabled reverts to the pre-installed agent, while AutoUpdate.UpdateToLatestVersion uses the latest version already installed on the VM and does not download new agents. Even though we are deprecating AutoUpdate.Enabled, we still need to support users who explicitly set it instead of the new flag. If AutoUpdate.UpdateToLatestVersion is present, it overrides any value set for AutoUpdate.Enabled. If AutoUpdate.UpdateToLatestVersion is not present but AutoUpdate.Enabled is present and set to 'n', we adhere to the AutoUpdate.Enabled flag's behavior. If both are not present, we default to True. """ default = get_autoupdate_enabled(conf=conf) return conf.get_switch("AutoUpdate.UpdateToLatestVersion", default) def get_cgroup_check_period(conf=__conf__): """ How often to perform checks on cgroups (are the processes in the cgroups as expected, has the agent exceeded its quota, etc.) NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.CgroupCheckPeriod", 300) def get_cgroup_log_metrics(conf=__conf__): """ If True, resource usage metrics are written to the local log NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.CgroupLogMetrics", False) def get_cgroup_disable_on_process_check_failure(conf=__conf__): """ If True, cgroups will be disabled if the process check fails NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.CgroupDisableOnProcessCheckFailure", True) def get_cgroup_disable_on_quota_check_failure(conf=__conf__): """ If True, cgroups will be disabled if the CPU quota check fails NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.CgroupDisableOnQuotaCheckFailure", True) def get_agent_cpu_quota(conf=__conf__): """ CPU quota for the agent as a percentage of 1 CPU (100% == 1 CPU) NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.AgentCpuQuota", 50) def get_agent_cpu_throttled_time_threshold(conf=__conf__): """ Throttled time threshold for agent cpu in seconds. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.AgentCpuThrottledTimeThreshold", 120) def get_agent_memory_quota(conf=__conf__): """ Memory quota for the agent in bytes. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.AgentMemoryQuota", 30 * 1024 ** 2) def get_enable_agent_memory_usage_check(conf=__conf__): """ If True, the Agent checks its memory usage. NOTE: This option is experimental and may be removed in later versions of the Agent.
""" return conf.get_switch("Debug.EnableAgentMemoryUsageCheck", False) def get_cgroup_monitor_expiry_time(conf=__conf__): """ cgroups monitoring for pilot extensions disabled after expiry time NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get("Debug.CgroupMonitorExpiryTime", "2022-03-31") def get_cgroup_monitor_extension_name (conf=__conf__): """ cgroups monitoring extension name NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get("Debug.CgroupMonitorExtensionName", "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent") def get_enable_fast_track(conf=__conf__): """ If True, the agent use FastTrack when retrieving goal states NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.EnableFastTrack", True) def get_etp_collection_period(conf=__conf__): """ Determines the frequency to perform ETP collection on extensions telemetry events. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.EtpCollectionPeriod", 300) def get_self_update_hotfix_frequency(conf=__conf__): """ Determines the frequency to check for Hotfix upgrades ( version changed in new upgrades). NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.SelfUpdateHotfixFrequency", 4 * 60 * 60) def get_self_update_regular_frequency(conf=__conf__): """ Determines the frequency to check for regular upgrades (.. version changed in new upgrades). NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.SelfUpdateRegularFrequency", 24 * 60 * 60) def get_enable_ga_versioning(conf=__conf__): """ If True, the agent looks for rsm updates(checking requested version in GS) otherwise it will fall back to self-update and finds the highest version from PIR. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_switch("Debug.EnableGAVersioning", False) def get_firewall_rules_log_period(conf=__conf__): """ Determine the frequency to perform the periodic operation of logging firewall rules. NOTE: This option is experimental and may be removed in later versions of the Agent. """ return conf.get_int("Debug.FirewallRulesLogPeriod", 86400) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/datacontract.py000066400000000000000000000052561462617747000252760ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.exception import ProtocolError import azurelinuxagent.common.logger as logger # pylint: disable=W0105 """ Base class for data contracts between guest and host and utilities to manipulate the properties in those contracts """ # pylint: enable=W0105 class DataContract(object): pass class DataContractList(list): def __init__(self, item_cls): # pylint: disable=W0231 self.item_cls = item_cls def validate_param(name, val, expected_type): if val is None: raise ProtocolError("{0} is None".format(name)) if not isinstance(val, expected_type): raise ProtocolError(("{0} type should be {1} not {2}" "").format(name, expected_type, type(val))) def set_properties(name, obj, data): if isinstance(obj, DataContract): validate_param("Property '{0}'".format(name), data, dict) for prob_name, prob_val in data.items(): prob_full_name = "{0}.{1}".format(name, prob_name) try: prob = getattr(obj, prob_name) except AttributeError: logger.warn("Unknown property: {0}", prob_full_name) continue prob = set_properties(prob_full_name, prob, prob_val) setattr(obj, prob_name, prob) return obj elif isinstance(obj, DataContractList): validate_param("List '{0}'".format(name), data, list) for item_data in data: item = obj.item_cls() item = set_properties(name, item, item_data) obj.append(item) return obj else: return data def get_properties(obj): if isinstance(obj, DataContract): data = {} props = vars(obj) for prob_name, prob in list(props.items()): data[prob_name] = get_properties(prob) return data elif isinstance(obj, DataContractList): data = [] for item in obj: item_data = get_properties(item) data.append(item_data) return data else: return obj Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/dhcp.py000066400000000000000000000351701462617747000235430ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import array import os import socket import time import azurelinuxagent.common.logger as logger from azurelinuxagent.common.exception import DhcpError from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP from azurelinuxagent.common.utils.textutil import hex_dump, hex_dump2, \ hex_dump3, \ compare_bytes, str_to_ord, \ unpack_big_endian, \ int_to_ip4_addr # the kernel routing table representation of 168.63.129.16 KNOWN_WIRESERVER_IP_ENTRY = '10813FA8' def get_dhcp_handler(): return DhcpHandler() class DhcpHandler(object): """ Azure use DHCP option 245 to pass endpoint ip to VMs. 
""" def __init__(self): self.osutil = get_osutil() self.endpoint = None self.gateway = None self.routes = None self._request_broadcast = False self.skip_cache = False def run(self): """ Send dhcp request Configure default gateway and routes Save wire server endpoint if found """ if self.wireserver_route_exists or self.dhcp_cache_exists: return self.send_dhcp_req() self.conf_routes() def wait_for_network(self): """ Wait for network stack to be initialized. """ ipv4 = self.osutil.get_ip4_addr() while ipv4 == '' or ipv4 == '0.0.0.0': logger.info("Waiting for network.") time.sleep(10) logger.info("Try to start network interface.") self.osutil.start_network() ipv4 = self.osutil.get_ip4_addr() @property def wireserver_route_exists(self): """ Determine whether a route to the known wireserver ip already exists, and if so use that as the endpoint. This is true when running in a virtual network. :return: True if a route to KNOWN_WIRESERVER_IP exists. """ route_exists = False logger.info("Test for route to {0}".format(KNOWN_WIRESERVER_IP)) try: route_table = self.osutil.read_route_table() if any((KNOWN_WIRESERVER_IP_ENTRY in route) for route in route_table): # reset self.gateway and self.routes # we do not need to alter the routing table self.endpoint = KNOWN_WIRESERVER_IP self.gateway = None self.routes = None route_exists = True logger.info("Route to {0} exists".format(KNOWN_WIRESERVER_IP)) else: logger.warn("No route exists to {0}".format(KNOWN_WIRESERVER_IP)) except Exception as e: logger.error( "Could not determine whether route exists to {0}: {1}".format( KNOWN_WIRESERVER_IP, e)) return route_exists @property def dhcp_cache_exists(self): """ Check whether the dhcp options cache exists and contains the wireserver endpoint, unless skip_cache is True. :return: True if the cached endpoint was found in the dhcp lease """ if self.skip_cache: return False exists = False logger.info("Checking for dhcp lease cache") cached_endpoint = self.osutil.get_dhcp_lease_endpoint() # pylint: disable=E1128 if cached_endpoint is not None: self.endpoint = cached_endpoint exists = True logger.info("Cache exists [{0}]".format(exists)) return exists def conf_routes(self): logger.info("Configure routes") logger.info("Gateway:{0}", self.gateway) logger.info("Routes:{0}", self.routes) # Add default gateway if self.gateway is not None and self.osutil.is_missing_default_route(): self.osutil.route_add(0, 0, self.gateway) if self.routes is not None: for route in self.routes: self.osutil.route_add(route[0], route[1], route[2]) def _send_dhcp_req(self, request): __waiting_duration__ = [0, 10, 30, 60, 60] for duration in __waiting_duration__: try: self.osutil.allow_dhcp_broadcast() response = socket_send(request) validate_dhcp_resp(request, response) return response except DhcpError as e: logger.warn("Failed to send DHCP request: {0}", e) time.sleep(duration) return None def send_dhcp_req(self): """ Check if DHCP is available """ dhcp_available = self.osutil.is_dhcp_available() if not dhcp_available: logger.info("send_dhcp_req: DHCP not available") self.endpoint = KNOWN_WIRESERVER_IP return # pylint: disable=W0105 """ Build dhcp request with mac addr Configure route to allow dhcp traffic Stop dhcp service if necessary """ # pylint: enable=W0105 logger.info("Send dhcp request") mac_addr = self.osutil.get_mac_addr() # Do unicast first, then fallback to broadcast if fails. req = build_dhcp_request(mac_addr, self._request_broadcast) if not self._request_broadcast: self._request_broadcast = True # Temporary allow broadcast for dhcp. 
Remove the route when done. missing_default_route = self.osutil.is_missing_default_route() ifname = self.osutil.get_if_name() if missing_default_route: self.osutil.set_route_for_dhcp_broadcast(ifname) # In some distros, the dhcp service needs to be shut down before the agent probes # the endpoint through dhcp. if self.osutil.is_dhcp_enabled(): self.osutil.stop_dhcp_service() resp = self._send_dhcp_req(req) if self.osutil.is_dhcp_enabled(): self.osutil.start_dhcp_service() if missing_default_route: self.osutil.remove_route_for_dhcp_broadcast(ifname) if resp is None: raise DhcpError("Failed to receive dhcp response.") self.endpoint, self.gateway, self.routes = parse_dhcp_resp(resp) def validate_dhcp_resp(request, response): # pylint: disable=R1710 bytes_recv = len(response) if bytes_recv < 0xF6: logger.error("HandleDhcpResponse: Too few bytes received:{0}", bytes_recv) return False logger.verbose("BytesReceived:{0}", hex(bytes_recv)) logger.verbose("DHCP response:{0}", hex_dump(response, bytes_recv)) # check transactionId, cookie, MAC address; the cookie should never mismatch # transactionId and MAC address may mismatch if we see a response # meant for another machine if not compare_bytes(request, response, 0xEC, 4): logger.verbose("Cookie not match:\nsend={0},\nreceive={1}", hex_dump3(request, 0xEC, 4), hex_dump3(response, 0xEC, 4)) raise DhcpError("Cookie in dhcp response doesn't match the request") if not compare_bytes(request, response, 4, 4): logger.verbose("TransactionID not match:\nsend={0},\nreceive={1}", hex_dump3(request, 4, 4), hex_dump3(response, 4, 4)) raise DhcpError("TransactionID in dhcp response " "doesn't match the request") if not compare_bytes(request, response, 0x1C, 6): logger.verbose("Mac Address not match:\nsend={0},\nreceive={1}", hex_dump3(request, 0x1C, 6), hex_dump3(response, 0x1C, 6)) raise DhcpError("Mac Addr in dhcp response " "doesn't match the request") def parse_route(response, option, i, length, bytes_recv): # pylint: disable=W0613 # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx logger.verbose("Routes at offset: {0} with length:{1}", hex(i), hex(length)) routes = [] if length < 5: logger.error("Data too small for option:{0}", option) j = i + 2 while j < (i + length + 2): mask_len_bits = str_to_ord(response[j]) mask_len_bytes = (((mask_len_bits + 7) & ~7) >> 3) mask = 0xFFFFFFFF & (0xFFFFFFFF << (32 - mask_len_bits)) j += 1 net = unpack_big_endian(response, j, mask_len_bytes) net <<= (32 - mask_len_bytes * 8) net &= mask j += mask_len_bytes gateway = unpack_big_endian(response, j, 4) j += 4 routes.append((net, mask, gateway)) if j != (i + length + 2): logger.error("Unable to parse routes") return routes def parse_ip_addr(response, option, i, length, bytes_recv): if i + 5 < bytes_recv: if length != 4: logger.error("Endpoint or Default Gateway not 4 bytes") return None addr = unpack_big_endian(response, i + 2, 4) ip_addr = int_to_ip4_addr(addr) return ip_addr else: logger.error("Data too small for option:{0}", option) return None def parse_dhcp_resp(response): """ Parse DHCP response: Returns endpoint server or None on error. """ logger.verbose("parse Dhcp Response") bytes_recv = len(response) endpoint = None gateway = None routes = None # Walk all the returned options, parsing out what we need, ignoring the # others. We need the custom option 245 to find the endpoint we talk to # as well as to handle some Linux DHCP client incompatibilities; # options 3 for default gateway and 249 for routes; 255 is end.
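# Illustrative sketch (added for clarity; not part of the original source): each DHCP option is
# encoded as <code> <length> <value...>. A wireserver option carrying 168.63.129.16, for example,
# would occupy the six bytes F5 04 A8 3F 81 10 -- code 245 (0xF5), length 4, then the four
# address octets -- and the loop below consumes two header bytes plus `length` value bytes per option.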
i = 0xF0 # offset to first option while i < bytes_recv: option = str_to_ord(response[i]) length = 0 if (i + 1) < bytes_recv: length = str_to_ord(response[i + 1]) logger.verbose("DHCP option {0} at offset:{1} with length:{2}", hex(option), hex(i), hex(length)) if option == 255: logger.verbose("DHCP packet ended at offset:{0}", hex(i)) break elif option == 249: routes = parse_route(response, option, i, length, bytes_recv) elif option == 3: gateway = parse_ip_addr(response, option, i, length, bytes_recv) logger.verbose("Default gateway:{0}, at {1}", gateway, hex(i)) elif option == 245: endpoint = parse_ip_addr(response, option, i, length, bytes_recv) logger.verbose("Azure wire protocol endpoint:{0}, at {1}", endpoint, hex(i)) else: logger.verbose("Skipping DHCP option:{0} at {1} with length {2}", hex(option), hex(i), hex(length)) i += length + 2 return endpoint, gateway, routes def socket_send(request): sock = None try: sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.bind(("0.0.0.0", 68)) sock.sendto(request, ("<broadcast>", 67)) sock.settimeout(10) logger.verbose("Send DHCP request: Setting socket.timeout=10, " "entering recv") response = sock.recv(1024) return response except IOError as e: raise DhcpError("{0}".format(e)) finally: if sock is not None: sock.close() def build_dhcp_request(mac_addr, request_broadcast): """ Build DHCP request string. """ # # typedef struct _DHCP { # UINT8 Opcode; /* op: BOOTREQUEST or BOOTREPLY */ # UINT8 HardwareAddressType; /* htype: ethernet */ # UINT8 HardwareAddressLength; /* hlen: 6 (48 bit mac address) */ # UINT8 Hops; /* hops: 0 */ # UINT8 TransactionID[4]; /* xid: random */ # UINT8 Seconds[2]; /* secs: 0 */ # UINT8 Flags[2]; /* flags: 0 or 0x8000 for broadcast*/ # UINT8 ClientIpAddress[4]; /* ciaddr: 0 */ # UINT8 YourIpAddress[4]; /* yiaddr: 0 */ # UINT8 ServerIpAddress[4]; /* siaddr: 0 */ # UINT8 RelayAgentIpAddress[4]; /* giaddr: 0 */ # UINT8 ClientHardwareAddress[16]; /* chaddr: 6 byte eth MAC address */ # UINT8 ServerName[64]; /* sname: 0 */ # UINT8 BootFileName[128]; /* file: 0 */ # UINT8 MagicCookie[4]; /* 99 130 83 99 */ # /* 0x63 0x82 0x53 0x63 */ # /* options -- hard code ours */ # # UINT8 MessageTypeCode; /* 53 */ # UINT8 MessageTypeLength; /* 1 */ # UINT8 MessageType; /* 1 for DISCOVER */ # UINT8 End; /* 255 */ # } DHCP; # # list of 244 zeros # (struct.pack_into would be good here, but requires Python 2.5) request = [0] * 244 trans_id = gen_trans_id() # Opcode = 1 # HardwareAddressType = 1 (ethernet/MAC) # HardwareAddressLength = 6 (ethernet/MAC/48 bits) for a in range(0, 3): request[a] = [1, 1, 6][a] # fill in transaction id (random number to ensure response matches request) for a in range(0, 4): request[4 + a] = str_to_ord(trans_id[a]) logger.verbose("BuildDhcpRequest: transactionId:%s,%04X" % ( hex_dump2(trans_id), unpack_big_endian(request, 4, 4))) if request_broadcast: # set the broadcast flag to true to request that the dhcp server # respond to a broadcast address; # this is useful when the user's dhclient fails.
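# Illustrative note (added for clarity; not part of the original source): the BOOTP "flags" field
# is the two bytes at offsets 0x0A-0x0B of the request, so writing 0x80 into the first byte below
# sets the high bit, i.e. the 0x8000 BROADCAST flag described in the struct layout above.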
request[0x0A] = 0x80 # fill in ClientHardwareAddress for a in range(0, 6): request[0x1C + a] = str_to_ord(mac_addr[a]) # DHCP Magic Cookie: 99, 130, 83, 99 # MessageTypeCode = 53 DHCP Message Type # MessageTypeLength = 1 # MessageType = DHCPDISCOVER # End = 255 DHCP_END for a in range(0, 8): request[0xEC + a] = [99, 130, 83, 99, 53, 1, 1, 255][a] return array.array("B", request) def gen_trans_id(): return os.urandom(4) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/errorstate.py000066400000000000000000000021661462617747000250160ustar00rootroot00000000000000from datetime import datetime, timedelta ERROR_STATE_DELTA_DEFAULT = timedelta(minutes=15) ERROR_STATE_DELTA_INSTALL = timedelta(minutes=5) ERROR_STATE_HOST_PLUGIN_FAILURE = timedelta(minutes=5) class ErrorState(object): def __init__(self, min_timedelta=ERROR_STATE_DELTA_DEFAULT): self.min_timedelta = min_timedelta self.count = 0 self.timestamp = None def incr(self): if self.count == 0: self.timestamp = datetime.utcnow() self.count += 1 def reset(self): self.count = 0 self.timestamp = None def is_triggered(self): if self.timestamp is None: return False delta = datetime.utcnow() - self.timestamp if delta >= self.min_timedelta: return True return False @property def fail_time(self): if self.timestamp is None: return 'unknown' delta = round((datetime.utcnow() - self.timestamp).seconds / 60.0, 2) if delta < 60: return '{0} min'.format(delta) delta_hr = round(delta / 60.0, 2) return '{0} hr'.format(delta_hr) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/event.py000066400000000000000000001005441462617747000237440ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import atexit import json import os import platform import re import sys import threading import time import traceback from datetime import datetime import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.exception import EventError, OSUtilError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.datacontract import get_properties, set_properties from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.telemetryevent import TelemetryEventParam, TelemetryEvent, CommonTelemetryEventSchema, \ GuestAgentGenericLogsSchema, GuestAgentExtensionEventsSchema, GuestAgentPerfCounterEventsSchema from azurelinuxagent.common.utils import fileutil, textutil from azurelinuxagent.common.utils.textutil import parse_doc, findall, find, getattrib, str_to_encoded_ustr from azurelinuxagent.common.version import CURRENT_VERSION, CURRENT_AGENT, AGENT_NAME, DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME, AGENT_EXECUTION_MODE from azurelinuxagent.common.protocol.imds import get_imds_client EVENTS_DIRECTORY = "events" _EVENT_MSG = "Event: name={0}, op={1}, message={2}, duration={3}" TELEMETRY_EVENT_PROVIDER_ID = "69B669B9-4AF8-4C50-BDC4-6006FA76E975" TELEMETRY_EVENT_EVENT_ID = 1 TELEMETRY_METRICS_EVENT_ID = 4 TELEMETRY_LOG_PROVIDER_ID = "FFF0196F-EE4C-4EAF-9AA5-776F622DEB4F" TELEMETRY_LOG_EVENT_ID = 7 # # When this flag is enabled the TODO comment in Logger.log() needs to be addressed; also the tests # marked with "Enable this test when SEND_LOGS_TO_TELEMETRY is enabled" should be enabled. # SEND_LOGS_TO_TELEMETRY = False MAX_NUMBER_OF_EVENTS = 1000 AGENT_EVENT_FILE_EXTENSION = '.waagent.tld' EVENT_FILE_REGEX = re.compile(r'(?P\.waagent)?\.tld$') def send_logs_to_telemetry(): return SEND_LOGS_TO_TELEMETRY class WALAEventOperation: ActivateResourceDisk = "ActivateResourceDisk" AgentBlacklisted = "AgentBlacklisted" AgentEnabled = "AgentEnabled" AgentMemory = "AgentMemory" AgentUpgrade = "AgentUpgrade" ArtifactsProfileBlob = "ArtifactsProfileBlob" CGroupsCleanUp = "CGroupsCleanUp" CGroupsDisabled = "CGroupsDisabled" CGroupsInfo = "CGroupsInfo" CloudInit = "CloudInit" CollectEventErrors = "CollectEventErrors" CollectEventUnicodeErrors = "CollectEventUnicodeErrors" ConfigurationChange = "ConfigurationChange" CustomData = "CustomData" DefaultChannelChange = "DefaultChannelChange" Deploy = "Deploy" Disable = "Disable" Downgrade = "Downgrade" Download = "Download" Enable = "Enable" ExtensionProcessing = "ExtensionProcessing" ExtensionTelemetryEventProcessing = "ExtensionTelemetryEventProcessing" FetchGoalState = "FetchGoalState" Firewall = "Firewall" GoalState = "GoalState" GoalStateUnsupportedFeatures = "GoalStateUnsupportedFeatures" HealthCheck = "HealthCheck" HealthObservation = "HealthObservation" HeartBeat = "HeartBeat" HostnamePublishing = "HostnamePublishing" HostPlugin = "HostPlugin" HostPluginHeartbeat = "HostPluginHeartbeat" HostPluginHeartbeatExtended = "HostPluginHeartbeatExtended" HttpErrors = "HttpErrors" HttpGet = "HttpGet" ImdsHeartbeat = "ImdsHeartbeat" Install = "Install" InitializeHostPlugin = "InitializeHostPlugin" Log = "Log" LogCollection = "LogCollection" NoExec = "NoExec" OSInfo = "OSInfo" OpenSsl = "OpenSsl" Partition = "Partition" PersistFirewallRules = "PersistFirewallRules" ProvisionAfterExtensions = "ProvisionAfterExtensions" PluginSettingsVersionMismatch = 
"PluginSettingsVersionMismatch" InvalidExtensionConfig = "InvalidExtensionConfig" Provision = "Provision" ProvisionGuestAgent = "ProvisionGuestAgent" RemoteAccessHandling = "RemoteAccessHandling" ReportEventErrors = "ReportEventErrors" ReportEventUnicodeErrors = "ReportEventUnicodeErrors" ReportStatus = "ReportStatus" ReportStatusExtended = "ReportStatusExtended" ResetFirewall = "ResetFirewall" Restart = "Restart" SequenceNumberMismatch = "SequenceNumberMismatch" SetCGroupsLimits = "SetCGroupsLimits" SkipUpdate = "SkipUpdate" StatusProcessing = "StatusProcessing" UnhandledError = "UnhandledError" UnInstall = "UnInstall" Unknown = "Unknown" Update = "Update" VmSettings = "VmSettings" VmSettingsSummary = "VmSettingsSummary" SHOULD_ENCODE_MESSAGE_LEN = 80 SHOULD_ENCODE_MESSAGE_OP = [ WALAEventOperation.Disable, WALAEventOperation.Enable, WALAEventOperation.Install, WALAEventOperation.UnInstall, ] class EventStatus(object): EVENT_STATUS_FILE = "event_status.json" def __init__(self): self._path = None self._status = {} def clear(self): self._status = {} self._save() def event_marked(self, name, version, op): return self._event_name(name, version, op) in self._status def event_succeeded(self, name, version, op): event = self._event_name(name, version, op) if event not in self._status: return True return self._status[event] is True def initialize(self, status_dir=conf.get_lib_dir()): self._path = os.path.join(status_dir, EventStatus.EVENT_STATUS_FILE) self._load() def mark_event_status(self, name, version, op, status): event = self._event_name(name, version, op) self._status[event] = (status is True) self._save() def _event_name(self, name, version, op): return "{0}-{1}-{2}".format(name, version, op) def _load(self): try: self._status = {} if os.path.isfile(self._path): with open(self._path, 'r') as f: self._status = json.load(f) except Exception as e: logger.warn("Exception occurred loading event status: {0}".format(e)) self._status = {} def _save(self): try: with open(self._path, 'w') as f: json.dump(self._status, f) except Exception as e: logger.warn("Exception occurred saving event status: {0}".format(e)) __event_status__ = EventStatus() __event_status_operations__ = [ WALAEventOperation.ReportStatus ] def parse_json_event(data_str): data = json.loads(data_str) event = TelemetryEvent() set_properties("TelemetryEvent", event, data) event.file_type = "json" return event def parse_event(data_str): try: try: return parse_json_event(data_str) except ValueError: return parse_xml_event(data_str) except Exception as e: raise EventError("Error parsing event: {0}".format(ustr(e))) def parse_xml_param(param_node): name = getattrib(param_node, "Name") value_str = getattrib(param_node, "Value") attr_type = getattrib(param_node, "T") value = value_str if attr_type == 'mt:uint64': value = int(value_str) elif attr_type == 'mt:bool': value = bool(value_str) elif attr_type == 'mt:float64': value = float(value_str) return TelemetryEventParam(name, value) def parse_xml_event(data_str): try: xml_doc = parse_doc(data_str) event_id = getattrib(find(xml_doc, "Event"), 'id') provider_id = getattrib(find(xml_doc, "Provider"), 'id') event = TelemetryEvent(event_id, provider_id) param_nodes = findall(xml_doc, 'Param') for param_node in param_nodes: event.parameters.append(parse_xml_param(param_node)) event.file_type = "xml" return event except Exception as e: raise ValueError(ustr(e)) def _encode_message(op, message): """ Gzip and base64 encode a message based on the operation. 
The intent of this message is to make the logs human readable and include the stdout/stderr from extension operations. Extension operations tend to generate a lot of noise, which makes it difficult to parse the line-oriented waagent.log. The compromise is to encode the stdout/stderr so we preserve the data and do not destroy the line oriented nature. The data can be recovered using the following command: $ echo '' | base64 -d | pigz -zd You may need to install the pigz command. :param op: Operation, e.g. Enable or Install :param message: Message to encode :return: gzip'ed and base64 encoded message, or the original message """ if len(message) == 0: return message if op not in SHOULD_ENCODE_MESSAGE_OP: return message try: return textutil.compress(message) except Exception: # If the message could not be encoded a dummy message ('<>') is returned. # The original message was still sent via telemetry, so all is not lost. return "<>" def _log_event(name, op, message, duration, is_success=True): global _EVENT_MSG # pylint: disable=W0603 if not is_success: logger.error(_EVENT_MSG, name, op, message, duration) else: logger.info(_EVENT_MSG, name, op, message, duration) class CollectOrReportEventDebugInfo(object): """ This class is used for capturing and reporting debug info that is captured during event collection and reporting to wireserver. It captures the count of unicode errors and any unexpected errors and also a subset of errors with stacks to help with debugging any potential issues. """ __MAX_ERRORS_TO_REPORT = 5 OP_REPORT = "Report" OP_COLLECT = "Collect" def __init__(self, operation=OP_REPORT): self.__unicode_error_count = 0 self.__unicode_errors = set() self.__op_error_count = 0 self.__op_errors = set() if operation == self.OP_REPORT: self.__unicode_error_event = WALAEventOperation.ReportEventUnicodeErrors self.__op_errors_event = WALAEventOperation.ReportEventErrors elif operation == self.OP_COLLECT: self.__unicode_error_event = WALAEventOperation.CollectEventUnicodeErrors self.__op_errors_event = WALAEventOperation.CollectEventErrors def report_debug_info(self): def report_dropped_events_error(count, errors, operation_name): err_msg_format = "DroppedEventsCount: {0}\nReasons (first {1} errors): {2}" if count > 0: add_event(op=operation_name, message=err_msg_format.format(count, CollectOrReportEventDebugInfo.__MAX_ERRORS_TO_REPORT, ', '.join(errors)), is_success=False) report_dropped_events_error(self.__op_error_count, self.__op_errors, self.__op_errors_event) report_dropped_events_error(self.__unicode_error_count, self.__unicode_errors, self.__unicode_error_event) @staticmethod def _update_errors_and_get_count(error_count, errors, error): error_count += 1 if len(errors) < CollectOrReportEventDebugInfo.__MAX_ERRORS_TO_REPORT: errors.add("{0}: {1}".format(ustr(error), traceback.format_exc())) return error_count def update_unicode_error(self, unicode_err): self.__unicode_error_count = self._update_errors_and_get_count(self.__unicode_error_count, self.__unicode_errors, unicode_err) def update_op_error(self, op_err): self.__op_error_count = self._update_errors_and_get_count(self.__op_error_count, self.__op_errors, op_err) class EventLogger(object): def __init__(self): self.event_dir = None self.periodic_events = {} # # All events should have these parameters. # # The first set comes from the current OS and is initialized here. These values don't change during # the agent's lifetime. 
# # The next two sets come from the goal state and IMDS and must be explicitly initialized using # initialize_vminfo_common_parameters() once a protocol for communication with the host has been # created. Their values don't change during the agent's lifetime. Note that we initialize these # parameters here using dummy values (*_UNINITIALIZED) since events sent to the host should always # match the schema defined for them in the telemetry pipeline. # # There is another set of common parameters that must be computed at the time the event is created # (e.g. the timestamp and the container ID); those are added to events (along with the parameters # below) in _add_common_event_parameters() # # Note that different kinds of events may also include other parameters; those are added by the # corresponding add_* method (e.g. add_metric for performance metrics). # self._common_parameters = [] # Parameters from OS osutil = get_osutil() keyword_name = { "CpuArchitecture": osutil.get_vm_arch() } self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.OSVersion, EventLogger._get_os_version())) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.ExecutionMode, AGENT_EXECUTION_MODE)) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.RAM, int(EventLogger._get_ram(osutil)))) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.Processors, int(EventLogger._get_processors(osutil)))) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.KeywordName, json.dumps(keyword_name))) # Parameters from goal state self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.TenantName, "TenantName_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.RoleName, "RoleName_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.RoleInstanceName, "RoleInstanceName_UNINITIALIZED")) # # # Parameters from IMDS self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.Location, "Location_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.SubscriptionId, "SubscriptionId_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.ResourceGroupName, "ResourceGroupName_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.VMId, "VMId_UNINITIALIZED")) self._common_parameters.append(TelemetryEventParam(CommonTelemetryEventSchema.ImageOrigin, 0)) @staticmethod def _get_os_version(): return "{0}:{1}-{2}-{3}:{4}".format(platform.system(), DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME, platform.release()) @staticmethod def _get_ram(osutil): try: return osutil.get_total_mem() except OSUtilError as e: logger.warn("Failed to get RAM info; will be missing from telemetry: {0}", ustr(e)) return 0 @staticmethod def _get_processors(osutil): try: return osutil.get_processor_cores() except OSUtilError as e: logger.warn("Failed to get Processors info; will be missing from telemetry: {0}", ustr(e)) return 0 def initialize_vminfo_common_parameters(self, protocol): """ Initializes the common parameters that come from the goal state and IMDS """ # create an index of the event parameters for faster updates parameters = {} for p in self._common_parameters: parameters[p.name] = p try: vminfo = protocol.get_vminfo() parameters[CommonTelemetryEventSchema.TenantName].value = vminfo.tenantName 
parameters[CommonTelemetryEventSchema.RoleName].value = vminfo.roleName parameters[CommonTelemetryEventSchema.RoleInstanceName].value = vminfo.roleInstanceName except Exception as e: logger.warn("Failed to get VM info from goal state; will be missing from telemetry: {0}", ustr(e)) try: imds_client = get_imds_client(protocol.get_endpoint()) imds_info = imds_client.get_compute() parameters[CommonTelemetryEventSchema.Location].value = imds_info.location parameters[CommonTelemetryEventSchema.SubscriptionId].value = imds_info.subscriptionId parameters[CommonTelemetryEventSchema.ResourceGroupName].value = imds_info.resourceGroupName parameters[CommonTelemetryEventSchema.VMId].value = imds_info.vmId parameters[CommonTelemetryEventSchema.ImageOrigin].value = int(imds_info.image_origin) except Exception as e: logger.warn("Failed to get IMDS info; will be missing from telemetry: {0}", ustr(e)) def save_event(self, data): if self.event_dir is None: logger.warn("Cannot save event -- Event reporter is not initialized.") return try: fileutil.mkdir(self.event_dir, mode=0o700) except (IOError, OSError) as e: msg = "Failed to create events folder {0}. Error: {1}".format(self.event_dir, ustr(e)) raise EventError(msg) try: existing_events = os.listdir(self.event_dir) if len(existing_events) >= MAX_NUMBER_OF_EVENTS: logger.periodic_warn(logger.EVERY_MINUTE, "[PERIODIC] Too many files under: {0}, current count: {1}, " "removing oldest event files".format(self.event_dir, len(existing_events))) existing_events.sort() oldest_files = existing_events[:-999] for event_file in oldest_files: os.remove(os.path.join(self.event_dir, event_file)) except (IOError, OSError) as e: msg = "Failed to remove old events from events folder {0}. Error: {1}".format(self.event_dir, ustr(e)) raise EventError(msg) filename = os.path.join(self.event_dir, ustr(int(time.time() * 1000000))) try: with open(filename + ".tmp", 'wb+') as hfile: hfile.write(data.encode("utf-8")) os.rename(filename + ".tmp", filename + AGENT_EVENT_FILE_EXTENSION) except (IOError, OSError) as e: msg = "Failed to write events to file: {0}".format(e) raise EventError(msg) def reset_periodic(self): self.periodic_events = {} def is_period_elapsed(self, delta, h): return h not in self.periodic_events or \ (self.periodic_events[h] + delta) <= datetime.now() def add_periodic(self, delta, name, op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="", log_event=True, force=False): h = hash(name + op + ustr(is_success) + message) if force or self.is_period_elapsed(delta, h): self.add_event(name, op=op, is_success=is_success, duration=duration, version=version, message=message, log_event=log_event) self.periodic_events[h] = datetime.now() def add_event(self, name, op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="", log_event=True): if (not is_success) and log_event: _log_event(name, op, message, duration, is_success=is_success) event = TelemetryEvent(TELEMETRY_EVENT_EVENT_ID, TELEMETRY_EVENT_PROVIDER_ID) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Name, str_to_encoded_ustr(name))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Version, str_to_encoded_ustr(version))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Operation, str_to_encoded_ustr(op))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.OperationSuccess, bool(is_success))) 
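        # (Descriptive note: the save_event() call a few lines below persists this
        # event's JSON to a file named <epoch-microseconds>.tmp under event_dir and
        # then renames it to the '.waagent.tld' suffix matched by EVENT_FILE_REGEX,
        # which is how the telemetry collector later finds agent events.)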
event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Message, str_to_encoded_ustr(message))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Duration, int(duration))) self.add_common_event_parameters(event, datetime.utcnow()) data = get_properties(event) try: self.save_event(json.dumps(data)) except EventError as e: logger.periodic_error(logger.EVERY_FIFTEEN_MINUTES, "[PERIODIC] {0}".format(ustr(e))) def add_log_event(self, level, message): event = TelemetryEvent(TELEMETRY_LOG_EVENT_ID, TELEMETRY_LOG_PROVIDER_ID) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.EventName, WALAEventOperation.Log)) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.CapabilityUsed, logger.LogLevel.STRINGS[level])) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.Context1, str_to_encoded_ustr(self._clean_up_message(message)))) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.Context2, datetime.utcnow().strftime(logger.Logger.LogTimeFormatInUTC))) event.parameters.append(TelemetryEventParam(GuestAgentGenericLogsSchema.Context3, '')) self.add_common_event_parameters(event, datetime.utcnow()) data = get_properties(event) try: self.save_event(json.dumps(data)) except EventError: pass def add_metric(self, category, counter, instance, value, log_event=False): """ Create and save an event which contains a telemetry event. :param str category: The category of metric (e.g. "cpu", "memory") :param str counter: The specific metric within the category (e.g. "%idle") :param str instance: For instanced metrics, the instance identifier (filesystem name, cpu core#, etc.) :param value: Value of the metric :param bool log_event: If true, log the collected metric in the agent log """ if log_event: message = "Metric {0}/{1} [{2}] = {3}".format(category, counter, instance, value) _log_event(AGENT_NAME, "METRIC", message, 0) event = TelemetryEvent(TELEMETRY_METRICS_EVENT_ID, TELEMETRY_EVENT_PROVIDER_ID) event.parameters.append(TelemetryEventParam(GuestAgentPerfCounterEventsSchema.Category, str_to_encoded_ustr(category))) event.parameters.append(TelemetryEventParam(GuestAgentPerfCounterEventsSchema.Counter, str_to_encoded_ustr(counter))) event.parameters.append(TelemetryEventParam(GuestAgentPerfCounterEventsSchema.Instance, str_to_encoded_ustr(instance))) event.parameters.append(TelemetryEventParam(GuestAgentPerfCounterEventsSchema.Value, float(value))) self.add_common_event_parameters(event, datetime.utcnow()) data = get_properties(event) try: self.save_event(json.dumps(data)) except EventError as e: logger.periodic_error(logger.EVERY_FIFTEEN_MINUTES, "[PERIODIC] {0}".format(ustr(e))) @staticmethod def _clean_up_message(message): # By the time the message has gotten to this point it is formatted as # # Old Time format # YYYY/MM/DD HH:mm:ss.fffffff LEVEL . # YYYY/MM/DD HH:mm:ss.fffffff . # YYYY/MM/DD HH:mm:ss LEVEL . # YYYY/MM/DD HH:mm:ss . # # UTC ISO Time format added in #1716 # YYYY-MM-DDTHH:mm:ss.fffffffZ LEVEL . # YYYY-MM-DDTHH:mm:ss.fffffffZ . # YYYY-MM-DDTHH:mm:ssZ LEVEL . # YYYY-MM-DDTHH:mm:ssZ . # # The timestamp and the level are redundant, and should be stripped. The logging library does not schematize # this data, so I am forced to parse the message using a regex. The format is regular, so the burden is low, # and usability on the telemetry side is high. if not message: return message # Adding two regexs to simplify the handling of logs and to keep it maintainable. 
Most of the logs have the level
        # included in the message itself; if not, the second regex is a
        # catch-all that works for all cases.
        log_level_format_parser = re.compile(r"^.*(INFO|WARNING|ERROR|VERBOSE)\s*(.*)$")
        log_format_parser = re.compile(r"^[0-9:/\-TZ\s.]*\s(.*)$")

        # Parse log messages that contain a level
        extract_level_message = log_level_format_parser.search(message)
        if extract_level_message:
            return extract_level_message.group(2)  # The message bit
        else:
            # Parse log messages without a level
            extract_message = log_format_parser.search(message)
            if extract_message:
                return extract_message.group(1)  # The message bit
            else:
                return message

    def add_common_event_parameters(self, event, event_timestamp):
        """
        This method is called for all events and ensures all telemetry fields are
        added before the event is sent out. Note that the event timestamp is saved
        in the OpcodeName field.
        """
        common_params = [TelemetryEventParam(CommonTelemetryEventSchema.GAVersion, CURRENT_AGENT),
                         TelemetryEventParam(CommonTelemetryEventSchema.ContainerId, AgentGlobals.get_container_id()),
                         TelemetryEventParam(CommonTelemetryEventSchema.OpcodeName, event_timestamp.strftime(logger.Logger.LogTimeFormatInUTC)),
                         TelemetryEventParam(CommonTelemetryEventSchema.EventTid, threading.current_thread().ident),
                         TelemetryEventParam(CommonTelemetryEventSchema.EventPid, os.getpid()),
                         TelemetryEventParam(CommonTelemetryEventSchema.TaskName, threading.current_thread().getName())]

        if event.eventId == TELEMETRY_EVENT_EVENT_ID and event.providerId == TELEMETRY_EVENT_PROVIDER_ID:
            # Currently only GuestAgentExtensionEvents has these columns; the other
            # tables don't, so skip this data for them.
            common_params.extend([TelemetryEventParam(GuestAgentExtensionEventsSchema.ExtensionType, event.file_type),
                                  TelemetryEventParam(GuestAgentExtensionEventsSchema.IsInternal, False)])

        event.parameters.extend(common_params)
        event.parameters.extend(self._common_parameters)


__event_logger__ = EventLogger()


def get_event_logger():
    return __event_logger__


def elapsed_milliseconds(utc_start):
    now = datetime.utcnow()
    if now < utc_start:
        return 0

    d = now - utc_start
    return int(((d.days * 24 * 60 * 60 + d.seconds) * 1000) +
               (d.microseconds / 1000.0))


def report_event(op, is_success=True, message='', log_event=True):
    add_event(AGENT_NAME,
              version=str(CURRENT_VERSION),
              is_success=is_success,
              message=message,
              op=op,
              log_event=log_event)


def report_periodic(delta, op, is_success=True, message=''):
    add_periodic(delta, AGENT_NAME,
                 version=str(CURRENT_VERSION),
                 is_success=is_success,
                 message=message,
                 op=op)


def report_metric(category, counter, instance, value, log_event=False, reporter=__event_logger__):
    """
    Send a telemetry event reporting a single instance of a performance counter.
    :param str category: The category of the metric (cpu, memory, etc)
    :param str counter: The name of the metric ("%idle", etc)
    :param str instance: For instanced metrics, the identifier of the instance. E.g.
a disk drive name, a cpu core# :param value: The value of the metric :param bool log_event: If True, log the metric in the agent log as well :param EventLogger reporter: The EventLogger instance to which metric events should be sent """ if reporter.event_dir is None: logger.warn("Cannot report metric event -- Event reporter is not initialized.") message = "Metric {0}/{1} [{2}] = {3}".format(category, counter, instance, value) _log_event(AGENT_NAME, "METRIC", message, 0) return try: reporter.add_metric(category, counter, instance, float(value), log_event) except ValueError: logger.periodic_warn(logger.EVERY_HALF_HOUR, "[PERIODIC] Cannot cast the metric value. Details of the Metric - " "{0}/{1} [{2}] = {3}".format(category, counter, instance, value)) def initialize_event_logger_vminfo_common_parameters(protocol, reporter=__event_logger__): reporter.initialize_vminfo_common_parameters(protocol) def add_event(name=AGENT_NAME, op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="", log_event=True, reporter=__event_logger__): if reporter.event_dir is None: logger.warn("Cannot add event -- Event reporter is not initialized.") _log_event(name, op, message, duration, is_success=is_success) return if should_emit_event(name, version, op, is_success): mark_event_status(name, version, op, is_success) reporter.add_event(name, op=op, is_success=is_success, duration=duration, version=str(version), message=message, log_event=log_event) def add_log_event(level, message, forced=False, reporter=__event_logger__): """ :param level: LoggerLevel of the log event :param message: Message :param forced: Force write the event even if send_logs_to_telemetry() is disabled (NOTE: Remove this flag once send_logs_to_telemetry() is enabled for all events) :param reporter: :return: """ if reporter.event_dir is None: return if not (forced or send_logs_to_telemetry()): return if level >= logger.LogLevel.WARNING: reporter.add_log_event(level, message) def add_periodic(delta, name, op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="", log_event=True, force=False, reporter=__event_logger__): if reporter.event_dir is None: logger.warn("Cannot add periodic event -- Event reporter is not initialized.") _log_event(name, op, message, duration, is_success=is_success) return reporter.add_periodic(delta, name, op=op, is_success=is_success, duration=duration, version=str(version), message=message, log_event=log_event, force=force) def mark_event_status(name, version, op, status): if op in __event_status_operations__: __event_status__.mark_event_status(name, version, op, status) def should_emit_event(name, version, op, status): return \ op not in __event_status_operations__ or \ __event_status__ is None or \ not __event_status__.event_marked(name, version, op) or \ __event_status__.event_succeeded(name, version, op) != status def init_event_logger(event_dir): __event_logger__.event_dir = event_dir def init_event_status(status_dir): __event_status__.initialize(status_dir) def dump_unhandled_err(name): if hasattr(sys, 'last_type') and hasattr(sys, 'last_value') and \ hasattr(sys, 'last_traceback'): last_type = getattr(sys, 'last_type') last_value = getattr(sys, 'last_value') last_traceback = getattr(sys, 'last_traceback') error = traceback.format_exception(last_type, last_value, last_traceback) message = "".join(error) add_event(name, is_success=False, message=message, op=WALAEventOperation.UnhandledError) def enable_unhandled_err_dump(name): 
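    # (Descriptive note: atexit handlers run at interpreter shutdown, and
    # sys.last_type/last_value/last_traceback are only populated once an
    # uncaught exception has reached the top level, so the handler registered
    # here reports an UnhandledError event only when the process actually
    # died with an unhandled exception.)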
atexit.register(dump_unhandled_err, name) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/exception.py000066400000000000000000000177151462617747000246300ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # """ Defines all exceptions """ class ExitException(BaseException): """ Used to exit the agent's process """ def __init__(self, reason): super(ExitException, self).__init__() self.reason = reason class AgentUpgradeExitException(ExitException): """ Used to exit the agent's process due to Agent Upgrade """ class AgentError(Exception): """ Base class of agent error. """ def __init__(self, msg, inner=None): msg = u"[{0}] {1}".format(type(self).__name__, msg) if inner is not None: msg = u"{0}\nInner error: {1}".format(msg, inner) super(AgentError, self).__init__(msg) class AgentConfigError(AgentError): """ When configure file is not found or malformed. """ def __init__(self, msg=None, inner=None): super(AgentConfigError, self).__init__(msg, inner) class AgentMemoryExceededException(AgentError): """ When Agent memory limit reached. """ def __init__(self, msg=None, inner=None): super(AgentMemoryExceededException, self).__init__(msg, inner) class AgentNetworkError(AgentError): """ When network is not available. """ def __init__(self, msg=None, inner=None): super(AgentNetworkError, self).__init__(msg, inner) class AgentUpdateError(AgentError): """ When agent failed to update. """ def __init__(self, msg=None, inner=None): super(AgentUpdateError, self).__init__(msg, inner) class AgentFamilyMissingError(AgentError): """ When agent family is missing. """ def __init__(self, msg=None, inner=None): super(AgentFamilyMissingError, self).__init__(msg, inner) class CGroupsException(AgentError): """ Exception to classify any cgroups related issue. """ def __init__(self, msg=None, inner=None): super(CGroupsException, self).__init__(msg, inner) class ExtensionError(AgentError): """ When failed to execute an extension """ def __init__(self, msg=None, inner=None, code=-1): super(ExtensionError, self).__init__(msg, inner) self.code = code class ExtensionOperationError(ExtensionError): """ When the command times out or returns with a non-zero exit_code """ def __init__(self, msg=None, inner=None, code=-1, exit_code=-1): super(ExtensionOperationError, self).__init__(msg, inner) self.code = code self.exit_code = exit_code class ExtensionUpdateError(ExtensionError): """ Error raised when failed to update an extension """ class ExtensionDownloadError(ExtensionError): """ Error raised when failed to download and setup an extension """ class ExtensionsGoalStateError(ExtensionError): """ Error raised when the ExtensionsGoalState is malformed """ class ExtensionsConfigError(ExtensionsGoalStateError): """ Error raised when the ExtensionsConfig is malformed """ class MultiConfigExtensionEnableError(ExtensionError): """ Error raised when enable for a Multi-Config extension is failing. 
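    (Illustrative aside, not in the original docstring: like its ExtensionError
    siblings above, this is typically raised with a code from the
    ExtensionErrorCodes class defined further below, e.g.

        raise MultiConfigExtensionEnableError(
            msg="enable failed for settings of extension X",   # hypothetical
            code=ExtensionErrorCodes.PluginEnableProcessingFailed))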
""" class ProvisionError(AgentError): """ When provision failed """ def __init__(self, msg=None, inner=None): super(ProvisionError, self).__init__(msg, inner) class ResourceDiskError(AgentError): """ Mount resource disk failed """ def __init__(self, msg=None, inner=None): super(ResourceDiskError, self).__init__(msg, inner) class DhcpError(AgentError): """ Failed to handle dhcp response """ def __init__(self, msg=None, inner=None): super(DhcpError, self).__init__(msg, inner) class OSUtilError(AgentError): """ Failed to perform operation to OS configuration """ def __init__(self, msg=None, inner=None): super(OSUtilError, self).__init__(msg, inner) class ProtocolError(AgentError): """ Azure protocol error """ def __init__(self, msg=None, inner=None): super(ProtocolError, self).__init__(msg, inner) class ProtocolNotFoundError(ProtocolError): """ Error raised when Azure protocol endpoint not found """ class HttpError(AgentError): """ Http request failure """ def __init__(self, msg=None, inner=None): super(HttpError, self).__init__(msg, inner) class InvalidContainerError(HttpError): """ Error raised when Container id sent in the header is invalid """ class EventError(AgentError): """ Event reporting error """ def __init__(self, msg=None, inner=None): super(EventError, self).__init__(msg, inner) class CryptError(AgentError): """ Encrypt/Decrypt error """ def __init__(self, msg=None, inner=None): super(CryptError, self).__init__(msg, inner) class UpdateError(AgentError): """ Update Guest Agent error """ def __init__(self, msg=None, inner=None): super(UpdateError, self).__init__(msg, inner) class ResourceGoneError(HttpError): """ The requested resource no longer exists (i.e., status code 410) """ def __init__(self, msg=None, inner=None): if msg is None: msg = "Resource is gone" super(ResourceGoneError, self).__init__(msg, inner) class InvalidExtensionEventError(AgentError): """ Error thrown when the extension telemetry event is invalid as defined per the contract with extensions. """ # Types of InvalidExtensionEventError MissingKeyError = "MissingKeyError" EmptyMessageError = "EmptyMessageError" OversizeEventError = "OversizeEventError" def __init__(self, msg=None, inner=None): super(InvalidExtensionEventError, self).__init__(msg, inner) class ServiceStoppedError(AgentError): """ Error thrown when trying to access a Service which is stopped """ def __init__(self, msg=None, inner=None): super(ServiceStoppedError, self).__init__(msg, inner) class ExtensionErrorCodes(object): """ Common Error codes used across by Compute RP for better understanding the cause and clarify common occurring errors """ # Unknown Failures PluginUnknownFailure = -1 # Success PluginSuccess = 0 # Catch all error code. PluginProcessingError = 1000 # Plugin failed to download PluginManifestDownloadError = 1001 # Cannot find or load successfully the HandlerManifest.json PluginHandlerManifestNotFound = 1002 # Cannot successfully serialize the HandlerManifest.json PluginHandlerManifestDeserializationError = 1003 # Cannot download the plugin package PluginPackageDownloadFailed = 1004 # Cannot extract the plugin form package PluginPackageExtractionFailed = 1005 # Install failed PluginInstallProcessingFailed = 1007 # Update failed PluginUpdateProcessingFailed = 1008 # Enable failed PluginEnableProcessingFailed = 1009 # Disable failed PluginDisableProcessingFailed = 1010 # Extension script timed out PluginHandlerScriptTimedout = 1011 # Invalid status file of the extension. 
PluginSettingsStatusInvalid = 1012 def __init__(self): pass class GoalStateAggregateStatusCodes(object): # Success Success = 0 # Unknown failure GoalStateUnknownFailure = -1 # The goal state requires features that are not supported by this version of the VM agent GoalStateUnsupportedRequiredFeatures = 2001 Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/future.py000066400000000000000000000144671462617747000241450ustar00rootroot00000000000000import contextlib import platform import sys import os import re # Note broken dependency handling to avoid potential backward # compatibility issues on different distributions try: import distro # pylint: disable=E0401 except Exception: pass # pylint: disable=W0105 """ Add alias for python2 and python3 libs and functions. """ # pylint: enable=W0105 if sys.version_info[0] == 3: import http.client as httpclient # pylint: disable=W0611,import-error from urllib.parse import urlparse # pylint: disable=W0611,import-error,no-name-in-module """Rename Python3 str to ustr""" # pylint: disable=W0105 ustr = str bytebuffer = memoryview # We aren't using these imports in this file, but we want them to be available # to import from this module in others. # Additionally, python2 doesn't have this, so we need to disable import-error # as well. # unused-import, import-error Disabled: Due to backward compatibility between py2 and py3 from builtins import int, range # pylint: disable=unused-import,import-error from collections import OrderedDict # pylint: disable=W0611 from queue import Queue, Empty # pylint: disable=W0611,import-error # unused-import Disabled: python2.7 doesn't have subprocess.DEVNULL # so this import is only used by python3. import subprocess # pylint: disable=unused-import elif sys.version_info[0] == 2: import httplib as httpclient # pylint: disable=E0401,W0611 from urlparse import urlparse # pylint: disable=E0401 from Queue import Queue, Empty # pylint: disable=W0611,import-error # We want to suppress the following: # - undefined-variable: # These builtins are not defined in python3 # - redefined-builtin: # This is intentional, so that code that wants to use builtins we're # assigning new names to doesn't need to check python versions before # doing so. # pylint: disable=undefined-variable,redefined-builtin ustr = unicode # Rename Python2 unicode to ustr bytebuffer = buffer range = xrange int = long if sys.version_info[1] >= 7: from collections import OrderedDict # For Py 2.7+ else: from ordereddict import OrderedDict # Works only on 2.6 # pylint: disable=E0401 else: raise ImportError("Unknown python version: {0}".format(sys.version_info)) def get_linux_distribution(get_full_name, supported_dists): """Abstract platform.linux_distribution() call which is deprecated as of Python 3.5 and removed in Python 3.7""" try: supported = platform._supported_dists + (supported_dists,) osinfo = list( platform.linux_distribution( # pylint: disable=W1505 full_distribution_name=get_full_name, supported_dists=supported ) ) # The platform.linux_distribution() lib has issue with detecting OpenWRT linux distribution. # Merge the following patch provided by OpenWRT as a temporary fix. 
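        # (Illustrative note, not in the original source: /etc/openwrt_release
        # is a shell-style key/value file; hypothetical contents such as
        #     DISTRIB_ID='OpenWrt'
        #     DISTRIB_RELEASE='19.07.5'
        # make get_openwrt_platform(), defined below, return
        # ['openwrt', '19.07.5', None].)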
        if os.path.exists("/etc/openwrt_release"):
            osinfo = get_openwrt_platform()

        if not osinfo or osinfo == ['', '', '']:
            return get_linux_distribution_from_distro(get_full_name)

        full_name = platform.linux_distribution()[0].strip()  # pylint: disable=W1505
        osinfo.append(full_name)
    except AttributeError:
        return get_linux_distribution_from_distro(get_full_name)

    return osinfo


def get_linux_distribution_from_distro(get_full_name):
    """Get the distribution information from the distro Python module."""
    # If we get here we have to have the distro module, thus we do
    # not wrap the call in a try-except block as it would mask the problem
    # and result in a broken agent installation
    osinfo = list(
        distro.linux_distribution(
            full_distribution_name=get_full_name
        )
    )
    full_name = distro.linux_distribution()[0].strip()
    osinfo.append(full_name)

    # Fixes the problem described in https://github.com/Azure/WALinuxAgent/issues/2715:
    # distro.linux_distribution() does not return the full version. With best=True,
    # the most precise version number found across all examined sources is returned.
    if "mariner" in osinfo[0].lower():
        osinfo[1] = distro.version(best=True)

    return osinfo


def get_openwrt_platform():
    """
    Add this workaround for detecting OpenWRT products because
    the version and product information is contained in the
    /etc/openwrt_release file.
    """
    result = [None, None, None]
    openwrt_version = re.compile(r"^DISTRIB_RELEASE=['\"](\d+\.\d+.\d+)['\"]")
    openwrt_product = re.compile(r"^DISTRIB_ID=['\"]([\w-]+)['\"]")

    with open('/etc/openwrt_release', 'r') as fh:
        content = fh.readlines()
        for line in content:
            version_matches = openwrt_version.match(line)
            product_matches = openwrt_product.match(line)
            if version_matches:
                result[1] = version_matches.group(1)
            elif product_matches:
                if product_matches.group(1) == "OpenWrt":
                    result[0] = "openwrt"
    return result


def is_file_not_found_error(exception):
    # pylint for python2 complains, but FileNotFoundError is
    # defined for python3.
    # pylint: disable=undefined-variable
    if sys.version_info[0] == 2:
        # Python 2 uses OSError(errno=2)
        return isinstance(exception, OSError) and exception.errno == 2
    elif sys.version_info[0] == 3:
        return isinstance(exception, FileNotFoundError)

    return isinstance(exception, FileNotFoundError)


@contextlib.contextmanager
def subprocess_dev_null():
    if sys.version_info[0] == 3:
        # Suppress no-member errors on python2.7
        yield subprocess.DEVNULL  # pylint: disable=no-member
    else:
        devnull = None  # initialized up front so the finally clause is safe if open() fails
        try:
            devnull = open(os.devnull, "a+")
            yield devnull
        except Exception:
            yield None
        finally:
            if devnull is not None:
                devnull.close()


def array_to_bytes(buff):
    # Python 3.9 removed the tostring() method on arrays, the new alias is tobytes()
    if sys.version_info[0] == 2:
        return buff.tostring()
    if sys.version_info[0] == 3 and sys.version_info[1] <= 8:
        return buff.tostring()
    return buff.tobytes()
Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/logger.py000066400000000000000000000266561462617747000241110ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Requires Python 2.6+ and openssl_bin 1.0+ # """ Log utils """ import sys from datetime import datetime, timedelta from threading import currentThread from azurelinuxagent.common.future import ustr EVERY_DAY = timedelta(days=1) EVERY_HALF_DAY = timedelta(hours=12) EVERY_SIX_HOURS = timedelta(hours=6) EVERY_HOUR = timedelta(hours=1) EVERY_HALF_HOUR = timedelta(minutes=30) EVERY_FIFTEEN_MINUTES = timedelta(minutes=15) EVERY_MINUTE = timedelta(minutes=1) class Logger(object): """ Logger class """ # This format is based on ISO-8601, Z represents UTC (Zero offset) LogTimeFormatInUTC = u'%Y-%m-%dT%H:%M:%S.%fZ' def __init__(self, logger=None, prefix=None): self.appenders = [] self.logger = self if logger is None else logger self.periodic_messages = {} self.prefix = prefix self.silent = False def reset_periodic(self): self.logger.periodic_messages = {} def set_prefix(self, prefix): self.prefix = prefix def _is_period_elapsed(self, delta, h): return h not in self.logger.periodic_messages or \ (self.logger.periodic_messages[h] + delta) <= datetime.now() def _periodic(self, delta, log_level_op, msg_format, *args): h = hash(msg_format) if self._is_period_elapsed(delta, h): log_level_op(msg_format, *args) self.logger.periodic_messages[h] = datetime.now() def periodic_info(self, delta, msg_format, *args): self._periodic(delta, self.info, msg_format, *args) def periodic_verbose(self, delta, msg_format, *args): self._periodic(delta, self.verbose, msg_format, *args) def periodic_warn(self, delta, msg_format, *args): self._periodic(delta, self.warn, msg_format, *args) def periodic_error(self, delta, msg_format, *args): self._periodic(delta, self.error, msg_format, *args) def verbose(self, msg_format, *args): self.log(LogLevel.VERBOSE, msg_format, *args) def info(self, msg_format, *args): self.log(LogLevel.INFO, msg_format, *args) def warn(self, msg_format, *args): self.log(LogLevel.WARNING, msg_format, *args) def error(self, msg_format, *args): self.log(LogLevel.ERROR, msg_format, *args) def log(self, level, msg_format, *args): def write_log(log_appender): # pylint: disable=W0612 """ The appender_lock flag is used to signal if the logger is currently in use. This prevents a subsequent log coming in due to writing of a log statement to be not written. Eg: Assuming a logger with two appenders - FileAppender and TelemetryAppender. Here is an example of how using appender_lock flag can help. logger.warn("foo") |- log.warn() (azurelinuxagent.common.logger.Logger.warn) |- log() (azurelinuxagent.common.logger.Logger.log) |- FileAppender.appender_lock is currently False not log_appender.appender_lock is True |- We sets it to True. |- FileAppender.write completes. |- FileAppender.appender_lock sets to False. |- TelemetryAppender.appender_lock is currently False not log_appender.appender_lock is True |- We sets it to True. [A] |- TelemetryAppender.write gets called but has an error and writes a log.warn("bar") |- log() (azurelinuxagent.common.logger.Logger.log) |- FileAppender.appender_lock is set to True (log_appender.appender_lock was false when entering). |- FileAppender.write completes. |- FileAppender.appender_lock sets to False. |- TelemetryAppender.appender_lock is already True, not log_appender.appender_lock is False Thus [A] cannot happen again if TelemetryAppender.write is not getting called. It prevents faulty appenders to not get called again and again. 
:param log_appender: Appender :return: None """ if not log_appender.appender_lock: try: log_appender.appender_lock = True log_appender.write(level, log_item) finally: log_appender.appender_lock = False if self.silent: return # if msg_format is not unicode convert it to unicode if type(msg_format) is not ustr: msg_format = ustr(msg_format, errors="backslashreplace") if len(args) > 0: msg = msg_format.format(*args) else: msg = msg_format time = datetime.utcnow().strftime(Logger.LogTimeFormatInUTC) level_str = LogLevel.STRINGS[level] thread_name = currentThread().getName() if self.prefix is not None: log_item = u"{0} {1} {2} {3} {4}\n".format(time, level_str, thread_name, self.prefix, msg) else: log_item = u"{0} {1} {2} {3}\n".format(time, level_str, thread_name, msg) log_item = ustr(log_item.encode('ascii', "backslashreplace"), encoding="ascii") for appender in self.appenders: appender.write(level, log_item) # # TODO: we should actually call # # write_log(appender) # # (see PR #1659). Before doing that, write_log needs to be thread-safe. # # This needs to be done when SEND_LOGS_TO_TELEMETRY is enabled. # if self.logger != self: for appender in self.logger.appenders: appender.write(level, log_item) # # TODO: call write_log instead (see comment above) # def add_appender(self, appender_type, level, path): appender = _create_logger_appender(appender_type, level, path) self.appenders.append(appender) def console_output_enabled(self): """ Returns True if the current list of appenders includes at least one ConsoleAppender """ return any(isinstance(appender, ConsoleAppender) for appender in self.appenders) def disable_console_output(self): """ Removes all ConsoleAppenders from the current list of appenders """ self.appenders = [appender for appender in self.appenders if not isinstance(appender, ConsoleAppender)] class Appender(object): def __init__(self, level): self.appender_lock = False self.level = level def write(self, level, msg): pass class ConsoleAppender(Appender): def __init__(self, level, path): super(ConsoleAppender, self).__init__(level) self.path = path def write(self, level, msg): if self.level <= level: try: with open(self.path, "w") as console: console.write(msg) except IOError: pass class FileAppender(Appender): def __init__(self, level, path): super(FileAppender, self).__init__(level) self.path = path def write(self, level, msg): if self.level <= level: try: with open(self.path, "a+") as log_file: log_file.write(msg) except IOError: pass class StdoutAppender(Appender): def __init__(self, level): # pylint: disable=W0235 super(StdoutAppender, self).__init__(level) def write(self, level, msg): if self.level <= level: try: sys.stdout.write(msg) except IOError: pass class TelemetryAppender(Appender): def __init__(self, level, event_func): super(TelemetryAppender, self).__init__(level) self.event_func = event_func def write(self, level, msg): if self.level <= level: try: self.event_func(level, msg) except IOError: pass # Initialize logger instance DEFAULT_LOGGER = Logger() class LogLevel(object): VERBOSE = 0 INFO = 1 WARNING = 2 ERROR = 3 STRINGS = [ "VERBOSE", "INFO", "WARNING", "ERROR" ] class AppenderType(object): FILE = 0 CONSOLE = 1 STDOUT = 2 TELEMETRY = 3 def add_logger_appender(appender_type, level=LogLevel.INFO, path=None): DEFAULT_LOGGER.add_appender(appender_type, level, path) def console_output_enabled(): return DEFAULT_LOGGER.console_output_enabled() def disable_console_output(): DEFAULT_LOGGER.disable_console_output() def reset_periodic(): DEFAULT_LOGGER.reset_periodic() 
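# --- Illustrative usage sketch for the module-level logger (not part of the
# agent source; the log path is hypothetical):
#
#     import azurelinuxagent.common.logger as logger
#
#     logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.VERBOSE,
#                                path="/var/log/waagent.log")
#     logger.add_logger_appender(logger.AppenderType.STDOUT, logger.LogLevel.WARNING)
#     logger.info("agent version {0} started", "2.9.1.1")
#     # Rate-limited: emits at most once per EVERY_HALF_HOUR per format string
#     logger.periodic_warn(logger.EVERY_HALF_HOUR, "[PERIODIC] low disk space on {0}", "/")
# ---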
def set_prefix(prefix): DEFAULT_LOGGER.set_prefix(prefix) def periodic_info(delta, msg_format, *args): """ The hash-map maintaining the state of the logs gets reset here - azurelinuxagent.ga.monitor.MonitorHandler.reset_loggers. The current time period is defined by RESET_LOGGERS_PERIOD. """ DEFAULT_LOGGER.periodic_info(delta, msg_format, *args) def periodic_verbose(delta, msg_format, *args): """ The hash-map maintaining the state of the logs gets reset here - azurelinuxagent.ga.monitor.MonitorHandler.reset_loggers. The current time period is defined by RESET_LOGGERS_PERIOD. """ DEFAULT_LOGGER.periodic_verbose(delta, msg_format, *args) def periodic_error(delta, msg_format, *args): """ The hash-map maintaining the state of the logs gets reset here - azurelinuxagent.ga.monitor.MonitorHandler.reset_loggers. The current time period is defined by RESET_LOGGERS_PERIOD. """ DEFAULT_LOGGER.periodic_error(delta, msg_format, *args) def periodic_warn(delta, msg_format, *args): """ The hash-map maintaining the state of the logs gets reset here - azurelinuxagent.ga.monitor.MonitorHandler.reset_loggers. The current time period is defined by RESET_LOGGERS_PERIOD. """ DEFAULT_LOGGER.periodic_warn(delta, msg_format, *args) def verbose(msg_format, *args): DEFAULT_LOGGER.verbose(msg_format, *args) def info(msg_format, *args): DEFAULT_LOGGER.info(msg_format, *args) def warn(msg_format, *args): DEFAULT_LOGGER.warn(msg_format, *args) def error(msg_format, *args): DEFAULT_LOGGER.error(msg_format, *args) def log(level, msg_format, *args): DEFAULT_LOGGER.log(level, msg_format, args) def _create_logger_appender(appender_type, level=LogLevel.INFO, path=None): if appender_type == AppenderType.CONSOLE: return ConsoleAppender(level, path) elif appender_type == AppenderType.FILE: return FileAppender(level, path) elif appender_type == AppenderType.STDOUT: return StdoutAppender(level) elif appender_type == AppenderType.TELEMETRY: return TelemetryAppender(level, path) else: raise ValueError("Unknown appender type") Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/000077500000000000000000000000001462617747000235645ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/__init__.py000066400000000000000000000012631462617747000256770ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.osutil.factory import get_osutil Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/alpine.py000066400000000000000000000031631462617747000254110ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class AlpineOSUtil(DefaultOSUtil): def __init__(self): super(AlpineOSUtil, self).__init__() self.agent_conf_file_path = '/etc/waagent.conf' self.jit_enabled = True def is_dhcp_enabled(self): return True def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "dhcpcd"]) def restart_if(self, ifname, retries=None, wait=None): logger.info('restarting {} (sort of, actually SIGHUPing dhcpcd)'.format(ifname)) pid = self.get_dhcp_pid() if pid != None: ret = shellutil.run_get_output('kill -HUP {}'.format(pid)) # pylint: disable=W0612 def set_ssh_client_alive_interval(self): # Alpine will handle this. pass def conf_sshd(self, disable_password): # Alpine will handle this. pass Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/arch.py000066400000000000000000000041611462617747000250550ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class ArchUtil(DefaultOSUtil): def __init__(self): super(ArchUtil, self).__init__() self.jit_enabled = True @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" @staticmethod def get_agent_bin_path(): return "/usr/bin" def is_dhcp_enabled(self): return True def start_network(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def restart_if(self, ifname=None, retries=None, wait=None): shellutil.run("systemctl restart systemd-networkd") def restart_ssh_service(self): # SSH is socket activated on CoreOS. No need to restart it. 
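        # (Descriptive note: with socket activation, systemd owns the listening
        # socket and spawns sshd per connection, so configuration changes take
        # effect on the next connection without restarting a long-lived daemon.)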
pass def stop_dhcp_service(self): return shellutil.run("systemctl stop systemd-networkd", chk_err=False) def start_dhcp_service(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "systemd-networkd"]) def conf_sshd(self, disable_password): # Don't whack the system default sshd conf pass Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/bigip.py000066400000000000000000000331731462617747000252370ustar00rootroot00000000000000# Copyright 2016 F5 Networks Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import array import fcntl import os import platform import re import socket import struct import time from azurelinuxagent.common.future import array_to_bytes try: # WAAgent > 2.1.3 import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.default import DefaultOSUtil except ImportError: # WAAgent <= 2.1.3 import azurelinuxagent.logger as logger import azurelinuxagent.utils.shellutil as shellutil from azurelinuxagent.exception import OSUtilError from azurelinuxagent.distro.default.osutil import DefaultOSUtil class BigIpOSUtil(DefaultOSUtil): def __init__(self): # pylint: disable=W0235 super(BigIpOSUtil, self).__init__() def _wait_until_mcpd_is_initialized(self): """Wait for mcpd to become available All configuration happens in mcpd so we need to wait that this is available before we go provisioning the system. I call this method at the first opportunity I have (during the DVD mounting call). This ensures that the rest of the provisioning does not need to wait for mcpd to be available unless it absolutely wants to. :return bool: Returns True upon success :raises OSUtilError: Raises exception if mcpd does not come up within roughly 50 minutes (100 * 30 seconds) """ for retries in range(1, 100): # pylint: disable=W0612 # Retry until mcpd completes startup: logger.info("Checking to see if mcpd is up") rc = shellutil.run("/usr/bin/tmsh -a show sys mcp-state field-fmt 2>/dev/null | grep phase | grep running", chk_err=False) if rc == 0: logger.info("mcpd is up!") break time.sleep(30) if rc == 0: return True raise OSUtilError( "mcpd hasn't completed initialization! Cannot proceed!" 
) def _save_sys_config(self): cmd = "/usr/bin/tmsh save sys config" rc = shellutil.run(cmd) if rc != 0: logger.error("WARNING: Cannot save sys config on 1st boot.") return rc def restart_ssh_service(self): return shellutil.run("/usr/bin/bigstart restart sshd", chk_err=False) def stop_agent_service(self): return shellutil.run("/sbin/service {0} stop".format(self.service_name), chk_err=False) def start_agent_service(self): return shellutil.run("/sbin/service {0} start".format(self.service_name), chk_err=False) def register_agent_service(self): return shellutil.run("/sbin/chkconfig --add {0}".format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run("/sbin/chkconfig --del {0}".format(self.service_name), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["/sbin/pidof", "dhclient"]) def set_hostname(self, hostname): """Set the static hostname of the device Normally, tmsh is used to set the hostname for the system. For our purposes at this time though, I would hesitate to trust this function. Azure(Stack) uses the name that you provide in the Web UI or ARM (for example) as the value of the hostname argument to this method. The problem is that there is nowhere in the UI that specifies the restrictions and checks that tmsh has for the hostname. For example, if you set the name "bigip1" in the Web UI, Azure(Stack) considers that a perfectly valid name. When WAAgent gets around to running though, tmsh will reject that value because it is not a fully qualified domain name. The proper value should have been bigip.xxx.yyy WAAgent will not fail if this command fails, but the hostname will not be what the user set either. Currently we do not set the hostname when WAAgent starts up, so I am passing on setting it here too. :param hostname: The hostname to set on the device """ return None def set_dhcp_hostname(self, hostname): """Sets the DHCP hostname See `set_hostname` for an explanation of why I pass here :param hostname: The hostname to set on the device """ return None def useradd(self, username, expiration=None, comment=None): """Create user account using tmsh Our policy is to create two accounts when booting a BIG-IP instance. The first account is the one that the user specified when they did the instance creation. The second one is the admin account that is, or should be, built in to the system. :param username: The username that you want to add to the system :param expiration: The expiration date to use. We do not use this value. :param comment: description of the account. We do not use this value. """ if self.get_userentry(username): logger.info("User {0} already exists, skip useradd", username) return None cmd = ['/usr/bin/tmsh', 'create', 'auth', 'user', username, 'partition-access', 'add', '{', 'all-partitions', '{', 'role', 'admin', '}', '}', 'shell', 'bash'] self._run_command_raising_OSUtilError(cmd, err_msg="Failed to create user account:{0}".format(username)) self._save_sys_config() return 0 def chpasswd(self, username, password, crypt_id=6, salt_len=10): """Change a user's password with tmsh Since we are creating the user specified account and additionally changing the password of the built-in 'admin' account, both must be modified in this method. Note that the default method also checks for a "system level" of the user; based on the value of UID_MIN in /etc/login.defs. In our env, all user accounts have the UID 0. So we can't rely on this value. 
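# useradd() above and chpasswd() here drive /usr/bin/tmsh with argv lists
# rather than shell strings, so usernames and passwords are never exposed to
# shell parsing. A minimal, self-contained sketch of that pattern using only
# the standard library; run_tmsh and the sample arguments are illustrative,
# not part of the agent's API, and assume /usr/bin/tmsh exists.
import subprocess

def run_tmsh(args):
    # Raises subprocess.CalledProcessError on a non-zero exit status.
    return subprocess.check_output(['/usr/bin/tmsh'] + args,
                                   stderr=subprocess.STDOUT)

# Example: the "modify auth user" invocation chpasswd() issues for an account.
# run_tmsh(['modify', 'auth', 'user', 'someuser', 'password', 'example'])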
:param username: The username whose password to change :param password: The unencrypted password to set for the user :param crypt_id: If encrypting the password, the crypt_id that was used :param salt_len: If encrypting the password, the length of the salt value used to do it. """ # Start by setting the password of the user provided account self._run_command_raising_OSUtilError( ['/usr/bin/tmsh', 'modify', 'auth', 'user', username, 'password', password], err_msg="Failed to set password for {0}".format(username)) # Next, set the password of the built-in 'admin' account to be have # the same password as the user provided account userentry = self.get_userentry('admin') if userentry is None: raise OSUtilError("The 'admin' user account was not found!") self._run_command_raising_OSUtilError( ['/usr/bin/tmsh', 'modify', 'auth', 'user', 'admin', 'password', password], err_msg="Failed to set password for admin") self._save_sys_config() return 0 def del_account(self, username): """Deletes a user account. Note that the default method also checks for a "system level" of the user; based on the value of UID_MIN in /etc/login.defs. In our env, all user accounts have the UID 0. So we can't rely on this value. We also don't use sudo, so we remove that method call as well. :param username: :return: """ self._run_command_without_raising(["touch", "/var/run/utmp"]) self._run_command_without_raising(['/usr/bin/tmsh', 'delete', 'auth', 'user', username]) def get_dvd_device(self, dev_dir='/dev'): """Find BIG-IP's CD/DVD device This device is almost certainly /dev/cdrom so I added the ? to this pattern. Note that this method will return upon the first device found, but in my tests with 12.1.1 it will also find /dev/sr0 on occasion. This is NOT the correct CD/DVD device though. :todo: Consider just always returning "/dev/cdrom" here if that device device exists on all platforms that are supported on Azure(Stack) :param dev_dir: The root directory from which to look for devices """ patten = r'(sr[0-9]|hd[c-z]|cdrom[0-9]?)' for dvd in [re.match(patten, dev) for dev in os.listdir(dev_dir)]: if dvd is not None: return "/dev/{0}".format(dvd.group(0)) raise OSUtilError("Failed to get dvd device") # The linter reports that this function's arguments differ from those # of the function this overrides. This doesn't seem to be a problem, however, # because this function accepts any option that could'be been specified for # the original (and, by forwarding the kwargs to the original, will reject any # option _not_ accepted by the original). Additionally, this method allows us # to keep the defaults for mount_dvd in one place (the original function) instead # of having to duplicate it here as well. def mount_dvd(self, **kwargs): # pylint: disable=W0221 """Mount the DVD containing the provisioningiso.iso file This is the _first_ hook that WAAgent provides for us, so this is the point where we should wait for mcpd to load. I am just overloading this method to add the mcpd wait. Then I proceed with the stock code. :param max_retry: Maximum number of retries waagent will make when mounting the provisioningiso.iso DVD :param chk_err: Whether to check for errors or not in the mounting commands """ self._wait_until_mcpd_is_initialized() return super(BigIpOSUtil, self).mount_dvd(**kwargs) def eject_dvd(self, chk_err=True): """Runs the eject command to eject the provisioning DVD BIG-IP does not include an eject command. It is sufficient to just umount the DVD disk. But I will log that we do not support this for future reference. 
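# get_dvd_device() above returns the first entry in /dev whose name matches a
# small regex of optical-device names ("sr0", "hdc", "cdrom", ...). A
# self-contained sketch of that scan; find_dvd_device is an illustrative
# helper, and dev_dir is a parameter so the logic can be exercised against a
# temp directory.
import os
import re

def find_dvd_device(dev_dir='/dev'):
    pattern = r'(sr[0-9]|hd[c-z]|cdrom[0-9]?)'
    for name in os.listdir(dev_dir):
        if re.match(pattern, name):
            return os.path.join(dev_dir, name)
    return None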
:param chk_err: Whether or not to check for errors raised by the eject command """ logger.warn("Eject is not supported on this platform") def get_first_if(self): """Return the interface name, and ip addr of the management interface. We need to add a struct_size check here because, curiously, our 64bit platform is identified by python in Azure(Stack) as 32 bit and without adjusting the struct_size, we can't get the information we need. I believe this may be caused by only python i686 being shipped with BIG-IP instead of python x86_64?? """ iface = '' expected = 16 # how many devices should I expect... python_arc = platform.architecture()[0] if python_arc == '64bit': struct_size = 40 # for 64bit the size is 40 bytes else: struct_size = 32 # for 32bit the size is 32 bytes sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) buff = array.array('B', b'\0' * (expected * struct_size)) param = struct.pack('iL', expected*struct_size, buff.buffer_info()[0]) ret = fcntl.ioctl(sock.fileno(), 0x8912, param) retsize = (struct.unpack('iL', ret)[0]) if retsize == (expected * struct_size): logger.warn(('SIOCGIFCONF returned more than {0} up ' 'network interfaces.'), expected) sock = array_to_bytes(buff) for i in range(0, struct_size * expected, struct_size): iface = self._format_single_interface_name(sock, i) # Azure public was returning "lo:1" when deploying WAF if b'lo' in iface: continue else: break return iface.decode('latin-1'), socket.inet_ntoa(sock[i+20:i+24]) # pylint: disable=undefined-loop-variable def _format_single_interface_name(self, sock, offset): return sock[offset:offset+16].split(b'\0', 1)[0] def route_add(self, net, mask, gateway): """Add specified route using tmsh. :param net: :param mask: :param gateway: :return: """ cmd = ("/usr/bin/tmsh create net route " "{0}/{1} gw {2}").format(net, mask, gateway) return shellutil.run(cmd, chk_err=False) def device_for_ide_port(self, port_id): """Return device name attached to ide port 'n'. Include a wait in here because BIG-IP may not have yet initialized this list of devices. :param port_id: :return: """ for retries in range(1, 100): # pylint: disable=W0612 # Retry until devices are ready if os.path.exists("/sys/bus/vmbus/devices/"): break else: time.sleep(10) return super(BigIpOSUtil, self).device_for_ide_port(port_id) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/clearlinux.py000066400000000000000000000075351462617747000263160ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
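# BigIpOSUtil.get_first_if() above sizes each raw SIOCGIFCONF record by the
# interpreter's reported architecture: 40 bytes per record on 64-bit Python,
# 32 bytes on 32-bit (its docstring notes BIG-IP may ship a 32-bit Python on
# 64-bit hardware, which is why the check matters). A sketch of just that
# size selection:
import platform

def ifreq_record_size():
    # platform.architecture()[0] is '64bit' or '32bit'.
    return 40 if platform.architecture()[0] == '64bit' else 32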
# # Requires Python 2.6+ and Openssl 1.0+ # import os # pylint: disable=W0611 import re # pylint: disable=W0611 import pwd # pylint: disable=W0611 import shutil # pylint: disable=W0611 import socket # pylint: disable=W0611 import array # pylint: disable=W0611 import struct # pylint: disable=W0611 import fcntl # pylint: disable=W0611 import time # pylint: disable=W0611 import base64 # pylint: disable=W0611 import errno import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger # pylint: disable=W0611 import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil # pylint: disable=W0611 from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.exception import OSUtilError class ClearLinuxUtil(DefaultOSUtil): def __init__(self): super(ClearLinuxUtil, self).__init__() self.agent_conf_file_path = '/usr/share/defaults/waagent/waagent.conf' self.jit_enabled = True @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" @staticmethod def get_agent_bin_path(): return "/usr/bin" def is_dhcp_enabled(self): return True def start_network(self) : return shellutil.run("systemctl start systemd-networkd", chk_err=False) def restart_if(self, ifname=None, retries=None, wait=None): shellutil.run("systemctl restart systemd-networkd") def restart_ssh_service(self): # SSH is socket activated. No need to restart it. pass def stop_dhcp_service(self): return shellutil.run("systemctl stop systemd-networkd", chk_err=False) def start_dhcp_service(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "systemd-networkd"]) def conf_sshd(self, disable_password): # Don't whack the system default sshd conf pass def del_root_password(self): try: passwd_file_path = conf.get_passwd_file_path() try: passwd_content = fileutil.read_file(passwd_file_path) if not passwd_content: # Empty file is no better than no file raise IOError(errno.ENOENT, "Empty File", passwd_file_path) except (IOError, OSError) as file_read_err: if file_read_err.errno != errno.ENOENT: raise new_passwd = ["root:*LOCK*:14600::::::"] else: passwd = passwd_content.split('\n') new_passwd = [x for x in passwd if not x.startswith("root:")] new_passwd.insert(0, "root:*LOCK*:14600::::::") fileutil.write_file(passwd_file_path, "\n".join(new_passwd)) except IOError as e: raise OSUtilError("Failed to delete root password:{0}".format(e)) pass # pylint: disable=W0107 Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/coreos.py000066400000000000000000000057201462617747000254340ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
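# ClearLinuxUtil.del_root_password() above rewrites the password database so
# that any existing root entry is replaced by a locked one. A sketch of that
# rewriting step on its own, operating on a list of lines instead of a file
# so it is easy to test; lock_root_entry is an illustrative name.
def lock_root_entry(passwd_lines):
    # Drop any existing root entry, then prepend the locked entry used above.
    kept = [x for x in passwd_lines if not x.startswith("root:")]
    kept.insert(0, "root:*LOCK*:14600::::::")
    return kept

# lock_root_entry(["root:x:0:0::/root:/bin/sh", "daemon:x:1:1:..."])
# -> ["root:*LOCK*:14600::::::", "daemon:x:1:1:..."]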
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class CoreOSUtil(DefaultOSUtil): def __init__(self): super(CoreOSUtil, self).__init__() self.agent_conf_file_path = '/usr/share/oem/waagent.conf' self.waagent_path = '/usr/share/oem/bin/waagent' self.python_path = '/usr/share/oem/python/bin' self.jit_enabled = True if 'PATH' in os.environ: path = "{0}:{1}".format(os.environ['PATH'], self.python_path) else: path = self.python_path os.environ['PATH'] = path if 'PYTHONPATH' in os.environ: py_path = os.environ['PYTHONPATH'] py_path = "{0}:{1}".format(py_path, self.waagent_path) else: py_path = self.waagent_path os.environ['PYTHONPATH'] = py_path @staticmethod def get_agent_bin_path(): return "/usr/share/oem/bin" def is_sys_user(self, username): # User 'core' is not a sysuser. if username == 'core': return False return super(CoreOSUtil, self).is_sys_user(username) def is_dhcp_enabled(self): return True def start_network(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def restart_if(self, ifname=None, retries=None, wait=None): shellutil.run("systemctl restart systemd-networkd") def restart_ssh_service(self): # SSH is socket activated on CoreOS. No need to restart it. pass def stop_dhcp_service(self): return shellutil.run("systemctl stop systemd-networkd", chk_err=False) def start_dhcp_service(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid( ["systemctl", "show", "-p", "MainPID", "systemd-networkd"], transform_command_output=lambda o: o.replace("MainPID=", "")) def conf_sshd(self, disable_password): # In CoreOS, /etc/sshd_config is mount readonly. Skip the setting. pass Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/debian.py000066400000000000000000000052431462617747000253640ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
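# CoreOSUtil.get_dhcp_pid() above asks systemd for the DHCP client's PID by
# running "systemctl show -p MainPID systemd-networkd" and stripping the
# "MainPID=" prefix before the shared PID parsing. A sketch of that output
# transform; the sample string stands in for real systemctl output, and the
# zero filter is an extra precaution (systemd reports MainPID=0 for a unit
# that is not running), not something the agent code itself does.
def parse_main_pid(output):
    pids = [int(n) for n in output.replace("MainPID=", "").split()]
    return [p for p in pids if p != 0]

# parse_main_pid("MainPID=712\n") -> [712]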
# # Requires Python 2.6+ and Openssl 1.0+ # import os # pylint: disable=W0611 import re # pylint: disable=W0611 import pwd # pylint: disable=W0611 import shutil # pylint: disable=W0611 import socket # pylint: disable=W0611 import array # pylint: disable=W0611 import struct # pylint: disable=W0611 import fcntl # pylint: disable=W0611 import time # pylint: disable=W0611 import base64 # pylint: disable=W0611 import azurelinuxagent.common.logger as logger # pylint: disable=W0611 import azurelinuxagent.common.utils.fileutil as fileutil # pylint: disable=W0611 import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil # pylint: disable=W0611 from azurelinuxagent.common.osutil.default import DefaultOSUtil class DebianOSBaseUtil(DefaultOSUtil): def __init__(self): super(DebianOSBaseUtil, self).__init__() self.jit_enabled = True def restart_ssh_service(self): return shellutil.run("systemctl --job-mode=ignore-dependencies try-reload-or-restart ssh", chk_err=False) def stop_agent_service(self): return shellutil.run("service azurelinuxagent stop", chk_err=False) def start_agent_service(self): return shellutil.run("service azurelinuxagent start", chk_err=False) def start_network(self): pass def remove_rules_files(self, rules_files=""): pass def restore_rules_files(self, rules_files=""): pass def get_dhcp_lease_endpoint(self): return self.get_endpoint_from_leases_path('/var/lib/dhcp/dhclient.*.leases') class DebianOSModernUtil(DebianOSBaseUtil): def __init__(self): super(DebianOSModernUtil, self).__init__() self.jit_enabled = True self.service_name = self.get_service_name() @staticmethod def get_service_name(): return "walinuxagent" def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/default.py000066400000000000000000001723661462617747000256010ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
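# DebianOSBaseUtil and DebianOSModernUtil above illustrate the subclassing
# convention this package uses throughout: the base class hardcodes the SysV
# "service azurelinuxagent ..." commands, while the modern variant overrides
# get_service_name() to "walinuxagent" and switches to systemctl. A condensed
# sketch of that override pattern with hypothetical class names:
class BaseAgentService(object):
    @staticmethod
    def get_service_name():
        return "azurelinuxagent"

    def stop_command(self):
        return "service {0} stop".format(self.get_service_name())


class ModernAgentService(BaseAgentService):
    @staticmethod
    def get_service_name():
        return "walinuxagent"

    def stop_command(self):
        return "systemctl stop {0}".format(self.get_service_name())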
# # Requires Python 2.6+ and Openssl 1.0+ # import base64 import datetime import errno import fcntl import glob import json import multiprocessing import os import platform import pwd import re import shutil import socket import struct import sys import time from pwd import getpwall import array from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.future import ustr, array_to_bytes from azurelinuxagent.common.utils.cryptutil import CryptUtil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.utils.networkutil import RouteEntry, NetworkInterfaceCard, AddFirewallRules from azurelinuxagent.common.utils.shellutil import CommandError __RULES_FILES__ = ["/lib/udev/rules.d/75-persistent-net-generator.rules", "/etc/udev/rules.d/70-persistent-net.rules"] """ Define distro specific behavior. OSUtil class defines default behavior for all distros. Each concrete distro classes could overwrite default behavior if needed. """ _IPTABLES_VERSION_PATTERN = re.compile("^[^\d\.]*([\d\.]+).*$") # pylint: disable=W1401 _IPTABLES_LOCKING_VERSION = FlexibleVersion('1.4.21') def _add_wait(wait, command): """ If 'wait' is True, adds the wait option (-w) to the given iptables command line """ if wait: command.insert(1, "-w") return command def get_iptables_version_command(): return ["iptables", "--version"] def get_firewall_list_command(wait): return _add_wait(wait, ["iptables", "-t", "security", "-L", "-nxv"]) def get_firewall_packets_command(wait): return _add_wait(wait, ["iptables", "-t", "security", "-L", "OUTPUT", "--zero", "OUTPUT", "-nxv"]) # Precisely delete the rules created by the agent. # this rule was used <= 2.2.25. This rule helped to validate our change, and determine impact. 
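# Every iptables invocation in this module is built as an argv list, and
# _add_wait() above splices the "-w" flag in right after the binary name when
# the installed iptables can serialize access to its lock. A tiny usage
# sketch (pure data, nothing is executed); the delete-rule builders that the
# comment below introduces follow immediately after.
def _add_wait_example():
    # Returns ["iptables", "-w", "-t", "security", "-L", "-nxv"].
    return _add_wait(True, ["iptables", "-t", "security", "-L", "-nxv"])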
def get_firewall_delete_conntrack_accept_command(wait, destination): return _add_wait(wait, ["iptables", "-t", "security", AddFirewallRules.DELETE_COMMAND, "OUTPUT", "-d", destination, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "ACCEPT"]) def get_delete_accept_tcp_rule(wait, destination): return AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.DELETE_COMMAND, destination, wait=wait) def get_firewall_delete_owner_accept_command(wait, destination, owner_uid): return _add_wait(wait, ["iptables", "-t", "security", AddFirewallRules.DELETE_COMMAND, "OUTPUT", "-d", destination, "-p", "tcp", "-m", "owner", "--uid-owner", str(owner_uid), "-j", "ACCEPT"]) def get_firewall_delete_conntrack_drop_command(wait, destination): return _add_wait(wait, ["iptables", "-t", "security", AddFirewallRules.DELETE_COMMAND, "OUTPUT", "-d", destination, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"]) PACKET_PATTERN = "^\s*(\d+)\s+(\d+)\s+DROP\s+.*{0}[^\d]*$" # pylint: disable=W1401 ALL_CPUS_REGEX = re.compile('^cpu .*') ALL_MEMS_REGEX = re.compile('^Mem.*') _enable_firewall = True DMIDECODE_CMD = 'dmidecode --string system-uuid' PRODUCT_ID_FILE = '/sys/class/dmi/id/product_uuid' UUID_PATTERN = re.compile( r'^\s*[A-F0-9]{8}(?:\-[A-F0-9]{4}){3}\-[A-F0-9]{12}\s*$', re.IGNORECASE) IOCTL_SIOCGIFCONF = 0x8912 IOCTL_SIOCGIFFLAGS = 0x8913 IOCTL_SIOCGIFHWADDR = 0x8927 IFNAMSIZ = 16 IP_COMMAND_OUTPUT = re.compile('^\d+:\s+(\w+):\s+(.*)$') # pylint: disable=W1401 STORAGE_DEVICE_PATH = '/sys/bus/vmbus/devices/' GEN2_DEVICE_ID = 'f8b3781a-1e82-4818-a1c3-63d806ec15bb' class DefaultOSUtil(object): def __init__(self): self.agent_conf_file_path = '/etc/waagent.conf' self.selinux = None self.disable_route_warning = False self.jit_enabled = False self.service_name = self.get_service_name() @staticmethod def get_service_name(): return "waagent" @staticmethod def get_systemd_unit_file_install_path(): return "/lib/systemd/system" @staticmethod def get_agent_bin_path(): return "/usr/sbin" @staticmethod def get_vm_arch(): try: return platform.machine() except Exception as e: logger.warn("Unable to determine cpu architecture: {0}", ustr(e)) return "unknown" def get_firewall_dropped_packets(self, dst_ip=None): # If a previous attempt failed, do not retry global _enable_firewall # pylint: disable=W0603 if not _enable_firewall: return 0 try: wait = self.get_firewall_will_wait() try: output = shellutil.run_command(get_firewall_packets_command(wait)) pattern = re.compile(PACKET_PATTERN.format(dst_ip)) for line in output.split('\n'): m = pattern.match(line) if m is not None: return int(m.group(1)) except Exception as e: if isinstance(e, CommandError) and (e.returncode == 3 or e.returncode == 4): # pylint: disable=E1101 # Transient error that we ignore returncode 3. This code fires every loop # of the daemon (60m), so we will get the value eventually. 
# ignore returncode 4 as temporary fix (RULE_REPLACE failed (Invalid argument)) return 0 logger.warn("Failed to get firewall packets: {0}", ustr(e)) return -1 return 0 except Exception as e: _enable_firewall = False logger.warn("Unable to retrieve firewall packets dropped" "{0}".format(ustr(e))) return -1 def get_firewall_will_wait(self): # Determine if iptables will serialize access try: output = shellutil.run_command(get_iptables_version_command()) except Exception as e: msg = "Unable to determine version of iptables: {0}".format(ustr(e)) logger.warn(msg) raise Exception(msg) m = _IPTABLES_VERSION_PATTERN.match(output) if m is None: msg = "iptables did not return version information: {0}".format(output) logger.warn(msg) raise Exception(msg) wait = "-w" \ if FlexibleVersion(m.group(1)) >= _IPTABLES_LOCKING_VERSION \ else "" return wait def _delete_rule(self, rule): """ Continually execute the delete operation until the return code is non-zero or the limit has been reached. """ for i in range(1, 100): # pylint: disable=W0612 try: rc = shellutil.run_command(rule) # pylint: disable=W0612 except CommandError as e: if e.returncode == 1: return if e.returncode == 2: raise Exception("invalid firewall deletion rule '{0}'".format(rule)) def remove_firewall(self, dst_ip, uid, wait): # If a previous attempt failed, do not retry global _enable_firewall # pylint: disable=W0603 if not _enable_firewall: return False try: # This rule was <= 2.2.25 only, and may still exist on some VMs. Until 2.2.25 # has aged out, keep this cleanup in place. self._delete_rule(get_firewall_delete_conntrack_accept_command(wait, dst_ip)) self._delete_rule(get_delete_accept_tcp_rule(wait, dst_ip)) self._delete_rule(get_firewall_delete_owner_accept_command(wait, dst_ip, uid)) self._delete_rule(get_firewall_delete_conntrack_drop_command(wait, dst_ip)) return True except Exception as e: _enable_firewall = False logger.info("Unable to remove firewall -- " "no further attempts will be made: " "{0}".format(ustr(e))) return False def remove_legacy_firewall_rule(self, dst_ip): # This function removes the legacy firewall rule that was added <= 2.2.25. # Not adding the global _enable_firewall check here as this will only be called once per service start and # we dont want the state of this call to affect other iptable calls. try: wait = self.get_firewall_will_wait() # This rule was <= 2.2.25 only, and may still exist on some VMs. Until 2.2.25 # has aged out, keep this cleanup in place. self._delete_rule(get_firewall_delete_conntrack_accept_command(wait, dst_ip)) except Exception as error: logger.info( "Unable to remove legacy firewall rule, won't try removing it again. Error: {0}".format(ustr(error))) def enable_firewall(self, dst_ip, uid): """ It checks if every iptable rule exists and add them if not present. It returns a tuple(enable firewall success status, missing rules array) enable firewall success status: Returns True if every firewall rule exists otherwise False missing rules: array with names of the missing rules ("ACCEPT DNS", "ACCEPT", "DROP") """ # If a previous attempt failed, do not retry global _enable_firewall # pylint: disable=W0603 if not _enable_firewall: return False, [] missing_rules = [] try: wait = self.get_firewall_will_wait() # check every iptable rule and delete others if any rule is missing # and append every iptable rule to the end of the chain. 
try: missing_rules.extend(AddFirewallRules.get_missing_iptables_rules(wait, dst_ip, uid)) if len(missing_rules) > 0: self.remove_firewall(dst_ip, uid, wait) AddFirewallRules.add_iptables_rules(wait, dst_ip, uid) except CommandError as e: if e.returncode == 2: self.remove_firewall(dst_ip, uid, wait) msg = "please upgrade iptables to a version that supports the -C option" logger.warn(msg) raise except Exception as error: logger.warn(ustr(error)) raise return True, missing_rules except Exception as e: _enable_firewall = False logger.info("Unable to establish firewall -- " "no further attempts will be made: " "{0}".format(ustr(e))) return False, missing_rules def get_firewall_list(self, wait=None): try: if wait is None: wait = self.get_firewall_will_wait() output = shellutil.run_command(get_firewall_list_command(wait)) return output except Exception as e: logger.warn("Listing firewall rules failed: {0}".format(ustr(e))) return "" @staticmethod def _correct_instance_id(instance_id): """ Azure stores the instance ID with an incorrect byte ordering for the first parts. For example, the ID returned by the metadata service: D0DF4C54-4ECB-4A4B-9954-5BDF3ED5C3B8 will be found as: 544CDFD0-CB4E-4B4A-9954-5BDF3ED5C3B8 This code corrects the byte order such that it is consistent with that returned by the metadata service. """ if not UUID_PATTERN.match(instance_id): return instance_id parts = instance_id.split('-') return '-'.join([ textutil.swap_hexstring(parts[0], width=2), textutil.swap_hexstring(parts[1], width=2), textutil.swap_hexstring(parts[2], width=2), parts[3], parts[4] ]) def is_current_instance_id(self, id_that): """ Compare two instance IDs for equality, but allow that some IDs may have been persisted using the incorrect byte ordering. """ id_this = self.get_instance_id() logger.verbose("current instance id: {0}".format(id_this)) logger.verbose(" former instance id: {0}".format(id_that)) return id_this.lower() == id_that.lower() or \ id_this.lower() == self._correct_instance_id(id_that).lower() def get_agent_conf_file_path(self): return self.agent_conf_file_path def get_instance_id(self): """ Azure records a UUID as the instance ID First check /sys/class/dmi/id/product_uuid. If that is missing, then extracts from dmidecode If nothing works (for old VMs), return the empty string """ if os.path.isfile(PRODUCT_ID_FILE): s = fileutil.read_file(PRODUCT_ID_FILE).strip() else: rc, s = shellutil.run_get_output(DMIDECODE_CMD) if rc != 0 or UUID_PATTERN.match(s) is None: return "" return self._correct_instance_id(s.strip()) @staticmethod def get_userentry(username): try: return pwd.getpwnam(username) except KeyError: return None def get_root_username(self): return "root" def is_sys_user(self, username): """ Check whether use is a system user. 
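# is_sys_user() (its body continues below) classifies an account as a system
# account by comparing its UID against UID_MIN from /etc/login.defs, falling
# back to 100 when the file or the key is missing. A sketch of just that
# lookup; read_uid_min is an illustrative helper, and the line format
# assumed is "UID_MIN   1000".
def read_uid_min(login_defs="/etc/login.defs", default=100):
    try:
        with open(login_defs) as f:
            for line in f:
                if line.startswith("UID_MIN"):
                    return int(line.split()[1])
    except (IOError, OSError, ValueError, IndexError):
        pass
    return default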
If reset sys user is allowed in conf, return False Otherwise, check whether UID is less than UID_MIN """ if conf.get_allow_reset_sys_user(): return False userentry = self.get_userentry(username) uidmin = None try: uidmin_def = fileutil.get_line_startingwith("UID_MIN", "/etc/login.defs") if uidmin_def is not None: uidmin = int(uidmin_def.split()[1]) except IOError as e: # pylint: disable=W0612 pass if uidmin == None: uidmin = 100 if userentry != None and userentry[2] < uidmin: return True else: return False def useradd(self, username, expiration=None, comment=None): """ Create user account with 'username' """ userentry = self.get_userentry(username) if userentry is not None: logger.info("User {0} already exists, skip useradd", username) return if expiration is not None: cmd = ["useradd", "-m", username, "-e", expiration] else: cmd = ["useradd", "-m", username] if comment is not None: cmd.extend(["-c", comment]) self._run_command_raising_OSUtilError(cmd, err_msg="Failed to create user account:{0}".format(username)) def chpasswd(self, username, password, crypt_id=6, salt_len=10): if self.is_sys_user(username): raise OSUtilError(("User {0} is a system user, " "will not set password.").format(username)) passwd_hash = textutil.gen_password_hash(password, crypt_id, salt_len) self._run_command_raising_OSUtilError(["usermod", "-p", passwd_hash, username], err_msg="Failed to set password for {0}".format(username)) def get_users(self): return getpwall() def conf_sudoer(self, username, nopasswd=False, remove=False): sudoers_dir = conf.get_sudoers_dir() sudoers_wagent = os.path.join(sudoers_dir, 'waagent') if not remove: # for older distros create sudoers.d if not os.path.isdir(sudoers_dir): # create the sudoers.d directory fileutil.mkdir(sudoers_dir) # add the include of sudoers.d to the /etc/sudoers sudoers_file = os.path.join(sudoers_dir, os.pardir, 'sudoers') include_sudoers_dir = "\n#includedir {0}\n".format(sudoers_dir) fileutil.append_file(sudoers_file, include_sudoers_dir) sudoer = None if nopasswd: sudoer = "{0} ALL=(ALL) NOPASSWD: ALL".format(username) else: sudoer = "{0} ALL=(ALL) ALL".format(username) if not os.path.isfile(sudoers_wagent) or \ fileutil.findstr_in_file(sudoers_wagent, sudoer) is False: fileutil.append_file(sudoers_wagent, "{0}\n".format(sudoer)) fileutil.chmod(sudoers_wagent, 0o440) else: # remove user from sudoers if os.path.isfile(sudoers_wagent): try: content = fileutil.read_file(sudoers_wagent) sudoers = content.split("\n") sudoers = [x for x in sudoers if username not in x] fileutil.write_file(sudoers_wagent, "\n".join(sudoers)) except IOError as e: raise OSUtilError("Failed to remove sudoer: {0}".format(e)) def del_root_password(self): try: passwd_file_path = conf.get_passwd_file_path() passwd_content = fileutil.read_file(passwd_file_path) passwd = passwd_content.split('\n') new_passwd = [x for x in passwd if not x.startswith("root:")] new_passwd.insert(0, "root:*LOCK*:14600::::::") fileutil.write_file(passwd_file_path, "\n".join(new_passwd)) except IOError as e: raise OSUtilError("Failed to delete root password:{0}".format(e)) @staticmethod def _norm_path(filepath): home = conf.get_home_dir() # Expand HOME variable if present in path path = os.path.normpath(filepath.replace("$HOME", home)) return path def deploy_ssh_keypair(self, username, keypair): """ Deploy id_rsa and id_rsa.pub """ path, thumbprint = keypair path = self._norm_path(path) dir_path = os.path.dirname(path) fileutil.mkdir(dir_path, mode=0o700, owner=username) lib_dir = conf.get_lib_dir() prv_path = 
os.path.join(lib_dir, thumbprint + '.prv') if not os.path.isfile(prv_path): raise OSUtilError("Can't find {0}.prv".format(thumbprint)) shutil.copyfile(prv_path, path) pub_path = path + '.pub' crytputil = CryptUtil(conf.get_openssl_cmd()) pub = crytputil.get_pubkey_from_prv(prv_path) fileutil.write_file(pub_path, pub) self.set_selinux_context(pub_path, 'unconfined_u:object_r:ssh_home_t:s0') self.set_selinux_context(path, 'unconfined_u:object_r:ssh_home_t:s0') os.chmod(path, 0o644) os.chmod(pub_path, 0o600) def openssl_to_openssh(self, input_file, output_file): cryptutil = CryptUtil(conf.get_openssl_cmd()) cryptutil.crt_to_ssh(input_file, output_file) def deploy_ssh_pubkey(self, username, pubkey): """ Deploy authorized_key """ path, thumbprint, value = pubkey if path is None: raise OSUtilError("Public key path is None") crytputil = CryptUtil(conf.get_openssl_cmd()) path = self._norm_path(path) dir_path = os.path.dirname(path) fileutil.mkdir(dir_path, mode=0o700, owner=username) if value is not None: if not value.startswith("ssh-"): raise OSUtilError("Bad public key: {0}".format(value)) if not value.endswith("\n"): value += "\n" fileutil.write_file(path, value) elif thumbprint is not None: lib_dir = conf.get_lib_dir() crt_path = os.path.join(lib_dir, thumbprint + '.crt') if not os.path.isfile(crt_path): raise OSUtilError("Can't find {0}.crt".format(thumbprint)) pub_path = os.path.join(lib_dir, thumbprint + '.pub') pub = crytputil.get_pubkey_from_crt(crt_path) fileutil.write_file(pub_path, pub) self.set_selinux_context(pub_path, 'unconfined_u:object_r:ssh_home_t:s0') self.openssl_to_openssh(pub_path, path) fileutil.chmod(pub_path, 0o600) else: raise OSUtilError("SSH public key Fingerprint and Value are None") self.set_selinux_context(path, 'unconfined_u:object_r:ssh_home_t:s0') fileutil.chowner(path, username) fileutil.chmod(path, 0o644) def is_selinux_system(self): """ Checks and sets self.selinux = True if SELinux is available on system. """ if self.selinux == None: if shellutil.run("which getenforce", chk_err=False) == 0: self.selinux = True else: self.selinux = False return self.selinux def is_selinux_enforcing(self): """ Calls shell command 'getenforce' and returns True if 'Enforcing'. """ if self.is_selinux_system(): output = shellutil.run_get_output("getenforce")[1] return output.startswith("Enforcing") else: return False def set_selinux_context(self, path, con): # pylint: disable=R1710 """ Calls shell 'chcon' with 'path' and 'con' context. Returns exit result. """ if self.is_selinux_system(): if not os.path.exists(path): logger.error("Path does not exist: {0}".format(path)) return 1 try: shellutil.run_command(['chcon', con, path], log_error=True) except shellutil.CommandError as cmd_err: return cmd_err.returncode return 0 def conf_sshd(self, disable_password): option = "no" if disable_password else "yes" conf_file_path = conf.get_sshd_conf_file_path() conf_file = fileutil.read_file(conf_file_path).split("\n") textutil.set_ssh_config(conf_file, "PasswordAuthentication", option) textutil.set_ssh_config(conf_file, "ChallengeResponseAuthentication", option) textutil.set_ssh_config(conf_file, "ClientAliveInterval", str(conf.get_ssh_client_alive_interval())) fileutil.write_file(conf_file_path, "\n".join(conf_file)) logger.info("{0} SSH password-based authentication methods." 
.format("Disabled" if disable_password else "Enabled")) logger.info("Configured SSH client probing to keep connections alive.") def get_dvd_device(self, dev_dir='/dev'): pattern = r'(sr[0-9]|hd[c-z]|cdrom[0-9]|cd[0-9]|vd[b-z])' device_list = os.listdir(dev_dir) for dvd in [re.match(pattern, dev) for dev in device_list]: if dvd is not None: return "/dev/{0}".format(dvd.group(0)) inner_detail = "The following devices were found, but none matched " \ "the pattern [{0}]: {1}\n".format(pattern, device_list) raise OSUtilError(msg="Failed to get dvd device from {0}".format(dev_dir), inner=inner_detail) def mount_dvd(self, max_retry=6, chk_err=True, dvd_device=None, mount_point=None, sleep_time=5): if dvd_device is None: dvd_device = self.get_dvd_device() if mount_point is None: mount_point = conf.get_dvd_mount_point() mount_list = shellutil.run_get_output("mount")[1] existing = self.get_mount_point(mount_list, dvd_device) if existing is not None: # already mounted logger.info("{0} is already mounted at {1}", dvd_device, existing) return if not os.path.isdir(mount_point): os.makedirs(mount_point) err = '' for retry in range(1, max_retry): return_code, err = self.mount(dvd_device, mount_point, option=["-o", "ro", "-t", "udf,iso9660,vfat"], chk_err=False) if return_code == 0: logger.info("Successfully mounted dvd") return else: logger.warn( "Mounting dvd failed [retry {0}/{1}, sleeping {2} sec]", retry, max_retry - 1, sleep_time) if retry < max_retry: time.sleep(sleep_time) if chk_err: raise OSUtilError("Failed to mount dvd device", inner=err) def umount_dvd(self, chk_err=True, mount_point=None): if mount_point is None: mount_point = conf.get_dvd_mount_point() return_code = self.umount(mount_point, chk_err=chk_err) if chk_err and return_code != 0: raise OSUtilError("Failed to unmount dvd device at {0}".format(mount_point)) def eject_dvd(self, chk_err=True): dvd = self.get_dvd_device() dev = dvd.rsplit('/', 1)[1] pattern = r'(vd[b-z])' # We should not eject if the disk is not a cdrom if re.search(pattern, dev): return try: shellutil.run_command(["eject", dvd]) except shellutil.CommandError as cmd_err: if chk_err: msg = "Failed to eject dvd: ret={0}\n[stdout]\n{1}\n\n[stderr]\n{2}"\ .format(cmd_err.returncode, cmd_err.stdout, cmd_err.stderr) raise OSUtilError(msg) def try_load_atapiix_mod(self): try: self.load_atapiix_mod() except Exception as e: logger.warn("Could not load ATAPI driver: {0}".format(e)) def load_atapiix_mod(self): if self.is_atapiix_mod_loaded(): return ret, kern_version = shellutil.run_get_output("uname -r") if ret != 0: raise Exception("Failed to call uname -r") mod_path = os.path.join('/lib/modules', kern_version.strip('\n'), 'kernel/drivers/ata/ata_piix.ko') if not os.path.isfile(mod_path): raise Exception("Can't find module file:{0}".format(mod_path)) ret, output = shellutil.run_get_output("insmod " + mod_path) # pylint: disable=W0612 if ret != 0: raise Exception("Error calling insmod for ATAPI CD-ROM driver") if not self.is_atapiix_mod_loaded(max_retry=3): raise Exception("Failed to load ATAPI CD-ROM driver") def is_atapiix_mod_loaded(self, max_retry=1): for retry in range(0, max_retry): ret = shellutil.run("lsmod | grep ata_piix", chk_err=False) if ret == 0: logger.info("Module driver for ATAPI CD-ROM is already present.") return True if retry < max_retry - 1: time.sleep(1) return False def mount(self, device, mount_point, option=None, chk_err=True): if not option: option = [] cmd = ["mount"] cmd.extend(option + [device, mount_point]) try: output = shellutil.run_command(cmd, 
log_error=chk_err) except shellutil.CommandError as cmd_err: detail = "[{0}] returned {1}:\n stdout: {2}\n\nstderr: {3}".format(cmd, cmd_err.returncode, cmd_err.stdout, cmd_err.stderr) return cmd_err.returncode, detail return 0, output def umount(self, mount_point, chk_err=True): try: shellutil.run_command(["umount", mount_point], log_error=chk_err) except shellutil.CommandError as cmd_err: return cmd_err.returncode return 0 def allow_dhcp_broadcast(self): # Open DHCP port if iptables is enabled. # We supress error logging on error. shellutil.run("iptables -D INPUT -p udp --dport 68 -j ACCEPT", chk_err=False) shellutil.run("iptables -I INPUT -p udp --dport 68 -j ACCEPT", chk_err=False) def remove_rules_files(self, rules_files=None): if rules_files is None: rules_files = __RULES_FILES__ lib_dir = conf.get_lib_dir() for src in rules_files: file_name = fileutil.base_name(src) dest = os.path.join(lib_dir, file_name) if os.path.isfile(dest): os.remove(dest) if os.path.isfile(src): logger.warn("Move rules file {0} to {1}", file_name, dest) shutil.move(src, dest) def restore_rules_files(self, rules_files=None): if rules_files is None: rules_files = __RULES_FILES__ lib_dir = conf.get_lib_dir() for dest in rules_files: filename = fileutil.base_name(dest) src = os.path.join(lib_dir, filename) if os.path.isfile(dest): continue if os.path.isfile(src): logger.warn("Move rules file {0} to {1}", filename, dest) shutil.move(src, dest) def get_mac_addr(self): """ Convenience function, returns mac addr bound to first non-loopback interface. """ ifname = self.get_if_name() addr = self.get_if_mac(ifname) return textutil.hexstr_to_bytearray(addr) def get_if_mac(self, ifname): """ Return the mac-address bound to the socket. """ sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) param = struct.pack('256s', (ifname[:15] + ('\0' * 241)).encode('latin-1')) info = fcntl.ioctl(sock.fileno(), IOCTL_SIOCGIFHWADDR, param) sock.close() return ''.join(['%02X' % textutil.str_to_ord(char) for char in info[18:24]]) @staticmethod def _get_struct_ifconf_size(): """ Return the sizeof struct ifinfo. On 64-bit platforms the size is 40 bytes; on 32-bit platforms the size is 32 bytes. """ python_arc = platform.architecture()[0] struct_size = 32 if python_arc == '32bit' else 40 return struct_size def _get_all_interfaces(self): """ Return a dictionary mapping from interface name to IPv4 address. Interfaces without a name are ignored. """ expected = 16 # how many devices should I expect... struct_size = DefaultOSUtil._get_struct_ifconf_size() array_size = expected * struct_size buff = array.array('B', b'\0' * array_size) param = struct.pack('iL', array_size, buff.buffer_info()[0]) sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) ret = fcntl.ioctl(sock.fileno(), IOCTL_SIOCGIFCONF, param) retsize = (struct.unpack('iL', ret)[0]) sock.close() if retsize == array_size: logger.warn(('SIOCGIFCONF returned more than {0} up ' 'network interfaces.'), expected) ifconf_buff = array_to_bytes(buff) ifaces = {} for i in range(0, array_size, struct_size): iface = ifconf_buff[i:i + IFNAMSIZ].split(b'\0', 1)[0] if len(iface) > 0: iface_name = iface.decode('latin-1') if iface_name not in ifaces: ifaces[iface_name] = socket.inet_ntoa(ifconf_buff[i + 20:i + 24]) return ifaces def get_first_if(self): """ Return the interface name, and IPv4 addr of the "primary" interface or, failing that, any active non-loopback interface. 
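# _get_all_interfaces() above walks the raw SIOCGIFCONF buffer in fixed-size
# records: the first IFNAMSIZ (16) bytes of a record are the NUL-padded
# interface name and bytes 20..24 hold the IPv4 address. A sketch decoding a
# single record, with a fabricated buffer standing in for the ioctl result:
import socket

def decode_ifreq(record):
    name = record[:16].split(b'\0', 1)[0].decode('latin-1')
    addr = socket.inet_ntoa(record[20:24])
    return name, addr

# decode_ifreq(b'eth0' + b'\0' * 16 + b'\x0a\x00\x00\x04' + b'\0' * 16)
# -> ('eth0', '10.0.0.4')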
""" primary = self.get_primary_interface() ifaces = self._get_all_interfaces() if primary in ifaces: return primary, ifaces[primary] for iface_name in ifaces.keys(): if not self.is_loopback(iface_name): logger.info("Choosing non-primary [{0}]".format(iface_name)) return iface_name, ifaces[iface_name] return '', '' @staticmethod def _build_route_list(proc_net_route): """ Construct a list of network route entries :param list(str) proc_net_route: Route table lines, including headers, containing at least one route :return: List of network route objects :rtype: list(RouteEntry) """ idx = 0 column_index = {} header_line = proc_net_route[0] for header in filter(lambda h: len(h) > 0, header_line.split("\t")): column_index[header.strip()] = idx idx += 1 try: idx_iface = column_index["Iface"] idx_dest = column_index["Destination"] idx_gw = column_index["Gateway"] idx_flags = column_index["Flags"] idx_metric = column_index["Metric"] idx_mask = column_index["Mask"] except KeyError: msg = "/proc/net/route is missing key information; headers are [{0}]".format(header_line) logger.error(msg) return [] route_list = [] for entry in proc_net_route[1:]: route = entry.split("\t") if len(route) > 0: route_obj = RouteEntry(route[idx_iface], route[idx_dest], route[idx_gw], route[idx_mask], route[idx_flags], route[idx_metric]) route_list.append(route_obj) return route_list @staticmethod def read_route_table(): """ Return a list of strings comprising the route table, including column headers. Each line is stripped of leading or trailing whitespace but is otherwise unmolested. :return: Entries in the text route table :rtype: list(str) """ try: with open('/proc/net/route') as routing_table: return list(map(str.strip, routing_table.readlines())) except Exception as e: logger.error("Cannot read route table [{0}]", ustr(e)) return [] @staticmethod def get_list_of_routes(route_table): """ Construct a list of all network routes known to this system. :param list(str) route_table: List of text entries from route table, including headers :return: a list of network routes :rtype: list(RouteEntry) """ route_list = [] count = len(route_table) if count < 1: logger.error("/proc/net/route is missing headers") elif count == 1: logger.error("/proc/net/route contains no routes") else: route_list = DefaultOSUtil._build_route_list(route_table) return route_list def get_primary_interface(self): """ Get the name of the primary interface, which is the one with the default route attached to it; if there are multiple default routes, the primary has the lowest Metric. 
:return: the interface which has the default route """ # from linux/route.h RTF_GATEWAY = 0x02 DEFAULT_DEST = "00000000" primary_interface = None if not self.disable_route_warning: logger.info("Examine /proc/net/route for primary interface") route_table = DefaultOSUtil.read_route_table() def is_default(route): return route.destination == DEFAULT_DEST and int(route.flags) & RTF_GATEWAY == RTF_GATEWAY candidates = list(filter(is_default, DefaultOSUtil.get_list_of_routes(route_table))) if len(candidates) > 0: def get_metric(route): return int(route.metric) primary_route = min(candidates, key=get_metric) primary_interface = primary_route.interface if primary_interface is None: primary_interface = '' if not self.disable_route_warning: with open('/proc/net/route') as routing_table_fh: routing_table_text = routing_table_fh.read() logger.warn('Could not determine primary interface, ' 'please ensure /proc/net/route is correct') logger.warn('Contents of /proc/net/route:\n{0}'.format(routing_table_text)) logger.warn('Primary interface examination will retry silently') self.disable_route_warning = True else: logger.info('Primary interface is [{0}]'.format(primary_interface)) self.disable_route_warning = False return primary_interface def is_primary_interface(self, ifname): """ Indicate whether the specified interface is the primary. :param ifname: the name of the interface - eth0, lo, etc. :return: True if this interface binds the default route """ return self.get_primary_interface() == ifname def is_loopback(self, ifname): """ Determine if a named interface is loopback. """ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) ifname_buff = ifname + ('\0' * 256) result = fcntl.ioctl(s.fileno(), IOCTL_SIOCGIFFLAGS, ifname_buff) flags, = struct.unpack('H', result[16:18]) isloopback = flags & 8 == 8 if not self.disable_route_warning: logger.info('interface [{0}] has flags [{1}], ' 'is loopback [{2}]'.format(ifname, flags, isloopback)) s.close() return isloopback def get_dhcp_lease_endpoint(self): """ OS specific, this should return the decoded endpoint of the wireserver from option 245 in the dhcp leases file if it exists on disk. :return: The endpoint if available, or None """ return None @staticmethod def get_endpoint_from_leases_path(pathglob): """ Try to discover and decode the wireserver endpoint in the specified dhcp leases path. 
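# This function recovers the wireserver endpoint from lease lines of the form
# "option unknown-245 a8:3f:81:10;": four hex octets that become a dotted
# quad. A sketch of just the decode step, reusing the same regex shape; the
# sample line is fabricated, though a8:3f:81:10 does decode to the well-known
# wireserver address 168.63.129.16.
import re

OPTION_245_RE = re.compile(
    r'\s*option\s+unknown-245\s+([0-9a-fA-F]+):([0-9a-fA-F]+):([0-9a-fA-F]+):([0-9a-fA-F]+);')

def decode_option_245(line):
    m = OPTION_245_RE.match(line)
    if m is None:
        return None
    return '.'.join(str(int(m.group(i), 16)) for i in range(1, 5))

# decode_option_245("  option unknown-245 a8:3f:81:10;") -> "168.63.129.16"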
:param pathglob: The path containing dhcp lease files :return: The endpoint if available, otherwise None """ endpoint = None HEADER_LEASE = "lease" HEADER_OPTION_245 = "option unknown-245" HEADER_EXPIRE = "expire" FOOTER_LEASE = "}" FORMAT_DATETIME = "%Y/%m/%d %H:%M:%S" option_245_re = re.compile( r'\s*option\s+unknown-245\s+([0-9a-fA-F]+):([0-9a-fA-F]+):([0-9a-fA-F]+):([0-9a-fA-F]+);') logger.info("looking for leases in path [{0}]".format(pathglob)) for lease_file in glob.glob(pathglob): leases = open(lease_file).read() if HEADER_OPTION_245 in leases: cached_endpoint = None option_245_match = None expired = True # assume expired for line in leases.splitlines(): if line.startswith(HEADER_LEASE): cached_endpoint = None expired = True elif HEADER_EXPIRE in line: if "never" in line: expired = False else: try: expire_string = line.split(" ", 4)[-1].strip(";") expire_date = datetime.datetime.strptime(expire_string, FORMAT_DATETIME) if expire_date > datetime.datetime.utcnow(): expired = False except: # pylint: disable=W0702 logger.error("could not parse expiry token '{0}'".format(line)) elif FOOTER_LEASE in line: logger.info("dhcp entry:{0}, 245:{1}, expired:{2}".format( cached_endpoint, option_245_match is not None, expired)) if not expired and cached_endpoint is not None: endpoint = cached_endpoint logger.info("found endpoint [{0}]".format(endpoint)) # we want to return the last valid entry, so # keep searching else: option_245_match = option_245_re.match(line) if option_245_match is not None: cached_endpoint = '{0}.{1}.{2}.{3}'.format( int(option_245_match.group(1), 16), int(option_245_match.group(2), 16), int(option_245_match.group(3), 16), int(option_245_match.group(4), 16)) if endpoint is not None: logger.info("cached endpoint found [{0}]".format(endpoint)) else: logger.info("cached endpoint not found") return endpoint def is_missing_default_route(self): try: route_cmd = ["ip", "route", "show"] routes = shellutil.run_command(route_cmd) for route in routes.split("\n"): if route.startswith("0.0.0.0 ") or route.startswith("default "): return False return True except CommandError as e: logger.warn("Cannot get the routing table. 
{0} failed: {1}", ustr(route_cmd), ustr(e)) return False def get_if_name(self): if_name = '' if_found = False while not if_found: if_name = self.get_first_if()[0] if_found = len(if_name) >= 2 if not if_found: time.sleep(2) return if_name def get_ip4_addr(self): return self.get_first_if()[1] def set_route_for_dhcp_broadcast(self, ifname): try: route_cmd = ["ip", "route", "add", "255.255.255.255", "dev", ifname] return shellutil.run_command(route_cmd) except CommandError: return "" def remove_route_for_dhcp_broadcast(self, ifname): try: route_cmd = ["ip", "route", "del", "255.255.255.255", "dev", ifname] shellutil.run_command(route_cmd) except CommandError: pass def is_dhcp_available(self): return True def is_dhcp_enabled(self): return False def stop_dhcp_service(self): pass def start_dhcp_service(self): pass def start_network(self): pass def start_agent_service(self): pass def stop_agent_service(self): pass def register_agent_service(self): pass def unregister_agent_service(self): pass def restart_ssh_service(self): pass def route_add(self, net, mask, gateway): # pylint: disable=W0613 """ Add specified route """ try: cmd = ["ip", "route", "add", net, "via", gateway] return shellutil.run_command(cmd) except CommandError: return "" @staticmethod def _text_to_pid_list(text): return [int(n) for n in text.split()] @staticmethod def _get_dhcp_pid(command, transform_command_output=None): try: output = shellutil.run_command(command) if transform_command_output is not None: output = transform_command_output(output) return DefaultOSUtil._text_to_pid_list(output) except CommandError as exception: # pylint: disable=W0612 return [] def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "dhclient"]) def set_hostname(self, hostname): fileutil.write_file('/etc/hostname', hostname) self._run_command_without_raising(["hostname", hostname], log_error=False) def set_dhcp_hostname(self, hostname): autosend = r'^[^#]*?send\s*host-name.*?(|gethostname[(,)])' dhclient_files = ['/etc/dhcp/dhclient.conf', '/etc/dhcp3/dhclient.conf', '/etc/dhclient.conf'] for conf_file in dhclient_files: if not os.path.isfile(conf_file): continue if fileutil.findre_in_file(conf_file, autosend): # Return if auto send host-name is configured return fileutil.update_conf_file(conf_file, 'send host-name', 'send host-name "{0}";'.format(hostname)) def restart_if(self, ifname, retries=3, wait=5): retry_limit = retries + 1 for attempt in range(1, retry_limit): return_code = shellutil.run("ifdown {0} && ifup {0}".format(ifname), expected_errors=[1] if attempt < retries else []) if return_code == 0: return logger.warn("failed to restart {0}: return code {1}".format(ifname, return_code)) if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") def check_and_recover_nic_state(self, ifname): # TODO: This should be implemented for all distros where we reset the network during publishing hostname. Currently it is only implemented in RedhatOSUtil. pass def publish_hostname(self, hostname, recover_nic=False): """ Publishes the provided hostname. 
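# restart_if() above retries "ifdown {ifname} && ifup {ifname}" with a fixed
# sleep between attempts and logs a warning on each failure. A generic sketch
# of that bounded-retry shape; retry_with_wait and action are illustrative,
# with action expected to return a process-style exit code (0 == success).
import time

def retry_with_wait(action, retries=3, wait=5):
    for attempt in range(1, retries + 1):
        if action() == 0:
            return True
        if attempt < retries:
            time.sleep(wait)
    return False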
""" self.set_dhcp_hostname(hostname) self.set_hostname_record(hostname) ifname = self.get_if_name() self.restart_if(ifname) if recover_nic: self.check_and_recover_nic_state(ifname) def set_scsi_disks_timeout(self, timeout): for dev in os.listdir("/sys/block"): if dev.startswith('sd'): self.set_block_device_timeout(dev, timeout) def set_block_device_timeout(self, dev, timeout): if dev is not None and timeout is not None: file_path = "/sys/block/{0}/device/timeout".format(dev) content = fileutil.read_file(file_path) original = content.splitlines()[0].rstrip() if original != timeout: fileutil.write_file(file_path, timeout) logger.info("Set block dev timeout: {0} with timeout: {1}", dev, timeout) def get_mount_point(self, mountlist, device): """ Example of mountlist: /dev/sda1 on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) /dev/sdb1 on /mnt/resource type ext4 (rw) """ if (mountlist and device): for entry in mountlist.split('\n'): if (re.search(device, entry)): tokens = entry.split() # Return the 3rd column of this line return tokens[2] if len(tokens) > 2 else None return None @staticmethod def _enumerate_device_id(): """ Enumerate all storage device IDs. Args: None Returns: Iterator[Tuple[str, str]]: VmBus and storage devices. """ if os.path.exists(STORAGE_DEVICE_PATH): for vmbus in os.listdir(STORAGE_DEVICE_PATH): deviceid = fileutil.read_file(os.path.join(STORAGE_DEVICE_PATH, vmbus, "device_id")) guid = deviceid.strip('{}\n') yield vmbus, guid @staticmethod def search_for_resource_disk(gen1_device_prefix, gen2_device_id): """ Search the filesystem for a device by ID or prefix. Args: gen1_device_prefix (str): Gen1 resource disk prefix. gen2_device_id (str): Gen2 resource device ID. Returns: str: The found device. """ device = None # We have to try device IDs for both Gen1 and Gen2 VMs. logger.info('Searching gen1 prefix {0} or gen2 {1}'.format(gen1_device_prefix, gen2_device_id)) try: for vmbus, guid in DefaultOSUtil._enumerate_device_id(): if guid.startswith(gen1_device_prefix) or guid == gen2_device_id: for root, dirs, files in os.walk(STORAGE_DEVICE_PATH + vmbus): # pylint: disable=W0612 root_path_parts = root.split('/') # For Gen1 VMs we only have to check for the block dir in the # current device. But for Gen2 VMs all of the disks (sda, sdb, # sr0) are presented in this device on the same SCSI controller. # Because of that we need to also read the LUN. It will be: # 0 - OS disk # 1 - Resource disk # 2 - CDROM if root_path_parts[-1] == 'block' and ( guid != gen2_device_id or root_path_parts[-2].split(':')[-1] == '1'): device = dirs[0] return device else: # older distros for d in dirs: if ':' in d and "block" == d.split(':')[0]: device = d.split(':')[1] return device except (OSError, IOError) as exc: logger.warn('Error getting device for {0} or {1}: {2}', gen1_device_prefix, gen2_device_id, ustr(exc)) return None def device_for_ide_port(self, port_id): """ Return device name attached to ide port 'n'. 
""" if port_id > 3: return None g0 = "00000000" if port_id > 1: g0 = "00000001" port_id = port_id - 2 gen1_device_prefix = '{0}-000{1}'.format(g0, port_id) device = DefaultOSUtil.search_for_resource_disk( gen1_device_prefix=gen1_device_prefix, gen2_device_id=GEN2_DEVICE_ID ) logger.info('Found device: {0}'.format(device)) return device def set_hostname_record(self, hostname): fileutil.write_file(conf.get_published_hostname(), contents=hostname) def get_hostname_record(self): hostname_record = conf.get_published_hostname() if not os.path.exists(hostname_record): # older agents (but newer or equal to 2.2.3) create published_hostname during provisioning; when provisioning is done # by cloud-init the hostname is written to set-hostname hostname = self._get_cloud_init_hostname() if hostname is None: logger.info("Retrieving hostname using socket.gethostname()") hostname = socket.gethostname() logger.info('Published hostname record does not exist, creating [{0}] with hostname [{1}]', hostname_record, hostname) self.set_hostname_record(hostname) record = fileutil.read_file(hostname_record) return record @staticmethod def _get_cloud_init_hostname(): """ Retrieves the hostname set by cloud-init; returns None if cloud-init did not set the hostname or if there is an error retrieving it. """ hostname_file = '/var/lib/cloud/data/set-hostname' try: if os.path.exists(hostname_file): # # The format is similar to # # $ cat /var/lib/cloud/data/set-hostname # { # "fqdn": "nam-u18", # "hostname": "nam-u18" # } # logger.info("Retrieving hostname from {0}", hostname_file) with open(hostname_file, 'r') as file_: hostname_info = json.load(file_) if "hostname" in hostname_info: return hostname_info["hostname"] except Exception as exception: logger.warn("Error retrieving hostname: {0}", ustr(exception)) return None def del_account(self, username): if self.is_sys_user(username): logger.error("{0} is a system user. Will not delete it.", username) self._run_command_without_raising(["touch", "/var/run/utmp"]) self._run_command_without_raising(['userdel', '-f', '-r', username]) self.conf_sudoer(username, remove=True) def decode_customdata(self, data): return base64.b64decode(data).decode('utf-8') def get_total_mem(self): # Get total memory in bytes and divide by 1024**2 to get the value in MB. return os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES') / (1024 ** 2) def get_processor_cores(self): return multiprocessing.cpu_count() def check_pid_alive(self, pid): try: pid = int(pid) os.kill(pid, 0) except (ValueError, TypeError): return False except OSError as os_error: if os_error.errno == errno.EPERM: return True return False return True @property def is_64bit(self): return sys.maxsize > 2 ** 32 @staticmethod def _get_proc_stat(): """ Get the contents of /proc/stat. # cpu 813599 3940 909253 154538746 874851 0 6589 0 0 0 # cpu0 401094 1516 453006 77276738 452939 0 3312 0 0 0 # cpu1 412505 2423 456246 77262007 421912 0 3276 0 0 0 :return: A single string with the contents of /proc/stat :rtype: str """ results = None try: results = fileutil.read_file('/proc/stat') except (OSError, IOError) as ex: logger.warn("Couldn't read /proc/stat: {0}".format(ex.strerror)) raise return results @staticmethod def get_total_cpu_ticks_since_boot(): """ Compute the number of USER_HZ units of time that have elapsed in all categories, across all cores, since boot. 
:return: int """ system_cpu = 0 proc_stat = DefaultOSUtil._get_proc_stat() if proc_stat is not None: for line in proc_stat.splitlines(): if ALL_CPUS_REGEX.match(line): system_cpu = sum( int(i) for i in line.split()[1:8]) # see "man proc" for a description of these fields break return system_cpu @staticmethod def get_used_and_available_system_memory(): """ Get the contents of free -b in bytes. # free -b # total used free shared buff/cache available # Mem: 8340144128 619352064 5236809728 1499136 2483982336 7426314240 # Swap: 0 0 0 :return: used and available memory in megabytes """ used_mem = available_mem = 0 free_cmd = ["free", "-b"] memory = shellutil.run_command(free_cmd) for line in memory.split("\n"): if ALL_MEMS_REGEX.match(line): mems = line.split() used_mem = int(mems[2]) available_mem = int(mems[6]) # see "man free" for a description of these fields return used_mem/(1024 ** 2), available_mem/(1024 ** 2) def get_nic_state(self, as_string=False): """ Capture NIC state (IPv4 and IPv6 addresses plus link state). :return: By default returns a dictionary of NIC state objects, with the NIC name as key. If as_string is True returns the state as a string :rtype: dict(str,NetworkInformationCard) """ state = {} all_command = ["ip", "-a", "-o", "link"] inet_command = ["ip", "-4", "-a", "-o", "address"] inet6_command = ["ip", "-6", "-a", "-o", "address"] try: all_output = shellutil.run_command(all_command) except shellutil.CommandError as command_error: logger.verbose("Could not fetch NIC link info: {0}", ustr(command_error)) return "" if as_string else {} if as_string: def run_command(command): try: return shellutil.run_command(command) except shellutil.CommandError as command_error: return str(command_error) inet_output = run_command(inet_command) inet6_output = run_command(inet6_command) return "Executing {0}:\n{1}\nExecuting {2}:\n{3}\nExecuting {4}:\n{5}\n".format(all_command, all_output, inet_command, inet_output, inet6_command, inet6_output) else: self._update_nic_state_all(state, all_output) self._update_nic_state(state, inet_command, NetworkInterfaceCard.add_ipv4, "an IPv4 address") self._update_nic_state(state, inet6_command, NetworkInterfaceCard.add_ipv6, "an IPv6 address") return state @staticmethod def _update_nic_state_all(state, command_output): for entry in command_output.splitlines(): # Sample output: # 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 addrgenmode eui64 # 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:30:c3:5a brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 # 3: docker0: mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default \ link/ether 02:42:b5:d5:00:1d brd ff:ff:ff:ff:ff:ff promiscuity 0 \ bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q addrgenmode eui64 result = IP_COMMAND_OUTPUT.match(entry) if result: name = result.group(1) state[name] = NetworkInterfaceCard(name, result.group(2)) @staticmethod def _update_nic_state(state, ip_command, handler, description): """ Update the state of NICs based on the output of a specified ip subcommand. 
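        For example, get_nic_state calls this with
        ["ip", "-4", "-a", "-o", "address"] and NetworkInterfaceCard.add_ipv4
        so that each discovered IPv4 address is attached to the entry that
        _update_nic_state_all created for the interface.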
:param dict(str, NetworkInterfaceCard) state: Dictionary of NIC state objects :param str ip_command: The ip command to run :param handler: A method on the NetworkInterfaceCard class :param str description: Description of the particular information being added to the state """ try: output = shellutil.run_command(ip_command) for entry in output.splitlines(): # family inet sample output: # 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever # 2: eth0 inet 10.145.187.220/26 brd 10.145.187.255 scope global eth0\ valid_lft forever preferred_lft forever # 3: docker0 inet 192.168.43.1/24 brd 192.168.43.255 scope global docker0\ valid_lft forever preferred_lft forever # # family inet6 sample output: # 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever # 2: eth0 inet6 fe80::20d:3aff:fe30:c35a/64 scope link \ valid_lft forever preferred_lft forever result = IP_COMMAND_OUTPUT.match(entry) if result: interface_name = result.group(1) if interface_name in state: handler(state[interface_name], result.group(2)) else: logger.error("Interface {0} has {1} but no link state".format(interface_name, description)) except shellutil.CommandError as command_error: logger.error("[{0}] failed: {1}", ' '.join(ip_command), str(command_error)) @staticmethod def _run_command_without_raising(cmd, log_error=True): try: shellutil.run_command(cmd, log_error=log_error) # Original implementation of run() does a blanket catch, so mimicking the behaviour here except Exception: pass @staticmethod def _run_multiple_commands_without_raising(commands, log_error=True, continue_on_error=False): for cmd in commands: try: shellutil.run_command(cmd, log_error=log_error) # Original implementation of run() does a blanket catch, so mimicking the behaviour here except Exception: if continue_on_error: continue break @staticmethod def _run_command_raising_OSUtilError(cmd, err_msg, cmd_input=None): # This method runs shell command using the new secure shellutil.run_command and raises OSUtilErrors on failures. try: return shellutil.run_command(cmd, log_error=True, input=cmd_input) except shellutil.CommandError as e: raise OSUtilError( "{0}, Retcode: {1}, Output: {2}, Error: {3}".format(err_msg, e.returncode, e.stdout, e.stderr)) except Exception as e: raise OSUtilError("{0}, Retcode: {1}, Error: {2}".format(err_msg, -1, ustr(e))) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/devuan.py000066400000000000000000000034531462617747000254250ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
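#
# --- Illustrative sketch (not part of the agent) ----------------------------
# DefaultOSUtil (default.py, above) funnels shell access through three small
# wrappers: _run_command_without_raising swallows all failures,
# _run_multiple_commands_without_raising applies the same policy to a list of
# commands, and _run_command_raising_OSUtilError converts CommandError into
# OSUtilError. A minimal standalone equivalent of the first wrapper, assuming
# only the standard library, might look like this:
import subprocess


def _example_run_without_raising(cmd):
    # Best-effort execution: any failure (missing binary, non-zero exit) is
    # deliberately ignored, mirroring _run_command_without_raising above.
    try:
        subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except Exception:
        pass
# -----------------------------------------------------------------------------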
# # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class DevuanOSUtil(DefaultOSUtil): def __init__(self): super(DevuanOSUtil, self).__init__() self.jit_enabled = True def restart_ssh_service(self): logger.info("DevuanOSUtil::restart_ssh_service - trying to restart sshd") return shellutil.run("/usr/sbin/service restart ssh", chk_err=False) def stop_agent_service(self): logger.info("DevuanOSUtil::stop_agent_service - trying to stop waagent") return shellutil.run("/usr/sbin/service walinuxagent stop", chk_err=False) def start_agent_service(self): logger.info("DevuanOSUtil::start_agent_service - trying to start waagent") return shellutil.run("/usr/sbin/service walinuxagent start", chk_err=False) def start_network(self): pass def remove_rules_files(self, rules_files=""): pass def restore_rules_files(self, rules_files=""): pass def get_dhcp_lease_endpoint(self): return self.get_endpoint_from_leases_path('/var/lib/dhcp/dhclient.*.leases') Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/factory.py000066400000000000000000000130021462617747000256010ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from distutils.version import LooseVersion as Version # pylint: disable=no-name-in-module, import-error import azurelinuxagent.common.logger as logger from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_CODE_NAME, DISTRO_VERSION, DISTRO_FULL_NAME from .alpine import AlpineOSUtil from .arch import ArchUtil from .bigip import BigIpOSUtil from .clearlinux import ClearLinuxUtil from .coreos import CoreOSUtil from .debian import DebianOSBaseUtil, DebianOSModernUtil from .default import DefaultOSUtil from .devuan import DevuanOSUtil from .freebsd import FreeBSDOSUtil from .gaia import GaiaOSUtil from .iosxe import IosxeOSUtil from .mariner import MarinerOSUtil from .nsbsd import NSBSDOSUtil from .openbsd import OpenBSDOSUtil from .openwrt import OpenWRTOSUtil from .redhat import RedhatOSUtil, Redhat6xOSUtil, RedhatOSModernUtil from .suse import SUSEOSUtil, SUSE11OSUtil from .photonos import PhotonOSUtil from .ubuntu import UbuntuOSUtil, Ubuntu12OSUtil, Ubuntu14OSUtil, \ UbuntuSnappyOSUtil, Ubuntu16OSUtil, Ubuntu18OSUtil from .fedora import FedoraOSUtil def get_osutil(distro_name=DISTRO_NAME, distro_code_name=DISTRO_CODE_NAME, distro_version=DISTRO_VERSION, distro_full_name=DISTRO_FULL_NAME): # We are adding another layer of abstraction here since we want to be able to mock the final result of the # function call. Since the get_osutil function is imported in various places in our tests, we can't mock # it globally. Instead, we add _get_osutil function and mock it in the test base class, AgentTestCase. 
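    # Illustrative examples (hypothetical VMs): on Ubuntu 18.04 the call below
    # resolves to Ubuntu18OSUtil, on a Debian "sid" system to
    # DebianOSModernUtil, and for a distro with no dedicated handler
    # _get_osutil falls back to DefaultOSUtil.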
return _get_osutil(distro_name, distro_code_name, distro_version, distro_full_name) def _get_osutil(distro_name, distro_code_name, distro_version, distro_full_name): if distro_name == "photonos": return PhotonOSUtil() if distro_name == "arch": return ArchUtil() if "Clear Linux" in distro_full_name: return ClearLinuxUtil() if distro_name == "ubuntu": ubuntu_version = Version(distro_version) if ubuntu_version in [Version("12.04"), Version("12.10")]: return Ubuntu12OSUtil() if ubuntu_version in [Version("14.04"), Version("14.10")]: return Ubuntu14OSUtil() if ubuntu_version in [Version('16.04'), Version('16.10'), Version('17.04')]: return Ubuntu16OSUtil() if Version('18.04') <= ubuntu_version <= Version('24.04'): return Ubuntu18OSUtil() if distro_full_name == "Snappy Ubuntu Core": return UbuntuSnappyOSUtil() return UbuntuOSUtil() if distro_name == "alpine": return AlpineOSUtil() if distro_name == "kali": return DebianOSBaseUtil() if distro_name in ("flatcar", "coreos") or distro_code_name in ("flatcar", "coreos"): return CoreOSUtil() if distro_name in ("suse", "sle_hpc", "sles", "opensuse"): if distro_full_name == 'SUSE Linux Enterprise Server' \ and Version(distro_version) < Version('12') \ or distro_full_name == 'openSUSE' and Version(distro_version) < Version('13.2'): return SUSE11OSUtil() return SUSEOSUtil() if distro_name == "debian": if "sid" in distro_version or Version(distro_version) > Version("7"): return DebianOSModernUtil() return DebianOSBaseUtil() # Devuan support only works with v4+ # Reason is that Devuan v4 (Chimaera) uses python v3.9, in which the # platform.linux_distribution module has been removed. This was unable # to distinguish between debian and devuan. The new distro.linux_distribution module # is able to distinguish between the two. if distro_name == "devuan" and Version(distro_version) >= Version("4"): return DevuanOSUtil() if distro_name in ("redhat", "rhel", "centos", "oracle", "almalinux", "cloudlinux", "rocky"): if Version(distro_version) < Version("7"): return Redhat6xOSUtil() if Version(distro_version) >= Version("8.6"): return RedhatOSModernUtil() return RedhatOSUtil() if distro_name == "euleros": return RedhatOSUtil() if distro_name == "uos": return RedhatOSUtil() if distro_name == "freebsd": return FreeBSDOSUtil() if distro_name == "openbsd": return OpenBSDOSUtil() if distro_name == "bigip": return BigIpOSUtil() if distro_name == "gaia": return GaiaOSUtil() if distro_name == "iosxe": return IosxeOSUtil() if distro_name == "mariner": return MarinerOSUtil() if distro_name == "nsbsd": return NSBSDOSUtil() if distro_name == "openwrt": return OpenWRTOSUtil() if distro_name == "fedora": return FedoraOSUtil() logger.warn("Unable to load distro implementation for {0}. Using default distro implementation instead.", distro_name) return DefaultOSUtil() Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/fedora.py000066400000000000000000000045221462617747000254010ustar00rootroot00000000000000# # Copyright 2022 Red Hat Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
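#
# --- Illustrative sketch (not part of the agent) ----------------------------
# The factory above relies on distutils' LooseVersion (imported as Version)
# for range checks such as Version('18.04') <= ubuntu_version <= Version('24.04').
# Unlike plain string comparison, it compares numeric components:
from distutils.version import LooseVersion as Version  # pylint: disable=no-name-in-module, import-error

assert Version("8.6") <= Version("8.10")  # as strings, "8.6" <= "8.10" is False
assert Version("7") < Version("7.1")
# -----------------------------------------------------------------------------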
#
# Requires Python 2.6+ and Openssl 1.0+
#

import time

import azurelinuxagent.common.logger as logger
import azurelinuxagent.common.utils.shellutil as shellutil
from azurelinuxagent.common.osutil.default import DefaultOSUtil


class FedoraOSUtil(DefaultOSUtil):

    def __init__(self):
        super(FedoraOSUtil, self).__init__()
        self.agent_conf_file_path = '/etc/waagent.conf'

    @staticmethod
    def get_systemd_unit_file_install_path():
        return '/usr/lib/systemd/system'

    @staticmethod
    def get_agent_bin_path():
        return '/usr/sbin'

    def is_dhcp_enabled(self):
        return True

    def start_network(self):
        pass

    def restart_if(self, ifname=None, retries=3, wait=5):
        # Numeric defaults (matching DefaultOSUtil.restart_if) are required:
        # 'retries + 1' and the sleep below cannot operate on None.
        retry_limit = retries + 1
        for attempt in range(1, retry_limit):
            return_code = shellutil.run("ip link set {0} down && ip link set {0} up".format(ifname))
            if return_code == 0:
                return
            logger.warn("failed to restart {0}: return code {1}".format(ifname, return_code))
            if attempt < retries:
                logger.info("retrying in {0} seconds".format(wait))
                time.sleep(wait)
            else:
                logger.warn("exceeded restart retries")

    def restart_ssh_service(self):
        shellutil.run('systemctl restart sshd')

    def stop_dhcp_service(self):
        pass

    def start_dhcp_service(self):
        pass

    def start_agent_service(self):
        return shellutil.run('systemctl start waagent', chk_err=False)

    def stop_agent_service(self):
        return shellutil.run('systemctl stop waagent', chk_err=False)

    def get_dhcp_pid(self):
        return self._get_dhcp_pid(["pidof", "dhclient"])

    def conf_sshd(self, disable_password):
        pass
Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/freebsd.py000066400000000000000000000606551462617747000255630ustar00rootroot00000000000000# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
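#
# --- Illustrative sketch (not part of the agent) ----------------------------
# The route-table helpers below render IPv4 addresses the way Linux's
# /proc/net/route does: the packed address reinterpreted as a native-endian
# integer and printed as 8 hex digits. A compact equivalent of
# _ipv4_ascii_address_to_hex, assuming a little-endian host:
import socket
import struct


def _example_ipv4_to_route_hex(addr):
    # "10.112.4.0" -> "0004700A" on little-endian machines
    return "%08X" % struct.unpack("=I", socket.inet_aton(addr))[0]
# -----------------------------------------------------------------------------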
# # Requires Python 2.6+ and Openssl 1.0+ import socket import struct import binascii import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil import azurelinuxagent.common.logger as logger from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.future import ustr class FreeBSDOSUtil(DefaultOSUtil): def __init__(self): super(FreeBSDOSUtil, self).__init__() self._scsi_disks_timeout_set = False self.jit_enabled = True @staticmethod def get_agent_bin_path(): return "/usr/local/sbin" def set_hostname(self, hostname): rc_file_path = '/etc/rc.conf' conf_file = fileutil.read_file(rc_file_path).split("\n") textutil.set_ini_config(conf_file, "hostname", hostname) fileutil.write_file(rc_file_path, "\n".join(conf_file)) self._run_command_without_raising(["hostname", hostname], log_error=False) def restart_ssh_service(self): return shellutil.run('service sshd restart', chk_err=False) def useradd(self, username, expiration=None, comment=None): """ Create user account with 'username' """ userentry = self.get_userentry(username) if userentry is not None: logger.warn("User {0} already exists, skip useradd", username) return if expiration is not None: cmd = ["pw", "useradd", username, "-e", expiration, "-m"] else: cmd = ["pw", "useradd", username, "-m"] if comment is not None: cmd.extend(["-c", comment]) self._run_command_raising_OSUtilError(cmd, err_msg="Failed to create user account:{0}".format(username)) def del_account(self, username): if self.is_sys_user(username): logger.error("{0} is a system user. Will not delete it.", username) self._run_command_without_raising(['touch', '/var/run/utx.active']) self._run_command_without_raising(['rmuser', '-y', username]) self.conf_sudoer(username, remove=True) def chpasswd(self, username, password, crypt_id=6, salt_len=10): if self.is_sys_user(username): raise OSUtilError(("User {0} is a system user, " "will not set password.").format(username)) passwd_hash = textutil.gen_password_hash(password, crypt_id, salt_len) self._run_command_raising_OSUtilError(['pw', 'usermod', username, '-H', '0'], cmd_input=passwd_hash, err_msg="Failed to set password for {0}".format(username)) def del_root_password(self): err = shellutil.run('pw usermod root -h -') if err: raise OSUtilError("Failed to delete root password: Failed to update password database.") def get_if_mac(self, ifname): data = self._get_net_info() if data[0] == ifname: return data[2].replace(':', '').upper() return None def get_first_if(self): return self._get_net_info()[:2] @staticmethod def read_route_table(): """ Return a list of strings comprising the route table as in the Linux /proc/net/route format. The input taken is from FreeBSDs `netstat -rn -f inet` command. Here is what the function does in detail: 1. Runs `netstat -rn -f inet` which outputs a column formatted list of ipv4 routes in priority order like so: > Routing tables > > Internet: > Destination Gateway Flags Refs Use Netif Expire > default 61.221.xx.yy UGS 0 247 em1 > 10 10.10.110.5 UGS 0 50 em0 > 10.10.110/26 link#1 UC 0 0 em0 > 10.10.110.5 00:1b:0d:e6:58:40 UHLW 2 0 em0 1145 > 61.221.xx.yy/29 link#2 UC 0 0 em1 > 61.221.xx.yy 00:1b:0d:e6:57:c0 UHLW 2 0 em1 1055 > 61.221.xx/24 link#2 UC 0 0 em1 > 127.0.0.1 127.0.0.1 UH 0 0 lo0 2. 
Convert it to an array of lines that resemble an equivalent /proc/net/route content on a Linux system like so: > Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT > gre828 00000000 00000000 0001 0 0 0 000000F8 0 0 0 > ens160 00000000 FE04700A 0003 0 0 100 00000000 0 0 0 > gre828 00000008 00000000 0001 0 0 0 000000FE 0 0 0 > ens160 0004700A 00000000 0001 0 0 100 00FFFFFF 0 0 0 > gre828 2504700A 00000000 0005 0 0 0 FFFFFFFF 0 0 0 > gre828 3704700A 00000000 0005 0 0 0 FFFFFFFF 0 0 0 > gre828 4104700A 00000000 0005 0 0 0 FFFFFFFF 0 0 0 :return: Entries in the ipv4 route priority list from `netstat -rn -f inet` in the linux `/proc/net/route` style :rtype: list(str) """ def _get_netstat_rn_ipv4_routes(): """ Runs `netstat -rn -f inet` and parses its output and returns a list of routes where the key is the column name and the value is the value in the column, stripped of leading and trailing whitespace. :return: List of dictionaries representing routes in the ipv4 route priority list from `netstat -rn -f inet` :rtype: list(dict) """ cmd = ["netstat", "-rn", "-f", "inet"] output = shellutil.run_command(cmd, log_error=True) output_lines = output.split("\n") if len(output_lines) < 3: raise OSUtilError("`netstat -rn -f inet` output seems to be empty") output_lines = [line.strip() for line in output_lines if line] if "Internet:" not in output_lines: raise OSUtilError("`netstat -rn -f inet` output seems to contain no ipv4 routes") route_header_line = output_lines.index("Internet:") + 1 # Parse the file structure and left justify the routes route_start_line = route_header_line + 1 route_line_length = max([len(line) for line in output_lines[route_header_line:]]) netstat_route_list = [line.ljust(route_line_length) for line in output_lines[route_start_line:]] # Parse the headers _route_headers = output_lines[route_header_line].split() n_route_headers = len(_route_headers) route_columns = {} for i in range(0, n_route_headers - 1): route_columns[_route_headers[i]] = ( output_lines[route_header_line].index(_route_headers[i]), (output_lines[route_header_line].index(_route_headers[i + 1]) - 1) ) route_columns[_route_headers[n_route_headers - 1]] = ( output_lines[route_header_line].index(_route_headers[n_route_headers - 1]), None ) # Parse the routes netstat_routes = [] n_netstat_routes = len(netstat_route_list) for i in range(0, n_netstat_routes): netstat_route = {} for column in route_columns: netstat_route[column] = netstat_route_list[i][ route_columns[column][0]:route_columns[column][1]].strip() netstat_route["Metric"] = n_netstat_routes - i netstat_routes.append(netstat_route) # Return the Sections return netstat_routes def _ipv4_ascii_address_to_hex(ipv4_ascii_address): """ Converts an IPv4 32bit address from its ASCII notation (ie. 127.0.0.1) to an 8 digit padded hex notation (ie. "0100007F") string. :return: 8 character long hex string representation of the IP :rtype: string """ # Raises socket.error if the IP is not a valid IPv4 return "%08X" % int(binascii.hexlify( struct.pack("!I", struct.unpack("=I", socket.inet_pton(socket.AF_INET, ipv4_ascii_address))[0])), 16) def _ipv4_cidr_mask_to_hex(ipv4_cidr_mask): """ Converts an subnet mask from its CIDR integer notation (ie. 32) to an 8 digit padded hex notation (ie. "FFFFFFFF") string representing its bitmask form. 
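        For example, a /24 mask is 0xFFFFFF00, which packs to "00FFFFFF" on
        little-endian hosts, matching the Mask column of the sample
        /proc/net/route lines shown in read_route_table above.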
:return: 8 character long hex string representation of the IP :rtype: string """ return "{0:08x}".format( struct.unpack("=I", struct.pack("!I", (0xffffffff << (32 - ipv4_cidr_mask)) & 0xffffffff))[0]).upper() def _ipv4_cidr_destination_to_hex(destination): """ Converts an destination address from its CIDR notation (ie. 127.0.0.1/32 or default or localhost) to an 8 digit padded hex notation (ie. "0100007F" or "00000000" or "0100007F") string and its subnet bitmask also in hex (FFFFFFFF). :return: tuple of 8 character long hex string representation of the IP and 8 character long hex string representation of the subnet mask :rtype: tuple(string, int) """ destination_ip = "0.0.0.0" destination_subnetmask = 32 if destination != "default": if destination == "localhost": destination_ip = "127.0.0.1" else: destination_ip = destination.split("/") if len(destination_ip) > 1: destination_subnetmask = int(destination_ip[1]) destination_ip = destination_ip[0] hex_destination_ip = _ipv4_ascii_address_to_hex(destination_ip) hex_destination_subnetmask = _ipv4_cidr_mask_to_hex(destination_subnetmask) return hex_destination_ip, hex_destination_subnetmask def _try_ipv4_gateway_to_hex(gateway): """ If the gateway is an IPv4 address, return its IP in hex, else, return "00000000" :return: 8 character long hex string representation of the IP of the gateway :rtype: string """ try: return _ipv4_ascii_address_to_hex(gateway) except socket.error: return "00000000" def _ascii_route_flags_to_bitmask(ascii_route_flags): """ Converts route flags to a bitmask of their equivalent linux/route.h values. :return: integer representation of a 16 bit mask :rtype: int """ bitmask_flags = 0 RTF_UP = 0x0001 RTF_GATEWAY = 0x0002 RTF_HOST = 0x0004 RTF_DYNAMIC = 0x0010 if "U" in ascii_route_flags: bitmask_flags |= RTF_UP if "G" in ascii_route_flags: bitmask_flags |= RTF_GATEWAY if "H" in ascii_route_flags: bitmask_flags |= RTF_HOST if "S" not in ascii_route_flags: bitmask_flags |= RTF_DYNAMIC return bitmask_flags def _freebsd_netstat_rn_route_to_linux_proc_net_route(netstat_route): """ Converts a single FreeBSD `netstat -rn -f inet` route to its equivalent /proc/net/route line. 
ie: > default 0.0.0.0 UGS 0 247 em1 to > em1 00000000 00000000 0003 0 0 0 FFFFFFFF 0 0 0 :return: string representation of the equivalent /proc/net/route line :rtype: string """ network_interface = netstat_route["Netif"] hex_destination_ip, hex_destination_subnetmask = _ipv4_cidr_destination_to_hex(netstat_route["Destination"]) hex_gateway = _try_ipv4_gateway_to_hex(netstat_route["Gateway"]) bitmask_flags = _ascii_route_flags_to_bitmask(netstat_route["Flags"]) dummy_refcount = 0 dummy_use = 0 route_metric = netstat_route["Metric"] dummy_mtu = 0 dummy_window = 0 dummy_irtt = 0 return "{0}\t{1}\t{2}\t{3}\t{4}\t{5}\t{6}\t{7}\t{8}\t{9}\t{10}".format( network_interface, hex_destination_ip, hex_gateway, bitmask_flags, dummy_refcount, dummy_use, route_metric, hex_destination_subnetmask, dummy_mtu, dummy_window, dummy_irtt ) linux_style_route_file = ["Iface\tDestination\tGateway\tFlags\tRefCnt\tUse\tMetric\tMask\tMTU\tWindow\tIRTT"] try: netstat_routes = _get_netstat_rn_ipv4_routes() # Make sure the `netstat -rn -f inet` contains columns for Netif, Destination, Gateway and Flags which are needed to convert # to the Linux Format if len(netstat_routes) > 0: missing_headers = [] if "Netif" not in netstat_routes[0]: missing_headers.append("Netif") if "Destination" not in netstat_routes[0]: missing_headers.append("Destination") if "Gateway" not in netstat_routes[0]: missing_headers.append("Gateway") if "Flags" not in netstat_routes[0]: missing_headers.append("Flags") if missing_headers: raise KeyError( "`netstat -rn -f inet` output is missing columns required to convert to the Linux /proc/net/route format; columns are [{0}]".format( missing_headers)) # Parse the Netstat IPv4 Routes for netstat_route in netstat_routes: try: linux_style_route = _freebsd_netstat_rn_route_to_linux_proc_net_route(netstat_route) linux_style_route_file.append(linux_style_route) except Exception: # Skip the route continue except Exception as e: logger.error("Cannot read route table [{0}]", ustr(e)) return linux_style_route_file @staticmethod def get_list_of_routes(route_table): """ Construct a list of all network routes known to this system. :param list(str) route_table: List of text entries from route table, including headers :return: a list of network routes :rtype: list(RouteEntry) """ route_list = [] count = len(route_table) if count < 1: logger.error("netstat -rn -f inet is missing headers") elif count == 1: logger.error("netstat -rn -f inet contains no routes") else: route_list = DefaultOSUtil._build_route_list(route_table) return route_list def get_primary_interface(self): """ Get the name of the primary interface, which is the one with the default route attached to it; if there are multiple default routes, the primary has the lowest Metric. 
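        A route counts as a default route when its destination is "00000000"
        and its flags include RTF_GATEWAY (0x0002); among those candidates the
        route with the smallest Metric wins, as implemented below.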
:return: the interface which has the default route """ RTF_GATEWAY = 0x0002 DEFAULT_DEST = "00000000" primary_interface = None if not self.disable_route_warning: logger.info("Examine `netstat -rn -f inet` for primary interface") route_table = self.read_route_table() def is_default(route): return (route.destination == DEFAULT_DEST) and (RTF_GATEWAY & route.flags) candidates = list(filter(is_default, self.get_list_of_routes(route_table))) if len(candidates) > 0: def get_metric(route): return int(route.metric) primary_route = min(candidates, key=get_metric) primary_interface = primary_route.interface if primary_interface is None: primary_interface = '' if not self.disable_route_warning: logger.warn('Could not determine primary interface, ' 'please ensure routes are correct') logger.warn('Primary interface examination will retry silently') self.disable_route_warning = True else: logger.info('Primary interface is [{0}]'.format(primary_interface)) self.disable_route_warning = False return primary_interface def is_primary_interface(self, ifname): """ Indicate whether the specified interface is the primary. :param ifname: the name of the interface - eth0, lo, etc. :return: True if this interface binds the default route """ return self.get_primary_interface() == ifname def is_loopback(self, ifname): """ Determine if a named interface is loopback. """ return ifname.startswith("lo") def route_add(self, net, mask, gateway): cmd = 'route add {0} {1} {2}'.format(net, gateway, mask) return shellutil.run(cmd, chk_err=False) def is_missing_default_route(self): """ For FreeBSD, the default broadcast goes to current default gw, not a all-ones broadcast address, need to specify the route manually to get it work in a VNET environment. SEE ALSO: man ip(4) IP_ONESBCAST, """ RTF_GATEWAY = 0x0002 DEFAULT_DEST = "00000000" route_table = self.read_route_table() routes = self.get_list_of_routes(route_table) for route in routes: if (route.destination == DEFAULT_DEST) and (RTF_GATEWAY & route.flags): return False return True def is_dhcp_enabled(self): return True def start_dhcp_service(self): shellutil.run("/etc/rc.d/dhclient start {0}".format(self.get_if_name()), chk_err=False) def allow_dhcp_broadcast(self): pass def set_route_for_dhcp_broadcast(self, ifname): return shellutil.run("route add 255.255.255.255 -iface {0}".format(ifname), chk_err=False) def remove_route_for_dhcp_broadcast(self, ifname): shellutil.run("route delete 255.255.255.255 -iface {0}".format(ifname), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pgrep", "-n", "dhclient"]) def eject_dvd(self, chk_err=True): dvd = self.get_dvd_device() retcode = shellutil.run("cdcontrol -f {0} eject".format(dvd)) if chk_err and retcode != 0: raise OSUtilError("Failed to eject dvd: ret={0}".format(retcode)) def restart_if(self, ifname, retries=None, wait=None): # Restart dhclient only to publish hostname shellutil.run("/etc/rc.d/dhclient restart {0}".format(ifname), chk_err=False) def get_total_mem(self): cmd = "sysctl hw.physmem |awk '{print $2}'" ret, output = shellutil.run_get_output(cmd) if ret: raise OSUtilError("Failed to get total memory: {0}".format(output)) try: return int(output) / 1024 / 1024 except ValueError: raise OSUtilError("Failed to get total memory: {0}".format(output)) def get_processor_cores(self): ret, output = shellutil.run_get_output("sysctl hw.ncpu |awk '{print $2}'") if ret: raise OSUtilError("Failed to get processor cores.") try: return int(output) except ValueError: raise OSUtilError("Failed to get total memory: 
{0}".format(output)) def set_scsi_disks_timeout(self, timeout): if self._scsi_disks_timeout_set: return ret, output = shellutil.run_get_output('sysctl kern.cam.da.default_timeout={0}'.format(timeout)) if ret: raise OSUtilError("Failed set SCSI disks timeout: {0}".format(output)) self._scsi_disks_timeout_set = True def check_pid_alive(self, pid): return shellutil.run('ps -p {0}'.format(pid), chk_err=False) == 0 @staticmethod def _get_net_info(): """ There is no SIOCGIFCONF on freeBSD - just parse ifconfig. Returns strings: iface, inet4_addr, and mac or 'None,None,None' if unable to parse. We will sleep and retry as the network must be up. """ iface = '' inet = '' mac = '' err, output = shellutil.run_get_output('ifconfig -l ether', chk_err=False) if err: raise OSUtilError("Can't find ether interface:{0}".format(output)) ifaces = output.split() if not ifaces: raise OSUtilError("Can't find ether interface.") iface = ifaces[0] err, output = shellutil.run_get_output('ifconfig ' + iface, chk_err=False) if err: raise OSUtilError("Can't get info for interface:{0}".format(iface)) for line in output.split('\n'): if line.find('inet ') != -1: inet = line.split()[1] elif line.find('ether ') != -1: mac = line.split()[1] logger.verbose("Interface info: ({0},{1},{2})", iface, inet, mac) return iface, inet, mac def device_for_ide_port(self, port_id): """ Return device name attached to ide port 'n'. """ if port_id > 3: return None g0 = "00000000" if port_id > 1: g0 = "00000001" port_id = port_id - 2 err, output = shellutil.run_get_output('sysctl dev.storvsc | grep pnpinfo | grep deviceid=') if err: return None g1 = "000" + ustr(port_id) g0g1 = "{0}-{1}".format(g0, g1) # pylint: disable=W0105 """ search 'X' from 'dev.storvsc.X.%pnpinfo: classid=32412632-86cb-44a2-9b5c-50d1417354f5 deviceid=00000000-0001-8899-0000-000000000000' """ # pylint: enable=W0105 cmd_search_ide = "sysctl dev.storvsc | grep pnpinfo | grep deviceid={0}".format(g0g1) err, output = shellutil.run_get_output(cmd_search_ide) if err: return None cmd_extract_id = cmd_search_ide + "|awk -F . 
'{print $3}'" err, output = shellutil.run_get_output(cmd_extract_id) # pylint: disable=W0105 """ try to search 'blkvscX' and 'storvscX' to find device name """ # pylint: enable=W0105 output = output.rstrip() cmd_search_blkvsc = "camcontrol devlist -b | grep blkvsc{0} | awk '{{print $1}}'".format(output) err, output = shellutil.run_get_output(cmd_search_blkvsc) if err == 0: output = output.rstrip() cmd_search_dev = "camcontrol devlist | grep {0} | awk -F \( '{{print $2}}'|sed -e 's/.*(//'| sed -e 's/).*//'".format(output) # pylint: disable=W1401 err, output = shellutil.run_get_output(cmd_search_dev) if err == 0: for possible in output.rstrip().split(','): if not possible.startswith('pass'): return possible cmd_search_storvsc = "camcontrol devlist -b | grep storvsc{0} | awk '{{print $1}}'".format(output) err, output = shellutil.run_get_output(cmd_search_storvsc) if err == 0: output = output.rstrip() cmd_search_dev = "camcontrol devlist | grep {0} | awk -F \( '{{print $2}}'|sed -e 's/.*(//'| sed -e 's/).*//'".format(output) # pylint: disable=W1401 err, output = shellutil.run_get_output(cmd_search_dev) if err == 0: for possible in output.rstrip().split(','): if not possible.startswith('pass'): return possible return None @staticmethod def get_total_cpu_ticks_since_boot(): return 0 Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/gaia.py000066400000000000000000000167611462617747000250520ustar00rootroot00000000000000# # Copyright 2017 Check Point Software Technologies # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import base64 import socket import struct import time import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.future import ustr, bytebuffer, range, int # pylint: disable=redefined-builtin import azurelinuxagent.common.logger as logger from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.utils.cryptutil import CryptUtil import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil class GaiaOSUtil(DefaultOSUtil): def __init__(self): # pylint: disable=W0235 super(GaiaOSUtil, self).__init__() def _run_clish(self, cmd): ret = 0 out = "" for i in range(10): # pylint: disable=W0612 try: final_command = ["/bin/clish", "-s", "-c", "'{0}'".format(cmd)] out = shellutil.run_command(final_command, log_error=True) ret = 0 break except shellutil.CommandError as e: ret = e.returncode out = e.stdout except Exception as e: ret = -1 out = ustr(e) if 'NMSHST0025' in out: # Entry for [hostname] already present ret = 0 break time.sleep(2) return ret, out def useradd(self, username, expiration=None, comment=None): logger.warn('useradd is not supported on GAiA') def chpasswd(self, username, password, crypt_id=6, salt_len=10): logger.info('chpasswd') passwd_hash = textutil.gen_password_hash(password, crypt_id, salt_len) ret, out = self._run_clish( 'set user admin password-hash ' + passwd_hash) if ret != 0: raise OSUtilError(("Failed to set password for {0}: {1}" "").format('admin', out)) def conf_sudoer(self, username, nopasswd=False, remove=False): logger.info('conf_sudoer is not supported on GAiA') def del_root_password(self): logger.info('del_root_password') ret, out = self._run_clish('set user admin password-hash *LOCK*') # pylint: disable=W0612 if ret != 0: raise OSUtilError("Failed to delete root password") def _replace_user(self, path, username): if path.startswith('$HOME'): path = '/home' + path[5:] parts = path.split('/') parts[2] = username return '/'.join(parts) def deploy_ssh_keypair(self, username, keypair): logger.info('deploy_ssh_keypair') username = 'admin' path, thumbprint = keypair path = self._replace_user(path, username) super(GaiaOSUtil, self).deploy_ssh_keypair( username, (path, thumbprint)) def openssl_to_openssh(self, input_file, output_file): cryptutil = CryptUtil(conf.get_openssl_cmd()) ret, out = shellutil.run_get_output( conf.get_openssl_cmd() + " rsa -pubin -noout -text -in '" + input_file + "'") if ret != 0: raise OSUtilError('openssl failed with {0}'.format(ret)) modulus = [] exponent = [] buf = None for line in out.split('\n'): if line.startswith('Modulus:'): buf = modulus buf.append(line) continue if line.startswith('Exponent:'): buf = exponent buf.append(line) continue if buf and line: buf.append(line.strip().replace(':', '')) def text_to_num(buf): if len(buf) == 1: return int(buf[0].split()[1]) return int(''.join(buf[1:]), 16) n = text_to_num(modulus) e = text_to_num(exponent) keydata = bytearray() keydata.extend(struct.pack('>I', len('ssh-rsa'))) keydata.extend(b'ssh-rsa') keydata.extend(struct.pack('>I', len(cryptutil.num_to_bytes(e)))) keydata.extend(cryptutil.num_to_bytes(e)) keydata.extend(struct.pack('>I', len(cryptutil.num_to_bytes(n)) + 1)) keydata.extend(b'\0') keydata.extend(cryptutil.num_to_bytes(n)) keydata_base64 = base64.b64encode(bytebuffer(keydata)) fileutil.write_file(output_file, ustr(b'ssh-rsa ' + 
keydata_base64 + b'\n', encoding='utf-8')) def deploy_ssh_pubkey(self, username, pubkey): logger.info('deploy_ssh_pubkey') username = 'admin' path, thumbprint, value = pubkey path = self._replace_user(path, username) super(GaiaOSUtil, self).deploy_ssh_pubkey( username, (path, thumbprint, value)) def eject_dvd(self, chk_err=True): logger.warn('eject is not supported on GAiA') def mount(self, device, mount_point, option=None, chk_err=True): if not option: option = [] if any('udf,iso9660' in opt for opt in option): ret, out = super(GaiaOSUtil, self).mount(device, mount_point, option=[opt.replace('udf,iso9660', 'udf') for opt in option], chk_err=chk_err) if not ret: return ret, out return super(GaiaOSUtil, self).mount( device, mount_point, option=option, chk_err=chk_err) def allow_dhcp_broadcast(self): logger.info('allow_dhcp_broadcast is ignored on GAiA') def remove_rules_files(self, rules_files=''): pass def restore_rules_files(self, rules_files=''): logger.info('restore_rules_files is ignored on GAiA') def restart_ssh_service(self): return shellutil.run('/sbin/service sshd condrestart', chk_err=False) def _address_to_string(self, addr): return socket.inet_ntoa(struct.pack("!I", addr)) def _get_prefix(self, mask): return str(sum([bin(int(x)).count('1') for x in mask.split('.')])) def route_add(self, net, mask, gateway): logger.info('route_add {0} {1} {2}', net, mask, gateway) if net == 0 and mask == 0: cidr = 'default' else: cidr = self._address_to_string(net) + '/' + self._get_prefix( self._address_to_string(mask)) ret, out = self._run_clish( # pylint: disable=W0612 'set static-route ' + cidr + ' nexthop gateway address ' + self._address_to_string(gateway) + ' on') return ret def set_hostname(self, hostname): logger.warn('set_hostname is ignored on GAiA') def set_dhcp_hostname(self, hostname): logger.warn('set_dhcp_hostname is ignored on GAiA') def publish_hostname(self, hostname, recover_nic=False): logger.warn('publish_hostname is ignored on GAiA') def del_account(self, username): logger.warn('del_account is ignored on GAiA') Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/iosxe.py000066400000000000000000000067461462617747000253020ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil.default import DefaultOSUtil, PRODUCT_ID_FILE, DMIDECODE_CMD, UUID_PATTERN from azurelinuxagent.common.utils import textutil, fileutil # pylint: disable=W0611 # pylint: disable=W0105 ''' The IOSXE distribution is a variant of the Centos distribution, version 7.1. 
The primary difference is that IOSXE makes some assumptions about the
waagent environment:

 - only the waagent daemon is executed
 - no provisioning is performed
 - no DHCP-based services are available
'''
# pylint: enable=W0105


class IosxeOSUtil(DefaultOSUtil):

    def __init__(self):  # pylint: disable=W0235
        super(IosxeOSUtil, self).__init__()

    @staticmethod
    def get_systemd_unit_file_install_path():
        return "/usr/lib/systemd/system"

    def set_hostname(self, hostname):
        """
        Unlike redhat 6.x, redhat 7.x will set hostname via hostnamectl
        Due to a bug in systemd in Centos-7.0, if this call fails, fallback
        to hostname.
        """
        hostnamectl_cmd = ["hostnamectl", "set-hostname", hostname, "--static"]
        try:
            shellutil.run_command(hostnamectl_cmd)
        except Exception as e:
            logger.warn("[{0}] failed with error: {1}, attempting fallback".format(' '.join(hostnamectl_cmd), ustr(e)))
            DefaultOSUtil.set_hostname(self, hostname)

    def publish_hostname(self, hostname, recover_nic=False):
        """
        Restart NetworkManager first before publishing hostname
        """
        shellutil.run("service NetworkManager restart")
        super(IosxeOSUtil, self).publish_hostname(hostname, recover_nic)

    def register_agent_service(self):
        return shellutil.run("systemctl enable waagent", chk_err=False)

    def unregister_agent_service(self):
        return shellutil.run("systemctl disable waagent", chk_err=False)

    def openssl_to_openssh(self, input_file, output_file):
        DefaultOSUtil.openssl_to_openssh(self, input_file, output_file)

    def is_dhcp_available(self):
        return False

    def get_instance_id(self):
        '''
        Azure records a UUID as the instance ID
        First check /sys/class/dmi/id/product_uuid.
        If that is missing, then extracts from dmidecode
        If nothing works (for old VMs), return the empty string
        '''
        if os.path.isfile(PRODUCT_ID_FILE):
            try:
                s = fileutil.read_file(PRODUCT_ID_FILE).strip()
                return self._correct_instance_id(s.strip())
            except IOError:
                pass
        rc, s = shellutil.run_get_output(DMIDECODE_CMD)
        if rc != 0 or UUID_PATTERN.match(s) is None:
            return ""
        return self._correct_instance_id(s.strip())
Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/mariner.py000066400000000000000000000047151462617747000255760ustar00rootroot00000000000000#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
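#
# --- Illustrative sketch (not part of the agent) ----------------------------
# MarinerOSUtil below drives everything, networking (systemd-networkd), sshd
# and the agent service itself, through systemctl subcommands issued via the
# inherited _run_command_without_raising helper. A generic helper of that
# shape, assuming a systemd-based host:
import subprocess


def _example_systemctl(action, unit):
    # e.g. _example_systemctl("restart", "systemd-networkd")
    return subprocess.call(["systemctl", action, unit])
# -----------------------------------------------------------------------------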
# # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.osutil.default import DefaultOSUtil class MarinerOSUtil(DefaultOSUtil): def __init__(self): super(MarinerOSUtil, self).__init__() self.jit_enabled = True @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" @staticmethod def get_agent_bin_path(): return "/usr/bin" def is_dhcp_enabled(self): return True def start_network(self): self._run_command_without_raising(["systemctl", "start", "systemd-networkd"], log_error=False) def restart_if(self, ifname=None, retries=None, wait=None): self._run_command_without_raising(["systemctl", "restart", "systemd-networkd"]) def restart_ssh_service(self): self._run_command_without_raising(["systemctl", "restart", "sshd"]) def stop_dhcp_service(self): self._run_command_without_raising(["systemctl", "stop", "systemd-networkd"], log_error=False) def start_dhcp_service(self): self._run_command_without_raising(["systemctl", "start", "systemd-networkd"], log_error=False) def start_agent_service(self): self._run_command_without_raising(["systemctl", "start", "{0}".format(self.service_name)], log_error=False) def stop_agent_service(self): self._run_command_without_raising(["systemctl", "stop", "{0}".format(self.service_name)], log_error=False) def register_agent_service(self): self._run_command_without_raising(["systemctl", "enable", "{0}".format(self.service_name)], log_error=False) def unregister_agent_service(self): self._run_command_without_raising(["systemctl", "disable", "{0}".format(self.service_name)], log_error=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "systemd-networkd"]) def conf_sshd(self, disable_password): pass Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/nsbsd.py000066400000000000000000000136201462617747000252510ustar00rootroot00000000000000# # Copyright 2018 Stormshield # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import os import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.freebsd import FreeBSDOSUtil class NSBSDOSUtil(FreeBSDOSUtil): resolver = None def __init__(self): super(NSBSDOSUtil, self).__init__() self.agent_conf_file_path = '/etc/waagent.conf' if self.resolver is None: # NSBSD doesn't have a system resolver, configure a python one try: import dns.resolver except ImportError: raise OSUtilError("Python DNS resolver not available. 
Cannot proceed!") self.resolver = dns.resolver.Resolver(configure=False) servers = [] cmd = "getconf /usr/Firewall/ConfigFiles/dns Servers | tail -n +2" ret, output = shellutil.run_get_output(cmd) # pylint: disable=W0612 for server in output.split("\n"): if server == '': break server = server[:-1] # remove last '=' cmd = "grep '{}' /etc/hosts".format(server) + " | awk '{print $1}'" ret, ip = shellutil.run_get_output(cmd) ip = ip.strip() # Remove new line char servers.append(ip) self.resolver.nameservers = servers dns.resolver.override_system_resolver(self.resolver) def set_hostname(self, hostname): self._run_command_without_raising( ['/usr/Firewall/sbin/setconf', '/usr/Firewall/System/global', 'SystemName', hostname]) self._run_command_without_raising(["/usr/Firewall/sbin/enlog"]) self._run_command_without_raising(["/usr/Firewall/sbin/enproxy", "-u"]) self._run_command_without_raising(["/usr/Firewall/sbin/ensl", "-u"]) self._run_command_without_raising(["/usr/Firewall/sbin/ennetwork", "-f"]) def restart_ssh_service(self): return shellutil.run('/usr/Firewall/sbin/enservice', chk_err=False) def conf_sshd(self, disable_password): option = "0" if disable_password else "1" shellutil.run('setconf /usr/Firewall/ConfigFiles/system SSH State 1', chk_err=False) shellutil.run('setconf /usr/Firewall/ConfigFiles/system SSH Password {}'.format(option), chk_err=False) shellutil.run('enservice', chk_err=False) logger.info("{0} SSH password-based authentication methods." .format("Disabled" if disable_password else "Enabled")) def get_root_username(self): return "admin" def useradd(self, username, expiration=None, comment=None): """ Create user account with 'username' """ logger.warn("User creation disabled") def del_account(self, username): logger.warn("User deletion disabled") def conf_sudoer(self, username, nopasswd=False, remove=False): logger.warn("Sudo is not enabled") def chpasswd(self, username, password, crypt_id=6, salt_len=10): self._run_command_raising_OSUtilError(["/usr/Firewall/sbin/fwpasswd", "-p", password], err_msg="Failed to set password for admin") # password set, activate webadmin and ssh access commands = [['setconf', '/usr/Firewall/ConfigFiles/webadmin', 'ACL', 'any'], ['ensl']] self._run_multiple_commands_without_raising(commands, log_error=False, continue_on_error=False) def deploy_ssh_pubkey(self, username, pubkey): """ Deploy authorized_key """ path, thumbprint, value = pubkey # pylint: disable=W0612 # overide parameters super(NSBSDOSUtil, self).deploy_ssh_pubkey('admin', ["/usr/Firewall/.ssh/authorized_keys", thumbprint, value]) def del_root_password(self): logger.warn("Root password deletion disabled") def start_dhcp_service(self): shellutil.run("/usr/Firewall/sbin/nstart dhclient", chk_err=False) def stop_dhcp_service(self): shellutil.run("/usr/Firewall/sbin/nstop dhclient", chk_err=False) def get_dhcp_pid(self): ret = "" pidfile = "/var/run/dhclient.pid" if os.path.isfile(pidfile): ret = fileutil.read_file(pidfile, encoding='ascii') return self._text_to_pid_list(ret) def eject_dvd(self, chk_err=True): pass def restart_if(self, ifname=None, retries=None, wait=None): # Restart dhclient only to publish hostname shellutil.run("ennetwork", chk_err=False) def set_dhcp_hostname(self, hostname): # already done by the dhcp client pass def get_firewall_dropped_packets(self, dst_ip=None): # disable iptables methods return 0 def get_firewall_will_wait(self): # disable iptables methods return "" def _delete_rule(self, rule): # disable iptables methods return def remove_firewall(self, 
dst_ip=None, uid=None, wait=""): # disable iptables methods return True def enable_firewall(self, dst_ip=None, uid=None): # disable iptables methods return True, True def get_firewall_list(self, wait=""): # disable iptables methods return "" Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/openbsd.py000066400000000000000000000325041462617747000255740ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # Copyright 2017 Reyk Floeter # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and OpenSSL 1.0+ import os import re import time import glob import datetime import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.logger as logger import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.default import DefaultOSUtil UUID_PATTERN = re.compile( r'^\s*[A-F0-9]{8}(?:\-[A-F0-9]{4}){3}\-[A-F0-9]{12}\s*$', re.IGNORECASE) class OpenBSDOSUtil(DefaultOSUtil): def __init__(self): super(OpenBSDOSUtil, self).__init__() self.jit_enabled = True self._scsi_disks_timeout_set = False @staticmethod def get_agent_bin_path(): return "/usr/local/sbin" def get_instance_id(self): ret, output = shellutil.run_get_output("sysctl -n hw.uuid") if ret != 0 or UUID_PATTERN.match(output) is None: return "" return output.strip() def set_hostname(self, hostname): fileutil.write_file("/etc/myname", "{}\n".format(hostname)) self._run_command_without_raising(["hostname", hostname], log_error=False) def restart_ssh_service(self): return shellutil.run('rcctl restart sshd', chk_err=False) def start_agent_service(self): return shellutil.run('rcctl start {0}'.format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run('rcctl stop {0}'.format(self.service_name), chk_err=False) def register_agent_service(self): shellutil.run('chmod 0555 /etc/rc.d/{0}'.format(self.service_name), chk_err=False) return shellutil.run('rcctl enable {0}'.format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run('rcctl disable {0}'.format(self.service_name), chk_err=False) def del_account(self, username): if self.is_sys_user(username): logger.error("{0} is a system user. 
Will not delete it.", username) self._run_command_without_raising(["touch", "/var/run/utmp"]) self._run_command_without_raising(["userdel", "-r", username]) self.conf_sudoer(username, remove=True) def conf_sudoer(self, username, nopasswd=False, remove=False): doas_conf = "/etc/doas.conf" doas = None if not remove: if not os.path.isfile(doas_conf): # always allow root to become root doas = "permit keepenv nopass root\n" fileutil.append_file(doas_conf, doas) if nopasswd: doas = "permit keepenv nopass {0}\n".format(username) else: doas = "permit keepenv persist {0}\n".format(username) fileutil.append_file(doas_conf, doas) fileutil.chmod(doas_conf, 0o644) else: # Remove user from doas.conf if os.path.isfile(doas_conf): try: content = fileutil.read_file(doas_conf) doas = content.split("\n") doas = [x for x in doas if username not in x] fileutil.write_file(doas_conf, "\n".join(doas)) except IOError as err: raise OSUtilError("Failed to remove sudoer: " "{0}".format(err)) def chpasswd(self, username, password, crypt_id=6, salt_len=10): if self.is_sys_user(username): raise OSUtilError(("User {0} is a system user. " "Will not set passwd.").format(username)) output = self._run_command_raising_OSUtilError(['encrypt'], cmd_input=password, err_msg="Failed to encrypt password for {0}".format(username)) passwd_hash = output.strip() self._run_command_raising_OSUtilError(['usermod', '-p', passwd_hash, username], err_msg="Failed to set password for {0}".format(username)) def del_root_password(self): ret, output = shellutil.run_get_output('usermod -p "*" root') if ret: raise OSUtilError("Failed to delete root password: " "{0}".format(output)) def get_if_mac(self, ifname): data = self._get_net_info() if data[0] == ifname: return data[2].replace(':', '').upper() return None def get_first_if(self): return self._get_net_info()[:2] def route_add(self, net, mask, gateway): cmd = 'route add {0} {1} {2}'.format(net, gateway, mask) return shellutil.run(cmd, chk_err=False) def is_missing_default_route(self): ret = shellutil.run("route -n get default", chk_err=False) if ret == 0: return False return True def is_dhcp_enabled(self): pass def start_dhcp_service(self): pass def stop_dhcp_service(self): pass def get_dhcp_lease_endpoint(self): """ OpenBSD has a slightly different lease file format. 
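A minimal sketch of the lease entries this parser scans for (all field values
below are hypothetical, not copied from a real lease file):

    lease {
      option option-245 a8:3f:81:10;
      expire 2 2038/01/01 00:00:00 UTC;
    }

option-245 is a colon-separated list of hex octets; in this hypothetical entry
a8:3f:81:10 decodes to the endpoint 168.63.129.16. An entry is used only if it
carries option-245 and has not expired (or never expires).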
""" endpoint = None pathglob = '/var/db/dhclient.leases.{}'.format(self.get_first_if()[0]) HEADER_LEASE = "lease" HEADER_OPTION = "option option-245" HEADER_EXPIRE = "expire" FOOTER_LEASE = "}" FORMAT_DATETIME = "%Y/%m/%d %H:%M:%S %Z" logger.info("looking for leases in path [{0}]".format(pathglob)) for lease_file in glob.glob(pathglob): leases = open(lease_file).read() if HEADER_OPTION in leases: cached_endpoint = None has_option_245 = False expired = True # assume expired for line in leases.splitlines(): if line.startswith(HEADER_LEASE): cached_endpoint = None has_option_245 = False expired = True elif HEADER_OPTION in line: try: ipaddr = line.split(" ")[-1].strip(";").split(":") cached_endpoint = \ ".".join(str(int(d, 16)) for d in ipaddr) has_option_245 = True except ValueError: logger.error("could not parse '{0}'".format(line)) elif HEADER_EXPIRE in line: if "never" in line: expired = False else: try: expire_string = line.split( " ", 4)[-1].strip(";") expire_date = datetime.datetime.strptime( expire_string, FORMAT_DATETIME) if expire_date > datetime.datetime.utcnow(): expired = False except ValueError: logger.error("could not parse expiry token " "'{0}'".format(line)) elif FOOTER_LEASE in line: logger.info("dhcp entry:{0}, 245:{1}, expired: {2}" .format(cached_endpoint, has_option_245, expired)) if not expired and cached_endpoint is not None and has_option_245: endpoint = cached_endpoint logger.info("found endpoint [{0}]".format(endpoint)) # we want to return the last valid entry, so # keep searching if endpoint is not None: logger.info("cached endpoint found [{0}]".format(endpoint)) else: logger.info("cached endpoint not found") return endpoint def allow_dhcp_broadcast(self): pass def set_route_for_dhcp_broadcast(self, ifname): return shellutil.run("route add 255.255.255.255 -iface " "{0}".format(ifname), chk_err=False) def remove_route_for_dhcp_broadcast(self, ifname): shellutil.run("route delete 255.255.255.255 -iface " "{0}".format(ifname), chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pgrep", "-n", "dhclient"]) def get_dvd_device(self, dev_dir='/dev'): pattern = r'cd[0-9]c' for dvd in [re.match(pattern, dev) for dev in os.listdir(dev_dir)]: if dvd is not None: return "/dev/{0}".format(dvd.group(0)) raise OSUtilError("Failed to get DVD device") def mount_dvd(self, max_retry=6, chk_err=True, dvd_device=None, mount_point=None, sleep_time=5): if dvd_device is None: dvd_device = self.get_dvd_device() if mount_point is None: mount_point = conf.get_dvd_mount_point() if not os.path.isdir(mount_point): os.makedirs(mount_point) for retry in range(0, max_retry): retcode = self.mount(dvd_device, mount_point, option=["-o", "ro", "-t", "udf"], chk_err=False) if retcode == 0: logger.info("Successfully mounted DVD") return if retry < max_retry - 1: mountlist = shellutil.run_get_output("/sbin/mount")[1] existing = self.get_mount_point(mountlist, dvd_device) if existing is not None: logger.info("{0} is mounted at {1}", dvd_device, existing) return logger.warn("Mount DVD failed: retry={0}, ret={1}", retry, retcode) time.sleep(sleep_time) if chk_err: raise OSUtilError("Failed to mount DVD.") def eject_dvd(self, chk_err=True): dvd = self.get_dvd_device() retcode = shellutil.run("cdio eject {0}".format(dvd)) if chk_err and retcode != 0: raise OSUtilError("Failed to eject DVD: ret={0}".format(retcode)) def restart_if(self, ifname, retries=3, wait=5): # Restart dhclient only to publish hostname shellutil.run("/sbin/dhclient {0}".format(ifname), chk_err=False) def get_total_mem(self): 
ret, output = shellutil.run_get_output("sysctl -n hw.physmem") if ret: raise OSUtilError("Failed to get total memory: {0}".format(output)) try: return int(output)/1024/1024 except ValueError: raise OSUtilError("Failed to get total memory: {0}".format(output)) def get_processor_cores(self): ret, output = shellutil.run_get_output("sysctl -n hw.ncpu") if ret: raise OSUtilError("Failed to get processor cores.") try: return int(output) except ValueError: raise OSUtilError("Failed to get processor cores: {0}".format(output)) def set_scsi_disks_timeout(self, timeout): pass def check_pid_alive(self, pid): # pylint: disable=R1710 if not pid: return return shellutil.run('ps -p {0}'.format(pid), chk_err=False) == 0 @staticmethod def _get_net_info(): """ There is no SIOCGIFCONF on OpenBSD - just parse ifconfig. Returns strings: iface, inet4_addr, and mac or 'None,None,None' if unable to parse. We will sleep and retry as the network must be up. """ iface = '' inet = '' mac = '' ret, output = shellutil.run_get_output( 'ifconfig hvn | grep -E "^hvn.:" | sed "s/:.*//g"', chk_err=False) if ret: raise OSUtilError("Can't find ether interface:{0}".format(output)) ifaces = output.split() if not ifaces: raise OSUtilError("Can't find ether interface.") iface = ifaces[0] ret, output = shellutil.run_get_output( 'ifconfig ' + iface, chk_err=False) if ret: raise OSUtilError("Can't get info for interface:{0}".format(iface)) for line in output.split('\n'): if line.find('inet ') != -1: inet = line.split()[1] elif line.find('lladdr ') != -1: mac = line.split()[1] logger.verbose("Interface info: ({0},{1},{2})", iface, inet, mac) return iface, inet, mac def device_for_ide_port(self, port_id): """ Return device name attached to ide port 'n'. """ return "wd{0}".format(port_id) @staticmethod def get_total_cpu_ticks_since_boot(): return 0 Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/openwrt.py000066400000000000000000000136201462617747000256360ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # Copyright 2018 Sonus Networks, Inc. (d.b.a. Ribbon Communications Operating Company) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import re import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.utils.networkutil import NetworkInterfaceCard class OpenWRTOSUtil(DefaultOSUtil): def __init__(self): super(OpenWRTOSUtil, self).__init__() self.agent_conf_file_path = '/etc/waagent.conf' self.dhclient_name = 'udhcpc' self.ip_command_output = re.compile('^\d+:\s+(\w+):\s+(.*)$') # pylint: disable=W1401 self.jit_enabled = True def eject_dvd(self, chk_err=True): logger.warn('eject is not supported on OpenWRT') def useradd(self, username, expiration=None, comment=None): """ Create user account with 'username' """ userentry = self.get_userentry(username) if userentry is not None: logger.info("User {0} already exists, skip useradd", username) return if expiration is not None: cmd = ["useradd", "-m", username, "-s", "/bin/ash", "-e", expiration] else: cmd = ["useradd", "-m", username, "-s", "/bin/ash"] if not os.path.exists("/home"): os.mkdir("/home") if comment is not None: cmd.extend(["-c", comment]) self._run_command_raising_OSUtilError(cmd, err_msg="Failed to create user account:{0}".format(username)) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", self.dhclient_name]) def get_nic_state(self, as_string=False): """ Capture NIC state (IPv4 and IPv6 addresses plus link state). :return: Dictionary of NIC state objects, with the NIC name as key :rtype: dict(str,NetworkInterfaceCard) """ if as_string: # as_string not supported on OpenWrt return '' state = {} status, output = shellutil.run_get_output("ip -o link", chk_err=False, log_cmd=False) if status != 0: logger.verbose("Could not fetch NIC link info; status {0}, {1}".format(status, output)) return {} for entry in output.splitlines(): result = self.ip_command_output.match(entry) if result: name = result.group(1) state[name] = NetworkInterfaceCard(name, result.group(2)) self._update_nic_state(state, "ip -o -f inet address", NetworkInterfaceCard.add_ipv4, "an IPv4 address") self._update_nic_state(state, "ip -o -f inet6 address", NetworkInterfaceCard.add_ipv6, "an IPv6 address") return state def _update_nic_state(self, state, ip_command, handler, description): """ Update the state of NICs based on the output of a specified ip subcommand. 
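For illustration, each line of the command's output is matched against
self.ip_command_output; given a hypothetical line such as

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...

group 1 ('eth0') selects the entry in 'state' and group 2 (the remainder of
the line) is passed to the handler, e.g. NetworkInterfaceCard.add_ipv4.
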
:param dict(str, NetworkInterfaceCard) state: Dictionary of NIC state objects :param str ip_command: The ip command to run :param handler: A method on the NetworkInterfaceCard class :param str description: Description of the particular information being added to the state """ status, output = shellutil.run_get_output(ip_command, chk_err=True) if status != 0: return for entry in output.splitlines(): result = self.ip_command_output.match(entry) if result: interface_name = result.group(1) if interface_name in state: handler(state[interface_name], result.group(2)) else: logger.error("Interface {0} has {1} but no link state".format(interface_name, description)) def is_dhcp_enabled(self): pass def start_dhcp_service(self): pass def stop_dhcp_service(self): pass def start_network(self) : return shellutil.run("/etc/init.d/network start", chk_err=True) def restart_ssh_service(self): # pylint: disable=R1710 # Since Dropbear is the default ssh server on OpenWrt, let's do a sanity check if os.path.exists("/etc/init.d/sshd"): return shellutil.run("/etc/init.d/sshd restart", chk_err=True) else: logger.warn("sshd service does not exist") def stop_agent_service(self): return shellutil.run("/etc/init.d/{0} stop".format(self.service_name), chk_err=True) def start_agent_service(self): return shellutil.run("/etc/init.d/{0} start".format(self.service_name), chk_err=True) def register_agent_service(self): return shellutil.run("/etc/init.d/{0} enable".format(self.service_name), chk_err=True) def unregister_agent_service(self): return shellutil.run("/etc/init.d/{0} disable".format(self.service_name), chk_err=True) def set_hostname(self, hostname): fileutil.write_file('/etc/hostname', hostname) commands = [['uci', 'set', 'system.@system[0].hostname={0}'.format(hostname)], ['uci', 'commit', 'system'], ['/etc/init.d/system', 'reload']] self._run_multiple_commands_without_raising(commands, log_error=False, continue_on_error=False) def remove_rules_files(self, rules_files=""): pass Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/photonos.py000066400000000000000000000040161462617747000260100ustar00rootroot00000000000000# # Copyright 2021 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class PhotonOSUtil(DefaultOSUtil): def __init__(self): super(PhotonOSUtil, self).__init__() self.agent_conf_file_path = '/etc/waagent.conf' @staticmethod def get_systemd_unit_file_install_path(): return '/usr/lib/systemd/system' @staticmethod def get_agent_bin_path(): return '/usr/bin' def is_dhcp_enabled(self): return True def start_network(self) : return shellutil.run('systemctl start systemd-networkd', chk_err=False) def restart_if(self, ifname=None, retries=None, wait=None): shellutil.run('systemctl restart systemd-networkd') def restart_ssh_service(self): shellutil.run('systemctl restart sshd') def stop_dhcp_service(self): return shellutil.run('systemctl stop systemd-networkd', chk_err=False) def start_dhcp_service(self): return shellutil.run('systemctl start systemd-networkd', chk_err=False) def start_agent_service(self): return shellutil.run('systemctl start waagent', chk_err=False) def stop_agent_service(self): return shellutil.run('systemctl stop waagent', chk_err=False) def get_dhcp_pid(self): return self._get_dhcp_pid(['pidof', 'systemd-networkd']) def conf_sshd(self, disable_password): pass Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/redhat.py000066400000000000000000000326611462617747000254150ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os # pylint: disable=W0611 import re # pylint: disable=W0611 import pwd # pylint: disable=W0611 import shutil # pylint: disable=W0611 import socket # pylint: disable=W0611 import array # pylint: disable=W0611 import struct # pylint: disable=W0611 import fcntl # pylint: disable=W0611 import time # pylint: disable=W0611 import base64 # pylint: disable=W0611 import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.future import ustr, bytebuffer # pylint: disable=W0611 from azurelinuxagent.common.exception import OSUtilError, CryptError import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil # pylint: disable=W0611 from azurelinuxagent.common.utils.cryptutil import CryptUtil from azurelinuxagent.common.osutil.default import DefaultOSUtil class Redhat6xOSUtil(DefaultOSUtil): def __init__(self): super(Redhat6xOSUtil, self).__init__() self.jit_enabled = True def start_network(self): return shellutil.run("/sbin/service networking start", chk_err=False) def restart_ssh_service(self): return shellutil.run("/sbin/service sshd condrestart", chk_err=False) def stop_agent_service(self): return shellutil.run("/sbin/service {0} stop".format(self.service_name), chk_err=False) def start_agent_service(self): return shellutil.run("/sbin/service {0} start".format(self.service_name), chk_err=False) def register_agent_service(self): return shellutil.run("chkconfig --add {0}".format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run("chkconfig --del {0}".format(self.service_name), chk_err=False) def openssl_to_openssh(self, input_file, output_file): pubkey = fileutil.read_file(input_file) try: cryptutil = CryptUtil(conf.get_openssl_cmd()) ssh_rsa_pubkey = cryptutil.asn1_to_ssh(pubkey) except CryptError as e: raise OSUtilError(ustr(e)) fileutil.append_file(output_file, ssh_rsa_pubkey) # Override def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "dhclient"]) def set_hostname(self, hostname): """ Set /etc/sysconfig/network """ fileutil.update_conf_file('/etc/sysconfig/network', 'HOSTNAME', 'HOSTNAME={0}'.format(hostname)) self._run_command_without_raising(["hostname", hostname], log_error=False) def set_dhcp_hostname(self, hostname): ifname = self.get_if_name() filepath = "/etc/sysconfig/network-scripts/ifcfg-{0}".format(ifname) fileutil.update_conf_file(filepath, 'DHCP_HOSTNAME', 'DHCP_HOSTNAME={0}'.format(hostname)) def get_dhcp_lease_endpoint(self): return self.get_endpoint_from_leases_path('/var/lib/dhclient/dhclient-*.leases') class RedhatOSUtil(Redhat6xOSUtil): def __init__(self): super(RedhatOSUtil, self).__init__() self.service_name = self.get_service_name() @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" def set_hostname(self, hostname): """ Unlike redhat 6.x, redhat 7.x will set hostname via hostnamectl Due to a bug in systemd in Centos-7.0, if this call fails, fallback to hostname. 
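Sketch of the primary path (the hostname is hypothetical):

    hostnamectl set-hostname myvm1 --static

Only if that command fails does this method fall back to
DefaultOSUtil.set_hostname().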
""" hostnamectl_cmd = ['hostnamectl', 'set-hostname', hostname, '--static'] try: shellutil.run_command(hostnamectl_cmd, log_error=False) except shellutil.CommandError: logger.warn("[{0}] failed, attempting fallback".format(' '.join(hostnamectl_cmd))) DefaultOSUtil.set_hostname(self, hostname) def get_nm_controlled(self, ifname): filepath = "/etc/sysconfig/network-scripts/ifcfg-{0}".format(ifname) nm_controlled_cmd = ['grep', 'NM_CONTROLLED=', filepath] try: result = shellutil.run_command(nm_controlled_cmd, log_error=False).rstrip() if result and len(result.split('=')) > 1: # Remove trailing white space and ' or " characters value = result.split('=')[1].replace("'", '').replace('"', '').rstrip() if value == "n" or value == "no": return False except shellutil.CommandError as e: # Command might fail because NM_CONTROLLED value is not in interface config file (exit code 1). # Log warning for any other exit code. # NM_CONTROLLED=y by default if not specified. if e.returncode != 1: logger.warn("[{0}] failed: {1}.\nAgent will continue to publish hostname without NetworkManager restart".format(' '.join(nm_controlled_cmd), e)) except Exception as e: logger.warn("Unexpected error while retrieving value of NM_CONTROLLED in {0}: {1}.\nAgent will continue to publish hostname without NetworkManager restart".format(filepath, e)) return True def get_nic_operational_and_general_states(self, ifname): """ Checks the contents of /sys/class/net/{ifname}/operstate and the results of 'nmcli -g general.state device show {ifname}' to determine the state of the provided interface. Raises an exception if the network interface state cannot be determined. """ filepath = "/sys/class/net/{0}/operstate".format(ifname) nic_general_state_cmd = ['nmcli', '-g', 'general.state', 'device', 'show', ifname] if not os.path.isfile(filepath): msg = "Unable to determine primary network interface {0} state, because state file does not exist: {1}".format(ifname, filepath) logger.warn(msg) raise Exception(msg) try: nic_oper_state = fileutil.read_file(filepath).rstrip().lower() nic_general_state = shellutil.run_command(nic_general_state_cmd, log_error=True).rstrip().lower() if nic_oper_state != "up": logger.warn("The primary network interface {0} operational state is '{1}'.".format(ifname, nic_oper_state)) else: logger.info("The primary network interface {0} operational state is '{1}'.".format(ifname, nic_oper_state)) if nic_general_state != "100 (connected)": logger.warn("The primary network interface {0} general state is '{1}'.".format(ifname, nic_general_state)) else: logger.info("The primary network interface {0} general state is '{1}'.".format(ifname, nic_general_state)) return nic_oper_state, nic_general_state except Exception as e: msg = "Unexpected error while determining the primary network interface state: {0}".format(e) logger.warn(msg) raise Exception(msg) def check_and_recover_nic_state(self, ifname): """ Checks if the provided network interface is in an 'up' state. If the network interface is in a 'down' state, attempt to recover the interface by restarting the Network Manager service. Raises an exception if an attempt to bring the interface into an 'up' state fails, or if the state of the network interface cannot be determined. 
""" nic_operstate, nic_general_state = self.get_nic_operational_and_general_states(ifname) if nic_operstate == "down" or "disconnected" in nic_general_state: logger.info("Restarting the Network Manager service to recover network interface {0}".format(ifname)) self.restart_network_manager() # Interface does not come up immediately after NetworkManager restart. Wait 5 seconds before checking # network interface state. time.sleep(5) nic_operstate, nic_general_state = self.get_nic_operational_and_general_states(ifname) # It is possible for network interface to be in an unknown or unmanaged state. Log warning if state is not # down, disconnected, up, or connected if nic_operstate != "up" or nic_general_state != "100 (connected)": msg = "Network Manager restart failed to bring network interface {0} into 'up' and 'connected' state".format(ifname) logger.warn(msg) raise Exception(msg) else: logger.info("Network Manager restart successfully brought the network interface {0} into 'up' and 'connected' state".format(ifname)) elif nic_operstate != "up" or nic_general_state != "100 (connected)": # We already logged a warning with the network interface state in get_nic_operstate(). Raise an exception # for the env thread to send to telemetry. raise Exception("The primary network interface {0} operational state is '{1}' and general state is '{2}'.".format(ifname, nic_operstate, nic_general_state)) def restart_network_manager(self): shellutil.run("service NetworkManager restart") def publish_hostname(self, hostname, recover_nic=False): """ Restart NetworkManager first before publishing hostname, only if the network interface is not controlled by the NetworkManager service (as determined by NM_CONTROLLED=n in the interface configuration). If the NetworkManager service is restarted before the agent publishes the hostname, and NM_controlled=y, a race condition may happen between the NetworkManager service and the Guest Agent making changes to the network interface configuration simultaneously. Note: check_and_recover_nic_state(ifname) raises an Exception if an attempt to recover the network interface fails, or if the network interface state cannot be determined. Callers should handle this exception by sending an event to telemetry. TODO: Improve failure reporting and add success reporting to telemetry for hostname changes. Right now we are only reporting failures to telemetry by raising an Exception in publish_hostname for the calling thread to handle by reporting the failure to telemetry. """ ifname = self.get_if_name() nm_controlled = self.get_nm_controlled(ifname) if not nm_controlled: self.restart_network_manager() # TODO: Current recover logic is only effective when the NetworkManager manages the network interface. 
Update the recover logic so it is effective even when NM_CONTROLLED=n super(RedhatOSUtil, self).publish_hostname(hostname, recover_nic and nm_controlled) def register_agent_service(self): return shellutil.run("systemctl enable {0}".format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run("systemctl disable {0}".format(self.service_name), chk_err=False) def openssl_to_openssh(self, input_file, output_file): DefaultOSUtil.openssl_to_openssh(self, input_file, output_file) def get_dhcp_lease_endpoint(self): # dhclient endpoint = self.get_endpoint_from_leases_path('/var/lib/dhclient/dhclient-*.lease') if endpoint is None: # NetworkManager endpoint = self.get_endpoint_from_leases_path('/var/lib/NetworkManager/dhclient-*.lease') return endpoint class RedhatOSModernUtil(RedhatOSUtil): def __init__(self): # pylint: disable=W0235 super(RedhatOSModernUtil, self).__init__() def restart_if(self, ifname, retries=3, wait=5): """ Restart an interface by bouncing the link. systemd-networkd observes this event, and forces a renew of DHCP. """ retry_limit = retries + 1 for attempt in range(1, retry_limit): return_code = shellutil.run("ip link set {0} down && ip link set {0} up".format(ifname)) if return_code == 0: return logger.warn("failed to restart {0}: return code {1}".format(ifname, return_code)) if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") def check_and_recover_nic_state(self, ifname): # TODO: Implement and test a way to recover the network interface for RedhatOSModernUtil pass def publish_hostname(self, hostname, recover_nic=False): # RedhatOSUtil was updated to conditionally run NetworkManager restart in response to a race condition between # NetworkManager restart and the agent restarting the network interface during publish_hostname. Keeping the # NetworkManager restart in RedhatOSModernUtil because the issue was not reproduced on these versions. shellutil.run("service NetworkManager restart") DefaultOSUtil.publish_hostname(self, hostname) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/suse.py000066400000000000000000000155701462617747000251250ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import time import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil # pylint: disable=W0611 from azurelinuxagent.common.exception import OSUtilError # pylint: disable=W0611 from azurelinuxagent.common.future import ustr # pylint: disable=W0611 from azurelinuxagent.common.osutil.default import DefaultOSUtil class SUSE11OSUtil(DefaultOSUtil): def __init__(self): super(SUSE11OSUtil, self).__init__() self.jit_enabled = True self.dhclient_name = 'dhcpcd' def set_hostname(self, hostname): fileutil.write_file('/etc/HOSTNAME', hostname) self._run_command_without_raising(["hostname", hostname], log_error=False) def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", self.dhclient_name]) def is_dhcp_enabled(self): return True def stop_dhcp_service(self): self._run_command_without_raising(["/sbin/service", self.dhclient_name, "stop"], log_error=False) def start_dhcp_service(self): self._run_command_without_raising(["/sbin/service", self.dhclient_name, "start"], log_error=False) def start_network(self): self._run_command_without_raising(["/sbin/service", "network", "start"], log_error=False) def restart_ssh_service(self): self._run_command_without_raising(["/sbin/service", "sshd", "restart"], log_error=False) def stop_agent_service(self): self._run_command_without_raising(["/sbin/service", self.service_name, "stop"], log_error=False) def start_agent_service(self): self._run_command_without_raising(["/sbin/service", self.service_name, "start"], log_error=False) def register_agent_service(self): self._run_command_without_raising(["/sbin/insserv", self.service_name], log_error=False) def unregister_agent_service(self): self._run_command_without_raising(["/sbin/insserv", "-r", self.service_name], log_error=False) class SUSEOSUtil(SUSE11OSUtil): def __init__(self): super(SUSEOSUtil, self).__init__() self.dhclient_name = 'wickedd-dhcp4' def publish_hostname(self, hostname, recover_nic=False): self.set_dhcp_hostname(hostname) self.set_hostname_record(hostname) ifname = self.get_if_name() # To push the hostname to the dhcp server we do not need to # bring down the interface, just let ifup do whatever is # necessary self.ifup(ifname) def ifup(self, ifname, retries=3, wait=5): logger.info('Interface {0} bounce with ifup'.format(ifname)) retry_limit=retries+1 for attempt in range(1, retry_limit): try: shellutil.run_command(['ifup', ifname], log_error=True) except Exception: if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") @staticmethod def get_systemd_unit_file_install_path(): return "/usr/lib/systemd/system" def set_hostname(self, hostname): self._run_command_without_raising( ["hostnamectl", "set-hostname", hostname], log_error=False ) def set_dhcp_hostname(self, hostname): dhcp_config_file_path = '/etc/sysconfig/network/dhcp' hostname_send_setting = fileutil.get_line_startingwith( 'DHCLIENT_HOSTNAME_OPTION', dhcp_config_file_path ) if hostname_send_setting: value = hostname_send_setting.split('=')[-1] # wicked's source accepts values with double quotes, single quotes, and no quotes at all. 
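# For example, any of these hypothetical lines mean auto-send is already configured, so we return early:
#   DHCLIENT_HOSTNAME_OPTION="AUTO"
#   DHCLIENT_HOSTNAME_OPTION='AUTO'
#   DHCLIENT_HOSTNAME_OPTION=AUTO
# as does a value already equal to the double-quoted hostname being published.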
if value in ('"AUTO"', "'AUTO'", 'AUTO') or value == '"{0}"'.format(hostname): # Return if auto send host-name is configured or the current # hostname is already set up to be sent return else: # Do not use update_conf_file as it moves the setting to the # end of the file separating it from the contextual comment new_conf = [] dhcp_conf = fileutil.read_file( dhcp_config_file_path).split('\n') for entry in dhcp_conf: if entry.startswith('DHCLIENT_HOSTNAME_OPTION'): new_conf.append( 'DHCLIENT_HOSTNAME_OPTION="{0}"'. format(hostname) ) continue new_conf.append(entry) fileutil.write_file(dhcp_config_file_path, '\n'.join(new_conf)) else: fileutil.append_file( dhcp_config_file_path, 'DHCLIENT_HOSTNAME_OPTION="{0}"'. format(hostname) ) def stop_dhcp_service(self): self._run_command_without_raising(["systemctl", "stop", "{}.service".format(self.dhclient_name)], log_error=False) def start_dhcp_service(self): self._run_command_without_raising(["systemctl", "start", "{}.service".format(self.dhclient_name)], log_error=False) def start_network(self): self._run_command_without_raising(["systemctl", "start", "network.service"], log_error=False) def restart_ssh_service(self): self._run_command_without_raising(["systemctl", "restart", "sshd.service"], log_error=False) def stop_agent_service(self): self._run_command_without_raising(["systemctl", "stop", "{}.service".format(self.service_name)], log_error=False) def start_agent_service(self): self._run_command_without_raising(["systemctl", "start", "{}.service".format(self.service_name)], log_error=False) def register_agent_service(self): self._run_command_without_raising(["systemctl", "enable", "{}.service".format(self.service_name)], log_error=False) def unregister_agent_service(self): self._run_command_without_raising(["systemctl", "disable", "{}.service".format(self.service_name)], log_error=False) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/systemd.py000066400000000000000000000047721462617747000256400ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import re from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import shellutil def _get_os_util(): if _get_os_util.value is None: _get_os_util.value = get_osutil() return _get_os_util.value _get_os_util.value = None def is_systemd(): """ Determine if systemd is managing system services; the implementation follows the same strategy as, for example, sd_booted() in libsystemd, or /usr/sbin/service """ return os.path.exists("/run/systemd/system/") def get_version(): # the output is similar to # $ systemctl --version # systemd 245 (245.4-4ubuntu3) # +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP etc # return shellutil.run_command(['systemctl', '--version']) def get_unit_file_install_path(): """ e.g. /lib/systemd/system """ return _get_os_util().get_systemd_unit_file_install_path() def get_agent_unit_name(): """ e.g. 
walinuxagent.service """ return _get_os_util().get_service_name() + ".service" def get_agent_unit_file(): """ e.g. /lib/systemd/system/walinuxagent.service """ return os.path.join(get_unit_file_install_path(), get_agent_unit_name()) def get_agent_drop_in_path(): """ e.g. /lib/systemd/system/walinuxagent.service.d """ return os.path.join(get_unit_file_install_path(), "{0}.d".format(get_agent_unit_name())) def get_unit_property(unit_name, property_name): output = shellutil.run_command(["systemctl", "show", unit_name, "--property", property_name]) # Output is similar to # # systemctl show walinuxagent.service --property CPUQuotaPerSecUSec # CPUQuotaPerSecUSec=50ms match = re.match("[^=]+=(?P<value>.+)", output) if match is None: raise ValueError("Can't find property {0} of {1}".format(property_name, unit_name)) return match.group('value') Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/osutil/ubuntu.py000066400000000000000000000143701462617747000254650ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob import textwrap import time import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.default import DefaultOSUtil class Ubuntu14OSUtil(DefaultOSUtil): def __init__(self): super(Ubuntu14OSUtil, self).__init__() self.jit_enabled = True self.service_name = self.get_service_name() @staticmethod def get_service_name(): return "walinuxagent" def start_network(self): return shellutil.run("service networking start", chk_err=False) def stop_agent_service(self): try: shellutil.run_command(["service", self.service_name, "stop"]) except shellutil.CommandError as cmd_err: return cmd_err.returncode return 0 def start_agent_service(self): try: shellutil.run_command(["service", self.service_name, "start"]) except shellutil.CommandError as cmd_err: return cmd_err.returncode return 0 def remove_rules_files(self, rules_files=""): pass def restore_rules_files(self, rules_files=""): pass def get_dhcp_lease_endpoint(self): return self.get_endpoint_from_leases_path('/var/lib/dhcp/dhclient.*.leases') class Ubuntu12OSUtil(Ubuntu14OSUtil): def __init__(self): # pylint: disable=W0235 super(Ubuntu12OSUtil, self).__init__() # Override def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "dhclient3"]) class Ubuntu16OSUtil(Ubuntu14OSUtil): """ Ubuntu 16.04, 16.10, and 17.04. 
""" def __init__(self): super(Ubuntu16OSUtil, self).__init__() self.service_name = self.get_service_name() def register_agent_service(self): return shellutil.run("systemctl unmask {0}".format(self.service_name), chk_err=False) def unregister_agent_service(self): return shellutil.run("systemctl mask {0}".format(self.service_name), chk_err=False) class Ubuntu18OSUtil(Ubuntu16OSUtil): """ Ubuntu >=18.04 and <=24.04 """ def __init__(self): super(Ubuntu18OSUtil, self).__init__() self.service_name = self.get_service_name() def restart_if(self, ifname, retries=3, wait=5): """ Restart systemd-networkd """ retry_limit=retries+1 for attempt in range(1, retry_limit): try: shellutil.run_command(["systemctl", "restart", "systemd-networkd"]) except shellutil.CommandError as cmd_err: logger.warn("failed to restart systemd-networkd: return code {1}".format(cmd_err.returncode)) if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") def get_dhcp_pid(self): return self._get_dhcp_pid(["pidof", "systemd-networkd"]) def start_network(self): return shellutil.run("systemctl start systemd-networkd", chk_err=False) def stop_network(self): return shellutil.run("systemctl stop systemd-networkd", chk_err=False) def start_dhcp_service(self): return self.start_network() def stop_dhcp_service(self): return self.stop_network() def start_agent_service(self): return shellutil.run("systemctl start {0}".format(self.service_name), chk_err=False) def stop_agent_service(self): return shellutil.run("systemctl stop {0}".format(self.service_name), chk_err=False) def get_dhcp_lease_endpoint(self): pathglob = "/run/systemd/netif/leases/*" logger.info("looking for leases in path [{0}]".format(pathglob)) endpoint = None for lease_file in glob.glob(pathglob): try: with open(lease_file) as f: lease = f.read() for line in lease.splitlines(): if line.startswith("OPTION_245"): option_245 = line.split("=")[1] options = [int(i, 16) for i in textwrap.wrap(option_245, 2)] endpoint = "{0}.{1}.{2}.{3}".format(*options) logger.info("found endpoint [{0}]".format(endpoint)) except Exception as e: logger.info( "Failed to parse {0}: {1}".format(lease_file, str(e)) ) if endpoint is not None: logger.info("cached endpoint found [{0}]".format(endpoint)) else: logger.info("cached endpoint not found") return endpoint class UbuntuOSUtil(Ubuntu16OSUtil): def __init__(self): # pylint: disable=W0235 super(UbuntuOSUtil, self).__init__() def restart_if(self, ifname, retries=3, wait=5): """ Restart an interface by bouncing the link. systemd-networkd observes this event, and forces a renew of DHCP. 
""" retry_limit = retries+1 for attempt in range(1, retry_limit): return_code = shellutil.run("ip link set {0} down && ip link set {0} up".format(ifname)) if return_code == 0: return logger.warn("failed to restart {0}: return code {1}".format(ifname, return_code)) if attempt < retry_limit: logger.info("retrying in {0} seconds".format(wait)) time.sleep(wait) else: logger.warn("exceeded restart retries") class UbuntuSnappyOSUtil(Ubuntu14OSUtil): def __init__(self): super(UbuntuSnappyOSUtil, self).__init__() self.conf_file_path = '/apps/walinuxagent/current/waagent.conf' Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/000077500000000000000000000000001462617747000241065ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/__init__.py000066400000000000000000000011651462617747000262220ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/extensions_goal_state.py000066400000000000000000000157571462617747000311000ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import datetime from azurelinuxagent.common import logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.exception import AgentError from azurelinuxagent.common.utils import textutil class GoalStateChannel(object): WireServer = "WireServer" HostGAPlugin = "HostGAPlugin" Empty = "Empty" class GoalStateSource(object): Fabric = "Fabric" FastTrack = "FastTrack" Empty = "Empty" class VmSettingsParseError(AgentError): """ Error raised when the VmSettings are malformed """ def __init__(self, message, etag, vm_settings_text, inner=None): super(VmSettingsParseError, self).__init__(message, inner) self.etag = etag self.vm_settings_text = vm_settings_text class ExtensionsGoalState(object): """ ExtensionsGoalState represents the extensions information in the goal state; that information can originate from ExtensionsConfig when the goal state is retrieved from the WireServe or from vmSettings when it is retrieved from the HostGAPlugin. NOTE: This is an abstract class. The corresponding concrete classes can be instantiated using the ExtensionsGoalStateFactory. 
""" def __init__(self): self._is_outdated = False @property def id(self): """ Returns a string that includes the incarnation number if the ExtensionsGoalState was created from ExtensionsConfig, or the etag if it was created from vmSettings. """ raise NotImplementedError() @property def is_outdated(self): """ A goal state can be outdated if, for example, the VM Agent is using Fast Track and support for it stops (e.g. the VM is migrated to a node with an older version of the HostGAPlugin) and now the Agent is fetching goal states via the WireServer. """ return self._is_outdated @is_outdated.setter def is_outdated(self, value): self._is_outdated = value @property def svd_sequence_number(self): raise NotImplementedError() @property def activity_id(self): raise NotImplementedError() @property def correlation_id(self): raise NotImplementedError() @property def created_on_timestamp(self): raise NotImplementedError() @property def channel(self): """ Whether the goal state was retrieved from the WireServer or the HostGAPlugin """ raise NotImplementedError() @property def source(self): """ Whether the goal state originated from Fabric or Fast Track """ raise NotImplementedError() @property def status_upload_blob(self): raise NotImplementedError() @property def status_upload_blob_type(self): raise NotImplementedError() def _set_status_upload_blob_type(self, value): raise NotImplementedError() @property def required_features(self): raise NotImplementedError() @property def on_hold(self): raise NotImplementedError() @property def agent_families(self): raise NotImplementedError() @property def extensions(self): raise NotImplementedError() def get_redacted_text(self): """ Returns the raw text (either the ExtensionsConfig or the vmSettings) with any confidential data removed, or an empty string for empty goal states. """ raise NotImplementedError() def _do_common_validations(self): """ Does validations common to vmSettings and ExtensionsConfig """ if self.status_upload_blob_type not in ["BlockBlob", "PageBlob"]: logger.info("Status Blob type '{0}' is not valid, assuming BlockBlob", self.status_upload_blob) self._set_status_upload_blob_type("BlockBlob") @staticmethod def _ticks_to_utc_timestamp(ticks_string): """ Takes 'ticks', a string indicating the number of ticks since midnight 0001-01-01 00:00:00, and returns a UTC timestamp (every tick is 1/10000000 of a second). 
""" minimum = datetime.datetime(1900, 1, 1, 0, 0) # min value accepted by datetime.strftime() as_date_time = minimum if ticks_string not in (None, ""): try: as_date_time = datetime.datetime.min + datetime.timedelta(seconds=float(ticks_string) / 10 ** 7) except Exception as exception: logger.verbose("Can't parse ticks: {0}", textutil.format_exception(exception)) as_date_time = max(as_date_time, minimum) return as_date_time.strftime(logger.Logger.LogTimeFormatInUTC) @staticmethod def _string_to_id(id_string): """ Takes 'id', a string indicating an ID, and returns a null GUID if the string is None or empty; otherwise return 'id' unchanged """ if id_string in (None, ""): return AgentGlobals.GUID_ZERO return id_string class EmptyExtensionsGoalState(ExtensionsGoalState): def __init__(self, incarnation): super(EmptyExtensionsGoalState, self).__init__() self._id = "incarnation_{0}".format(incarnation) self._incarnation = incarnation @property def id(self): return self._id @property def incarnation(self): return self._incarnation @property def svd_sequence_number(self): return self._incarnation @property def activity_id(self): return AgentGlobals.GUID_ZERO @property def correlation_id(self): return AgentGlobals.GUID_ZERO @property def created_on_timestamp(self): return datetime.datetime.min @property def channel(self): return GoalStateChannel.Empty @property def source(self): return GoalStateSource.Empty @property def status_upload_blob(self): return None @property def status_upload_blob_type(self): return None def _set_status_upload_blob_type(self, value): raise TypeError("EmptyExtensionsGoalState is immutable; cannot change the value of the status upload blob") @property def required_features(self): return [] @property def on_hold(self): return False @property def agent_families(self): return [] @property def extensions(self): return [] def get_redacted_text(self): return '' Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/extensions_goal_state_factory.py000066400000000000000000000027341462617747000326160ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ from azurelinuxagent.common.protocol.extensions_goal_state import EmptyExtensionsGoalState from azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config import ExtensionsGoalStateFromExtensionsConfig from azurelinuxagent.common.protocol.extensions_goal_state_from_vm_settings import ExtensionsGoalStateFromVmSettings class ExtensionsGoalStateFactory(object): @staticmethod def create_empty(incarnation): return EmptyExtensionsGoalState(incarnation) @staticmethod def create_from_extensions_config(incarnation, xml_text, wire_client): return ExtensionsGoalStateFromExtensionsConfig(incarnation, xml_text, wire_client) @staticmethod def create_from_vm_settings(etag, json_text, correlation_id): return ExtensionsGoalStateFromVmSettings(etag, json_text, correlation_id) extensions_goal_state_from_extensions_config.py000066400000000000000000000707041462617747000356410ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import json from collections import defaultdict from azurelinuxagent.common import logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import ExtensionsConfigError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.extensions_goal_state import ExtensionsGoalState, GoalStateChannel, GoalStateSource from azurelinuxagent.common.protocol.restapi import ExtensionSettings, Extension, VMAgentFamily, ExtensionState, InVMGoalStateMetaData from azurelinuxagent.common.utils.textutil import parse_doc, parse_json, findall, find, findtext, getattrib, gettext, format_exception, \ is_str_none_or_whitespace, is_str_empty class ExtensionsGoalStateFromExtensionsConfig(ExtensionsGoalState): def __init__(self, incarnation, xml_text, wire_client): super(ExtensionsGoalStateFromExtensionsConfig, self).__init__() self._id = "incarnation_{0}".format(incarnation) self._is_outdated = False self._incarnation = incarnation self._text = xml_text self._status_upload_blob = None self._status_upload_blob_type = None self._required_features = [] self._on_hold = False self._activity_id = None self._correlation_id = None self._created_on_timestamp = None self._agent_families = [] self._extensions = [] try: self._parse_extensions_config(xml_text, wire_client) self._do_common_validations() except Exception as e: raise ExtensionsConfigError("Error parsing ExtensionsConfig (incarnation: {0}): {1}\n{2}".format(incarnation, format_exception(e), self.get_redacted_text())) def _parse_extensions_config(self, xml_text, wire_client): xml_doc = parse_doc(xml_text) ga_families_list = find(xml_doc, "GAFamilies") ga_families = findall(ga_families_list, "GAFamily") for ga_family in ga_families: name = findtext(ga_family, "Name") version = findtext(ga_family, "Version") is_version_from_rsm = findtext(ga_family, 
"IsVersionFromRSM") is_vm_enabled_for_rsm_upgrades = findtext(ga_family, "IsVMEnabledForRSMUpgrades") uris_list = find(ga_family, "Uris") uris = findall(uris_list, "Uri") family = VMAgentFamily(name) family.version = version if is_version_from_rsm is not None: # checking None because converting string to lowercase family.is_version_from_rsm = is_version_from_rsm.lower() == "true" if is_vm_enabled_for_rsm_upgrades is not None: # checking None because converting string to lowercase family.is_vm_enabled_for_rsm_upgrades = is_vm_enabled_for_rsm_upgrades.lower() == "true" for uri in uris: family.uris.append(gettext(uri)) self._agent_families.append(family) self.__parse_plugins_and_settings_and_populate_ext_handlers(xml_doc) required_features_list = find(xml_doc, "RequiredFeatures") if required_features_list is not None: self._parse_required_features(required_features_list) self._status_upload_blob = findtext(xml_doc, "StatusUploadBlob") status_upload_node = find(xml_doc, "StatusUploadBlob") self._status_upload_blob_type = getattrib(status_upload_node, "statusBlobType") logger.verbose("Extension config shows status blob type as [{0}]", self._status_upload_blob_type) self._on_hold = ExtensionsGoalStateFromExtensionsConfig._fetch_extensions_on_hold(xml_doc, wire_client) in_vm_gs_metadata = InVMGoalStateMetaData(find(xml_doc, "InVMGoalStateMetaData")) self._activity_id = self._string_to_id(in_vm_gs_metadata.activity_id) self._correlation_id = self._string_to_id(in_vm_gs_metadata.correlation_id) self._created_on_timestamp = self._ticks_to_utc_timestamp(in_vm_gs_metadata.created_on_ticks) @staticmethod def _fetch_extensions_on_hold(xml_doc, wire_client): def log_info(message): logger.info(message) add_event(op=WALAEventOperation.ArtifactsProfileBlob, message=message, is_success=True, log_event=False) def log_warning(message): logger.warn(message) add_event(op=WALAEventOperation.ArtifactsProfileBlob, message=message, is_success=False, log_event=False) artifacts_profile_blob = findtext(xml_doc, "InVMArtifactsProfileBlob") if is_str_none_or_whitespace(artifacts_profile_blob): log_info("ExtensionsConfig does not include a InVMArtifactsProfileBlob; will assume the VM is not on hold") return False try: profile = wire_client.fetch_artifacts_profile_blob(artifacts_profile_blob) except Exception as error: log_warning("Can't download the artifacts profile blob; will assume the VM is not on hold. {0}".format(ustr(error))) return False if is_str_empty(profile): log_info("The artifacts profile blob is empty; will assume the VM is not on hold.") return False try: artifacts_profile = _InVMArtifactsProfile(profile) except Exception as exception: log_warning("Can't parse the artifacts profile blob; will assume the VM is not on hold. 
Error: {0}".format(ustr(exception))) return False return artifacts_profile.get_on_hold() @property def id(self): return self._id @property def incarnation(self): return self._incarnation @property def svd_sequence_number(self): return self._incarnation @property def activity_id(self): return self._activity_id @property def correlation_id(self): return self._correlation_id @property def created_on_timestamp(self): return self._created_on_timestamp @property def channel(self): return GoalStateChannel.WireServer @property def source(self): return GoalStateSource.Fabric @property def status_upload_blob(self): return self._status_upload_blob @property def status_upload_blob_type(self): return self._status_upload_blob_type def _set_status_upload_blob_type(self, value): self._status_upload_blob_type = value @property def required_features(self): return self._required_features @property def on_hold(self): return self._on_hold @property def agent_families(self): return self._agent_families @property def extensions(self): return self._extensions def get_redacted_text(self): text = self._text for ext_handler in self._extensions: for extension in ext_handler.settings: if extension.protectedSettings is not None: text = text.replace(extension.protectedSettings, "*** REDACTED ***") return text def _parse_required_features(self, required_features_list): for required_feature in findall(required_features_list, "RequiredFeature"): feature_name = findtext(required_feature, "Name") # per the documentation, RequiredFeatures also have a "Value" attribute but currently it is not being populated self._required_features.append(feature_name) def __parse_plugins_and_settings_and_populate_ext_handlers(self, xml_doc): """ Sample ExtensionConfig Plugin and PluginSettings: { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"01_add_extensions_with_dependency":"ff2a3da6-8e12-4ab6-a4ca-4e3a473ab385"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"01_add_extensions_with_dependency":"2e837740-cf7e-4528-b3a4-241002618f05"} } } ] } """ plugins_list = find(xml_doc, "Plugins") plugins = findall(plugins_list, "Plugin") plugin_settings_list = find(xml_doc, "PluginSettings") plugin_settings = findall(plugin_settings_list, "Plugin") for plugin in plugins: extension = Extension() try: ExtensionsGoalStateFromExtensionsConfig._parse_plugin(extension, plugin) ExtensionsGoalStateFromExtensionsConfig._parse_plugin_settings(extension, plugin_settings) except ExtensionsConfigError as error: extension.invalid_setting_reason = ustr(error) self._extensions.append(extension) @staticmethod def _parse_plugin(extension, plugin): """ Sample config: https://rdfecurrentuswestcache3.blob.core.test-cint.azure-test.net/0e53c53ef0be4178bacb0a1fecf12a74/Microsoft.Azure.Extensions_CustomScript_usstagesc_manifest.xml https://rdfecurrentuswestcache4.blob.core.test-cint.azure-test.net/0e53c53ef0be4178bacb0a1fecf12a74/Microsoft.Azure.Extensions_CustomScript_usstagesc_manifest.xml Note that the `additionalLocations` subnode is populated with links generated by PIR for resiliency. In regions with this feature enabled, CRP will provide any extra links in the format above. If no extra links are provided, the subnode will not exist. """ def _log_error_if_none(attr_name, value): # Plugin Name and Version are very essential fields, without them we wont be able to even report back to CRP # about that handler. 
For those cases we need to fail the GoalState completely but currently we don't support # reporting status at a GoalState level (we only report at a handler level). # Once that functionality is added to the GA, we would raise here rather than just report an error in our logs. if value in (None, ""): add_event(op=WALAEventOperation.InvalidExtensionConfig, message="{0} is None for ExtensionConfig, logging error".format(attr_name), log_event=True, is_success=False) return value extension.name = _log_error_if_none("Extensions.Plugins.Plugin.name", getattrib(plugin, "name")) extension.version = _log_error_if_none("Extensions.Plugins.Plugin.version", getattrib(plugin, "version")) extension.state = getattrib(plugin, "state") if extension.state in (None, ""): raise ExtensionsConfigError("Received empty Extensions.Plugins.Plugin.state, failing Handler") def getattrib_wrapped_in_list(node, attr_name): attr = getattrib(node, attr_name) return [attr] if attr not in (None, "") else [] location = getattrib_wrapped_in_list(plugin, "location") failover_location = getattrib_wrapped_in_list(plugin, "failoverlocation") locations = location + failover_location additional_location_node = find(plugin, "additionalLocations") if additional_location_node is not None: nodes_list = findall(additional_location_node, "additionalLocation") locations += [gettext(node) for node in nodes_list] for uri in locations: extension.manifest_uris.append(uri) @staticmethod def _parse_plugin_settings(extension, plugin_settings): """ Sample config: { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"01_add_extensions_with_dependency":"ff2a3da6-8e12-4ab6-a4ca-4e3a473ab385"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World TestTry2!"},"parameters":[{"name":"extensionName","value":"firstRunCommand"}],"timeoutInSeconds":120} } } ] } """ if plugin_settings is None: return extension_name = extension.name version = extension.version def to_lower(str_to_change): return str_to_change.lower() if str_to_change is not None else None extension_plugin_settings = [x for x in plugin_settings if to_lower(getattrib(x, "name")) == to_lower(extension_name)] if not extension_plugin_settings: return settings = [x for x in extension_plugin_settings if getattrib(x, "version") == version] if len(settings) != len(extension_plugin_settings): msg = "Extension PluginSettings Version Mismatch! 
Expected PluginSettings version: {0} for Extension: {1} but found versions: ({2})".format( version, extension_name, ', '.join(set([getattrib(x, "version") for x in extension_plugin_settings]))) add_event(op=WALAEventOperation.PluginSettingsVersionMismatch, message=msg, log_event=True, is_success=False) raise ExtensionsConfigError(msg) if len(settings) > 1: msg = "Multiple plugin settings found for the same extension: {0} and version: {1} (Expected: 1; Available: {2})".format( extension_name, version, len(settings)) raise ExtensionsConfigError(msg) plugin_settings_node = settings[0] runtime_settings_nodes = findall(plugin_settings_node, "RuntimeSettings") extension_runtime_settings_nodes = findall(plugin_settings_node, "ExtensionRuntimeSettings") if any(runtime_settings_nodes) and any(extension_runtime_settings_nodes): # There can only be a single RuntimeSettings node or multiple ExtensionRuntimeSettings nodes per Plugin msg = "Both RuntimeSettings and ExtensionRuntimeSettings found for the same extension: {0} and version: {1}".format( extension_name, version) raise ExtensionsConfigError(msg) if runtime_settings_nodes: if len(runtime_settings_nodes) > 1: msg = "Multiple RuntimeSettings found for the same extension: {0} and version: {1} (Expected: 1; Available: {2})".format( extension_name, version, len(runtime_settings_nodes)) raise ExtensionsConfigError(msg) # Only Runtime settings available, parse that ExtensionsGoalStateFromExtensionsConfig.__parse_runtime_settings(plugin_settings_node, runtime_settings_nodes[0], extension_name, extension) elif extension_runtime_settings_nodes: # Parse the ExtensionRuntime settings for the given extension ExtensionsGoalStateFromExtensionsConfig.__parse_extension_runtime_settings(plugin_settings_node, extension_runtime_settings_nodes, extension) @staticmethod def __get_dependency_level_from_node(depends_on_node, name): depends_on_level = 0 if depends_on_node is not None: try: depends_on_level = int(getattrib(depends_on_node, "dependencyLevel")) except (ValueError, TypeError): logger.warn("Could not parse dependencyLevel for handler {0}. Setting it to 0".format(name)) depends_on_level = 0 return depends_on_level @staticmethod def __parse_runtime_settings(plugin_settings_node, runtime_settings_node, extension_name, extension): """ Sample Plugin in PluginSettings containing DependsOn and RuntimeSettings (single settings per extension) - { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "", "protectedSettings": "", "publicSettings": {"UserName":"test1234"} } } ] } """ depends_on_nodes = findall(plugin_settings_node, "DependsOn") if len(depends_on_nodes) > 1: msg = "Extension Handler can only have a single dependsOn node for Single config extensions. 
Found: {0}".format( len(depends_on_nodes)) raise ExtensionsConfigError(msg) depends_on_node = depends_on_nodes[0] if depends_on_nodes else None depends_on_level = ExtensionsGoalStateFromExtensionsConfig.__get_dependency_level_from_node(depends_on_node, extension_name) ExtensionsGoalStateFromExtensionsConfig.__parse_and_add_extension_settings(runtime_settings_node, extension_name, extension, depends_on_level) @staticmethod def __parse_extension_runtime_settings(plugin_settings_node, extension_runtime_settings_nodes, extension): """ Sample PluginSettings containing DependsOn and ExtensionRuntimeSettings - { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host Third: Hello World 3!"}} } } ] } """ # Parse and cache the Dependencies for each extension first dependency_levels = defaultdict(int) for depends_on_node in findall(plugin_settings_node, "DependsOn"): extension_name = getattrib(depends_on_node, "name") if extension_name in (None, ""): raise ExtensionsConfigError("Name not specified for DependsOn object in ExtensionRuntimeSettings for MultiConfig!") dependency_level = ExtensionsGoalStateFromExtensionsConfig.__get_dependency_level_from_node(depends_on_node, extension_name) dependency_levels[extension_name] = dependency_level extension.supports_multi_config = True for extension_runtime_setting_node in extension_runtime_settings_nodes: # Name and State will only be set for ExtensionRuntimeSettings for Multi-Config extension_name = getattrib(extension_runtime_setting_node, "name") if extension_name in (None, ""): raise ExtensionsConfigError("Extension Name not specified for ExtensionRuntimeSettings for MultiConfig!") # State can either be `ExtensionState.Enabled` (default) or `ExtensionState.Disabled` state = getattrib(extension_runtime_setting_node, "state") state = ustr(state.lower()) if state not in (None, "") else ExtensionState.Enabled ExtensionsGoalStateFromExtensionsConfig.__parse_and_add_extension_settings(extension_runtime_setting_node, extension_name, extension, dependency_levels[extension_name], state=state) @staticmethod def __parse_and_add_extension_settings(settings_node, name, extension, depends_on_level, state=ExtensionState.Enabled): seq_no = getattrib(settings_node, "seqNo") if seq_no in (None, ""): raise ExtensionsConfigError("SeqNo not specified for the Extension: {0}".format(name)) try: runtime_settings = json.loads(gettext(settings_node)) except ValueError as error: logger.error("Invalid extension settings: {0}", ustr(error)) # In case of invalid/no settings, add the name and seqNo of the Extension and treat it as an extension with # no settings since we were able to parse that data successfully. Without this, we won't report # anything for that sequence number and CRP would eventually have to time out rather than fail fast. extension.settings.append( ExtensionSettings(name=name, sequenceNumber=seq_no, state=state, dependencyLevel=depends_on_level)) return for plugin_settings_list in runtime_settings["runtimeSettings"]: handler_settings = plugin_settings_list["handlerSettings"] extension_settings = ExtensionSettings() # There is no "extension name" for single Handler Settings. 
Use HandlerName for those extension_settings.name = name extension_settings.state = state extension_settings.sequenceNumber = int(seq_no) extension_settings.publicSettings = handler_settings.get("publicSettings") extension_settings.protectedSettings = handler_settings.get("protectedSettings") extension_settings.dependencyLevel = depends_on_level thumbprint = handler_settings.get("protectedSettingsCertThumbprint") extension_settings.certificateThumbprint = thumbprint extension.settings.append(extension_settings) # Do not extend this class class _InVMArtifactsProfile(object): """ deserialized json string of InVMArtifactsProfile. It is expected to contain the following fields: * inVMArtifactsProfileBlobSeqNo * profileId (optional) * onHold (optional) * certificateThumbprint (optional) * encryptedHealthChecks (optional) * encryptedApplicationProfile (optional) """ def __init__(self, artifacts_profile_json): self._on_hold = False artifacts_profile = parse_json(artifacts_profile_json) on_hold = artifacts_profile.get('onHold') if on_hold is not None: # accept both bool and str values on_hold_normalized = str(on_hold).lower() if on_hold_normalized == "true": self._on_hold = True elif on_hold_normalized == "false": self._on_hold = False else: raise Exception("Invalid value for onHold: {0}".format(on_hold)) def get_on_hold(self): return self._on_hold Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/extensions_goal_state_from_vm_settings.py000066400000000000000000000653531462617747000345420ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
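#
# A minimal, self-contained sketch of the on-hold decision implemented by
# _InVMArtifactsProfile above; the JSON payloads shown are hypothetical examples:
#
#     import json
#
#     def is_on_hold(artifacts_profile_json):
#         on_hold = json.loads(artifacts_profile_json).get('onHold')
#         if on_hold is None:
#             return False
#         normalized = str(on_hold).lower()  # accept both bool and str values
#         if normalized not in ("true", "false"):
#             raise ValueError("Invalid value for onHold: {0}".format(on_hold))
#         return normalized == "true"
#
#     is_on_hold('{"onHold": true}')     # True
#     is_on_hold('{"onHold": "False"}')  # False
#     is_on_hold('{}')                   # False
#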
# # Requires Python 2.6+ and Openssl 1.0+ import datetime import json import re import sys from azurelinuxagent.common import logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.event import WALAEventOperation, add_event from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.extensions_goal_state import ExtensionsGoalState, GoalStateChannel, VmSettingsParseError from azurelinuxagent.common.protocol.restapi import VMAgentFamily, Extension, ExtensionRequestedState, ExtensionSettings from azurelinuxagent.common.utils.flexible_version import FlexibleVersion class ExtensionsGoalStateFromVmSettings(ExtensionsGoalState): _MINIMUM_TIMESTAMP = datetime.datetime(1900, 1, 1, 0, 0) # min value accepted by datetime.strftime() def __init__(self, etag, json_text, correlation_id): super(ExtensionsGoalStateFromVmSettings, self).__init__() self._id = "etag_{0}".format(etag) self._etag = etag self._svd_sequence_number = 0 self._hostga_plugin_correlation_id = correlation_id self._text = json_text self._host_ga_plugin_version = FlexibleVersion('0.0.0.0') self._schema_version = FlexibleVersion('0.0.0.0') self._activity_id = AgentGlobals.GUID_ZERO self._correlation_id = AgentGlobals.GUID_ZERO self._created_on_timestamp = self._MINIMUM_TIMESTAMP self._source = None self._status_upload_blob = None self._status_upload_blob_type = None self._required_features = [] self._on_hold = False self._agent_families = [] self._extensions = [] try: self._parse_vm_settings(json_text) self._do_common_validations() except Exception as e: message = "Error parsing vmSettings [HGAP: {0} Etag:{1}]: {2}".format(self._host_ga_plugin_version, etag, ustr(e)) raise VmSettingsParseError(message, etag, self.get_redacted_text()) @property def id(self): return self._id @property def etag(self): return self._etag @property def svd_sequence_number(self): return self._svd_sequence_number @property def host_ga_plugin_version(self): return self._host_ga_plugin_version @property def schema_version(self): return self._schema_version @property def activity_id(self): """ The CRP activity id """ return self._activity_id @property def correlation_id(self): """ The correlation id for the CRP operation """ return self._correlation_id @property def hostga_plugin_correlation_id(self): """ The correlation id for the call to the HostGAPlugin vmSettings API """ return self._hostga_plugin_correlation_id @property def created_on_timestamp(self): """ Timestamp assigned by the CRP (time at which the goal state was created) """ return self._created_on_timestamp @property def channel(self): return GoalStateChannel.HostGAPlugin @property def source(self): return self._source @property def status_upload_blob(self): return self._status_upload_blob @property def status_upload_blob_type(self): return self._status_upload_blob_type def _set_status_upload_blob_type(self, value): self._status_upload_blob_type = value @property def required_features(self): return self._required_features @property def on_hold(self): return self._on_hold @property def agent_families(self): return self._agent_families @property def extensions(self): return self._extensions def get_redacted_text(self): return re.sub(r'("protectedSettings"\s*:\s*)"[^"]+"', r'\1"*** REDACTED ***"', self._text) def _parse_vm_settings(self, json_text): vm_settings = _CaseFoldedDict.from_dict(json.loads(json_text)) self._parse_simple_attributes(vm_settings) self._parse_status_upload_blob(vm_settings) self._parse_required_features(vm_settings) 
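# Note: vm_settings is a _CaseFoldedDict (defined at the end of this module), so the lookups performed by these _parse_* helpers are case-insensitive; e.g. vm_settings.get("statusUploadBlob") and vm_settings.get("statusuploadblob") return the same item.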
self._parse_agent_manifests(vm_settings) self._parse_extensions(vm_settings) def _parse_simple_attributes(self, vm_settings): # Sample: # { # "hostGAPluginVersion": "1.0.8.115", # "vmSettingsSchemaVersion": "0.0", # "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", # "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", # "inSvdSeqNo": 1, # "extensionsLastModifiedTickCount": 637726657706205217, # "extensionGoalStatesSource": "FastTrack", # ... # } # The HGAP version is included in some messages, so parse it first host_ga_plugin_version = vm_settings.get("hostGAPluginVersion") if host_ga_plugin_version is not None: self._host_ga_plugin_version = FlexibleVersion(host_ga_plugin_version) self._activity_id = self._string_to_id(vm_settings.get("activityId")) self._correlation_id = self._string_to_id(vm_settings.get("correlationId")) self._svd_sequence_number = self._string_to_id(vm_settings.get("inSvdSeqNo")) self._created_on_timestamp = self._ticks_to_utc_timestamp(vm_settings.get("extensionsLastModifiedTickCount")) schema_version = vm_settings.get("vmSettingsSchemaVersion") if schema_version is not None: self._schema_version = FlexibleVersion(schema_version) on_hold = vm_settings.get("onHold") if on_hold is not None: self._on_hold = on_hold self._source = vm_settings.get("extensionGoalStatesSource") if self._source is None: self._source = "UNKNOWN" def _parse_status_upload_blob(self, vm_settings): # Sample: # { # ... # "statusUploadBlob": { # "statusBlobType": "BlockBlob", # "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" # }, # ... # } status_upload_blob = vm_settings.get("statusUploadBlob") if status_upload_blob is None: self._status_upload_blob = None self._status_upload_blob_type = "BlockBlob" else: self._status_upload_blob = status_upload_blob.get("value") if self._status_upload_blob is None: raise Exception("Missing statusUploadBlob.value") self._status_upload_blob_type = status_upload_blob.get("statusBlobType") if self._status_upload_blob_type is None: self._status_upload_blob_type = "BlockBlob" def _parse_required_features(self, vm_settings): # Sample: # { # ... # "requiredFeatures": [ # { # "name": "MultipleExtensionsPerHandler" # } # ], # ... # } required_features = vm_settings.get("requiredFeatures") if required_features is not None: if not isinstance(required_features, list): raise Exception("requiredFeatures should be an array (got {0})".format(required_features)) def get_required_features_names(): for feature in required_features: name = feature.get("name") if name is None: raise Exception("A required feature is missing the 'name' property (got {0})".format(feature)) yield name self._required_features.extend(get_required_features_names()) def _parse_agent_manifests(self, vm_settings): # Sample: # { # ... 
# "gaFamilies": [ # { # "name": "Prod", # "version": "9.9.9.9", # "isVersionFromRSM": true, # "isVMEnabledForRSMUpgrades": true, # "uris": [ # "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", # "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" # ] # }, # { # "name": "Test", # "uris": [ # "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", # "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" # ] # } # ], # ... # } families = vm_settings.get("gaFamilies") if families is None: return if not isinstance(families, list): raise Exception("gaFamilies should be an array (got {0})".format(families)) for family in families: name = family["name"] version = family.get("version") is_version_from_rsm = family.get("isVersionFromRSM") is_vm_enabled_for_rsm_upgrades = family.get("isVMEnabledForRSMUpgrades") uris = family.get("uris") if uris is None: uris = [] agent_family = VMAgentFamily(name) agent_family.version = version agent_family.is_version_from_rsm = is_version_from_rsm agent_family.is_vm_enabled_for_rsm_upgrades = is_vm_enabled_for_rsm_upgrades for u in uris: agent_family.uris.append(u) self._agent_families.append(agent_family) def _parse_extensions(self, vm_settings): # Sample (NOTE: The first sample is single-config, the second multi-config): # { # ... # "extensionGoalStates": [ # { # "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", # "version": "1.9.1", # "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", # "state": "enabled", # "autoUpgrade": true, # "runAsStartupTask": false, # "isJson": true, # "useExactVersion": true, # "settingsSeqNo": 0, # "settings": [ # { # "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", # "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", # "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" # } # ], # "dependsOn": [ # ... 
# ] # }, # { # "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", # "version": "1.2.0", # "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", # "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", # "additionalLocations": [ # "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" # ], # "state": "enabled", # "autoUpgrade": true, # "runAsStartupTask": false, # "isJson": true, # "useExactVersion": true, # "settingsSeqNo": 0, # "isMultiConfig": true, # "settings": [ # { # "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", # "seqNo": 0, # "extensionName": "MCExt1", # "extensionState": "enabled" # }, # { # "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", # "seqNo": 0, # "extensionName": "MCExt2", # "extensionState": "enabled" # }, # { # "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", # "seqNo": 0, # "extensionName": "MCExt3", # "extensionState": "enabled" # } # ], # "dependsOn": [ # ... # ] # } # ... # ] # ... # } extension_goal_states = vm_settings.get("extensionGoalStates") if extension_goal_states is not None: if not isinstance(extension_goal_states, list): raise Exception("extension_goal_states should be an array (got {0})".format(type(extension_goal_states))) # report only the type, since the value may contain secrets for extension_gs in extension_goal_states: extension = Extension() extension.name = extension_gs['name'] extension.version = extension_gs['version'] extension.state = extension_gs['state'] if extension.state not in ExtensionRequestedState.All: raise Exception('Invalid extension state: {0} ({1})'.format(extension.state, extension.name)) is_multi_config = extension_gs.get('isMultiConfig') if is_multi_config is not None: extension.supports_multi_config = is_multi_config location = extension_gs.get('location') if location is not None: extension.manifest_uris.append(location) fail_over_location = extension_gs.get('failoverLocation') if fail_over_location is not None: extension.manifest_uris.append(fail_over_location) additional_locations = extension_gs.get('additionalLocations') if additional_locations is not None: if not isinstance(additional_locations, list): raise Exception('additionalLocations should be an array (got {0})'.format(additional_locations)) extension.manifest_uris.extend(additional_locations) # # Settings # settings_list = extension_gs.get('settings') if settings_list is not None: if not isinstance(settings_list, list): raise Exception("'settings' should be an array (extension: {0})".format(extension.name)) if not extension.supports_multi_config and len(settings_list) > 1: raise Exception("Single-config extension includes multiple settings (extension: {0})".format(extension.name)) for s in settings_list: settings = ExtensionSettings() public_settings = s.get('publicSettings') # Note that publicSettings, protectedSettings and protectedSettingsCertThumbprint can be None; do not change this to, for example, # empty, since those values are serialized to the extension's status file and extensions may depend on the current implementation # (for example, no public settings would currently be serialized as '"publicSettings": null') 
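# For example (hypothetical values): a settings item such as {"publicSettings": "{\"GCS_AUTO_CONFIG\":true}", "protectedSettings": null} is parsed below into publicSettings == {"GCS_AUTO_CONFIG": True} while protectedSettings remains None.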
settings.publicSettings = None if public_settings is None else json.loads(public_settings) settings.protectedSettings = s.get('protectedSettings') thumbprint = s.get('protectedSettingsCertThumbprint') if thumbprint is None and settings.protectedSettings is not None: raise Exception("The certificate thumbprint for protected settings is missing (extension: {0})".format(extension.name)) settings.certificateThumbprint = thumbprint # in multi-config each settings have their own name, sequence number and state if extension.supports_multi_config: settings.name = s['extensionName'] settings.sequenceNumber = s['seqNo'] settings.state = s['extensionState'] else: settings.name = extension.name settings.sequenceNumber = extension_gs['settingsSeqNo'] settings.state = extension.state extension.settings.append(settings) # # Dependency level # depends_on = extension_gs.get("dependsOn") if depends_on is not None: self._parse_dependency_level(depends_on, extension) self._extensions.append(extension) @staticmethod def _parse_dependency_level(depends_on, extension): # Sample (NOTE: The first sample is single-config, the second multi-config): # { # ... # "extensionGoalStates": [ # { # "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", # ... # "settings": [ # ... # ], # "dependsOn": [ # { # "DependsOnExtension": [ # { # "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" # } # ], # "dependencyLevel": 1 # } # ] # }, # { # "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", # ... # "isMultiConfig": true, # "settings": [ # { # ... # "extensionName": "MCExt1", # }, # { # ... # "extensionName": "MCExt2", # }, # { # ... # "extensionName": "MCExt3", # } # ], # "dependsOn": [ # { # "dependsOnExtension": [ # { # "extension": "...", # "handler": "..." # }, # { # "extension": "...", # "handler": "..." # } # ], # "dependencyLevel": 2, # "name": "MCExt1" # }, # { # "dependsOnExtension": [ # { # "extension": "...", # "handler": "..." # } # ], # "dependencyLevel": 1, # "name": "MCExt2" # } # ... # ] # ... # } if not isinstance(depends_on, list): raise Exception('dependsOn should be an array ({0}) (got {1})'.format(extension.name, depends_on)) if not extension.supports_multi_config: # single-config length = len(depends_on) if length > 1: raise Exception('dependsOn should be an array with exactly one item for single-config extensions ({0}) (got {1})'.format(extension.name, depends_on)) if length == 0: logger.warn('dependsOn is an empty array for extension {0}; setting the dependency level to 0'.format(extension.name)) dependency_level = 0 else: dependency_level = depends_on[0]['dependencyLevel'] depends_on_extension = depends_on[0].get('dependsOnExtension') if depends_on_extension is None: # TODO: Consider removing this check and its telemetry after a few releases if we do not receive any telemetry indicating # that dependsOnExtension is actually missing from the vmSettings message = 'Missing dependsOnExtension on extension {0}'.format(extension.name) logger.warn(message) add_event(WALAEventOperation.ProvisionAfterExtensions, message=message, is_success=False, log_event=False) else: message = '{0} depends on {1}'.format(extension.name, depends_on_extension) logger.info(message) add_event(WALAEventOperation.ProvisionAfterExtensions, message=message, is_success=True, log_event=False) if len(extension.settings) == 0: message = 'Extension {0} does not have any settings. 
Will ignore dependency (dependency level: {1})'.format(extension.name, dependency_level) logger.warn(message) add_event(WALAEventOperation.ProvisionAfterExtensions, message=message, is_success=False, log_event=False) else: extension.settings[0].dependencyLevel = dependency_level else: # multi-config settings_by_name = {} for settings in extension.settings: settings_by_name[settings.name] = settings for dependency in depends_on: settings = settings_by_name.get(dependency["name"]) if settings is None: raise Exception("Dependency '{0}' does not correspond to any of the settings in the extension (settings: {1})".format(dependency["name"], settings_by_name.keys())) settings.dependencyLevel = dependency["dependencyLevel"] # # TODO: The current implementation of the vmSettings API uses inconsistent cases on the names of the json items it returns. # To work around that, we use _CaseFoldedDict to query those json items in a case-insensitive manner. Do not use # _CaseFoldedDict for other purposes. Remove it once the vmSettings API is updated. # class _CaseFoldedDict(dict): @staticmethod def from_dict(dictionary): case_folded = _CaseFoldedDict() for key, value in dictionary.items(): case_folded[key] = _CaseFoldedDict._to_case_folded_dict_item(value) return case_folded def get(self, key): return super(_CaseFoldedDict, self).get(_casefold(key)) def has_key(self, key): return super(_CaseFoldedDict, self).get(_casefold(key)) def __getitem__(self, key): return super(_CaseFoldedDict, self).__getitem__(_casefold(key)) def __setitem__(self, key, value): return super(_CaseFoldedDict, self).__setitem__(_casefold(key), value) def __contains__(self, key): return super(_CaseFoldedDict, self).__contains__(_casefold(key)) @staticmethod def _to_case_folded_dict_item(item): if isinstance(item, dict): case_folded_dict = _CaseFoldedDict() for key, value in item.items(): case_folded_dict[_casefold(key)] = _CaseFoldedDict._to_case_folded_dict_item(value) return case_folded_dict if isinstance(item, list): return [_CaseFoldedDict._to_case_folded_dict_item(list_item) for list_item in item] return item def copy(self): raise NotImplementedError() @staticmethod def fromkeys(*args, **kwargs): raise NotImplementedError() def pop(self, key, default=None): raise NotImplementedError() def setdefault(self, key, default=None): raise NotImplementedError() def update(self, E=None, **F): # known special case of dict.update raise NotImplementedError() def __delitem__(self, *args, **kwargs): raise NotImplementedError() # casefold() does not exist on Python 2 so we use lower() there def _casefold(string): if sys.version_info[0] == 2: return type(string).lower(string) # the type of "string" can be unicode or str # Class 'str' has no 'casefold' member (no-member) -- Disabled: This warning shows up on Python 2.7 pylint runs # but this code is actually not executed on Python 2. return str.casefold(string) # pylint: disable=no-member Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/goal_state.py000066400000000000000000001034521462617747000266070ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import datetime import os import re import time import json from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.datacontract import set_properties from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import ProtocolError, ResourceGoneError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.extensions_goal_state_factory import ExtensionsGoalStateFactory from azurelinuxagent.common.protocol.extensions_goal_state import VmSettingsParseError, GoalStateSource from azurelinuxagent.common.protocol.hostplugin import VmSettingsNotSupported, VmSettingsSupportStopped from azurelinuxagent.common.protocol.restapi import Cert, CertList, RemoteAccessUser, RemoteAccessUsersList, ExtHandlerPackage, ExtHandlerPackageList from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils.archive import GoalStateHistory, SHARED_CONF_FILE_NAME from azurelinuxagent.common.utils.cryptutil import CryptUtil from azurelinuxagent.common.utils.textutil import parse_doc, findall, find, findtext, getattrib, gettext GOAL_STATE_URI = "http://{0}/machine/?comp=goalstate" CERTS_FILE_NAME = "Certificates.xml" P7M_FILE_NAME = "Certificates.p7m" PEM_FILE_NAME = "Certificates.pem" TRANSPORT_CERT_FILE_NAME = "TransportCert.pem" TRANSPORT_PRV_FILE_NAME = "TransportPrivate.pem" _GET_GOAL_STATE_MAX_ATTEMPTS = 6 class GoalStateProperties(object): """ Enum for defining the properties that we fetch in the goal state """ RoleConfig = 0x1 HostingEnv = 0x2 SharedConfig = 0x4 ExtensionsGoalState = 0x8 Certificates = 0x10 RemoteAccessInfo = 0x20 All = RoleConfig | HostingEnv | SharedConfig | ExtensionsGoalState | Certificates | RemoteAccessInfo class GoalStateInconsistentError(ProtocolError): """ Indicates an inconsistency in the goal state (e.g. missing tenant certificate) """ def __init__(self, msg, inner=None): super(GoalStateInconsistentError, self).__init__(msg, inner) class GoalState(object): def __init__(self, wire_client, goal_state_properties=GoalStateProperties.All, silent=False, save_to_history=False): """ Fetches the goal state using the given wire client. Fetching the goal state involves several HTTP requests to the WireServer and the HostGAPlugin. There is an initial request to WireServer's goalstate API, which response includes the incarnation, role instance, container ID, role config, and URIs to the rest of the goal state (ExtensionsConfig, Certificates, Remote Access users, etc.). Additional requests are done using those URIs (all of them point to APIs in the WireServer). Additionally, there is a request to the HostGAPlugin for the vmSettings, which determines the goal state for extensions when using the Fast Track pipeline. To reduce the number of requests, when possible, create a single instance of GoalState and use the update() method to keep it up to date. 
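A minimal usage sketch (assuming an initialized WireClient):

    goal_state = GoalState(wire_client)   # full initial fetch
    goal_state.update()                   # later refreshes fetch only what changed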
""" try: self._wire_client = wire_client self._history = None self._save_to_history = save_to_history self._extensions_goal_state = None # populated from vmSettings or extensionsConfig self._goal_state_properties = goal_state_properties self.logger = logger.Logger(logger.DEFAULT_LOGGER) self.logger.silent = silent # These properties hold the goal state from the WireServer and are initialized by self._fetch_full_wire_server_goal_state() self._incarnation = None self._role_instance_id = None self._role_config_name = None self._container_id = None self._hosting_env = None self._shared_conf = None self._certs = EmptyCertificates() self._certs_uri = None self._remote_access = None self.update(silent=silent) except ProtocolError: raise except Exception as exception: # We don't log the error here since fetching the goal state is done every few seconds raise ProtocolError(msg="Error fetching goal state", inner=exception) @property def incarnation(self): return self._incarnation @property def container_id(self): if not self._goal_state_properties & GoalStateProperties.RoleConfig: raise ProtocolError("ContainerId is not in goal state properties") else: return self._container_id @property def role_instance_id(self): if not self._goal_state_properties & GoalStateProperties.RoleConfig: raise ProtocolError("RoleInstanceId is not in goal state properties") else: return self._role_instance_id @property def role_config_name(self): if not self._goal_state_properties & GoalStateProperties.RoleConfig: raise ProtocolError("RoleConfig is not in goal state properties") else: return self._role_config_name @property def extensions_goal_state(self): if not self._goal_state_properties & GoalStateProperties.ExtensionsGoalState: raise ProtocolError("ExtensionsGoalState is not in goal state properties") else: return self._extensions_goal_state @property def certs(self): if not self._goal_state_properties & GoalStateProperties.Certificates: raise ProtocolError("Certificates is not in goal state properties") else: return self._certs @property def hosting_env(self): if not self._goal_state_properties & GoalStateProperties.HostingEnv: raise ProtocolError("HostingEnvironment is not in goal state properties") else: return self._hosting_env @property def shared_conf(self): if not self._goal_state_properties & GoalStateProperties.SharedConfig: raise ProtocolError("SharedConfig is not in goal state properties") else: return self._shared_conf @property def remote_access(self): if not self._goal_state_properties & GoalStateProperties.RemoteAccessInfo: raise ProtocolError("RemoteAccessInfo is not in goal state properties") else: return self._remote_access def fetch_agent_manifest(self, family_name, uris): """ This is a convenience method that wraps WireClient.fetch_manifest(), but adds the required 'use_verify_header' parameter and saves the manifest to the history folder. """ return self._fetch_manifest("agent", "waagent.{0}".format(family_name), uris) def fetch_extension_manifest(self, extension_name, uris): """ This is a convenience method that wraps WireClient.fetch_manifest(), but adds the required 'use_verify_header' parameter and saves the manifest to the history folder. 
""" return self._fetch_manifest("extension", extension_name, uris) def _fetch_manifest(self, manifest_type, name, uris): try: is_fast_track = self.extensions_goal_state.source == GoalStateSource.FastTrack xml_text = self._wire_client.fetch_manifest(manifest_type, uris, use_verify_header=is_fast_track) if self._save_to_history: self._history.save_manifest(name, xml_text) return ExtensionManifest(xml_text) except Exception as e: raise ProtocolError("Failed to retrieve {0} manifest. Error: {1}".format(manifest_type, ustr(e))) @staticmethod def update_host_plugin_headers(wire_client): """ Updates the container ID and role config name that are send in the headers of HTTP requests to the HostGAPlugin """ # Fetching the goal state updates the HostGAPlugin so simply trigger the request GoalState._fetch_goal_state(wire_client) def update(self, silent=False): """ Updates the current GoalState instance fetching values from the WireServer/HostGAPlugin as needed """ self.logger.silent = silent try: self._update(force_update=False) except GoalStateInconsistentError as e: message = "Detected an inconsistency in the goal state: {0}".format(ustr(e)) self.logger.warn(message) add_event(op=WALAEventOperation.GoalState, is_success=False, message=message) self._update(force_update=True) message = "The goal state is consistent" self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) def _update(self, force_update): # # Fetch the goal state from both the HGAP and the WireServer # timestamp = datetime.datetime.utcnow() if force_update: message = "Refreshing goal state and vmSettings" self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) incarnation, xml_text, xml_doc = GoalState._fetch_goal_state(self._wire_client) goal_state_updated = force_update or incarnation != self._incarnation if goal_state_updated: message = 'Fetched a new incarnation for the WireServer goal state [incarnation {0}]'.format(incarnation) self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) vm_settings, vm_settings_updated = None, False if self._goal_state_properties & GoalStateProperties.ExtensionsGoalState: try: vm_settings, vm_settings_updated = GoalState._fetch_vm_settings(self._wire_client, force_update=force_update) except VmSettingsSupportStopped as exception: # If the HGAP stopped supporting vmSettings, we need to use the goal state from the WireServer self._restore_wire_server_goal_state(incarnation, xml_text, xml_doc, exception) return if vm_settings_updated: self.logger.info('') message = "Fetched new vmSettings [HostGAPlugin correlation ID: {0} eTag: {1} source: {2}]".format(vm_settings.hostga_plugin_correlation_id, vm_settings.etag, vm_settings.source) self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) # Ignore the vmSettings if their source is Fabric (processing a Fabric goal state may require the tenant certificate and the vmSettings don't include it.) if vm_settings is not None and vm_settings.source == GoalStateSource.Fabric: if vm_settings_updated: message = "The vmSettings originated via Fabric; will ignore them." 
self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) vm_settings, vm_settings_updated = None, False # If neither goal state has changed we are done with the update if not goal_state_updated and not vm_settings_updated: return # Start a new history subdirectory and capture the updated goal state tag = "{0}".format(incarnation) if vm_settings is None else "{0}-{1}".format(incarnation, vm_settings.etag) if self._save_to_history: self._history = GoalStateHistory(timestamp, tag) if goal_state_updated: self._history.save_goal_state(xml_text) if vm_settings_updated: self._history.save_vm_settings(vm_settings.get_redacted_text()) # # Continue fetching the rest of the goal state # extensions_config = None if goal_state_updated: extensions_config = self._fetch_full_wire_server_goal_state(incarnation, xml_doc) # # Lastly, decide whether to use the vmSettings or extensionsConfig for the extensions goal state # if goal_state_updated and vm_settings_updated: most_recent = vm_settings if vm_settings.created_on_timestamp > extensions_config.created_on_timestamp else extensions_config elif goal_state_updated: most_recent = extensions_config else: # vm_settings_updated most_recent = vm_settings if self._extensions_goal_state is None or most_recent.created_on_timestamp >= self._extensions_goal_state.created_on_timestamp: self._extensions_goal_state = most_recent # # For Fast Track goal states, verify that the required certificates are in the goal state. # # Some scenarios can produce inconsistent goal states. For example, during hibernation/resume, the Fabric goal state changes (the # tenant certificate is re-generated when the VM is restarted) *without* the incarnation necessarily changing (e.g. if the incarnation # is 1 before the hibernation; on resume the incarnation is set to 1 even though the goal state has a new certificate). If a Fast # Track goal state comes after that, the extensions will need the new certificate. The Agent needs to refresh the goal state in that # case, to ensure it fetches the new certificate. # if self._extensions_goal_state.source == GoalStateSource.FastTrack and self._goal_state_properties & GoalStateProperties.Certificates: self._check_certificates() self._check_and_download_missing_certs_on_disk() def _check_certificates(self): # Check that certificates needed by extensions are in goal state certs.summary for extension in self.extensions_goal_state.extensions: for settings in extension.settings: if settings.protectedSettings is None: continue certificates = self.certs.summary if not any(settings.certificateThumbprint == c['thumbprint'] for c in certificates): message = "Certificate {0} needed by {1} is missing from the goal state".format(settings.certificateThumbprint, extension.name) raise GoalStateInconsistentError(message) def _download_certificates(self, certs_uri): xml_text = self._wire_client.fetch_config(certs_uri, self._wire_client.get_header_for_cert()) certs = Certificates(xml_text, self.logger) # Log and save the certificates summary (i.e. 
the thumbprint but not the certificate itself) to the goal state history for c in certs.summary: message = "Downloaded certificate {0}".format(c) self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) if len(certs.warnings) > 0: self.logger.warn(certs.warnings) add_event(op=WALAEventOperation.GoalState, message=certs.warnings) if self._save_to_history: self._history.save_certificates(json.dumps(certs.summary)) return certs def _check_and_download_missing_certs_on_disk(self): # Re-download certificates if any have been removed from disk since last download if self._certs_uri is not None: certificates = self.certs.summary certs_missing_from_disk = False for c in certificates: cert_path = os.path.join(conf.get_lib_dir(), c['thumbprint'] + '.crt') if not os.path.isfile(cert_path): certs_missing_from_disk = True message = "Certificate required by goal state is not on disk: {0}".format(cert_path) self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) if certs_missing_from_disk: # Try to re-download certs. Sometimes the download may fail if certs_uri is outdated/contains wrong # container id (for example, when the VM is moved to a new container after resuming from # hibernation). If download fails we should report and continue with goal state processing, as some # extensions in the goal state may succeed. try: self._download_certificates(self._certs_uri) except Exception as e: message = "Unable to download certificates. Goal state processing will continue, some " \ "extensions requiring certificates may fail. Error: {0}".format(ustr(e)) self.logger.warn(message) add_event(op=WALAEventOperation.GoalState, is_success=False, message=message) def _restore_wire_server_goal_state(self, incarnation, xml_text, xml_doc, vm_settings_support_stopped_error): msg = 'The HGAP stopped supporting vmSettings; will fetch the goal state from the WireServer.' 
self.logger.info(msg) add_event(op=WALAEventOperation.VmSettings, message=msg) if self._save_to_history: self._history = GoalStateHistory(datetime.datetime.utcnow(), incarnation) self._history.save_goal_state(xml_text) self._extensions_goal_state = self._fetch_full_wire_server_goal_state(incarnation, xml_doc) if self._extensions_goal_state.created_on_timestamp < vm_settings_support_stopped_error.timestamp: self._extensions_goal_state.is_outdated = True msg = "Fetched a Fabric goal state older than the most recent FastTrack goal state; will skip it.\nFabric: {0}\nFastTrack: {1}".format( self._extensions_goal_state.created_on_timestamp, vm_settings_support_stopped_error.timestamp) self.logger.info(msg) add_event(op=WALAEventOperation.VmSettings, message=msg) def save_to_history(self, data, file_name): if self._save_to_history: self._history.save(data, file_name) @staticmethod def _fetch_goal_state(wire_client): """ Issues an HTTP request for the goal state (WireServer) and returns a tuple containing the response as text and as an XML Document """ uri = GOAL_STATE_URI.format(wire_client.get_endpoint()) # In some environments a few goal state requests return a missing RoleInstance; these retries are used to work around that issue # TODO: Consider retrying on 410 (ResourceGone) as well incarnation = "unknown" for _ in range(0, _GET_GOAL_STATE_MAX_ATTEMPTS): xml_text = wire_client.fetch_config(uri, wire_client.get_header()) xml_doc = parse_doc(xml_text) incarnation = findtext(xml_doc, "Incarnation") role_instance = find(xml_doc, "RoleInstance") if role_instance: break time.sleep(0.5) else: raise ProtocolError("Fetched goal state without a RoleInstance [incarnation {inc}]".format(inc=incarnation)) # Telemetry and the HostGAPlugin depend on the container id/role config; keep them up-to-date each time we fetch the goal state # (note that these elements can change even if the incarnation of the goal state does not change) container = find(xml_doc, "Container") container_id = findtext(container, "ContainerId") role_config = find(role_instance, "Configuration") role_config_name = findtext(role_config, "ConfigName") AgentGlobals.update_container_id(container_id) # Telemetry uses this global to pick up the container id wire_client.update_host_plugin(container_id, role_config_name) return incarnation, xml_text, xml_doc @staticmethod def _fetch_vm_settings(wire_client, force_update=False): """ Issues an HTTP request (HostGAPlugin) for the vm settings and returns the response as an ExtensionsGoalState. 
""" vm_settings, vm_settings_updated = (None, False) if conf.get_enable_fast_track(): try: try: vm_settings, vm_settings_updated = wire_client.get_host_plugin().fetch_vm_settings(force_update=force_update) except ResourceGoneError: # retry after refreshing the HostGAPlugin GoalState.update_host_plugin_headers(wire_client) vm_settings, vm_settings_updated = wire_client.get_host_plugin().fetch_vm_settings(force_update=force_update) except VmSettingsSupportStopped: raise except VmSettingsNotSupported: pass except VmSettingsParseError as exception: # ensure we save the vmSettings if there were parsing errors, but save them only once per ETag if not GoalStateHistory.tag_exists(exception.etag): GoalStateHistory(datetime.datetime.utcnow(), exception.etag).save_vm_settings(exception.vm_settings_text) raise return vm_settings, vm_settings_updated def _fetch_full_wire_server_goal_state(self, incarnation, xml_doc): """ Issues HTTP requests (to the WireServer) for each of the URIs in the goal state (ExtensionsConfig, Certificate, Remote Access users, etc) and populates the corresponding properties. Returns the value of ExtensionsConfig. """ try: self.logger.info('') message = 'Fetching full goal state from the WireServer [incarnation {0}]'.format(incarnation) self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) role_instance_id = None role_config_name = None container_id = None if GoalStateProperties.RoleConfig & self._goal_state_properties: role_instance = find(xml_doc, "RoleInstance") role_instance_id = findtext(role_instance, "InstanceId") role_config = find(role_instance, "Configuration") role_config_name = findtext(role_config, "ConfigName") container = find(xml_doc, "Container") container_id = findtext(container, "ContainerId") extensions_config_uri = findtext(xml_doc, "ExtensionsConfig") if not (GoalStateProperties.ExtensionsGoalState & self._goal_state_properties) or extensions_config_uri is None: extensions_config = ExtensionsGoalStateFactory.create_empty(incarnation) else: xml_text = self._wire_client.fetch_config(extensions_config_uri, self._wire_client.get_header()) extensions_config = ExtensionsGoalStateFactory.create_from_extensions_config(incarnation, xml_text, self._wire_client) if self._save_to_history: self._history.save_extensions_config(extensions_config.get_redacted_text()) hosting_env = None if GoalStateProperties.HostingEnv & self._goal_state_properties: hosting_env_uri = findtext(xml_doc, "HostingEnvironmentConfig") xml_text = self._wire_client.fetch_config(hosting_env_uri, self._wire_client.get_header()) hosting_env = HostingEnv(xml_text) if self._save_to_history: self._history.save_hosting_env(xml_text) shared_config = None if GoalStateProperties.SharedConfig & self._goal_state_properties: shared_conf_uri = findtext(xml_doc, "SharedConfig") xml_text = self._wire_client.fetch_config(shared_conf_uri, self._wire_client.get_header()) shared_config = SharedConfig(xml_text) if self._save_to_history: self._history.save_shared_conf(xml_text) # SharedConfig.xml is used by other components (Azsec and Singularity/HPC Infiniband), so save it to the agent's root directory as well shared_config_file = os.path.join(conf.get_lib_dir(), SHARED_CONF_FILE_NAME) try: fileutil.write_file(shared_config_file, xml_text) except Exception as e: logger.warn("Failed to save {0}: {1}".format(shared_config, e)) certs = EmptyCertificates() certs_uri = findtext(xml_doc, "Certificates") if (GoalStateProperties.Certificates & self._goal_state_properties) and certs_uri is not 
None: certs = self._download_certificates(certs_uri) remote_access = None if GoalStateProperties.RemoteAccessInfo & self._goal_state_properties: remote_access_uri = findtext(container, "RemoteAccessInfo") if remote_access_uri is not None: xml_text = self._wire_client.fetch_config(remote_access_uri, self._wire_client.get_header_for_cert()) remote_access = RemoteAccess(xml_text) if self._save_to_history: self._history.save_remote_access(xml_text) self._incarnation = incarnation self._role_instance_id = role_instance_id self._role_config_name = role_config_name self._container_id = container_id self._hosting_env = hosting_env self._shared_conf = shared_config self._certs = certs self._certs_uri = certs_uri self._remote_access = remote_access return extensions_config except Exception as exception: self.logger.warn("Fetching the goal state failed: {0}", ustr(exception)) raise ProtocolError(msg="Error fetching goal state", inner=exception) finally: message = 'Fetch goal state completed' self.logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message) class HostingEnv(object): def __init__(self, xml_text): self.xml_text = xml_text xml_doc = parse_doc(xml_text) incarnation = find(xml_doc, "Incarnation") self.vm_name = getattrib(incarnation, "instance") role = find(xml_doc, "Role") self.role_name = getattrib(role, "name") deployment = find(xml_doc, "Deployment") self.deployment_name = getattrib(deployment, "name") class SharedConfig(object): def __init__(self, xml_text): self.xml_text = xml_text class Certificates(object): def __init__(self, xml_text, my_logger): self.cert_list = CertList() self.summary = [] # debugging info self.warnings = [] # Save the certificates local_file = os.path.join(conf.get_lib_dir(), CERTS_FILE_NAME) fileutil.write_file(local_file, xml_text) # Separate the certificates into individual files. xml_doc = parse_doc(xml_text) data = findtext(xml_doc, "Data") if data is None: return # if the certificates format is not Pkcs7BlobWithPfxContents do not parse it certificate_format = findtext(xml_doc, "Format") if certificate_format and certificate_format != "Pkcs7BlobWithPfxContents": message = "The Format is not Pkcs7BlobWithPfxContents. Format is {0}".format(certificate_format) my_logger.warn(message) add_event(op=WALAEventOperation.GoalState, message=message) return cryptutil = CryptUtil(conf.get_openssl_cmd()) p7m_file = os.path.join(conf.get_lib_dir(), P7M_FILE_NAME) p7m = ("MIME-Version:1.0\n" # pylint: disable=W1308 "Content-Disposition: attachment; filename=\"{0}\"\n" "Content-Type: application/x-pkcs7-mime; name=\"{1}\"\n" "Content-Transfer-Encoding: base64\n" "\n" "{2}").format(p7m_file, p7m_file, data) fileutil.write_file(p7m_file, p7m) trans_prv_file = os.path.join(conf.get_lib_dir(), TRANSPORT_PRV_FILE_NAME) trans_cert_file = os.path.join(conf.get_lib_dir(), TRANSPORT_CERT_FILE_NAME) pem_file = os.path.join(conf.get_lib_dir(), PEM_FILE_NAME) # decrypt certificates cryptutil.decrypt_p7m(p7m_file, trans_prv_file, trans_cert_file, pem_file) # The parsing process uses the public key to match prv and crt. 
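# Each PEM section below is written to a temporary file and its public key is derived from it (get_pubkey_from_prv() for private keys, get_pubkey_from_crt() for certificates); a private key and a certificate that yield the same public key belong together, and both files are then renamed to <thumbprint>.prv and <thumbprint>.crt.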
buf = [] prvs = {} thumbprints = {} index = 0 v1_cert_list = [] with open(pem_file) as pem: for line in pem.readlines(): buf.append(line) if re.match(r'[-]+END.*KEY[-]+', line): tmp_file = Certificates._write_to_tmp_file(index, 'prv', buf) pub = cryptutil.get_pubkey_from_prv(tmp_file) prvs[pub] = tmp_file buf = [] index += 1 elif re.match(r'[-]+END.*CERTIFICATE[-]+', line): tmp_file = Certificates._write_to_tmp_file(index, 'crt', buf) pub = cryptutil.get_pubkey_from_crt(tmp_file) thumbprint = cryptutil.get_thumbprint_from_crt(tmp_file) thumbprints[pub] = thumbprint # Rename crt with thumbprint as the file name crt = "{0}.crt".format(thumbprint) v1_cert_list.append({ "name": None, "thumbprint": thumbprint }) os.rename(tmp_file, os.path.join(conf.get_lib_dir(), crt)) buf = [] index += 1 # Rename prv key with thumbprint as the file name for pubkey in prvs: thumbprint = thumbprints[pubkey] if thumbprint: tmp_file = prvs[pubkey] prv = "{0}.prv".format(thumbprint) os.rename(tmp_file, os.path.join(conf.get_lib_dir(), prv)) else: # Since private key has *no* matching certificate, # it will not be named correctly self.warnings.append("Found NO matching cert/thumbprint for private key!") for pubkey, thumbprint in thumbprints.items(): has_private_key = pubkey in prvs self.summary.append({"thumbprint": thumbprint, "hasPrivateKey": has_private_key}) for v1_cert in v1_cert_list: cert = Cert() set_properties("certs", cert, v1_cert) self.cert_list.certificates.append(cert) @staticmethod def _write_to_tmp_file(index, suffix, buf): file_name = os.path.join(conf.get_lib_dir(), "{0}.{1}".format(index, suffix)) fileutil.write_file(file_name, "".join(buf)) return file_name class EmptyCertificates: def __init__(self): self.cert_list = CertList() self.summary = [] # debugging info self.warnings = [] class RemoteAccess(object): """ Object containing information about user accounts """ # # # # # # # # # # # # # def __init__(self, xml_text): self.xml_text = xml_text self.version = None self.incarnation = None self.user_list = RemoteAccessUsersList() if self.xml_text is None or len(self.xml_text) == 0: return xml_doc = parse_doc(self.xml_text) self.version = findtext(xml_doc, "Version") self.incarnation = findtext(xml_doc, "Incarnation") user_collection = find(xml_doc, "Users") users = findall(user_collection, "User") for user in users: remote_access_user = RemoteAccess._parse_user(user) self.user_list.users.append(remote_access_user) @staticmethod def _parse_user(user): name = findtext(user, "Name") encrypted_password = findtext(user, "Password") expiration = findtext(user, "Expiration") remote_access_user = RemoteAccessUser(name, encrypted_password, expiration) return remote_access_user class ExtensionManifest(object): def __init__(self, xml_text): if xml_text is None: raise ValueError("ExtensionManifest is None") logger.verbose("Load ExtensionManifest.xml") self.pkg_list = ExtHandlerPackageList() self._parse(xml_text) def _parse(self, xml_text): xml_doc = parse_doc(xml_text) self._handle_packages(findall(find(xml_doc, "Plugins"), "Plugin"), False) self._handle_packages(findall(find(xml_doc, "InternalPlugins"), "Plugin"), True) def _handle_packages(self, packages, isinternal): for package in packages: version = findtext(package, "Version") disallow_major_upgrade = findtext(package, "DisallowMajorVersionUpgrade") if disallow_major_upgrade is None: disallow_major_upgrade = '' disallow_major_upgrade = disallow_major_upgrade.lower() == "true" uris = find(package, "Uris") uri_list = findall(uris, "Uri") uri_list = 
[gettext(x) for x in uri_list] pkg = ExtHandlerPackage() pkg.version = version pkg.disallow_major_upgrade = disallow_major_upgrade for uri in uri_list: pkg.uris.append(uri) pkg.isinternal = isinternal self.pkg_list.versions.append(pkg) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/healthservice.py000066400000000000000000000152331462617747000273120ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import json from azurelinuxagent.common import logger from azurelinuxagent.common.exception import HttpError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils import restutil from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION class Observation(object): def __init__(self, name, is_healthy, description='', value=''): if name is None: raise ValueError("Observation name must be provided") if is_healthy is None: raise ValueError("Observation health must be provided") if value is None: value = '' if description is None: description = '' self.name = name self.is_healthy = is_healthy self.description = description self.value = value @property def as_obj(self): return { "ObservationName": self.name[:64], "IsHealthy": self.is_healthy, "Description": self.description[:128], "Value": self.value[:128] } class HealthService(object): ENDPOINT = 'http://{0}:80/HealthService' API = 'reporttargethealth' VERSION = "1.0" OBSERVER_NAME = 'WALinuxAgent' HOST_PLUGIN_HEARTBEAT_OBSERVATION_NAME = 'GuestAgentPluginHeartbeat' HOST_PLUGIN_STATUS_OBSERVATION_NAME = 'GuestAgentPluginStatus' HOST_PLUGIN_VERSIONS_OBSERVATION_NAME = 'GuestAgentPluginVersions' HOST_PLUGIN_ARTIFACT_OBSERVATION_NAME = 'GuestAgentPluginArtifact' IMDS_OBSERVATION_NAME = 'InstanceMetadataHeartbeat' MAX_OBSERVATIONS = 10 def __init__(self, endpoint): self.endpoint = HealthService.ENDPOINT.format(endpoint) self.api = HealthService.API self.version = HealthService.VERSION self.source = HealthService.OBSERVER_NAME self.observations = list() @property def as_json(self): data = { "Api": self.api, "Version": self.version, "Source": self.source, "Observations": [o.as_obj for o in self.observations] } return json.dumps(data) def report_host_plugin_heartbeat(self, is_healthy): """ Reports a signal for /health :param is_healthy: whether the call succeeded """ self._observe(name=HealthService.HOST_PLUGIN_HEARTBEAT_OBSERVATION_NAME, is_healthy=is_healthy) self._report() def report_host_plugin_versions(self, is_healthy, response): """ Reports a signal for /versions :param is_healthy: whether the api call succeeded :param response: debugging information for failures """ self._observe(name=HealthService.HOST_PLUGIN_VERSIONS_OBSERVATION_NAME, is_healthy=is_healthy, value=response) self._report() def report_host_plugin_extension_artifact(self, is_healthy, source, response): """ Reports a signal for /extensionArtifact :param is_healthy: whether the api call succeeded :param source: 
specifies the api caller for debugging failures :param response: debugging information for failures """ self._observe(name=HealthService.HOST_PLUGIN_ARTIFACT_OBSERVATION_NAME, is_healthy=is_healthy, description=source, value=response) self._report() def report_host_plugin_status(self, is_healthy, response): """ Reports a signal for /status :param is_healthy: whether the api call succeeded :param response: debugging information for failures """ self._observe(name=HealthService.HOST_PLUGIN_STATUS_OBSERVATION_NAME, is_healthy=is_healthy, value=response) self._report() def report_imds_status(self, is_healthy, response): """ Reports a signal for /metadata/instance :param is_healthy: whether the api call succeeded and returned valid data :param response: debugging information for failures """ self._observe(name=HealthService.IMDS_OBSERVATION_NAME, is_healthy=is_healthy, value=response) self._report() def _observe(self, name, is_healthy, value='', description=''): # ensure we keep the list size within bounds if len(self.observations) >= HealthService.MAX_OBSERVATIONS: del self.observations[:HealthService.MAX_OBSERVATIONS-1] self.observations.append(Observation(name=name, is_healthy=is_healthy, value=value, description=description)) def _report(self): logger.verbose('HealthService: report observations') try: restutil.http_post(self.endpoint, self.as_json, headers={'Content-Type': 'application/json'}) logger.verbose('HealthService: Reported observations to {0}: {1}', self.endpoint, self.as_json) except HttpError as e: logger.warn("HealthService: could not report observations: {0}", ustr(e)) finally: # report any failures via telemetry self._report_failures() # these signals are not timestamped, so there is no value in persisting data del self.observations[:] def _report_failures(self): try: logger.verbose("HealthService: report failures as telemetry") from azurelinuxagent.common.event import add_event, WALAEventOperation for o in self.observations: if not o.is_healthy: add_event(AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.HealthObservation, is_success=False, message=json.dumps(o.as_obj)) except Exception as e: logger.verbose("HealthService: could not report failures: {0}".format(ustr(e))) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/hostplugin.py000066400000000000000000000754111462617747000266640ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import base64 import datetime import json import os.path import uuid from azurelinuxagent.common import logger, conf from azurelinuxagent.common.errorstate import ErrorState, ERROR_STATE_HOST_PLUGIN_FAILURE from azurelinuxagent.common.event import WALAEventOperation, add_event from azurelinuxagent.common.exception import HttpError, ProtocolError, ResourceGoneError from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.future import ustr, httpclient from azurelinuxagent.common.protocol.healthservice import HealthService from azurelinuxagent.common.protocol.extensions_goal_state import VmSettingsParseError, GoalStateSource from azurelinuxagent.common.protocol.extensions_goal_state_factory import ExtensionsGoalStateFactory from azurelinuxagent.common.utils import restutil, textutil, timeutil from azurelinuxagent.common.utils.textutil import remove_bom from azurelinuxagent.common.version import AGENT_NAME, AGENT_VERSION, PY_VERSION_MAJOR HOST_PLUGIN_PORT = 32526 URI_FORMAT_GET_API_VERSIONS = "http://{0}:{1}/versions" URI_FORMAT_VM_SETTINGS = "http://{0}:{1}/vmSettings" URI_FORMAT_GET_EXTENSION_ARTIFACT = "http://{0}:{1}/extensionArtifact" URI_FORMAT_PUT_VM_STATUS = "http://{0}:{1}/status" URI_FORMAT_PUT_LOG = "http://{0}:{1}/vmAgentLog" URI_FORMAT_HEALTH = "http://{0}:{1}/health" API_VERSION = "2015-09-01" _HEADER_CLIENT_NAME = "x-ms-client-name" _HEADER_CLIENT_VERSION = "x-ms-client-version" _HEADER_CORRELATION_ID = "x-ms-client-correlationid" _HEADER_CONTAINER_ID = "x-ms-containerid" _HEADER_DEPLOYMENT_ID = "x-ms-vmagentlog-deploymentid" _HEADER_VERSION = "x-ms-version" _HEADER_HOST_CONFIG_NAME = "x-ms-host-config-name" _HEADER_ARTIFACT_LOCATION = "x-ms-artifact-location" _HEADER_ARTIFACT_MANIFEST_LOCATION = "x-ms-artifact-manifest-location" _HEADER_VERIFY_FROM_ARTIFACTS_BLOB = "x-ms-verify-from-artifacts-blob" MAXIMUM_PAGEBLOB_PAGE_SIZE = 4 * 1024 * 1024 # Max page size: 4MB class HostPluginProtocol(object): is_default_channel = False FETCH_REPORTING_PERIOD = datetime.timedelta(minutes=1) STATUS_REPORTING_PERIOD = datetime.timedelta(minutes=1) def __init__(self, endpoint): """ NOTE: Before using the HostGAPlugin be sure to invoke GoalState.update_host_plugin_headers() to initialize the container id and role config name """ if endpoint is None: raise ProtocolError("HostGAPlugin: Endpoint not provided") self.is_initialized = False self.is_available = False self.api_versions = None self.endpoint = endpoint self.container_id = None self.deployment_id = None self.role_config_name = None self.manifest_uri = None self.health_service = HealthService(endpoint) self.fetch_error_state = ErrorState(min_timedelta=ERROR_STATE_HOST_PLUGIN_FAILURE) self.status_error_state = ErrorState(min_timedelta=ERROR_STATE_HOST_PLUGIN_FAILURE) self.fetch_last_timestamp = None self.status_last_timestamp = None self._version = FlexibleVersion("0.0.0.0") # Version 0 means "unknown" self._supports_vm_settings = None # Tri-state variable: None == Not Initialized, True == Supports, False == Does Not Support self._supports_vm_settings_next_check = datetime.datetime.now() self._vm_settings_error_reporter = _VmSettingsErrorReporter() self._cached_vm_settings = None # Cached value of the most recent vmSettings # restore the state of Fast Track if not os.path.exists(self._get_fast_track_state_file()): self._supports_vm_settings = False self._supports_vm_settings_next_check = datetime.datetime.now() self._fast_track_timestamp = 
timeutil.create_timestamp(datetime.datetime.min) else: self._supports_vm_settings = True self._supports_vm_settings_next_check = datetime.datetime.now() self._fast_track_timestamp = HostPluginProtocol.get_fast_track_timestamp() @staticmethod def _extract_deployment_id(role_config_name): # Role config name consists of: .(...) return role_config_name.split(".")[0] if role_config_name is not None else None def check_vm_settings_support(self): """ Returns True if the HostGAPlugin supports the vmSettings API. """ # _host_plugin_supports_vm_settings is set by fetch_vm_settings() if self._supports_vm_settings is None: _, _ = self.fetch_vm_settings() return self._supports_vm_settings def update_container_id(self, new_container_id): self.container_id = new_container_id def update_role_config_name(self, new_role_config_name): self.role_config_name = new_role_config_name self.deployment_id = self._extract_deployment_id(new_role_config_name) def update_manifest_uri(self, new_manifest_uri): self.manifest_uri = new_manifest_uri def ensure_initialized(self): if not self.is_initialized: self.api_versions = self.get_api_versions() self.is_available = API_VERSION in self.api_versions self.is_initialized = self.is_available add_event(op=WALAEventOperation.InitializeHostPlugin, is_success=self.is_available) return self.is_available def get_health(self): """ Call the /health endpoint :return: True if 200 received, False otherwise """ url = URI_FORMAT_HEALTH.format(self.endpoint, HOST_PLUGIN_PORT) logger.verbose("HostGAPlugin: Getting health from [{0}]", url) response = restutil.http_get(url, max_retry=1) return restutil.request_succeeded(response) def get_api_versions(self): url = URI_FORMAT_GET_API_VERSIONS.format(self.endpoint, HOST_PLUGIN_PORT) logger.verbose("HostGAPlugin: Getting API versions at [{0}]" .format(url)) return_val = [] error_response = '' is_healthy = False try: headers = {_HEADER_CONTAINER_ID: self.container_id} response = restutil.http_get(url, headers) if restutil.request_failed(response): error_response = restutil.read_response_error(response) logger.error("HostGAPlugin: Failed Get API versions: {0}".format(error_response)) is_healthy = not restutil.request_failed_at_hostplugin(response) else: return_val = ustr(remove_bom(response.read()), encoding='utf-8') is_healthy = True except HttpError as e: logger.error("HostGAPlugin: Exception Get API versions: {0}".format(e)) self.health_service.report_host_plugin_versions(is_healthy=is_healthy, response=error_response) return return_val def get_vm_settings_request(self, correlation_id): url = URI_FORMAT_VM_SETTINGS.format(self.endpoint, HOST_PLUGIN_PORT) headers = { _HEADER_VERSION: API_VERSION, _HEADER_CONTAINER_ID: self.container_id, _HEADER_HOST_CONFIG_NAME: self.role_config_name, _HEADER_CORRELATION_ID: correlation_id } return url, headers def get_artifact_request(self, artifact_url, use_verify_header, artifact_manifest_url=None): if not self.ensure_initialized(): raise ProtocolError("HostGAPlugin: Host plugin channel is not available") if textutil.is_str_none_or_whitespace(artifact_url): raise ProtocolError("HostGAPlugin: No extension artifact url was provided") url = URI_FORMAT_GET_EXTENSION_ARTIFACT.format(self.endpoint, HOST_PLUGIN_PORT) headers = { _HEADER_VERSION: API_VERSION, _HEADER_CONTAINER_ID: self.container_id, _HEADER_HOST_CONFIG_NAME: self.role_config_name, _HEADER_ARTIFACT_LOCATION: artifact_url} if use_verify_header: headers[_HEADER_VERIFY_FROM_ARTIFACTS_BLOB] = "true" if artifact_manifest_url is not None: 
headers[_HEADER_ARTIFACT_MANIFEST_LOCATION] = artifact_manifest_url return url, headers def report_fetch_health(self, uri, is_healthy=True, source='', response=''): if uri != URI_FORMAT_GET_EXTENSION_ARTIFACT.format(self.endpoint, HOST_PLUGIN_PORT): return if self.should_report(is_healthy, self.fetch_error_state, self.fetch_last_timestamp, HostPluginProtocol.FETCH_REPORTING_PERIOD): self.fetch_last_timestamp = datetime.datetime.utcnow() health_signal = self.fetch_error_state.is_triggered() is False self.health_service.report_host_plugin_extension_artifact(is_healthy=health_signal, source=source, response=response) def report_status_health(self, is_healthy, response=''): if self.should_report(is_healthy, self.status_error_state, self.status_last_timestamp, HostPluginProtocol.STATUS_REPORTING_PERIOD): self.status_last_timestamp = datetime.datetime.utcnow() health_signal = self.status_error_state.is_triggered() is False self.health_service.report_host_plugin_status(is_healthy=health_signal, response=response) @staticmethod def should_report(is_healthy, error_state, last_timestamp, period): """ Determine whether a health signal should be reported :param is_healthy: whether the current measurement is healthy :param error_state: the error state which is tracking time since failure :param last_timestamp: the last measurement time stamp :param period: the reporting period :return: True if the signal should be reported, False otherwise """ if is_healthy: # we only reset the error state upon success, since we want to keep # reporting the failure; this is different to other uses of error states # which do not have a separate periodicity error_state.reset() else: error_state.incr() if last_timestamp is None: last_timestamp = datetime.datetime.utcnow() - period return datetime.datetime.utcnow() >= (last_timestamp + period) def put_vm_log(self, content): """ Try to upload VM logs, a compressed zip file, via the host plugin /vmAgentLog channel. :param content: the binary content of the zip file to upload """ if not self.ensure_initialized(): raise ProtocolError("HostGAPlugin: HostGAPlugin is not available") if content is None: raise ProtocolError("HostGAPlugin: Invalid argument passed to upload VM logs. 
Content was not provided.") url = URI_FORMAT_PUT_LOG.format(self.endpoint, HOST_PLUGIN_PORT) response = restutil.http_put(url, data=content, headers=self._build_log_headers(), redact_data=True, timeout=30) if restutil.request_failed(response): error_response = restutil.read_response_error(response) raise HttpError("HostGAPlugin: Upload VM logs failed: {0}".format(error_response)) return response def put_vm_status(self, status_blob, sas_url, config_blob_type=None): """ Try to upload the VM status via the host plugin /status channel :param sas_url: the blob SAS url to pass to the host plugin :param config_blob_type: the blob type from the extension config :type status_blob: StatusBlob """ if not self.ensure_initialized(): raise ProtocolError("HostGAPlugin: HostGAPlugin is not available") if status_blob is None or status_blob.vm_status is None: raise ProtocolError("HostGAPlugin: Status blob was not provided") logger.verbose("HostGAPlugin: Posting VM status") blob_type = status_blob.type if status_blob.type else config_blob_type if blob_type == "BlockBlob": self._put_block_blob_status(sas_url, status_blob) else: self._put_page_blob_status(sas_url, status_blob) def _put_block_blob_status(self, sas_url, status_blob): url = URI_FORMAT_PUT_VM_STATUS.format(self.endpoint, HOST_PLUGIN_PORT) response = restutil.http_put(url, data=self._build_status_data( sas_url, status_blob.get_block_blob_headers(len(status_blob.data)), bytearray(status_blob.data, encoding='utf-8')), headers=self._build_status_headers()) if restutil.request_failed(response): error_response = restutil.read_response_error(response) is_healthy = not restutil.request_failed_at_hostplugin(response) self.report_status_health(is_healthy=is_healthy, response=error_response) raise HttpError("HostGAPlugin: Put BlockBlob failed: {0}" .format(error_response)) else: self.report_status_health(is_healthy=True) logger.verbose("HostGAPlugin: Put BlockBlob status succeeded") def _put_page_blob_status(self, sas_url, status_blob): url = URI_FORMAT_PUT_VM_STATUS.format(self.endpoint, HOST_PLUGIN_PORT) # Convert the status into a blank-padded string whose length is modulo 512 status = bytearray(status_blob.data, encoding='utf-8') status_size = int((len(status) + 511) / 512) * 512 status = bytearray(status_blob.data.ljust(status_size), encoding='utf-8') # First, initialize an empty blob response = restutil.http_put(url, data=self._build_status_data( sas_url, status_blob.get_page_blob_create_headers(status_size)), headers=self._build_status_headers()) if restutil.request_failed(response): error_response = restutil.read_response_error(response) is_healthy = not restutil.request_failed_at_hostplugin(response) self.report_status_health(is_healthy=is_healthy, response=error_response) raise HttpError("HostGAPlugin: Failed PageBlob clean-up: {0}" .format(error_response)) else: self.report_status_health(is_healthy=True) logger.verbose("HostGAPlugin: PageBlob clean-up succeeded") # Then, upload the blob in pages if sas_url.count("?") <= 0: sas_url = "{0}?comp=page".format(sas_url) else: sas_url = "{0}&comp=page".format(sas_url) start = 0 end = 0 while start < len(status): # Create the next page end = start + min(len(status) - start, MAXIMUM_PAGEBLOB_PAGE_SIZE) page_size = int((end - start + 511) / 512) * 512 buf = bytearray(page_size) buf[0: end - start] = status[start: end] # Send the page response = restutil.http_put(url, data=self._build_status_data( sas_url, status_blob.get_page_blob_page_headers(start, end), buf), headers=self._build_status_headers()) if 
restutil.request_failed(response): error_response = restutil.read_response_error(response) is_healthy = not restutil.request_failed_at_hostplugin(response) self.report_status_health(is_healthy=is_healthy, response=error_response) raise HttpError( "HostGAPlugin Error: Put PageBlob bytes " "[{0},{1}]: {2}".format(start, end, error_response)) # Advance to the next page (if any) start = end def _build_status_data(self, sas_url, blob_headers, content=None): headers = [] for name in iter(blob_headers.keys()): headers.append({ 'headerName': name, 'headerValue': blob_headers[name] }) data = { 'requestUri': sas_url, 'headers': headers } if not content is None: data['content'] = self._base64_encode(content) return json.dumps(data, sort_keys=True) def _build_status_headers(self): return { _HEADER_VERSION: API_VERSION, "Content-type": "application/json", _HEADER_CONTAINER_ID: self.container_id, _HEADER_HOST_CONFIG_NAME: self.role_config_name } def _build_log_headers(self): return { _HEADER_VERSION: API_VERSION, _HEADER_CONTAINER_ID: self.container_id, _HEADER_DEPLOYMENT_ID: self.deployment_id, _HEADER_CLIENT_NAME: AGENT_NAME, _HEADER_CLIENT_VERSION: AGENT_VERSION, _HEADER_CORRELATION_ID: str(uuid.uuid4()) } def _base64_encode(self, data): s = base64.b64encode(bytes(data)) if PY_VERSION_MAJOR > 2: return s.decode('utf-8') return s @staticmethod def _get_fast_track_state_file(): # This file keeps the timestamp of the most recent goal state if it was retrieved via Fast Track return os.path.join(conf.get_lib_dir(), "fast_track.json") @staticmethod def _save_fast_track_state(timestamp): try: with open(HostPluginProtocol._get_fast_track_state_file(), "w") as file_: json.dump({"timestamp": timestamp}, file_) except Exception as e: logger.warn("Error updating the Fast Track state ({0}): {1}", HostPluginProtocol._get_fast_track_state_file(), ustr(e)) @staticmethod def clear_fast_track_state(): try: if os.path.exists(HostPluginProtocol._get_fast_track_state_file()): os.remove(HostPluginProtocol._get_fast_track_state_file()) except Exception as e: logger.warn("Error clearing the current state for Fast Track ({0}): {1}", HostPluginProtocol._get_fast_track_state_file(), ustr(e)) @staticmethod def get_fast_track_timestamp(): """ Returns the timestamp of the most recent FastTrack goal state retrieved by fetch_vm_settings(), or None if the most recent goal state was Fabric or fetch_vm_settings() has not been invoked. """ if not os.path.exists(HostPluginProtocol._get_fast_track_state_file()): return timeutil.create_timestamp(datetime.datetime.min) try: with open(HostPluginProtocol._get_fast_track_state_file(), "r") as file_: return json.load(file_)["timestamp"] except Exception as e: logger.warn("Can't retrieve the timestamp for the most recent Fast Track goal state ({0}), will assume the current time. Error: {1}", HostPluginProtocol._get_fast_track_state_file(), ustr(e)) return timeutil.create_timestamp(datetime.datetime.utcnow()) def fetch_vm_settings(self, force_update=False): """ Queries the vmSettings from the HostGAPlugin and returns an (ExtensionsGoalState, bool) tuple with the vmSettings and a boolean indicating if they are an updated (True) or a cached value (False). Raises * VmSettingsNotSupported if the HostGAPlugin does not support the vmSettings API * VmSettingsSupportStopped if the HostGAPlugin stopped supporting the vmSettings API * VmSettingsParseError if the HostGAPlugin returned invalid vmSettings (e.g. 
syntax error) * ResourceGoneError if the container ID and roleconfig name need to be refreshed * ProtocolError if the request fails for any other reason (e.g. not supported, time out, server error) """ def raise_not_supported(): try: if self._supports_vm_settings: # The most recent goal state was delivered using FastTrack, and suddenly the HostGAPlugin does not support the vmSettings API anymore. # This can happen if, for example, the VM is migrated across host nodes that are running different versions of the HostGAPlugin. logger.warn("The HostGAPlugin stopped supporting the vmSettings API. If there is a pending FastTrack goal state, it will not be executed.") add_event(op=WALAEventOperation.VmSettings, message="[VmSettingsSupportStopped] HostGAPlugin: {0}".format(self._version), is_success=False, log_event=False) raise VmSettingsSupportStopped(self._fast_track_timestamp) else: logger.info("HostGAPlugin {0} does not support the vmSettings API. Will not use FastTrack.", self._version) add_event(op=WALAEventOperation.VmSettings, message="[VmSettingsNotSupported] HostGAPlugin: {0}".format(self._version), is_success=True) raise VmSettingsNotSupported() finally: self._supports_vm_settings = False self._supports_vm_settings_next_check = datetime.datetime.now() + datetime.timedelta(hours=6) # check again in 6 hours def format_message(msg): return "GET vmSettings [correlation ID: {0} eTag: {1}]: {2}".format(correlation_id, etag, msg) try: # Raise if VmSettings are not supported, but check again periodically since the HostGAPlugin could have been updated since the last check # Note that self._host_plugin_supports_vm_settings can be None, so we need to compare against False if not self._supports_vm_settings and self._supports_vm_settings_next_check > datetime.datetime.now(): # Raise VmSettingsNotSupported directly instead of using raise_not_supported() to avoid resetting the timestamp for the next check raise VmSettingsNotSupported() etag = None if force_update or self._cached_vm_settings is None else self._cached_vm_settings.etag correlation_id = str(uuid.uuid4()) self._vm_settings_error_reporter.report_request() url, headers = self.get_vm_settings_request(correlation_id) if etag is not None: headers['if-none-match'] = etag response = restutil.http_get(url, headers=headers, use_proxy=False, max_retry=1, return_raw_response=True) if response.status == httpclient.GONE: raise ResourceGoneError() if response.status == httpclient.NOT_FOUND: # the HostGAPlugin does not support FastTrack raise_not_supported() if response.status == httpclient.NOT_MODIFIED: # The goal state hasn't changed, return the current instance return self._cached_vm_settings, False if response.status != httpclient.OK: error_description = restutil.read_response_error(response) # For historical reasons the HostGAPlugin returns 502 (BAD_GATEWAY) for internal errors instead of using # 500 (INTERNAL_SERVER_ERROR). We add a short prefix to the error message in the hope that it will help # clear any confusion produced by the poor choice of status code. 
if response.status == httpclient.BAD_GATEWAY: error_description = "[Internal error in HostGAPlugin] {0}".format(error_description) error_description = format_message(error_description) if 400 <= response.status <= 499: self._vm_settings_error_reporter.report_error(error_description, _VmSettingsError.ClientError) elif 500 <= response.status <= 599: self._vm_settings_error_reporter.report_error(error_description, _VmSettingsError.ServerError) else: self._vm_settings_error_reporter.report_error(error_description, _VmSettingsError.HttpError) raise ProtocolError(error_description) for h in response.getheaders(): if h[0].lower() == 'etag': response_etag = h[1] break else: # since the vmSettings were updated, the response must include an etag message = format_message("The vmSettings response does not include an Etag header") raise ProtocolError(message) response_content = ustr(response.read(), encoding='utf-8') vm_settings = ExtensionsGoalStateFactory.create_from_vm_settings(response_etag, response_content, correlation_id) # log the HostGAPlugin version if vm_settings.host_ga_plugin_version != self._version: self._version = vm_settings.host_ga_plugin_version message = "HostGAPlugin version: {0}".format(vm_settings.host_ga_plugin_version) logger.info(message) add_event(op=WALAEventOperation.HostPlugin, message=message, is_success=True) # Don't support HostGAPlugin versions older than 133 if vm_settings.host_ga_plugin_version < FlexibleVersion("1.0.8.133"): raise_not_supported() self._supports_vm_settings = True self._cached_vm_settings = vm_settings if vm_settings.source == GoalStateSource.FastTrack: self._fast_track_timestamp = vm_settings.created_on_timestamp self._save_fast_track_state(vm_settings.created_on_timestamp) else: self.clear_fast_track_state() return vm_settings, True except (ProtocolError, ResourceGoneError, VmSettingsNotSupported, VmSettingsParseError): raise except Exception as exception: if isinstance(exception, IOError) and "timed out" in ustr(exception): message = format_message("Timeout") self._vm_settings_error_reporter.report_error(message, _VmSettingsError.Timeout) else: message = format_message("Request failed: {0}".format(textutil.format_exception(exception))) self._vm_settings_error_reporter.report_error(message, _VmSettingsError.RequestFailed) raise ProtocolError(message) finally: self._vm_settings_error_reporter.report_summary() class VmSettingsNotSupported(TypeError): """ Indicates that the HostGAPlugin does not support the vmSettings API """ class VmSettingsSupportStopped(VmSettingsNotSupported): """ Indicates that the HostGAPlugin supported the vmSettings API in previous calls, but now it does not support it for current call. This can happen, for example, if the VM is migrated across nodes with different HostGAPlugin versions. 
""" def __init__(self, timestamp): super(VmSettingsSupportStopped, self).__init__() self.timestamp = timestamp class _VmSettingsError(object): ClientError = 'ClientError' HttpError = 'HttpError' RequestFailed = 'RequestFailed' ServerError = 'ServerError' Timeout = 'Timeout' class _VmSettingsErrorReporter(object): _MaxErrors = 3 # Max number of errors reported to telemetry (by period) _Period = datetime.timedelta(hours=1) # How often to report the summary def __init__(self): self._reset() def _reset(self): self._request_count = 0 # Total number of vmSettings HTTP requests self._error_count = 0 # Total number of errors issuing vmSettings requests (includes all kinds of errors) self._client_error_count = 0 # Count of client side errors (HTTP status in the 400s) self._http_error_count = 0 # Count of HTTP errors other than 400s and 500s self._request_failure_count = 0 # Total count of requests that could not be issued (does not include timeouts or requests that were actually issued and failed, for example, with 500 or 400 statuses) self._server_error_count = 0 # Count of server side errors (HTTP status in the 500s) self._timeout_count = 0 # Count of timeouts on vmSettings requests self._next_period = datetime.datetime.now() + _VmSettingsErrorReporter._Period def report_request(self): self._request_count += 1 def report_error(self, error, category): self._error_count += 1 if self._error_count <= _VmSettingsErrorReporter._MaxErrors: add_event(op=WALAEventOperation.VmSettings, message="[{0}] {1}".format(category, error), is_success=True, log_event=False) if category == _VmSettingsError.ClientError: self._client_error_count += 1 elif category == _VmSettingsError.HttpError: self._http_error_count += 1 elif category == _VmSettingsError.RequestFailed: self._request_failure_count += 1 elif category == _VmSettingsError.ServerError: self._server_error_count += 1 elif category == _VmSettingsError.Timeout: self._timeout_count += 1 def report_summary(self): if datetime.datetime.now() >= self._next_period: summary = { "requests": self._request_count, "errors": self._error_count, "serverErrors": self._server_error_count, "clientErrors": self._client_error_count, "timeouts": self._timeout_count, "failedRequests": self._request_failure_count } message = json.dumps(summary) add_event(op=WALAEventOperation.VmSettingsSummary, message=message, is_success=True, log_event=False) if self._error_count > 0: logger.info("[VmSettingsSummary] {0}", message) self._reset() Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/imds.py000066400000000000000000000336701462617747000254250ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. 
# Licensed under the Apache License, Version 2.0 (the "License"); import json import re from collections import namedtuple import azurelinuxagent.common.utils.restutil as restutil from azurelinuxagent.common.exception import HttpError, ResourceGoneError from azurelinuxagent.common.future import ustr import azurelinuxagent.common.logger as logger from azurelinuxagent.common.datacontract import DataContract, set_properties from azurelinuxagent.common.utils.flexible_version import FlexibleVersion IMDS_ENDPOINT = '169.254.169.254' APIVERSION = '2018-02-01' BASE_METADATA_URI = "http://{0}/metadata/{1}?api-version={2}" IMDS_IMAGE_ORIGIN_UNKNOWN = 0 IMDS_IMAGE_ORIGIN_CUSTOM = 1 IMDS_IMAGE_ORIGIN_ENDORSED = 2 IMDS_IMAGE_ORIGIN_PLATFORM = 3 MetadataResult = namedtuple('MetadataResult', ['success', 'service_error', 'response']) IMDS_RESPONSE_SUCCESS = 0 IMDS_RESPONSE_ERROR = 1 IMDS_CONNECTION_ERROR = 2 IMDS_INTERNAL_SERVER_ERROR = 3 def get_imds_client(wireserver_endpoint): return ImdsClient(wireserver_endpoint) # A *slightly* future proof list of endorsed distros. # -> e.g. I have predicted the future and said that 20.04-LTS will exist # and is endored. # # See https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros for # more details. # # This is not an exhaustive list. This is a best attempt to mark images as # endorsed or not. Image publishers do not encode all of the requisite information # in their publisher, offer, sku, and version to definitively mark something as # endorsed or not. This is not perfect, but it is approximately 98% perfect. ENDORSED_IMAGE_INFO_MATCHER_JSON = """{ "CANONICAL": { "UBUNTUSERVER": { "List": [ "14.04.0-LTS", "14.04.1-LTS", "14.04.2-LTS", "14.04.3-LTS", "14.04.4-LTS", "14.04.5-LTS", "14.04.6-LTS", "14.04.7-LTS", "14.04.8-LTS", "16.04-LTS", "16.04.0-LTS", "18.04-LTS", "20.04-LTS", "22.04-LTS" ] } }, "COREOS": { "COREOS": { "STABLE": { "Minimum": "494.4.0" } } }, "CREDATIV": { "DEBIAN": { "Minimum": "7" } }, "OPENLOGIC": { "CENTOS": { "Minimum": "6.3", "List": [ "7-LVM", "7-RAW" ] }, "CENTOS-HPC": { "Minimum": "6.3" } }, "REDHAT": { "RHEL": { "Minimum": "6.7", "List": [ "7-LVM", "7-RAW" ] }, "RHEL-HANA": { "Minimum": "6.7" }, "RHEL-SAP": { "Minimum": "6.7" }, "RHEL-SAP-APPS": { "Minimum": "6.7" }, "RHEL-SAP-HANA": { "Minimum": "6.7" } }, "SUSE": { "SLES": { "List": [ "11-SP4", "11-SP5", "11-SP6", "12-SP1", "12-SP2", "12-SP3", "12-SP4", "12-SP5", "12-SP6" ] }, "SLES-BYOS": { "List": [ "11-SP4", "12", "12-SP1", "12-SP2", "12-SP3", "12-SP4", "12-SP5", "15", "15-SP1", "15-SP2", "15-SP3", "15-SP4", "15-SP5" ] }, "SLES-SAP": { "List": [ "11-SP4", "12", "12-SP1", "12-SP2", "12-SP3", "12-SP4", "12-SP5", "15", "15-SP1", "15-SP2", "15-SP3", "15-SP4", "15-SP5" ] }, "SLE-HPC": { "List": [ "15-SP1", "15-SP2", "15-SP3", "15-SP4", "15-SP5" ] } } }""" class ImageInfoMatcher(object): def __init__(self, doc): self.doc = json.loads(doc) def is_match(self, publisher, offer, sku, version): def _is_match_walk(doci, keys): key = keys.pop(0).upper() if key is None: return False if key not in doci: return False if 'List' in doci[key] and keys[0] in doci[key]['List']: return True if 'Match' in doci[key] and re.match(doci[key]['Match'], keys[0]): return True if 'Minimum' in doci[key]: try: return FlexibleVersion(keys[0]) >= FlexibleVersion(doci[key]['Minimum']) except ValueError: pass return _is_match_walk(doci[key], keys) return _is_match_walk(self.doc, [ publisher, offer, sku, version ]) class ComputeInfo(DataContract): __matcher = 
ImageInfoMatcher(ENDORSED_IMAGE_INFO_MATCHER_JSON) def __init__(self, location=None, name=None, offer=None, osType=None, placementGroupId=None, platformFaultDomain=None, placementUpdateDomain=None, publisher=None, resourceGroupName=None, sku=None, subscriptionId=None, tags=None, version=None, vmId=None, vmSize=None, vmScaleSetName=None, zone=None): self.location = location self.name = name self.offer = offer self.osType = osType self.placementGroupId = placementGroupId self.platformFaultDomain = platformFaultDomain self.platformUpdateDomain = placementUpdateDomain self.publisher = publisher self.resourceGroupName = resourceGroupName self.sku = sku self.subscriptionId = subscriptionId self.tags = tags self.version = version self.vmId = vmId self.vmSize = vmSize self.vmScaleSetName = vmScaleSetName self.zone = zone @property def image_info(self): return "{0}:{1}:{2}:{3}".format(self.publisher, self.offer, self.sku, self.version) @property def image_origin(self): """ An integer value describing the origin of the image. 0 -> unknown 1 -> custom - user created image 2 -> endorsed - See https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros 3 -> platform - non-endorsed image that is available in the Azure Marketplace. """ try: if self.publisher == "": return IMDS_IMAGE_ORIGIN_CUSTOM if ComputeInfo.__matcher.is_match(self.publisher, self.offer, self.sku, self.version): return IMDS_IMAGE_ORIGIN_ENDORSED else: return IMDS_IMAGE_ORIGIN_PLATFORM except Exception as e: logger.periodic_warn(logger.EVERY_FIFTEEN_MINUTES, "[PERIODIC] Could not determine the image origin from IMDS: {0}".format(ustr(e))) return IMDS_IMAGE_ORIGIN_UNKNOWN class ImdsClient(object): def __init__(self, wireserver_endpoint, version=APIVERSION): self._api_version = version self._headers = { 'User-Agent': restutil.HTTP_USER_AGENT, 'Metadata': True, } self._health_headers = { 'User-Agent': restutil.HTTP_USER_AGENT_HEALTH, 'Metadata': True, } self._regex_ioerror = re.compile(r".*HTTP Failed. GET http://[^ ]+ -- IOError .*") self._regex_throttled = re.compile(r".*HTTP Retry. GET http://[^ ]+ -- Status Code 429 .*") self._wireserver_endpoint = wireserver_endpoint def _get_metadata_url(self, endpoint, resource_path): return BASE_METADATA_URI.format(endpoint, resource_path, self._api_version) def _http_get(self, endpoint, resource_path, headers): url = self._get_metadata_url(endpoint, resource_path) return restutil.http_get(url, headers=headers, use_proxy=False) def _get_metadata_from_endpoint(self, endpoint, resource_path, headers): """ Get metadata from one of the IMDS endpoints. 
:param str endpoint: IMDS endpoint to call :param str resource_path: path of IMDS resource :param bool headers: headers to send in the request :return: Tuple status: one of the following response status codes: IMDS_RESPONSE_SUCCESS, IMDS_RESPONSE_ERROR, IMDS_CONNECTION_ERROR, IMDS_INTERNAL_SERVER_ERROR response: IMDS response on IMDS_RESPONSE_SUCCESS, failure message otherwise """ try: resp = self._http_get(endpoint=endpoint, resource_path=resource_path, headers=headers) except ResourceGoneError: return IMDS_INTERNAL_SERVER_ERROR, "IMDS error in /metadata/{0}: HTTP Failed with Status Code 410: Gone".format(resource_path) except HttpError as e: msg = str(e) if self._regex_throttled.match(msg): return IMDS_RESPONSE_ERROR, "IMDS error in /metadata/{0}: Throttled".format(resource_path) if self._regex_ioerror.match(msg): logger.periodic_warn(logger.EVERY_FIFTEEN_MINUTES, "[PERIODIC] [IMDS_CONNECTION_ERROR] Unable to connect to IMDS endpoint {0}".format(endpoint)) return IMDS_CONNECTION_ERROR, "IMDS error in /metadata/{0}: Unable to connect to endpoint".format(resource_path) return IMDS_INTERNAL_SERVER_ERROR, "IMDS error in /metadata/{0}: {1}".format(resource_path, msg) if resp.status >= 500: return IMDS_INTERNAL_SERVER_ERROR, "IMDS error in /metadata/{0}: {1}".format( resource_path, restutil.read_response_error(resp)) if restutil.request_failed(resp): return IMDS_RESPONSE_ERROR, "IMDS error in /metadata/{0}: {1}".format( resource_path, restutil.read_response_error(resp)) return IMDS_RESPONSE_SUCCESS, resp.read() def get_metadata(self, resource_path, is_health): """ Get metadata from IMDS, falling back to Wireserver endpoint if necessary. :param str resource_path: path of IMDS resource :param bool is_health: True if for health/heartbeat, False otherwise :return: instance of MetadataResult :rtype: MetadataResult """ headers = self._health_headers if is_health else self._headers endpoint = IMDS_ENDPOINT status, resp = self._get_metadata_from_endpoint(endpoint, resource_path, headers) if status == IMDS_CONNECTION_ERROR: endpoint = self._wireserver_endpoint status, resp = self._get_metadata_from_endpoint(endpoint, resource_path, headers) if status == IMDS_RESPONSE_SUCCESS: return MetadataResult(True, False, resp) elif status == IMDS_INTERNAL_SERVER_ERROR: return MetadataResult(False, True, resp) return MetadataResult(False, False, resp) def get_compute(self): """ Fetch compute information. :return: instance of a ComputeInfo :rtype: ComputeInfo """ # ensure we get a 200 result = self.get_metadata('instance/compute', is_health=False) if not result.success: raise HttpError(result.response) data = json.loads(ustr(result.response, encoding="utf-8")) compute_info = ComputeInfo() set_properties('compute', compute_info, data) return compute_info def validate(self): """ Determines whether the metadata instance api returns 200, and the response is valid: compute should contain location, name, subscription id, and vm size and network should contain mac address and private ip address. :return: Tuple is_healthy: False when service returns an error, True on successful response and connection failures. 
error_response: validation failure details to assist with debugging """ # ensure we get a 200 result = self.get_metadata('instance', is_health=True) if not result.success: # we should only return False when the service is unhealthy return (not result.service_error), result.response # ensure the response is valid json try: json_data = json.loads(ustr(result.response, encoding="utf-8")) except Exception as e: return False, "JSON parsing failed: {0}".format(ustr(e)) # ensure all expected fields are present and have a value try: # TODO: compute fields cannot be verified yet since we need to exclude rdfe vms (#1249) self.check_field(json_data, 'network') self.check_field(json_data['network'], 'interface') self.check_field(json_data['network']['interface'][0], 'macAddress') self.check_field(json_data['network']['interface'][0], 'ipv4') self.check_field(json_data['network']['interface'][0]['ipv4'], 'ipAddress') self.check_field(json_data['network']['interface'][0]['ipv4']['ipAddress'][0], 'privateIpAddress') except ValueError as v: return False, ustr(v) return True, '' @staticmethod def check_field(dict_obj, field): if field not in dict_obj or dict_obj[field] is None: raise ValueError('Missing field: [{0}]'.format(field)) if len(dict_obj[field]) == 0: raise ValueError('Empty field: [{0}]'.format(field)) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/metadata_server_migration_util.py000066400000000000000000000060131462617747000327340ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION # Name for Metadata Server Protocol _METADATA_PROTOCOL_NAME = "MetadataProtocol" # MetadataServer Certificates for Cleanup _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME = "V2TransportPrivate.pem" _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME = "V2TransportCert.pem" _LEGACY_METADATA_SERVER_P7B_FILE_NAME = "Certificates.p7b" # MetadataServer Endpoint _KNOWN_METADATASERVER_IP = "169.254.169.254" def is_metadata_server_artifact_present(): metadata_artifact_path = os.path.join(conf.get_lib_dir(), _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME) return os.path.isfile(metadata_artifact_path) def cleanup_metadata_server_artifacts(osutil): logger.info("Clean up for MetadataServer to WireServer protocol migration: removing MetadataServer certificates and resetting firewall rules.") _cleanup_metadata_protocol_certificates() _reset_firewall_rules(osutil) def _cleanup_metadata_protocol_certificates(): """ Removes MetadataServer Certificates. 
""" lib_directory = conf.get_lib_dir() _ensure_file_removed(lib_directory, _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME) _ensure_file_removed(lib_directory, _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME) _ensure_file_removed(lib_directory, _LEGACY_METADATA_SERVER_P7B_FILE_NAME) def _reset_firewall_rules(osutil): """ Removes MetadataServer firewall rule so IMDS can be used. Enables WireServer firewall rule based on if firewall is configured to be on. """ osutil.remove_firewall(dst_ip=_KNOWN_METADATASERVER_IP, uid=os.getuid(), wait=osutil.get_firewall_will_wait()) if conf.enable_firewall(): success, _ = osutil.enable_firewall(dst_ip=KNOWN_WIRESERVER_IP, uid=os.getuid()) add_event( AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.Firewall, is_success=success, log_event=False) def _ensure_file_removed(directory, file_name): """ Removes files if they are present. """ path = os.path.join(directory, file_name) if os.path.isfile(path): os.remove(path) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/ovfenv.py000066400000000000000000000116601462617747000257670ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # """ Copy and parse ovf-env.xml from provisioning ISO and local cache """ import os # pylint: disable=W0611 import re # pylint: disable=W0611 import shutil # pylint: disable=W0611 import xml.dom.minidom as minidom # pylint: disable=W0611 import azurelinuxagent.common.logger as logger from azurelinuxagent.common.exception import ProtocolError from azurelinuxagent.common.future import ustr # pylint: disable=W0611 import azurelinuxagent.common.utils.fileutil as fileutil # pylint: disable=W0611 from azurelinuxagent.common.utils.textutil import parse_doc, findall, find, findtext OVF_VERSION = "1.0" OVF_NAME_SPACE = "http://schemas.dmtf.org/ovf/environment/1" WA_NAME_SPACE = "http://schemas.microsoft.com/windowsazure" def _validate_ovf(val, msg): if val is None: raise ProtocolError("Failed to validate OVF: {0}".format(msg)) class OvfEnv(object): """ Read, and process provisioning info from provisioning file OvfEnv.xml """ def __init__(self, xml_text): if xml_text is None: raise ValueError("ovf-env is None") logger.verbose("Load ovf-env.xml") self.hostname = None self.username = None self.user_password = None self.customdata = None self.disable_ssh_password_auth = True self.ssh_pubkeys = [] self.ssh_keypairs = [] self.provision_guest_agent = None self.parse(xml_text) def parse(self, xml_text): """ Parse xml tree, retreiving user and ssh key information. Return self. 
""" wans = WA_NAME_SPACE ovfns = OVF_NAME_SPACE xml_doc = parse_doc(xml_text) environment = find(xml_doc, "Environment", namespace=ovfns) _validate_ovf(environment, "Environment not found") section = find(environment, "ProvisioningSection", namespace=wans) _validate_ovf(section, "ProvisioningSection not found") version = findtext(environment, "Version", namespace=wans) _validate_ovf(version, "Version not found") if version > OVF_VERSION: logger.warn("Newer provisioning configuration detected. " "Please consider updating waagent") conf_set = find(section, "LinuxProvisioningConfigurationSet", namespace=wans) _validate_ovf(conf_set, "LinuxProvisioningConfigurationSet not found") self.hostname = findtext(conf_set, "HostName", namespace=wans) _validate_ovf(self.hostname, "HostName not found") self.username = findtext(conf_set, "UserName", namespace=wans) _validate_ovf(self.username, "UserName not found") self.user_password = findtext(conf_set, "UserPassword", namespace=wans) self.customdata = findtext(conf_set, "CustomData", namespace=wans) auth_option = findtext(conf_set, "DisableSshPasswordAuthentication", namespace=wans) if auth_option is not None and auth_option.lower() == "true": self.disable_ssh_password_auth = True else: self.disable_ssh_password_auth = False public_keys = findall(conf_set, "PublicKey", namespace=wans) for public_key in public_keys: path = findtext(public_key, "Path", namespace=wans) fingerprint = findtext(public_key, "Fingerprint", namespace=wans) value = findtext(public_key, "Value", namespace=wans) self.ssh_pubkeys.append((path, fingerprint, value)) keypairs = findall(conf_set, "KeyPair", namespace=wans) for keypair in keypairs: path = findtext(keypair, "Path", namespace=wans) fingerprint = findtext(keypair, "Fingerprint", namespace=wans) self.ssh_keypairs.append((path, fingerprint)) platform_settings_section = find(environment, "PlatformSettingsSection", namespace=wans) _validate_ovf(platform_settings_section, "PlatformSettingsSection not found") platform_settings = find(platform_settings_section, "PlatformSettings", namespace=wans) _validate_ovf(platform_settings, "PlatformSettings not found") self.provision_guest_agent = findtext(platform_settings, "ProvisionGuestAgent", namespace=wans) _validate_ovf(self.provision_guest_agent, "ProvisionGuestAgent not found") Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/restapi.py000066400000000000000000000262521462617747000261360ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import socket import time from azurelinuxagent.common.datacontract import DataContract, DataContractList from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils.textutil import getattrib from azurelinuxagent.common.version import DISTRO_VERSION, DISTRO_NAME, CURRENT_VERSION VERSION_0 = "0.0.0.0" class VMInfo(DataContract): def __init__(self, subscriptionId=None, vmName=None, roleName=None, roleInstanceName=None, tenantName=None): self.subscriptionId = subscriptionId self.vmName = vmName self.roleName = roleName self.roleInstanceName = roleInstanceName self.tenantName = tenantName class CertificateData(DataContract): def __init__(self, certificateData=None): self.certificateData = certificateData class Cert(DataContract): def __init__(self, name=None, thumbprint=None, certificateDataUri=None, storeName=None, storeLocation=None): self.name = name self.thumbprint = thumbprint self.certificateDataUri = certificateDataUri self.storeLocation = storeLocation self.storeName = storeName class CertList(DataContract): def __init__(self): self.certificates = DataContractList(Cert) class VMAgentFamily(object): def __init__(self, name): self.name = name # Two-state: None, string. Set to None if version not specified in the GS self.version = None # Tri-state: None, True, False. Set to None if this property not specified in the GS. self.is_version_from_rsm = None # Tri-state: None, True, False. Set to None if this property not specified in the GS. self.is_vm_enabled_for_rsm_upgrades = None self.uris = [] def __repr__(self): return self.__str__() def __str__(self): return "[name: '{0}' uris: {1}]".format(self.name, self.uris) class ExtensionState(object): Enabled = ustr("enabled") Disabled = ustr("disabled") class ExtensionRequestedState(object): """ This is the state of the Handler as requested by the Goal State. CRP only supports 2 states as of now - Enabled and Uninstall Disabled was used for older XML extensions and we keep it to support backward compatibility. """ Enabled = ustr("enabled") Disabled = ustr("disabled") Uninstall = ustr("uninstall") All = [Enabled, Disabled, Uninstall] class ExtensionSettings(object): """ The runtime settings associated with a Handler - Maps to Extension.PluginSettings.Plugin.RuntimeSettings for single config extensions in the ExtensionConfig.xml Eg: 1.settings, 2.settings - Maps to Extension.PluginSettings.Plugin.ExtensionRuntimeSettings for multi-config extensions in the ExtensionConfig.xml Eg: .1.settings, .2.settings """ def __init__(self, name=None, sequenceNumber=None, publicSettings=None, protectedSettings=None, certificateThumbprint=None, dependencyLevel=0, state=ExtensionState.Enabled): self.name = name self.sequenceNumber = sequenceNumber self.publicSettings = publicSettings self.protectedSettings = protectedSettings self.certificateThumbprint = certificateThumbprint self.dependencyLevel = dependencyLevel self.state = state def dependency_level_sort_key(self, handler_state): level = self.dependencyLevel # Process uninstall or disabled before enabled, in reverse order # Prioritize Handler state and Extension state both when sorting extensions # remap 0 to -1, 1 to -2, 2 to -3, etc if handler_state != ExtensionRequestedState.Enabled or self.state != ExtensionState.Enabled: level = (0 - level) - 1 return level def __repr__(self): return self.__str__() def __str__(self): return "{0}".format(self.name) class Extension(object): """ The main Plugin/handler specified by the publishers. 
Maps to Extension.PluginSettings.Plugins.Plugin in the ExtensionConfig.xml file Eg: Microsoft.OSTC.CustomScript """ def __init__(self, name=None): self.name = name self.version = None self.state = None self.settings = [] self.manifest_uris = [] self.supports_multi_config = False self.__invalid_handler_setting_reason = None @property def is_invalid_setting(self): return self.__invalid_handler_setting_reason is not None @property def invalid_setting_reason(self): return self.__invalid_handler_setting_reason @invalid_setting_reason.setter def invalid_setting_reason(self, value): self.__invalid_handler_setting_reason = value def dependency_level_sort_key(self): levels = [e.dependencyLevel for e in self.settings] if len(levels) == 0: level = 0 else: level = min(levels) # Process uninstall or disabled before enabled, in reverse order # remap 0 to -1, 1 to -2, 2 to -3, etc if self.state != u"enabled": level = (0 - level) - 1 return level def __repr__(self): return self.__str__() def __str__(self): return "{0}-{1}".format(self.name, self.version) class InVMGoalStateMetaData(DataContract): """ Object for parsing the GoalState MetaData received from CRP Eg: """ def __init__(self, in_vm_metadata_node): self.correlation_id = getattrib(in_vm_metadata_node, "correlationId") self.activity_id = getattrib(in_vm_metadata_node, "activityId") self.created_on_ticks = getattrib(in_vm_metadata_node, "createdOnTicks") self.in_svd_seq_no = getattrib(in_vm_metadata_node, "inSvdSeqNo") class ExtHandlerPackage(DataContract): def __init__(self, version=None): self.version = version self.uris = [] # TODO update the naming to align with metadata protocol self.isinternal = False self.disallow_major_upgrade = False class ExtHandlerPackageList(DataContract): def __init__(self): self.versions = DataContractList(ExtHandlerPackage) class VMProperties(DataContract): def __init__(self, certificateThumbprint=None): # TODO need to confirm the property name self.certificateThumbprint = certificateThumbprint class ProvisionStatus(DataContract): def __init__(self, status=None, subStatus=None, description=None): self.status = status self.subStatus = subStatus self.description = description self.properties = VMProperties() class ExtensionSubStatus(DataContract): def __init__(self, name=None, status=None, code=None, message=None): self.name = name self.status = status self.code = code self.message = message class ExtensionStatus(DataContract): def __init__(self, name=None, configurationAppliedTime=None, operation=None, status=None, seq_no=None, code=None, message=None): self.name = name self.configurationAppliedTime = configurationAppliedTime self.operation = operation self.status = status self.sequenceNumber = seq_no self.code = code self.message = message self.substatusList = DataContractList(ExtensionSubStatus) class ExtHandlerStatus(DataContract): def __init__(self, name=None, version=None, status=None, code=0, message=None): self.name = name self.version = version self.status = status self.code = code self.message = message self.supports_multi_config = False self.extension_status = None class VMAgentStatus(DataContract): def __init__(self, status=None, message=None, gs_aggregate_status=None, update_status=None): self.status = status self.message = message self.hostname = socket.gethostname() self.version = str(CURRENT_VERSION) self.osname = DISTRO_NAME self.osversion = DISTRO_VERSION self.extensionHandlers = DataContractList(ExtHandlerStatus) self.vm_artifacts_aggregate_status = VMArtifactsAggregateStatus(gs_aggregate_status) 
self.update_status = update_status self._supports_fast_track = False @property def supports_fast_track(self): return self._supports_fast_track def set_supports_fast_track(self, value): self._supports_fast_track = value class VMStatus(DataContract): def __init__(self, status, message, gs_aggregate_status=None, vm_agent_update_status=None): self.vmAgent = VMAgentStatus(status=status, message=message, gs_aggregate_status=gs_aggregate_status, update_status=vm_agent_update_status) class GoalStateAggregateStatus(DataContract): def __init__(self, seq_no, status=None, message="", code=None): self.message = message self.in_svd_seq_no = seq_no self.status = status self.code = code self.__utc_timestamp = time.gmtime() @property def processed_time(self): return self.__utc_timestamp class VMArtifactsAggregateStatus(DataContract): def __init__(self, gs_aggregate_status=None): self.goal_state_aggregate_status = gs_aggregate_status class RemoteAccessUser(DataContract): def __init__(self, name, encrypted_password, expiration): self.name = name self.encrypted_password = encrypted_password self.expiration = expiration class RemoteAccessUsersList(DataContract): def __init__(self): self.users = DataContractList(RemoteAccessUser) class VMAgentUpdateStatuses(object): Success = ustr("Success") Transitioning = ustr("Transitioning") Error = ustr("Error") Unknown = ustr("Unknown") class VMAgentUpdateStatus(object): def __init__(self, expected_version, status=VMAgentUpdateStatuses.Success, message="", code=0): self.expected_version = expected_version self.status = status self.message = message self.code = code Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/util.py000066400000000000000000000277111462617747000254450ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import errno import os import re import time import threading import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.singletonperthread import SingletonPerThread from azurelinuxagent.common.exception import ProtocolError, OSUtilError, \ ProtocolNotFoundError, DhcpError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.dhcp import get_dhcp_handler from azurelinuxagent.common.protocol.metadata_server_migration_util import cleanup_metadata_server_artifacts, \ is_metadata_server_artifact_present from azurelinuxagent.common.protocol.ovfenv import OvfEnv from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP, \ IOErrorCounter OVF_FILE_NAME = "ovf-env.xml" PROTOCOL_FILE_NAME = "Protocol" MAX_RETRY = 360 PROBE_INTERVAL = 10 ENDPOINT_FILE_NAME = "WireServerEndpoint" PASSWORD_PATTERN = ".*?<" PASSWORD_REPLACEMENT = "*<" WIRE_PROTOCOL_NAME = "WireProtocol" def get_protocol_util(): return ProtocolUtil() class ProtocolUtil(SingletonPerThread): """ ProtocolUtil handles initialization for protocol instance. 2 protocol types are invoked, wire protocol and metadata protocols. Note: ProtocolUtil is a sub class of SingletonPerThread, this basically means that there would only be 1 single instance of ProtocolUtil object per thread. """ def __init__(self): self._lock = threading.RLock() # protects the files on disk created during protocol detection self._protocol = None self.endpoint = None self.osutil = get_osutil() self.dhcp_handler = get_dhcp_handler() def copy_ovf_env(self): """ Copy ovf env file from dvd to hard disk. 
Remove password before save it to the disk """ dvd_mount_point = conf.get_dvd_mount_point() ovf_file_path_on_dvd = os.path.join(dvd_mount_point, OVF_FILE_NAME) ovf_file_path = os.path.join(conf.get_lib_dir(), OVF_FILE_NAME) try: self.osutil.mount_dvd() except OSUtilError as e: raise ProtocolError("[CopyOvfEnv] Error mounting dvd: " "{0}".format(ustr(e))) try: ovfxml = fileutil.read_file(ovf_file_path_on_dvd, remove_bom=True) ovfenv = OvfEnv(ovfxml) except (IOError, OSError) as e: raise ProtocolError("[CopyOvfEnv] Error reading file " "{0}: {1}".format(ovf_file_path_on_dvd, ustr(e))) try: ovfxml = re.sub(PASSWORD_PATTERN, PASSWORD_REPLACEMENT, ovfxml) fileutil.write_file(ovf_file_path, ovfxml) except (IOError, OSError) as e: raise ProtocolError("[CopyOvfEnv] Error writing file " "{0}: {1}".format(ovf_file_path, ustr(e))) self._cleanup_ovf_dvd() return ovfenv def _cleanup_ovf_dvd(self): try: self.osutil.umount_dvd() self.osutil.eject_dvd() except OSUtilError as e: logger.warn(ustr(e)) def get_ovf_env(self): """ Load saved ovf-env.xml """ ovf_file_path = os.path.join(conf.get_lib_dir(), OVF_FILE_NAME) if os.path.isfile(ovf_file_path): xml_text = fileutil.read_file(ovf_file_path) return OvfEnv(xml_text) else: raise ProtocolError( "ovf-env.xml is missing from {0}".format(ovf_file_path)) def _get_protocol_file_path(self): return os.path.join( conf.get_lib_dir(), PROTOCOL_FILE_NAME) def _get_wireserver_endpoint_file_path(self): return os.path.join( conf.get_lib_dir(), ENDPOINT_FILE_NAME) def get_wireserver_endpoint(self): self._lock.acquire() try: if self.endpoint: return self.endpoint file_path = self._get_wireserver_endpoint_file_path() if os.path.isfile(file_path): try: self.endpoint = fileutil.read_file(file_path) if self.endpoint: logger.info("WireServer endpoint {0} read from file", self.endpoint) return self.endpoint logger.error("[GetWireserverEndpoint] Unexpected empty file {0}", file_path) except (IOError, OSError) as e: logger.error("[GetWireserverEndpoint] Error reading file {0}: {1}", file_path, str(e)) else: logger.error("[GetWireserverEndpoint] Missing file {0}", file_path) self.endpoint = KNOWN_WIRESERVER_IP logger.info("Using hardcoded Wireserver endpoint {0}", self.endpoint) return self.endpoint finally: self._lock.release() def _set_wireserver_endpoint(self, endpoint): try: self.endpoint = endpoint file_path = self._get_wireserver_endpoint_file_path() fileutil.write_file(file_path, endpoint) except (IOError, OSError) as e: raise OSUtilError(ustr(e)) def _clear_wireserver_endpoint(self): """ Cleanup previous saved wireserver endpoint. """ self.endpoint = None endpoint_file_path = self._get_wireserver_endpoint_file_path() if not os.path.isfile(endpoint_file_path): return try: os.remove(endpoint_file_path) except (IOError, OSError) as e: # Ignore file-not-found errors (since the file is being removed) if e.errno == errno.ENOENT: return logger.error("Failed to clear wiresever endpoint: {0}", e) def _detect_protocol(self, save_to_history, init_goal_state=True): """ Probe protocol endpoints in turn. """ self.clear_protocol() for retry in range(0, MAX_RETRY): try: endpoint = self.dhcp_handler.endpoint if endpoint is None: # pylint: disable=W0105 ''' Check if DHCP can be used to get the wire protocol endpoint ''' # pylint: enable=W0105 dhcp_available = self.osutil.is_dhcp_available() if dhcp_available: logger.info("WireServer endpoint is not found. 
Rerun dhcp handler") try: self.dhcp_handler.run() except DhcpError as e: raise ProtocolError(ustr(e)) endpoint = self.dhcp_handler.endpoint else: logger.info("_detect_protocol: DHCP not available") endpoint = self.get_wireserver_endpoint() try: protocol = WireProtocol(endpoint) protocol.detect(init_goal_state=init_goal_state, save_to_history=save_to_history) self._set_wireserver_endpoint(endpoint) return protocol except ProtocolError as e: logger.info("WireServer is not responding. Reset dhcp endpoint") self.dhcp_handler.endpoint = None self.dhcp_handler.skip_cache = True raise e except ProtocolError as e: logger.info("Protocol endpoint not found: {0}", e) if retry < MAX_RETRY - 1: logger.info("Retry detect protocol: retry={0}", retry) time.sleep(PROBE_INTERVAL) raise ProtocolNotFoundError("No protocol found.") def _save_protocol(self, protocol_name): """ Save protocol endpoint """ protocol_file_path = self._get_protocol_file_path() try: fileutil.write_file(protocol_file_path, protocol_name) except (IOError, OSError) as e: logger.error("Failed to save protocol endpoint: {0}", e) def clear_protocol(self): """ Cleanup previous saved protocol endpoint. """ self._lock.acquire() try: logger.info("Clean protocol and wireserver endpoint") self._clear_wireserver_endpoint() self._protocol = None protocol_file_path = self._get_protocol_file_path() if not os.path.isfile(protocol_file_path): return try: os.remove(protocol_file_path) except (IOError, OSError) as e: # Ignore file-not-found errors (since the file is being removed) if e.errno == errno.ENOENT: return logger.error("Failed to clear protocol endpoint: {0}", e) finally: self._lock.release() def get_protocol(self, init_goal_state=True, save_to_history=False): """ Detect protocol by endpoint. :returns: protocol instance """ self._lock.acquire() try: if self._protocol is not None: return self._protocol # If the protocol file contains MetadataProtocol we need to fall through to # _detect_protocol so that we can generate the WireServer transport certificates. protocol_file_path = self._get_protocol_file_path() if os.path.isfile(protocol_file_path) and fileutil.read_file(protocol_file_path) == WIRE_PROTOCOL_NAME: endpoint = self.get_wireserver_endpoint() self._protocol = WireProtocol(endpoint) # If metadataserver certificates are present we clean certificates # and remove MetadataServer firewall rule. 
It is possible # there was a previous intermediate upgrade before 2.2.48 but metadata artifacts # were not cleaned up (intermediate updated agent does not have cleanup # logic but we transitioned from Metadata to Wire protocol) if is_metadata_server_artifact_present(): cleanup_metadata_server_artifacts(self.osutil) return self._protocol logger.info("Detect protocol endpoint") protocol = self._detect_protocol(save_to_history=save_to_history, init_goal_state=init_goal_state) IOErrorCounter.set_protocol_endpoint(endpoint=protocol.get_endpoint()) self._save_protocol(WIRE_PROTOCOL_NAME) self._protocol = protocol # Need to clean up MDS artifacts only after _detect_protocol so that we don't # delete MDS certificates if we can't reach WireServer and have to roll back # the update if is_metadata_server_artifact_present(): cleanup_metadata_server_artifacts(self.osutil) return self._protocol finally: self._lock.release() Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/protocol/wire.py000066400000000000000000001455141462617747000254400ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import json import os import random import shutil import time import zipfile from collections import defaultdict from datetime import datetime, timedelta from xml.sax import saxutils from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.agent_supported_feature import get_agent_supported_features_list_for_crp, SupportedFeatureNames from azurelinuxagent.common.datacontract import validate_param from azurelinuxagent.common.event import add_event, WALAEventOperation, report_event, \ CollectOrReportEventDebugInfo, add_periodic from azurelinuxagent.common.exception import ProtocolNotFoundError, \ ResourceGoneError, ExtensionDownloadError, InvalidContainerError, ProtocolError, HttpError, ExtensionErrorCodes from azurelinuxagent.common.future import httpclient, bytebuffer, ustr from azurelinuxagent.common.protocol.goal_state import GoalState, TRANSPORT_CERT_FILE_NAME, TRANSPORT_PRV_FILE_NAME, \ GoalStateProperties from azurelinuxagent.common.protocol.hostplugin import HostPluginProtocol from azurelinuxagent.common.protocol.restapi import DataContract, ProvisionStatus, VMInfo, VMStatus from azurelinuxagent.common.telemetryevent import GuestAgentExtensionEventsSchema from azurelinuxagent.common.utils import fileutil, restutil from azurelinuxagent.common.utils.cryptutil import CryptUtil from azurelinuxagent.common.utils.textutil import parse_doc, findall, find, \ findtext, gettext, remove_bom, get_bytes_from_pem, parse_json from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION VERSION_INFO_URI = "http://{0}/?comp=versions" HEALTH_REPORT_URI = "http://{0}/machine?comp=health" ROLE_PROP_URI = "http://{0}/machine?comp=roleProperties" TELEMETRY_URI = "http://{0}/machine?comp=telemetrydata" 
PROTOCOL_VERSION = "2012-11-30" ENDPOINT_FINE_NAME = "WireServer" SHORT_WAITING_INTERVAL = 1 # 1 second MAX_EVENT_BUFFER_SIZE = 2 ** 16 - 2 ** 10 _DOWNLOAD_TIMEOUT = timedelta(minutes=5) class UploadError(HttpError): pass class WireProtocol(DataContract): def __init__(self, endpoint): if endpoint is None: raise ProtocolError("WireProtocol endpoint is None") self.client = WireClient(endpoint) def detect(self, init_goal_state=True, save_to_history=False): self.client.check_wire_protocol_version() trans_prv_file = os.path.join(conf.get_lib_dir(), TRANSPORT_PRV_FILE_NAME) trans_cert_file = os.path.join(conf.get_lib_dir(), TRANSPORT_CERT_FILE_NAME) cryptutil = CryptUtil(conf.get_openssl_cmd()) cryptutil.gen_transport_cert(trans_prv_file, trans_cert_file) # Initialize the goal state, including all the inner properties if init_goal_state: logger.info('Initializing goal state during protocol detection') self.client.reset_goal_state(save_to_history=save_to_history) def update_host_plugin_from_goal_state(self): self.client.update_host_plugin_from_goal_state() def get_endpoint(self): return self.client.get_endpoint() def get_vminfo(self): goal_state = self.client.get_goal_state() hosting_env = self.client.get_hosting_env() vminfo = VMInfo() vminfo.subscriptionId = None vminfo.vmName = hosting_env.vm_name vminfo.tenantName = hosting_env.deployment_name vminfo.roleName = hosting_env.role_name vminfo.roleInstanceName = goal_state.role_instance_id return vminfo def get_certs(self): certificates = self.client.get_certs() return certificates.cert_list def get_goal_state(self): return self.client.get_goal_state() def report_provision_status(self, provision_status): validate_param("provision_status", provision_status, ProvisionStatus) if provision_status.status is not None: self.client.report_health(provision_status.status, provision_status.subStatus, provision_status.description) if provision_status.properties.certificateThumbprint is not None: thumbprint = provision_status.properties.certificateThumbprint self.client.report_role_prop(thumbprint) def report_vm_status(self, vm_status): validate_param("vm_status", vm_status, VMStatus) self.client.status_blob.set_vm_status(vm_status) self.client.upload_status_blob() def report_event(self, events_iterator): self.client.report_event(events_iterator) def upload_logs(self, logs): self.client.upload_logs(logs) def get_status_blob_data(self): return self.client.status_blob.data def _build_role_properties(container_id, role_instance_id, thumbprint): xml = (u"" u"" u"" u"{0}" u"" u"" u"{1}" u"" u"" u"" u"" u"" u"" u"" u"").format(container_id, role_instance_id, thumbprint) return xml def _build_health_report(incarnation, container_id, role_instance_id, status, substatus, description): # The max description that can be sent to WireServer is 4096 bytes. # Exceeding this max can result in a failure to report health. # To keep this simple, we will keep a 10% buffer and trim before # encoding the description. if description: max_chars_before_encoding = 3686 len_before_trim = len(description) description = description[:max_chars_before_encoding] trimmed_char_count = len_before_trim - len(description) if trimmed_char_count > 0: logger.info( 'Trimmed health report description by {0} characters'.format( trimmed_char_count ) ) # Escape '&', '<' and '>' description = saxutils.escape(ustr(description)) detail = u'' if substatus is not None: substatus = saxutils.escape(ustr(substatus)) detail = (u"
" u"{0}" u"{1}" u"
").format(substatus, description) xml = (u"" u"" u"{0}" u"" u"{1}" u"" u"" u"{2}" u"" u"{3}" u"{4}" u"" u"" u"" u"" u"" u"").format(incarnation, container_id, role_instance_id, status, detail) return xml def ga_status_to_guest_info(ga_status): """ Convert VMStatus object to status blob format """ v1_ga_guest_info = { "computerName": ga_status.hostname, "osName": ga_status.osname, "osVersion": ga_status.osversion, "version": ga_status.version, } return v1_ga_guest_info def __get_formatted_msg_for_status_reporting(msg, lang="en-US"): return { 'lang': lang, 'message': msg } def _get_utc_timestamp_for_status_reporting(time_format="%Y-%m-%dT%H:%M:%SZ", timestamp=None): timestamp = time.gmtime() if timestamp is None else timestamp return time.strftime(time_format, timestamp) def ga_status_to_v1(ga_status): v1_ga_status = { "version": ga_status.version, "status": ga_status.status, "formattedMessage": __get_formatted_msg_for_status_reporting(ga_status.message) } if ga_status.update_status is not None: v1_ga_status["updateStatus"] = get_ga_update_status_to_v1(ga_status.update_status) return v1_ga_status def get_ga_update_status_to_v1(update_status): v1_ga_update_status = { "expectedVersion": update_status.expected_version, "status": update_status.status, "code": update_status.code, "formattedMessage": __get_formatted_msg_for_status_reporting(update_status.message) } return v1_ga_update_status def ext_substatus_to_v1(sub_status_list): status_list = [] for substatus in sub_status_list: status = { "name": substatus.name, "status": substatus.status, "code": substatus.code, "formattedMessage": __get_formatted_msg_for_status_reporting(substatus.message) } status_list.append(status) return status_list def ext_status_to_v1(ext_status): if ext_status is None: return None timestamp = _get_utc_timestamp_for_status_reporting() v1_sub_status = ext_substatus_to_v1(ext_status.substatusList) v1_ext_status = { "status": { "name": ext_status.name, "configurationAppliedTime": ext_status.configurationAppliedTime, "operation": ext_status.operation, "status": ext_status.status, "code": ext_status.code, "formattedMessage": __get_formatted_msg_for_status_reporting(ext_status.message) }, "version": 1.0, "timestampUTC": timestamp } if len(v1_sub_status) != 0: v1_ext_status['status']['substatus'] = v1_sub_status return v1_ext_status def ext_handler_status_to_v1(ext_handler_status): v1_handler_status = { 'handlerVersion': ext_handler_status.version, 'handlerName': ext_handler_status.name, 'status': ext_handler_status.status, 'code': ext_handler_status.code, 'useExactVersion': True } if ext_handler_status.message is not None: v1_handler_status["formattedMessage"] = __get_formatted_msg_for_status_reporting(ext_handler_status.message) v1_ext_status = ext_status_to_v1(ext_handler_status.extension_status) if ext_handler_status.extension_status is not None and v1_ext_status is not None: v1_handler_status["runtimeSettingsStatus"] = { 'settingsStatus': v1_ext_status, 'sequenceNumber': ext_handler_status.extension_status.sequenceNumber } # Add extension name if Handler supports MultiConfig if ext_handler_status.supports_multi_config: v1_handler_status["runtimeSettingsStatus"]["extensionName"] = ext_handler_status.extension_status.name return v1_handler_status def vm_artifacts_aggregate_status_to_v1(vm_artifacts_aggregate_status): gs_aggregate_status = vm_artifacts_aggregate_status.goal_state_aggregate_status if gs_aggregate_status is None: return None v1_goal_state_aggregate_status = { "formattedMessage": 
__get_formatted_msg_for_status_reporting(gs_aggregate_status.message), "timestampUTC": _get_utc_timestamp_for_status_reporting(timestamp=gs_aggregate_status.processed_time), "inSvdSeqNo": gs_aggregate_status.in_svd_seq_no, "status": gs_aggregate_status.status, "code": gs_aggregate_status.code } v1_artifact_aggregate_status = { "goalStateAggregateStatus": v1_goal_state_aggregate_status } return v1_artifact_aggregate_status def vm_status_to_v1(vm_status): timestamp = _get_utc_timestamp_for_status_reporting() v1_ga_guest_info = ga_status_to_guest_info(vm_status.vmAgent) v1_ga_status = ga_status_to_v1(vm_status.vmAgent) v1_vm_artifact_aggregate_status = vm_artifacts_aggregate_status_to_v1( vm_status.vmAgent.vm_artifacts_aggregate_status) v1_handler_status_list = [] for handler_status in vm_status.vmAgent.extensionHandlers: v1_handler_status_list.append(ext_handler_status_to_v1(handler_status)) v1_agg_status = { 'guestAgentStatus': v1_ga_status, 'handlerAggregateStatus': v1_handler_status_list } if v1_vm_artifact_aggregate_status is not None: v1_agg_status['vmArtifactsAggregateStatus'] = v1_vm_artifact_aggregate_status v1_vm_status = { 'version': '1.1', 'timestampUTC': timestamp, 'aggregateStatus': v1_agg_status, 'guestOSInfo': v1_ga_guest_info } supported_features = [] for _, feature in get_agent_supported_features_list_for_crp().items(): supported_features.append( { "Key": feature.name, "Value": feature.version } ) if vm_status.vmAgent.supports_fast_track: supported_features.append( { "Key": SupportedFeatureNames.FastTrack, "Value": "1.0" # This is a dummy version; CRP ignores it } ) if supported_features: v1_vm_status["supportedFeatures"] = supported_features return v1_vm_status class StatusBlob(object): def __init__(self, client): self.vm_status = None self.client = client self.type = None self.data = None def set_vm_status(self, vm_status): validate_param("vmAgent", vm_status, VMStatus) self.vm_status = vm_status def to_json(self): report = vm_status_to_v1(self.vm_status) return json.dumps(report) __storage_version__ = "2014-02-14" def prepare(self, blob_type): logger.verbose("Prepare status blob") self.data = self.to_json() self.type = blob_type def upload(self, url): try: if not self.type in ["BlockBlob", "PageBlob"]: raise ProtocolError("Illegal blob type: {0}".format(self.type)) if self.type == "BlockBlob": self.put_block_blob(url, self.data) else: self.put_page_blob(url, self.data) return True except Exception as e: logger.verbose("Initial status upload failed: {0}", e) return False def get_block_blob_headers(self, blob_size): return { "Content-Length": ustr(blob_size), "x-ms-blob-type": "BlockBlob", "x-ms-date": _get_utc_timestamp_for_status_reporting(), "x-ms-version": self.__class__.__storage_version__ } def put_block_blob(self, url, data): logger.verbose("Put block blob") headers = self.get_block_blob_headers(len(data)) resp = self.client.call_storage_service(restutil.http_put, url, data, headers) if resp.status != httpclient.CREATED: raise UploadError( "Failed to upload block blob: {0}".format(resp.status)) def get_page_blob_create_headers(self, blob_size): return { "Content-Length": "0", "x-ms-blob-content-length": ustr(blob_size), "x-ms-blob-type": "PageBlob", "x-ms-date": _get_utc_timestamp_for_status_reporting(), "x-ms-version": self.__class__.__storage_version__ } def get_page_blob_page_headers(self, start, end): return { "Content-Length": ustr(end - start), "x-ms-date": _get_utc_timestamp_for_status_reporting(), "x-ms-range": "bytes={0}-{1}".format(start, end - 1), 
"x-ms-page-write": "update", "x-ms-version": self.__class__.__storage_version__ } def put_page_blob(self, url, data): logger.verbose("Put page blob") # Convert string into bytes and align to 512 bytes data = bytearray(data, encoding='utf-8') page_blob_size = int((len(data) + 511) / 512) * 512 headers = self.get_page_blob_create_headers(page_blob_size) resp = self.client.call_storage_service(restutil.http_put, url, "", headers) if resp.status != httpclient.CREATED: raise UploadError( "Failed to clean up page blob: {0}".format(resp.status)) if url.count("?") <= 0: url = "{0}?comp=page".format(url) else: url = "{0}&comp=page".format(url) logger.verbose("Upload page blob") page_max = 4 * 1024 * 1024 # Max page size: 4MB start = 0 end = 0 while end < len(data): end = min(len(data), start + page_max) content_size = end - start # Align to 512 bytes page_end = int((end + 511) / 512) * 512 buf_size = page_end - start buf = bytearray(buf_size) buf[0: content_size] = data[start: end] headers = self.get_page_blob_page_headers(start, page_end) resp = self.client.call_storage_service( restutil.http_put, url, bytebuffer(buf), headers) if resp is None or resp.status != httpclient.CREATED: raise UploadError( "Failed to upload page blob: {0}".format(resp.status)) start = end def event_param_to_v1(param): param_format = ustr('') param_type = type(param.value) attr_type = "" if param_type is int: attr_type = 'mt:uint64' elif param_type is str: attr_type = 'mt:wstr' elif ustr(param_type).count("'unicode'") > 0: attr_type = 'mt:wstr' elif param_type is bool: attr_type = 'mt:bool' elif param_type is float: attr_type = 'mt:float64' return param_format.format(param.name, saxutils.quoteattr(ustr(param.value)), attr_type) def event_to_v1_encoded(event, encoding='utf-8'): params = "" for param in event.parameters: params += event_param_to_v1(param) event_str = ustr('').format(event.eventId, params) return event_str.encode(encoding) class WireClient(object): def __init__(self, endpoint): logger.info("Wire server endpoint:{0}", endpoint) self._endpoint = endpoint self._goal_state = None self._host_plugin = None self.status_blob = StatusBlob(self) def get_endpoint(self): return self._endpoint def call_wireserver(self, http_req, *args, **kwargs): try: # Never use the HTTP proxy for wireserver kwargs['use_proxy'] = False resp = http_req(*args, **kwargs) if restutil.request_failed(resp): msg = "[Wireserver Failed] URI {0} ".format(args[0]) if resp is not None: msg += " [HTTP Failed] Status Code {0}".format(resp.status) raise ProtocolError(msg) # If the GoalState is stale, pass along the exception to the caller except ResourceGoneError: raise except Exception as e: raise ProtocolError("[Wireserver Exception] {0}".format(ustr(e))) return resp def decode_config(self, data): if data is None: return None data = remove_bom(data) xml_text = ustr(data, encoding='utf-8') return xml_text def fetch_config(self, uri, headers): resp = self.call_wireserver(restutil.http_get, uri, headers=headers) return self.decode_config(resp.read()) @staticmethod def call_storage_service(http_req, *args, **kwargs): # Default to use the configured HTTP proxy if not 'use_proxy' in kwargs or kwargs['use_proxy'] is None: kwargs['use_proxy'] = True return http_req(*args, **kwargs) def fetch_artifacts_profile_blob(self, uri): return self._fetch_content("artifacts profile blob", [uri], use_verify_header=False)[1] # _fetch_content returns a (uri, content) tuple def fetch_manifest(self, manifest_type, uris, use_verify_header): uri, content = 
self._fetch_content("{0} manifest".format(manifest_type), uris, use_verify_header=use_verify_header) self.get_host_plugin().update_manifest_uri(uri) return content def _fetch_content(self, download_type, uris, use_verify_header): """ Walks the given list of 'uris' issuing HTTP GET requests; returns a tuple with the URI and the content of the first successful request. The 'download_type' is added to any log messages produced by this method; it should describe the type of content of the given URIs (e.g. "manifest", "extension package", etc). """ host_ga_plugin = self.get_host_plugin() direct_download = lambda uri: self.fetch(uri)[0] def hgap_download(uri): request_uri, request_headers = host_ga_plugin.get_artifact_request(uri, use_verify_header=use_verify_header) response, _ = self.fetch(request_uri, request_headers, use_proxy=False, retry_codes=restutil.HGAP_GET_EXTENSION_ARTIFACT_RETRY_CODES) return response return self._download_with_fallback_channel(download_type, uris, direct_download=direct_download, hgap_download=hgap_download) def download_zip_package(self, package_type, uris, target_file, target_directory, use_verify_header): """ Downloads the ZIP package specified in 'uris' (which is a list of alternate locations for the ZIP), saving it to 'target_file' and then expanding its contents to 'target_directory'. Deletes the target file after it has been expanded. The 'package_type' is only used in log messages and has no other semantics. It should specify the contents of the ZIP, e.g. "extension package" or "agent package" The 'use_verify_header' parameter indicates whether the verify header should be added when using the extensionArtifact API of the HostGAPlugin. """ host_ga_plugin = self.get_host_plugin() direct_download = lambda uri: self.stream(uri, target_file, headers=None, use_proxy=True) def hgap_download(uri): request_uri, request_headers = host_ga_plugin.get_artifact_request(uri, use_verify_header=use_verify_header, artifact_manifest_url=host_ga_plugin.manifest_uri) return self.stream(request_uri, target_file, headers=request_headers, use_proxy=False) on_downloaded = lambda: WireClient._try_expand_zip_package(package_type, target_file, target_directory) self._download_with_fallback_channel(package_type, uris, direct_download=direct_download, hgap_download=hgap_download, on_downloaded=on_downloaded) def _download_with_fallback_channel(self, download_type, uris, direct_download, hgap_download, on_downloaded=None): """ Walks the given list of 'uris' issuing HTTP GET requests, attempting to download the content of each URI. The download is done using both the default and the fallback channels, until one of them succeeds. The 'direct_download' and 'hgap_download' functions define the logic to do direct calls to the URI or to use the HostGAPlugin as a proxy for the download. Initially the default channel is the direct download and the fallback channel is the HostGAPlugin, but the default can be depending on the success/failure of each channel (see _download_using_appropriate_channel() for the logic to do this). The 'download_type' is added to any log messages produced by this method; it should describe the type of content of the given URIs (e.g. "manifest", "extension package, "agent package", etc). When the download is successful, _download_with_fallback_channel invokes the 'on_downloaded' function, which can be used to process the results of the download. This function should return True on success, and False on failure (it should not raise any exceptions). 
If the return value is False, the download is considered a failure and the next URI is tried. When the download succeeds, this method returns a (uri, response) tuple where the first item is the URI of the successful download and the second item is the response returned by the successful channel (i.e. one of direct_download and hgap_download). This method enforces a timeout (_DOWNLOAD_TIMEOUT) on the download and raises an exception if the limit is exceeded. """ logger.info("Downloading {0}", download_type) start_time = datetime.now() uris_shuffled = uris random.shuffle(uris_shuffled) most_recent_error = "None" for index, uri in enumerate(uris_shuffled): elapsed = datetime.now() - start_time if elapsed > _DOWNLOAD_TIMEOUT: message = "Timeout downloading {0}. Elapsed: {1} URIs tried: {2}/{3}. Last error: {4}".format(download_type, elapsed, index, len(uris), ustr(most_recent_error)) raise ExtensionDownloadError(message, code=ExtensionErrorCodes.PluginManifestDownloadError) try: # Disable W0640: OK to use uri in a lambda within the loop's body response = self._download_using_appropriate_channel(lambda: direct_download(uri), lambda: hgap_download(uri)) # pylint: disable=W0640 if on_downloaded is not None: on_downloaded() return uri, response except Exception as exception: most_recent_error = exception raise ExtensionDownloadError("Failed to download {0} from all URIs. Last error: {1}".format(download_type, ustr(most_recent_error)), code=ExtensionErrorCodes.PluginManifestDownloadError) @staticmethod def _try_expand_zip_package(package_type, target_file, target_directory): logger.info("Unzipping {0}: {1}", package_type, target_file) try: zipfile.ZipFile(target_file).extractall(target_directory) except Exception as exception: logger.error("Error while unzipping {0}: {1}", package_type, ustr(exception)) if os.path.exists(target_directory): try: shutil.rmtree(target_directory) except Exception as exception: logger.warn("Cannot delete {0}: {1}", target_directory, ustr(exception)) raise finally: try: os.remove(target_file) except Exception as exception: logger.warn("Cannot delete {0}: {1}", target_file, ustr(exception)) def stream(self, uri, destination, headers=None, use_proxy=None): """ Downloads the content of the given 'uri' and saves it to the 'destination' file. """ try: logger.verbose("Fetch [{0}] with headers [{1}] to file [{2}]", uri, headers, destination) response = self._fetch_response(uri, headers, use_proxy) if response is not None and not restutil.request_failed(response): chunk_size = 1024 * 1024 # 1MB buffer with open(destination, 'wb', chunk_size) as destination_fh: complete = False while not complete: chunk = response.read(chunk_size) destination_fh.write(chunk) complete = len(chunk) < chunk_size return "" except: if os.path.exists(destination): # delete the destination file, in case we did a partial download try: os.remove(destination) except Exception as exception: logger.warn("Can't delete {0}: {1}", destination, ustr(exception)) raise def fetch(self, uri, headers=None, use_proxy=None, decode=True, retry_codes=None, ok_codes=None): """ Returns a tuple with the content and headers of the response. The headers are a list of (name, value) tuples. 
""" logger.verbose("Fetch [{0}] with headers [{1}]", uri, headers) content = None response_headers = None response = self._fetch_response(uri, headers, use_proxy, retry_codes=retry_codes, ok_codes=ok_codes) if response is not None and not restutil.request_failed(response, ok_codes=ok_codes): response_content = response.read() content = self.decode_config(response_content) if decode else response_content response_headers = response.getheaders() return content, response_headers def _fetch_response(self, uri, headers=None, use_proxy=None, retry_codes=None, ok_codes=None): resp = None try: resp = self.call_storage_service( restutil.http_get, uri, headers=headers, use_proxy=use_proxy, retry_codes=retry_codes) host_plugin = self.get_host_plugin() if restutil.request_failed(resp, ok_codes=ok_codes): error_response = restutil.read_response_error(resp) msg = "Fetch failed from [{0}]: {1}".format(uri, error_response) logger.warn(msg) if host_plugin is not None: host_plugin.report_fetch_health(uri, is_healthy=not restutil.request_failed_at_hostplugin(resp), source='WireClient', response=error_response) raise ProtocolError(msg) else: if host_plugin is not None: host_plugin.report_fetch_health(uri, source='WireClient') except (HttpError, ProtocolError, IOError) as error: msg = "Fetch failed: {0}".format(error) logger.warn(msg) report_event(op=WALAEventOperation.HttpGet, is_success=False, message=msg, log_event=False) raise return resp def update_host_plugin_from_goal_state(self): """ Fetches a new goal state and updates the Container ID and Role Config Name of the host plugin client """ if self._host_plugin is not None: GoalState.update_host_plugin_headers(self) def update_host_plugin(self, container_id, role_config_name): if self._host_plugin is not None: self._host_plugin.update_container_id(container_id) self._host_plugin.update_role_config_name(role_config_name) def update_goal_state(self, silent=False, save_to_history=False): """ Updates the goal state if the incarnation or etag changed """ try: if self._goal_state is None: self._goal_state = GoalState(self, silent=silent, save_to_history=save_to_history) else: self._goal_state.update(silent=silent) except ProtocolError: raise except Exception as exception: raise ProtocolError("Error fetching goal state: {0}".format(ustr(exception))) def reset_goal_state(self, goal_state_properties=GoalStateProperties.All, silent=False, save_to_history=False): """ Resets the goal state """ try: if not silent: logger.info("Forcing an update of the goal state.") self._goal_state = GoalState(self, goal_state_properties=goal_state_properties, silent=silent, save_to_history=save_to_history) except ProtocolError: raise except Exception as exception: raise ProtocolError("Error fetching goal state: {0}".format(ustr(exception))) def get_goal_state(self): if self._goal_state is None: raise ProtocolError("Trying to fetch goal state before initialization!") return self._goal_state def get_hosting_env(self): if self._goal_state is None: raise ProtocolError("Trying to fetch Hosting Environment before initialization!") return self._goal_state.hosting_env def get_shared_conf(self): if self._goal_state is None: raise ProtocolError("Trying to fetch Shared Conf before initialization!") return self._goal_state.shared_conf def get_certs(self): if self._goal_state is None: raise ProtocolError("Trying to fetch Certificates before initialization!") return self._goal_state.certs def get_remote_access(self): if self._goal_state is None: raise ProtocolError("Trying to fetch Remote Access 
before initialization!") return self._goal_state.remote_access def check_wire_protocol_version(self): uri = VERSION_INFO_URI.format(self.get_endpoint()) version_info_xml = self.fetch_config(uri, None) version_info = VersionInfo(version_info_xml) preferred = version_info.get_preferred() if PROTOCOL_VERSION == preferred: logger.info("Wire protocol version:{0}", PROTOCOL_VERSION) elif PROTOCOL_VERSION in version_info.get_supported(): logger.info("Wire protocol version:{0}", PROTOCOL_VERSION) logger.info("Server preferred version:{0}", preferred) else: error = ("Agent supported wire protocol version: {0} was not " "advised by Fabric.").format(PROTOCOL_VERSION) raise ProtocolNotFoundError(error) def _call_hostplugin_with_container_check(self, host_func): """ Calls host_func on host channel and accounts for stale resource (ResourceGoneError or InvalidContainerError). If stale, it refreshes the goal state and retries host_func. """ try: return host_func() except (ResourceGoneError, InvalidContainerError) as error: host_plugin = self.get_host_plugin() old_container_id, old_role_config_name = host_plugin.container_id, host_plugin.role_config_name msg = "[PERIODIC] Request failed with the current host plugin configuration. " \ "ContainerId: {0}, role config file: {1}. Fetching new goal state and retrying the call." \ "Error: {2}".format(old_container_id, old_role_config_name, ustr(error)) logger.periodic_info(logger.EVERY_SIX_HOURS, msg) self.update_host_plugin_from_goal_state() new_container_id, new_role_config_name = host_plugin.container_id, host_plugin.role_config_name msg = "[PERIODIC] Host plugin reconfigured with new parameters. " \ "ContainerId: {0}, role config file: {1}.".format(new_container_id, new_role_config_name) logger.periodic_info(logger.EVERY_SIX_HOURS, msg) try: ret = host_func() msg = "[PERIODIC] Request succeeded using the host plugin channel after goal state refresh. " \ "ContainerId changed from {0} to {1}, " \ "role config file changed from {2} to {3}.".format(old_container_id, new_container_id, old_role_config_name, new_role_config_name) add_periodic(delta=logger.EVERY_SIX_HOURS, name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.HostPlugin, is_success=True, message=msg, log_event=True) return ret except (ResourceGoneError, InvalidContainerError) as error: msg = "[PERIODIC] Request failed using the host plugin channel after goal state refresh. " \ "ContainerId changed from {0} to {1}, role config file changed from {2} to {3}. " \ "Exception type: {4}.".format(old_container_id, new_container_id, old_role_config_name, new_role_config_name, type(error).__name__) add_periodic(delta=logger.EVERY_SIX_HOURS, name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.HostPlugin, is_success=False, message=msg, log_event=True) raise def _download_using_appropriate_channel(self, direct_download, hgap_download): """ Does a download using both the default and fallback channels. By default, the primary channel is direct, host channel is the fallback. We call the primary channel first and return on success. If primary fails, we try the fallback. If fallback fails, we return and *don't* switch the default channel. If fallback succeeds, we change the default channel. 
""" hgap_download_function_with_retry = lambda: self._call_hostplugin_with_container_check(hgap_download) if HostPluginProtocol.is_default_channel: primary_channel, secondary_channel = hgap_download_function_with_retry, direct_download else: primary_channel, secondary_channel = direct_download, hgap_download_function_with_retry try: return primary_channel() except Exception as exception: primary_channel_error = exception try: return_value = secondary_channel() # Since the secondary channel succeeded, flip the default channel HostPluginProtocol.is_default_channel = not HostPluginProtocol.is_default_channel message = "Default channel changed to {0} channel.".format("HostGAPlugin" if HostPluginProtocol.is_default_channel else "Direct") logger.info(message) add_event(AGENT_NAME, op=WALAEventOperation.DefaultChannelChange, version=CURRENT_VERSION, is_success=True, message=message, log_event=False) return return_value except Exception as exception: raise HttpError("Download failed both on the primary and fallback channels. Primary: [{0}] Fallback: [{1}]".format(ustr(primary_channel_error), ustr(exception))) def upload_status_blob(self): extensions_goal_state = self.get_goal_state().extensions_goal_state if extensions_goal_state.status_upload_blob is None: # the status upload blob is in ExtensionsConfig so force a full goal state refresh self.reset_goal_state(silent=True, save_to_history=True) extensions_goal_state = self.get_goal_state().extensions_goal_state if extensions_goal_state.status_upload_blob is None: raise ProtocolNotFoundError("Status upload uri is missing") logger.info("Refreshed the goal state to get the status upload blob. New Goal State ID: {0}", extensions_goal_state.id) blob_type = extensions_goal_state.status_upload_blob_type try: self.status_blob.prepare(blob_type) except Exception as e: raise ProtocolError("Exception creating status blob: {0}".format(ustr(e))) # Swap the order of use for the HostPlugin vs. the "direct" route. # Prefer the use of HostPlugin. If HostPlugin fails fall back to the # direct route. # # The code previously preferred the "direct" route always, and only fell back # to the HostPlugin *if* there was an error. We would like to move to # the HostPlugin for all traffic, but this is a big change. We would like # to see how this behaves at scale, and have a fallback should things go # wrong. This is why we try HostPlugin then direct. 
try: host = self.get_host_plugin() host.put_vm_status(self.status_blob, extensions_goal_state.status_upload_blob, extensions_goal_state.status_upload_blob_type) return except ResourceGoneError: # refresh the host plugin client and try again on the next iteration of the main loop self.update_host_plugin_from_goal_state() return except Exception as e: # for all other errors, fall back to direct msg = "Falling back to direct upload: {0}".format(ustr(e)) self.report_status_event(msg, is_success=True) try: if self.status_blob.upload(extensions_goal_state.status_upload_blob): return except Exception as e: msg = "Exception uploading status blob: {0}".format(ustr(e)) self.report_status_event(msg, is_success=False) raise ProtocolError("Failed to upload status blob via either channel") def report_role_prop(self, thumbprint): goal_state = self.get_goal_state() role_prop = _build_role_properties(goal_state.container_id, goal_state.role_instance_id, thumbprint) role_prop = role_prop.encode("utf-8") role_prop_uri = ROLE_PROP_URI.format(self.get_endpoint()) headers = self.get_header_for_xml_content() try: resp = self.call_wireserver(restutil.http_post, role_prop_uri, role_prop, headers=headers) except HttpError as e: raise ProtocolError((u"Failed to send role properties: " u"{0}").format(e)) if resp.status != httpclient.ACCEPTED: raise ProtocolError((u"Failed to send role properties: " u",{0}: {1}").format(resp.status, resp.read())) def report_health(self, status, substatus, description): goal_state = self.get_goal_state() health_report = _build_health_report(goal_state.incarnation, goal_state.container_id, goal_state.role_instance_id, status, substatus, description) health_report = health_report.encode("utf-8") health_report_uri = HEALTH_REPORT_URI.format(self.get_endpoint()) headers = self.get_header_for_xml_content() try: # 30 retries with 10s sleep gives ~5min for wireserver updates; # this is retried 3 times with 15s sleep before throwing a # ProtocolError, for a total of ~15min. resp = self.call_wireserver(restutil.http_post, health_report_uri, health_report, headers=headers, max_retry=30, retry_delay=15) except HttpError as e: raise ProtocolError((u"Failed to send provision status: " u"{0}").format(e)) if restutil.request_failed(resp): raise ProtocolError((u"Failed to send provision status: " u",{0}: {1}").format(resp.status, resp.read())) def send_encoded_event(self, provider_id, event_str, encoding='utf8'): uri = TELEMETRY_URI.format(self.get_endpoint()) data_format_header = ustr('').format( provider_id).encode(encoding) data_format_footer = ustr('').encode(encoding) # Event string should already be encoded by the time it gets here, to avoid double encoding, # dividing it into parts. data = data_format_header + event_str + data_format_footer try: header = self.get_header_for_xml_content() # NOTE: The call to wireserver requests utf-8 encoding in the headers, but the body should not # be encoded: some nodes in the telemetry pipeline do not support utf-8 encoding. 
resp = self.call_wireserver(restutil.http_post, uri, data, header) except HttpError as e: raise ProtocolError("Failed to send events:{0}".format(e)) if restutil.request_failed(resp): logger.verbose(resp.read()) raise ProtocolError( "Failed to send events:{0}".format(resp.status)) def report_event(self, events_iterator): buf = {} debug_info = CollectOrReportEventDebugInfo(operation=CollectOrReportEventDebugInfo.OP_REPORT) events_per_provider = defaultdict(int) def _send_event(provider_id, debug_info): try: self.send_encoded_event(provider_id, buf[provider_id]) except UnicodeError as uni_error: debug_info.update_unicode_error(uni_error) except Exception as error: debug_info.update_op_error(error) # Group events by providerId for event in events_iterator: try: if event.providerId not in buf: buf[event.providerId] = b"" event_str = event_to_v1_encoded(event) if len(event_str) >= MAX_EVENT_BUFFER_SIZE: # Ignore single events that are too large to send out details_of_event = [ustr(x.name) + ":" + ustr(x.value) for x in event.parameters if x.name in [GuestAgentExtensionEventsSchema.Name, GuestAgentExtensionEventsSchema.Version, GuestAgentExtensionEventsSchema.Operation, GuestAgentExtensionEventsSchema.OperationSuccess]] logger.periodic_warn(logger.EVERY_HALF_HOUR, "Single event too large: {0}, with the length: {1} more than the limit({2})" .format(str(details_of_event), len(event_str), MAX_EVENT_BUFFER_SIZE)) continue # If buffer is full, send out the events in buffer and reset buffer if len(buf[event.providerId] + event_str) >= MAX_EVENT_BUFFER_SIZE: logger.verbose("No of events this request = {0}".format(events_per_provider[event.providerId])) _send_event(event.providerId, debug_info) buf[event.providerId] = b"" events_per_provider[event.providerId] = 0 # Add encoded events to the buffer buf[event.providerId] = buf[event.providerId] + event_str events_per_provider[event.providerId] += 1 except Exception as error: logger.warn("Unexpected error when generating Events:{0}", textutil.format_exception(error)) # Send out all events left in buffer. 
for provider_id in list(buf.keys()): if buf[provider_id]: logger.verbose("No of events this request = {0}".format(events_per_provider[provider_id])) _send_event(provider_id, debug_info) debug_info.report_debug_info() def report_status_event(self, message, is_success): report_event(op=WALAEventOperation.ReportStatus, is_success=is_success, message=message, log_event=not is_success) def get_header(self): return { "x-ms-agent-name": "WALinuxAgent", "x-ms-version": PROTOCOL_VERSION } def get_header_for_xml_content(self): return { "x-ms-agent-name": "WALinuxAgent", "x-ms-version": PROTOCOL_VERSION, "Content-Type": "text/xml;charset=utf-8" } def get_header_for_cert(self): trans_cert_file = os.path.join(conf.get_lib_dir(), TRANSPORT_CERT_FILE_NAME) try: content = fileutil.read_file(trans_cert_file) except IOError as e: raise ProtocolError("Failed to read {0}: {1}".format(trans_cert_file, e)) cert = get_bytes_from_pem(content) return { "x-ms-agent-name": "WALinuxAgent", "x-ms-version": PROTOCOL_VERSION, "x-ms-cipher-name": "DES_EDE3_CBC", "x-ms-guest-agent-public-x509-cert": cert } def get_host_plugin(self): if self._host_plugin is None: self._host_plugin = HostPluginProtocol(self.get_endpoint()) GoalState.update_host_plugin_headers(self) return self._host_plugin def get_on_hold(self): return self.get_goal_state().extensions_goal_state.on_hold def upload_logs(self, content): host = self.get_host_plugin() return host.put_vm_log(content) class VersionInfo(object): def __init__(self, xml_text): """ Query endpoint server for wire protocol version. Fail if our desired protocol version is not seen. """ logger.verbose("Load Version.xml") self.parse(xml_text) def parse(self, xml_text): xml_doc = parse_doc(xml_text) preferred = find(xml_doc, "Preferred") self.preferred = findtext(preferred, "Version") logger.info("Fabric preferred wire protocol version:{0}", self.preferred) self.supported = [] supported = find(xml_doc, "Supported") supported_version = findall(supported, "Version") for node in supported_version: version = gettext(node) logger.verbose("Fabric supported wire protocol version:{0}", version) self.supported.append(version) def get_preferred(self): return self.preferred def get_supported(self): return self.supported # Do not extend this class class InVMArtifactsProfile(object): """ deserialized json string of InVMArtifactsProfile. It is expected to contain the following fields: * inVMArtifactsProfileBlobSeqNo * profileId (optional) * onHold (optional) * certificateThumbprint (optional) * encryptedHealthChecks (optional) * encryptedApplicationProfile (optional) """ def __init__(self, artifacts_profile): if not textutil.is_str_empty(artifacts_profile): self.__dict__.update(parse_json(artifacts_profile)) def is_on_hold(self): # hasattr() is not available in Python 2.6 if 'onHold' in self.__dict__: return str(self.onHold).lower() == 'true' # pylint: disable=E1101 return False Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/singletonperthread.py000066400000000000000000000030561462617747000265240ustar00rootroot00000000000000from threading import Lock, currentThread class _SingletonPerThreadMetaClass(type): """ A metaclass that creates a SingletonPerThread base class when called. 
""" _instances = {} _lock = Lock() def __call__(cls, *args, **kwargs): with cls._lock: obj_name = "%s__%s" % (cls.__name__, currentThread().getName()) # Object Name = className__threadName if obj_name not in cls._instances: cls._instances[obj_name] = super(_SingletonPerThreadMetaClass, cls).__call__(*args, **kwargs) return cls._instances[obj_name] class SingletonPerThread(_SingletonPerThreadMetaClass('SingleObjectPerThreadMetaClass', (object,), {})): # This base class calls the metaclass above to create the singleton per thread object. This class provides an # abstraction over how to invoke the Metaclass so just inheriting this class makes the # child class a singleton per thread (As opposed to invoking the Metaclass separately for each derived classes) # More info here - https://stackoverflow.com/questions/6760685/creating-a-singleton-in-python # # Usage: # Inheriting this class will create a Singleton per thread for that class # To delete the cached object of a class, call DerivedClassName.clear() to delete the object per thread # Note: If the thread dies and is recreated with the same thread name, the existing object would be reused # and no new object for the derived class would be created unless DerivedClassName.clear() is called explicitly to # delete the cache pass Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/telemetryevent.py000066400000000000000000000073301462617747000256760ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.datacontract import DataContract, DataContractList from azurelinuxagent.common.version import AGENT_NAME class CommonTelemetryEventSchema(object): # Common schema keys for GuestAgentExtensionEvents, GuestAgentGenericLogs # and GuestAgentPerformanceCounterEvents tables in Kusto. 
EventPid = "EventPid" EventTid = "EventTid" GAVersion = "GAVersion" ContainerId = "ContainerId" TaskName = "TaskName" OpcodeName = "OpcodeName" KeywordName = "KeywordName" OSVersion = "OSVersion" ExecutionMode = "ExecutionMode" RAM = "RAM" Processors = "Processors" TenantName = "TenantName" RoleName = "RoleName" RoleInstanceName = "RoleInstanceName" Location = "Location" SubscriptionId = "SubscriptionId" ResourceGroupName = "ResourceGroupName" VMId = "VMId" ImageOrigin = "ImageOrigin" class GuestAgentGenericLogsSchema(CommonTelemetryEventSchema): # GuestAgentGenericLogs table specific schema keys EventName = "EventName" CapabilityUsed = "CapabilityUsed" Context1 = "Context1" Context2 = "Context2" Context3 = "Context3" class GuestAgentExtensionEventsSchema(CommonTelemetryEventSchema): # GuestAgentExtensionEvents table specific schema keys ExtensionType = "ExtensionType" IsInternal = "IsInternal" Name = "Name" Version = "Version" Operation = "Operation" OperationSuccess = "OperationSuccess" Message = "Message" Duration = "Duration" class GuestAgentPerfCounterEventsSchema(CommonTelemetryEventSchema): # GuestAgentPerformanceCounterEvents table specific schema keys Category = "Category" Counter = "Counter" Instance = "Instance" Value = "Value" class TelemetryEventParam(DataContract): def __init__(self, name=None, value=None): self.name = name self.value = value def __eq__(self, other): return isinstance(other, TelemetryEventParam) and other.name == self.name and other.value == self.value class TelemetryEvent(DataContract): def __init__(self, eventId=None, providerId=None): self.eventId = eventId self.providerId = providerId self.parameters = DataContractList(TelemetryEventParam) self.file_type = "" # Checking if the particular param name is in the TelemetryEvent. def __contains__(self, param_name): return param_name in [param.name for param in self.parameters] def is_extension_event(self): # Events originating from the agent have "WALinuxAgent" as the Name parameter, or they don't have a Name # parameter, in the case of log and metric events. So, in case the Name parameter exists and it is not # "WALinuxAgent", it is an extension event. for param in self.parameters: if param.name == GuestAgentExtensionEventsSchema.Name: return param.value != AGENT_NAME return False def get_version(self): for param in self.parameters: if param.name == GuestAgentExtensionEventsSchema.Version: return param.value return None Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/000077500000000000000000000000001462617747000234055ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/__init__.py000066400000000000000000000011661462617747000255220ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/archive.py000066400000000000000000000255071462617747000254110ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. 
All rights reserved. # Licensed under the Apache License. import errno import glob import os import re import shutil import zipfile from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.utils import fileutil, timeutil # pylint: disable=W0105 """ archive.py The module supports the archiving of guest agent state. Guest agent state is flushed whenever there is an incarnation change. The flush is archived periodically (once a day). The process works as follows whenever a new incarnation arrives. 1. Flush - move all state files to a new directory under .../history/timestamp/. 2. Archive - enumerate all directories under .../history/timestamp and create a .zip file named timestamp.zip. Delete the archive directory. 3. Purge - glob the list of .zip files, sort by timestamp in descending order, keep the first 50 results, and delete the rest. ... is the directory where the agent's state resides; by default this is /var/lib/waagent. The timestamp is an ISO8601 formatted value. """ # pylint: enable=W0105 ARCHIVE_DIRECTORY_NAME = 'history' # TODO: See comment in GoalStateHistory._save_placeholder and remove this code when no longer needed _PLACEHOLDER_FILE_NAME = 'GoalState.1.xml' # END TODO _MAX_ARCHIVED_STATES = 50 _CACHE_PATTERNS = [ # # Note that SharedConfig.xml is not included here; this file is used by other components (Azsec and Singularity/HPC Infiniband) # re.compile(r"^VmSettings\.\d+\.json$"), re.compile(r"^(.*)\.(\d+)\.(agentsManifest)$", re.IGNORECASE), re.compile(r"^(.*)\.(\d+)\.(manifest\.xml)$", re.IGNORECASE), re.compile(r"^(.*)\.(\d+)\.(xml)$", re.IGNORECASE), re.compile(r"^HostingEnvironmentConfig\.xml$", re.IGNORECASE), re.compile(r"^RemoteAccess\.xml$", re.IGNORECASE), re.compile(r"^waagent_status\.\d+\.json$"), ] # # Legacy names # 2018-04-06T08:21:37.142697 # 2018-04-06T08:21:37.142697.zip # 2018-04-06T08:21:37.142697_incarnation_N # 2018-04-06T08:21:37.142697_incarnation_N.zip # 2018-04-06T08:21:37.142697_N-M # 2018-04-06T08:21:37.142697_N-M.zip # # Current names # # 2018-04-06T08-21-37__N-M # 2018-04-06T08-21-37__N-M.zip # _ARCHIVE_BASE_PATTERN = r"\d{4}\-\d{2}\-\d{2}T\d{2}[:-]\d{2}[:-]\d{2}(\.\d+)?((_incarnation)?_+(\d+|status)(-\d+)?)?" _ARCHIVE_PATTERNS_DIRECTORY = re.compile(r'^{0}$'.format(_ARCHIVE_BASE_PATTERN)) _ARCHIVE_PATTERNS_ZIP = re.compile(r'^{0}\.zip$'.format(_ARCHIVE_BASE_PATTERN)) _GOAL_STATE_FILE_NAME = "GoalState.xml" _VM_SETTINGS_FILE_NAME = "VmSettings.json" _CERTIFICATES_FILE_NAME = "Certificates.json" _HOSTING_ENV_FILE_NAME = "HostingEnvironmentConfig.xml" _REMOTE_ACCESS_FILE_NAME = "RemoteAccess.xml" _EXT_CONF_FILE_NAME = "ExtensionsConfig.xml" _MANIFEST_FILE_NAME = "{0}.manifest.xml" AGENT_STATUS_FILE = "waagent_status.json" SHARED_CONF_FILE_NAME = "SharedConfig.xml" # TODO: use @total_ordering once RHEL/CentOS and SLES 11 are EOL. # @total_ordering first appeared in Python 2.7 and 3.2 # If there are more use cases for @total_ordering, I will # consider re-implementing it.
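# Illustrative only: the rich comparison operators defined below make State objects orderable by
# timestamp, so a hypothetical list of states could be sorted with sorted(states) (oldest first)
# or queried with max(states) (most recent flush) without supplying a key function.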
class State(object): def __init__(self, path, timestamp): self._path = path self._timestamp = timestamp @property def timestamp(self): return self._timestamp def delete(self): pass def archive(self): pass def __eq__(self, other): return self._timestamp == other.timestamp def __ne__(self, other): return self._timestamp != other.timestamp def __lt__(self, other): return self._timestamp < other.timestamp def __gt__(self, other): return self._timestamp > other.timestamp def __le__(self, other): return self._timestamp <= other.timestamp def __ge__(self, other): return self._timestamp >= other.timestamp class StateZip(State): def delete(self): os.remove(self._path) class StateDirectory(State): def delete(self): shutil.rmtree(self._path) def archive(self): fn_tmp = "{0}.zip.tmp".format(self._path) filename = "{0}.zip".format(self._path) ziph = None try: # contextmanager for zipfile.ZipFile doesn't exist for py2.6, manually closing it ziph = zipfile.ZipFile(fn_tmp, 'w') for current_file in os.listdir(self._path): full_path = os.path.join(self._path, current_file) ziph.write(full_path, current_file, zipfile.ZIP_DEFLATED) finally: if ziph is not None: ziph.close() os.rename(fn_tmp, filename) shutil.rmtree(self._path) class StateArchiver(object): def __init__(self, lib_dir): self._source = os.path.join(lib_dir, ARCHIVE_DIRECTORY_NAME) if not os.path.isdir(self._source): try: fileutil.mkdir(self._source, mode=0o700) except IOError as exception: if exception.errno != errno.EEXIST: logger.warn("{0} : {1}", self._source, exception.strerror) @staticmethod def purge_legacy_goal_state_history(): lib_dir = conf.get_lib_dir() for current_file in os.listdir(lib_dir): # Don't remove the placeholder goal state file. # TODO: See comment in GoalStateHistory._save_placeholder and remove this code when no longer needed if current_file == _PLACEHOLDER_FILE_NAME: continue # END TODO full_path = os.path.join(lib_dir, current_file) for pattern in _CACHE_PATTERNS: match = pattern.match(current_file) if match is not None: try: os.remove(full_path) except Exception as e: logger.warn("Cannot delete legacy history file '{0}': {1}".format(full_path, e)) break def archive(self): states = self._get_archive_states() if len(states) > 0: # Skip the most recent goal state, since it may still be in use for state in states[1:]: state.archive() def _get_archive_states(self): states = [] for current_file in os.listdir(self._source): full_path = os.path.join(self._source, current_file) match = _ARCHIVE_PATTERNS_DIRECTORY.match(current_file) if match is not None: states.append(StateDirectory(full_path, match.group(0))) match = _ARCHIVE_PATTERNS_ZIP.match(current_file) if match is not None: states.append(StateZip(full_path, match.group(0))) states.sort(key=lambda state: os.path.getctime(state._path), reverse=True) return states class GoalStateHistory(object): def __init__(self, time, tag): self._errors = False timestamp = timeutil.create_history_timestamp(time) self._root = os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME, "{0}__{1}".format(timestamp, tag) if tag is not None else timestamp) GoalStateHistory._purge() @staticmethod def tag_exists(tag): """ Returns True when an item with the given 'tag' already exists in the history directory """ return len(glob.glob(os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME, "*_{0}".format(tag)))) > 0 def save(self, data, file_name): try: if not os.path.exists(self._root): fileutil.mkdir(self._root, mode=0o700) with open(os.path.join(self._root, file_name), "w") as handle: 
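# The artifact is written verbatim under .../history/<timestamp>__<tag>/<file_name>;
# StateArchiver later zips and purges these per-goal-state directories.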
handle.write(data) except Exception as e: if not self._errors: # report only 1 error per directory self._errors = True logger.warn("Failed to save {0} to the goal state history: {1} [no additional errors saving the goal state will be reported]".format(file_name, e)) _purge_error_count = 0 @staticmethod def _purge(): """ Delete "old" history directories and .zip archives. Old is defined as any directories or files beyond the _MAX_ARCHIVED_STATES newest ones. """ try: history_root = os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME) if not os.path.exists(history_root): return items = [] for current_item in os.listdir(history_root): full_path = os.path.join(history_root, current_item) items.append(full_path) items.sort(key=os.path.getctime, reverse=True) for current_item in items[_MAX_ARCHIVED_STATES:]: if os.path.isfile(current_item): os.remove(current_item) else: shutil.rmtree(current_item) if GoalStateHistory._purge_error_count > 0: GoalStateHistory._purge_error_count = 0 # Log a success message when we are recovering from errors. logger.info("Successfully cleaned up the goal state history directory") except Exception as e: GoalStateHistory._purge_error_count += 1 if GoalStateHistory._purge_error_count < 5: logger.warn("Failed to clean up the goal state history directory: {0}".format(e)) elif GoalStateHistory._purge_error_count == 5: logger.warn("Failed to clean up the goal state history directory [will stop reporting these errors]: {0}".format(e)) @staticmethod def _save_placeholder(): """ Some internal components took a dependency on the legacy GoalState.*.xml file. We create it here while those components are updated to remove the dependency. When removing this code, also remove the check in StateArchiver.purge_legacy_goal_state_history, and the definition of _PLACEHOLDER_FILE_NAME """ try: placeholder = os.path.join(conf.get_lib_dir(), _PLACEHOLDER_FILE_NAME) with open(placeholder, "w") as handle: handle.write("empty placeholder file") except Exception as e: logger.warn("Failed to save placeholder file ({0}): {1}".format(_PLACEHOLDER_FILE_NAME, e)) def save_goal_state(self, text): self.save(text, _GOAL_STATE_FILE_NAME) self._save_placeholder() def save_extensions_config(self, text): self.save(text, _EXT_CONF_FILE_NAME) def save_vm_settings(self, text): self.save(text, _VM_SETTINGS_FILE_NAME) def save_remote_access(self, text): self.save(text, _REMOTE_ACCESS_FILE_NAME) def save_certificates(self, text): self.save(text, _CERTIFICATES_FILE_NAME) def save_hosting_env(self, text): self.save(text, _HOSTING_ENV_FILE_NAME) def save_shared_conf(self, text): self.save(text, SHARED_CONF_FILE_NAME) def save_manifest(self, name, text): self.save(text, _MANIFEST_FILE_NAME.format(name)) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/cryptutil.py000066400000000000000000000164601462617747000260250ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # import base64 import errno import struct import os.path import subprocess from azurelinuxagent.common.future import ustr, bytebuffer from azurelinuxagent.common.exception import CryptError import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil DECRYPT_SECRET_CMD = "{0} cms -decrypt -inform DER -inkey {1} -in /dev/stdin" class CryptUtil(object): def __init__(self, openssl_cmd): self.openssl_cmd = openssl_cmd def gen_transport_cert(self, prv_file, crt_file): """ Create ssl certificate for https communication with endpoint server. """ cmd = [self.openssl_cmd, "req", "-x509", "-nodes", "-subj", "/CN=LinuxTransport", "-days", "730", "-newkey", "rsa:2048", "-keyout", prv_file, "-out", crt_file] try: shellutil.run_command(cmd) except shellutil.CommandError as cmd_err: msg = "Failed to create {0} and {1} certificates.\n[stdout]\n{2}\n\n[stderr]\n{3}\n"\ .format(prv_file, crt_file, cmd_err.stdout, cmd_err.stderr) logger.error(msg) def get_pubkey_from_prv(self, file_name): if not os.path.exists(file_name): raise IOError(errno.ENOENT, "File not found", file_name) # OpenSSL's pkey command may not be available on older versions so try 'rsa' first. try: command = [self.openssl_cmd, "rsa", "-in", file_name, "-pubout"] return shellutil.run_command(command, log_error=False) except shellutil.CommandError as error: if not ("Not an RSA key" in error.stderr or "expecting an rsa key" in error.stderr): logger.error( "Command: [{0}], return code: [{1}], stdout: [{2}] stderr: [{3}]", " ".join(command), error.returncode, error.stdout, error.stderr) raise return shellutil.run_command([self.openssl_cmd, "pkey", "-in", file_name, "-pubout"], log_error=True) def get_pubkey_from_crt(self, file_name): if not os.path.exists(file_name): raise IOError(errno.ENOENT, "File not found", file_name) else: cmd = [self.openssl_cmd, "x509", "-in", file_name, "-pubkey", "-noout"] pub = shellutil.run_command(cmd, log_error=True) return pub def get_thumbprint_from_crt(self, file_name): if not os.path.exists(file_name): raise IOError(errno.ENOENT, "File not found", file_name) else: cmd = [self.openssl_cmd, "x509", "-in", file_name, "-fingerprint", "-noout"] thumbprint = shellutil.run_command(cmd) thumbprint = thumbprint.rstrip().split('=')[1].replace(':', '').upper() return thumbprint def decrypt_p7m(self, p7m_file, trans_prv_file, trans_cert_file, pem_file): if not os.path.exists(p7m_file): raise IOError(errno.ENOENT, "File not found", p7m_file) elif not os.path.exists(trans_prv_file): raise IOError(errno.ENOENT, "File not found", trans_prv_file) else: try: shellutil.run_pipe([ [self.openssl_cmd, "cms", "-decrypt", "-in", p7m_file, "-inkey", trans_prv_file, "-recip", trans_cert_file], [self.openssl_cmd, "pkcs12", "-nodes", "-password", "pass:", "-out", pem_file]]) except shellutil.CommandError as command_error: logger.error("Failed to decrypt {0} (return code: {1})\n[stdout]\n{2}\n[stderr]\n{3}", p7m_file, command_error.returncode, command_error.stdout, command_error.stderr) def crt_to_ssh(self, input_file, output_file): with open(output_file, "ab") as file_out: cmd = ["ssh-keygen", "-i", "-m", "PKCS8", "-f", input_file] try: shellutil.run_command(cmd, stdout=file_out, log_error=True) except shellutil.CommandError: pass # nothing to do; the error is already logged def asn1_to_ssh(self, pubkey): lines = pubkey.split("\n") lines = [x for x in lines if not x.startswith("----")] base64_encoded = "".join(lines) try: #TODO remove pyasn1 dependency 
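# The DER blob is a SubjectPublicKeyInfo: the outer decode below yields the key BIT STRING,
# and the inner decode yields the RSAPublicKey sequence (modulus n, exponent e). The key is
# then re-packed in the RFC 4253 "ssh-rsa" wire format: a length-prefixed key type string,
# the mpint exponent, then the mpint modulus (the extra leading zero byte keeps the modulus
# from being interpreted as a negative number).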
from pyasn1.codec.der import decoder as der_decoder der_encoded = base64.b64decode(base64_encoded) der_encoded = der_decoder.decode(der_encoded)[0][1] # pylint: disable=unsubscriptable-object key = der_decoder.decode(self.bits_to_bytes(der_encoded))[0] n=key[0] # pylint: disable=unsubscriptable-object e=key[1] # pylint: disable=unsubscriptable-object keydata = bytearray() keydata.extend(struct.pack('>I', len("ssh-rsa"))) keydata.extend(b"ssh-rsa") keydata.extend(struct.pack('>I', len(self.num_to_bytes(e)))) keydata.extend(self.num_to_bytes(e)) keydata.extend(struct.pack('>I', len(self.num_to_bytes(n)) + 1)) keydata.extend(b"\0") keydata.extend(self.num_to_bytes(n)) keydata_base64 = base64.b64encode(bytebuffer(keydata)) return ustr(b"ssh-rsa " + keydata_base64 + b"\n", encoding='utf-8') except ImportError as e: raise CryptError("Failed to load pyasn1.codec.der") def num_to_bytes(self, num): """ Pack number into bytes. Return as a bytearray. """ result = bytearray() while num: result.append(num & 0xFF) num >>= 8 result.reverse() return result def bits_to_bytes(self, bits): """ Convert an array containing bits ([0, 1]) to a byte array """ index = 7 byte_array = bytearray() curr = 0 for bit in bits: curr = curr | (bit << index) index = index - 1 if index == -1: byte_array.append(curr) curr = 0 index = 7 return bytes(byte_array) def decrypt_secret(self, encrypted_password, private_key): try: decoded = base64.b64decode(encrypted_password) args = DECRYPT_SECRET_CMD.format(self.openssl_cmd, private_key).split(' ') output = shellutil.run_command(args, input=decoded, stderr=subprocess.STDOUT, encode_input=False, encode_output=False) return output.decode('utf-16') except shellutil.CommandError as command_error: raise subprocess.CalledProcessError(command_error.returncode, "openssl cms -decrypt", output=command_error.stdout) except Exception as e: raise CryptError("Error decoding secret", e) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/fileutil.py000066400000000000000000000154001462617747000255740ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # """ File operation util functions """ import errno as errno import glob import os import pwd import re import shutil import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.textutil as textutil from azurelinuxagent.common.future import ustr KNOWN_IOERRORS = [ errno.EIO, # I/O error errno.ENOMEM, # Out of memory errno.ENFILE, # File table overflow errno.EMFILE, # Too many open files errno.ENOSPC, # Out of space errno.ENAMETOOLONG, # Name too long errno.ELOOP, # Too many symbolic links encountered 121 # Remote I/O error (errno.EREMOTEIO -- not present in all Python 2.7+) ] def read_file(filepath, asbin=False, remove_bom=False, encoding='utf-8'): """ Read and return contents of 'filepath'.
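Returns bytes when 'asbin' is True; otherwise the data is decoded to a unicode string
using 'encoding' (optionally stripping a UTF-8 BOM first when 'remove_bom' is set).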
""" mode = 'rb' with open(filepath, mode) as in_file: data = in_file.read() if data is None: return None if asbin: return data if remove_bom: # remove bom on bytes data before it is converted into string. data = textutil.remove_bom(data) data = ustr(data, encoding=encoding) return data def write_file(filepath, contents, asbin=False, encoding='utf-8', append=False): """ Write 'contents' to 'filepath'. """ mode = "ab" if append else "wb" data = contents if not asbin: data = contents.encode(encoding) with open(filepath, mode) as out_file: out_file.write(data) def append_file(filepath, contents, asbin=False, encoding='utf-8'): """ Append 'contents' to 'filepath'. """ write_file(filepath, contents, asbin=asbin, encoding=encoding, append=True) def base_name(path): head, tail = os.path.split(path) # pylint: disable=W0612 return tail def get_line_startingwith(prefix, filepath): """ Return line from 'filepath' if the line startswith 'prefix' """ for line in read_file(filepath).split('\n'): if line.startswith(prefix): return line return None def mkdir(dirpath, mode=None, owner=None, reset_mode_and_owner=True): if not os.path.isdir(dirpath): os.makedirs(dirpath) reset_mode_and_owner = True # force setting the mode and owner if reset_mode_and_owner: if mode is not None: chmod(dirpath, mode) if owner is not None: chowner(dirpath, owner) def chowner(path, owner): if not os.path.exists(path): logger.error("Path does not exist: {0}".format(path)) else: owner_info = pwd.getpwnam(owner) os.chown(path, owner_info[2], owner_info[3]) def chmod(path, mode): if not os.path.exists(path): logger.error("Path does not exist: {0}".format(path)) else: os.chmod(path, mode) def rm_files(*args): for paths in args: # find all possible file paths for path in glob.glob(paths): if os.path.isfile(path): os.remove(path) def rm_dirs(*args): """ Remove the contents of each directory """ for p in args: if not os.path.isdir(p): continue for pp in os.listdir(p): path = os.path.join(p, pp) if os.path.isfile(path): os.remove(path) elif os.path.islink(path): os.unlink(path) elif os.path.isdir(path): shutil.rmtree(path) def trim_ext(path, ext): if not ext.startswith("."): ext = "." + ext return path.split(ext)[0] if path.endswith(ext) else path def update_conf_file(path, line_start, val, chk_err=False): conf = [] if not os.path.isfile(path) and chk_err: raise IOError("Can't find config file:{0}".format(path)) conf = read_file(path).split('\n') conf = [x for x in conf if x is not None and len(x) > 0 and not x.startswith(line_start)] conf.append(val) write_file(path, '\n'.join(conf) + '\n') def search_file(target_dir_name, target_file_name): for root, dirs, files in os.walk(target_dir_name): # pylint: disable=W0612 for file_name in files: if file_name == target_file_name: return os.path.join(root, file_name) return None def chmod_tree(path, mode): for root, dirs, files in os.walk(path): # pylint: disable=W0612 for file_name in files: os.chmod(os.path.join(root, file_name), mode) def findstr_in_file(file_path, line_str): """ Return True if the line is in the file; False otherwise. (Trailing whitespace is ignored.) """ try: with open(file_path, 'r') as fh: for line in fh.readlines(): if line_str == line.rstrip(): return True except Exception: # swallow exception pass return False def findre_in_file(file_path, line_re): """ Return match object if found in file. 
""" try: with open(file_path, 'r') as fh: pattern = re.compile(line_re) for line in fh.readlines(): match = re.search(pattern, line) if match: return match except: # pylint: disable=W0702 pass return None def get_all_files(root_path): """ Find all files under the given root path """ result = [] for root, dirs, files in os.walk(root_path): # pylint: disable=W0612 result.extend([os.path.join(root, file) for file in files]) # pylint: disable=redefined-builtin return result def clean_ioerror(e, paths=None): """ Clean-up possibly bad files and directories after an IO error. The code ignores *all* errors since disk state may be unhealthy. """ if paths is None: paths = [] if isinstance(e, IOError) and e.errno in KNOWN_IOERRORS: for path in paths: if path is None: continue try: if os.path.isdir(path): shutil.rmtree(path, ignore_errors=True) else: os.remove(path) except Exception: # swallow exception pass Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/flexible_version.py000066400000000000000000000175361462617747000273320ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from distutils import version # pylint: disable=no-name-in-module import re class FlexibleVersion(version.Version): """ A more flexible implementation of distutils.version.StrictVersion The implementation allows to specify: - an arbitrary number of version numbers: not only '1.2.3' , but also '1.2.3.4.5' - the separator between version numbers: '1-2-3' is allowed when '-' is specified as separator - a flexible pre-release separator: '1.2.3.alpha1', '1.2.3-alpha1', and '1.2.3alpha1' are considered equivalent - an arbitrary ordering of pre-release tags: 1.1alpha3 < 1.1beta2 < 1.1rc1 < 1.1 when ["alpha", "beta", "rc"] is specified as pre-release tag list Inspiration from this discussion at StackOverflow: http://stackoverflow.com/questions/12255554/sort-versions-in-python """ def __init__(self, vstring=None, sep='.', prerel_tags=('alpha', 'beta', 'rc')): version.Version.__init__(self) if sep is None: sep = '.' 
if prerel_tags is None: prerel_tags = () self.sep = sep self.prerel_sep = '' self.prerel_tags = tuple(prerel_tags) if prerel_tags is not None else () self._compile_pattern() self.prerelease = None self.version = () if vstring: self._parse(str(vstring)) return _nn_version = 'version' _nn_prerel_sep = 'prerel_sep' _nn_prerel_tag = 'tag' _nn_prerel_num = 'tag_num' _re_prerel_sep = r'(?P<{pn}>{sep})?'.format( pn=_nn_prerel_sep, sep='|'.join(map(re.escape, ('.', '-')))) @property def major(self): return self.version[0] if len(self.version) > 0 else 0 @property def minor(self): return self.version[1] if len(self.version) > 1 else 0 @property def patch(self): return self.version[2] if len(self.version) > 2 else 0 def _parse(self, vstring): m = self.version_re.match(vstring) if not m: raise ValueError("Invalid version number '{0}'".format(vstring)) self.prerelease = None self.version = () self.prerel_sep = m.group(self._nn_prerel_sep) tag = m.group(self._nn_prerel_tag) tag_num = m.group(self._nn_prerel_num) if tag is not None and tag_num is not None: self.prerelease = (tag, int(tag_num) if len(tag_num) else None) self.version = tuple(map(int, self.sep_re.split(m.group(self._nn_version)))) return def __add__(self, increment): version = list(self.version) # pylint: disable=W0621 version[-1] += increment vstring = self._assemble(version, self.sep, self.prerel_sep, self.prerelease) return FlexibleVersion(vstring=vstring, sep=self.sep, prerel_tags=self.prerel_tags) def __sub__(self, decrement): version = list(self.version) # pylint: disable=W0621 if version[-1] <= 0: raise ArithmeticError("Cannot decrement final numeric component of {0} below zero" \ .format(self)) version[-1] -= decrement vstring = self._assemble(version, self.sep, self.prerel_sep, self.prerelease) return FlexibleVersion(vstring=vstring, sep=self.sep, prerel_tags=self.prerel_tags) def __repr__(self): return "{cls} ('{vstring}', '{sep}', {prerel_tags})"\ .format( cls=self.__class__.__name__, vstring=str(self), sep=self.sep, prerel_tags=self.prerel_tags) def __str__(self): return self._assemble(self.version, self.sep, self.prerel_sep, self.prerelease) def __ge__(self, that): return not self.__lt__(that) def __gt__(self, that): return (not self.__lt__(that)) and (not self.__eq__(that)) def __le__(self, that): return (self.__lt__(that)) or (self.__eq__(that)) def __lt__(self, that): this_version, that_version = self._ensure_compatible(that) if this_version != that_version \ or self.prerelease is None and that.prerelease is None: return this_version < that_version if self.prerelease is not None and that.prerelease is None: return True if self.prerelease is None and that.prerelease is not None: return False this_index = self.prerel_tags_set[self.prerelease[0]] that_index = self.prerel_tags_set[that.prerelease[0]] if this_index == that_index: return self.prerelease[1] < that.prerelease[1] return this_index < that_index def __ne__(self, that): return not self.__eq__(that) def __eq__(self, that): this_version, that_version = self._ensure_compatible(that) if this_version != that_version: return False if self.prerelease != that.prerelease: return False return True def matches(self, that): if self.sep != that.sep or len(self.version) > len(that.version): return False for i in range(len(self.version)): if self.version[i] != that.version[i]: return False if self.prerel_tags: return self.prerel_tags == that.prerel_tags return True def _assemble(self, version, sep, prerel_sep, prerelease): # pylint: disable=W0621 s = sep.join(map(str, version)) if 
prerelease is not None: if prerel_sep is not None: s += prerel_sep s += prerelease[0] if prerelease[1] is not None: s += str(prerelease[1]) return s def _compile_pattern(self): sep, self.sep_re = self._compile_separator(self.sep) if self.prerel_tags: tags = '|'.join(re.escape(tag) for tag in self.prerel_tags) self.prerel_tags_set = dict(zip(self.prerel_tags, range(len(self.prerel_tags)))) release_re = '(?:{prerel_sep}(?P<{tn}>{tags})(?P<{nn}>\d*))?'.format( # pylint: disable=W1401 prerel_sep=self._re_prerel_sep, tags=tags, tn=self._nn_prerel_tag, nn=self._nn_prerel_num) else: release_re = '' version_re = r'^(?P<{vn}>\d+(?:(?:{sep}\d+)*)?){rel}$'.format( vn=self._nn_version, sep=sep, rel=release_re) self.version_re = re.compile(version_re) return def _compile_separator(self, sep): if sep is None: return '', re.compile('') return re.escape(sep), re.compile(re.escape(sep)) def _ensure_compatible(self, that): """ Ensures the instances have the same structure and, if so, returns length compatible version lists (so that x.y.0.0 is equivalent to x.y). """ if self.prerel_tags != that.prerel_tags or self.sep != that.sep: raise ValueError("Unable to compare: versions have different structures") this_version = list(self.version[:]) that_version = list(that.version[:]) while len(this_version) < len(that_version): this_version.append(0) while len(that_version) < len(this_version): that_version.append(0) return this_version, that_version Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/networkutil.py000066400000000000000000000272421462617747000263550ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.utils.shellutil import CommandError class RouteEntry(object): """ Represents a single route. The destination, gateway, and mask members are hex representations of the IPv4 address in network byte order. 
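For example (illustrative), a destination read from /proc/net/route such as "0100A8C0"
decodes via destination_quad() to the dotted quad "192.168.0.1".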
""" def __init__(self, interface, destination, gateway, mask, flags, metric): self.interface = interface self.destination = destination self.gateway = gateway self.mask = mask self.flags = int(flags, 16) self.metric = int(metric) @staticmethod def _net_hex_to_dotted_quad(value): if len(value) != 8: raise Exception("String to dotted quad conversion must be 8 characters") octets = [] for idx in range(6, -2, -2): octets.append(str(int(value[idx:idx + 2], 16))) return ".".join(octets) def destination_quad(self): return self._net_hex_to_dotted_quad(self.destination) def gateway_quad(self): return self._net_hex_to_dotted_quad(self.gateway) def mask_quad(self): return self._net_hex_to_dotted_quad(self.mask) def to_json(self): f = '{{"Iface": "{0}", "Destination": "{1}", "Gateway": "{2}", "Mask": "{3}", "Flags": "{4:#06x}", "Metric": "{5}"}}' return f.format(self.interface, self.destination_quad(), self.gateway_quad(), self.mask_quad(), self.flags, self.metric) def __str__(self): f = "Iface: {0}\tDestination: {1}\tGateway: {2}\tMask: {3}\tFlags: {4:#06x}\tMetric: {5}" return f.format(self.interface, self.destination_quad(), self.gateway_quad(), self.mask_quad(), self.flags, self.metric) def __repr__(self): return 'RouteEntry("{0}", "{1}", "{2}", "{3}", "{4:#04x}", "{5}")' \ .format(self.interface, self.destination, self.gateway, self.mask, self.flags, self.metric) class NetworkInterfaceCard: def __init__(self, name, link_info): self.name = name self.ipv4 = set() self.ipv6 = set() self.link = link_info def add_ipv4(self, info): self.ipv4.add(info) def add_ipv6(self, info): self.ipv6.add(info) def __eq__(self, other): return self.link == other.link and \ self.ipv4 == other.ipv4 and \ self.ipv6 == other.ipv6 @staticmethod def _json_array(items): return "[{0}]".format(",".join(['"{0}"'.format(x) for x in sorted(items)])) def __str__(self): entries = ['"name": "{0}"'.format(self.name), '"link": "{0}"'.format(self.link)] if len(self.ipv4) > 0: entries.append('"ipv4": {0}'.format(self._json_array(self.ipv4))) if len(self.ipv6) > 0: entries.append('"ipv6": {0}'.format(self._json_array(self.ipv6))) return "{{ {0} }}".format(", ".join(entries)) class FirewallCmdDirectCommands(object): # firewall-cmd --direct --permanent --passthrough ipv4 -t security -A OUTPUT -d 1.2.3.5 -p tcp -m owner --uid-owner 999 -j ACCEPT # success # adds the firewalld rule and returns the status PassThrough = "--passthrough" # firewall-cmd --direct --query-passthrough ipv4 -t security -A OUTPUT -d 1.2.3.5 -p tcp -m owner --uid-owner 9999 -j ACCEPT # yes # firewall-cmd --direct --permanent --query-passthrough ipv4 -t security -A OUTPUT -d 1.2.3.5 -p tcp -m owner --uid-owner 999 -j ACCEPT # no # checks if the firewalld rule is present or not QueryPassThrough = "--query-passthrough" # firewall-cmd --permanent --direct --remove-passthrough ipv4 -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT # success # remove the firewalld rule RemovePassThrough = "--remove-passthrough" class AddFirewallRules(object): """ This class is a utility class which is only meant to orchestrate adding Firewall rules (both iptables and firewalld). This would also be called from a separate utility binary which would be very early up in the boot order of the VM, due to which it would not have access to basic mounts like file-system. Please make sure to not log anything in any function this class. """ # -A adds the rule to the end of the iptable chain APPEND_COMMAND = "-A" # -I inserts the rule at the index specified. 
If no number is specified, the rule is added to the top of the chain # iptables -t security -I OUTPUT 1 -d 168.63.129.16 -p tcp --destination-port 53 -j ACCEPT -w and # iptables -t security -I OUTPUT -d 168.63.129.16 -p tcp --destination-port 53 -j ACCEPT -w both add the rule as the first rule of the chain INSERT_COMMAND = "-I" # -D deletes the specific rule in the iptables chain DELETE_COMMAND = "-D" # -C checks if a specific rule exists CHECK_COMMAND = "-C" @staticmethod def __get_iptables_base_command(wait=""): """ If 'wait' is set (non-empty), adds the wait option (-w) to the given iptables command line """ if wait != "": return ["iptables", "-w"] return ["iptables"] @staticmethod def __get_firewalld_base_command(command): # For more documentation - https://firewalld.org/documentation/man-pages/firewall-cmd.html return ["firewall-cmd", "--permanent", "--direct", command, "ipv4"] @staticmethod def __get_common_command_params(command, destination): return ["-t", "security", command, "OUTPUT", "-d", destination, "-p", "tcp"] @staticmethod def __get_firewall_base_command(command, destination, firewalld_command="", wait=""): # Firewalld.service fails if we set `-w` in the iptables command, so not adding it at all for firewalld commands if firewalld_command != "": cmd = AddFirewallRules.__get_firewalld_base_command(firewalld_command) else: cmd = AddFirewallRules.__get_iptables_base_command(wait) cmd.extend(AddFirewallRules.__get_common_command_params(command, destination)) return cmd @staticmethod def get_accept_tcp_rule(command, destination, firewalld_command="", wait=""): # This rule allows DNS TCP requests to the wireserver IP for non-root users cmd = AddFirewallRules.__get_firewall_base_command(command, destination, firewalld_command, wait) cmd.extend(['--destination-port', '53', '-j', 'ACCEPT']) return cmd @staticmethod def get_wire_root_accept_rule(command, destination, owner_uid, firewalld_command="", wait=""): cmd = AddFirewallRules.__get_firewall_base_command(command, destination, firewalld_command, wait) cmd.extend(["-m", "owner", "--uid-owner", str(owner_uid), "-j", "ACCEPT"]) return cmd @staticmethod def get_wire_non_root_drop_rule(command, destination, firewalld_command="", wait=""): cmd = AddFirewallRules.__get_firewall_base_command(command, destination, firewalld_command, wait) cmd.extend(["-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"]) return cmd @staticmethod def __raise_if_empty(val, name): if val == "": raise Exception("{0} should not be empty".format(name)) @staticmethod def __execute_cmd(cmd): try: shellutil.run_command(cmd) except CommandError as error: msg = "Command {0} failed with exit-code: {1}\nStdout: {2}\nStderr: {3}".format(' '.join(cmd), error.returncode, ustr(error.stdout), ustr(error.stderr)) raise Exception(msg) @staticmethod def __execute_check_command(cmd): # Here we check whether an iptables rule exists: True if it exists, False if not try: shellutil.run_command(cmd) return True except CommandError as err: # Return code 1 is expected when using the check command.
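# (Illustrative) a typical check produced by get_wire_root_accept_rule with CHECK_COMMAND is:
#   iptables -w -t security -C OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT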
# Raise if we encounter any other return code. if err.returncode != 1: raise return False @staticmethod def get_missing_iptables_rules(wait, dst_ip, uid): missing = [] check_cmd_tcp_rule = AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, dst_ip, wait=wait) if not AddFirewallRules.__execute_check_command(check_cmd_tcp_rule): missing.append("ACCEPT DNS") check_cmd_accept_rule = AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.CHECK_COMMAND, dst_ip, uid, wait=wait) if not AddFirewallRules.__execute_check_command(check_cmd_accept_rule): missing.append("ACCEPT") check_cmd_drop_rule = AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, dst_ip, wait=wait) if not AddFirewallRules.__execute_check_command(check_cmd_drop_rule): missing.append("DROP") return missing @staticmethod def __execute_firewall_commands(dst_ip, uid, command=APPEND_COMMAND, firewalld_command="", wait=""): # The order in which the below rules are added matters for the iptables rules to work as expected AddFirewallRules.__raise_if_empty(dst_ip, "Destination IP") AddFirewallRules.__raise_if_empty(uid, "User ID") accept_tcp_rule = AddFirewallRules.get_accept_tcp_rule(command, dst_ip, firewalld_command=firewalld_command, wait=wait) AddFirewallRules.__execute_cmd(accept_tcp_rule) accept_cmd = AddFirewallRules.get_wire_root_accept_rule(command, dst_ip, uid, firewalld_command=firewalld_command, wait=wait) AddFirewallRules.__execute_cmd(accept_cmd) drop_cmd = AddFirewallRules.get_wire_non_root_drop_rule(command, dst_ip, firewalld_command=firewalld_command, wait=wait) AddFirewallRules.__execute_cmd(drop_cmd) @staticmethod def add_iptables_rules(wait, dst_ip, uid): AddFirewallRules.__execute_firewall_commands(dst_ip, uid, command=AddFirewallRules.APPEND_COMMAND, wait=wait) @staticmethod def add_firewalld_rules(dst_ip, uid): # Firewalld.service fails if we set `-w` in the iptables command, so not adding it at all for firewalld commands # Firewalld.service with the "--permanent --passthrough" parameter ensures that a firewall rule is set only once even if the command is executed multiple times AddFirewallRules.__execute_firewall_commands(dst_ip, uid, firewalld_command=FirewallCmdDirectCommands.PassThrough) @staticmethod def check_firewalld_rule_applied(dst_ip, uid): AddFirewallRules.__execute_firewall_commands(dst_ip, uid, firewalld_command=FirewallCmdDirectCommands.QueryPassThrough) @staticmethod def remove_firewalld_rules(dst_ip, uid): AddFirewallRules.__execute_firewall_commands(dst_ip, uid, firewalld_command=FirewallCmdDirectCommands.RemovePassThrough) Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/restutil.py000066400000000000000000000537151462617747000256430ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # import os import re import threading import time import socket import struct import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.textutil as textutil from azurelinuxagent.common.exception import HttpError, ResourceGoneError, InvalidContainerError from azurelinuxagent.common.future import httpclient, urlparse, ustr from azurelinuxagent.common.version import PY_VERSION_MAJOR, AGENT_NAME, GOAL_STATE_AGENT_VERSION SECURE_WARNING_EMITTED = False DEFAULT_RETRIES = 6 DELAY_IN_SECONDS = 1 THROTTLE_RETRIES = 25 THROTTLE_DELAY_IN_SECONDS = 1 REDACTED_TEXT = "<redacted>" SAS_TOKEN_RETRIEVAL_REGEX = re.compile(r'^(https?://[a-zA-Z0-9.].*sig=)([a-zA-Z0-9%-]*)(.*)$') RETRY_CODES = [ httpclient.RESET_CONTENT, httpclient.PARTIAL_CONTENT, httpclient.FORBIDDEN, httpclient.INTERNAL_SERVER_ERROR, httpclient.NOT_IMPLEMENTED, httpclient.BAD_GATEWAY, httpclient.SERVICE_UNAVAILABLE, httpclient.GATEWAY_TIMEOUT, httpclient.INSUFFICIENT_STORAGE, 429, # Request Rate Limit Exceeded ] # # Currently the HostGAPlugin has an issue in its cache that may produce a BAD_REQUEST failure for valid URIs when using the extensionArtifact API. # Add this status to the retryable codes, but use it only when requesting downloads via the HostGAPlugin. The retry logic in the download code # would give enough time to the HGAP to refresh its cache. Once the fix to address that issue is deployed, consider removing the use of # HGAP_GET_EXTENSION_ARTIFACT_RETRY_CODES. # HGAP_GET_EXTENSION_ARTIFACT_RETRY_CODES = RETRY_CODES[:] # make a copy of RETRY_CODES HGAP_GET_EXTENSION_ARTIFACT_RETRY_CODES.append(httpclient.BAD_REQUEST) RESOURCE_GONE_CODES = [ httpclient.GONE ] OK_CODES = [ httpclient.OK, httpclient.CREATED, httpclient.ACCEPTED ] NOT_MODIFIED_CODES = [ httpclient.NOT_MODIFIED ] HOSTPLUGIN_UPSTREAM_FAILURE_CODES = [ 502 ] THROTTLE_CODES = [ httpclient.FORBIDDEN, httpclient.SERVICE_UNAVAILABLE, 429, # Request Rate Limit Exceeded ] RETRY_EXCEPTIONS = [ httpclient.NotConnected, httpclient.IncompleteRead, httpclient.ImproperConnectionState, httpclient.BadStatusLine ] # http://www.gnu.org/software/wget/manual/html_node/Proxies.html HTTP_PROXY_ENV = "http_proxy" HTTPS_PROXY_ENV = "https_proxy" NO_PROXY_ENV = "no_proxy" HTTP_USER_AGENT = "{0}/{1}".format(AGENT_NAME, GOAL_STATE_AGENT_VERSION) HTTP_USER_AGENT_HEALTH = "{0}+health".format(HTTP_USER_AGENT) INVALID_CONTAINER_CONFIGURATION = "InvalidContainerConfiguration" REQUEST_ROLE_CONFIG_FILE_NOT_FOUND = "RequestRoleConfigFileNotFound" KNOWN_WIRESERVER_IP = '168.63.129.16' HOST_PLUGIN_PORT = 32526 class IOErrorCounter(object): _lock = threading.RLock() _protocol_endpoint = KNOWN_WIRESERVER_IP _counts = {"hostplugin":0, "protocol":0, "other":0} @staticmethod def increment(host=None, port=None): with IOErrorCounter._lock: if host == IOErrorCounter._protocol_endpoint: if port == HOST_PLUGIN_PORT: IOErrorCounter._counts["hostplugin"] += 1 else: IOErrorCounter._counts["protocol"] += 1 else: IOErrorCounter._counts["other"] += 1 @staticmethod def get_and_reset(): with IOErrorCounter._lock: counts = IOErrorCounter._counts.copy() IOErrorCounter.reset() return counts @staticmethod def reset(): with IOErrorCounter._lock: IOErrorCounter._counts = {"hostplugin":0, "protocol":0, "other":0} @staticmethod def set_protocol_endpoint(endpoint=KNOWN_WIRESERVER_IP): IOErrorCounter._protocol_endpoint = endpoint def _compute_delay(retry_attempt=1, delay=DELAY_IN_SECONDS): fib = (1, 1) for _ in range(retry_attempt):
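# Advance the Fibonacci pair one step; with the default base delay this yields
# delays of 2, 3, 5, ... seconds for retry_attempt values of 1, 2, 3, ...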
fib = (fib[1], fib[0]+fib[1]) return delay*fib[1] def _is_retry_status(status, retry_codes=None): if retry_codes is None: retry_codes = RETRY_CODES return status in retry_codes def _is_retry_exception(e): return len([x for x in RETRY_EXCEPTIONS if isinstance(e, x)]) > 0 def _is_throttle_status(status): return status in THROTTLE_CODES def _parse_url(url): """ Parse URL to get the components of the URL broken down to host, port :rtype: string, int, bool, string """ o = urlparse(url) rel_uri = o.path if o.fragment: rel_uri = "{0}#{1}".format(rel_uri, o.fragment) if o.query: rel_uri = "{0}?{1}".format(rel_uri, o.query) secure = False if o.scheme.lower() == "https": secure = True return o.hostname, o.port, secure, rel_uri def _trim_url_parameters(url): """ Parse URL and return scheme://hostname:port/path """ o = urlparse(url) if o.hostname: if o.port: return "{0}://{1}:{2}{3}".format(o.scheme, o.hostname, o.port, o.path) else: return "{0}://{1}{2}".format(o.scheme, o.hostname, o.path) return url def is_valid_cidr(string_network): """ Very simple check of the cidr format in no_proxy variable. :rtype: bool """ if string_network.count('/') == 1: try: mask = int(string_network.split('/')[1]) except ValueError: return False if mask < 1 or mask > 32: return False try: socket.inet_aton(string_network.split('/')[0]) except socket.error: return False else: return False return True def dotted_netmask(mask): """Converts mask from /xx format to xxx.xxx.xxx.xxx Example: if mask is 24 function returns 255.255.255.0 :rtype: str """ bits = 0xffffffff ^ (1 << 32 - mask) - 1 return socket.inet_ntoa(struct.pack('>I', bits)) def address_in_network(ip, net): """This function allows you to check if an IP belongs to a network subnet Example: returns True if ip = 192.168.1.1 and net = 192.168.1.0/24 returns False if ip = 192.168.1.1 and net = 192.168.100.0/24 :rtype: bool """ ipaddr = struct.unpack('=L', socket.inet_aton(ip))[0] netaddr, bits = net.split('/') netmask = struct.unpack('=L', socket.inet_aton(dotted_netmask(int(bits))))[0] network = struct.unpack('=L', socket.inet_aton(netaddr))[0] & netmask return (ipaddr & netmask) == (network & netmask) def is_ipv4_address(string_ip): """ :rtype: bool """ try: socket.inet_aton(string_ip) except socket.error: return False return True def get_no_proxy(): no_proxy = os.environ.get(NO_PROXY_ENV) or os.environ.get(NO_PROXY_ENV.upper()) if no_proxy: no_proxy = [host for host in no_proxy.replace(' ', '').split(',') if host] # no_proxy in the proxies argument takes precedence return no_proxy def bypass_proxy(host): no_proxy = get_no_proxy() if no_proxy: if is_ipv4_address(host): for proxy_ip in no_proxy: if is_valid_cidr(proxy_ip): if address_in_network(host, proxy_ip): return True elif host == proxy_ip: # If no_proxy ip was defined in plain IP notation instead of cidr notation & # matches the IP of the index return True else: for proxy_domain in no_proxy: if host.lower().endswith(proxy_domain.lower()): # The URL does match something in no_proxy, so we don't want # to apply the proxies on this URL. 
return True return False def _get_http_proxy(secure=False): # Prefer the configuration settings over environment variables host = conf.get_httpproxy_host() port = None if not host is None: port = conf.get_httpproxy_port() else: http_proxy_env = HTTPS_PROXY_ENV if secure else HTTP_PROXY_ENV http_proxy_url = None for v in [http_proxy_env, http_proxy_env.upper()]: if v in os.environ: http_proxy_url = os.environ[v] break if not http_proxy_url is None: host, port, _, _ = _parse_url(http_proxy_url) return host, port def redact_sas_tokens_in_urls(url): return SAS_TOKEN_RETRIEVAL_REGEX.sub(r"\1" + REDACTED_TEXT + r"\3", url) def _http_request(method, host, rel_uri, timeout, port=None, data=None, secure=False, headers=None, proxy_host=None, proxy_port=None, redact_data=False): headers = {} if headers is None else headers headers['Connection'] = 'close' use_proxy = proxy_host is not None and proxy_port is not None if port is None: port = 443 if secure else 80 if 'User-Agent' not in headers: headers['User-Agent'] = HTTP_USER_AGENT if use_proxy: conn_host, conn_port = proxy_host, proxy_port scheme = "https" if secure else "http" url = "{0}://{1}:{2}{3}".format(scheme, host, port, rel_uri) else: conn_host, conn_port = host, port url = rel_uri if secure: conn = httpclient.HTTPSConnection(conn_host, conn_port, timeout=timeout) if use_proxy: conn.set_tunnel(host, port) else: conn = httpclient.HTTPConnection(conn_host, conn_port, timeout=timeout) payload = data if redact_data: payload = "[REDACTED]" # Logger requires the msg to be a ustr to log properly, ensuring that the data string that we log is always ustr logger.verbose("HTTP connection [{0}] [{1}] [{2}] [{3}]", method, redact_sas_tokens_in_urls(url), textutil.str_to_encoded_ustr(payload), headers) conn.request(method=method, url=url, body=data, headers=headers) return conn.getresponse() def http_request(method, url, data, timeout, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, redact_data=False, return_raw_response=False): """ NOTE: This method provides some logic to handle errors in the HTTP request, including checking the HTTP status of the response and handling some exceptions. If return_raw_response is set to True all the error handling will be skipped and the method will return the actual HTTP response and bubble up any exceptions while issuing the request. Also note that if return_raw_response is True no retries will be done. 
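Retries back off along a Fibonacci sequence (see _compute_delay); once the server
throttles a request, a fixed THROTTLE_DELAY_IN_SECONDS delay and at least
THROTTLE_RETRIES attempts are used instead.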
""" if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES global SECURE_WARNING_EMITTED # pylint: disable=W0603 host, port, secure, rel_uri = _parse_url(url) # Use the HTTP(S) proxy proxy_host, proxy_port = (None, None) if use_proxy and not bypass_proxy(host): proxy_host, proxy_port = _get_http_proxy(secure=secure) if proxy_host or proxy_port: logger.verbose("HTTP proxy: [{0}:{1}]", proxy_host, proxy_port) # If httplib module is not built with ssl support, # fallback to HTTP if allowed if secure and not hasattr(httpclient, "HTTPSConnection"): if not conf.get_allow_http(): raise HttpError("HTTPS is unavailable and required") secure = False if not SECURE_WARNING_EMITTED: logger.warn("Python does not include SSL support") SECURE_WARNING_EMITTED = True # If httplib module doesn't support HTTPS tunnelling, # fallback to HTTP if allowed if secure and \ proxy_host is not None and \ proxy_port is not None \ and not hasattr(httpclient.HTTPSConnection, "set_tunnel"): if not conf.get_allow_http(): raise HttpError("HTTPS tunnelling is unavailable and required") secure = False if not SECURE_WARNING_EMITTED: logger.warn("Python does not support HTTPS tunnelling") SECURE_WARNING_EMITTED = True msg = '' attempt = 0 delay = 0 was_throttled = False while attempt < max_retry: if attempt > 0: # Compute the request delay # -- Use a fixed delay if the server ever rate-throttles the request # (with a safe, minimum number of retry attempts) # -- Otherwise, compute a delay that is the product of the next # item in the Fibonacci series and the initial delay value delay = THROTTLE_DELAY_IN_SECONDS \ if was_throttled \ else _compute_delay(retry_attempt=attempt, delay=retry_delay) logger.verbose("[HTTP Retry] " "Attempt {0} of {1} will delay {2} seconds: {3}", attempt+1, max_retry, delay, msg) time.sleep(delay) attempt += 1 try: resp = _http_request(method, host, rel_uri, timeout, port=port, data=data, secure=secure, headers=headers, proxy_host=proxy_host, proxy_port=proxy_port, redact_data=redact_data) logger.verbose("[HTTP Response] Status Code {0}", resp.status) if return_raw_response: # skip all error handling return resp if request_failed(resp): if _is_retry_status(resp.status, retry_codes=retry_codes): msg = '[HTTP Retry] {0} {1} -- Status Code {2}'.format(method, url, resp.status) # Note if throttled and ensure a safe, minimum number of # retry attempts if _is_throttle_status(resp.status): was_throttled = True max_retry = max(max_retry, THROTTLE_RETRIES) continue # If we got a 410 (resource gone) for any reason, raise an exception. The caller will handle it by # forcing a goal state refresh and retrying the call. if resp.status in RESOURCE_GONE_CODES: response_error = read_response_error(resp) raise ResourceGoneError(response_error) # If we got a 400 (bad request) because the container id is invalid, it could indicate a stale goal # state. The caller will handle this exception by forcing a goal state refresh and retrying the call. 
if resp.status == httpclient.BAD_REQUEST: response_error = read_response_error(resp) if INVALID_CONTAINER_CONFIGURATION in response_error: raise InvalidContainerError(response_error) return resp except httpclient.HTTPException as e: if return_raw_response: # skip all error handling raise clean_url = _trim_url_parameters(url) msg = '[HTTP Failed] {0} {1} -- HttpException {2}'.format(method, clean_url, e) if _is_retry_exception(e): continue break except IOError as e: if return_raw_response: # skip all error handling raise IOErrorCounter.increment(host=host, port=port) clean_url = _trim_url_parameters(url) msg = '[HTTP Failed] {0} {1} -- IOError {2}'.format(method, clean_url, e) continue raise HttpError("{0} -- {1} attempts made".format(msg, attempt)) def http_get(url, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, return_raw_response=False, timeout=10): """ NOTE: This method provides some logic to handle errors in the HTTP request, including checking the HTTP status of the response and handling some exceptions. If return_raw_response is set to True all the error handling will be skipped and the method will return the actual HTTP response and bubble up any exceptions while issuing the request. Also note that if return_raw_response is True no retries will be done. """ if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("GET", url, None, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay, return_raw_response=return_raw_response) def http_head(url, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, timeout=10): if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("HEAD", url, None, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay) def http_post(url, data, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, timeout=10): if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("POST", url, data, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay) def http_put(url, data, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, redact_data=False, timeout=10): if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("PUT", url, data, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay, redact_data=redact_data) def http_delete(url, headers=None, use_proxy=False, max_retry=None, retry_codes=None, retry_delay=DELAY_IN_SECONDS, timeout=10): if max_retry is None: max_retry = DEFAULT_RETRIES if retry_codes is None: retry_codes = RETRY_CODES return http_request("DELETE", url, None, timeout, headers=headers, use_proxy=use_proxy, max_retry=max_retry, retry_codes=retry_codes, retry_delay=retry_delay) def request_failed(resp, ok_codes=None): if ok_codes is None: ok_codes = OK_CODES return not request_succeeded(resp, ok_codes=ok_codes) def request_succeeded(resp, ok_codes=None): if ok_codes is None: ok_codes = OK_CODES return resp is not None and resp.status in ok_codes def request_not_modified(resp): return resp is not None 
and resp.status in NOT_MODIFIED_CODES def request_failed_at_hostplugin(resp, upstream_failure_codes=None): """ Host plugin will return 502 for any upstream issue, so a failure is any 5xx except 502 """ if upstream_failure_codes is None: upstream_failure_codes = HOSTPLUGIN_UPSTREAM_FAILURE_CODES return resp is not None and resp.status >= 500 and resp.status not in upstream_failure_codes def read_response_error(resp): result = '' if resp is not None: try: result = "[HTTP Failed] [{0}: {1}] {2}".format( resp.status, resp.reason, resp.read()) # this result string is passed upstream to several methods # which do a raise HttpError() or a format() of some kind; # as a result it cannot have any unicode characters if PY_VERSION_MAJOR < 3: result = ustr(result, encoding='ascii', errors='ignore') else: result = result\ .encode(encoding='ascii', errors='ignore')\ .decode(encoding='ascii', errors='ignore') result = textutil.replace_non_ascii(result) except Exception as e: logger.warn(textutil.format_exception(e)) return result Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/shellutil.py000066400000000000000000000401121462617747000257620ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import subprocess import sys import tempfile import threading if sys.version_info[0] == 2: # TimeoutExpired was introduced on Python 3; define a dummy class for Python 2 class TimeoutExpired(Exception): pass else: from subprocess import TimeoutExpired import azurelinuxagent.common.logger as logger from azurelinuxagent.common.future import ustr if not hasattr(subprocess, 'check_output'): def check_output(*popenargs, **kwargs): r"""Backport from subprocess module from python 2.7""" if 'stdout' in kwargs: raise ValueError('stdout argument not allowed, ' 'it will be overridden.') process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs) output, unused_err = process.communicate() retcode = process.poll() if retcode: cmd = kwargs.get("args") if cmd is None: cmd = popenargs[0] raise subprocess.CalledProcessError(retcode, cmd, output=output) return output # Exception classes used by this module. class CalledProcessError(Exception): def __init__(self, returncode, cmd, output=None): # pylint: disable=W0231 self.returncode = returncode self.cmd = cmd self.output = output def __str__(self): return ("Command '{0}' returned non-zero exit status {1}" "").format(self.cmd, self.returncode) subprocess.check_output = check_output subprocess.CalledProcessError = CalledProcessError # pylint: disable=W0105 """ Shell command util functions """ # pylint: enable=W0105 def has_command(cmd): """ Return True if the given command is on the path """ return not run(cmd, False) def run(cmd, chk_err=True, expected_errors=None): """ Note: Deprecating in favour of `azurelinuxagent.common.utils.shellutil.run_command` function. Calls run_get_output on 'cmd', returning only the return code. 
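For example (illustrative), run("iptables --version", chk_err=False) returns 0 when the command succeeds and a non-zero code otherwise.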
If chk_err=True then errors will be reported in the log. If chk_err=False then errors will be suppressed from the log. """ if expected_errors is None: expected_errors = [] retcode, out = run_get_output(cmd, chk_err=chk_err, expected_errors=expected_errors) # pylint: disable=W0612 return retcode def run_get_output(cmd, chk_err=True, log_cmd=True, expected_errors=None): """ Wrapper for subprocess.check_output. Execute 'cmd'. Returns return code and STDOUT, trapping expected exceptions. Reports exceptions to Error if chk_err parameter is True For new callers, consider using run_command instead as it separates stdout from stderr, returns only stdout on success, logs both outputs and return code on error and raises an exception. """ if expected_errors is None: expected_errors = [] if log_cmd: logger.verbose(u"Command: [{0}]", cmd) try: process = _popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True) output, _ = process.communicate() _on_command_completed(process.pid) output = __encode_command_output(output) if process.returncode != 0: if chk_err: msg = u"Command: [{0}], " \ u"return code: [{1}], " \ u"result: [{2}]".format(cmd, process.returncode, output) if process.returncode in expected_errors: logger.info(msg) else: logger.error(msg) return process.returncode, output except Exception as exception: if chk_err: logger.error(u"Command [{0}] raised unexpected exception: [{1}]" .format(cmd, ustr(exception))) return -1, ustr(exception) return 0, output def __format_command(command): """ Formats the command taken by run_command/run_pipe. Examples: > __format_command("sort") 'sort' > __format_command(["sort", "-u"]) 'sort -u' > __format_command([["sort"], ["unique", "-n"]]) 'sort | unique -n' """ if isinstance(command, list): if command and isinstance(command[0], list): return " | ".join([" ".join(cmd) for cmd in command]) return " ".join(command) return command def __encode_command_output(output): """ Encodes the stdout/stderr returned by subprocess.communicate() """ return ustr(output if output is not None else b'', encoding='utf-8', errors="backslashreplace") class CommandError(Exception): """ Exception raised by run_command/run_pipe when the command returns an error """ @staticmethod def _get_message(command, return_code, stderr): command_name = command[0] if isinstance(command, list) and len(command) > 0 else command return "'{0}' failed: {1} ({2})".format(command_name, return_code, stderr.rstrip()) def __init__(self, command, return_code, stdout, stderr): super(Exception, self).__init__(CommandError._get_message(command, return_code, stderr)) # pylint: disable=E1003 self.command = command self.returncode = return_code self.stdout = stdout self.stderr = stderr def __run_command(command_action, command, log_error, encode_output): """ Executes the given command_action and returns its stdout. The command_action is a function that executes a command/pipe and returns its exit code, stdout, and stderr. If there are any errors executing the command it raises a RunCommandException; if 'log_error' is True, it also logs details about the error. If encode_output is True the stdout is returned as a string, otherwise it is returned as a bytes object. 
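    A minimal sketch of the command_action contract (hypothetical action,
    shown for illustration only):

        def command_action():
            process = subprocess.Popen(["true"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            stdout, stderr = process.communicate()
            return process.returncode, stdout, stderr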
""" try: return_code, stdout, stderr = command_action() if encode_output: stdout = __encode_command_output(stdout) stderr = __encode_command_output(stderr) if return_code != 0: if log_error: logger.error( "Command: [{0}], return code: [{1}], stdout: [{2}] stderr: [{3}]", __format_command(command), return_code, stdout, stderr) raise CommandError(command=__format_command(command), return_code=return_code, stdout=stdout, stderr=stderr) return stdout except CommandError: raise except Exception as exception: if log_error: logger.error(u"Command [{0}] raised unexpected exception: [{1}]", __format_command(command), ustr(exception)) raise # W0622: Redefining built-in 'input' -- disabled: the parameter name mimics subprocess.communicate() def run_command(command, input=None, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, log_error=False, encode_input=True, encode_output=True, track_process=True, timeout=None): # pylint:disable=W0622 """ Executes the given command and returns its stdout. If there are any errors executing the command it raises a RunCommandException; if 'log_error' is True, it also logs details about the error. If encode_output is True the stdout is returned as a string, otherwise it is returned as a bytes object. If track_process is False the command is not added to list of running commands This function is a thin wrapper around Popen/communicate in the subprocess module: * The 'input' parameter corresponds to the same parameter in communicate * The 'stdin' parameter corresponds to the same parameters in Popen * Only one of 'input' and 'stdin' can be specified * The 'stdout' and 'stderr' parameters correspond to the same parameters in Popen, except that they default to subprocess.PIPE instead of None * If the output of the command is redirected using the 'stdout' or 'stderr' parameters (i.e. if the value for these parameters is anything other than the default (subprocess.PIPE)), then the corresponding values returned by this function or the CommandError exception will be empty strings. NOTE: The 'timeout' parameter is ignored on Python 2 NOTE: This is the preferred method to execute shell commands over `azurelinuxagent.common.utils.shellutil.run` function. 
""" if input is not None and stdin is not None: raise ValueError("The input and stdin arguments are mutually exclusive") def command_action(): popen_stdin = communicate_input = None if input is not None: popen_stdin = subprocess.PIPE communicate_input = input.encode() if encode_input and isinstance(input, str) else input # communicate() needs an array of bytes if stdin is not None: popen_stdin = stdin communicate_input = None if track_process: process = _popen(command, stdin=popen_stdin, stdout=stdout, stderr=stderr, shell=False) else: process = subprocess.Popen(command, stdin=popen_stdin, stdout=stdout, stderr=stderr, shell=False) try: if sys.version_info[0] == 2: # communicate() doesn't support timeout on Python 2 command_stdout, command_stderr = process.communicate(input=communicate_input) else: command_stdout, command_stderr = process.communicate(input=communicate_input, timeout=timeout) except TimeoutExpired: if log_error: logger.error(u"Command [{0}] timed out", __format_command(command)) command_stdout, command_stderr = '', '' try: process.kill() # try to get any output from the command, but ignore any errors if we can't try: command_stdout, command_stderr = process.communicate() # W0702: No exception type(s) specified (bare-except) except: # pylint: disable=W0702 pass except Exception as exception: if log_error: logger.error(u"Can't terminate timed out process: {0}", ustr(exception)) raise CommandError(command=__format_command(command), return_code=-1, stdout=command_stdout, stderr="command timeout\n{0}".format(command_stderr)) if track_process: _on_command_completed(process.pid) return process.returncode, command_stdout, command_stderr return __run_command(command_action=command_action, command=command, log_error=log_error, encode_output=encode_output) def run_pipe(pipe, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, log_error=False, encode_output=True): """ Executes the given commands as a pipe and returns its stdout as a string. The pipe is a list of commands, which in turn are a list of strings, e.g. [["sort"], ["uniq", "-n"]] represents 'sort | unique -n' If there are any errors executing the command it raises a RunCommandException; if 'log_error' is True, it also logs details about the error. If encode_output is True the stdout is returned as a string, otherwise it is returned as a bytes object. This function is a thin wrapper around Popen/communicate in the subprocess module: * The 'stdin' parameter is used as input for the first command in the pipe * The 'stdout', and 'stderr' can be used to redirect the output of the pipe * If the output of the pipe is redirected using the 'stdout' or 'stderr' parameters (i.e. if the value for these parameters is anything other than the default (subprocess.PIPE)), then the corresponding values returned by this function or the CommandError exception will be empty strings. """ if len(pipe) < 2: raise ValueError("The pipe must consist of at least 2 commands") def command_action(): stderr_file = None try: popen_stdin = stdin # If stderr is subprocess.PIPE each call to Popen would create a new pipe. We want to collect the stderr of all the # commands in the pipe so we replace stderr with a temporary file that we read once the pipe completes. 
if stderr == subprocess.PIPE: stderr_file = tempfile.TemporaryFile() popen_stderr = stderr_file else: popen_stderr = stderr processes = [] i = 0 while i < len(pipe) - 1: processes.append(_popen(pipe[i], stdin=popen_stdin, stdout=subprocess.PIPE, stderr=popen_stderr)) popen_stdin = processes[i].stdout i += 1 processes.append(_popen(pipe[i], stdin=popen_stdin, stdout=stdout, stderr=popen_stderr)) i = 0 while i < len(processes) - 1: processes[i].stdout.close() # see https://docs.python.org/2/library/subprocess.html#replacing-shell-pipeline i += 1 pipe_stdout, pipe_stderr = processes[i].communicate() for proc in processes: _on_command_completed(proc.pid) if stderr_file is not None: stderr_file.seek(0) pipe_stderr = stderr_file.read() return processes[i].returncode, pipe_stdout, pipe_stderr finally: if stderr_file is not None: stderr_file.close() return __run_command(command_action=command_action, command=pipe, log_error=log_error, encode_output=encode_output) def quote(word_list): """ Quote a list or tuple of strings for Unix Shell as words, using the byte-literal single quote. The resulting string is safe for use with ``shell=True`` in ``subprocess``, and in ``os.system``. ``assert shlex.split(ShellQuote(wordList)) == wordList``. See POSIX.1:2013 Vol 3, Chap 2, Sec 2.2.2: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_02_02 """ if not isinstance(word_list, (tuple, list)): word_list = (word_list,) return " ".join(list("'{0}'".format(s.replace("'", "'\\''")) for s in word_list)) # # The run_command/run_pipe/run/run_get_output functions maintain a list of the commands that they are currently executing. # # _running_commands = [] _running_commands_lock = threading.RLock() PARENT_PROCESS_NAME = "AZURE_GUEST_AGENT_PARENT_PROCESS_NAME" AZURE_GUEST_AGENT = "AZURE_GUEST_AGENT" def _popen(*args, **kwargs): with _running_commands_lock: # Add the environment variables env = {} if 'env' in kwargs: env.update(kwargs['env']) else: env.update(os.environ) # Set the marker before process start env[PARENT_PROCESS_NAME] = AZURE_GUEST_AGENT kwargs['env'] = env process = subprocess.Popen(*args, **kwargs) _running_commands.append(process.pid) return process def _on_command_completed(pid): with _running_commands_lock: _running_commands.remove(pid) def get_running_commands(): """ Returns the commands started by run/run_get_output/run_command/run_pipe that are currently running. NOTE: This function is not synchronized with process completion, so the returned array may include processes that have already completed. Also, keep in mind that by the time this function returns additional processes may have started or completed. """ with _running_commands_lock: return _running_commands[:] # return a copy, since the call may originate on another thread Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/textutil.py000066400000000000000000000303501462617747000256420ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import base64 import crypt import hashlib import random import re import string import struct import sys import traceback import xml.dom.minidom as minidom import zlib from azurelinuxagent.common.future import ustr def parse_doc(xml_text): """ Parse xml document from string """ # The minidom lib has some issue with unicode in python2. # Encode the string into utf-8 first xml_text = xml_text.encode('utf-8') return minidom.parseString(xml_text) def findall(root, tag, namespace=None): """ Get all nodes by tag and namespace under Node root. """ if root is None: return [] if namespace is None: return root.getElementsByTagName(tag) else: return root.getElementsByTagNameNS(namespace, tag) def find(root, tag, namespace=None): """ Get first node by tag and namespace under Node root. """ nodes = findall(root, tag, namespace=namespace) if nodes is not None and len(nodes) >= 1: return nodes[0] else: return None def gettext(node): """ Get node text """ if node is None: return None for child in node.childNodes: if child.nodeType == child.TEXT_NODE: return child.data return None def findtext(root, tag, namespace=None): """ Get text of node by tag and namespace under Node root. """ node = find(root, tag, namespace=namespace) return gettext(node) def getattrib(node, attr_name): """ Get attribute of xml node """ if node is not None: return node.getAttribute(attr_name) else: return None def unpack(buf, offset, value_range): """ Unpack bytes into python values. """ result = 0 for i in value_range: result = (result << 8) | str_to_ord(buf[offset + i]) return result def unpack_little_endian(buf, offset, length): """ Unpack little endian bytes into python values. """ return unpack(buf, offset, list(range(length - 1, -1, -1))) def unpack_big_endian(buf, offset, length): """ Unpack big endian bytes into python values. """ return unpack(buf, offset, list(range(0, length))) def hex_dump3(buf, offset, length): """ Dump range of buf in formatted hex. """ return ''.join(['%02X' % str_to_ord(char) for char in buf[offset:offset + length]]) def hex_dump2(buf): """ Dump buf in formatted hex. """ return hex_dump3(buf, 0, len(buf)) def is_in_range(a, low, high): """ Return True if 'a' in 'low' <= a <= 'high' """ return low <= a <= high def is_printable(ch): """ Return True if character is displayable. """ return (is_in_range(ch, str_to_ord('A'), str_to_ord('Z')) or is_in_range(ch, str_to_ord('a'), str_to_ord('z')) or is_in_range(ch, str_to_ord('0'), str_to_ord('9'))) def hex_dump(buffer, size): # pylint: disable=redefined-builtin """ Return Hex formated dump of a 'buffer' of 'size'. """ if size < 0: size = len(buffer) result = "" for i in range(0, size): if (i % 16) == 0: result += "%06X: " % i byte = buffer[i] if type(byte) == str: byte = ord(byte.decode('latin1')) result += "%02X " % byte if (i & 15) == 7: result += " " if ((i + 1) % 16) == 0 or (i + 1) == size: j = i while ((j + 1) % 16) != 0: result += " " if (j & 7) == 7: result += " " j += 1 result += " " for j in range(i - (i % 16), i + 1): byte = buffer[j] if type(byte) == str: byte = str_to_ord(byte.decode('latin1')) k = '.' if is_printable(byte): k = chr(byte) result += k if (i + 1) != size: result += "\n" return result def str_to_ord(a): """ Allows indexing into a string or an array of integers transparently. Generic utility function. 
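    Examples:
        > str_to_ord('A')
        65
        > str_to_ord(b'A'[0])
        65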
""" if type(a) == type(b'') or type(a) == type(u''): a = ord(a) return a def compare_bytes(a, b, start, length): for offset in range(start, start + length): if str_to_ord(a[offset]) != str_to_ord(b[offset]): return False return True def int_to_ip4_addr(a): """ Build DHCP request string. """ return "%u.%u.%u.%u" % ((a >> 24) & 0xFF, (a >> 16) & 0xFF, (a >> 8) & 0xFF, (a) & 0xFF) def hexstr_to_bytearray(a): """ Return hex string packed into a binary struct. """ b = b"" for c in range(0, len(a) // 2): b += struct.pack("B", int(a[c * 2:c * 2 + 2], 16)) return b def set_ssh_config(config, name, val): found = False no_match = -1 match_start = no_match for i in range(0, len(config)): if config[i].startswith(name) and match_start == no_match: config[i] = "{0} {1}".format(name, val) found = True elif config[i].lower().startswith("match"): if config[i].lower().startswith("match all"): # outside match block match_start = no_match elif match_start == no_match: # inside match block match_start = i if not found: if match_start != no_match: i = match_start config.insert(i, "{0} {1}".format(name, val)) return config def set_ini_config(config, name, val): notfound = True nameEqual = name + '=' length = len(config) text = "{0}=\"{1}\"".format(name, val) for i in reversed(range(0, length)): if config[i].startswith(nameEqual): config[i] = text notfound = False break if notfound: config.insert(length - 1, text) def replace_non_ascii(incoming, replace_char=''): outgoing = '' if incoming is not None: for c in incoming: if str_to_ord(c) > 128: outgoing += replace_char else: outgoing += c return outgoing def remove_bom(c): """ bom is comprised of a sequence of three chars,0xef, 0xbb, 0xbf, in case of utf-8. """ if not is_str_none_or_whitespace(c) and \ len(c) > 2 and \ str_to_ord(c[0]) > 128 and \ str_to_ord(c[1]) > 128 and \ str_to_ord(c[2]) > 128: c = c[3:] return c def gen_password_hash(password, crypt_id, salt_len): collection = string.ascii_letters + string.digits salt = ''.join(random.choice(collection) for _ in range(salt_len)) salt = "${0}${1}".format(crypt_id, salt) if sys.version_info[0] == 2: # if python 2.*, encode to type 'str' to prevent Unicode Encode Error from crypt.crypt password = password.encode('utf-8') return crypt.crypt(password, salt) def get_bytes_from_pem(pem_str): base64_bytes = "" for line in pem_str.split('\n'): if "----" not in line: base64_bytes += line return base64_bytes def compress(s): """ Compress a string, and return the base64 encoded result of the compression. This method returns a string instead of a byte array. It is expected that this method is called to compress smallish strings, not to compress the contents of a file. The output of this method is suitable for embedding in log statements. 
""" from azurelinuxagent.common.version import PY_VERSION_MAJOR if PY_VERSION_MAJOR > 2: return base64.b64encode(zlib.compress(bytes(s, 'utf-8'))).decode('utf-8') return base64.b64encode(zlib.compress(s)) def b64encode(s): from azurelinuxagent.common.version import PY_VERSION_MAJOR if PY_VERSION_MAJOR > 2: return base64.b64encode(bytes(s, 'utf-8')).decode('utf-8') return base64.b64encode(s) def b64decode(s): from azurelinuxagent.common.version import PY_VERSION_MAJOR if PY_VERSION_MAJOR > 2: return base64.b64decode(s).decode('utf-8') return base64.b64decode(s) def safe_shlex_split(s): import shlex from azurelinuxagent.common.version import PY_VERSION if PY_VERSION[:2] == (2, 6): return shlex.split(s.encode('utf-8')) return shlex.split(s) def swap_hexstring(s, width=2): r = len(s) % width if r != 0: s = ('0' * (width - (len(s) % width))) + s return ''.join(reversed( re.findall( r'[a-f0-9]{{{0}}}'.format(width), s, re.IGNORECASE))) def parse_json(json_str): """ Parse json string and return a resulting dictionary """ # trim null and whitespaces result = None if not is_str_empty(json_str): import json result = json.loads(json_str.rstrip(' \t\r\n\0')) return result def is_str_none_or_whitespace(s): return s is None or len(s) == 0 or s.isspace() def is_str_empty(s): return is_str_none_or_whitespace(s) or is_str_none_or_whitespace(s.rstrip(' \t\r\n\0')) def hash_strings(string_list): """ Compute a cryptographic hash of a list of strings :param string_list: The strings to be hashed :return: The cryptographic hash (digest) of the strings in the order provided """ sha1_hash = hashlib.sha1() for item in string_list: sha1_hash.update(item.encode()) return sha1_hash.digest() def format_memory_value(unit, value): units = {'bytes': 1, 'kilobytes': 1024, 'megabytes': 1024*1024, 'gigabytes': 1024*1024*1024} if unit not in units: raise ValueError("Unit must be one of {0}".format(units.keys())) try: value = float(value) except TypeError: raise TypeError('Value must be convertible to a float') return int(value * units[unit]) def str_to_encoded_ustr(s, encoding='utf-8'): """ This function takes the string and converts it into the corresponding encoded ustr if its not already a ustr. The encoding is utf-8 by default if not specified. Note: ustr() is a unicode object for Py2 and a str object for Py3. :param s: The string to convert to ustr :param encoding: Encoding to use. Utf-8 by default :return: Returns the corresponding ustr string. Returns None if input is None. """ # TODO: Import at the top of the file instead of a local import (using local import here to avoid cyclic dependency) from azurelinuxagent.common.version import PY_VERSION_MAJOR if s is None or type(s) is ustr: # If its already a ustr/None then return as is return s if PY_VERSION_MAJOR > 2: try: # For py3+, str() is unicode by default if isinstance(s, bytes): # str.encode() returns bytes which should be decoded to get the str. 
return s.decode(encoding) else: # If its not encoded, just return the string return ustr(s) except Exception: # If some issues in decoding, just return the string return ustr(s) # For Py2, explicitly convert the string to unicode with the specified encoding return ustr(s, encoding=encoding) def format_exception(exception): # Function to format exception message e = None if sys.version_info[0] == 2: _, e, tb = sys.exc_info() else: tb = exception.__traceback__ msg = ustr(exception) + "\n" if tb is None or (sys.version_info[0] == 2 and e != exception): msg += "[Traceback not available]" else: msg += ''.join(traceback.format_exception(type(exception), value=exception, tb=tb)) return msg Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/utils/timeutil.py000066400000000000000000000022741462617747000256200ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. import datetime def create_timestamp(dt=None): """ Returns a string with the given datetime in iso format. If no datetime is given as parameter, it uses datetime.utcnow(). """ if dt is None: dt = datetime.datetime.utcnow() return dt.isoformat() def create_history_timestamp(dt=None): """ Returns a string with the given datetime formatted as a timestamp for the agent's history folder """ if dt is None: dt = datetime.datetime.utcnow() return dt.strftime('%Y-%m-%dT%H-%M-%S') def datetime_to_ticks(dt): """ Converts 'dt', a datetime, to the number of ticks (1 tick == 1/10000000 sec) since datetime.min (0001-01-01 00:00:00). Note that the resolution of a datetime goes only to microseconds. """ return int(10 ** 7 * total_seconds(dt - datetime.datetime.min)) def total_seconds(dt): """ Compute the total_seconds for timedelta 'td'. Used instead timedelta.total_seconds() because 2.6 does not implement total_seconds. """ return ((24.0 * 60 * 60 * dt.days + dt.seconds) * 10 ** 6 + dt.microseconds) / 10 ** 6 Azure-WALinuxAgent-2b21de5/azurelinuxagent/common/version.py000066400000000000000000000252711462617747000243130ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import re import platform import sys import azurelinuxagent.common.conf as conf import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.future import ustr, get_linux_distribution __DAEMON_VERSION_ENV_VARIABLE = '_AZURE_GUEST_AGENT_DAEMON_VERSION_' """ The daemon process sets this variable's value to the daemon's version number. The variable is set only on versions >= 2.2.53 """ def set_daemon_version(version): """ Sets the value of the _AZURE_GUEST_AGENT_DAEMON_VERSION_ environment variable. 
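    For example (hypothetical version number), set_daemon_version("2.2.53")
    normalizes the value through FlexibleVersion before storing it in the
    environment.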
The given 'version' can be a FlexibleVersion or a string that can be parsed into a FlexibleVersion """ flexible_version = version if isinstance(version, FlexibleVersion) else FlexibleVersion(version) os.environ[__DAEMON_VERSION_ENV_VARIABLE] = ustr(flexible_version) def get_daemon_version(): """ Retrieves the value of the _AZURE_GUEST_AGENT_DAEMON_VERSION_ environment variable. The value indicates the version of the daemon that started the current agent process or, if the current process is the daemon, the version of the current process. If the variable is not set (because the agent is < 2.2.53, or the process was not started by the daemon and the process is not the daemon itself) the function returns "0.0.0.0" """ if __DAEMON_VERSION_ENV_VARIABLE in os.environ: return FlexibleVersion(os.environ[__DAEMON_VERSION_ENV_VARIABLE]) return FlexibleVersion("0.0.0.0") def get_f5_platform(): """ Add this workaround for detecting F5 products because BIG-IP/IQ/etc do not show their version info in the /etc/product-version location. Instead, the version and product information is contained in the /VERSION file. """ result = [None, None, None, None] f5_version = re.compile("^Version: (\d+\.\d+\.\d+)") # pylint: disable=W1401 f5_product = re.compile("^Product: ([\w-]+)") # pylint: disable=W1401 with open('/VERSION', 'r') as fh: content = fh.readlines() for line in content: version_matches = f5_version.match(line) product_matches = f5_product.match(line) if version_matches: result[1] = version_matches.group(1) elif product_matches: result[3] = product_matches.group(1) if result[3] == "BIG-IP": result[0] = "bigip" result[2] = "bigip" elif result[3] == "BIG-IQ": result[0] = "bigiq" result[2] = "bigiq" elif result[3] == "iWorkflow": result[0] = "iworkflow" result[2] = "iworkflow" return result def get_checkpoint_platform(): take = build = release = "" full_name = open("/etc/cp-release").read().strip() with open("/etc/cloud-version") as f: for line in f: k, _, v = line.partition(": ") v = v.strip() if k == "release": release = v elif k == "take": take = v elif k == "build": build = v return ["gaia", take + "." + build, release, full_name] def get_distro(): if 'FreeBSD' in platform.system(): release = re.sub('\-.*\Z', '', ustr(platform.release())) # pylint: disable=W1401 osinfo = ['freebsd', release, '', 'freebsd'] elif 'OpenBSD' in platform.system(): release = re.sub('\-.*\Z', '', ustr(platform.release())) # pylint: disable=W1401 osinfo = ['openbsd', release, '', 'openbsd'] elif 'Linux' in platform.system(): osinfo = get_linux_distribution(0, 'alpine') elif 'NS-BSD' in platform.system(): release = re.sub('\-.*\Z', '', ustr(platform.release())) # pylint: disable=W1401 osinfo = ['nsbsd', release, '', 'nsbsd'] else: try: # dist() removed in Python 3.8 osinfo = list(platform.dist()) + [''] # pylint: disable=W1505,E1101 except Exception: osinfo = ['UNKNOWN', 'FFFF', '', ''] # The platform.py lib has issue with detecting oracle linux distribution. # Merge the following patch provided by oracle as a temporary fix. if os.path.exists("/etc/oracle-release"): osinfo[2] = "oracle" osinfo[3] = "Oracle Linux" if os.path.exists("/etc/euleros-release"): osinfo[0] = "euleros" if os.path.exists("/etc/UnionTech-release"): osinfo[0] = "uos" if os.path.exists("/etc/mariner-release"): osinfo[0] = "mariner" # The platform.py lib has issue with detecting BIG-IP linux distribution. # Merge the following patch provided by F5. 
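    # Note: by convention in this module osinfo is a 4-element list laid out
    # as [distro name, version, code name, full name]; e.g. get_f5_platform()
    # (called below) returns ['bigip', '<version>', 'bigip', 'BIG-IP'] for
    # BIG-IP systems.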
if os.path.exists("/shared/vadc"): osinfo = get_f5_platform() if os.path.exists("/etc/cp-release"): osinfo = get_checkpoint_platform() if os.path.exists("/home/guestshell/azure"): osinfo = ['iosxe', 'csr1000v', '', 'Cisco IOSXE Linux'] if os.path.exists("/etc/photon-release"): osinfo[0] = "photonos" # Remove trailing whitespace and quote in distro name osinfo[0] = osinfo[0].strip('"').strip(' ').lower() return osinfo COMMAND_ABSENT = ustr("Absent") COMMAND_FAILED = ustr("Failed") def get_lis_version(): """ This uses the Linux kernel's 'modinfo' command to retrieve the "version" field for the "hv_vmbus" kernel module (the LIS drivers). This is the documented method to retrieve the LIS module version. Every Linux guest on Hyper-V will have this driver, but it may not be installed as a module (it could instead be built into the kernel). In that case, this will return "Absent" instead of the version, indicating the driver version can be deduced from the kernel version. It will only return "Failed" in the presence of an exception. This function is used to generate telemetry for the version of the LIS drivers installed on the VM. The function and associated telemetry can be removed after a few releases. """ try: modinfo_output = shellutil.run_command(["modinfo", "-F", "version", "hv_vmbus"]) if modinfo_output: return modinfo_output # If the system doesn't have LIS drivers, 'modinfo' will # return nothing on stdout, which will cause 'run_command' # to return an empty string. return COMMAND_ABSENT except Exception: # Ignore almost every possible exception because this is in a # critical code path. Unfortunately the logger isn't already # imported in this module or we'd log this too. return COMMAND_FAILED def has_logrotate(): try: logrotate_version = shellutil.run_command(["logrotate", "--version"]).split("\n")[0] return logrotate_version except shellutil.CommandError: # A non-zero return code means that logrotate isn't present on # the system; --version shouldn't fail otherwise. return COMMAND_ABSENT except Exception: return COMMAND_FAILED AGENT_NAME = "WALinuxAgent" AGENT_LONG_NAME = "Azure Linux Agent" # # IMPORTANT: Please be sure that the version is always 9.9.9.9 on the develop branch. Automation requires this, otherwise # DCR may test the wrong agent version. # # When doing a release, be sure to use the actual agent version. Current agent version: 2.4.0.0 # AGENT_VERSION = '2.11.1.4' AGENT_LONG_VERSION = "{0}-{1}".format(AGENT_NAME, AGENT_VERSION) AGENT_DESCRIPTION = """ The Azure Linux Agent supports the provisioning and running of Linux VMs in the Azure cloud. This package should be installed on Linux disk images that are built to run in the Azure environment. """ AGENT_DIR_GLOB = "{0}-*".format(AGENT_NAME) AGENT_PKG_GLOB = "{0}-*.zip".format(AGENT_NAME) AGENT_PATTERN = "{0}-(.*)".format(AGENT_NAME) AGENT_NAME_PATTERN = re.compile(AGENT_PATTERN) AGENT_PKG_PATTERN = re.compile(AGENT_PATTERN+"\.zip") # pylint: disable=W1401 AGENT_DIR_PATTERN = re.compile(".*/{0}".format(AGENT_PATTERN)) # The execution mode of the VM - IAAS or PAAS. Linux VMs are only executed in IAAS mode. 
AGENT_EXECUTION_MODE = "IAAS" EXT_HANDLER_PATTERN = b".*/WALinuxAgent-(\d+.\d+.\d+[.\d+]*).*-run-exthandlers" # pylint: disable=W1401 EXT_HANDLER_REGEX = re.compile(EXT_HANDLER_PATTERN) __distro__ = get_distro() DISTRO_NAME = __distro__[0] DISTRO_VERSION = __distro__[1] DISTRO_CODE_NAME = __distro__[2] DISTRO_FULL_NAME = __distro__[3] PY_VERSION = sys.version_info PY_VERSION_MAJOR = sys.version_info[0] PY_VERSION_MINOR = sys.version_info[1] PY_VERSION_MICRO = sys.version_info[2] # Set the CURRENT_AGENT and CURRENT_VERSION to match the agent directory name # - This ensures the agent will "see itself" using the same name and version # as the code that downloads agents. def set_current_agent(): path = os.getcwd() lib_dir = conf.get_lib_dir() if lib_dir[-1] != os.path.sep: lib_dir += os.path.sep agent = path[len(lib_dir):].split(os.path.sep)[0] match = AGENT_NAME_PATTERN.match(agent) if match: version = match.group(1) else: agent = AGENT_LONG_VERSION version = AGENT_VERSION return agent, FlexibleVersion(version) def is_agent_package(path): path = os.path.basename(path) return not re.match(AGENT_PKG_PATTERN, path) is None def is_agent_path(path): path = os.path.basename(path) return not re.match(AGENT_NAME_PATTERN, path) is None CURRENT_AGENT, CURRENT_VERSION = set_current_agent() def set_goal_state_agent(): agent = None if os.path.isdir("/proc"): pids = [pid for pid in os.listdir('/proc') if pid.isdigit()] else: pids = [] for pid in pids: try: pname = open(os.path.join('/proc', pid, 'cmdline'), 'rb').read() match = EXT_HANDLER_REGEX.match(pname) if match: agent = match.group(1) if PY_VERSION_MAJOR > 2: agent = agent.decode('UTF-8') break except IOError: continue if agent is None: agent = CURRENT_VERSION return agent GOAL_STATE_AGENT_VERSION = set_goal_state_agent() Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/000077500000000000000000000000001462617747000222205ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/__init__.py000066400000000000000000000012611462617747000243310ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.daemon.main import get_daemon_handler Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/main.py000066400000000000000000000162441462617747000235250ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import sys import time import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.event import add_event, WALAEventOperation, initialize_event_logger_vminfo_common_parameters from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.protocol.goal_state import GoalState, GoalStateProperties from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.pa.rdma.rdma import setup_rdma_device from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.version import AGENT_NAME, AGENT_LONG_NAME, \ AGENT_VERSION, \ DISTRO_NAME, DISTRO_VERSION, PY_VERSION_MAJOR, PY_VERSION_MINOR, \ PY_VERSION_MICRO from azurelinuxagent.daemon.resourcedisk import get_resourcedisk_handler from azurelinuxagent.daemon.scvmm import get_scvmm_handler from azurelinuxagent.ga.update import get_update_handler from azurelinuxagent.pa.provision import get_provision_handler from azurelinuxagent.pa.rdma import get_rdma_handler OPENSSL_FIPS_ENVIRONMENT = "OPENSSL_FIPS" def get_daemon_handler(): return DaemonHandler() class DaemonHandler(object): """ Main thread of daemon. It will invoke other threads to do actual work """ def __init__(self): self.running = True self.osutil = get_osutil() def run(self, child_args=None): # # The Container ID in telemetry events is retrieved from the goal state. We can fetch the goal state # only after protocol detection, which is done during provisioning. # # Be aware that telemetry events emitted before that will not include the Container ID. # logger.info("{0} Version: {1}", AGENT_LONG_NAME, AGENT_VERSION) logger.info("OS: {0} {1}", DISTRO_NAME, DISTRO_VERSION) logger.info("Python: {0}.{1}.{2}", PY_VERSION_MAJOR, PY_VERSION_MINOR, PY_VERSION_MICRO) self.check_pid() self.initialize_environment() # If FIPS is enabled, set the OpenSSL environment variable # Note: # -- Subprocesses inherit the current environment if conf.get_fips_enabled(): os.environ[OPENSSL_FIPS_ENVIRONMENT] = '1' while self.running: try: self.daemon(child_args) except Exception as e: # pylint: disable=W0612 err_msg = textutil.format_exception(e) add_event(name=AGENT_NAME, is_success=False, message=ustr(err_msg), op=WALAEventOperation.UnhandledError) logger.warn("Daemon ended with exception -- Sleep 15 seconds and restart daemon") time.sleep(15) def check_pid(self): """Check whether daemon is already running""" pid = None pid_file = conf.get_agent_pid_file_path() if os.path.isfile(pid_file): pid = fileutil.read_file(pid_file) if self.osutil.check_pid_alive(pid): logger.info("Daemon is already running: {0}", pid) sys.exit(0) fileutil.write_file(pid_file, ustr(os.getpid())) def sleep_if_disabled(self): agent_disabled_file_path = conf.get_disable_agent_file_path() if os.path.exists(agent_disabled_file_path): import threading logger.warn("Disabling the guest agent by sleeping forever; to re-enable, remove {0} and restart".format(agent_disabled_file_path)) logger.warn("To enable VM extensions, also ensure that the VM's osProfile.allowExtensionOperations property is set to true.") self.running = False disable_event = threading.Event() disable_event.wait() def initialize_environment(self): # Create lib dir if not os.path.isdir(conf.get_lib_dir()): fileutil.mkdir(conf.get_lib_dir(), mode=0o700) os.chdir(conf.get_lib_dir()) def _initialize_telemetry(self): 
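        # The common telemetry parameters require the goal state and IMDS, so
        # this must run only after protocol detection has completed (see the
        # call site in daemon() below).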
protocol = self.protocol_util.get_protocol() initialize_event_logger_vminfo_common_parameters(protocol) def daemon(self, child_args=None): logger.info("Run daemon") self.protocol_util = get_protocol_util() # pylint: disable=W0201 self.scvmm_handler = get_scvmm_handler() # pylint: disable=W0201 self.resourcedisk_handler = get_resourcedisk_handler() # pylint: disable=W0201 self.rdma_handler = get_rdma_handler() # pylint: disable=W0201 self.provision_handler = get_provision_handler() # pylint: disable=W0201 self.update_handler = get_update_handler() # pylint: disable=W0201 if conf.get_detect_scvmm_env(): self.scvmm_handler.run() if conf.get_resourcedisk_format(): self.resourcedisk_handler.run() # Always redetermine the protocol start (e.g., wireserver vs. # on-premise) since a VHD can move between environments self.protocol_util.clear_protocol() self.provision_handler.run() # Once we have the protocol, complete initialization of the telemetry fields # that require the goal state and IMDS self._initialize_telemetry() # Enable RDMA, continue in errors if conf.enable_rdma(): nd_version = self.rdma_handler.get_rdma_version() self.rdma_handler.install_driver_if_needed() logger.info("RDMA capabilities are enabled in configuration") try: # Ensure the most recent SharedConfig is available # - Changes to RDMA state may not increment the goal state # incarnation number. A forced update ensures the most # current values. protocol = self.protocol_util.get_protocol() goal_state = GoalState(protocol, goal_state_properties=GoalStateProperties.SharedConfig) setup_rdma_device(nd_version, goal_state.shared_conf) except Exception as e: logger.error("Error setting up rdma device: %s" % e) else: logger.info("RDMA capabilities are not enabled, skipping") self.sleep_if_disabled() # Disable output to /dev/console once provisioning has completed if logger.console_output_enabled(): logger.info("End of log to /dev/console. The agent will now check for updates and then will process extensions.") logger.disable_console_output() while self.running: self.update_handler.run_latest(child_args=child_args) Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/resourcedisk/000077500000000000000000000000001462617747000247225ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/resourcedisk/__init__.py000066400000000000000000000013471462617747000270400ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.daemon.resourcedisk.factory import get_resourcedisk_handler Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/resourcedisk/default.py000066400000000000000000000354131462617747000267260ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import re import stat import sys import threading from time import sleep import azurelinuxagent.common.logger as logger from azurelinuxagent.common.future import ustr import azurelinuxagent.common.conf as conf from azurelinuxagent.common.event import add_event, WALAEventOperation import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.exception import ResourceDiskError from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.version import AGENT_NAME DATALOSS_WARNING_FILE_NAME = "DATALOSS_WARNING_README.txt" DATA_LOSS_WARNING = """\ WARNING: THIS IS A TEMPORARY DISK. Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT. Please do not use this disk for storing any personal or application data. For additional details to please refer to the MSDN documentation at : http://msdn.microsoft.com/en-us/library/windowsazure/jj672979.aspx """ class ResourceDiskHandler(object): def __init__(self): self.osutil = get_osutil() self.fs = conf.get_resourcedisk_filesystem() def start_activate_resource_disk(self): disk_thread = threading.Thread(target=self.run) disk_thread.start() def run(self): mount_point = None if conf.get_resourcedisk_format(): mount_point = self.activate_resource_disk() if mount_point is not None and \ conf.get_resourcedisk_enable_swap(): self.enable_swap(mount_point) def activate_resource_disk(self): logger.info("Activate resource disk") try: mount_point = conf.get_resourcedisk_mountpoint() mount_point = self.mount_resource_disk(mount_point) warning_file = os.path.join(mount_point, DATALOSS_WARNING_FILE_NAME) try: fileutil.write_file(warning_file, DATA_LOSS_WARNING) except IOError as e: logger.warn("Failed to write data loss warning:{0}", e) return mount_point except ResourceDiskError as e: logger.error("Failed to mount resource disk {0}", e) add_event(name=AGENT_NAME, is_success=False, message=ustr(e), op=WALAEventOperation.ActivateResourceDisk) return None def enable_swap(self, mount_point): logger.info("Enable swap") try: size_mb = conf.get_resourcedisk_swap_size_mb() self.create_swap_space(mount_point, size_mb) except ResourceDiskError as e: logger.error("Failed to enable swap {0}", e) def reread_partition_table(self, device): if shellutil.run("sfdisk -R {0}".format(device), chk_err=False): shellutil.run("blockdev --rereadpt {0}".format(device), chk_err=False) def mount_resource_disk(self, mount_point): device = self.osutil.device_for_ide_port(1) if device is None: raise ResourceDiskError("unable to detect disk topology") device = "/dev/{0}".format(device) partition = device + "1" mount_list = shellutil.run_get_output("mount")[1] existing = self.osutil.get_mount_point(mount_list, device) if existing: logger.info("Resource disk [{0}] is already mounted [{1}]", partition, existing) return existing try: fileutil.mkdir(mount_point, mode=0o755) except OSError as ose: msg = "Failed to create mount point " \ "directory [{0}]: {1}".format(mount_point, ose) logger.error(msg) raise 
ResourceDiskError(msg=msg, inner=ose) logger.info("Examining partition table") ret = shellutil.run_get_output("parted {0} print".format(device)) if ret[0]: raise ResourceDiskError("Could not determine partition info for " "{0}: {1}".format(device, ret[1])) force_option = 'F' if self.fs == 'xfs': force_option = 'f' mkfs_string = "mkfs.{0} -{2} {1}".format( self.fs, partition, force_option) if "gpt" in ret[1]: logger.info("GPT detected, finding partitions") parts = [x for x in ret[1].split("\n") if re.match(r"^\s*[0-9]+", x)] logger.info("Found {0} GPT partition(s).", len(parts)) if len(parts) > 1: logger.info("Removing old GPT partitions") for i in range(1, len(parts) + 1): logger.info("Remove partition {0}", i) shellutil.run("parted {0} rm {1}".format(device, i)) logger.info("Creating new GPT partition") shellutil.run( "parted {0} mkpart primary 0% 100%".format(device)) logger.info("Format partition [{0}]", mkfs_string) shellutil.run(mkfs_string) else: logger.info("GPT not detected, determining filesystem") ret = self.change_partition_type( suppress_message=True, option_str="{0} 1 -n".format(device)) ptype = ret[1].strip() if ptype == "7" and self.fs != "ntfs": logger.info("The partition is formatted with ntfs, updating " "partition type to 83") self.change_partition_type( suppress_message=False, option_str="{0} 1 83".format(device)) self.reread_partition_table(device) logger.info("Format partition [{0}]", mkfs_string) shellutil.run(mkfs_string) else: logger.info("The partition type is {0}", ptype) mount_options = conf.get_resourcedisk_mountoptions() mount_string = self.get_mount_string(mount_options, partition, mount_point) attempts = 5 while not os.path.exists(partition) and attempts > 0: logger.info("Waiting for partition [{0}], {1} attempts remaining", partition, attempts) sleep(5) attempts -= 1 if not os.path.exists(partition): raise ResourceDiskError( "Partition was not created [{0}]".format(partition)) logger.info("Mount resource disk [{0}]", mount_string) ret, output = shellutil.run_get_output(mount_string, chk_err=False) # if the exit code is 32, then the resource disk can be already mounted if ret == 32 and output.find("is already mounted") != -1: logger.warn("Could not mount resource disk: {0}", output) elif ret != 0: # Some kernels seem to issue an async partition re-read after a # 'parted' command invocation. This causes mount to fail if the # partition re-read is not complete by the time mount is # attempted. Seen in CentOS 7.2. Force a sequential re-read of # the partition and try mounting. logger.warn("Failed to mount resource disk. " "Retry mounting after re-reading partition info.") self.reread_partition_table(device) ret, output = shellutil.run_get_output(mount_string, chk_err=False) if ret: logger.warn("Failed to mount resource disk. " "Attempting to format and retry mount. [{0}]", output) shellutil.run(mkfs_string) ret, output = shellutil.run_get_output(mount_string) if ret: raise ResourceDiskError("Could not mount {0} " "after syncing partition table: " "[{1}] {2}".format(partition, ret, output)) logger.info("Resource disk {0} is mounted at {1} with {2}", device, mount_point, self.fs) return mount_point def change_partition_type(self, suppress_message, option_str): """ use sfdisk to change partition type. 
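    For example (hypothetical device), with suppress_message=True and
    option_str "/dev/sdb 1 83" the initial command is:

        sfdisk --part-type -f /dev/sdb 1 83

    which sets partition 1 of /dev/sdb to MBR type 83 (Linux).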
First try with --part-type; if fails, fall back to -c """ option_to_use = '--part-type' command = "sfdisk {0} {1} {2}".format( option_to_use, '-f' if suppress_message else '', option_str) err_code, output = shellutil.run_get_output( command, chk_err=False, log_cmd=True) # fall back to -c if err_code != 0: logger.info( "sfdisk with --part-type failed [{0}], retrying with -c", err_code) option_to_use = '-c' command = "sfdisk {0} {1} {2}".format( option_to_use, '-f' if suppress_message else '', option_str) err_code, output = shellutil.run_get_output(command, log_cmd=True) if err_code == 0: logger.info('{0} succeeded', command) else: logger.error('{0} failed [{1}: {2}]', command, err_code, output) return err_code, output def get_mount_string(self, mount_options, partition, mount_point): if mount_options is not None: return 'mount -t {0} -o {1} {2} {3}'.format( self.fs, mount_options, partition, mount_point ) else: return 'mount -t {0} {1} {2}'.format( self.fs, partition, mount_point ) @staticmethod def check_existing_swap_file(swapfile, swaplist, size): if swapfile in swaplist and os.path.isfile( swapfile) and os.path.getsize(swapfile) == size: logger.info("Swap already enabled") # restrict access to owner (remove all access from group, others) swapfile_mode = os.stat(swapfile).st_mode if swapfile_mode & (stat.S_IRWXG | stat.S_IRWXO): swapfile_mode = swapfile_mode & ~(stat.S_IRWXG | stat.S_IRWXO) logger.info( "Changing mode of {0} to {1:o}".format( swapfile, swapfile_mode)) os.chmod(swapfile, swapfile_mode) return True return False def create_swap_space(self, mount_point, size_mb): size_kb = size_mb * 1024 size = size_kb * 1024 swapfile = os.path.join(mount_point, 'swapfile') swaplist = shellutil.run_get_output("swapon -s")[1] if self.check_existing_swap_file(swapfile, swaplist, size): return if os.path.isfile(swapfile) and os.path.getsize(swapfile) != size: logger.info("Remove old swap file") shellutil.run("swapoff {0}".format(swapfile), chk_err=False) os.remove(swapfile) if not os.path.isfile(swapfile): logger.info("Create swap file") self.mkfile(swapfile, size_kb * 1024) shellutil.run("mkswap {0}".format(swapfile)) if shellutil.run("swapon {0}".format(swapfile)): raise ResourceDiskError("{0}".format(swapfile)) logger.info("Enabled {0}KB of swap at {1}".format(size_kb, swapfile)) def mkfile(self, filename, nbytes): """ Create a non-sparse file of that size. Deletes and replaces existing file. To allow efficient execution, fallocate will be tried first. This includes ``os.posix_fallocate`` on Python 3.3+ (unix) and the ``fallocate`` command in the popular ``util-linux{,-ng}`` package. A dd fallback will be tried too. When size < 64M, perform single-pass dd. Otherwise do two-pass dd. """ if not isinstance(nbytes, int): nbytes = int(nbytes) if nbytes <= 0: raise ResourceDiskError("Invalid swap size [{0}]".format(nbytes)) if os.path.isfile(filename): os.remove(filename) # If file system is xfs, use dd right away as we have been reported that # swap enabling fails in xfs fs when disk space is allocated with # fallocate ret = 0 fn_sh = shellutil.quote((filename,)) if self.fs not in ['xfs', 'ext4']: # os.posix_fallocate if sys.version_info >= (3, 3): # Probable errors: # - OSError: Seen on Cygwin, libc notimpl? # - AttributeError: What if someone runs this under... 
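            # Illustrative note: O_CREAT|O_WRONLY|O_EXCL below fails fast if
            # the file re-appeared after the os.remove() above, instead of
            # silently truncating an existing file.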
fd = None try: fd = os.open( filename, os.O_CREAT | os.O_WRONLY | os.O_EXCL, stat.S_IRUSR | stat.S_IWUSR) os.posix_fallocate(fd, 0, nbytes) # pylint: disable=no-member return 0 except BaseException: # Not confident with this thing, just keep trying... pass finally: if fd is not None: os.close(fd) # fallocate command ret = shellutil.run( u"umask 0077 && fallocate -l {0} {1}".format(nbytes, fn_sh)) if ret == 0: return ret logger.info("fallocate unsuccessful, falling back to dd") # dd fallback dd_maxbs = 64 * 1024 ** 2 dd_cmd = "umask 0077 && dd if=/dev/zero bs={0} count={1} " \ "conv=notrunc of={2}" blocks = int(nbytes / dd_maxbs) if blocks > 0: ret = shellutil.run(dd_cmd.format(dd_maxbs, blocks, fn_sh)) << 8 remains = int(nbytes % dd_maxbs) if remains > 0: ret += shellutil.run(dd_cmd.format(remains, 1, fn_sh)) if ret == 0: logger.info("dd successful") else: logger.error("dd unsuccessful") return ret Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/resourcedisk/factory.py000066400000000000000000000025751462617747000267540ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION, DISTRO_FULL_NAME from .default import ResourceDiskHandler from .freebsd import FreeBSDResourceDiskHandler from .openbsd import OpenBSDResourceDiskHandler from .openwrt import OpenWRTResourceDiskHandler def get_resourcedisk_handler(distro_name=DISTRO_NAME, distro_version=DISTRO_VERSION, # pylint: disable=W0613 distro_full_name=DISTRO_FULL_NAME): # pylint: disable=W0613 if distro_name == "freebsd": return FreeBSDResourceDiskHandler() if distro_name == "openbsd": return OpenBSDResourceDiskHandler() if distro_name == "openwrt": return OpenWRTResourceDiskHandler() return ResourceDiskHandler() Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/resourcedisk/freebsd.py000066400000000000000000000162501462617747000267120ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import ResourceDiskError from azurelinuxagent.daemon.resourcedisk.default import ResourceDiskHandler class FreeBSDResourceDiskHandler(ResourceDiskHandler): """ This class handles resource disk mounting for FreeBSD. The resource disk locates at following slot: scbus2 on blkvsc1 bus 0: at scbus2 target 1 lun 0 (da1,pass2) There are 2 variations based on partition table type: 1. MBR: The resource disk partition is /dev/da1s1 2. GPT: The resource disk partition is /dev/da1p2, /dev/da1p1 is for reserved usage. """ def __init__(self): # pylint: disable=W0235 super(FreeBSDResourceDiskHandler, self).__init__() @staticmethod def parse_gpart_list(data): dic = {} for line in data.split('\n'): if line.find("Geom name: ") != -1: geom_name = line[11:] elif line.find("scheme: ") != -1: dic[geom_name] = line[8:] return dic def mount_resource_disk(self, mount_point): fs = self.fs if fs != 'ufs': raise ResourceDiskError( "Unsupported filesystem type:{0}, only ufs is supported.".format(fs)) # 1. Detect device err, output = shellutil.run_get_output('gpart list') if err: raise ResourceDiskError( "Unable to detect resource disk device:{0}".format(output)) disks = self.parse_gpart_list(output) device = self.osutil.device_for_ide_port(1) if device is None or device not in disks: # fallback logic to find device err, output = shellutil.run_get_output( 'camcontrol periphlist 2:1:0') if err: # try again on "3:1:0" err, output = shellutil.run_get_output( 'camcontrol periphlist 3:1:0') if err: raise ResourceDiskError( "Unable to detect resource disk device:{0}".format(output)) # 'da1: generation: 4 index: 1 status: MORE\npass2: generation: 4 index: 2 status: LAST\n' for line in output.split('\n'): index = line.find(':') if index > 0: geom_name = line[:index] if geom_name in disks: device = geom_name break if not device: raise ResourceDiskError("Unable to detect resource disk device.") logger.info('Resource disk device {0} found.', device) # 2. Detect partition partition_table_type = disks[device] if partition_table_type == 'MBR': provider_name = device + 's1' elif partition_table_type == 'GPT': provider_name = device + 'p2' else: raise ResourceDiskError( "Unsupported partition table type:{0}".format(output)) err, output = shellutil.run_get_output( 'gpart show -p {0}'.format(device)) if err or output.find(provider_name) == -1: raise ResourceDiskError("Resource disk partition not found.") partition = '/dev/' + provider_name logger.info('Resource disk partition {0} found.', partition) # 3. 
Mount partition mount_list = shellutil.run_get_output("mount")[1] existing = self.osutil.get_mount_point(mount_list, partition) if existing: logger.info("Resource disk {0} is already mounted", partition) return existing fileutil.mkdir(mount_point, mode=0o755) mount_cmd = 'mount -t {0} {1} {2}'.format(fs, partition, mount_point) err = shellutil.run(mount_cmd, chk_err=False) if err: logger.info( 'Creating {0} filesystem on partition {1}'.format( fs, partition)) err, output = shellutil.run_get_output( 'newfs -U {0}'.format(partition)) if err: raise ResourceDiskError( "Failed to create new filesystem on partition {0}, error:{1}" .format( partition, output)) err, output = shellutil.run_get_output(mount_cmd, chk_err=False) if err: raise ResourceDiskError( "Failed to mount partition {0}, error {1}".format( partition, output)) logger.info( "Resource disk partition {0} is mounted at {1} with fstype {2}", partition, mount_point, fs) return mount_point def create_swap_space(self, mount_point, size_mb): size_kb = size_mb * 1024 size = size_kb * 1024 swapfile = os.path.join(mount_point, 'swapfile') swaplist = shellutil.run_get_output("swapctl -l")[1] if self.check_existing_swap_file(swapfile, swaplist, size): return if os.path.isfile(swapfile) and os.path.getsize(swapfile) != size: logger.info("Remove old swap file") shellutil.run("swapoff {0}".format(swapfile), chk_err=False) os.remove(swapfile) if not os.path.isfile(swapfile): logger.info("Create swap file") self.mkfile(swapfile, size_kb * 1024) mddevice = shellutil.run_get_output( "mdconfig -a -t vnode -f {0}".format(swapfile))[1].rstrip() shellutil.run("chmod 0600 /dev/{0}".format(mddevice)) if conf.get_resourcedisk_enable_swap_encryption(): shellutil.run("kldload aesni") shellutil.run("kldload cryptodev") shellutil.run("kldload geom_eli") shellutil.run( "geli onetime -e AES-XTS -l 256 -d /dev/{0}".format(mddevice)) shellutil.run("chmod 0600 /dev/{0}.eli".format(mddevice)) if shellutil.run("swapon /dev/{0}.eli".format(mddevice)): raise ResourceDiskError("/dev/{0}.eli".format(mddevice)) logger.info( "Enabled {0}KB of swap at /dev/{1}.eli ({2})".format(size_kb, mddevice, swapfile)) else: if shellutil.run("swapon /dev/{0}".format(mddevice)): raise ResourceDiskError("/dev/{0}".format(mddevice)) logger.info( "Enabled {0}KB of swap at /dev/{1} ({2})".format(size_kb, mddevice, swapfile)) Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/resourcedisk/openbsd.py000066400000000000000000000114431462617747000267310ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # Copyright 2017 Reyk Floeter # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
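#
# For illustration, the unencrypted swap path in create_swap_space() above
# reduces to this command sequence (md0 and /mnt/resource are example values;
# mdconfig prints the md device it actually attaches):
#
#   mdconfig -a -t vnode -f /mnt/resource/swapfile   # attach file-backed md device
#   chmod 0600 /dev/md0                              # restrict access to root
#   swapon /dev/md0                                  # enable swap on the device
#
# With swap encryption enabled, a "geli onetime -e AES-XTS -l 256 -d" layer is
# added first and swapon targets /dev/md0.eli instead.
#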
# # Requires Python 2.6+ and OpenSSL 1.0+ # import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import ResourceDiskError from azurelinuxagent.daemon.resourcedisk.default import ResourceDiskHandler class OpenBSDResourceDiskHandler(ResourceDiskHandler): def __init__(self): super(OpenBSDResourceDiskHandler, self).__init__() # Fase File System (FFS) is UFS if self.fs == 'ufs' or self.fs == 'ufs2': self.fs = 'ffs' def create_swap_space(self, mount_point, size_mb): pass def enable_swap(self, mount_point): size_mb = conf.get_resourcedisk_swap_size_mb() if size_mb: logger.info("Enable swap") device = self.osutil.device_for_ide_port(1) err, output = shellutil.run_get_output("swapctl -a /dev/" "{0}b".format(device), chk_err=False) if err: logger.error("Failed to enable swap, error {0}", output) def mount_resource_disk(self, mount_point): fs = self.fs if fs != 'ffs': raise ResourceDiskError("Unsupported filesystem type: {0}, only " "ufs/ffs is supported.".format(fs)) # 1. Get device device = self.osutil.device_for_ide_port(1) if not device: raise ResourceDiskError("Unable to detect resource disk device.") logger.info('Resource disk device {0} found.', device) # 2. Get partition partition = "/dev/{0}a".format(device) # 3. Mount partition mount_list = shellutil.run_get_output("mount")[1] existing = self.osutil.get_mount_point(mount_list, partition) if existing: logger.info("Resource disk {0} is already mounted", partition) return existing fileutil.mkdir(mount_point, mode=0o755) mount_cmd = 'mount -t {0} {1} {2}'.format(self.fs, partition, mount_point) err = shellutil.run(mount_cmd, chk_err=False) if err: logger.info('Creating {0} filesystem on {1}'.format(fs, device)) fdisk_cmd = "/sbin/fdisk -yi {0}".format(device) err, output = shellutil.run_get_output(fdisk_cmd, chk_err=False) if err: raise ResourceDiskError("Failed to create new MBR on {0}, " "error: {1}".format(device, output)) size_mb = conf.get_resourcedisk_swap_size_mb() if size_mb: if size_mb > 512 * 1024: size_mb = 512 * 1024 disklabel_cmd = ("echo -e '{0} 1G-* 50%\nswap 1-{1}M 50%' " "| disklabel -w -A -T /dev/stdin " "{2}").format(mount_point, size_mb, device) ret, output = shellutil.run_get_output( disklabel_cmd, chk_err=False) if ret: raise ResourceDiskError("Failed to create new disklabel " "on {0}, error " "{1}".format(device, output)) err, output = shellutil.run_get_output("newfs -O2 {0}a" "".format(device)) if err: raise ResourceDiskError("Failed to create new filesystem on " "partition {0}, error " "{1}".format(partition, output)) err, output = shellutil.run_get_output(mount_cmd, chk_err=False) if err: raise ResourceDiskError("Failed to mount partition {0}, " "error {1}".format(partition, output)) logger.info("Resource disk partition {0} is mounted at {1} with fstype " "{2}", partition, mount_point, fs) return mount_point Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/resourcedisk/openwrt.py000066400000000000000000000133371462617747000270010ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # Copyright 2018 Sonus Networks, Inc. (d.b.a. Ribbon Communications Operating Company) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
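#
# For illustration, with a 2048 MB swap request, example device sd1 and mount
# point /mnt/resource, the disklabel command in mount_resource_disk() above
# renders to:
#
#   echo -e '/mnt/resource 1G-* 50%\nswap 1-2048M 50%' | disklabel -w -A -T /dev/stdin sd1
#
# i.e. roughly half the disk (at least 1G) for the filesystem and half for swap
# (capped at the requested size), followed by "newfs -O2 sd1a" to create the
# filesystem on the a partition.
#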
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os from time import sleep import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import ResourceDiskError from azurelinuxagent.daemon.resourcedisk.default import ResourceDiskHandler class OpenWRTResourceDiskHandler(ResourceDiskHandler): def __init__(self): super(OpenWRTResourceDiskHandler, self).__init__() # Fase File System (FFS) is UFS if self.fs == 'ufs' or self.fs == 'ufs2': self.fs = 'ffs' def reread_partition_table(self, device): ret, output = shellutil.run_get_output("hdparm -z {0}".format(device), chk_err=False) # pylint: disable=W0612 if ret != 0: logger.warn("Failed refresh the partition table.") def mount_resource_disk(self, mount_point): device = self.osutil.device_for_ide_port(1) if device is None: raise ResourceDiskError("unable to detect disk topology") logger.info('Resource disk device {0} found.', device) # 2. Get partition device = "/dev/{0}".format(device) partition = device + "1" logger.info('Resource disk partition {0} found.', partition) # 3. Mount partition mount_list = shellutil.run_get_output("mount")[1] existing = self.osutil.get_mount_point(mount_list, device) if existing: logger.info("Resource disk [{0}] is already mounted [{1}]", partition, existing) return existing try: fileutil.mkdir(mount_point, mode=0o755) except OSError as ose: msg = "Failed to create mount point " \ "directory [{0}]: {1}".format(mount_point, ose) logger.error(msg) raise ResourceDiskError(msg=msg, inner=ose) force_option = 'F' if self.fs == 'xfs': force_option = 'f' mkfs_string = "mkfs.{0} -{2} {1}".format(self.fs, partition, force_option) # Compare to the Default mount_resource_disk, we don't check for GPT that is not supported on OpenWRT ret = self.change_partition_type(suppress_message=True, option_str="{0} 1 -n".format(device)) ptype = ret[1].strip() if ptype == "7" and self.fs != "ntfs": logger.info("The partition is formatted with ntfs, updating " "partition type to 83") self.change_partition_type(suppress_message=False, option_str="{0} 1 83".format(device)) self.reread_partition_table(device) logger.info("Format partition [{0}]", mkfs_string) shellutil.run(mkfs_string) else: logger.info("The partition type is {0}", ptype) mount_options = conf.get_resourcedisk_mountoptions() mount_string = self.get_mount_string(mount_options, partition, mount_point) attempts = 5 while not os.path.exists(partition) and attempts > 0: logger.info("Waiting for partition [{0}], {1} attempts remaining", partition, attempts) sleep(5) attempts -= 1 if not os.path.exists(partition): raise ResourceDiskError("Partition was not created [{0}]".format(partition)) if os.path.ismount(mount_point): logger.warn("Disk is already mounted on {0}", mount_point) else: # Some kernels seem to issue an async partition re-read after a # command invocation. This causes mount to fail if the # partition re-read is not complete by the time mount is # attempted. Seen in CentOS 7.2. 
Force a sequential re-read of # the partition and try mounting. logger.info("Mounting after re-reading partition info.") self.reread_partition_table(device) logger.info("Mount resource disk [{0}]", mount_string) ret, output = shellutil.run_get_output(mount_string) if ret: logger.warn("Failed to mount resource disk. " "Attempting to format and retry mount. [{0}]", output) shellutil.run(mkfs_string) ret, output = shellutil.run_get_output(mount_string) if ret: raise ResourceDiskError("Could not mount {0} " "after syncing partition table: " "[{1}] {2}".format(partition, ret, output)) logger.info("Resource disk {0} is mounted at {1} with {2}", device, mount_point, self.fs) return mount_point Azure-WALinuxAgent-2b21de5/azurelinuxagent/daemon/scvmm.py000066400000000000000000000053271462617747000237260ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import re import os import sys import subprocess import time import azurelinuxagent.common.logger as logger import azurelinuxagent.common.conf as conf from azurelinuxagent.common.osutil import get_osutil VMM_CONF_FILE_NAME = "linuxosconfiguration.xml" VMM_STARTUP_SCRIPT_NAME= "install" def get_scvmm_handler(): return ScvmmHandler() class ScvmmHandler(object): def __init__(self): self.osutil = get_osutil() def detect_scvmm_env(self, dev_dir='/dev'): logger.info("Detecting Microsoft System Center VMM Environment") found=False # try to load the ATAPI driver, continue on failure self.osutil.try_load_atapiix_mod() # cycle through all available /dev/sr*|hd*|cdrom*|cd* looking for the scvmm configuration file mount_point = conf.get_dvd_mount_point() for devices in filter(lambda x: x is not None, [re.match(r'(sr[0-9]|hd[c-z]|cdrom[0-9]?|cd[0-9]+)', dev) for dev in os.listdir(dev_dir)]): dvd_device = os.path.join(dev_dir, devices.group(0)) self.osutil.mount_dvd(max_retry=1, chk_err=False, dvd_device=dvd_device, mount_point=mount_point) found = os.path.isfile(os.path.join(mount_point, VMM_CONF_FILE_NAME)) if found: self.start_scvmm_agent(mount_point=mount_point) break else: self.osutil.umount_dvd(chk_err=False, mount_point=mount_point) return found def start_scvmm_agent(self, mount_point=None): logger.info("Starting Microsoft System Center VMM Initialization " "Process") if mount_point is None: mount_point = conf.get_dvd_mount_point() startup_script = os.path.join(mount_point, VMM_STARTUP_SCRIPT_NAME) with open(os.devnull, 'w') as devnull: subprocess.Popen(["/bin/bash", startup_script, "-p " + mount_point], stdout=devnull, stderr=devnull) def run(self): if self.detect_scvmm_env(): logger.info("Exiting") time.sleep(300) sys.exit(0) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/000077500000000000000000000000001462617747000213445ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/__init__.py000066400000000000000000000011661462617747000234610ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # 
Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/agent_update_handler.py000066400000000000000000000302341462617747000260550ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import os from azurelinuxagent.common import conf, logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import AgentUpgradeExitException, AgentUpdateError, AgentFamilyMissingError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.restapi import VMAgentUpdateStatuses, VMAgentUpdateStatus, VERSION_0 from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import get_daemon_version from azurelinuxagent.ga.rsm_version_updater import RSMVersionUpdater from azurelinuxagent.ga.self_update_version_updater import SelfUpdateVersionUpdater def get_agent_update_handler(protocol): return AgentUpdateHandler(protocol) class AgentUpdateHandler(object): """ This class handles two type of agent updates. Handler initializes the updater to SelfUpdateVersionUpdater and switch to appropriate updater based on below conditions: RSM update: This is the update requested by RSM. The contract between CRP and agent is we get following properties in the goal state: version: it will have what version to update isVersionFromRSM: True if the version is from RSM deployment. isVMEnabledForRSMUpgrades: True if the VM is enabled for RSM upgrades. if vm enabled for RSM upgrades, we use RSM update path. But if requested update is not by rsm deployment we ignore the update. Self update: We fallback to this if above is condition not met. This update to the largest version available in the manifest Note: Self-update don't support downgrade. Handler keeps the rsm state of last update is with RSM or not on every new goal state. Once handler decides which updater to use, then does following steps: 1. Retrieve the agent version from the goal state. 2. Check if we allowed to update for that version. 3. Log the update message. 4. Purge the extra agents from disk. 5. Download the new agent. 6. Proceed with update. [Note: 1.0.8.147 is the minimum supported version of HGPA which will have the isVersionFromRSM and isVMEnabledForRSMUpgrades properties in vmsettings.] 
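For illustration only (property names as described above; this is not the exact
vmSettings wire format), an RSM-requested update arrives with roughly:

    version: "2.10.0.7", isVersionFromRSM: true, isVMEnabledForRSMUpgrades: true

whereas a VM that is not enabled for RSM upgrades stays on (or switches back to)
the self-update path.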
""" def __init__(self, protocol): self._protocol = protocol self._gs_id = "unknown" self._ga_family_type = conf.get_autoupdate_gafamily() self._daemon_version = self._get_daemon_version_for_update() self._last_attempted_update_error_msg = "" # restore the state of rsm update. Default to self-update if last update is not with RSM. if not self._get_is_last_update_with_rsm(): self._updater = SelfUpdateVersionUpdater(self._gs_id) else: self._updater = RSMVersionUpdater(self._gs_id, self._daemon_version) @staticmethod def _get_daemon_version_for_update(): daemon_version = get_daemon_version() if daemon_version != FlexibleVersion(VERSION_0): return daemon_version # We return 0.0.0.0 if daemon version is not specified. In that case, # use the min version as 2.2.53 as we started setting the daemon version starting 2.2.53. return FlexibleVersion("2.2.53") @staticmethod def _get_rsm_update_state_file(): """ This file keeps if last attempted update is rsm or not. """ return os.path.join(conf.get_lib_dir(), "rsm_update.json") def _save_rsm_update_state(self): """ Save the rsm state empty file when we switch to RSM """ try: with open(self._get_rsm_update_state_file(), "w"): pass except Exception as e: logger.warn("Error creating the RSM state ({0}): {1}", self._get_rsm_update_state_file(), ustr(e)) def _remove_rsm_update_state(self): """ Remove the rsm state file when we switch to self-update """ try: if os.path.exists(self._get_rsm_update_state_file()): os.remove(self._get_rsm_update_state_file()) except Exception as e: logger.warn("Error removing the RSM state ({0}): {1}", self._get_rsm_update_state_file(), ustr(e)) def _get_is_last_update_with_rsm(self): """ Returns True if state file exists as this consider as last update with RSM is true """ return os.path.exists(self._get_rsm_update_state_file()) def _get_agent_family_manifest(self, goal_state): """ Get the agent_family from last GS for the given family Returns: first entry of Manifest Exception if no manifests found in the last GS and log it only on new goal state """ family = self._ga_family_type agent_families = goal_state.extensions_goal_state.agent_families family_found = False agent_family_manifests = [] for m in agent_families: if m.name == family: family_found = True if len(m.uris) > 0: agent_family_manifests.append(m) if not family_found: raise AgentFamilyMissingError(u"Agent family: {0} not found in the goal state: {1}, skipping agent update \n" u"[Note: This error is permanent for this goal state and Will not log same error until we receive new goal state]".format(family, self._gs_id)) if len(agent_family_manifests) == 0: raise AgentFamilyMissingError( u"No manifest links found for agent family: {0} for goal state: {1}, skipping agent update \n" u"[Note: This error is permanent for this goal state and will not log same error until we receive new goal state]".format( family, self._gs_id)) return agent_family_manifests[0] def run(self, goal_state, ext_gs_updated): try: # If auto update is disabled, we don't proceed with update if not conf.get_auto_update_to_latest_version(): return # Update the state only on new goal state if ext_gs_updated: self._gs_id = goal_state.extensions_goal_state.id self._updater.sync_new_gs_id(self._gs_id) agent_family = self._get_agent_family_manifest(goal_state) # Updater will return True or False if we need to switch the updater # If self-updater receives RSM update enabled, it will switch to RSM updater # If RSM updater receives RSM update disabled, it will switch to self-update # No change in updater if GS 
not updated is_rsm_update_enabled = self._updater.is_rsm_update_enabled(agent_family, ext_gs_updated) if not is_rsm_update_enabled and isinstance(self._updater, RSMVersionUpdater): msg = "VM not enabled for RSM updates, switching to self-update mode" logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) self._updater = SelfUpdateVersionUpdater(self._gs_id) self._remove_rsm_update_state() if is_rsm_update_enabled and isinstance(self._updater, SelfUpdateVersionUpdater): msg = "VM enabled for RSM updates, switching to RSM update mode" logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) self._updater = RSMVersionUpdater(self._gs_id, self._daemon_version) self._save_rsm_update_state() # If updater is changed in previous step, we allow update as it consider as first attempt. If not, it checks below condition # RSM checks new goal state; self-update checks manifest download interval if not self._updater.is_update_allowed_this_time(ext_gs_updated): return self._updater.retrieve_agent_version(agent_family, goal_state) if not self._updater.is_retrieved_version_allowed_to_update(agent_family): return self._updater.log_new_agent_update_message() agent = self._updater.download_and_get_new_agent(self._protocol, agent_family, goal_state) # Below condition is to break the update loop if new agent is in bad state in previous attempts # If the bad agent update already attempted 3 times, we don't want to continue with update anymore. # Otherewise we allow the update by increment the update attempt count and clear the bad state to make good agent # [Note: As a result, it is breaking contract between RSM and agent, we may NOT honor the RSM retries for that version] if agent.get_update_attempt_count() >= 3: msg = "Attempted enough update retries for version: {0} but still agent not recovered from bad state. So, we stop updating to this version".format(str(agent.version)) raise AgentUpdateError(msg) else: agent.clear_error() agent.inc_update_attempt_count() msg = "Agent update attempt count: {0} for version: {1}".format(agent.get_update_attempt_count(), str(agent.version)) logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) self._updater.purge_extra_agents_from_disk() self._updater.proceed_with_update() except Exception as err: log_error = True if isinstance(err, AgentUpgradeExitException): raise err elif isinstance(err, AgentUpdateError): error_msg = ustr(err) elif isinstance(err, AgentFamilyMissingError): error_msg = ustr(err) # Agent family missing error is permanent in the given goal state, so we don't want to log it on every iteration of main loop if there is no new goal state log_error = ext_gs_updated else: error_msg = "Unable to update Agent: {0}".format(textutil.format_exception(err)) if log_error: logger.warn(error_msg) add_event(op=WALAEventOperation.AgentUpgrade, is_success=False, message=error_msg, log_event=False) self._last_attempted_update_error_msg = error_msg def get_vmagent_update_status(self): """ This function gets the VMAgent update status as per the last attempted update. Returns: None if fail to report or update never attempted with rsm version specified in GS Note: We send the status regardless of updater type. Since we call this main loop, want to avoid fetching agent family to decide and send only if vm enabled for rsm updates. 
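For example (illustrative values), a successful attempt reports roughly
VMAgentUpdateStatus(expected_version="2.10.0.7", status=Success, code=0, message=""),
while a failed attempt reports status=Error, code=1 and the last recorded error
message, mirroring the logic below.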
""" try: if conf.get_enable_ga_versioning(): if not self._last_attempted_update_error_msg: status = VMAgentUpdateStatuses.Success code = 0 else: status = VMAgentUpdateStatuses.Error code = 1 return VMAgentUpdateStatus(expected_version=str(self._updater.version), status=status, code=code, message=self._last_attempted_update_error_msg) except Exception as err: msg = "Unable to report agent update status: {0}".format(textutil.format_exception(err)) logger.warn(msg) add_event(op=WALAEventOperation.AgentUpgrade, is_success=False, message=msg, log_event=True) return None Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/cgroup.py000066400000000000000000000361241462617747000232230ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import errno import os import re from datetime import timedelta from azurelinuxagent.common import logger, conf from azurelinuxagent.common.exception import CGroupsException from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import fileutil _REPORT_EVERY_HOUR = timedelta(hours=1) _DEFAULT_REPORT_PERIOD = timedelta(seconds=conf.get_cgroup_check_period()) AGENT_NAME_TELEMETRY = "walinuxagent.service" # Name used for telemetry; it needs to be consistent even if the name of the service changes AGENT_LOG_COLLECTOR = "azure-walinuxagent-logcollector" class CounterNotFound(Exception): pass class MetricValue(object): """ Class for defining all the required metric fields to send telemetry. """ def __init__(self, category, counter, instance, value, report_period=_DEFAULT_REPORT_PERIOD): self._category = category self._counter = counter self._instance = instance self._value = value self._report_period = report_period @property def category(self): return self._category @property def counter(self): return self._counter @property def instance(self): return self._instance @property def value(self): return self._value @property def report_period(self): return self._report_period class MetricsCategory(object): MEMORY_CATEGORY = "Memory" CPU_CATEGORY = "CPU" class MetricsCounter(object): PROCESSOR_PERCENT_TIME = "% Processor Time" TOTAL_MEM_USAGE = "Total Memory Usage" MAX_MEM_USAGE = "Max Memory Usage" THROTTLED_TIME = "Throttled Time" SWAP_MEM_USAGE = "Swap Memory Usage" AVAILABLE_MEM = "Available MBytes" USED_MEM = "Used MBytes" re_user_system_times = re.compile(r'user (\d+)\nsystem (\d+)\n') class CGroup(object): def __init__(self, name, cgroup_path): """ Initialize _data collection for the Memory controller :param: name: Name of the CGroup :param: cgroup_path: Path of the controller :return: """ self.name = name self.path = cgroup_path def __str__(self): return "{0} [{1}]".format(self.name, self.path) def _get_cgroup_file(self, file_name): return os.path.join(self.path, file_name) def _get_file_contents(self, file_name): """ Retrieve the contents to file. 
:param str file_name: Name of file within that metric controller :return: Entire contents of the file :rtype: str """ parameter_file = self._get_cgroup_file(file_name) return fileutil.read_file(parameter_file) def _get_parameters(self, parameter_name, first_line_only=False): """ Retrieve the values of a parameter from a controller. Returns a list of values in the file. :param first_line_only: return only the first line. :param str parameter_name: Name of file within that metric controller :return: The first line of the file, without line terminator :rtype: [str] """ result = [] try: values = self._get_file_contents(parameter_name).splitlines() result = values[0] if first_line_only else values except IndexError: parameter_filename = self._get_cgroup_file(parameter_name) logger.error("File {0} is empty but should not be".format(parameter_filename)) raise CGroupsException("File {0} is empty but should not be".format(parameter_filename)) except Exception as e: if isinstance(e, (IOError, OSError)) and e.errno == errno.ENOENT: # pylint: disable=E1101 raise e parameter_filename = self._get_cgroup_file(parameter_name) raise CGroupsException("Exception while attempting to read {0}".format(parameter_filename), e) return result def is_active(self): try: tasks = self._get_parameters("tasks") if tasks: return len(tasks) != 0 except (IOError, OSError) as e: if e.errno == errno.ENOENT: # only suppressing file not found exceptions. pass else: logger.periodic_warn(logger.EVERY_HALF_HOUR, 'Could not get list of tasks from "tasks" file in the cgroup: {0}.' ' Internal error: {1}'.format(self.path, ustr(e))) except CGroupsException as e: logger.periodic_warn(logger.EVERY_HALF_HOUR, 'Could not get list of tasks from "tasks" file in the cgroup: {0}.' ' Internal error: {1}'.format(self.path, ustr(e))) return False def get_tracked_metrics(self, **_): """ Retrieves the current value of the metrics tracked for this cgroup and returns them as an array. Note: Agent won't track the metrics if the current cpu ticks less than previous value and returns empty array. """ raise NotImplementedError() class CpuCgroup(CGroup): def __init__(self, name, cgroup_path): super(CpuCgroup, self).__init__(name, cgroup_path) self._osutil = get_osutil() self._previous_cgroup_cpu = None self._previous_system_cpu = None self._current_cgroup_cpu = None self._current_system_cpu = None self._previous_throttled_time = None self._current_throttled_time = None def _get_cpu_ticks(self, allow_no_such_file_or_directory_error=False): """ Returns the number of USER_HZ of CPU time (user and system) consumed by this cgroup. If allow_no_such_file_or_directory_error is set to True and cpuacct.stat does not exist the function returns 0; this is useful when the function can be called before the cgroup has been created. 
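The ticks are in USER_HZ units (typically 100 per second on Linux); e.g. the
sample cpuacct.stat shown further below (user 10190, system 3160) yields
10190 + 3160 = 13350 ticks, i.e. about 133.5 seconds of CPU time at USER_HZ=100.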
""" try: cpuacct_stat = self._get_file_contents('cpuacct.stat') except Exception as e: if not isinstance(e, (IOError, OSError)) or e.errno != errno.ENOENT: # pylint: disable=E1101 raise CGroupsException("Failed to read cpuacct.stat: {0}".format(ustr(e))) if not allow_no_such_file_or_directory_error: raise e cpuacct_stat = None cpu_ticks = 0 if cpuacct_stat is not None: # # Sample file: # # cat /sys/fs/cgroup/cpuacct/azure.slice/walinuxagent.service/cpuacct.stat # user 10190 # system 3160 # match = re_user_system_times.match(cpuacct_stat) if not match: raise CGroupsException( "The contents of {0} are invalid: {1}".format(self._get_cgroup_file('cpuacct.stat'), cpuacct_stat)) cpu_ticks = int(match.groups()[0]) + int(match.groups()[1]) return cpu_ticks def get_throttled_time(self): try: with open(os.path.join(self.path, 'cpu.stat')) as cpu_stat: # # Sample file: # # # cat /sys/fs/cgroup/cpuacct/azure.slice/walinuxagent.service/cpu.stat # nr_periods 51660 # nr_throttled 19461 # throttled_time 1529590856339 # for line in cpu_stat: match = re.match(r'throttled_time\s+(\d+)', line) if match is not None: return int(match.groups()[0]) raise Exception("Cannot find throttled_time") except (IOError, OSError) as e: if e.errno == errno.ENOENT: return 0 raise CGroupsException("Failed to read cpu.stat: {0}".format(ustr(e))) except Exception as e: raise CGroupsException("Failed to read cpu.stat: {0}".format(ustr(e))) def _cpu_usage_initialized(self): return self._current_cgroup_cpu is not None and self._current_system_cpu is not None def initialize_cpu_usage(self): """ Sets the initial values of CPU usage. This function must be invoked before calling get_cpu_usage(). """ if self._cpu_usage_initialized(): raise CGroupsException("initialize_cpu_usage() should be invoked only once") self._current_cgroup_cpu = self._get_cpu_ticks(allow_no_such_file_or_directory_error=True) self._current_system_cpu = self._osutil.get_total_cpu_ticks_since_boot() self._current_throttled_time = self.get_throttled_time() def get_cpu_usage(self): """ Computes the CPU used by the cgroup since the last call to this function. The usage is measured as a percentage of utilization of 1 core in the system. For example, using 1 core all of the time on a 4-core system would be reported as 100%. NOTE: initialize_cpu_usage() must be invoked before calling get_cpu_usage() """ if not self._cpu_usage_initialized(): raise CGroupsException("initialize_cpu_usage() must be invoked before the first call to get_cpu_usage()") self._previous_cgroup_cpu = self._current_cgroup_cpu self._previous_system_cpu = self._current_system_cpu self._current_cgroup_cpu = self._get_cpu_ticks() self._current_system_cpu = self._osutil.get_total_cpu_ticks_since_boot() cgroup_delta = self._current_cgroup_cpu - self._previous_cgroup_cpu system_delta = max(1, self._current_system_cpu - self._previous_system_cpu) return round(100.0 * self._osutil.get_processor_cores() * float(cgroup_delta) / float(system_delta), 3) def get_cpu_throttled_time(self, read_previous_throttled_time=True): """ Computes the throttled time (in seconds) since the last call to this function. 
NOTE: initialize_cpu_usage() must be invoked before calling this function Compute only current throttled time if read_previous_throttled_time set to False """ if not read_previous_throttled_time: return float(self.get_throttled_time() / 1E9) if not self._cpu_usage_initialized(): raise CGroupsException( "initialize_cpu_usage() must be invoked before the first call to get_throttled_time()") self._previous_throttled_time = self._current_throttled_time self._current_throttled_time = self.get_throttled_time() return float(self._current_throttled_time - self._previous_throttled_time) / 1E9 def get_tracked_metrics(self, **kwargs): tracked = [] cpu_usage = self.get_cpu_usage() if cpu_usage >= float(0): tracked.append( MetricValue(MetricsCategory.CPU_CATEGORY, MetricsCounter.PROCESSOR_PERCENT_TIME, self.name, cpu_usage)) if 'track_throttled_time' in kwargs and kwargs['track_throttled_time']: throttled_time = self.get_cpu_throttled_time() if cpu_usage >= float(0) and throttled_time >= float(0): tracked.append( MetricValue(MetricsCategory.CPU_CATEGORY, MetricsCounter.THROTTLED_TIME, self.name, throttled_time)) return tracked class MemoryCgroup(CGroup): def __init__(self, name, cgroup_path): super(MemoryCgroup, self).__init__(name, cgroup_path) self._counter_not_found_error_count = 0 def _get_memory_stat_counter(self, counter_name): try: with open(os.path.join(self.path, 'memory.stat')) as memory_stat: # cat /sys/fs/cgroup/memory/azure.slice/memory.stat # cache 67178496 # rss 42340352 # rss_huge 6291456 # swap 0 for line in memory_stat: re_memory_counter = r'{0}\s+(\d+)'.format(counter_name) match = re.match(re_memory_counter, line) if match is not None: return int(match.groups()[0]) except (IOError, OSError) as e: if e.errno == errno.ENOENT: raise raise CGroupsException("Failed to read memory.stat: {0}".format(ustr(e))) except Exception as e: raise CGroupsException("Failed to read memory.stat: {0}".format(ustr(e))) raise CounterNotFound("Cannot find counter: {0}".format(counter_name)) def get_memory_usage(self): """ Collect RSS+CACHE from memory.stat cgroup. :return: Memory usage in bytes :rtype: int """ cache = self._get_memory_stat_counter("cache") rss = self._get_memory_stat_counter("rss") return cache + rss def try_swap_memory_usage(self): """ Collect SWAP from memory.stat cgroup. :return: Memory usage in bytes :rtype: int Note: stat file is the only place to get the SWAP since other swap related file memory.memsw.usage_in_bytes is for total Memory+SWAP. """ try: return self._get_memory_stat_counter("swap") except CounterNotFound as e: if self._counter_not_found_error_count < 1: logger.periodic_info(logger.EVERY_HALF_HOUR, '{0} from "memory.stat" file in the cgroup: {1}---[Note: This log for informational purpose only and can be ignored]'.format(ustr(e), self.path)) self._counter_not_found_error_count += 1 return 0 def get_max_memory_usage(self): """ Collect memory.max_usage_in_bytes from the cgroup. 
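Note that memory.max_usage_in_bytes is the high-water mark recorded by the
kernel for the cgroup (cgroup v1), not the current usage.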
:return: Memory usage in bytes :rtype: int """ usage = 0 try: usage = int(self._get_parameters('memory.max_usage_in_bytes', first_line_only=True)) except Exception as e: if isinstance(e, (IOError, OSError)) and e.errno == errno.ENOENT: # pylint: disable=E1101 raise raise CGroupsException("Exception while attempting to read {0}".format("memory.max_usage_in_bytes"), e) return usage def get_tracked_metrics(self, **_): return [ MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.TOTAL_MEM_USAGE, self.name, self.get_memory_usage()), MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.MAX_MEM_USAGE, self.name, self.get_max_memory_usage(), _REPORT_EVERY_HOUR), MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.SWAP_MEM_USAGE, self.name, self.try_swap_memory_usage(), _REPORT_EVERY_HOUR) ] Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/cgroupapi.py000066400000000000000000000412121462617747000237070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import os import re import shutil import subprocess import threading import uuid from azurelinuxagent.common import logger from azurelinuxagent.ga.cgroup import CpuCgroup, MemoryCgroup from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.common.conf import get_agent_pid_file_path from azurelinuxagent.common.exception import CGroupsException, ExtensionErrorCodes, ExtensionError, \ ExtensionOperationError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import systemd from azurelinuxagent.common.utils import fileutil, shellutil from azurelinuxagent.ga.extensionprocessutil import handle_process_completion, read_output, \ TELEMETRY_MESSAGE_MAX_LEN from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import get_distro CGROUPS_FILE_SYSTEM_ROOT = '/sys/fs/cgroup' CGROUP_CONTROLLERS = ["cpu", "memory"] EXTENSION_SLICE_PREFIX = "azure-vmextensions" class SystemdRunError(CGroupsException): """ Raised when systemd-run fails """ def __init__(self, msg=None): super(SystemdRunError, self).__init__(msg) class CGroupsApi(object): @staticmethod def cgroups_supported(): distro_info = get_distro() distro_name = distro_info[0] try: distro_version = FlexibleVersion(distro_info[1]) except ValueError: return False return (distro_name.lower() == 'ubuntu' and distro_version.major >= 16) or \ (distro_name.lower() in ('centos', 'redhat') and 8 <= distro_version.major < 9) @staticmethod def track_cgroups(extension_cgroups): try: for cgroup in extension_cgroups: CGroupsTelemetry.track_cgroup(cgroup) except Exception as exception: logger.warn("Cannot add cgroup '{0}' to tracking list; resource usage will not be tracked. 
" "Error: {1}".format(cgroup.path, ustr(exception))) @staticmethod def get_processes_in_cgroup(cgroup_path): with open(os.path.join(cgroup_path, "cgroup.procs"), "r") as cgroup_procs: return [int(pid) for pid in cgroup_procs.read().split()] @staticmethod def _foreach_legacy_cgroup(operation): """ Previous versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent; starting from version 2.2.41 we track the agent service in walinuxagent.service instead of WALinuxAgent/WALinuxAgent. Also, when running under systemd, the PIDs should not be explicitly moved to the cgroup filesystem. The older daemons would incorrectly do that under certain conditions. This method checks for the existence of the legacy cgroups and, if the daemon's PID has been added to them, executes the given operation on the cgroups. After this check, the method attempts to remove the legacy cgroups. :param operation: The function to execute on each legacy cgroup. It must take 2 arguments: the controller and the daemon's PID """ legacy_cgroups = [] for controller in ['cpu', 'memory']: cgroup = os.path.join(CGROUPS_FILE_SYSTEM_ROOT, controller, "WALinuxAgent", "WALinuxAgent") if os.path.exists(cgroup): logger.info('Found legacy cgroup {0}', cgroup) legacy_cgroups.append((controller, cgroup)) try: for controller, cgroup in legacy_cgroups: procs_file = os.path.join(cgroup, "cgroup.procs") if os.path.exists(procs_file): procs_file_contents = fileutil.read_file(procs_file).strip() daemon_pid = CGroupsApi.get_daemon_pid() if ustr(daemon_pid) in procs_file_contents: operation(controller, daemon_pid) finally: for _, cgroup in legacy_cgroups: logger.info('Removing {0}', cgroup) shutil.rmtree(cgroup, ignore_errors=True) return len(legacy_cgroups) @staticmethod def get_daemon_pid(): return int(fileutil.read_file(get_agent_pid_file_path()).strip()) class SystemdCgroupsApi(CGroupsApi): """ Cgroups interface via systemd """ def __init__(self): self._cgroup_mountpoints = None self._agent_unit_name = None self._systemd_run_commands = [] self._systemd_run_commands_lock = threading.RLock() def get_systemd_run_commands(self): """ Returns a list of the systemd-run commands currently running (given as PIDs) """ with self._systemd_run_commands_lock: return self._systemd_run_commands[:] def get_cgroup_mount_points(self): """ Returns a tuple with the mount points for the cpu and memory controllers; the values can be None if the corresponding controller is not mounted """ # the output of mount is similar to # $ mount -t cgroup # cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd) # cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) # cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) # etc # if self._cgroup_mountpoints is None: cpu = None memory = None for line in shellutil.run_command(['mount', '-t', 'cgroup']).splitlines(): match = re.search(r'on\s+(?P/\S+(memory|cpuacct))\s', line) if match is not None: path = match.group('path') if 'cpuacct' in path: cpu = path else: memory = path self._cgroup_mountpoints = {'cpu': cpu, 'memory': memory} return self._cgroup_mountpoints['cpu'], self._cgroup_mountpoints['memory'] @staticmethod def get_process_cgroup_relative_paths(process_id): """ Returns a tuple with the path of the cpu and memory cgroups for the given process (relative to the mount point of the corresponding controller). 
The 'process_id' can be a numeric PID or the string "self" for the current process. The values returned can be None if the process is not in a cgroup for that controller (e.g. the controller is not mounted). """ # The contents of the file are similar to # # cat /proc/1218/cgroup # 10:memory:/system.slice/walinuxagent.service # 3:cpu,cpuacct:/system.slice/walinuxagent.service # etc cpu_path = None memory_path = None for line in fileutil.read_file("/proc/{0}/cgroup".format(process_id)).splitlines(): match = re.match(r'\d+:(?P(memory|.*cpuacct.*)):(?P.+)', line) if match is not None: controller = match.group('controller') path = match.group('path').lstrip('/') if match.group('path') != '/' else None if controller == 'memory': memory_path = path else: cpu_path = path return cpu_path, memory_path def get_process_cgroup_paths(self, process_id): """ Returns a tuple with the path of the cpu and memory cgroups for the given process. The 'process_id' can be a numeric PID or the string "self" for the current process. The values returned can be None if the process is not in a cgroup for that controller (e.g. the controller is not mounted). """ cpu_cgroup_relative_path, memory_cgroup_relative_path = self.get_process_cgroup_relative_paths(process_id) cpu_mount_point, memory_mount_point = self.get_cgroup_mount_points() cpu_cgroup_path = os.path.join(cpu_mount_point, cpu_cgroup_relative_path) \ if cpu_mount_point is not None and cpu_cgroup_relative_path is not None else None memory_cgroup_path = os.path.join(memory_mount_point, memory_cgroup_relative_path) \ if memory_mount_point is not None and memory_cgroup_relative_path is not None else None return cpu_cgroup_path, memory_cgroup_path def get_unit_cgroup_paths(self, unit_name): """ Returns a tuple with the path of the cpu and memory cgroups for the given unit. The values returned can be None if the controller is not mounted. 
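The paths are built by joining the unit's ControlGroup property (minus its
leading '/') onto each controller's mount point, as in the example below.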
Ex: ControlGroup=/azure.slice/walinuxagent.service controlgroup_path[1:] = azure.slice/walinuxagent.service """ controlgroup_path = systemd.get_unit_property(unit_name, "ControlGroup") cpu_mount_point, memory_mount_point = self.get_cgroup_mount_points() cpu_cgroup_path = os.path.join(cpu_mount_point, controlgroup_path[1:]) \ if cpu_mount_point is not None else None memory_cgroup_path = os.path.join(memory_mount_point, controlgroup_path[1:]) \ if memory_mount_point is not None else None return cpu_cgroup_path, memory_cgroup_path @staticmethod def get_cgroup2_controllers(): """ Returns a tuple with the mount point for the cgroups v2 controllers, and the currently mounted controllers; either value can be None if cgroups v2 or its controllers are not mounted """ # the output of mount is similar to # $ mount -t cgroup2 # cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate) # for line in shellutil.run_command(['mount', '-t', 'cgroup2']).splitlines(): match = re.search(r'on\s+(?P/\S+)\s', line) if match is not None: mount_point = match.group('path') controllers = None controllers_file = os.path.join(mount_point, 'cgroup.controllers') if os.path.exists(controllers_file): controllers = fileutil.read_file(controllers_file) return mount_point, controllers return None, None @staticmethod def _is_systemd_failure(scope_name, stderr): stderr.seek(0) stderr = ustr(stderr.read(TELEMETRY_MESSAGE_MAX_LEN), encoding='utf-8', errors='backslashreplace') unit_not_found = "Unit {0} not found.".format(scope_name) return unit_not_found in stderr or scope_name not in stderr @staticmethod def get_extension_slice_name(extension_name, old_slice=False): # The old slice makes it difficult for user to override the limits because they need to place drop-in files on every upgrade if extension slice is different for each version. # old slice includes .- # new slice without version . if not old_slice: extension_name = extension_name.rsplit("-", 1)[0] # Since '-' is used as a separator in systemd unit names, we replace it with '_' to prevent side-effects. return EXTENSION_SLICE_PREFIX + "-" + extension_name.replace('-', '_') + ".slice" def start_extension_command(self, extension_name, command, cmd_name, timeout, shell, cwd, env, stdout, stderr, error_code=ExtensionErrorCodes.PluginUnknownFailure): scope = "{0}_{1}".format(cmd_name, uuid.uuid4()) extension_slice_name = self.get_extension_slice_name(extension_name) with self._systemd_run_commands_lock: process = subprocess.Popen( # pylint: disable=W1509 # Some distros like ubuntu20 by default cpu and memory accounting enabled. Thus create nested cgroups under the extension slice # So disabling CPU and Memory accounting prevents from creating nested cgroups, so that all the counters will be present in extension Cgroup # since slice unit file configured with accounting enabled. 
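# For illustration, the rendered command looks roughly like (scope uuid and
# slice name are example values, derived as in get_extension_slice_name above):
#   systemd-run --property=CPUAccounting=no --property=MemoryAccounting=no
#       --unit=enable_f6fe9f50-0b06-4e67-bfbd-ac59c5383051 --scope
#       --slice=azure-vmextensions-Microsoft.CPlat.Extension.slice <command>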
"systemd-run --property=CPUAccounting=no --property=MemoryAccounting=no --unit={0} --scope --slice={1} {2}".format(scope, extension_slice_name, command), shell=shell, cwd=cwd, stdout=stdout, stderr=stderr, env=env, preexec_fn=os.setsid) # We start systemd-run with shell == True so process.pid is the shell's pid, not the pid for systemd-run self._systemd_run_commands.append(process.pid) scope_name = scope + '.scope' logger.info("Started extension in unit '{0}'", scope_name) cpu_cgroup = None try: cgroup_relative_path = os.path.join('azure.slice/azure-vmextensions.slice', extension_slice_name) cpu_cgroup_mountpoint, memory_cgroup_mountpoint = self.get_cgroup_mount_points() if cpu_cgroup_mountpoint is None: logger.info("The CPU controller is not mounted; will not track resource usage") else: cpu_cgroup_path = os.path.join(cpu_cgroup_mountpoint, cgroup_relative_path) cpu_cgroup = CpuCgroup(extension_name, cpu_cgroup_path) CGroupsTelemetry.track_cgroup(cpu_cgroup) if memory_cgroup_mountpoint is None: logger.info("The Memory controller is not mounted; will not track resource usage") else: memory_cgroup_path = os.path.join(memory_cgroup_mountpoint, cgroup_relative_path) memory_cgroup = MemoryCgroup(extension_name, memory_cgroup_path) CGroupsTelemetry.track_cgroup(memory_cgroup) except IOError as e: if e.errno == 2: # 'No such file or directory' logger.info("The extension command already completed; will not track resource usage") logger.info("Failed to start tracking resource usage for the extension: {0}", ustr(e)) except Exception as e: logger.info("Failed to start tracking resource usage for the extension: {0}", ustr(e)) # Wait for process completion or timeout try: return handle_process_completion(process=process, command=command, timeout=timeout, stdout=stdout, stderr=stderr, error_code=error_code, cpu_cgroup=cpu_cgroup) except ExtensionError as e: # The extension didn't terminate successfully. Determine whether it was due to systemd errors or # extension errors. if not self._is_systemd_failure(scope, stderr): # There was an extension error; it either timed out or returned a non-zero exit code. Re-raise the error raise # There was an issue with systemd-run. We need to log it and retry the extension without systemd. process_output = read_output(stdout, stderr) # Reset the stdout and stderr stdout.truncate(0) stderr.truncate(0) if isinstance(e, ExtensionOperationError): # no-member: Instance of 'ExtensionError' has no 'exit_code' member (no-member) - Disabled: e is actually an ExtensionOperationError err_msg = 'Systemd process exited with code %s and output %s' % ( e.exit_code, process_output) # pylint: disable=no-member else: err_msg = "Systemd timed-out, output: %s" % process_output raise SystemdRunError(err_msg) finally: with self._systemd_run_commands_lock: self._systemd_run_commands.remove(process.pid) def cleanup_legacy_cgroups(self): """ Previous versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent; starting from version 2.2.41 we track the agent service in walinuxagent.service instead of WALinuxAgent/WALinuxAgent. 
If we find that any of the legacy groups include the PID of the daemon then we need to disable data collection for this instance (under systemd, moving PIDs across the cgroup file system can produce unpredictable results) """ return CGroupsApi._foreach_legacy_cgroup(lambda *_: None) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/cgroupconfigurator.py000066400000000000000000001601211462617747000256410ustar00rootroot00000000000000# -*- encoding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import glob import json import os import re import subprocess import threading from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.ga.cgroup import CpuCgroup, AGENT_NAME_TELEMETRY, MetricsCounter, MemoryCgroup from azurelinuxagent.ga.cgroupapi import CGroupsApi, SystemdCgroupsApi, SystemdRunError, EXTENSION_SLICE_PREFIX from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.common.exception import ExtensionErrorCodes, CGroupsException, AgentMemoryExceededException from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil, systemd from azurelinuxagent.common.version import get_distro from azurelinuxagent.common.utils import shellutil, fileutil from azurelinuxagent.ga.extensionprocessutil import handle_process_completion from azurelinuxagent.common.event import add_event, WALAEventOperation AZURE_SLICE = "azure.slice" _AZURE_SLICE_CONTENTS = """ [Unit] Description=Slice for Azure VM Agent and Extensions DefaultDependencies=no Before=slices.target """ _VMEXTENSIONS_SLICE = EXTENSION_SLICE_PREFIX + ".slice" _AZURE_VMEXTENSIONS_SLICE = AZURE_SLICE + "/" + _VMEXTENSIONS_SLICE _VMEXTENSIONS_SLICE_CONTENTS = """ [Unit] Description=Slice for Azure VM Extensions DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes MemoryAccounting=yes """ _EXTENSION_SLICE_CONTENTS = """ [Unit] Description=Slice for Azure VM extension {extension_name} DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes CPUQuota={cpu_quota} MemoryAccounting=yes """ LOGCOLLECTOR_SLICE = "azure-walinuxagent-logcollector.slice" # More info on resource limits properties in systemd here: # https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/resource_management_guide/sec-modifying_control_groups _LOGCOLLECTOR_SLICE_CONTENTS_FMT = """ [Unit] Description=Slice for Azure VM Agent Periodic Log Collector DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes CPUQuota={cpu_quota} MemoryAccounting=yes """ _LOGCOLLECTOR_CPU_QUOTA = "5%" LOGCOLLECTOR_MEMORY_LIMIT = 30 * 1024 ** 2 # 30Mb _AGENT_DROP_IN_FILE_SLICE = "10-Slice.conf" _AGENT_DROP_IN_FILE_SLICE_CONTENTS = """ # This drop-in unit file was created by the Azure VM Agent. # Do not edit. 
[Service] Slice=azure.slice """ _DROP_IN_FILE_CPU_ACCOUNTING = "11-CPUAccounting.conf" _DROP_IN_FILE_CPU_ACCOUNTING_CONTENTS = """ # This drop-in unit file was created by the Azure VM Agent. # Do not edit. [Service] CPUAccounting=yes """ _DROP_IN_FILE_CPU_QUOTA = "12-CPUQuota.conf" _DROP_IN_FILE_CPU_QUOTA_CONTENTS_FORMAT = """ # This drop-in unit file was created by the Azure VM Agent. # Do not edit. [Service] CPUQuota={0} """ _DROP_IN_FILE_MEMORY_ACCOUNTING = "13-MemoryAccounting.conf" _DROP_IN_FILE_MEMORY_ACCOUNTING_CONTENTS = """ # This drop-in unit file was created by the Azure VM Agent. # Do not edit. [Service] MemoryAccounting=yes """ class DisableCgroups(object): ALL = "all" AGENT = "agent" EXTENSIONS = "extensions" def _log_cgroup_info(format_string, *args): message = format_string.format(*args) logger.info("[CGI] " + message) add_event(op=WALAEventOperation.CGroupsInfo, message=message) def _log_cgroup_warning(format_string, *args): message = format_string.format(*args) logger.info("[CGW] " + message) # log as INFO for now, in the future it should be logged as WARNING add_event(op=WALAEventOperation.CGroupsInfo, message=message, is_success=False, log_event=False) class CGroupConfigurator(object): """ This class implements the high-level operations on CGroups (e.g. initialization, creation, etc) NOTE: with the exception of start_extension_command, none of the methods in this class raise exceptions (cgroup operations should not block extensions) """ class _Impl(object): def __init__(self): self._initialized = False self._cgroups_supported = False self._agent_cgroups_enabled = False self._extensions_cgroups_enabled = False self._cgroups_api = None self._agent_cpu_cgroup_path = None self._agent_memory_cgroup_path = None self._agent_memory_cgroup = None self._check_cgroups_lock = threading.RLock() # Protect the check_cgroups which is called from Monitor thread and main loop. def initialize(self): try: if self._initialized: return # This check is to reset the quotas if agent goes from cgroup supported to unsupported distros later in time. 
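# Concretely, the cleanup below removes the agent drop-in files created on
# supported distros (10-Slice.conf, 11-CPUAccounting.conf, 12-CPUQuota.conf and
# 13-MemoryAccounting.conf) and reloads systemd so default quotas apply again.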
if not CGroupsApi.cgroups_supported(): agent_drop_in_path = systemd.get_agent_drop_in_path() try: if os.path.exists(agent_drop_in_path) and os.path.isdir(agent_drop_in_path): files_to_cleanup = [] agent_drop_in_file_slice = os.path.join(agent_drop_in_path, _AGENT_DROP_IN_FILE_SLICE) agent_drop_in_file_cpu_accounting = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING) agent_drop_in_file_memory_accounting = os.path.join(agent_drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING) agent_drop_in_file_cpu_quota = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_QUOTA) files_to_cleanup.extend([agent_drop_in_file_slice, agent_drop_in_file_cpu_accounting, agent_drop_in_file_memory_accounting, agent_drop_in_file_cpu_quota]) self.__cleanup_all_files(files_to_cleanup) self.__reload_systemd_config() logger.info("Agent reset the quotas if distro: {0} goes from supported to unsupported list", get_distro()) except Exception as err: logger.warn("Unable to delete Agent drop-in files while resetting the quotas: {0}".format(err)) # check whether cgroup monitoring is supported on the current distro self._cgroups_supported = CGroupsApi.cgroups_supported() if not self._cgroups_supported: logger.info("Cgroup monitoring is not supported on {0}", get_distro()) return # check that systemd is detected correctly self._cgroups_api = SystemdCgroupsApi() if not systemd.is_systemd(): _log_cgroup_warning("systemd was not detected on {0}", get_distro()) return _log_cgroup_info("systemd version: {0}", systemd.get_version()) # This is temporarily disabled while we analyze telemetry. Likely it will be removed. # self.__collect_azure_unit_telemetry() # self.__collect_agent_unit_files_telemetry() if not self.__check_no_legacy_cgroups(): return agent_unit_name = systemd.get_agent_unit_name() agent_slice = systemd.get_unit_property(agent_unit_name, "Slice") if agent_slice not in (AZURE_SLICE, "system.slice"): _log_cgroup_warning("The agent is within an unexpected slice: {0}", agent_slice) return self.__setup_azure_slice() cpu_controller_root, memory_controller_root = self.__get_cgroup_controllers() self._agent_cpu_cgroup_path, self._agent_memory_cgroup_path = self.__get_agent_cgroups(agent_slice, cpu_controller_root, memory_controller_root) if self._agent_cpu_cgroup_path is not None or self._agent_memory_cgroup_path is not None: self.enable() if self._agent_cpu_cgroup_path is not None: _log_cgroup_info("Agent CPU cgroup: {0}", self._agent_cpu_cgroup_path) self.__set_cpu_quota(conf.get_agent_cpu_quota()) CGroupsTelemetry.track_cgroup(CpuCgroup(AGENT_NAME_TELEMETRY, self._agent_cpu_cgroup_path)) if self._agent_memory_cgroup_path is not None: _log_cgroup_info("Agent Memory cgroup: {0}", self._agent_memory_cgroup_path) self._agent_memory_cgroup = MemoryCgroup(AGENT_NAME_TELEMETRY, self._agent_memory_cgroup_path) CGroupsTelemetry.track_cgroup(self._agent_memory_cgroup) _log_cgroup_info('Agent cgroups enabled: {0}', self._agent_cgroups_enabled) except Exception as exception: _log_cgroup_warning("Error initializing cgroups: {0}", ustr(exception)) finally: self._initialized = True @staticmethod def __collect_azure_unit_telemetry(): azure_units = [] try: units = shellutil.run_command(['systemctl', 'list-units', 'azure*', '-all']) for line in units.split('\n'): match = re.match(r'\s?(azure[^\s]*)\s?', line, re.IGNORECASE) if match is not None: azure_units.append((match.group(1), line)) except shellutil.CommandError as command_error: _log_cgroup_warning("Failed to list systemd units: {0}", ustr(command_error)) for unit_name, 
unit_description in azure_units: unit_slice = "Unknown" try: unit_slice = systemd.get_unit_property(unit_name, "Slice") except Exception as exception: _log_cgroup_warning("Failed to query Slice for {0}: {1}", unit_name, ustr(exception)) _log_cgroup_info("Found an Azure unit under slice {0}: {1}", unit_slice, unit_description) if len(azure_units) == 0: try: cgroups = shellutil.run_command('systemd-cgls') for line in cgroups.split('\n'): if re.match(r'[^\x00-\xff]+azure\.slice\s*', line, re.UNICODE): logger.info(ustr("Found a cgroup for azure.slice\n{0}").format(cgroups)) # Don't add the output of systemd-cgls to the telemetry, since currently it does not support Unicode add_event(op=WALAEventOperation.CGroupsInfo, message="Found a cgroup for azure.slice") except shellutil.CommandError as command_error: _log_cgroup_warning("Failed to list systemd units: {0}", ustr(command_error)) @staticmethod def __collect_agent_unit_files_telemetry(): agent_unit_files = [] agent_service_name = get_osutil().get_service_name() try: fragment_path = systemd.get_unit_property(agent_service_name, "FragmentPath") if fragment_path != systemd.get_agent_unit_file(): agent_unit_files.append(fragment_path) except Exception as exception: _log_cgroup_warning("Failed to query the agent's FragmentPath: {0}", ustr(exception)) try: drop_in_paths = systemd.get_unit_property(agent_service_name, "DropInPaths") for path in drop_in_paths.split(): agent_unit_files.append(path) except Exception as exception: _log_cgroup_warning("Failed to query the agent's DropInPaths: {0}", ustr(exception)) for unit_file in agent_unit_files: try: with open(unit_file, "r") as file_object: _log_cgroup_info("Found a custom unit file for the agent: {0}\n{1}", unit_file, file_object.read()) except Exception as exception: _log_cgroup_warning("Can't read {0}: {1}", unit_file, ustr(exception)) def __check_no_legacy_cgroups(self): """ Older versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent. When running under systemd this could produce invalid resource usage data. Cgroups should not be enabled under this condition. """ legacy_cgroups = self._cgroups_api.cleanup_legacy_cgroups() if legacy_cgroups > 0: _log_cgroup_warning("The daemon's PID was added to a legacy cgroup; will not monitor resource usage.") return False return True def __get_cgroup_controllers(self): # # check v1 controllers # cpu_controller_root, memory_controller_root = self._cgroups_api.get_cgroup_mount_points() if cpu_controller_root is not None: logger.info("The CPU cgroup controller is mounted at {0}", cpu_controller_root) else: _log_cgroup_warning("The CPU cgroup controller is not mounted") if memory_controller_root is not None: logger.info("The memory cgroup controller is mounted at {0}", memory_controller_root) else: _log_cgroup_warning("The memory cgroup controller is not mounted") # # check v2 controllers # cgroup2_mount_point, cgroup2_controllers = self._cgroups_api.get_cgroup2_controllers() if cgroup2_mount_point is not None: _log_cgroup_info("cgroups v2 mounted at {0}. Controllers: [{1}]", cgroup2_mount_point, cgroup2_controllers) return cpu_controller_root, memory_controller_root @staticmethod def __setup_azure_slice(): """ The agent creates "azure.slice" for use by extensions and the agent. The agent runs under "azure.slice" directly and each extension runs under its own slice ("Microsoft.CPlat.Extension.slice" in the example below). All the slices for extensions are grouped under "vmextensions.slice". 
Example: -.slice ├─user.slice ├─system.slice └─azure.slice ├─walinuxagent.service │ ├─5759 /usr/bin/python3 -u /usr/sbin/waagent -daemon │ └─5764 python3 -u bin/WALinuxAgent-2.2.53-py2.7.egg -run-exthandlers └─azure-vmextensions.slice └─Microsoft.CPlat.Extension.slice └─5894 /usr/bin/python3 /var/lib/waagent/Microsoft.CPlat.Extension-1.0.0.0/enable.py This method ensures that the "azure" and "vmextensions" slices are created. Setup should create those slices under /lib/systemd/system; but if they do not exist, __ensure_azure_slices_exist will create them. It also creates drop-in files to set the agent's Slice and CPUAccounting if they have not been set up in the agent's unit file. Lastly, the method also cleans up unit files left over from previous versions of the agent. """ # Older agents used to create this slice, but it was never used. Cleanup the file. CGroupConfigurator._Impl.__cleanup_unit_file("/etc/systemd/system/system-walinuxagent.extensions.slice") unit_file_install_path = systemd.get_unit_file_install_path() azure_slice = os.path.join(unit_file_install_path, AZURE_SLICE) vmextensions_slice = os.path.join(unit_file_install_path, _VMEXTENSIONS_SLICE) logcollector_slice = os.path.join(unit_file_install_path, LOGCOLLECTOR_SLICE) agent_unit_file = systemd.get_agent_unit_file() agent_drop_in_path = systemd.get_agent_drop_in_path() agent_drop_in_file_slice = os.path.join(agent_drop_in_path, _AGENT_DROP_IN_FILE_SLICE) agent_drop_in_file_cpu_accounting = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING) agent_drop_in_file_memory_accounting = os.path.join(agent_drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING) files_to_create = [] if not os.path.exists(azure_slice): files_to_create.append((azure_slice, _AZURE_SLICE_CONTENTS)) if not os.path.exists(vmextensions_slice): files_to_create.append((vmextensions_slice, _VMEXTENSIONS_SLICE_CONTENTS)) # Update log collector slice contents slice_contents = _LOGCOLLECTOR_SLICE_CONTENTS_FMT.format(cpu_quota=_LOGCOLLECTOR_CPU_QUOTA) files_to_create.append((logcollector_slice, slice_contents)) if fileutil.findre_in_file(agent_unit_file, r"Slice=") is not None: CGroupConfigurator._Impl.__cleanup_unit_file(agent_drop_in_file_slice) else: if not os.path.exists(agent_drop_in_file_slice): files_to_create.append((agent_drop_in_file_slice, _AGENT_DROP_IN_FILE_SLICE_CONTENTS)) if fileutil.findre_in_file(agent_unit_file, r"CPUAccounting=") is not None: CGroupConfigurator._Impl.__cleanup_unit_file(agent_drop_in_file_cpu_accounting) else: if not os.path.exists(agent_drop_in_file_cpu_accounting): files_to_create.append((agent_drop_in_file_cpu_accounting, _DROP_IN_FILE_CPU_ACCOUNTING_CONTENTS)) if fileutil.findre_in_file(agent_unit_file, r"MemoryAccounting=") is not None: CGroupConfigurator._Impl.__cleanup_unit_file(agent_drop_in_file_memory_accounting) else: if not os.path.exists(agent_drop_in_file_memory_accounting): files_to_create.append( (agent_drop_in_file_memory_accounting, _DROP_IN_FILE_MEMORY_ACCOUNTING_CONTENTS)) if len(files_to_create) > 0: # create the unit files, but if 1 fails remove all and return try: for path, contents in files_to_create: CGroupConfigurator._Impl.__create_unit_file(path, contents) except Exception as exception: _log_cgroup_warning("Failed to create unit files for the azure slice: {0}", ustr(exception)) for unit_file in files_to_create: CGroupConfigurator._Impl.__cleanup_unit_file(unit_file) return CGroupConfigurator._Impl.__reload_systemd_config() @staticmethod def __reload_systemd_config(): # reload the systemd 
configuration; the new slices will be used once the agent's service restarts try: logger.info("Executing systemctl daemon-reload...") shellutil.run_command(["systemctl", "daemon-reload"]) except Exception as exception: _log_cgroup_warning("daemon-reload failed (create azure slice): {0}", ustr(exception)) @staticmethod def __create_unit_file(path, contents): parent, _ = os.path.split(path) if not os.path.exists(parent): fileutil.mkdir(parent, mode=0o755) exists = os.path.exists(path) fileutil.write_file(path, contents) _log_cgroup_info("{0} {1}", "Updated" if exists else "Created", path) @staticmethod def __cleanup_unit_file(path): if os.path.exists(path): try: os.remove(path) _log_cgroup_info("Removed {0}", path) except Exception as exception: _log_cgroup_warning("Failed to remove {0}: {1}", path, ustr(exception)) @staticmethod def __cleanup_all_files(files_to_cleanup): for path in files_to_cleanup: if os.path.exists(path): try: os.remove(path) _log_cgroup_info("Removed {0}", path) except Exception as exception: _log_cgroup_warning("Failed to remove {0}: {1}", path, ustr(exception)) @staticmethod def __create_all_files(files_to_create): # create the unit files, but if 1 fails remove all and return try: for path, contents in files_to_create: CGroupConfigurator._Impl.__create_unit_file(path, contents) except Exception as exception: _log_cgroup_warning("Failed to create unit files : {0}", ustr(exception)) for unit_file in files_to_create: CGroupConfigurator._Impl.__cleanup_unit_file(unit_file) return def is_extension_resource_limits_setup_completed(self, extension_name, cpu_quota=None): unit_file_install_path = systemd.get_unit_file_install_path() old_extension_slice_path = os.path.join(unit_file_install_path, SystemdCgroupsApi.get_extension_slice_name(extension_name, old_slice=True)) # clean up the old slice from the disk if os.path.exists(old_extension_slice_path): CGroupConfigurator._Impl.__cleanup_unit_file(old_extension_slice_path) extension_slice_path = os.path.join(unit_file_install_path, SystemdCgroupsApi.get_extension_slice_name(extension_name)) cpu_quota = str( cpu_quota) + "%" if cpu_quota is not None else "" # setting an empty value resets to the default (infinity) slice_contents = _EXTENSION_SLICE_CONTENTS.format(extension_name=extension_name, cpu_quota=cpu_quota) if os.path.exists(extension_slice_path): with open(extension_slice_path, "r") as file_: if file_.read() == slice_contents: return True return False def __get_agent_cgroups(self, agent_slice, cpu_controller_root, memory_controller_root): agent_unit_name = systemd.get_agent_unit_name() expected_relative_path = os.path.join(agent_slice, agent_unit_name) cpu_cgroup_relative_path, memory_cgroup_relative_path = self._cgroups_api.get_process_cgroup_relative_paths( "self") if cpu_cgroup_relative_path is None: _log_cgroup_warning("The agent's process is not within a CPU cgroup") else: if cpu_cgroup_relative_path == expected_relative_path: _log_cgroup_info('CPUAccounting: {0}', systemd.get_unit_property(agent_unit_name, "CPUAccounting")) _log_cgroup_info('CPUQuota: {0}', systemd.get_unit_property(agent_unit_name, "CPUQuotaPerSecUSec")) else: _log_cgroup_warning( "The Agent is not in the expected CPU cgroup; will not enable monitoring. 
Cgroup:[{0}] Expected:[{1}]", cpu_cgroup_relative_path, expected_relative_path) cpu_cgroup_relative_path = None # Set the path to None to prevent monitoring if memory_cgroup_relative_path is None: _log_cgroup_warning("The agent's process is not within a memory cgroup") else: if memory_cgroup_relative_path == expected_relative_path: memory_accounting = systemd.get_unit_property(agent_unit_name, "MemoryAccounting") _log_cgroup_info('MemoryAccounting: {0}', memory_accounting) else: _log_cgroup_info( "The Agent is not in the expected memory cgroup; will not enable monitoring. CGroup:[{0}] Expected:[{1}]", memory_cgroup_relative_path, expected_relative_path) memory_cgroup_relative_path = None # Set the path to None to prevent monitoring if cpu_controller_root is not None and cpu_cgroup_relative_path is not None: agent_cpu_cgroup_path = os.path.join(cpu_controller_root, cpu_cgroup_relative_path) else: agent_cpu_cgroup_path = None if memory_controller_root is not None and memory_cgroup_relative_path is not None: agent_memory_cgroup_path = os.path.join(memory_controller_root, memory_cgroup_relative_path) else: agent_memory_cgroup_path = None return agent_cpu_cgroup_path, agent_memory_cgroup_path def supported(self): return self._cgroups_supported def enabled(self): return self._agent_cgroups_enabled or self._extensions_cgroups_enabled def agent_enabled(self): return self._agent_cgroups_enabled def extensions_enabled(self): return self._extensions_cgroups_enabled def enable(self): if not self.supported(): raise CGroupsException( "Attempted to enable cgroups, but they are not supported on the current platform") self._agent_cgroups_enabled = True self._extensions_cgroups_enabled = True def disable(self, reason, disable_cgroups): if disable_cgroups == DisableCgroups.ALL: # disable all # Reset quotas self.__reset_agent_cpu_quota() extension_services = self.get_extension_services_list() for extension in extension_services: logger.info("Resetting extension : {0} and it's services: {1} CPUQuota".format(extension, extension_services[extension])) self.__reset_extension_cpu_quota(extension_name=extension) self.__reset_extension_services_cpu_quota(extension_services[extension]) self.__reload_systemd_config() CGroupsTelemetry.reset() self._agent_cgroups_enabled = False self._extensions_cgroups_enabled = False elif disable_cgroups == DisableCgroups.AGENT: # disable agent self._agent_cgroups_enabled = False self.__reset_agent_cpu_quota() CGroupsTelemetry.stop_tracking(CpuCgroup(AGENT_NAME_TELEMETRY, self._agent_cpu_cgroup_path)) message = "[CGW] Disabling resource usage monitoring. Reason: {0}".format(reason) logger.info(message) # log as INFO for now, in the future it should be logged as WARNING add_event(op=WALAEventOperation.CGroupsDisabled, message=message, is_success=False, log_event=False) @staticmethod def __set_cpu_quota(quota): """ Sets the agent's CPU quota to the given percentage (100% == 1 CPU) NOTE: This is done using a dropin file in the default dropin directory; any local overrides on the VM will take precedence over this setting. """ quota_percentage = "{0}%".format(quota) _log_cgroup_info("Ensuring the agent's CPUQuota is {0}", quota_percentage) if CGroupConfigurator._Impl.__try_set_cpu_quota(quota_percentage): CGroupsTelemetry.set_track_throttled_time(True) @staticmethod def __reset_agent_cpu_quota(): """ Removes any CPUQuota on the agent NOTE: This resets the quota on the agent's default dropin file; any local overrides on the VM will take precedence over this setting. 
""" logger.info("Resetting agent's CPUQuota") if CGroupConfigurator._Impl.__try_set_cpu_quota(''): # setting an empty value resets to the default (infinity) _log_cgroup_info('CPUQuota: {0}', systemd.get_unit_property(systemd.get_agent_unit_name(), "CPUQuotaPerSecUSec")) @staticmethod def __try_set_cpu_quota(quota): try: drop_in_file = os.path.join(systemd.get_agent_drop_in_path(), _DROP_IN_FILE_CPU_QUOTA) contents = _DROP_IN_FILE_CPU_QUOTA_CONTENTS_FORMAT.format(quota) if os.path.exists(drop_in_file): with open(drop_in_file, "r") as file_: if file_.read() == contents: return True # no need to update the file; return here to avoid doing a daemon-reload CGroupConfigurator._Impl.__create_unit_file(drop_in_file, contents) except Exception as exception: _log_cgroup_warning('Failed to set CPUQuota: {0}', ustr(exception)) return False try: logger.info("Executing systemctl daemon-reload...") shellutil.run_command(["systemctl", "daemon-reload"]) except Exception as exception: _log_cgroup_warning("daemon-reload failed (set quota): {0}", ustr(exception)) return False return True def check_cgroups(self, cgroup_metrics): self._check_cgroups_lock.acquire() try: if not self.enabled(): return errors = [] process_check_success = False try: self._check_processes_in_agent_cgroup() process_check_success = True except CGroupsException as exception: errors.append(exception) quota_check_success = False try: if cgroup_metrics: self._check_agent_throttled_time(cgroup_metrics) quota_check_success = True except CGroupsException as exception: errors.append(exception) reason = "Check on cgroups failed:\n{0}".format("\n".join([ustr(e) for e in errors])) if not process_check_success and conf.get_cgroup_disable_on_process_check_failure(): self.disable(reason, DisableCgroups.ALL) if not quota_check_success and conf.get_cgroup_disable_on_quota_check_failure(): self.disable(reason, DisableCgroups.AGENT) finally: self._check_cgroups_lock.release() def _check_processes_in_agent_cgroup(self): """ Verifies that the agent's cgroup includes only the current process, its parent, commands started using shellutil and instances of systemd-run (those processes correspond, respectively, to the extension handler, the daemon, commands started by the extension handler, and the systemd-run commands used to start extensions on their own cgroup). Other processes started by the agent (e.g. extensions) and processes not started by the agent (e.g. services installed by extensions) are reported as unexpected, since they should belong to their own cgroup. Raises a CGroupsException if the check fails """ unexpected = [] agent_cgroup_proc_names = [] try: daemon = os.getppid() extension_handler = os.getpid() agent_commands = set() agent_commands.update(shellutil.get_running_commands()) systemd_run_commands = set() systemd_run_commands.update(self._cgroups_api.get_systemd_run_commands()) agent_cgroup = CGroupsApi.get_processes_in_cgroup(self._agent_cpu_cgroup_path) # get the running commands again in case new commands started or completed while we were fetching the processes in the cgroup; agent_commands.update(shellutil.get_running_commands()) systemd_run_commands.update(self._cgroups_api.get_systemd_run_commands()) for process in agent_cgroup: agent_cgroup_proc_names.append(self.__format_process(process)) # Note that the agent uses systemd-run to start extensions; systemd-run belongs to the agent cgroup, though the extensions don't. 
if process in (daemon, extension_handler) or process in systemd_run_commands: continue # check shell systemd_run process if above process check didn't catch it if self._check_systemd_run_process(process): continue # systemd_run_commands contains the shell that started systemd-run, so we also need to check for the parent if self._get_parent(process) in systemd_run_commands and self._get_command( process) == 'systemd-run': continue # check if the process is a command started by the agent or a descendant of one of those commands current = process while current != 0 and current not in agent_commands: current = self._get_parent(current) # Verify if Process started by agent based on the marker found in process environment or process is in Zombie state. # If so, consider it as valid process in agent cgroup. if current == 0 and not (self.__is_process_descendant_of_the_agent(process) or self.__is_zombie_process(process)): unexpected.append(self.__format_process(process)) if len(unexpected) >= 5: # collect just a small sample break except Exception as exception: _log_cgroup_warning("Error checking the processes in the agent's cgroup: {0}".format(ustr(exception))) if len(unexpected) > 0: self._report_agent_cgroups_procs(agent_cgroup_proc_names, unexpected) raise CGroupsException("The agent's cgroup includes unexpected processes: {0}".format(unexpected)) @staticmethod def _get_command(pid): try: with open('/proc/{0}/comm'.format(pid), "r") as file_: comm = file_.read() if comm and comm[-1] == '\x00': # if null-terminated, remove the null comm = comm[:-1] return comm.rstrip() except Exception: return "UNKNOWN" @staticmethod def __format_process(pid): """ Formats the given PID as a string containing the PID and the corresponding command line truncated to 64 chars """ try: cmdline = '/proc/{0}/cmdline'.format(pid) if os.path.exists(cmdline): with open(cmdline, "r") as cmdline_file: return "[PID: {0}] {1:64.64}".format(pid, cmdline_file.read()) except Exception: pass return "[PID: {0}] UNKNOWN".format(pid) @staticmethod def __is_process_descendant_of_the_agent(pid): """ Returns True if the process is descendant of the agent by looking at the env flag(AZURE_GUEST_AGENT_PARENT_PROCESS_NAME) that we set when the process starts otherwise False. """ try: env = '/proc/{0}/environ'.format(pid) if os.path.exists(env): with open(env, "r") as env_file: environ = env_file.read() if environ and environ[-1] == '\x00': environ = environ[:-1] return "{0}={1}".format(shellutil.PARENT_PROCESS_NAME, shellutil.AZURE_GUEST_AGENT) in environ except Exception: pass return False @staticmethod def __is_zombie_process(pid): """ Returns True if process is in Zombie state otherwise False. Ex: cat /proc/18171/stat 18171 (python3) S 18103 18103 18103 0 -1 4194624 57736 64902 0 3 """ try: stat = '/proc/{0}/stat'.format(pid) if os.path.exists(stat): with open(stat, "r") as stat_file: return stat_file.read().split()[2] == 'Z' except Exception: pass return False @staticmethod def _check_systemd_run_process(process): """ Returns True if process is shell systemd-run process started by agent otherwise False. 
Ex: sh,7345 -c systemd-run --unit=enable_7c5cab19-eb79-4661-95d9-9e5091bd5ae0 --scope --slice=azure-vmextensions-Microsoft.OSTCExtensions.VMAccessForLinux_1.5.11.slice /var/lib/waagent/Microsoft.OSTCExtensions.VMAccessForLinux-1.5.11/processes.sh """ try: process_name = "UNKNOWN" cmdline = '/proc/{0}/cmdline'.format(process) if os.path.exists(cmdline): with open(cmdline, "r") as cmdline_file: process_name = "{0}".format(cmdline_file.read()) match = re.search(r'systemd-run.*--unit=.*--scope.*--slice=azure-vmextensions.*', process_name) if match is not None: return True except Exception: pass return False @staticmethod def _report_agent_cgroups_procs(agent_cgroup_proc_names, unexpected): for proc_name in unexpected: if 'UNKNOWN' in proc_name: msg = "Agent includes following processes when UNKNOWN process found: {0}".format("\n".join([ustr(proc) for proc in agent_cgroup_proc_names])) add_event(op=WALAEventOperation.CGroupsInfo, message=msg) @staticmethod def _check_agent_throttled_time(cgroup_metrics): for metric in cgroup_metrics: if metric.instance == AGENT_NAME_TELEMETRY and metric.counter == MetricsCounter.THROTTLED_TIME: if metric.value > conf.get_agent_cpu_throttled_time_threshold(): raise CGroupsException("The agent has been throttled for {0} seconds".format(metric.value)) def check_agent_memory_usage(self): if self.enabled() and self._agent_memory_cgroup: metrics = self._agent_memory_cgroup.get_tracked_metrics() current_usage = 0 for metric in metrics: if metric.counter == MetricsCounter.TOTAL_MEM_USAGE: current_usage += metric.value elif metric.counter == MetricsCounter.SWAP_MEM_USAGE: current_usage += metric.value if current_usage > conf.get_agent_memory_quota(): raise AgentMemoryExceededException("The agent memory limit {0} bytes exceeded. The current reported usage is {1} bytes.".format(conf.get_agent_memory_quota(), current_usage)) @staticmethod def _get_parent(pid): """ Returns the parent of the given process. If the parent cannot be determined returns 0 (which is the PID for the scheduler) """ try: stat = '/proc/{0}/stat'.format(pid) if os.path.exists(stat): with open(stat, "r") as stat_file: return int(stat_file.read().split()[3]) except Exception: pass return 0 def start_tracking_unit_cgroups(self, unit_name): """ TODO: Start tracking Memory Cgroups """ try: cpu_cgroup_path, memory_cgroup_path = self._cgroups_api.get_unit_cgroup_paths(unit_name) if cpu_cgroup_path is None: logger.info("The CPU controller is not mounted; will not track resource usage") else: CGroupsTelemetry.track_cgroup(CpuCgroup(unit_name, cpu_cgroup_path)) if memory_cgroup_path is None: logger.info("The Memory controller is not mounted; will not track resource usage") else: CGroupsTelemetry.track_cgroup(MemoryCgroup(unit_name, memory_cgroup_path)) except Exception as exception: logger.info("Failed to start tracking resource usage for the extension: {0}", ustr(exception)) def stop_tracking_unit_cgroups(self, unit_name): """ TODO: remove Memory cgroups from tracked list. 
""" try: cpu_cgroup_path, memory_cgroup_path = self._cgroups_api.get_unit_cgroup_paths(unit_name) if cpu_cgroup_path is not None: CGroupsTelemetry.stop_tracking(CpuCgroup(unit_name, cpu_cgroup_path)) if memory_cgroup_path is not None: CGroupsTelemetry.stop_tracking(MemoryCgroup(unit_name, memory_cgroup_path)) except Exception as exception: logger.info("Failed to stop tracking resource usage for the extension service: {0}", ustr(exception)) def stop_tracking_extension_cgroups(self, extension_name): """ TODO: remove extension Memory cgroups from tracked list """ try: extension_slice_name = SystemdCgroupsApi.get_extension_slice_name(extension_name) cgroup_relative_path = os.path.join(_AZURE_VMEXTENSIONS_SLICE, extension_slice_name) cpu_cgroup_mountpoint, memory_cgroup_mountpoint = self._cgroups_api.get_cgroup_mount_points() cpu_cgroup_path = os.path.join(cpu_cgroup_mountpoint, cgroup_relative_path) memory_cgroup_path = os.path.join(memory_cgroup_mountpoint, cgroup_relative_path) if cpu_cgroup_path is not None: CGroupsTelemetry.stop_tracking(CpuCgroup(extension_name, cpu_cgroup_path)) if memory_cgroup_path is not None: CGroupsTelemetry.stop_tracking(MemoryCgroup(extension_name, memory_cgroup_path)) except Exception as exception: logger.info("Failed to stop tracking resource usage for the extension service: {0}", ustr(exception)) def start_extension_command(self, extension_name, command, cmd_name, timeout, shell, cwd, env, stdout, stderr, error_code=ExtensionErrorCodes.PluginUnknownFailure): """ Starts a command (install/enable/etc) for an extension and adds the command's PID to the extension's cgroup :param extension_name: The extension executing the command :param command: The command to invoke :param cmd_name: The type of the command(enable, install, etc.) :param timeout: Number of seconds to wait for command completion :param cwd: The working directory for the command :param env: The environment to pass to the command's process :param stdout: File object to redirect stdout to :param stderr: File object to redirect stderr to :param stderr: File object to redirect stderr to :param error_code: Extension error code to raise in case of error """ if self.enabled(): try: return self._cgroups_api.start_extension_command(extension_name, command, cmd_name, timeout, shell=shell, cwd=cwd, env=env, stdout=stdout, stderr=stderr, error_code=error_code) except SystemdRunError as exception: reason = 'Failed to start {0} using systemd-run, will try invoking the extension directly. Error: {1}'.format( extension_name, ustr(exception)) self.disable(reason, DisableCgroups.ALL) # fall-through and re-invoke the extension # subprocess-popen-preexec-fn Disabled: code is not multi-threaded process = subprocess.Popen(command, shell=shell, cwd=cwd, env=env, stdout=stdout, stderr=stderr, preexec_fn=os.setsid) # pylint: disable=W1509 return handle_process_completion(process=process, command=command, timeout=timeout, stdout=stdout, stderr=stderr, error_code=error_code) def __reset_extension_cpu_quota(self, extension_name): """ Removes any CPUQuota on the extension NOTE: This resets the quota on the extension's slice; any local overrides on the VM will take precedence over this setting. """ if self.enabled(): self.setup_extension_slice(extension_name, cpu_quota=None) def setup_extension_slice(self, extension_name, cpu_quota): """ Each extension runs under its own slice (Ex "Microsoft.CPlat.Extension.slice"). All the slices for extensions are grouped under "azure-vmextensions.slice. 
This method ensures that the extension slice is created. Setup should create under /lib/systemd/system if it is not exist. TODO: set memory quotas """ if self.enabled(): unit_file_install_path = systemd.get_unit_file_install_path() extension_slice_path = os.path.join(unit_file_install_path, SystemdCgroupsApi.get_extension_slice_name(extension_name)) try: cpu_quota = str(cpu_quota) + "%" if cpu_quota is not None else "" # setting an empty value resets to the default (infinity) if cpu_quota == "": _log_cgroup_info("CPUQuota not set for {0}", extension_name) else: _log_cgroup_info("Ensuring the {0}'s CPUQuota is {1}", extension_name, cpu_quota) slice_contents = _EXTENSION_SLICE_CONTENTS.format(extension_name=extension_name, cpu_quota=cpu_quota) CGroupConfigurator._Impl.__create_unit_file(extension_slice_path, slice_contents) except Exception as exception: _log_cgroup_warning("Failed to set the extension {0} slice and quotas: {1}", extension_name, ustr(exception)) CGroupConfigurator._Impl.__cleanup_unit_file(extension_slice_path) def remove_extension_slice(self, extension_name): """ This method ensures that the extension slice gets removed from /lib/systemd/system if it exist Lastly stop the unit. This would ensure the cleanup the /sys/fs/cgroup controller paths """ if self.enabled(): unit_file_install_path = systemd.get_unit_file_install_path() extension_slice_name = SystemdCgroupsApi.get_extension_slice_name(extension_name) extension_slice_path = os.path.join(unit_file_install_path, extension_slice_name) if os.path.exists(extension_slice_path): self.stop_tracking_extension_cgroups(extension_name) CGroupConfigurator._Impl.__cleanup_unit_file(extension_slice_path) def set_extension_services_cpu_memory_quota(self, services_list): """ Each extension service will have name, systemd path and it's quotas. This method ensures that drop-in files are created under service.d folder if quotas given. ex: /lib/systemd/system/extension.service.d/11-CPUAccounting.conf TODO: set memory quotas """ if self.enabled() and services_list is not None: for service in services_list: service_name = service.get('name', None) unit_file_path = systemd.get_unit_file_install_path() if service_name is not None and unit_file_path is not None: files_to_create = [] drop_in_path = os.path.join(unit_file_path, "{0}.d".format(service_name)) drop_in_file_cpu_accounting = os.path.join(drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING) files_to_create.append((drop_in_file_cpu_accounting, _DROP_IN_FILE_CPU_ACCOUNTING_CONTENTS)) drop_in_file_memory_accounting = os.path.join(drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING) files_to_create.append( (drop_in_file_memory_accounting, _DROP_IN_FILE_MEMORY_ACCOUNTING_CONTENTS)) cpu_quota = service.get('cpuQuotaPercentage', None) if cpu_quota is not None: cpu_quota = str(cpu_quota) + "%" _log_cgroup_info("Ensuring the {0}'s CPUQuota is {1}", service_name, cpu_quota) drop_in_file_cpu_quota = os.path.join(drop_in_path, _DROP_IN_FILE_CPU_QUOTA) cpu_quota_contents = _DROP_IN_FILE_CPU_QUOTA_CONTENTS_FORMAT.format(cpu_quota) files_to_create.append((drop_in_file_cpu_quota, cpu_quota_contents)) self.__create_all_files(files_to_create) self.__reload_systemd_config() def __reset_extension_services_cpu_quota(self, services_list): """ Removes any CPUQuota on the extension service NOTE: This resets the quota on the extension service's default dropin file; any local overrides on the VM will take precedence over this setting. 
""" if self.enabled() and services_list is not None: service_name = None try: for service in services_list: service_name = service.get('name', None) unit_file_path = systemd.get_unit_file_install_path() if service_name is not None and unit_file_path is not None: files_to_create = [] drop_in_path = os.path.join(unit_file_path, "{0}.d".format(service_name)) cpu_quota = "" # setting an empty value resets to the default (infinity) drop_in_file_cpu_quota = os.path.join(drop_in_path, _DROP_IN_FILE_CPU_QUOTA) cpu_quota_contents = _DROP_IN_FILE_CPU_QUOTA_CONTENTS_FORMAT.format(cpu_quota) if os.path.exists(drop_in_file_cpu_quota): with open(drop_in_file_cpu_quota, "r") as file_: if file_.read() == cpu_quota_contents: return files_to_create.append((drop_in_file_cpu_quota, cpu_quota_contents)) self.__create_all_files(files_to_create) except Exception as exception: _log_cgroup_warning('Failed to reset CPUQuota for {0} : {1}', service_name, ustr(exception)) def remove_extension_services_drop_in_files(self, services_list): """ Remove the dropin files from service .d folder for the given service """ if services_list is not None: for service in services_list: service_name = service.get('name', None) unit_file_path = systemd.get_unit_file_install_path() if service_name is not None and unit_file_path is not None: files_to_cleanup = [] drop_in_path = os.path.join(unit_file_path, "{0}.d".format(service_name)) drop_in_file_cpu_accounting = os.path.join(drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING) files_to_cleanup.append(drop_in_file_cpu_accounting) drop_in_file_memory_accounting = os.path.join(drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING) files_to_cleanup.append(drop_in_file_memory_accounting) cpu_quota = service.get('cpuQuotaPercentage', None) if cpu_quota is not None: drop_in_file_cpu_quota = os.path.join(drop_in_path, _DROP_IN_FILE_CPU_QUOTA) files_to_cleanup.append(drop_in_file_cpu_quota) CGroupConfigurator._Impl.__cleanup_all_files(files_to_cleanup) _log_cgroup_info("Drop in files removed for {0}".format(service_name)) def stop_tracking_extension_services_cgroups(self, services_list): """ Remove the cgroup entry from the tracked groups to stop tracking. """ if self.enabled() and services_list is not None: for service in services_list: service_name = service.get('name', None) if service_name is not None: self.stop_tracking_unit_cgroups(service_name) def start_tracking_extension_services_cgroups(self, services_list): """ Add the cgroup entry to start tracking the services cgroups. """ if self.enabled() and services_list is not None: for service in services_list: service_name = service.get('name', None) if service_name is not None: self.start_tracking_unit_cgroups(service_name) @staticmethod def get_extension_services_list(): """ ResourceLimits for extensions are coming from /HandlerManifest.json file. Use this pattern to determine all the installed extension HandlerManifest files and read the extension services if ResourceLimits are present. 
""" extensions_services = {} for manifest_path in glob.iglob(os.path.join(conf.get_lib_dir(), "*/HandlerManifest.json")): match = re.search("(?P[\\w+\\.-]+).HandlerManifest\\.json", manifest_path) if match is not None: extensions_name = match.group('extname') if not extensions_name.startswith('WALinuxAgent'): try: data = json.loads(fileutil.read_file(manifest_path)) resource_limits = data[0].get('resourceLimits', None) services = resource_limits.get('services') if resource_limits else None extensions_services[extensions_name] = services except (IOError, OSError) as e: _log_cgroup_warning( 'Failed to load manifest file ({0}): {1}'.format(manifest_path, e.strerror)) except ValueError: _log_cgroup_warning('Malformed manifest file ({0}).'.format(manifest_path)) return extensions_services # unique instance for the singleton _instance = None @staticmethod def get_instance(): if CGroupConfigurator._instance is None: CGroupConfigurator._instance = CGroupConfigurator._Impl() return CGroupConfigurator._instance Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/cgroupstelemetry.py000066400000000000000000000075201462617747000253370ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import errno import threading from azurelinuxagent.common import logger from azurelinuxagent.ga.cgroup import CpuCgroup from azurelinuxagent.common.future import ustr class CGroupsTelemetry(object): """ """ _tracked = {} _track_throttled_time = False _rlock = threading.RLock() @staticmethod def set_track_throttled_time(value): CGroupsTelemetry._track_throttled_time = value @staticmethod def get_track_throttled_time(): return CGroupsTelemetry._track_throttled_time @staticmethod def track_cgroup(cgroup): """ Adds the given item to the dictionary of tracked cgroups """ if isinstance(cgroup, CpuCgroup): # set the current cpu usage cgroup.initialize_cpu_usage() with CGroupsTelemetry._rlock: if not CGroupsTelemetry.is_tracked(cgroup.path): CGroupsTelemetry._tracked[cgroup.path] = cgroup logger.info("Started tracking cgroup {0}", cgroup) @staticmethod def is_tracked(path): """ Returns true if the given item is in the list of tracked items O(1) operation. """ with CGroupsTelemetry._rlock: if path in CGroupsTelemetry._tracked: return True return False @staticmethod def stop_tracking(cgroup): """ Stop tracking the cgroups for the given path """ with CGroupsTelemetry._rlock: if cgroup.path in CGroupsTelemetry._tracked: CGroupsTelemetry._tracked.pop(cgroup.path) logger.info("Stopped tracking cgroup {0}", cgroup) @staticmethod def poll_all_tracked(): metrics = [] inactive_cgroups = [] with CGroupsTelemetry._rlock: for cgroup in CGroupsTelemetry._tracked.values(): try: metrics.extend(cgroup.get_tracked_metrics(track_throttled_time=CGroupsTelemetry._track_throttled_time)) except Exception as e: # There can be scenarios when the CGroup has been deleted by the time we are fetching the values # from it. 
This would raise IOError with file entry not found (ERRNO: 2). We do not want to log # every occurrences of such case as it would be very verbose. We do want to log all the other # exceptions which could occur, which is why we do a periodic log for all the other errors. if not isinstance(e, (IOError, OSError)) or e.errno != errno.ENOENT: # pylint: disable=E1101 logger.periodic_warn(logger.EVERY_HOUR, '[PERIODIC] Could not collect metrics for cgroup ' '{0}. Error : {1}'.format(cgroup.name, ustr(e))) if not cgroup.is_active(): inactive_cgroups.append(cgroup) for inactive_cgroup in inactive_cgroups: CGroupsTelemetry.stop_tracking(inactive_cgroup) return metrics @staticmethod def reset(): with CGroupsTelemetry._rlock: CGroupsTelemetry._tracked.clear() # emptying the dictionary CGroupsTelemetry._track_throttled_time = False Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/collect_logs.py000066400000000000000000000337341462617747000244010ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import datetime import os import sys import threading import time from azurelinuxagent.ga import logcollector, cgroupconfigurator import azurelinuxagent.common.conf as conf from azurelinuxagent.common import logger from azurelinuxagent.ga.cgroup import MetricsCounter from azurelinuxagent.common.event import elapsed_milliseconds, add_event, WALAEventOperation, report_metric from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.interfaces import ThreadHandlerInterface from azurelinuxagent.ga.logcollector import COMPRESSED_ARCHIVE_PATH, GRACEFUL_KILL_ERRCODE from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator, LOGCOLLECTOR_MEMORY_LIMIT from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.utils.shellutil import CommandError from azurelinuxagent.common.version import PY_VERSION_MAJOR, PY_VERSION_MINOR, AGENT_NAME, CURRENT_VERSION _INITIAL_LOG_COLLECTION_DELAY = 5 * 60 # Five minutes of delay def get_collect_logs_handler(): return CollectLogsHandler() def is_log_collection_allowed(): # There are three conditions that need to be met in order to allow periodic log collection: # 1) It should be enabled in the configuration. # 2) The system must be using cgroups to manage services. Needed for resource limiting of the log collection. # 3) The python version must be greater than 2.6 in order to support the ZipFile library used when collecting. conf_enabled = conf.get_collect_logs() cgroups_enabled = CGroupConfigurator.get_instance().enabled() supported_python = PY_VERSION_MINOR >= 6 if PY_VERSION_MAJOR == 2 else PY_VERSION_MAJOR == 3 is_allowed = conf_enabled and cgroups_enabled and supported_python msg = "Checking if log collection is allowed at this time [{0}]. 
All three conditions must be met: " \ "configuration enabled [{1}], cgroups enabled [{2}], python supported: [{3}]".format(is_allowed, conf_enabled, cgroups_enabled, supported_python) logger.info(msg) add_event( name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.LogCollection, is_success=is_allowed, message=msg, log_event=False) return is_allowed class CollectLogsHandler(ThreadHandlerInterface): """ Periodically collects and uploads logs from the VM to the host. """ _THREAD_NAME = "CollectLogsHandler" __CGROUPS_FLAG_ENV_VARIABLE = "_AZURE_GUEST_AGENT_LOG_COLLECTOR_MONITOR_CGROUPS_" @staticmethod def get_thread_name(): return CollectLogsHandler._THREAD_NAME @staticmethod def enable_monitor_cgroups_check(): os.environ[CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE] = "1" @staticmethod def disable_monitor_cgroups_check(): if CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE in os.environ: del os.environ[CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE] @staticmethod def is_enabled_monitor_cgroups_check(): if CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE in os.environ: return os.environ[CollectLogsHandler.__CGROUPS_FLAG_ENV_VARIABLE] == "1" return False def __init__(self): self.protocol = None self.protocol_util = None self.event_thread = None self.should_run = True self.last_state = None self.period = conf.get_collect_logs_period() def run(self): self.start() def keep_alive(self): return self.should_run def is_alive(self): return self.event_thread.is_alive() def start(self): self.event_thread = threading.Thread(target=self.daemon) self.event_thread.setDaemon(True) self.event_thread.setName(self.get_thread_name()) self.event_thread.start() def join(self): self.event_thread.join() def stopped(self): return not self.should_run def stop(self): self.should_run = False if self.is_alive(): try: self.join() except RuntimeError: pass def init_protocols(self): # The initialization of ProtocolUtil for the log collection thread should be done within the thread itself # rather than initializing it in the ExtHandler thread. This is done to avoid any concurrency issues as each # thread would now have its own ProtocolUtil object as per the SingletonPerThread model. self.protocol_util = get_protocol_util() self.protocol = self.protocol_util.get_protocol() def daemon(self): # Delay the first collector on start up to give short lived VMs (that might be dead before the second # collection has a chance to run) an opportunity to do produce meaningful logs to collect. time.sleep(_INITIAL_LOG_COLLECTION_DELAY) try: CollectLogsHandler.enable_monitor_cgroups_check() if self.protocol_util is None or self.protocol is None: self.init_protocols() while not self.stopped(): try: self.collect_and_send_logs() except Exception as e: logger.error("An error occurred in the log collection thread main loop; " "will skip the current iteration.\n{0}", ustr(e)) finally: time.sleep(self.period) except Exception as e: logger.error("An error occurred in the log collection thread; will exit the thread.\n{0}", ustr(e)) finally: CollectLogsHandler.disable_monitor_cgroups_check() def collect_and_send_logs(self): if self._collect_logs(): self._send_logs() def _collect_logs(self): logger.info("Starting log collection...") # Invoke the command line tool in the agent to collect logs, with resource limits on CPU. # Some distros like ubuntu20 by default cpu and memory accounting enabled. 
Thus create nested cgroups under the logcollector slice # So disabling CPU and Memory accounting prevents from creating nested cgroups, so that all the counters will be present in logcollector Cgroup systemd_cmd = [ "systemd-run", "--property=CPUAccounting=no", "--property=MemoryAccounting=no", "--unit={0}".format(logcollector.CGROUPS_UNIT), "--slice={0}".format(cgroupconfigurator.LOGCOLLECTOR_SLICE), "--scope" ] # The log tool is invoked from the current agent's egg with the command line option collect_logs_cmd = [sys.executable, "-u", sys.argv[0], "-collect-logs"] final_command = systemd_cmd + collect_logs_cmd def exec_command(): start_time = datetime.datetime.utcnow() success = False msg = None try: shellutil.run_command(final_command, log_error=False) duration = elapsed_milliseconds(start_time) archive_size = os.path.getsize(COMPRESSED_ARCHIVE_PATH) msg = "Successfully collected logs. Archive size: {0} b, elapsed time: {1} ms.".format(archive_size, duration) logger.info(msg) success = True return True except Exception as e: duration = elapsed_milliseconds(start_time) err_msg = ustr(e) if isinstance(e, CommandError): # pylint has limited (i.e. no) awareness of control flow w.r.t. typing. we disable=no-member # here because we know e must be a CommandError but pylint still considers the case where # e is a different type of exception. err_msg = ustr("Log Collector exited with code {0}").format( e.returncode) # pylint: disable=no-member if e.returncode == logcollector.INVALID_CGROUPS_ERRCODE: # pylint: disable=no-member logger.info("Disabling periodic log collection until service restart due to process error.") self.stop() # When the log collector memory limit is exceeded, Agent gracefully exit the process with this error code. # Stop the periodic operation because it seems to be persistent. elif e.returncode == logcollector.GRACEFUL_KILL_ERRCODE: # pylint: disable=no-member logger.info("Disabling periodic log collection until service restart due to exceeded process memory limit.") self.stop() else: logger.info(err_msg) msg = "Failed to collect logs. Elapsed time: {0} ms. Error: {1}".format(duration, err_msg) # No need to log to the local log since we logged stdout, stderr from the process. return False finally: add_event( name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.LogCollection, is_success=success, message=msg, log_event=False) return exec_command() def _send_logs(self): msg = None success = False try: with open(COMPRESSED_ARCHIVE_PATH, "rb") as fh: archive_content = fh.read() self.protocol.upload_logs(archive_content) msg = "Successfully uploaded logs." logger.info(msg) success = True except Exception as e: msg = "Failed to upload logs. Error: {0}".format(ustr(e)) logger.warn(msg) finally: add_event( name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.LogCollection, is_success=success, message=msg, log_event=False) def get_log_collector_monitor_handler(cgroups): return LogCollectorMonitorHandler(cgroups) class LogCollectorMonitorHandler(ThreadHandlerInterface): """ Periodically monitor and checks the Log collector Cgroups and sends telemetry to Kusto. """ _THREAD_NAME = "LogCollectorMonitorHandler" @staticmethod def get_thread_name(): return LogCollectorMonitorHandler._THREAD_NAME def __init__(self, cgroups): self.event_thread = None self.should_run = True self.period = 2 # Log collector monitor runs every 2 secs. 
self.cgroups = cgroups self.__log_metrics = conf.get_cgroup_log_metrics() def run(self): self.start() def stop(self): self.should_run = False if self.is_alive(): self.join() def join(self): self.event_thread.join() def stopped(self): return not self.should_run def is_alive(self): return self.event_thread is not None and self.event_thread.is_alive() def start(self): self.event_thread = threading.Thread(target=self.daemon) self.event_thread.setDaemon(True) self.event_thread.setName(self.get_thread_name()) self.event_thread.start() def daemon(self): try: while not self.stopped(): try: metrics = self._poll_resource_usage() self._send_telemetry(metrics) self._verify_memory_limit(metrics) except Exception as e: logger.error("An error occurred in the log collection monitor thread loop; " "will skip the current iteration.\n{0}", ustr(e)) finally: time.sleep(self.period) except Exception as e: logger.error( "An error occurred in the MonitorLogCollectorCgroupsHandler thread; will exit the thread.\n{0}", ustr(e)) def _poll_resource_usage(self): metrics = [] for cgroup in self.cgroups: metrics.extend(cgroup.get_tracked_metrics(track_throttled_time=True)) return metrics def _send_telemetry(self, metrics): for metric in metrics: report_metric(metric.category, metric.counter, metric.instance, metric.value, log_event=self.__log_metrics) def _verify_memory_limit(self, metrics): current_usage = 0 for metric in metrics: if metric.counter == MetricsCounter.TOTAL_MEM_USAGE: current_usage += metric.value elif metric.counter == MetricsCounter.SWAP_MEM_USAGE: current_usage += metric.value if current_usage > LOGCOLLECTOR_MEMORY_LIMIT: msg = "Log collector memory limit {0} bytes exceeded. The max reported usage is {1} bytes.".format(LOGCOLLECTOR_MEMORY_LIMIT, current_usage) logger.info(msg) add_event( name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.LogCollection, message=msg) os._exit(GRACEFUL_KILL_ERRCODE) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/collect_telemetry_events.py000066400000000000000000000700641462617747000270300ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import datetime import json import os import re import threading from collections import defaultdict import azurelinuxagent.common.logger as logger from azurelinuxagent.common import conf from azurelinuxagent.common.agent_supported_feature import get_supported_feature_by_name, SupportedFeatureNames from azurelinuxagent.common.event import EVENTS_DIRECTORY, TELEMETRY_LOG_EVENT_ID, \ TELEMETRY_LOG_PROVIDER_ID, add_event, WALAEventOperation, add_log_event, get_event_logger, \ CollectOrReportEventDebugInfo, EVENT_FILE_REGEX, parse_event from azurelinuxagent.common.exception import InvalidExtensionEventError, ServiceStoppedError from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.interfaces import ThreadHandlerInterface from azurelinuxagent.common.telemetryevent import TelemetryEvent, TelemetryEventParam, \ GuestAgentGenericLogsSchema, GuestAgentExtensionEventsSchema from azurelinuxagent.common.utils import textutil from azurelinuxagent.ga.exthandlers import HANDLER_NAME_PATTERN from azurelinuxagent.ga.periodic_operation import PeriodicOperation def get_collect_telemetry_events_handler(send_telemetry_events_handler): return CollectTelemetryEventsHandler(send_telemetry_events_handler) class ExtensionEventSchema(object): """ Class for defining the schema for Extension Events. Sample Extension Event Example: { "Version":"1.0.0.23", "Timestamp":"2018-01-02T22:08:12.510696Z" //(time in UTC (ISO-8601 standard), "TaskName":"TestRun" //Open for publishers, "EventLevel":"Critical/Error/Warning/Verbose/Informational/LogAlways", "Message": "Successful test" //(max 3K, 3072 characters), "EventPid":"1", "EventTid":"2", "OperationId":"Guid (str)" } From next version(2.10+) we accept integer values for EventPid and EventTid fields. But we still support string type for backward compatability """ Version = "Version" Timestamp = "Timestamp" TaskName = "TaskName" EventLevel = "EventLevel" Message = "Message" EventPid = "EventPid" EventTid = "EventTid" OperationId = "OperationId" class _ProcessExtensionEvents(PeriodicOperation): """ Periodic operation for collecting extension telemetry events and enqueueing them for the SendTelemetryHandler thread. """ _EXTENSION_EVENT_COLLECTION_PERIOD = datetime.timedelta(seconds=conf.get_etp_collection_period()) _EXTENSION_EVENT_FILE_NAME_REGEX = re.compile(r"^(\d+)\.json$", re.IGNORECASE) # Limits _MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD = 360 _EXTENSION_EVENT_FILE_MAX_SIZE = 4 * 1024 * 1024 # 4 MB = 4 * 1,048,576 Bytes _EXTENSION_EVENT_MAX_SIZE = 1024 * 6 # 6Kb or 6144 characters. Limit for the whole event. Prevent oversized events. _EXTENSION_EVENT_MAX_MSG_LEN = 1024 * 3 # 3Kb or 3072 chars. 
_EXTENSION_EVENT_REQUIRED_FIELDS = [attr.lower() for attr in dir(ExtensionEventSchema) if not callable(getattr(ExtensionEventSchema, attr)) and not attr.startswith("__")] def __init__(self, send_telemetry_events_handler): super(_ProcessExtensionEvents, self).__init__(_ProcessExtensionEvents._EXTENSION_EVENT_COLLECTION_PERIOD) self._send_telemetry_events_handler = send_telemetry_events_handler def _operation(self): if self._send_telemetry_events_handler.stopped(): logger.warn("{0} service is not running, skipping current iteration".format( self._send_telemetry_events_handler.get_thread_name())) return delete_all_event_files = True extension_handler_with_event_dirs = [] try: extension_handler_with_event_dirs = self._get_extension_events_dir_with_handler_name(conf.get_ext_log_dir()) if not extension_handler_with_event_dirs: logger.verbose("No Extension events directory exist") return for extension_handler_with_event_dir in extension_handler_with_event_dirs: handler_name = extension_handler_with_event_dir[0] handler_event_dir_path = extension_handler_with_event_dir[1] self._capture_extension_events(handler_name, handler_event_dir_path) except ServiceStoppedError: # Since the service stopped, we should not delete the extension files and retry sending them whenever # the telemetry service comes back up delete_all_event_files = False except Exception as error: msg = "Unknown error occurred when trying to collect extension events:{0}".format( textutil.format_exception(error)) add_event(op=WALAEventOperation.ExtensionTelemetryEventProcessing, message=msg, is_success=False) finally: # Always ensure that the events directory are being deleted each run except when Telemetry Service is stopped, # even if we run into an error and dont process them this run. if delete_all_event_files: self._ensure_all_events_directories_empty(extension_handler_with_event_dirs) @staticmethod def _get_extension_events_dir_with_handler_name(extension_log_dir): """ Get the full path to events directory for all extension handlers that have one :param extension_log_dir: Base log directory for all extensions :return: A list of full paths of existing events directory for all handlers """ extension_handler_with_event_dirs = [] for ext_handler_name in os.listdir(extension_log_dir): # Check if its an Extension directory if not os.path.isdir(os.path.join(extension_log_dir, ext_handler_name)) \ or re.match(HANDLER_NAME_PATTERN, ext_handler_name) is None: continue # Check if EVENTS_DIRECTORY directory exists extension_event_dir = os.path.join(extension_log_dir, ext_handler_name, EVENTS_DIRECTORY) if os.path.exists(extension_event_dir): extension_handler_with_event_dirs.append((ext_handler_name, extension_event_dir)) return extension_handler_with_event_dirs def _event_file_size_allowed(self, event_file_path): event_file_size = os.stat(event_file_path).st_size if event_file_size > self._EXTENSION_EVENT_FILE_MAX_SIZE: convert_to_mb = lambda x: (1.0 * x) / (1000 * 1000) msg = "Skipping file: {0} as its size is {1:.2f} Mb > Max size allowed {2:.1f} Mb".format( event_file_path, convert_to_mb(event_file_size), convert_to_mb(self._EXTENSION_EVENT_FILE_MAX_SIZE)) logger.warn(msg) add_log_event(level=logger.LogLevel.WARNING, message=msg, forced=True) return False return True def _capture_extension_events(self, handler_name, handler_event_dir_path): """ Capture Extension events and add them to the events_list :param handler_name: Complete Handler Name. Eg: Microsoft.CPlat.Core.RunCommandLinux :param handler_event_dir_path: Full path. 
Eg: '/var/log/azure/Microsoft.CPlat.Core.RunCommandLinux/events' """ # Filter out the files that do not follow the pre-defined EXTENSION_EVENT_FILE_NAME_REGEX event_files = [event_file for event_file in os.listdir(handler_event_dir_path) if re.match(self._EXTENSION_EVENT_FILE_NAME_REGEX, event_file) is not None] # Pick the latest files first, we'll discard older events if len(events) > MAX_EVENT_COUNT event_files.sort(reverse=True) captured_extension_events_count = 0 dropped_events_with_error_count = defaultdict(int) try: for event_file in event_files: event_file_path = os.path.join(handler_event_dir_path, event_file) try: logger.verbose("Processing event file: {0}", event_file_path) if not self._event_file_size_allowed(event_file_path): continue # We support multiple events in a file, read the file and parse events. captured_extension_events_count = self._enqueue_events_and_get_count(handler_name, event_file_path, captured_extension_events_count, dropped_events_with_error_count) # We only allow MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD=300 maximum events per period per handler if captured_extension_events_count >= self._MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD: msg = "Reached max count for the extension: {0}; Max Limit: {1}. Skipping the rest.".format( handler_name, self._MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD) logger.warn(msg) add_log_event(level=logger.LogLevel.WARNING, message=msg, forced=True) break except ServiceStoppedError: # Not logging here as already logged once, re-raising # Since we already started processing this file, deleting it as we could've already sent some events out # This is a trade-off between data replication vs data loss. raise except Exception as error: msg = "Failed to process event file {0}:{1}".format(event_file, textutil.format_exception(error)) logger.warn(msg) add_log_event(level=logger.LogLevel.WARNING, message=msg, forced=True) finally: # Todo: We should delete files after ensuring that we sent the data to Wireserver successfully # from our end rather than deleting first and sending later. This is to ensure the data reliability # of the agent telemetry pipeline. os.remove(event_file_path) finally: if dropped_events_with_error_count: msg = "Dropped events for Extension: {0}; Details:\n\t{1}".format(handler_name, '\n\t'.join( ["Reason: {0}; Dropped Count: {1}".format(k, v) for k, v in dropped_events_with_error_count.items()])) logger.warn(msg) add_log_event(level=logger.LogLevel.WARNING, message=msg, forced=True) if captured_extension_events_count > 0: logger.info("Collected {0} events for extension: {1}".format(captured_extension_events_count, handler_name)) @staticmethod def _ensure_all_events_directories_empty(extension_events_directories): if not extension_events_directories: return for extension_handler_with_event_dir in extension_events_directories: event_dir_path = extension_handler_with_event_dir[1] if not os.path.exists(event_dir_path): return log_err = True # Delete any residue files in the events directory for residue_file in os.listdir(event_dir_path): try: os.remove(os.path.join(event_dir_path, residue_file)) except Exception as error: # Only log the first error once per handler per run to keep the logfile clean if log_err: logger.error("Failed to completely clear the {0} directory. 
Exception: {1}", event_dir_path, ustr(error)) log_err = False def _enqueue_events_and_get_count(self, handler_name, event_file_path, captured_events_count, dropped_events_with_error_count): event_file_time = datetime.datetime.fromtimestamp(os.path.getmtime(event_file_path)) # Read event file and decode it properly with open(event_file_path, "rb") as event_file_descriptor: event_data = event_file_descriptor.read().decode("utf-8") # Parse the string and get the list of events events = json.loads(event_data) # We allow multiple events in a file but there can be an instance where the file only has a single # JSON event and not a list. Handling that condition too if not isinstance(events, list): events = [events] for event in events: try: self._send_telemetry_events_handler.enqueue_event( self._parse_telemetry_event(handler_name, event, event_file_time) ) captured_events_count += 1 except InvalidExtensionEventError as invalid_error: # These are the errors thrown if there's an error parsing the event. We want to report these back to the # extension publishers so that they are aware of the issues. # The error messages are all static messages, we will use this to create a dict and emit an event at the # end of each run to notify if there were any errors parsing events for the extension dropped_events_with_error_count[ustr(invalid_error)] += 1 except ServiceStoppedError as stopped_error: logger.error( "Unable to enqueue events as service stopped: {0}. Stopping collecting extension events".format( ustr(stopped_error))) raise except Exception as error: logger.warn("Unable to parse and transmit event, error: {0}".format(error)) if captured_events_count >= self._MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD: break return captured_events_count def _parse_telemetry_event(self, handler_name, extension_unparsed_event, event_file_time): """ Parse the Json event file and convert it to TelemetryEvent object with the required data. :return: Complete TelemetryEvent with all required fields filled up properly. Raises if event breaches contract. """ extension_event = self._parse_event_and_ensure_it_is_valid(extension_unparsed_event) # Create a telemetry event, add all common parameters to the event # and then overwrite all the common params with extension events params if same event = TelemetryEvent(TELEMETRY_LOG_EVENT_ID, TELEMETRY_LOG_PROVIDER_ID) event.file_type = "json" CollectTelemetryEventsHandler.add_common_params_to_telemetry_event(event, event_file_time) replace_or_add_params = { GuestAgentGenericLogsSchema.EventName: "{0}-{1}".format(handler_name, extension_event[ ExtensionEventSchema.Version.lower()]), GuestAgentGenericLogsSchema.CapabilityUsed: extension_event[ExtensionEventSchema.EventLevel.lower()], GuestAgentGenericLogsSchema.TaskName: extension_event[ExtensionEventSchema.TaskName.lower()], GuestAgentGenericLogsSchema.Context1: extension_event[ExtensionEventSchema.Message.lower()], GuestAgentGenericLogsSchema.Context2: extension_event[ExtensionEventSchema.Timestamp.lower()], GuestAgentGenericLogsSchema.Context3: extension_event[ExtensionEventSchema.OperationId.lower()], GuestAgentGenericLogsSchema.EventPid: extension_event[ExtensionEventSchema.EventPid.lower()], GuestAgentGenericLogsSchema.EventTid: extension_event[ExtensionEventSchema.EventTid.lower()] } self._replace_or_add_param_in_event(event, replace_or_add_params) return event def _parse_event_and_ensure_it_is_valid(self, extension_event): """ Parse the Json event from file. 
Raise InvalidExtensionEventError if the event breaches pre-set contract. :param extension_event: The json event from file :return: Verified Json event that qualifies the contract. """ def _clean_value(k, v): if v is not None: if isinstance(v, int): if k.lower() in [ExtensionEventSchema.EventPid.lower(), ExtensionEventSchema.EventTid.lower()]: return str(v) return v.strip() return v event_size = 0 key_err_msg = "{0}: {1} not found" # Convert the dict to all lower keys to avoid schema confusion. # Only pick the params that we care about and skip the rest. event = dict((k.lower(), _clean_value(k, v)) for k, v in extension_event.items() if k.lower() in self._EXTENSION_EVENT_REQUIRED_FIELDS) # Trim message and only pick the first 3k chars message_key = ExtensionEventSchema.Message.lower() if message_key in event: event[message_key] = event[message_key][:self._EXTENSION_EVENT_MAX_MSG_LEN] else: raise InvalidExtensionEventError( key_err_msg.format(InvalidExtensionEventError.MissingKeyError, ExtensionEventSchema.Message)) if not event[message_key]: raise InvalidExtensionEventError( "{0}: {1} should not be empty".format(InvalidExtensionEventError.EmptyMessageError, ExtensionEventSchema.Message)) for required_key in self._EXTENSION_EVENT_REQUIRED_FIELDS: # If all required keys not in event then raise if required_key not in event: raise InvalidExtensionEventError( key_err_msg.format(InvalidExtensionEventError.MissingKeyError, required_key)) # If the event_size > _EXTENSION_EVENT_MAX_SIZE=6k, then raise if event_size > self._EXTENSION_EVENT_MAX_SIZE: raise InvalidExtensionEventError( "{0}: max event size allowed: {1}".format(InvalidExtensionEventError.OversizeEventError, self._EXTENSION_EVENT_MAX_SIZE)) event_size += len(event[required_key]) return event @staticmethod def _replace_or_add_param_in_event(event, replace_or_add_params): for param in event.parameters: if param.name in replace_or_add_params: param.value = replace_or_add_params.pop(param.name) if not replace_or_add_params: # All values replaced, return return # Add the remaining params to the event for param_name in replace_or_add_params: event.parameters.append(TelemetryEventParam(param_name, replace_or_add_params[param_name])) class _CollectAndEnqueueEvents(PeriodicOperation): """ Periodic operation to collect telemetry events located in the events folder and enqueue them for the SendTelemetryHandler thread. """ _EVENT_COLLECTION_PERIOD = datetime.timedelta(minutes=1) def __init__(self, send_telemetry_events_handler): super(_CollectAndEnqueueEvents, self).__init__(_CollectAndEnqueueEvents._EVENT_COLLECTION_PERIOD) self._send_telemetry_events_handler = send_telemetry_events_handler def _operation(self): """ Periodically send any events located in the events folder """ try: if self._send_telemetry_events_handler.stopped(): logger.warn("{0} service is not running, skipping iteration.".format( self._send_telemetry_events_handler.get_thread_name())) return self.process_events() except Exception as error: err_msg = "Failure in collecting telemetry events: {0}".format(ustr(error)) add_event(op=WALAEventOperation.UnhandledError, message=err_msg, is_success=False) def process_events(self): """ Returns a list of events that need to be sent to the telemetry pipeline and deletes the corresponding files from the events directory. 
""" event_directory_full_path = os.path.join(conf.get_lib_dir(), EVENTS_DIRECTORY) event_files = os.listdir(event_directory_full_path) debug_info = CollectOrReportEventDebugInfo(operation=CollectOrReportEventDebugInfo.OP_COLLECT) for event_file in event_files: try: match = EVENT_FILE_REGEX.search(event_file) if match is None: continue event_file_path = os.path.join(event_directory_full_path, event_file) try: logger.verbose("Processing event file: {0}", event_file_path) with open(event_file_path, "rb") as event_fd: event_data = event_fd.read().decode("utf-8") event = parse_event(event_data) # "legacy" events are events produced by previous versions of the agent (<= 2.2.46) and extensions; # they do not include all the telemetry fields, so we add them here is_legacy_event = match.group('agent_event') is None if is_legacy_event: # We'll use the file creation time for the event's timestamp event_file_creation_time_epoch = os.path.getmtime(event_file_path) event_file_creation_time = datetime.datetime.fromtimestamp(event_file_creation_time_epoch) if event.is_extension_event(): _CollectAndEnqueueEvents._trim_legacy_extension_event_parameters(event) CollectTelemetryEventsHandler.add_common_params_to_telemetry_event(event, event_file_creation_time) else: _CollectAndEnqueueEvents._update_legacy_agent_event(event, event_file_creation_time) self._send_telemetry_events_handler.enqueue_event(event) finally: # Todo: We should delete files after ensuring that we sent the data to Wireserver successfully # from our end rather than deleting first and sending later. This is to ensure the data reliability # of the agent telemetry pipeline. os.remove(event_file_path) except ServiceStoppedError as stopped_error: logger.error( "Unable to enqueue events as service stopped: {0}, skipping events collection".format( ustr(stopped_error))) except UnicodeError as uni_err: debug_info.update_unicode_error(uni_err) except Exception as error: debug_info.update_op_error(error) debug_info.report_debug_info() @staticmethod def _update_legacy_agent_event(event, event_creation_time): # Ensure that if an agent event is missing a field from the schema defined since 2.2.47, the missing fields # will be appended, ensuring the event schema is complete before the event is reported. new_event = TelemetryEvent() new_event.parameters = [] CollectTelemetryEventsHandler.add_common_params_to_telemetry_event(new_event, event_creation_time) event_params = dict([(param.name, param.value) for param in event.parameters]) new_event_params = dict([(param.name, param.value) for param in new_event.parameters]) missing_params = set(new_event_params.keys()).difference(set(event_params.keys())) params_to_add = [] for param_name in missing_params: params_to_add.append(TelemetryEventParam(param_name, new_event_params[param_name])) event.parameters.extend(params_to_add) @staticmethod def _trim_legacy_extension_event_parameters(event): """ This method is called for extension events before they are sent out. Per the agreement with extension publishers, the parameters that belong to extensions and will be reported intact are Name, Version, Operation, OperationSuccess, Message, and Duration. Since there is nothing preventing extensions to instantiate other fields (which belong to the agent), we call this method to ensure the rest of the parameters are trimmed since they will be replaced with values coming from the agent. :param event: Extension event to trim. :return: Trimmed extension event; containing only extension-specific parameters. 
""" params_to_keep = dict().fromkeys([ GuestAgentExtensionEventsSchema.Name, GuestAgentExtensionEventsSchema.Version, GuestAgentExtensionEventsSchema.Operation, GuestAgentExtensionEventsSchema.OperationSuccess, GuestAgentExtensionEventsSchema.Message, GuestAgentExtensionEventsSchema.Duration ]) trimmed_params = [] for param in event.parameters: if param.name in params_to_keep: trimmed_params.append(param) event.parameters = trimmed_params class CollectTelemetryEventsHandler(ThreadHandlerInterface): """ This Handler takes care of fetching the Extension Telemetry events from the {extension_events_dir} and sends it to Kusto for advanced debuggability. """ _THREAD_NAME = "TelemetryEventsCollector" def __init__(self, send_telemetry_events_handler): self.should_run = True self.thread = None self._send_telemetry_events_handler = send_telemetry_events_handler @staticmethod def get_thread_name(): return CollectTelemetryEventsHandler._THREAD_NAME def run(self): logger.info("Start Extension Telemetry service.") self.start() def is_alive(self): return self.thread is not None and self.thread.is_alive() def start(self): self.thread = threading.Thread(target=self.daemon) self.thread.setDaemon(True) self.thread.setName(CollectTelemetryEventsHandler.get_thread_name()) self.thread.start() def stop(self): """ Stop server communication and join the thread to main thread. """ self.should_run = False if self.is_alive(): self.thread.join() def stopped(self): return not self.should_run def daemon(self): periodic_operations = [ _CollectAndEnqueueEvents(self._send_telemetry_events_handler) ] is_etp_enabled = get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).is_supported logger.info("Extension Telemetry pipeline enabled: {0}".format(is_etp_enabled)) if is_etp_enabled: periodic_operations.append(_ProcessExtensionEvents(self._send_telemetry_events_handler)) logger.info("Successfully started the {0} thread".format(self.get_thread_name())) while not self.stopped(): try: for periodic_op in periodic_operations: periodic_op.run() except Exception as error: logger.warn( "An error occurred in the Telemetry Extension thread main loop; will skip the current iteration.\n{0}", ustr(error)) finally: PeriodicOperation.sleep_until_next_operation(periodic_operations) @staticmethod def add_common_params_to_telemetry_event(event, event_time): reporter = get_event_logger() reporter.add_common_event_parameters(event, event_time)Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/env.py000066400000000000000000000266631462617747000225230ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import datetime import re import os import socket import threading import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.dhcp import get_dhcp_handler from azurelinuxagent.common.event import add_periodic, WALAEventOperation, add_event from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.interfaces import ThreadHandlerInterface from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION from azurelinuxagent.ga.periodic_operation import PeriodicOperation CACHE_PATTERNS = [ re.compile("^(.*)\.(\d+)\.(agentsManifest)$", re.IGNORECASE), # pylint: disable=W1401 re.compile("^(.*)\.(\d+)\.(manifest\.xml)$", re.IGNORECASE), # pylint: disable=W1401 re.compile("^(.*)\.(\d+)\.(xml)$", re.IGNORECASE) # pylint: disable=W1401 ] MAXIMUM_CACHED_FILES = 50 def get_env_handler(): return EnvHandler() class RemovePersistentNetworkRules(PeriodicOperation): def __init__(self, osutil): super(RemovePersistentNetworkRules, self).__init__(conf.get_remove_persistent_net_rules_period()) self.osutil = osutil def _operation(self): self.osutil.remove_rules_files() class MonitorDhcpClientRestart(PeriodicOperation): def __init__(self, osutil): super(MonitorDhcpClientRestart, self).__init__(conf.get_monitor_dhcp_client_restart_period()) self.osutil = osutil self.dhcp_handler = get_dhcp_handler() self.dhcp_handler.conf_routes() self.dhcp_warning_enabled = True self.dhcp_id_list = [] def _operation(self): if len(self.dhcp_id_list) == 0: self.dhcp_id_list = self._get_dhcp_client_pid() return if all(self.osutil.check_pid_alive(pid) for pid in self.dhcp_id_list): return new_pid = self._get_dhcp_client_pid() if len(new_pid) != 0 and new_pid != self.dhcp_id_list: logger.info("EnvMonitor: Detected dhcp client restart. Restoring routing table.") self.dhcp_handler.conf_routes() self.dhcp_id_list = new_pid def _get_dhcp_client_pid(self): pid = [] try: # return a sorted list since handle_dhclient_restart needs to compare the previous value with # the new value and the comparison should not be affected by the order of the items in the list pid = sorted(self.osutil.get_dhcp_pid()) if len(pid) == 0 and self.dhcp_warning_enabled: logger.warn("Dhcp client is not running.") except Exception as exception: if self.dhcp_warning_enabled: logger.error("Failed to get the PID of the DHCP client: {0}", ustr(exception)) self.dhcp_warning_enabled = len(pid) != 0 return pid class EnableFirewall(PeriodicOperation): def __init__(self, osutil, protocol): super(EnableFirewall, self).__init__(conf.get_enable_firewall_period()) self._osutil = osutil self._protocol = protocol self._try_remove_legacy_firewall_rule = False self._is_first_setup = True self._reset_count = 0 self._report_after = datetime.datetime.min self._report_period = None # None indicates "report immediately" def _operation(self): # If the rules ever change we must reset all rules and start over again. # # There was a rule change at 2.2.26, which started dropping non-root traffic # to WireServer. The previous rules allowed traffic. Having both rules in # place negated the fix in 2.2.26. Removing only the legacy rule and keeping other rules intact. # # We only try to remove the legacy firewall rule once on service start (irrespective of its exit code). 
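    #
    # As a rough illustration only (the exact commands are built by the osutil layer and vary by
    # distro), the intended end state on an iptables-based system resembles:
    #
    #     iptables -t security -A OUTPUT -d 168.63.129.16 -p tcp --destination-port 53 -j ACCEPT
    #     iptables -t security -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    #     iptables -t security -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
    #
    # i.e., DNS and root-owned (agent) traffic to the WireServer endpoint is allowed, while new
    # connections from any other user are dropped.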
if not self._try_remove_legacy_firewall_rule: self._osutil.remove_legacy_firewall_rule(dst_ip=self._protocol.get_endpoint()) self._try_remove_legacy_firewall_rule = True success, missing_firewall_rules = self._osutil.enable_firewall(dst_ip=self._protocol.get_endpoint(), uid=os.getuid()) if len(missing_firewall_rules) > 0: if self._is_first_setup: msg = "Created firewall rules for the Azure Fabric:\n{0}".format(self._get_firewall_state()) logger.info(msg) add_event(op=WALAEventOperation.Firewall, message=msg) else: self._reset_count += 1 # We report immediately (when period is None) the first 5 instances, then we switch the period to every few hours if self._report_period is None: msg = "Some firewall rules were missing: {0}. Re-created all the rules:\n{1}".format(missing_firewall_rules, self._get_firewall_state()) if self._reset_count >= 5: self._report_period = datetime.timedelta(hours=3) self._reset_count = 0 self._report_after = datetime.datetime.now() + self._report_period elif datetime.datetime.now() >= self._report_after: msg = "Some firewall rules were missing: {0}. This has happened {1} time(s) since the last report. Re-created all the rules:\n{2}".format( missing_firewall_rules, self._reset_count, self._get_firewall_state()) self._reset_count = 0 self._report_after = datetime.datetime.now() + self._report_period else: msg = "" if msg != "": logger.info(msg) add_event(op=WALAEventOperation.ResetFirewall, message=msg) add_periodic( logger.EVERY_HOUR, AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.Firewall, is_success=success, log_event=False) self._is_first_setup = False def _get_firewall_state(self): try: return self._osutil.get_firewall_list() except Exception as e: return "Failed to get the firewall state: {0}".format(ustr(e)) class LogFirewallRules(PeriodicOperation): """ Log firewall rules state once a day. Goal is to capture the firewall state when the agent service startup, in addition to add more debug data and would be more useful long term. """ def __init__(self, osutil): super(LogFirewallRules, self).__init__(conf.get_firewall_rules_log_period()) self._osutil = osutil def _operation(self): # Log firewall rules state once a day logger.info("Current Firewall rules:\n{0}".format(self._osutil.get_firewall_list())) class SetRootDeviceScsiTimeout(PeriodicOperation): def __init__(self, osutil): super(SetRootDeviceScsiTimeout, self).__init__(conf.get_root_device_scsi_timeout_period()) self._osutil = osutil def _operation(self): self._osutil.set_scsi_disks_timeout(conf.get_root_device_scsi_timeout()) class MonitorHostNameChanges(PeriodicOperation): def __init__(self, osutil): super(MonitorHostNameChanges, self).__init__(conf.get_monitor_hostname_period()) self._osutil = osutil self._hostname = self._osutil.get_hostname_record() def _operation(self): curr_hostname = socket.gethostname() if curr_hostname != self._hostname: logger.info("EnvMonitor: Detected hostname change: {0} -> {1}", self._hostname, curr_hostname) self._osutil.set_hostname(curr_hostname) try: self._osutil.publish_hostname(curr_hostname, recover_nic=True) except Exception as e: msg = "Error while publishing the hostname: {0}".format(e) add_event(AGENT_NAME, op=WALAEventOperation.HostnamePublishing, is_success=False, message=msg, log_event=False) self._hostname = curr_hostname class EnvHandler(ThreadHandlerInterface): """ Monitor changes to dhcp and hostname. If dhcp client process re-start has occurred, reset routes, dhcp with fabric. Monitor scsi disk. 
If a new scsi disk is found, set its timeout """ _THREAD_NAME = "EnvHandler" @staticmethod def get_thread_name(): return EnvHandler._THREAD_NAME def __init__(self): self.stopped = True self.hostname = None self.env_thread = None def run(self): if not self.stopped: logger.info("Stop existing env monitor service.") self.stop() self.stopped = False logger.info("Starting env monitor service.") self.start() def is_alive(self): return self.env_thread.is_alive() def start(self): self.env_thread = threading.Thread(target=self.daemon) self.env_thread.setDaemon(True) self.env_thread.setName(self.get_thread_name()) self.env_thread.start() def daemon(self): try: # The initialization of the protocol needs to be done within the environment thread itself rather # than initializing it in the ExtHandler thread. This is done to avoid any concurrency issues as each # thread would now have its own ProtocolUtil object as per the SingletonPerThread model. protocol_util = get_protocol_util() protocol = protocol_util.get_protocol() osutil = get_osutil() periodic_operations = [ RemovePersistentNetworkRules(osutil), MonitorDhcpClientRestart(osutil), ] if conf.enable_firewall(): periodic_operations.append(EnableFirewall(osutil, protocol)) periodic_operations.append(LogFirewallRules(osutil)) if conf.get_root_device_scsi_timeout() is not None: periodic_operations.append(SetRootDeviceScsiTimeout(osutil)) if conf.get_monitor_hostname(): periodic_operations.append(MonitorHostNameChanges(osutil)) while not self.stopped: try: for op in periodic_operations: op.run() except Exception as e: logger.error("An error occurred in the environment thread main loop; will skip the current iteration.\n{0}", ustr(e)) finally: PeriodicOperation.sleep_until_next_operation(periodic_operations) except Exception as e: logger.error("An error occurred in the environment thread; will exit the thread.\n{0}", ustr(e)) def stop(self): """ Stop server communication and join the thread to the main thread. """ self.stopped = True if self.env_thread is not None: self.env_thread.join() Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/extensionprocessutil.py000066400000000000000000000221441462617747000262320ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License, Version 2.0 (the "License"); # # You may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import re import signal import time from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.event import WALAEventOperation, add_event from azurelinuxagent.common.exception import ExtensionErrorCodes, ExtensionOperationError, ExtensionError from azurelinuxagent.common.future import ustr TELEMETRY_MESSAGE_MAX_LEN = 3200 def wait_for_process_completion_or_timeout(process, timeout, cpu_cgroup): """ Utility function that waits for the process to complete within the given time frame. This function will terminate the process when the given time frame elapses. :param process: Reference to a running process :param timeout: Number of seconds to wait for the process to complete before killing it :param cpu_cgroup: The cpu cgroup of the process; used to retrieve the CPU throttled time if the process times out :return: Three values: a boolean indicating whether the process timed out, the return code of the process (None if it timed out), and the CPU throttled time """ while timeout > 0 and process.poll() is None: time.sleep(1) timeout -= 1 return_code = None throttled_time = 0 if timeout == 0: throttled_time = get_cpu_throttled_time(cpu_cgroup) os.killpg(os.getpgid(process.pid), signal.SIGKILL) else: # process completed or forked; sleep 1 sec to give the child process (if any) a chance to start time.sleep(1) return_code = process.wait() return timeout == 0, return_code, throttled_time def handle_process_completion(process, command, timeout, stdout, stderr, error_code, cpu_cgroup=None): """ Utility function that waits for process completion and retrieves its output (stdout and stderr) if it completed before the timeout period. Otherwise, the process will get killed and an ExtensionError will be raised. In case the return code is non-zero, ExtensionError will be raised. :param process: Reference to a running process :param command: The extension command to run :param timeout: Number of seconds to wait before killing the process :param stdout: Must be a file since we seek on it when parsing the subprocess output :param stderr: Must be a file since we seek on it when parsing the subprocess output :param error_code: The error code to set if we raise an ExtensionError :param cpu_cgroup: Reference to the cpu cgroup name and path :return: The formatted stdout/stderr output of the process """ # Wait for process completion or timeout timed_out, return_code, throttled_time = wait_for_process_completion_or_timeout(process, timeout, cpu_cgroup) process_output = read_output(stdout, stderr) if timed_out: if cpu_cgroup is not None: # Report CPUThrottledTime when timeout happens raise ExtensionError("Timeout({0});CPUThrottledTime({1}secs): {2}\n{3}".format(timeout, throttled_time, command, process_output), code=ExtensionErrorCodes.PluginHandlerScriptTimedout) raise ExtensionError("Timeout({0}): {1}\n{2}".format(timeout, command, process_output), code=ExtensionErrorCodes.PluginHandlerScriptTimedout) if return_code != 0: noexec_warning = "" if return_code == 126: # Permission denied noexec_path = _check_noexec() if noexec_path is not None: noexec_warning = "\nWARNING: {0} is mounted with the noexec flag, which can prevent execution of VM Extensions.".format(noexec_path) raise ExtensionOperationError( "Non-zero exit code: {0}, {1}{2}\n{3}".format(return_code, command, noexec_warning, process_output), code=error_code, exit_code=return_code) return process_output # # Collect a sample of errors while checking for the noexec flag. Consider removing this telemetry after a few releases. # _COLLECT_NOEXEC_ERRORS = True def _check_noexec(): """ Check if /var is mounted with the noexec flag. """ # W0603: Using the global statement (global-statement) # OK to disable; _COLLECT_NOEXEC_ERRORS is used only within _check_noexec, but needs to persist across calls. global _COLLECT_NOEXEC_ERRORS # pylint: disable=W0603 try: agent_dir = conf.get_lib_dir() with open('/proc/mounts', 'r') as f: while True: line = f.readline() if line == "": # EOF break # The mount point is in the second column, and the flags are in the fourth. e.g. 
# # # grep /var /proc/mounts # /dev/mapper/rootvg-varlv /var xfs rw,seclabel,noexec,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0 # columns = line.split() mount_point = columns[1] flags = columns[3] if agent_dir.startswith(mount_point) and "noexec" in flags: message = "The noexec flag is set on {0}. This can prevent extensions from executing.".format(mount_point) logger.warn(message) add_event(op=WALAEventOperation.NoExec, is_success=False, message=message) return mount_point except Exception as e: message = "Error while checking the noexec flag: {0}".format(e) logger.warn(message) if _COLLECT_NOEXEC_ERRORS: _COLLECT_NOEXEC_ERRORS = False add_event(op=WALAEventOperation.NoExec, is_success=False, log_event=False, message="Error while checking the noexec flag: {0}".format(e)) return None SAS_TOKEN_RE = re.compile(r'(https://\S+\?)((sv|st|se|sr|sp|sip|spr|sig)=\S+)+', flags=re.IGNORECASE) def read_output(stdout, stderr): """ Read the output of the process sent to stdout and stderr and trim them to the max appropriate length. :param stdout: File containing the stdout of the process :param stderr: File containing the stderr of the process :return: Returns the formatted concatenated stdout and stderr of the process """ try: stdout.seek(0) stderr.seek(0) stdout = ustr(stdout.read(TELEMETRY_MESSAGE_MAX_LEN), encoding='utf-8', errors='backslashreplace') stderr = ustr(stderr.read(TELEMETRY_MESSAGE_MAX_LEN), encoding='utf-8', errors='backslashreplace') def redact(s): # redact query strings that look like SAS tokens return SAS_TOKEN_RE.sub(r'\1', s) return format_stdout_stderr(redact(stdout), redact(stderr)) except Exception as e: return format_stdout_stderr("", "Cannot read stdout/stderr: {0}".format(ustr(e))) def format_stdout_stderr(stdout, stderr): """ Format stdout and stderr's output to make it suitable in telemetry. The goal is to maximize the amount of output given the constraints of telemetry. For example, if there is more stderr output than stdout output give more buffer space to stderr. :param str stdout: characters captured from stdout :param str stderr: characters captured from stderr :param int max_len: maximum length of the string to return :return: a string formatted with stdout and stderr that is less than or equal to max_len. :rtype: str """ template = "[stdout]\n{0}\n\n[stderr]\n{1}" # +6 == len("{0}") + len("{1}") max_len = TELEMETRY_MESSAGE_MAX_LEN max_len_each = int((max_len - len(template) + 6) / 2) if max_len_each <= 0: return '' def to_s(captured_stdout, stdout_offset, captured_stderr, stderr_offset): s = template.format(captured_stdout[stdout_offset:], captured_stderr[stderr_offset:]) return s if len(stdout) + len(stderr) < max_len: return to_s(stdout, 0, stderr, 0) elif len(stdout) < max_len_each: bonus = max_len_each - len(stdout) stderr_len = min(max_len_each + bonus, len(stderr)) return to_s(stdout, 0, stderr, -1*stderr_len) elif len(stderr) < max_len_each: bonus = max_len_each - len(stderr) stdout_len = min(max_len_each + bonus, len(stdout)) return to_s(stdout, -1*stdout_len, stderr, 0) else: return to_s(stdout, -1*max_len_each, stderr, -1*max_len_each) def get_cpu_throttled_time(cpu_cgroup): """ return the throttled time for the given cgroup. 
""" throttled_time = 0 if cpu_cgroup is not None: try: throttled_time = cpu_cgroup.get_cpu_throttled_time(read_previous_throttled_time=False) except Exception as e: logger.warn("Failed to get cpu throttled time for the extension: {0}", ustr(e)) return throttled_time Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/exthandlers.py000066400000000000000000003445351462617747000242550ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import copy import datetime import glob import json import os import re import shutil import stat import tempfile import time import zipfile from distutils.version import LooseVersion from collections import defaultdict from functools import partial from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common import version from azurelinuxagent.common.agent_supported_feature import get_agent_supported_features_list_for_extensions, \ SupportedFeatureNames, get_supported_feature_by_name, get_agent_supported_features_list_for_crp from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator from azurelinuxagent.common.datacontract import get_properties, set_properties from azurelinuxagent.common.errorstate import ErrorState from azurelinuxagent.common.event import add_event, elapsed_milliseconds, WALAEventOperation, \ add_periodic, EVENTS_DIRECTORY from azurelinuxagent.common.exception import ExtensionDownloadError, ExtensionError, ExtensionErrorCodes, \ ExtensionOperationError, ExtensionUpdateError, ProtocolError, ProtocolNotFoundError, ExtensionsGoalStateError, \ GoalStateAggregateStatusCodes, MultiConfigExtensionEnableError from azurelinuxagent.common.future import ustr, is_file_not_found_error from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource from azurelinuxagent.common.protocol.restapi import ExtensionStatus, ExtensionSubStatus, Extension, ExtHandlerStatus, \ VMStatus, GoalStateAggregateStatus, ExtensionState, ExtensionRequestedState, ExtensionSettings from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.utils.archive import ARCHIVE_DIRECTORY_NAME from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION _HANDLER_NAME_PATTERN = r'^([^-]+)' _HANDLER_VERSION_PATTERN = r'(\d+(?:\.\d+)*)' _HANDLER_PATTERN = _HANDLER_NAME_PATTERN + r"-" + _HANDLER_VERSION_PATTERN _HANDLER_PKG_PATTERN = re.compile(_HANDLER_PATTERN + r'\.zip$', re.IGNORECASE) _DEFAULT_EXT_TIMEOUT_MINUTES = 90 _VALID_HANDLER_STATUS = ['Ready', 'NotReady', "Installing", "Unresponsive"] HANDLER_NAME_PATTERN = re.compile(_HANDLER_NAME_PATTERN, re.IGNORECASE) HANDLER_COMPLETE_NAME_PATTERN = re.compile(_HANDLER_PATTERN + r'$', re.IGNORECASE) HANDLER_PKG_EXT = ".zip" # This is the default value for 
the env variables, whenever we call a command which is not an update scenario, we # set the env variable value to NOT_RUN to reduce ambiguity for the extension publishers NOT_RUN = "NOT_RUN" # Max size of individual status file _MAX_STATUS_FILE_SIZE_IN_BYTES = 128 * 1024 # 128K # Truncating length of fields. _MAX_STATUS_MESSAGE_LENGTH = 1024 # 1k message allowed to be shown in the portal. _MAX_SUBSTATUS_FIELD_LENGTH = 10 * 1024 # Making 10K; allowing fields to have enough debugging information.. _TRUNCATED_SUFFIX = u" ... [TRUNCATED]" # Status file specific retries and delays. _NUM_OF_STATUS_FILE_RETRIES = 5 _STATUS_FILE_RETRY_DELAY = 2 # seconds # This is the default sequence number we use when there are no settings available for Handlers _DEFAULT_SEQ_NO = "0" class ExtHandlerStatusValue(object): """ Statuses for Extension Handlers """ ready = "Ready" not_ready = "NotReady" class ExtensionStatusValue(object): """ Statuses for Extensions """ transitioning = "transitioning" warning = "warning" error = "error" success = "success" STRINGS = ['transitioning', 'warning', 'error', 'success'] _EXTENSION_TERMINAL_STATUSES = [ExtensionStatusValue.error, ExtensionStatusValue.success] class ExtCommandEnvVariable(object): Prefix = "AZURE_GUEST_AGENT" DisableReturnCode = "{0}_DISABLE_CMD_EXIT_CODE".format(Prefix) DisableReturnCodeMultipleExtensions = "{0}_DISABLE_CMD_EXIT_CODES_MULTIPLE_EXTENSIONS".format(Prefix) UninstallReturnCode = "{0}_UNINSTALL_CMD_EXIT_CODE".format(Prefix) ExtensionPath = "{0}_EXTENSION_PATH".format(Prefix) ExtensionVersion = "{0}_EXTENSION_VERSION".format(Prefix) ExtensionSeqNumber = "ConfigSequenceNumber" # At par with Windows Guest Agent ExtensionName = "ConfigExtensionName" UpdatingFromVersion = "{0}_UPDATING_FROM_VERSION".format(Prefix) WireProtocolAddress = "{0}_WIRE_PROTOCOL_ADDRESS".format(Prefix) ExtensionSupportedFeatures = "{0}_EXTENSION_SUPPORTED_FEATURES".format(Prefix) def validate_has_key(obj, key, full_key_path): if key not in obj: raise ExtensionStatusError(msg="Invalid status format by extension: Missing {0} key".format(full_key_path), code=ExtensionStatusError.StatusFileMalformed) def validate_in_range(val, valid_range, name): if val not in valid_range: raise ExtensionStatusError(msg="Invalid value {0} in range {1} at the node {2}".format(val, valid_range, name), code=ExtensionStatusError.StatusFileMalformed) def parse_formatted_message(formatted_message): if formatted_message is None: return None validate_has_key(formatted_message, 'lang', 'formattedMessage/lang') validate_has_key(formatted_message, 'message', 'formattedMessage/message') return formatted_message.get('message') def parse_ext_substatus(substatus): # Check extension sub status format validate_has_key(substatus, 'status', 'substatus/status') validate_in_range(substatus['status'], ExtensionStatusValue.STRINGS, 'substatus/status') status = ExtensionSubStatus() status.name = substatus.get('name') status.status = substatus.get('status') status.code = substatus.get('code', 0) formatted_message = substatus.get('formattedMessage') status.message = parse_formatted_message(formatted_message) return status def parse_ext_status(ext_status, data): if data is None: return if not isinstance(data, list): data_string = ustr(data)[:4096] raise ExtensionStatusError(msg="The extension status must be an array: {0}".format(data_string), code=ExtensionStatusError.StatusFileMalformed) if not data: return # Currently, only the first status will be reported data = data[0] # Check extension status format 
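    # An illustrative example of the payload validated below (the field values are hypothetical;
    # the contract itself is derived from the checks in this function):
    #
    #     [{
    #         "status": {
    #             "status": "success",
    #             "code": 0,
    #             "operation": "Enable",
    #             "formattedMessage": {"lang": "en-US", "message": "Enable succeeded"},
    #             "substatus": []
    #         }
    #     }]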
validate_has_key(data, 'status', 'status') status_data = data['status'] validate_has_key(status_data, 'status', 'status/status') status = status_data['status'] if status not in ExtensionStatusValue.STRINGS: status = ExtensionStatusValue.error applied_time = status_data.get('configurationAppliedTime') ext_status.configurationAppliedTime = applied_time ext_status.operation = status_data.get('operation') ext_status.status = status ext_status.code = status_data.get('code', 0) formatted_message = status_data.get('formattedMessage') ext_status.message = parse_formatted_message(formatted_message) substatus_list = status_data.get('substatus', []) # some extensions incorrectly report an empty substatus with a null value if substatus_list is None: substatus_list = [] for substatus in substatus_list: if substatus is not None: ext_status.substatusList.append(parse_ext_substatus(substatus)) def migrate_handler_state(): """ Migrate handler state and status (if they exist) from an agent-owned directory into the handler-owned config directory Notes: - The v2.0.x branch wrote all handler-related state into the handler-owned config directory (e.g., /var/lib/waagent/Microsoft.Azure.Extensions.LinuxAsm-2.0.1/config). - The v2.1.x branch original moved that state into an agent-owned handler state directory (e.g., /var/lib/waagent/handler_state). - This move can cause v2.1.x agents to multiply invoke a handler's install command. It also makes clean-up more difficult since the agent must remove the state as well as the handler directory. """ handler_state_path = os.path.join(conf.get_lib_dir(), "handler_state") if not os.path.isdir(handler_state_path): return for handler_path in glob.iglob(os.path.join(handler_state_path, "*")): handler = os.path.basename(handler_path) handler_config_path = os.path.join(conf.get_lib_dir(), handler, "config") if os.path.isdir(handler_config_path): for file in ("State", "Status"): # pylint: disable=redefined-builtin from_path = os.path.join(handler_state_path, handler, file.lower()) to_path = os.path.join(handler_config_path, "Handler" + file) if os.path.isfile(from_path) and not os.path.isfile(to_path): try: shutil.move(from_path, to_path) except Exception as e: logger.warn( "Exception occurred migrating {0} {1} file: {2}", handler, file, str(e)) try: shutil.rmtree(handler_state_path) except Exception as e: logger.warn("Exception occurred removing {0}: {1}", handler_state_path, str(e)) return class ExtHandlerState(object): NotInstalled = "NotInstalled" Installed = "Installed" Enabled = "Enabled" FailedUpgrade = "FailedUpgrade" class GoalStateStatus(object): """ This is an Enum to define the State of the GoalState as a whole. This is reported as part of the 'vmArtifactsAggregateStatus.goalStateAggregateStatus' in the status blob. Note: not to be confused with the State of the ExtHandler which reported as part of 'handlerAggregateStatus' """ Success = "Success" Failed = "Failed" # The following field is not used now but would be needed once Status reporting is moved to a separate thread. 
Initialize = "Initialize" Transitioning = "Transitioning" def get_exthandlers_handler(protocol): return ExtHandlersHandler(protocol) def list_agent_lib_directory(skip_agent_package=True, ignore_names=None): lib_dir = conf.get_lib_dir() for name in os.listdir(lib_dir): path = os.path.join(lib_dir, name) if ignore_names is not None and any(ignore_names) and name in ignore_names: continue if skip_agent_package and (version.is_agent_package(path) or version.is_agent_path(path)): continue yield name, path class ExtHandlersHandler(object): def __init__(self, protocol): self.protocol = protocol self.ext_handlers = None # The GoalState Aggregate status needs to report the last status of the GoalState. Since we only process # extensions on goal state change, we need to maintain its state. # Setting the status to None here. This would be overridden as soon as the first GoalState is processed self.__gs_aggregate_status = None self.report_status_error_state = ErrorState() def __last_gs_unsupported(self): # Return if the last GoalState was unsupported return self.__gs_aggregate_status is not None and \ self.__gs_aggregate_status.status == GoalStateStatus.Failed and \ self.__gs_aggregate_status.code == GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures def run(self): try: gs = self.protocol.get_goal_state() egs = gs.extensions_goal_state # self.ext_handlers needs to be initialized before returning, since status reporting depends on it; also # we make a deep copy of the extensions, since changes are made to self.ext_handlers while processing the extensions self.ext_handlers = copy.deepcopy(egs.extensions) if self._extensions_on_hold(): return utc_start = datetime.datetime.utcnow() error = None message = "ProcessExtensionsGoalState started [{0} channel: {1} source: {2} activity: {3} correlation {4} created: {5}]".format( egs.id, egs.channel, egs.source, egs.activity_id, egs.correlation_id, egs.created_on_timestamp) logger.info('') logger.info(message) add_event(op=WALAEventOperation.ExtensionProcessing, message=message) try: self.__process_and_handle_extensions(egs.svd_sequence_number, egs.id) self._cleanup_outdated_handlers() except Exception as e: error = u"Error processing extensions:{0}".format(textutil.format_exception(e)) finally: duration = elapsed_milliseconds(utc_start) if error is None: message = 'ProcessExtensionsGoalState completed [{0} {1} ms]\n'.format(egs.id, duration) logger.info(message) else: message = 'ProcessExtensionsGoalState failed [{0} {1} ms]\n{2}'.format(egs.id, duration, error) logger.error(message) add_event(op=WALAEventOperation.ExtensionProcessing, is_success=(error is None), message=message, log_event=False, duration=duration) except Exception as error: msg = u"ProcessExtensionsInGoalState - Exception processing extension handlers:{0}".format(textutil.format_exception(error)) logger.error(msg) add_event(op=WALAEventOperation.ExtensionProcessing, is_success=False, message=msg, log_event=False) def __get_unsupported_features(self): required_features = self.protocol.get_goal_state().extensions_goal_state.required_features supported_features = get_agent_supported_features_list_for_crp() return [feature for feature in required_features if feature not in supported_features] def __process_and_handle_extensions(self, svd_sequence_number, goal_state_id): try: # Verify we satisfy all required features, if any. If not, report failure here itself, no need to process anything further. 
unsupported_features = self.__get_unsupported_features() if any(unsupported_features): msg = "Failing GS {0} as Unsupported features found: {1}".format(goal_state_id, ', '.join(unsupported_features)) logger.warn(msg) self.__gs_aggregate_status = GoalStateAggregateStatus(status=GoalStateStatus.Failed, seq_no=svd_sequence_number, code=GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures, message=msg) add_event(op=WALAEventOperation.GoalStateUnsupportedFeatures, is_success=False, message=msg, log_event=False) else: self.handle_ext_handlers(goal_state_id) self.__gs_aggregate_status = GoalStateAggregateStatus(status=GoalStateStatus.Success, seq_no=svd_sequence_number, code=GoalStateAggregateStatusCodes.Success, message="GoalState executed successfully") except Exception as error: msg = "Unexpected error when processing goal state:{0}".format(textutil.format_exception(error)) self.__gs_aggregate_status = GoalStateAggregateStatus(status=GoalStateStatus.Failed, seq_no=svd_sequence_number, code=GoalStateAggregateStatusCodes.GoalStateUnknownFailure, message=msg) logger.warn(msg) add_event(op=WALAEventOperation.ExtensionProcessing, is_success=False, message=msg, log_event=False) @staticmethod def get_ext_handler_instance_from_path(name, path, protocol, skip_handlers=None): if not os.path.isdir(path) or re.match(HANDLER_NAME_PATTERN, name) is None: return None separator = name.rfind('-') handler_name = name[0:separator] if skip_handlers is not None and handler_name in skip_handlers: # Handler in skip_handlers list, not parsing it return None eh = Extension(name=handler_name) eh.version = str(FlexibleVersion(name[separator + 1:])) return ExtHandlerInstance(eh, protocol) def _cleanup_outdated_handlers(self): # Skip cleanup if the previous GS was Unsupported if self.__last_gs_unsupported(): return handlers = [] pkgs = [] ext_handlers_in_gs = [ext_handler.name for ext_handler in self.ext_handlers] # Build a collection of uninstalled handlers and orphaned packages # Note: # -- An orphaned package is one without a corresponding handler # directory for item, path in list_agent_lib_directory(skip_agent_package=True): try: handler_instance = ExtHandlersHandler.get_ext_handler_instance_from_path(name=item, path=path, protocol=self.protocol, skip_handlers=ext_handlers_in_gs) if handler_instance is not None: # Since this handler name doesn't exist in the GS, marking it for deletion handlers.append(handler_instance) continue except Exception: continue if os.path.isfile(path) and \ not os.path.isdir(path[0:-len(HANDLER_PKG_EXT)]): if not re.match(_HANDLER_PKG_PATTERN, item): continue pkgs.append(path) # Then, remove the orphaned packages for pkg in pkgs: try: os.remove(pkg) logger.verbose("Removed orphaned extension package {0}".format(pkg)) except OSError as e: logger.warn("Failed to remove orphaned package {0}: {1}".format(pkg, e.strerror)) # Finally, remove the directories and packages of the orphaned handlers, i.e. 
Any extension directory that # is still in the FileSystem but not in the GoalState for handler in handlers: handler.remove_ext_handler() pkg = os.path.join(conf.get_lib_dir(), handler.get_full_name() + HANDLER_PKG_EXT) if os.path.isfile(pkg): try: os.remove(pkg) logger.verbose("Removed extension package {0}".format(pkg)) except OSError as e: logger.warn("Failed to remove extension package {0}: {1}".format(pkg, e.strerror)) def _extensions_on_hold(self): if conf.get_enable_overprovisioning(): if self.protocol.get_goal_state().extensions_goal_state.on_hold: msg = "Extension handling is on hold" logger.info(msg) add_event(op=WALAEventOperation.ExtensionProcessing, message=msg) return True return False @staticmethod def __get_dependency_level(tup): (extension, handler) = tup if extension is not None: return extension.dependency_level_sort_key(handler.state) return handler.dependency_level_sort_key() def __get_sorted_extensions_for_processing(self): all_extensions = [] for handler in self.ext_handlers: if any(handler.settings): all_extensions.extend([(ext, handler) for ext in handler.settings]) else: # We need to process the Handler even if no settings specified from CRP (legacy behavior) logger.info("No extension/run-time settings settings found for {0}".format(handler.name)) all_extensions.append((None, handler)) all_extensions.sort(key=self.__get_dependency_level) return all_extensions def handle_ext_handlers(self, goal_state_id): if not self.ext_handlers: logger.info("No extension handlers found, not processing anything.") return wait_until = datetime.datetime.utcnow() + datetime.timedelta(minutes=_DEFAULT_EXT_TIMEOUT_MINUTES) all_extensions = self.__get_sorted_extensions_for_processing() # Since all_extensions are sorted based on sort_key, the last element would be the maximum based on the sort_key max_dep_level = self.__get_dependency_level(all_extensions[-1]) if any(all_extensions) else 0 depends_on_err_msg = None extensions_enabled = conf.get_extensions_enabled() for extension, ext_handler in all_extensions: handler_i = ExtHandlerInstance(ext_handler, self.protocol, extension=extension) # In case of extensions disabled, we skip processing extensions. But CRP is still waiting for some status # back for the skipped extensions. In order to propagate the status back to CRP, we will report status back # here with an error message. if not extensions_enabled: agent_conf_file_path = get_osutil().agent_conf_file_path msg = "Extension will not be processed since extension processing is disabled. To enable extension " \ "processing, set Extensions.Enabled=y in '{0}'".format(agent_conf_file_path) ext_full_name = handler_i.get_extension_full_name(extension) logger.info('') logger.info("{0}: {1}".format(ext_full_name, msg)) add_event(op=WALAEventOperation.ExtensionProcessing, message="{0}: {1}".format(ext_full_name, msg)) handler_i.set_handler_status(status=ExtHandlerStatusValue.not_ready, message=msg, code=-1) handler_i.create_status_file_if_not_exist(extension, status=ExtensionStatusValue.error, code=-1, operation=handler_i.operation, message=msg) continue # In case of depends-on errors, we skip processing extensions if there was an error processing dependent extensions. # But CRP is still waiting for some status back for the skipped extensions. In order to propagate the status back to CRP, # we will report status back here with the relevant error message for each of the dependent extension. 
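            # Illustrative scenario (hypothetical extension names): if extB declares a dependency on
            # extA and extA fails or times out, extB is never executed; instead, depends_on_err_msg is
            # reported as extB's status below so that CRP is not left waiting for the full
            # _DEFAULT_EXT_TIMEOUT_MINUTES timeout.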
if depends_on_err_msg is not None: # For MC extensions, report the HandlerStatus as is and create a new placeholder per extension if one doesn't exist if handler_i.should_perform_multi_config_op(extension): # Ensure some handler status exists for the Handler, if not, set it here if handler_i.get_handler_status() is None: handler_i.set_handler_status(message=depends_on_err_msg, code=-1) handler_i.create_status_file_if_not_exist(extension, status=ExtensionStatusValue.error, code=-1, operation=WALAEventOperation.ExtensionProcessing, message=depends_on_err_msg) # For SC extensions, overwrite the HandlerStatus with the relevant message else: handler_i.set_handler_status(message=depends_on_err_msg, code=-1) continue # Process the extension and determine whether it was successfully executed extension_success = self.handle_ext_handler(handler_i, extension, goal_state_id) dep_level = self.__get_dependency_level((extension, ext_handler)) if 0 <= dep_level < max_dep_level: extension_full_name = handler_i.get_extension_full_name(extension) try: # Do not wait for extension status if the handler failed if not extension_success: raise Exception("Skipping processing of extensions since execution of dependent extension {0} failed".format( extension_full_name)) # Wait for the extension installation until it is handled. # This is done for the install and enable. Not for the uninstallation. # If handled successfully, proceed with the current handler. # Otherwise, skip the rest of the extension installation. self.wait_for_handler_completion(handler_i, wait_until, extension=extension) except Exception as error: logger.warn( "Dependent extension {0} failed or timed out, will skip processing the rest of the extensions".format( extension_full_name)) depends_on_err_msg = ustr(error) add_event(name=extension_full_name, version=handler_i.ext_handler.version, op=WALAEventOperation.ExtensionProcessing, is_success=False, message=depends_on_err_msg) @staticmethod def wait_for_handler_completion(handler_i, wait_until, extension=None): """ Check the status of the extension being handled. Wait until it has a terminal state or times out. :raises: Exception if it is not handled successfully. """ extension_name = handler_i.get_extension_full_name(extension) # If the handler had no settings, we should not wait at all for the handler to report status. if extension is None: logger.info("No settings found for {0}, not waiting for its status".format(extension_name)) return try: ext_completed, status = False, None # Keep polling for the extension status until it succeeds or times out while datetime.datetime.utcnow() <= wait_until: ext_completed, status = handler_i.is_ext_handling_complete(extension) if ext_completed: break time.sleep(5) except Exception as e: msg = "Failed to wait for Handler completion due to unknown error. Marking the dependent extension as failed: {0}, {1}".format( extension_name, textutil.format_exception(e)) raise Exception(msg) # In case of timeout or terminal error state, we log it and raise # In case the extension reported status at the last second, we should prioritize reporting status over timeout if not ext_completed and datetime.datetime.utcnow() > wait_until: msg = "Dependent Extension {0} did not reach a terminal state within the allowed timeout. Last status was {1}".format( extension_name, status) raise Exception(msg) if status != ExtensionStatusValue.success: msg = "Dependent Extension {0} did not succeed. 
Status was {1}".format(extension_name, status) raise Exception(msg) def handle_ext_handler(self, ext_handler_i, extension, goal_state_id): """ Execute the requested command for the handler and return if success :param ext_handler_i: The ExtHandlerInstance object to execute the command on :param extension: The extension settings on which to run the command on :param goal_state_id: ID of the current GoalState :return: True if the operation was successful, False if not """ try: # Ensure the extension config was valid if ext_handler_i.ext_handler.is_invalid_setting: raise ExtensionsGoalStateError(ext_handler_i.ext_handler.invalid_setting_reason) handler_state = ext_handler_i.ext_handler.state # The Guest Agent currently only supports 1 installed version per extension on the VM. # If the extension version is unregistered and the customers wants to uninstall the extension, # we should let it go through even if the installed version doesnt exist in Handler manifest (PIR) anymore. # If target state is enabled and version not found in manifest, do not process the extension. if ext_handler_i.decide_version(target_state=handler_state, extension=extension) is None and handler_state == ExtensionRequestedState.Enabled: handler_version = ext_handler_i.ext_handler.version name = ext_handler_i.ext_handler.name err_msg = "Unable to find version {0} in manifest for extension {1}".format(handler_version, name) ext_handler_i.set_operation(WALAEventOperation.Download) raise ExtensionError(msg=err_msg) # Handle everything on an extension level rather than Handler level ext_handler_i.logger.info("Target handler state: {0} [{1}]", handler_state, goal_state_id) if handler_state == ExtensionRequestedState.Enabled: self.handle_enable(ext_handler_i, extension) elif handler_state == ExtensionRequestedState.Disabled: self.handle_disable(ext_handler_i, extension) elif handler_state == ExtensionRequestedState.Uninstall: self.handle_uninstall(ext_handler_i, extension=extension) else: message = u"Unknown ext handler state:{0}".format(handler_state) raise ExtensionError(message) return True except MultiConfigExtensionEnableError as error: ext_name = ext_handler_i.get_extension_full_name(extension) err_msg = "Error processing MultiConfig extension {0}: {1}".format(ext_name, ustr(error)) # This error is only thrown for enable operation on MultiConfig extension. # Since these are maintained by the extensions, the expectation here is that they would update their status files appropriately with their errors. # The extensions should already have a placeholder status file, but incase they dont, setting one here to fail fast. ext_handler_i.create_status_file_if_not_exist(extension, status=ExtensionStatusValue.error, code=error.code, operation=ext_handler_i.operation, message=err_msg) add_event(name=ext_name, version=ext_handler_i.ext_handler.version, op=ext_handler_i.operation, is_success=False, log_event=True, message=err_msg) except ExtensionsGoalStateError as error: # Catch and report Invalid ExtensionConfig errors here to fail fast rather than timing out after 90 min err_msg = "Ran into config errors: {0}. 
\nPlease retry again as another operation with updated settings".format( ustr(error)) self.__handle_and_report_ext_handler_errors(ext_handler_i, error, report_op=WALAEventOperation.InvalidExtensionConfig, message=err_msg, extension=extension) except ExtensionUpdateError as error: # Not reporting the error as it has already been reported from the old version self.__handle_and_report_ext_handler_errors(ext_handler_i, error, ext_handler_i.operation, ustr(error), report=False, extension=extension) except ExtensionDownloadError as error: msg = "Failed to download artifacts: {0}".format(ustr(error)) self.__handle_and_report_ext_handler_errors(ext_handler_i, error, report_op=WALAEventOperation.Download, message=msg, extension=extension) except ExtensionError as error: self.__handle_and_report_ext_handler_errors(ext_handler_i, error, ext_handler_i.operation, ustr(error), extension=extension) except Exception as error: error.code = -1 self.__handle_and_report_ext_handler_errors(ext_handler_i, error, ext_handler_i.operation, ustr(error), extension=extension) return False @staticmethod def __handle_and_report_ext_handler_errors(ext_handler_i, error, report_op, message, report=True, extension=None): # This function is only called for Handler level errors, we capture MultiConfig errors separately, # so report only HandlerStatus here. ext_handler_i.set_handler_status(message=message, code=error.code) # If the handler supports multi-config, create a status file with failed status if no status file exists. # This is for correctly reporting errors back to CRP for failed Handler level operations for MultiConfig extensions. # In case of Handler failures, we will retry each time for each extension, so we need to create a status # file with failure since the extensions wont be called where they can create their status files. # This way we guarantee reporting back to CRP if ext_handler_i.should_perform_multi_config_op(extension): ext_handler_i.create_status_file_if_not_exist(extension, status=ExtensionStatusValue.error, code=error.code, operation=report_op, message=message) if report: name = ext_handler_i.get_extension_full_name(extension) handler_version = ext_handler_i.ext_handler.version add_event(name=name, version=handler_version, op=report_op, is_success=False, log_event=True, message=message) def handle_enable(self, ext_handler_i, extension): """ 1- Ensure the handler is installed 2- Check if extension is enabled or disabled and then process accordingly """ uninstall_exit_code = None old_ext_handler_i = ext_handler_i.get_installed_ext_handler() current_handler_state = ext_handler_i.get_handler_state() ext_handler_i.logger.info("[Enable] current handler state is: {0}", current_handler_state.lower()) # We go through the entire process of downloading and initializing the extension if it's either a fresh # extension or if it's a retry of a previously failed upgrade. if current_handler_state == ExtHandlerState.NotInstalled or current_handler_state == ExtHandlerState.FailedUpgrade: self.__setup_new_handler(ext_handler_i, extension) if old_ext_handler_i is None: ext_handler_i.install(extension=extension) elif ext_handler_i.version_ne(old_ext_handler_i): # This is a special case, we need to update the handler version here but to do that we need to also # disable each enabled extension of this handler. 
uninstall_exit_code = ExtHandlersHandler._update_extension_handler_and_return_if_failed( old_ext_handler_i, ext_handler_i, extension) else: ext_handler_i.ensure_consistent_data_for_mc() ext_handler_i.update_settings(extension) self.__handle_extension(ext_handler_i, extension, uninstall_exit_code) @staticmethod def __setup_new_handler(ext_handler_i, extension): ext_handler_i.set_handler_state(ExtHandlerState.NotInstalled) ext_handler_i.download() ext_handler_i.initialize() ext_handler_i.update_settings(extension) @staticmethod def __handle_extension(ext_handler_i, extension, uninstall_exit_code): # Check if extension level settings provided for the handler, if not, call enable for the handler. # This is legacy behavior, we can have handlers with no settings. if extension is None: ext_handler_i.enable() return # MultiConfig: Handle extension level ops here ext_handler_i.logger.info("Requested extension state: {0}", extension.state) if extension.state == ExtensionState.Enabled: ext_handler_i.enable(extension, uninstall_exit_code=uninstall_exit_code) elif extension.state == ExtensionState.Disabled: # Only disable extension if the requested state == Disabled and current state is != Disabled if ext_handler_i.get_extension_state(extension) != ExtensionState.Disabled: # Extensions can only be disabled for Multi Config extensions. Disable operation for extension is # tantamount to uninstalling Handler so ignoring errors incase of Disable failure and deleting state. ext_handler_i.disable(extension, ignore_error=True) else: ext_handler_i.logger.info("Extension already disabled, not doing anything") else: raise ExtensionsGoalStateError( "Unknown requested state for Extension {0}: {1}".format(extension.name, extension.state)) @staticmethod def _update_extension_handler_and_return_if_failed(old_ext_handler_i, ext_handler_i, extension=None): def execute_old_handler_command_and_return_if_succeeds(func): """ Created a common wrapper to execute all commands that need to be executed from the old handler so that it can have a common exception handling mechanism :param func: The command to be executed on the old handler :return: True if command execution succeeds and False if it fails """ continue_on_update_failure = False exit_code = 0 try: continue_on_update_failure = ext_handler_i.load_manifest().is_continue_on_update_failure() func() except ExtensionError as e: # Reporting the event with the old handler and raising a new ExtensionUpdateError to set the # handler status on the new version msg = "%s; ContinueOnUpdate: %s" % (ustr(e), continue_on_update_failure) old_ext_handler_i.report_event(message=msg, is_success=False) if not continue_on_update_failure: raise ExtensionUpdateError(msg) exit_code = e.code if isinstance(e, ExtensionOperationError): exit_code = e.exit_code # pylint: disable=E1101 logger.info("Continue on Update failure flag is set, proceeding with update") return exit_code disable_exit_codes = defaultdict(lambda: NOT_RUN) # We only want to disable the old handler if it is currently enabled; no other state makes sense. 
if old_ext_handler_i.get_handler_state() == ExtHandlerState.Enabled: # Corner case - If the old handler is a Single config Handler with no extensions at all, # we should just disable the handler if not old_ext_handler_i.supports_multi_config and not any(old_ext_handler_i.extensions): disable_exit_codes[ old_ext_handler_i.ext_handler.name] = execute_old_handler_command_and_return_if_succeeds( func=partial(old_ext_handler_i.disable, extension=None)) # Else we disable all enabled extensions of this handler # Note: If MC is supported this will disable only enabled_extensions else it will disable all extensions for old_ext in old_ext_handler_i.enabled_extensions: disable_exit_codes[old_ext.name] = execute_old_handler_command_and_return_if_succeeds( func=partial(old_ext_handler_i.disable, extension=old_ext)) ext_handler_i.copy_status_files(old_ext_handler_i) if ext_handler_i.version_gt(old_ext_handler_i): ext_handler_i.update(disable_exit_codes=disable_exit_codes, updating_from_version=old_ext_handler_i.ext_handler.version, extension=extension) else: updating_from_version = ext_handler_i.ext_handler.version old_ext_handler_i.update(handler_version=updating_from_version, disable_exit_codes=disable_exit_codes, updating_from_version=updating_from_version, extension=extension) uninstall_exit_code = execute_old_handler_command_and_return_if_succeeds( func=partial(old_ext_handler_i.uninstall, extension=extension)) old_ext_handler_i.remove_ext_handler() ext_handler_i.update_with_install(uninstall_exit_code=uninstall_exit_code, extension=extension) return uninstall_exit_code def handle_disable(self, ext_handler_i, extension=None): """ Disable is a legacy behavior, CRP doesn't support it, its only for XML based extensions. In case we get a disable request, just disable that extension. """ handler_state = ext_handler_i.get_handler_state() ext_handler_i.logger.info("[Disable] current handler state is: {0}", handler_state.lower()) if handler_state == ExtHandlerState.Enabled: ext_handler_i.disable(extension) def handle_uninstall(self, ext_handler_i, extension): """ To Uninstall the handler, first ensure all extensions are disabled 1- Disable all enabled extensions first if Handler is Enabled and then Disable the handler (disabled extensions wont have any extensions dependent on them so we can just go ahead and remove all of them at once if HandlerState==Uninstall. 
CRP will only set the HandlerState to Uninstall if all its extensions are set to be disabled) 2- Finally uninstall the handler """ handler_state = ext_handler_i.get_handler_state() ext_handler_i.logger.info("[Uninstall] current handler state is: {0}", handler_state.lower()) if handler_state != ExtHandlerState.NotInstalled: if handler_state == ExtHandlerState.Enabled: # Corner case - Single config Handler with no extensions at all # If there are no extension settings for Handler, we should just disable the handler if not ext_handler_i.supports_multi_config and not any(ext_handler_i.extensions): ext_handler_i.disable() # If Handler is Enabled, there should be atleast 1 enabled extension for the handler # Note: If MC is supported this will disable only enabled_extensions else it will disable all extensions for enabled_ext in ext_handler_i.enabled_extensions: ext_handler_i.disable(enabled_ext) # Try uninstalling the extension and swallow any exceptions in case of failures after logging them try: ext_handler_i.uninstall(extension=extension) except ExtensionError as e: ext_handler_i.report_event(message=ustr(e), is_success=False) ext_handler_i.remove_ext_handler() def __get_handlers_on_file_system(self, goal_state_changed): handlers_to_report = [] # Ignoring the `history` and `events` directories as they're not handlers and are agent-generated for item, path in list_agent_lib_directory(skip_agent_package=True, ignore_names=[EVENTS_DIRECTORY, ARCHIVE_DIRECTORY_NAME]): try: handler_instance = ExtHandlersHandler.get_ext_handler_instance_from_path(name=item, path=path, protocol=self.protocol) if handler_instance is not None: ext_handler = handler_instance.ext_handler # For each handler we need to add extensions to report their status. # For Single Config, we just need to add one extension with name as Handler Name # For Multi Config, walk the config directory and find all unique extension names # and add them as extensions to the handler. extensions_names = set() # Settings for Multi Config are saved as ..settings. # Use this pattern to determine if Handler supports Multi Config or not and add extensions for settings_path in glob.iglob(os.path.join(handler_instance.get_conf_dir(), "*.*.settings")): match = re.search("(?P\\w+)\\.\\d+\\.settings", settings_path) if match is not None: extensions_names.add(match.group("extname")) ext_handler.supports_multi_config = True # If nothing found with that pattern then its a Single Config, add an extension with Handler Name if not any(extensions_names): extensions_names.add(ext_handler.name) for ext_name in extensions_names: ext = ExtensionSettings(name=ext_name) # Fetch the last modified sequence number seq_no, _ = handler_instance.get_status_file_path(ext) ext.sequenceNumber = seq_no # Append extension to the list of extensions for the handler ext_handler.settings.append(ext) handlers_to_report.append(ext_handler) except Exception as error: # Log error once per goal state if goal_state_changed: logger.warn("Can't fetch ExtHandler from path: {0}; Error: {1}".format(path, ustr(error))) return handlers_to_report def report_ext_handlers_status(self, goal_state_changed=False, vm_agent_update_status=None, vm_agent_supports_fast_track=False): """ Go through handler_state dir, collect and report status. Returns the status it reported, or None if an error occurred. 
""" try: vm_status = VMStatus(status="Ready", message="Guest Agent is running", gs_aggregate_status=self.__gs_aggregate_status, vm_agent_update_status=vm_agent_update_status) vm_status.vmAgent.set_supports_fast_track(vm_agent_supports_fast_track) handlers_to_report = [] # In case of Unsupported error, report the status of the handlers in the VM if self.__last_gs_unsupported(): handlers_to_report = self.__get_handlers_on_file_system(goal_state_changed) # If GoalState supported, report the status of extension handlers that were requested by the GoalState elif not self.__last_gs_unsupported() and self.ext_handlers is not None: handlers_to_report = self.ext_handlers for ext_handler in handlers_to_report: try: self.report_ext_handler_status(vm_status, ext_handler, goal_state_changed) except ExtensionError as error: add_event(op=WALAEventOperation.ExtensionProcessing, is_success=False, message=ustr(error)) logger.verbose("Report vm agent status") try: self.protocol.report_vm_status(vm_status) logger.verbose("Completed vm agent status report successfully") self.report_status_error_state.reset() except ProtocolNotFoundError as error: self.report_status_error_state.incr() message = "Failed to report vm agent status: {0}".format(error) logger.verbose(message) except ProtocolError as error: self.report_status_error_state.incr() message = "Failed to report vm agent status: {0}".format(error) add_event(AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.ExtensionProcessing, is_success=False, message=message) if self.report_status_error_state.is_triggered(): message = "Failed to report vm agent status for more than {0}" \ .format(self.report_status_error_state.min_timedelta) add_event(AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.ReportStatusExtended, is_success=False, message=message) self.report_status_error_state.reset() return vm_status except Exception as error: msg = u"Failed to report status: {0}".format(textutil.format_exception(error)) logger.warn(msg) add_event(AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.ReportStatus, is_success=False, message=msg) return None def report_ext_handler_status(self, vm_status, ext_handler, goal_state_changed): ext_handler_i = ExtHandlerInstance(ext_handler, self.protocol) handler_status = ext_handler_i.get_handler_status() # If nothing available, skip reporting if handler_status is None: # We should always have some handler status if requested state != Uninstall irrespective of single or # multi-config. If state is != Uninstall, report error if ext_handler.state != ExtensionRequestedState.Uninstall: msg = "No handler status found for {0}. Not reporting anything for it.".format(ext_handler.name) ext_handler_i.report_error_on_incarnation_change(goal_state_changed, log_msg=msg, event_msg=msg) return handler_state = ext_handler_i.get_handler_state() ext_handler_statuses = [] # For MultiConfig, we need to report status per extension even for Handler level failures. # If we have HandlerStatus for a MultiConfig handler and GS is requesting for it, we would report status per # extension even if HandlerState == NotInstalled (Sample scenario: ExtensionsGoalStateError, DecideVersionError, etc) # We also need to report extension status for an uninstalled handler if extensions are disabled because CRP # waits for extension runtime status before failing the extension operation. 
if handler_state != ExtHandlerState.NotInstalled or ext_handler.supports_multi_config or not conf.get_extensions_enabled(): # Since we require reading the Manifest for reading the heartbeat, this would fail if HandlerManifest not found. # Only try to read heartbeat if HandlerState != NotInstalled. if handler_state != ExtHandlerState.NotInstalled: # Heartbeat is a handler level thing only, so we dont need to modify this try: heartbeat = ext_handler_i.collect_heartbeat() if heartbeat is not None: handler_status.status = heartbeat.get('status') if 'formattedMessage' in heartbeat: handler_status.message = parse_formatted_message(heartbeat.get('formattedMessage')) except ExtensionError as e: ext_handler_i.set_handler_status(message=ustr(e), code=e.code) ext_handler_statuses = ext_handler_i.get_extension_handler_statuses(handler_status, goal_state_changed) # If not any extension status reported, report the Handler status if not any(ext_handler_statuses): ext_handler_statuses.append(handler_status) vm_status.vmAgent.extensionHandlers.extend(ext_handler_statuses) class ExtHandlerInstance(object): def __init__(self, ext_handler, protocol, execution_log_max_size=(10 * 1024 * 1024), extension=None): self.ext_handler = ext_handler self.protocol = protocol self.operation = None self.pkg = None self.pkg_file = None self.logger = None self.set_logger(extension=extension, execution_log_max_size=execution_log_max_size) @property def supports_multi_config(self): return self.ext_handler.supports_multi_config @property def extensions(self): return self.ext_handler.settings @property def enabled_extensions(self): """ In case of Single config, just return all the extensions of the handler (expectation being that there'll only be a single extension per handler). We will not be maintaining extension level state for Single config Handlers """ if self.supports_multi_config: return [ext for ext in self.extensions if self.get_extension_state(ext) == ExtensionState.Enabled] return self.extensions def get_extension_full_name(self, extension=None): """ Get the full name of the extension .. :param extension: The requested extension :return: if MultiConfig not supported or extension == None, else . 
""" if self.should_perform_multi_config_op(extension): return "{0}.{1}".format(self.ext_handler.name, extension.name) return self.ext_handler.name def __set_command_execution_log(self, extension, execution_log_max_size): try: fileutil.mkdir(self.get_log_dir(), mode=0o755, reset_mode_and_owner=False) except IOError as e: self.logger.error(u"Failed to create extension log dir: {0}", e) else: log_file_name = "CommandExecution.log" if not self.should_perform_multi_config_op( extension) else "CommandExecution_{0}.log".format(extension.name) log_file = os.path.join(self.get_log_dir(), log_file_name) self.__truncate_file_head(log_file, execution_log_max_size, self.get_extension_full_name(extension)) self.logger.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, log_file) @staticmethod def __truncate_file_head(filename, max_size, extension_name): try: if os.stat(filename).st_size <= max_size: return with open(filename, "rb") as existing_file: existing_file.seek(-1 * max_size, 2) _ = existing_file.readline() with open(filename + ".tmp", "wb") as tmp_file: shutil.copyfileobj(existing_file, tmp_file) os.rename(filename + ".tmp", filename) except (IOError, OSError) as e: if is_file_not_found_error(e): # If CommandExecution.log does not exist, it's not noteworthy; # this just means that no extension with self.ext_handler.name is # installed. return logger.error( "Exception occurred while attempting to truncate {0} for extension {1}. Exception is: {2}", filename, extension_name, ustr(e)) for f in (filename, filename + ".tmp"): try: os.remove(f) except (IOError, OSError) as cleanup_exception: if is_file_not_found_error(cleanup_exception): logger.info("File '{0}' does not exist.", f) else: logger.warn("Exception occurred while attempting to remove file '{0}': {1}", f, cleanup_exception) def decide_version(self, target_state=None, extension=None): self.logger.verbose("Decide which version to use") try: manifest = self.protocol.get_goal_state().fetch_extension_manifest(self.ext_handler.name, self.ext_handler.manifest_uris) pkg_list = manifest.pkg_list except ProtocolError as e: raise ExtensionError("Failed to get ext handler pkgs", e) except ExtensionDownloadError: self.set_operation(WALAEventOperation.Download) raise # Determine the desired and installed versions requested_version = FlexibleVersion(str(self.ext_handler.version)) installed_version_string = self.get_installed_version() installed_version = requested_version if installed_version_string is None else FlexibleVersion(installed_version_string) # Divide packages # - Find the installed package (its version must exactly match) # - Find the internal candidate (its version must exactly match) # - Separate the public packages selected_pkg = None installed_pkg = None pkg_list.versions.sort(key=lambda p: FlexibleVersion(p.version)) for pkg in pkg_list.versions: pkg_version = FlexibleVersion(pkg.version) if pkg_version == installed_version: installed_pkg = pkg if requested_version.matches(pkg_version): selected_pkg = pkg # Finally, update the version only if not downgrading # Note: # - A downgrade, which will be bound to the same major version, # is allowed if the installed version is no longer available if target_state in (ExtensionRequestedState.Uninstall, ExtensionRequestedState.Disabled): if installed_pkg is None: msg = "Failed to find installed version: {0} of Handler: {1} in handler manifest to uninstall.".format( installed_version, self.ext_handler.name) self.logger.warn(msg) self.pkg = installed_pkg self.ext_handler.version = 
str(installed_version) \ if installed_version is not None else None else: self.pkg = selected_pkg if self.pkg is not None: self.ext_handler.version = str(selected_pkg.version) if self.pkg is not None: self.logger.verbose("Use version: {0}", self.pkg.version) # We reset the logger here incase the handler version changes if not requested_version.matches(FlexibleVersion(self.ext_handler.version)): self.set_logger(extension=extension) return self.pkg def set_logger(self, execution_log_max_size=(10 * 1024 * 1024), extension=None): prefix = "[{0}]".format(self.get_full_name(extension)) self.logger = logger.Logger(logger.DEFAULT_LOGGER, prefix) self.__set_command_execution_log(extension, execution_log_max_size) def version_gt(self, other): self_version = self.ext_handler.version other_version = other.ext_handler.version return FlexibleVersion(self_version) > FlexibleVersion(other_version) def version_ne(self, other): self_version = self.ext_handler.version other_version = other.ext_handler.version return FlexibleVersion(self_version) != FlexibleVersion(other_version) def get_installed_ext_handler(self): latest_version = self.get_installed_version() if latest_version is None: return None installed_handler = copy.deepcopy(self.ext_handler) installed_handler.version = latest_version return ExtHandlerInstance(installed_handler, self.protocol) def get_installed_version(self): latest_version = None for path in glob.iglob(os.path.join(conf.get_lib_dir(), self.ext_handler.name + "-*")): if not os.path.isdir(path): continue separator = path.rfind('-') version_from_path = FlexibleVersion(path[separator + 1:]) state_path = os.path.join(path, 'config', 'HandlerState') if not os.path.exists(state_path) or fileutil.read_file(state_path) == ExtHandlerState.NotInstalled \ or fileutil.read_file(state_path) == ExtHandlerState.FailedUpgrade: logger.verbose("Ignoring version of uninstalled or failed extension: {0}".format(path)) continue if latest_version is None or latest_version < version_from_path: latest_version = version_from_path return str(latest_version) if latest_version is not None else None def copy_status_files(self, old_ext_handler_i): self.logger.info("Copy status files from old plugin to new") old_ext_dir = old_ext_handler_i.get_base_dir() new_ext_dir = self.get_base_dir() old_ext_mrseq_file = os.path.join(old_ext_dir, "mrseq") if os.path.isfile(old_ext_mrseq_file): logger.info("Migrating {0} to {1}.", old_ext_mrseq_file, new_ext_dir) shutil.copy2(old_ext_mrseq_file, new_ext_dir) else: logger.info("{0} does not exist, no migration is needed.", old_ext_mrseq_file) old_ext_status_dir = old_ext_handler_i.get_status_dir() new_ext_status_dir = self.get_status_dir() if os.path.isdir(old_ext_status_dir): for status_file in os.listdir(old_ext_status_dir): status_file = os.path.join(old_ext_status_dir, status_file) if os.path.isfile(status_file): shutil.copy2(status_file, new_ext_status_dir) def set_operation(self, op): self.operation = op def report_event(self, name=None, message="", is_success=True, duration=0, log_event=True): ext_handler_version = self.ext_handler.version name = self.ext_handler.name if name is None else name add_event(name=name, version=ext_handler_version, message=message, op=self.operation, is_success=is_success, duration=duration, log_event=log_event) def _unzip_extension_package(self, source_file, target_directory): self.logger.info("Unzipping extension package: {0}", source_file) try: zipfile.ZipFile(source_file).extractall(target_directory) except Exception as exception: 
logger.info("Error while unzipping extension package: {0}", ustr(exception)) os.remove(source_file) if os.path.exists(target_directory): shutil.rmtree(target_directory) return False return True def download(self): begin_utc = datetime.datetime.utcnow() self.set_operation(WALAEventOperation.Download) if self.pkg is None or self.pkg.uris is None or len(self.pkg.uris) == 0: raise ExtensionDownloadError("No package uri found") package_file = os.path.join(conf.get_lib_dir(), self.get_extension_package_zipfile_name()) package_exists = False if os.path.exists(package_file): self.logger.info("Using existing extension package: {0}", package_file) if self._unzip_extension_package(package_file, self.get_base_dir()): package_exists = True else: self.logger.info("The existing extension package is invalid, will ignore it.") if not package_exists: is_fast_track_goal_state = self.protocol.get_goal_state().extensions_goal_state.source == GoalStateSource.FastTrack self.protocol.client.download_zip_package("extension package", self.pkg.uris, package_file, self.get_base_dir(), use_verify_header=is_fast_track_goal_state) self.report_event(message="Download succeeded", duration=elapsed_milliseconds(begin_utc)) self.pkg_file = package_file def ensure_consistent_data_for_mc(self): # If CRP expects Handler to support MC, ensure the HandlerManifest also reflects that. # Even though the HandlerManifest.json is not expected to change once the extension is installed, # CRP can wrongfully request send a Multi-Config GoalState even if the Handler supports only Single Config. # Checking this only if HandlerState == Enable. In case of Uninstall, we dont care. if self.supports_multi_config and not self.load_manifest().supports_multiple_extensions(): raise ExtensionsGoalStateError( "Handler {0} does not support MultiConfig but CRP expects it, failing due to inconsistent data".format( self.ext_handler.name)) def initialize(self): self.logger.info("Initializing extension {0}".format(self.get_full_name())) # Add user execute permission to all files under the base dir for file in fileutil.get_all_files(self.get_base_dir()): # pylint: disable=redefined-builtin fileutil.chmod(file, os.stat(file).st_mode | stat.S_IXUSR) # Save HandlerManifest.json man_file = fileutil.search_file(self.get_base_dir(), 'HandlerManifest.json') if man_file is None: raise ExtensionDownloadError("HandlerManifest.json not found") try: man = fileutil.read_file(man_file, remove_bom=True) fileutil.write_file(self.get_manifest_file(), man) except IOError as e: fileutil.clean_ioerror(e, paths=[self.get_base_dir(), self.pkg_file]) raise ExtensionDownloadError(u"Failed to save HandlerManifest.json", e) self.ensure_consistent_data_for_mc() # Create status and config dir try: status_dir = self.get_status_dir() fileutil.mkdir(status_dir, mode=0o700) conf_dir = self.get_conf_dir() fileutil.mkdir(conf_dir, mode=0o700) if get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).is_supported: fileutil.mkdir(self.get_extension_events_dir(), mode=0o700) except IOError as e: fileutil.clean_ioerror(e, paths=[self.get_base_dir(), self.pkg_file]) raise ExtensionDownloadError(u"Failed to initialize extension '{0}'".format(self.get_full_name()), e) # Save HandlerEnvironment.json self.create_handler_env() self.set_extension_resource_limits() def set_extension_resource_limits(self): extension_name = self.get_full_name() # setup the resource limits for extension operations and it's services. 
man = self.load_manifest() resource_limits = man.get_resource_limits(extension_name, self.ext_handler.version) if not CGroupConfigurator.get_instance().is_extension_resource_limits_setup_completed(extension_name, cpu_quota=resource_limits.get_extension_slice_cpu_quota()): CGroupConfigurator.get_instance().setup_extension_slice( extension_name=extension_name, cpu_quota=resource_limits.get_extension_slice_cpu_quota()) CGroupConfigurator.get_instance().set_extension_services_cpu_memory_quota(resource_limits.get_service_list()) def create_status_file_if_not_exist(self, extension, status, code, operation, message): _, status_path = self.get_status_file_path(extension) if status_path is not None and not os.path.exists(status_path): now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ") status_contents = [ { "version": 1.0, "timestampUTC": now, "status": { "name": self.get_extension_full_name(extension), "operation": operation, "status": status, "code": code, "formattedMessage": { "lang": "en-US", "message": message } } } ] # Create status directory if not exists. This is needed in the case where the Handler fails before even # initializing the directories (ExtensionsGoalStateError, Version deleted from PIR error, etc) if not os.path.exists(os.path.dirname(status_path)): fileutil.mkdir(os.path.dirname(status_path), mode=0o700) self.logger.info("Creating a placeholder status file {0} with status: {1}".format(status_path, status)) fileutil.write_file(status_path, json.dumps(status_contents)) def enable(self, extension=None, uninstall_exit_code=None): try: self._enable_extension(extension, uninstall_exit_code) except ExtensionError as error: if self.should_perform_multi_config_op(extension): raise MultiConfigExtensionEnableError(error) raise # Even if a single extension is enabled for this handler, set the Handler state as Enabled self.set_handler_state(ExtHandlerState.Enabled) self.set_handler_status(status=ExtHandlerStatusValue.ready, message="Plugin enabled") def should_perform_multi_config_op(self, extension): return self.supports_multi_config and extension is not None def _enable_extension(self, extension, uninstall_exit_code): uninstall_exit_code = str(uninstall_exit_code) if uninstall_exit_code is not None else NOT_RUN env = { ExtCommandEnvVariable.UninstallReturnCode: uninstall_exit_code } # This check to call the setup if extension already installed and not called setup before self.set_extension_resource_limits() self.set_operation(WALAEventOperation.Enable) man = self.load_manifest() enable_cmd = man.get_enable_command() self.logger.info("Enable extension: [{0}]".format(enable_cmd)) self.launch_command(enable_cmd, cmd_name="enable", timeout=300, extension_error_code=ExtensionErrorCodes.PluginEnableProcessingFailed, env=env, extension=extension) if self.should_perform_multi_config_op(extension): # Only save extension state if MC supported self.__set_extension_state(extension, ExtensionState.Enabled) # start tracking the extension services cgroup. 
resource_limits = man.get_resource_limits(self.get_full_name(), self.ext_handler.version) CGroupConfigurator.get_instance().start_tracking_extension_services_cgroups( resource_limits.get_service_list()) def _disable_extension(self, extension=None): self.set_operation(WALAEventOperation.Disable) man = self.load_manifest() disable_cmd = man.get_disable_command() self.logger.info("Disable extension: [{0}]".format(disable_cmd)) self.launch_command(disable_cmd, cmd_name="disable", timeout=900, extension_error_code=ExtensionErrorCodes.PluginDisableProcessingFailed, extension=extension) def disable(self, extension=None, ignore_error=False): try: self._disable_extension(extension) except ExtensionError as error: if not ignore_error: raise msg = "[Ignored Error] Ran into error disabling extension:{0}".format(ustr(error)) self.logger.info(msg) self.report_event(name=self.get_extension_full_name(extension), message=msg, is_success=False, log_event=False) # Clean extension state For Multi Config extensions on Disable if self.should_perform_multi_config_op(extension): self.__remove_extension_state_files(extension) # For Single config, dont check enabled_extensions because no extension state is maintained. # For MultiConfig, Set the handler state to Installed only when all extensions have been disabled if not self.supports_multi_config or not any(self.enabled_extensions): self.set_handler_state(ExtHandlerState.Installed) self.set_handler_status(status=ExtHandlerStatusValue.not_ready, message="Plugin disabled") def install(self, uninstall_exit_code=None, extension=None): # For Handler level operations, extension just specifies the settings that initiated the install. # This is needed to provide the sequence number and extension name in case the extension needs to report # failure/status using status file. uninstall_exit_code = str(uninstall_exit_code) if uninstall_exit_code is not None else NOT_RUN env = {ExtCommandEnvVariable.UninstallReturnCode: uninstall_exit_code} man = self.load_manifest() install_cmd = man.get_install_command() self.logger.info("Install extension [{0}]".format(install_cmd)) self.set_operation(WALAEventOperation.Install) self.launch_command(install_cmd, cmd_name="install", timeout=900, extension=extension, extension_error_code=ExtensionErrorCodes.PluginInstallProcessingFailed, env=env) self.set_handler_state(ExtHandlerState.Installed) self.set_handler_status(status=ExtHandlerStatusValue.not_ready, message="Plugin installed but not enabled") def uninstall(self, extension=None): # For Handler level operations, extension just specifies the settings that initiated the uninstall. # This is needed to provide the sequence number and extension name in case the extension needs to report # failure/status using status file. self.set_operation(WALAEventOperation.UnInstall) man = self.load_manifest() # stop tracking extension services cgroup. 
resource_limits = man.get_resource_limits(self.get_full_name(), self.ext_handler.version) CGroupConfigurator.get_instance().stop_tracking_extension_services_cgroups( resource_limits.get_service_list()) CGroupConfigurator.get_instance().remove_extension_services_drop_in_files( resource_limits.get_service_list()) uninstall_cmd = man.get_uninstall_command() self.logger.info("Uninstall extension [{0}]".format(uninstall_cmd)) self.launch_command(uninstall_cmd, cmd_name="uninstall", extension=extension) def remove_ext_handler(self): try: zip_filename = os.path.join(conf.get_lib_dir(), self.get_extension_package_zipfile_name()) if os.path.exists(zip_filename): os.remove(zip_filename) self.logger.verbose("Deleted the extension zip at path {0}", zip_filename) base_dir = self.get_base_dir() if os.path.isdir(base_dir): self.logger.info("Remove extension handler directory: {0}", base_dir) # some extensions uninstall asynchronously so ignore error 2 while removing them def on_rmtree_error(_, __, exc_info): _, exception, _ = exc_info if not isinstance(exception, OSError) or exception.errno != 2: # [Errno 2] No such file or directory raise exception shutil.rmtree(base_dir, onerror=on_rmtree_error) self.logger.info("Remove the extension slice: {0}".format(self.get_full_name())) CGroupConfigurator.get_instance().remove_extension_slice( extension_name=self.get_full_name()) except IOError as e: message = "Failed to remove extension handler directory: {0}".format(e) self.report_event(message=message, is_success=False) self.logger.warn(message) def update(self, handler_version=None, disable_exit_codes=None, updating_from_version=None, extension=None): # For Handler level operations, extension just specifies the settings that initiated the update. # This is needed to provide the sequence number and extension name in case the extension needs to report # failure/status using status file. if handler_version is None: handler_version = self.ext_handler.version env = { 'VERSION': handler_version, ExtCommandEnvVariable.UpdatingFromVersion: updating_from_version } if not self.supports_multi_config: # For single config, extension.name == ext_handler.name env[ExtCommandEnvVariable.DisableReturnCode] = ustr(disable_exit_codes.get(self.ext_handler.name)) else: disable_codes = [] for ext in self.extensions: disable_codes.append({ "extensionName": ext.name, "exitCode": ustr(disable_exit_codes.get(ext.name)) }) env[ExtCommandEnvVariable.DisableReturnCodeMultipleExtensions] = json.dumps(disable_codes) try: self.set_operation(WALAEventOperation.Update) man = self.load_manifest() update_cmd = man.get_update_command() self.logger.info("Update extension [{0}]".format(update_cmd)) self.launch_command(update_cmd, cmd_name="update", timeout=900, extension_error_code=ExtensionErrorCodes.PluginUpdateProcessingFailed, env=env, extension=extension) except ExtensionError: # Mark the handler as Failed so we don't clean it up and can keep reporting its status self.set_handler_state(ExtHandlerState.FailedUpgrade) raise def update_with_install(self, uninstall_exit_code=None, extension=None): man = self.load_manifest() if man.is_update_with_install(): self.install(uninstall_exit_code=uninstall_exit_code, extension=extension) else: self.logger.info("UpdateWithInstall not set. " "Skip install during upgrade.") self.set_handler_state(ExtHandlerState.Installed) def _get_last_modified_seq_no_from_config_files(self, extension): """ The sequence number is not guaranteed to always be strictly increasing. 
To ensure we always get the latest one, fetching the sequence number from config file that was last modified (and not necessarily the largest). :return: Last modified Sequence number or -1 on errors """ seq_no = -1 if self.supports_multi_config and (extension is None or extension.name is None): # If no extension name is provided for Multi Config, don't try to parse any sequence number from filesystem return seq_no try: largest_modified_time = 0 conf_dir = self.get_conf_dir() for item in os.listdir(conf_dir): item_path = os.path.join(conf_dir, item) if not os.path.isfile(item_path): continue try: # Settings file for Multi Config look like - ..settings # Settings file for Single Config look like - .settings match = re.search("((?P\\w+)\\.)*(?P\\d+)\\.settings", item_path) if match is not None: ext_name = match.group('ext_name') if self.supports_multi_config and extension.name != ext_name: continue curr_seq_no = int(match.group("seq_no")) curr_modified_time = os.path.getmtime(item_path) if curr_modified_time > largest_modified_time: seq_no = curr_seq_no largest_modified_time = curr_modified_time except (ValueError, IndexError, TypeError): self.logger.verbose("Failed to parse file name: {0}", item) continue except Exception as error: logger.verbose("Error fetching sequence number from config files: {0}".format(ustr(error))) seq_no = -1 return seq_no def get_status_file_path(self, extension=None): """ We should technically only fetch the sequence number from GoalState and not rely on the filesystem at all, But there are certain scenarios where we need to fetch the latest sequence number from the filesystem (For example when we need to report the status for extensions of previous GS if the current GS is Unsupported). Always prioritizing sequence number from extensions but falling back to filesystem :param extension: Extension for which the sequence number is required :return: Sequence number for the extension, Status file path or -1, None """ path = None seq_no = None if extension is not None and extension.sequenceNumber is not None: try: seq_no = int(extension.sequenceNumber) except ValueError: logger.error('Sequence number [{0}] does not appear to be valid'.format(extension.sequenceNumber)) if seq_no is None: # If we're unable to fetch Sequence number from Extension for any reason, # try fetching it from the last modified Settings file. seq_no = self._get_last_modified_seq_no_from_config_files(extension) if seq_no is not None and seq_no > -1: if self.should_perform_multi_config_op(extension) and extension is not None and extension.name is not None: path = os.path.join(self.get_status_dir(), "{0}.{1}.status".format(extension.name, seq_no)) elif not self.supports_multi_config: path = os.path.join(self.get_status_dir(), "{0}.status").format(seq_no) return seq_no if seq_no is not None else -1, path def collect_ext_status(self, ext): self.logger.verbose("Collect extension status for {0}".format(self.get_extension_full_name(ext))) seq_no, ext_status_file = self.get_status_file_path(ext) # We should never try to read any status file if the handler has no settings, returning None in that case if seq_no == -1 or ext is None: return None data = None data_str = None # Extension.name contains the extension name in case of MC and Handler name in case of Single Config. 
ext_status = ExtensionStatus(name=ext.name, seq_no=seq_no) try: data_str, data = self._read_status_file(ext_status_file) except ExtensionStatusError as e: msg = "" ext_status.status = ExtensionStatusValue.error if e.code == ExtensionStatusError.CouldNotReadStatusFile: ext_status.code = ExtensionErrorCodes.PluginUnknownFailure msg = u"We couldn't read any status for {0} extension, for the sequence number {1}. It failed due" \ u" to {2}".format(self.get_full_name(ext), seq_no, ustr(e)) elif e.code == ExtensionStatusError.InvalidJsonFile: ext_status.code = ExtensionErrorCodes.PluginSettingsStatusInvalid msg = u"The status reported by the extension {0}(Sequence number {1}), was in an " \ u"incorrect format and the agent could not parse it correctly. Failed due to {2}" \ .format(self.get_full_name(ext), seq_no, ustr(e)) elif e.code == ExtensionStatusError.FileNotExists: msg = "This status is being reported by the Guest Agent since no status file was " \ "reported by extension {0}: {1}".format(self.get_extension_full_name(ext), ustr(e)) # Reporting a success code and transitioning status to keep in accordance with existing code that # creates default status placeholder file ext_status.code = ExtensionErrorCodes.PluginSuccess ext_status.status = ExtensionStatusValue.transitioning # This log is periodic due to the verbose nature of the status check. Please make sure that the message # constructed above does not change very frequently and includes important info such as sequence number, # extension name to make sure that the log reflects changes in the extension sequence for which the # status is being sent. logger.periodic_warn(logger.EVERY_HALF_HOUR, u"[PERIODIC] " + msg) add_periodic(delta=logger.EVERY_HALF_HOUR, name=self.get_extension_full_name(ext), version=self.ext_handler.version, op=WALAEventOperation.StatusProcessing, is_success=False, message=msg, log_event=False) ext_status.message = msg return ext_status # We did not encounter InvalidJsonFile/CouldNotReadStatusFile and thus the status file was correctly written # and has valid json. try: parse_ext_status(ext_status, data) if len(data_str) > _MAX_STATUS_FILE_SIZE_IN_BYTES: raise ExtensionStatusError(msg="For Extension Handler {0} for the sequence number {1}, the status " "file {2} of size {3} bytes is too big. Max Limit allowed is {4} bytes" .format(self.get_full_name(ext), seq_no, ext_status_file, len(data_str), _MAX_STATUS_FILE_SIZE_IN_BYTES), code=ExtensionStatusError.MaxSizeExceeded) except ExtensionStatusError as e: msg = u"For Extension Handler {0} for the sequence number {1}, the status file {2}. " \ u"Encountered the following error: {3}".format(self.get_full_name(ext), seq_no, ext_status_file, ustr(e)) logger.periodic_warn(logger.EVERY_DAY, u"[PERIODIC] " + msg) add_periodic(delta=logger.EVERY_HALF_HOUR, name=self.get_extension_full_name(ext), version=self.ext_handler.version, op=WALAEventOperation.StatusProcessing, is_success=False, message=msg, log_event=False) if e.code == ExtensionStatusError.MaxSizeExceeded: ext_status.message, field_size = self._truncate_message(ext_status.message, _MAX_STATUS_MESSAGE_LENGTH) ext_status.substatusList = self._process_substatus_list(ext_status.substatusList, field_size) elif e.code == ExtensionStatusError.StatusFileMalformed: ext_status.message = "Could not get a valid status from the extension {0}. 
Encountered the " \ "following error: {1}".format(self.get_full_name(ext), ustr(e)) ext_status.code = ExtensionErrorCodes.PluginSettingsStatusInvalid ext_status.status = ExtensionStatusValue.error return ext_status def get_ext_handling_status(self, ext): seq_no, ext_status_file = self.get_status_file_path(ext) # This is legacy scenario for cases when no extension settings is available if seq_no < 0 or ext_status_file is None: return None # Missing status file is considered a non-terminal state here # so that extension sequencing can wait until it becomes existing if not os.path.exists(ext_status_file): status = ExtensionStatusValue.warning else: ext_status = self.collect_ext_status(ext) status = ext_status.status if ext_status is not None else None return status def is_ext_handling_complete(self, ext): status = self.get_ext_handling_status(ext) # when seq < 0 (i.e. no new user settings), the handling is complete and return None status if status is None: return True, None # If not in terminal state, it is incomplete if status not in _EXTENSION_TERMINAL_STATUSES: return False, status # Extension completed, return its status return True, status def report_error_on_incarnation_change(self, goal_state_changed, log_msg, event_msg, extension=None, op=WALAEventOperation.ReportStatus): # Since this code is called on a loop, logging as a warning only on goal state change, else logging it # as verbose if goal_state_changed: logger.warn(log_msg) add_event(name=self.get_extension_full_name(extension), version=self.ext_handler.version, op=op, message=event_msg, is_success=False, log_event=False) else: logger.verbose(log_msg) def get_extension_handler_statuses(self, handler_status, goal_state_changed): """ Get the list of ExtHandlerStatus objects corresponding to each extension in the Handler. Each object might have its own status for the Extension status but the Handler status would be the same for each extension in a Handle :return: List of ExtHandlerStatus objects for each extension in the Handler """ ext_handler_statuses = [] # TODO Refactor or remove this common code pattern (for each extension subordinate to an ext_handler, do X). for ext in self.extensions: # In MC, for disabled extensions we dont need to report status. Skip reporting if disabled and state == disabled # Extension.state corresponds to the state requested by CRP, self.__get_extension_state() corresponds to the # state of the extension on the VM. 
Skip reporting only if both are Disabled if self.should_perform_multi_config_op(ext) and \ ext.state == ExtensionState.Disabled and self.get_extension_state(ext) == ExtensionState.Disabled: continue # Breaking off extension reporting in 2 parts, one which is Handler dependent and the other that is Extension dependent try: ext_handler_status = ExtHandlerStatus() set_properties("ExtHandlerStatus", ext_handler_status, get_properties(handler_status)) except Exception as error: msg = "Something went wrong when trying to get a copy of the Handler status for {0}".format( self.get_extension_full_name()) self.report_error_on_incarnation_change(goal_state_changed, event_msg=msg, log_msg="{0}.\nStack Trace: {1}".format( msg, textutil.format_exception(error))) # Since this is a Handler level error and we need to do it per extension, breaking here and logging # error since we wont be able to report error anyways and saving it as a handler status (legacy behavior) self.set_handler_status(message=msg, code=-1) break # For the extension dependent stuff, if there's some unhandled error, we will report it back to CRP as an extension error. try: ext_status = self.collect_ext_status(ext) if ext_status is not None: ext_handler_status.extension_status = ext_status ext_handler_statuses.append(ext_handler_status) except ExtensionError as error: msg = "Unknown error when trying to fetch status from extension {0}".format( self.get_extension_full_name(ext)) self.report_error_on_incarnation_change(goal_state_changed, event_msg=msg, log_msg="{0}.\nStack Trace: {1}".format( msg, textutil.format_exception(error)), extension=ext) # Unexpected error, for single config, keep the behavior as is if not self.should_perform_multi_config_op(ext): self.set_handler_status(message=ustr(error), code=error.code) break # For MultiConfig, create a custom ExtensionStatus object with the error details and attach it to the Handler. # This way the error would be reported back to CRP and the failure would be propagated instantly as compared to CRP eventually timing it out. ext_status = ExtensionStatus(name=ext.name, seq_no=ext.sequenceNumber, code=ExtensionErrorCodes.PluginUnknownFailure, status=ExtensionStatusValue.error, message=msg) ext_handler_status.extension_status = ext_status ext_handler_statuses.append(ext_handler_status) return ext_handler_statuses def collect_heartbeat(self): # pylint: disable=R1710 man = self.load_manifest() if not man.is_report_heartbeat(): return heartbeat_file = os.path.join(conf.get_lib_dir(), self.get_heartbeat_file()) if not os.path.isfile(heartbeat_file): raise ExtensionError("Failed to get heart beat file") if not self.is_responsive(heartbeat_file): return { "status": "Unresponsive", "code": -1, "message": "Extension heartbeat is not responsive" } try: heartbeat_json = fileutil.read_file(heartbeat_file) heartbeat = json.loads(heartbeat_json)[0]['heartbeat'] except IOError as e: raise ExtensionError("Failed to get heartbeat file:{0}".format(e)) except (ValueError, KeyError) as e: raise ExtensionError("Malformed heartbeat file: {0}".format(e)) return heartbeat @staticmethod def is_responsive(heartbeat_file): """ Was heartbeat_file updated within the last ten (10) minutes? 
:param heartbeat_file: str :return: bool """ last_update = int(time.time() - os.stat(heartbeat_file).st_mtime) return last_update <= 600 def launch_command(self, cmd, cmd_name=None, timeout=300, extension_error_code=ExtensionErrorCodes.PluginProcessingError, env=None, extension=None): begin_utc = datetime.datetime.utcnow() self.logger.verbose("Launch command: [{0}]", cmd) base_dir = self.get_base_dir() with tempfile.TemporaryFile(dir=base_dir, mode="w+b") as stdout: with tempfile.TemporaryFile(dir=base_dir, mode="w+b") as stderr: if env is None: env = {} # Always add Extension Path and version to the current launch_command (Ask from publishers) env.update({ ExtCommandEnvVariable.ExtensionPath: base_dir, ExtCommandEnvVariable.ExtensionVersion: str(self.ext_handler.version), ExtCommandEnvVariable.WireProtocolAddress: self.protocol.get_endpoint(), # Setting sequence number to 0 incase no settings provided to keep in accordance with the empty # 0.settings file that we create for such extensions. ExtCommandEnvVariable.ExtensionSeqNumber: str( extension.sequenceNumber) if extension is not None else _DEFAULT_SEQ_NO }) if self.should_perform_multi_config_op(extension): env[ExtCommandEnvVariable.ExtensionName] = extension.name supported_features = [] for _, feature in get_agent_supported_features_list_for_extensions().items(): supported_features.append( { "Key": feature.name, "Value": feature.version } ) if supported_features: env[ExtCommandEnvVariable.ExtensionSupportedFeatures] = json.dumps(supported_features) ext_name = self.get_extension_full_name(extension) try: # Some extensions erroneously begin cmd with a slash; don't interpret those # as root-relative. (Issue #1170) command_full_path = os.path.join(base_dir, cmd.lstrip(os.path.sep)) log_msg = "Executing command: {0} with environment variables: {1}".format(command_full_path, json.dumps(env)) self.logger.info(log_msg) self.report_event(name=ext_name, message=log_msg, log_event=False) # Add the os environment variables before executing command env.update(os.environ) process_output = CGroupConfigurator.get_instance().start_extension_command( extension_name=self.get_full_name(extension), command=command_full_path, cmd_name=cmd_name, timeout=timeout, shell=True, cwd=base_dir, env=env, stdout=stdout, stderr=stderr, error_code=extension_error_code) except OSError as e: raise ExtensionError("Failed to launch '{0}': {1}".format(command_full_path, e.strerror), code=extension_error_code) duration = elapsed_milliseconds(begin_utc) log_msg = "Command: {0}\n{1}".format(cmd, "\n".join( [line for line in process_output.split('\n') if line != ""])) self.logger.info(log_msg) self.report_event(name=ext_name, message=log_msg, duration=duration, log_event=False) return process_output def load_manifest(self): man_file = self.get_manifest_file() try: data = json.loads(fileutil.read_file(man_file)) except (IOError, OSError) as e: raise ExtensionError('Failed to load manifest file ({0}): {1}'.format(man_file, e.strerror), code=ExtensionErrorCodes.PluginHandlerManifestNotFound) except ValueError: raise ExtensionError('Malformed manifest file ({0}).'.format(man_file), code=ExtensionErrorCodes.PluginHandlerManifestDeserializationError) return HandlerManifest(data[0]) def update_settings_file(self, settings_file, settings): settings_file = os.path.join(self.get_conf_dir(), settings_file) try: fileutil.write_file(settings_file, settings) except IOError as e: fileutil.clean_ioerror(e, paths=[settings_file]) raise ExtensionError(u"Failed to update settings file", e) def 
update_settings(self, extension): if self.extensions is None or len(self.extensions) == 0 or extension is None: # This is the behavior of waagent 2.0.x # The new agent has to be consistent with the old one. self.logger.info("Extension has no settings, write empty 0.settings") self.update_settings_file("{0}.settings".format(_DEFAULT_SEQ_NO), "") return settings = { 'publicSettings': extension.publicSettings, 'protectedSettings': extension.protectedSettings, 'protectedSettingsCertThumbprint': extension.certificateThumbprint } ext_settings = { "runtimeSettings": [{ "handlerSettings": settings }] } # MultiConfig: change the name to <extName>.<seqNo>.settings for MC and <seqNo>.settings for SC settings_file = "{0}.{1}.settings".format(extension.name, extension.sequenceNumber) if \ self.should_perform_multi_config_op(extension) else "{0}.settings".format(extension.sequenceNumber) self.logger.info("Update settings file: {0}", settings_file) self.update_settings_file(settings_file, json.dumps(ext_settings))
def create_handler_env(self): handler_env = { HandlerEnvironment.logFolder: self.get_log_dir(), HandlerEnvironment.configFolder: self.get_conf_dir(), HandlerEnvironment.statusFolder: self.get_status_dir(), HandlerEnvironment.heartbeatFile: self.get_heartbeat_file() } if get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).is_supported: handler_env[HandlerEnvironment.eventsFolder] = self.get_extension_events_dir() # For now, keep the preview key to not break extensions that were using the preview. handler_env[HandlerEnvironment.eventsFolder_preview] = self.get_extension_events_dir() env = [{ HandlerEnvironment.name: self.ext_handler.name, HandlerEnvironment.version: HandlerEnvironment.schemaVersion, HandlerEnvironment.handlerEnvironment: handler_env }] try: fileutil.write_file(self.get_env_file(), json.dumps(env)) except IOError as e: fileutil.clean_ioerror(e, paths=[self.get_base_dir(), self.pkg_file]) raise ExtensionDownloadError(u"Failed to save handler environment", e)
def __get_handler_state_file_name(self, extension=None): if self.should_perform_multi_config_op(extension): return "{0}.HandlerState".format(extension.name) return "HandlerState" def set_handler_state(self, handler_state): self.__set_state(name=self.__get_handler_state_file_name(), value=handler_state) def get_handler_state(self): return self.__get_state(name=self.__get_handler_state_file_name(), default=ExtHandlerState.NotInstalled) def __set_extension_state(self, extension, extension_state): self.__set_state(name=self.__get_handler_state_file_name(extension), value=extension_state) def get_extension_state(self, extension=None): return self.__get_state(name=self.__get_handler_state_file_name(extension), default=ExtensionState.Disabled) def __set_state(self, name, value): state_dir = self.get_conf_dir() state_file = os.path.join(state_dir, name) try: if not os.path.exists(state_dir): fileutil.mkdir(state_dir, mode=0o700) fileutil.write_file(state_file, value) except IOError as e: fileutil.clean_ioerror(e, paths=[state_file]) self.logger.error("Failed to set state: {0}", e) def __get_state(self, name, default=None): state_dir = self.get_conf_dir() state_file = os.path.join(state_dir, name) if not os.path.isfile(state_file): return default try: return fileutil.read_file(state_file) except IOError as e: self.logger.error("Failed to get state: {0}", e) return default
def __remove_extension_state_files(self, extension): self.logger.info("Removing state files for disabled extension: {0}".format(extension.name)) try: # MultiConfig: Remove all config/<extName>.*.settings, status/<extName>.*.status and config/<extName>.HandlerState files files_to_delete = [ os.path.join(self.get_conf_dir(), "{0}.*.settings".format(extension.name)), os.path.join(self.get_status_dir(), "{0}.*.status".format(extension.name)), os.path.join(self.get_conf_dir(), self.__get_handler_state_file_name(extension)) ] fileutil.rm_files(*files_to_delete) except Exception as error: extension_name = self.get_extension_full_name(extension) message = "Failed to remove extension state files for {0}: {1}".format(extension_name, ustr(error)) self.report_event(name=extension_name, message=message, is_success=False, log_event=False) self.logger.warn(message)
def set_handler_status(self, status=ExtHandlerStatusValue.not_ready, message="", code=0): state_dir = self.get_conf_dir() handler_status = ExtHandlerStatus() handler_status.name = self.ext_handler.name handler_status.version = str(self.ext_handler.version) handler_status.message = message handler_status.code = code handler_status.status = status handler_status.supports_multi_config = self.ext_handler.supports_multi_config status_file = os.path.join(state_dir, "HandlerStatus") try: handler_status_json = json.dumps(get_properties(handler_status)) if handler_status_json is not None: if not os.path.exists(state_dir): fileutil.mkdir(state_dir, mode=0o700) fileutil.write_file(status_file, handler_status_json) else: self.logger.error("Failed to create JSON document of handler status for {0} version {1}".format( self.ext_handler.name, self.ext_handler.version)) except (IOError, ValueError, ProtocolError) as error: fileutil.clean_ioerror(error, paths=[status_file]) self.logger.error("Failed to save handler status: {0}", textutil.format_exception(error))
def get_handler_status(self): state_dir = self.get_conf_dir() status_file = os.path.join(state_dir, "HandlerStatus") if not os.path.isfile(status_file): return None handler_status_contents = "" try: handler_status_contents = fileutil.read_file(status_file) data = json.loads(handler_status_contents) handler_status = ExtHandlerStatus() set_properties("ExtHandlerStatus", handler_status, data) return handler_status except (IOError, ValueError) as error: self.logger.error("Failed to get handler status: {0}", error) except Exception as error: error_msg = "Failed to get handler status message: {0}.\n Contents of file: {1}".format( ustr(error), handler_status_contents).replace('"', '\'') add_periodic( delta=logger.EVERY_HOUR, name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.ExtensionProcessing, is_success=False, message=error_msg) raise return None
def get_extension_package_zipfile_name(self): return "{0}__{1}{2}".format(self.ext_handler.name, self.ext_handler.version, HANDLER_PKG_EXT) def get_full_name(self, extension=None): """ :return: <HandlerName>-<HandlerVersion> if extension is None or Handler does not support Multi Config, else return <HandlerName>.<ExtensionName>-<HandlerVersion> """ return "{0}-{1}".format(self.get_extension_full_name(extension), self.ext_handler.version) def get_base_dir(self): return os.path.join(conf.get_lib_dir(), self.get_full_name()) def get_status_dir(self): return os.path.join(self.get_base_dir(), "status") def get_conf_dir(self): return os.path.join(self.get_base_dir(), 'config') def get_extension_events_dir(self): return os.path.join(self.get_log_dir(), EVENTS_DIRECTORY) def get_heartbeat_file(self): return os.path.join(self.get_base_dir(), 'heartbeat.log') def get_manifest_file(self): return os.path.join(self.get_base_dir(), 'HandlerManifest.json') def get_env_file(self): return
os.path.join(self.get_base_dir(), HandlerEnvironment.fileName) def get_log_dir(self): return os.path.join(conf.get_ext_log_dir(), self.ext_handler.name) @staticmethod def is_azuremonitorlinuxagent(extension_name): cgroup_monitor_extension_name = conf.get_cgroup_monitor_extension_name() if re.match(r"\A" + cgroup_monitor_extension_name, extension_name) is not None\ and datetime.datetime.utcnow() < datetime.datetime.strptime(conf.get_cgroup_monitor_expiry_time(), "%Y-%m-%d"): return True return False @staticmethod def _read_status_file(ext_status_file): err_count = 0 while True: try: return ExtHandlerInstance._read_and_parse_json_status_file(ext_status_file) except Exception: err_count += 1 if err_count >= _NUM_OF_STATUS_FILE_RETRIES: raise time.sleep(_STATUS_FILE_RETRY_DELAY) @staticmethod def _read_and_parse_json_status_file(ext_status_file): if not os.path.exists(ext_status_file): raise ExtensionStatusError(msg="Status file {0} does not exist".format(ext_status_file), code=ExtensionStatusError.FileNotExists) try: data_str = fileutil.read_file(ext_status_file) except IOError as e: raise ExtensionStatusError(msg=ustr(e), inner=e, code=ExtensionStatusError.CouldNotReadStatusFile) try: data = json.loads(data_str) except (ValueError, TypeError) as e: raise ExtensionStatusError(msg="{0} \n First 2000 Bytes of status file:\n {1}".format(ustr(e), ustr(data_str)[:2000]), inner=e, code=ExtensionStatusError.InvalidJsonFile) return data_str, data def _process_substatus_list(self, substatus_list, current_status_size=0): processed_substatus = [] # Truncating the substatus to reduce the size, and preserve other fields of the text for substatus in substatus_list: substatus.name, field_size = self._truncate_message(substatus.name, _MAX_SUBSTATUS_FIELD_LENGTH) current_status_size += field_size substatus.message, field_size = self._truncate_message(substatus.message, _MAX_SUBSTATUS_FIELD_LENGTH) current_status_size += field_size if current_status_size <= _MAX_STATUS_FILE_SIZE_IN_BYTES: processed_substatus.append(substatus) else: break return processed_substatus @staticmethod def _truncate_message(field, truncate_size=_MAX_SUBSTATUS_FIELD_LENGTH): # pylint: disable=R1710 if field is None: # pylint: disable=R1705 return else: truncated_field = field if len(field) < truncate_size else field[:truncate_size] + _TRUNCATED_SUFFIX return truncated_field, len(truncated_field) class HandlerEnvironment(object): # HandlerEnvironment.json schema version schemaVersion = 1.0 fileName = "HandlerEnvironment.json" handlerEnvironment = "handlerEnvironment" logFolder = "logFolder" configFolder = "configFolder" statusFolder = "statusFolder" heartbeatFile = "heartbeatFile" eventsFolder_preview = "eventsFolder_preview" eventsFolder = "eventsFolder" name = "name" version = "version" class HandlerManifest(object): def __init__(self, data): if data is None or data['handlerManifest'] is None: raise ExtensionError('Malformed manifest file.') self.data = data def get_name(self): return self.data["name"] def get_version(self): return self.data["version"] def get_install_command(self): return self.data['handlerManifest']["installCommand"] def get_uninstall_command(self): return self.data['handlerManifest']["uninstallCommand"] def get_update_command(self): return self.data['handlerManifest']["updateCommand"] def get_enable_command(self): return self.data['handlerManifest']["enableCommand"] def get_disable_command(self): return self.data['handlerManifest']["disableCommand"] def is_report_heartbeat(self): return 
self.data['handlerManifest'].get('reportHeartbeat', False) def is_update_with_install(self): update_mode = self.data['handlerManifest'].get('updateMode') if update_mode is None: return True return update_mode.lower() == "updatewithinstall" def is_continue_on_update_failure(self): return self.data['handlerManifest'].get('continueOnUpdateFailure', False) def supports_multiple_extensions(self): return self.data['handlerManifest'].get('supportsMultipleExtensions', False) def get_resource_limits(self, extension_name, str_version): """ Placeholder values for testing and monitoring the monitor extension resource usage. This is not effective after nov 30th. """ if ExtHandlerInstance.is_azuremonitorlinuxagent(extension_name): if LooseVersion(str_version) < LooseVersion("1.12"): test_man = { "resourceLimits": { "services": [ { "name": "mdsd.service" } ] } } return ResourceLimits(test_man.get('resourceLimits', None)) else: test_man = { "resourceLimits": { "services": [ { "name": "azuremonitoragent.service" } ] } } return ResourceLimits(test_man.get('resourceLimits', None)) return ResourceLimits(self.data.get('resourceLimits', None)) class ResourceLimits(object): def __init__(self, data): self.data = data def get_extension_slice_cpu_quota(self): if self.data is not None: return self.data.get('cpuQuotaPercentage', None) return None def get_extension_slice_memory_quota(self): if self.data is not None: return self.data.get('memoryQuotaInMB', None) return None def get_service_list(self): if self.data is not None: return self.data.get('services', None) return None class ExtensionStatusError(ExtensionError): """ When extension failed to provide a valid status file """ CouldNotReadStatusFile = 1 InvalidJsonFile = 2 StatusFileMalformed = 3 MaxSizeExceeded = 4 FileNotExists = 5 def __init__(self, msg=None, inner=None, code=-1): # pylint: disable=W0235 super(ExtensionStatusError, self).__init__(msg, inner, code) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/ga_version_updater.py000066400000000000000000000175311462617747000256050ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ import glob import os import shutil from azurelinuxagent.common import conf, logger from azurelinuxagent.common.exception import AgentUpdateError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import AGENT_NAME, AGENT_DIR_PATTERN, CURRENT_VERSION from azurelinuxagent.ga.guestagent import GuestAgent, AGENT_MANIFEST_FILE
class GAVersionUpdater(object): def __init__(self, gs_id): self._gs_id = gs_id self._version = FlexibleVersion("0.0.0.0") # Initialize to zero and retrieve from the goal state at a later stage self._agent_manifest = None # Initialize to None and fetch from the goal state at a different stage for each updater
def is_update_allowed_this_time(self, ext_gs_updated): """ This function checks if we are allowed to update the agent. @param ext_gs_updated: True if extension goal state updated else False @return: False when we don't allow updates. """ raise NotImplementedError
def is_rsm_update_enabled(self, agent_family, ext_gs_updated): """ Returns True if we need to switch to RSM update from self-update, and vice versa. @param agent_family: agent family @param ext_gs_updated: True if extension goal state updated else False @return: False when the agent needs to stop RSM updates; True when the agent needs to switch to RSM updates """ raise NotImplementedError
def retrieve_agent_version(self, agent_family, goal_state): """ This function fetches the agent version from the goal state for the given family. @param agent_family: agent family @param goal_state: goal state """ raise NotImplementedError
def is_retrieved_version_allowed_to_update(self, agent_family): """ Checks all base conditions to determine whether the new version is allowed to update. @param agent_family: agent family @return: True if allowed to update else False """ raise NotImplementedError
def log_new_agent_update_message(self): """ This function logs the update message after we have checked that the agent is allowed to update. """ raise NotImplementedError
def proceed_with_update(self): """ Performs the upgrade/downgrade. @raise: AgentUpgradeExitException """ raise NotImplementedError
@property def version(self): """ Return the version """ return self._version def sync_new_gs_id(self, gs_id): """ Update gs_id @param gs_id: goal state id """ self._gs_id = gs_id
@staticmethod def download_new_agent_pkg(package_to_download, protocol, is_fast_track_goal_state): """ Function downloads the new agent.
@param package_to_download: package to download @param protocol: protocol object @param is_fast_track_goal_state: True if goal state is fast track else False """ agent_name = "{0}-{1}".format(AGENT_NAME, package_to_download.version) agent_dir = os.path.join(conf.get_lib_dir(), agent_name) agent_pkg_path = ".".join((os.path.join(conf.get_lib_dir(), agent_name), "zip")) agent_handler_manifest_file = os.path.join(agent_dir, AGENT_MANIFEST_FILE) if not os.path.exists(agent_dir) or not os.path.isfile(agent_handler_manifest_file): protocol.client.download_zip_package("agent package", package_to_download.uris, agent_pkg_path, agent_dir, use_verify_header=is_fast_track_goal_state) else: logger.info("Agent {0} was previously downloaded - skipping download", agent_name) if not os.path.isfile(agent_handler_manifest_file): try: # Clean up the agent directory if the manifest file is missing logger.info("Agent handler manifest file is missing, cleaning up the agent directory: {0}".format(agent_dir)) if os.path.isdir(agent_dir): shutil.rmtree(agent_dir, ignore_errors=True) except Exception as err: logger.warn("Unable to delete Agent directory: {0}".format(err)) raise AgentUpdateError("Downloaded agent package: {0} is missing agent handler manifest file: {1}".format(agent_name, agent_handler_manifest_file)) def download_and_get_new_agent(self, protocol, agent_family, goal_state): """ Function downloads the new agent and returns the downloaded version. @param protocol: protocol object @param agent_family: agent family @param goal_state: goal state @return: GuestAgent: downloaded agent """ if self._agent_manifest is None: # Fetch agent manifest if it's not already done self._agent_manifest = goal_state.fetch_agent_manifest(agent_family.name, agent_family.uris) package_to_download = self._get_agent_package_to_download(self._agent_manifest, self._version) is_fast_track_goal_state = goal_state.extensions_goal_state.source == GoalStateSource.FastTrack self.download_new_agent_pkg(package_to_download, protocol, is_fast_track_goal_state) agent = GuestAgent.from_agent_package(package_to_download) return agent def purge_extra_agents_from_disk(self): """ Remove the agents from disk except current version and new agent version """ known_agents = [CURRENT_VERSION, self._version] self._purge_unknown_agents_from_disk(known_agents) def _get_agent_package_to_download(self, agent_manifest, version): """ Returns the package of the given Version found in the manifest. 
If not found, returns exception """ for pkg in agent_manifest.pkg_list.versions: if FlexibleVersion(pkg.version) == version: # Found a matching package, only download that one return pkg raise AgentUpdateError("No matching package found in the agent manifest for version: {0} in goal state incarnation: {1}, " "skipping agent update".format(str(version), self._gs_id)) @staticmethod def _purge_unknown_agents_from_disk(known_agents): """ Remove from disk all directories and .zip files of unknown agents """ path = os.path.join(conf.get_lib_dir(), "{0}-*".format(AGENT_NAME)) for agent_path in glob.iglob(path): try: name = fileutil.trim_ext(agent_path, "zip") m = AGENT_DIR_PATTERN.match(name) if m is not None and FlexibleVersion(m.group(1)) not in known_agents: if os.path.isfile(agent_path): logger.info(u"Purging outdated Agent file {0}", agent_path) os.remove(agent_path) else: logger.info(u"Purging outdated Agent directory {0}", agent_path) shutil.rmtree(agent_path) except Exception as e: logger.warn(u"Purging {0} raised exception: {1}", agent_path, ustr(e)) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/guestagent.py000066400000000000000000000274411462617747000240740ustar00rootroot00000000000000import json import os import shutil import time from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils import textutil from azurelinuxagent.common import logger, conf from azurelinuxagent.common.exception import UpdateError from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import AGENT_DIR_PATTERN, AGENT_NAME from azurelinuxagent.ga.exthandlers import HandlerManifest AGENT_ERROR_FILE = "error.json" # File name for agent error record AGENT_MANIFEST_FILE = "HandlerManifest.json" MAX_FAILURE = 3 # Max failure allowed for agent before declare bad agent AGENT_UPDATE_COUNT_FILE = "update_attempt.json" # File for tracking agent update attempt count class GuestAgent(object): def __init__(self, path, pkg): """ If 'path' is given, the object is initialized to the version installed under that path. If 'pkg' is given, the version specified in the package information is downloaded and the object is initialized to that version. 
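Illustrative example (the path and version here are hypothetical, not taken from this module):
    agent = GuestAgent.from_installed_agent("/var/lib/waagent/WALinuxAgent-9.9.9.9")
    # agent.version is FlexibleVersion("9.9.9.9"), parsed from the directory name via AGENT_DIR_PATTERN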
NOTE: Prefer using the from_installed_agent and from_agent_package methods instead of calling __init__ directly """ self.pkg = pkg version = None if path is not None: m = AGENT_DIR_PATTERN.match(path) if m is None: raise UpdateError(u"Illegal agent directory: {0}".format(path)) version = m.group(1) elif self.pkg is not None: version = pkg.version if version is None: raise UpdateError(u"Illegal agent version: {0}".format(version)) self.version = FlexibleVersion(version) location = u"disk" if path is not None else u"package" logger.verbose(u"Loading Agent {0} from {1}", self.name, location) self.error = GuestAgentError(self.get_agent_error_file()) self.error.load() self.update_attempt_data = GuestAgentUpdateAttempt(self.get_agent_update_count_file()) self.update_attempt_data.load()
try: self._ensure_loaded() except Exception as e: # If we're unable to unpack the agent, delete the Agent directory try: if os.path.isdir(self.get_agent_dir()): shutil.rmtree(self.get_agent_dir(), ignore_errors=True) except Exception as err: logger.warn("Unable to delete Agent files: {0}".format(err)) msg = u"Agent {0} install failed with exception:".format( self.name) detailed_msg = '{0} {1}'.format(msg, textutil.format_exception(e)) add_event( AGENT_NAME, version=self.version, op=WALAEventOperation.Install, is_success=False, message=detailed_msg)
@staticmethod def from_installed_agent(path): """ Creates an instance of GuestAgent using the agent installed in the given 'path'. """ return GuestAgent(path, None)
@staticmethod def from_agent_package(package): """ Creates an instance of GuestAgent using the information provided in the 'package'; if that version of the agent is not installed, it installs it. """ return GuestAgent(None, package)
@property def name(self): return "{0}-{1}".format(AGENT_NAME, self.version) def get_agent_cmd(self): return self.manifest.get_enable_command() def get_agent_dir(self): return os.path.join(conf.get_lib_dir(), self.name) def get_agent_error_file(self): return os.path.join(conf.get_lib_dir(), self.name, AGENT_ERROR_FILE) def get_agent_update_count_file(self): return os.path.join(conf.get_lib_dir(), self.name, AGENT_UPDATE_COUNT_FILE) def get_agent_manifest_path(self): return os.path.join(self.get_agent_dir(), AGENT_MANIFEST_FILE) def get_agent_pkg_path(self): return ".".join((os.path.join(conf.get_lib_dir(), self.name), "zip")) def clear_error(self): self.error.clear() self.error.save()
@property def is_available(self): return self.is_downloaded and not self.is_blacklisted @property def is_blacklisted(self): return self.error is not None and self.error.is_blacklisted @property def is_downloaded(self): return self.is_blacklisted or \ os.path.isfile(self.get_agent_manifest_path())
def mark_failure(self, is_fatal=False, reason=''): try: if not os.path.isdir(self.get_agent_dir()): os.makedirs(self.get_agent_dir()) self.error.mark_failure(is_fatal=is_fatal, reason=reason) self.error.save() if self.error.is_blacklisted: msg = u"Agent {0} is permanently blacklisted".format(self.name) logger.warn(msg) add_event(op=WALAEventOperation.AgentBlacklisted, is_success=False, message=msg, log_event=False, version=self.version) except Exception as e: logger.warn(u"Agent {0} failed recording error state: {1}", self.name, ustr(e))
def inc_update_attempt_count(self): try: self.update_attempt_data.inc_count() self.update_attempt_data.save() except Exception as e: logger.warn(u"Agent {0} failed recording update attempt: {1}", self.name, ustr(e)) def get_update_attempt_count(self): return
self.update_attempt_data.count def _ensure_loaded(self): self._load_manifest() self._load_error() def _load_error(self): try: self.error = GuestAgentError(self.get_agent_error_file()) self.error.load() logger.verbose(u"Agent {0} error state: {1}", self.name, ustr(self.error)) except Exception as e: logger.warn(u"Agent {0} failed loading error state: {1}", self.name, ustr(e)) def _load_manifest(self): path = self.get_agent_manifest_path() if not os.path.isfile(path): msg = u"Agent {0} is missing the {1} file".format(self.name, AGENT_MANIFEST_FILE) raise UpdateError(msg) with open(path, "r") as manifest_file: try: manifests = json.load(manifest_file) except Exception as e: msg = u"Agent {0} has a malformed {1} ({2})".format(self.name, AGENT_MANIFEST_FILE, ustr(e)) raise UpdateError(msg) if type(manifests) is list: if len(manifests) <= 0: msg = u"Agent {0} has an empty {1}".format(self.name, AGENT_MANIFEST_FILE) raise UpdateError(msg) manifest = manifests[0] else: manifest = manifests try: self.manifest = HandlerManifest(manifest) # pylint: disable=W0201 if len(self.manifest.get_enable_command()) <= 0: raise Exception(u"Manifest is missing the enable command") except Exception as e: msg = u"Agent {0} has an illegal {1}: {2}".format( self.name, AGENT_MANIFEST_FILE, ustr(e)) raise UpdateError(msg) logger.verbose( u"Agent {0} loaded manifest from {1}", self.name, self.get_agent_manifest_path()) logger.verbose(u"Successfully loaded Agent {0} {1}: {2}", self.name, AGENT_MANIFEST_FILE, ustr(self.manifest.data)) return class GuestAgentError(object): def __init__(self, path): self.last_failure = 0.0 self.was_fatal = False if path is None: raise UpdateError(u"GuestAgentError requires a path") self.path = path self.failure_count = 0 self.reason = '' self.clear() return def mark_failure(self, is_fatal=False, reason=''): self.last_failure = time.time() self.failure_count += 1 self.was_fatal = is_fatal self.reason = reason return def clear(self): self.last_failure = 0.0 self.failure_count = 0 self.was_fatal = False self.reason = '' return @property def is_blacklisted(self): return self.was_fatal or self.failure_count >= MAX_FAILURE def load(self): if self.path is not None and os.path.isfile(self.path): try: with open(self.path, 'r') as f: self.from_json(json.load(f)) except Exception as error: # The error.json file is only supposed to be written only by the agent. # If for whatever reason the file is malformed, just delete it to reset state of the errors. logger.warn( "Ran into error when trying to load error file {0}, deleting it to clean state. 
Error: {1}".format( self.path, textutil.format_exception(error))) try: os.remove(self.path) except Exception: # We try best case efforts to delete the file, ignore error if we're unable to do so pass return def save(self): if os.path.isdir(os.path.dirname(self.path)): with open(self.path, 'w') as f: json.dump(self.to_json(), f) return def from_json(self, data): self.last_failure = max(self.last_failure, data.get(u"last_failure", 0.0)) self.failure_count = max(self.failure_count, data.get(u"failure_count", 0)) self.was_fatal = self.was_fatal or data.get(u"was_fatal", False) reason = data.get(u"reason", '') self.reason = reason if reason != '' else self.reason return def to_json(self): data = { u"last_failure": self.last_failure, u"failure_count": self.failure_count, u"was_fatal": self.was_fatal, u"reason": ustr(self.reason) } return data def __str__(self): return "Last Failure: {0}, Total Failures: {1}, Fatal: {2}, Reason: {3}".format( self.last_failure, self.failure_count, self.was_fatal, self.reason) class GuestAgentUpdateAttempt(object): def __init__(self, path): self.count = 0 if path is None: raise UpdateError(u"GuestAgentUpdateAttempt requires a path") self.path = path self.clear() def inc_count(self): self.count += 1 def clear(self): self.count = 0 def load(self): if self.path is not None and os.path.isfile(self.path): try: with open(self.path, 'r') as f: self.from_json(json.load(f)) except Exception as error: # The update_attempt.json file is only supposed to be written only by the agent. # If for whatever reason the file is malformed, just delete it to reset state of the errors. logger.warn( "Ran into error when trying to load error file {0}, deleting it to clean state. Error: {1}".format( self.path, textutil.format_exception(error))) try: os.remove(self.path) except Exception: # We try best case efforts to delete the file, ignore error if we're unable to do so pass def save(self): if os.path.isdir(os.path.dirname(self.path)): with open(self.path, 'w') as f: json.dump(self.to_json(), f) def from_json(self, data): self.count = data.get(u"count", 0) def to_json(self): data = { u"count": self.count } return data Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/interfaces.py000066400000000000000000000027561462617747000240530ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # class ThreadHandlerInterface(object): """ Interface for all thread handlers created and maintained by the GuestAgent. """ @staticmethod def get_thread_name(): raise NotImplementedError("get_thread_name() not implemented") def run(self): raise NotImplementedError("run() not implemented") def keep_alive(self): """ Returns true if the thread handler should be restarted when the thread dies and false when it should remain dead. Defaults to True and can be overridden by sub-classes. 
""" return True def is_alive(self): raise NotImplementedError("is_alive() not implemented") def start(self): raise NotImplementedError("start() not implemented") def stop(self): raise NotImplementedError("stop() not implemented")Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/logcollector.py000066400000000000000000000426741462617747000244230ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob import logging import os import subprocess import time import zipfile from datetime import datetime from heapq import heappush, heappop from azurelinuxagent.common.conf import get_lib_dir, get_ext_log_dir, get_agent_log_file from azurelinuxagent.common.event import initialize_event_logger_vminfo_common_parameters from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.logcollector_manifests import MANIFEST_NORMAL, MANIFEST_FULL # Please note: be careful when adding agent dependencies in this module. # This module uses its own logger and logs to its own file, not to the agent log. from azurelinuxagent.common.protocol.goal_state import GoalStateProperties from azurelinuxagent.common.protocol.util import get_protocol_util _EXTENSION_LOG_DIR = get_ext_log_dir() _AGENT_LIB_DIR = get_lib_dir() _AGENT_LOG = get_agent_log_file() _LOG_COLLECTOR_DIR = os.path.join(_AGENT_LIB_DIR, "logcollector") _TRUNCATED_FILES_DIR = os.path.join(_LOG_COLLECTOR_DIR, "truncated") OUTPUT_RESULTS_FILE_PATH = os.path.join(_LOG_COLLECTOR_DIR, "results.txt") COMPRESSED_ARCHIVE_PATH = os.path.join(_LOG_COLLECTOR_DIR, "logs.zip") CGROUPS_UNIT = "collect-logs.scope" GRACEFUL_KILL_ERRCODE = 3 INVALID_CGROUPS_ERRCODE = 2 _MUST_COLLECT_FILES = [ _AGENT_LOG, os.path.join(_AGENT_LIB_DIR, "waagent_status.json"), os.path.join(_AGENT_LIB_DIR, "history", "*.zip"), os.path.join(_EXTENSION_LOG_DIR, "*", "*"), os.path.join(_EXTENSION_LOG_DIR, "*", "*", "*"), "{0}.*".format(_AGENT_LOG) # any additional waagent.log files (e.g., waagent.log.1.gz) ] _FILE_SIZE_LIMIT = 30 * 1024 * 1024 # 30 MB _UNCOMPRESSED_ARCHIVE_SIZE_LIMIT = 150 * 1024 * 1024 # 150 MB _LOGGER = logging.getLogger(__name__) class LogCollector(object): _TRUNCATED_FILE_PREFIX = "truncated_" def __init__(self, is_full_mode=False): self._is_full_mode = is_full_mode self._manifest = MANIFEST_FULL if is_full_mode else MANIFEST_NORMAL self._must_collect_files = self._expand_must_collect_files() self._create_base_dirs() self._set_logger() self._initialize_telemetry() @staticmethod def _mkdir(dirname): if not os.path.isdir(dirname): os.makedirs(dirname) @staticmethod def _reset_file(filepath): with open(filepath, "wb") as out_file: out_file.write("".encode("utf-8")) @staticmethod def _create_base_dirs(): LogCollector._mkdir(_LOG_COLLECTOR_DIR) LogCollector._mkdir(_TRUNCATED_FILES_DIR) @staticmethod def _set_logger(): _f_handler = logging.FileHandler(OUTPUT_RESULTS_FILE_PATH, encoding="utf-8") _f_format = logging.Formatter(fmt='%(asctime)s 
%(levelname)s %(message)s', datefmt=u'%Y-%m-%dT%H:%M:%SZ') _f_format.converter = time.gmtime _f_handler.setFormatter(_f_format) _LOGGER.addHandler(_f_handler) _LOGGER.setLevel(logging.INFO) @staticmethod def _initialize_telemetry(): protocol = get_protocol_util().get_protocol(init_goal_state=False) protocol.client.reset_goal_state(goal_state_properties=GoalStateProperties.RoleConfig | GoalStateProperties.HostingEnv) # Initialize the common parameters for telemetry events initialize_event_logger_vminfo_common_parameters(protocol) @staticmethod def _run_shell_command(command, stdout=subprocess.PIPE, log_output=False): """ Runs a shell command in a subprocess, logs any errors to the log file, enables changing the stdout stream, and logs the output of the command to the log file if indicated by the `log_output` parameter. :param command: Shell command to run :param stdout: Where to write the output of the command :param log_output: If true, log the command output to the log file """ def format_command(cmd): return " ".join(cmd) if isinstance(cmd, list) else command def _encode_command_output(output): return ustr(output, encoding="utf-8", errors="backslashreplace") try: process = subprocess.Popen(command, stdout=stdout, stderr=subprocess.PIPE, shell=False) stdout, stderr = process.communicate() return_code = process.returncode except Exception as e: error_msg = u"Command [{0}] raised unexpected exception: [{1}]".format(format_command(command), ustr(e)) _LOGGER.error(error_msg) return if return_code != 0: encoded_stdout = _encode_command_output(stdout) encoded_stderr = _encode_command_output(stderr) error_msg = "Command: [{0}], return code: [{1}], stdout: [{2}] stderr: [{3}]".format(format_command(command), return_code, encoded_stdout, encoded_stderr) _LOGGER.error(error_msg) return if log_output: msg = "Output of command [{0}]:\n{1}".format(format_command(command), _encode_command_output(stdout)) _LOGGER.info(msg) @staticmethod def _expand_must_collect_files(): # Match the regexes from the MUST_COLLECT_FILES list to existing file paths on disk. manifest = [] for path in _MUST_COLLECT_FILES: manifest.extend(sorted(glob.glob(path))) return manifest def _read_manifest(self): return self._manifest.splitlines() @staticmethod def _process_ll_command(folder): LogCollector._run_shell_command(["ls", "-alF", folder], log_output=True) @staticmethod def _process_echo_command(message): _LOGGER.info(message) @staticmethod def _process_copy_command(path): file_paths = glob.glob(path) for file_path in file_paths: _LOGGER.info(file_path) return file_paths @staticmethod def _convert_file_name_to_archive_name(file_name): # File name is the name of the file on disk, whereas archive name is the name of that same file in the archive. # For non-truncated files: /var/log/waagent.log on disk becomes var/log/waagent.log in archive # (leading separator is removed by the archive). # For truncated files: /var/lib/waagent/logcollector/truncated/var/log/syslog.1 on disk becomes # truncated_var_log_syslog.1 in the archive. 
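        # Illustrative mappings of the rule described above (paths are examples only):
        #     "/var/log/waagent.log"                                  -> "var/log/waagent.log"
        #     os.path.join(_TRUNCATED_FILES_DIR, "var_log_syslog.1")  -> "truncated_var_log_syslog.1"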
if file_name.startswith(_TRUNCATED_FILES_DIR): original_file_path = file_name[len(_TRUNCATED_FILES_DIR):].lstrip(os.path.sep) archive_file_name = LogCollector._TRUNCATED_FILE_PREFIX + original_file_path.replace(os.path.sep, "_") return archive_file_name else: return file_name.lstrip(os.path.sep) @staticmethod def _remove_uncollected_truncated_files(files_to_collect): # After log collection is completed, see if there are any old truncated files which were not collected # and remove them since they probably won't be collected in the future. This is possible when the # original file got deleted, so there is no need to keep its truncated version anymore. truncated_files = os.listdir(_TRUNCATED_FILES_DIR) for file_path in truncated_files: full_path = os.path.join(_TRUNCATED_FILES_DIR, file_path) if full_path not in files_to_collect: if os.path.isfile(full_path): os.remove(full_path) @staticmethod def _expand_parameters(manifest_data): _LOGGER.info("Using %s as $LIB_DIR", _AGENT_LIB_DIR) _LOGGER.info("Using %s as $LOG_DIR", _EXTENSION_LOG_DIR) _LOGGER.info("Using %s as $AGENT_LOG", _AGENT_LOG) new_manifest = [] for line in manifest_data: new_line = line.replace("$LIB_DIR", _AGENT_LIB_DIR) new_line = new_line.replace("$LOG_DIR", _EXTENSION_LOG_DIR) new_line = new_line.replace("$AGENT_LOG", _AGENT_LOG) new_manifest.append(new_line) return new_manifest def _process_manifest_file(self): files_to_collect = set() data = self._read_manifest() manifest_entries = LogCollector._expand_parameters(data) for entry in manifest_entries: # The entry can be one of the four flavours: # 1) ll,/etc/udev/rules.d -- list out contents of the folder and store to results file # 2) echo,### Gathering Configuration Files ### -- print message to results file # 3) copy,/var/lib/waagent/provisioned -- add file to list of files to be collected # 4) diskinfo, -- ignore commands from manifest other than ll, echo, and copy for now contents = entry.split(",") if len(contents) != 2: # If it's not a comment or an empty line, it's a malformed entry if not entry.startswith("#") and len(entry.strip()) > 0: _LOGGER.error("Couldn't parse \"%s\"", entry) continue command, value = contents if command == "ll": self._process_ll_command(value) elif command == "echo": self._process_echo_command(value) elif command == "copy": files_to_collect.update(self._process_copy_command(value)) return files_to_collect @staticmethod def _truncate_large_file(file_path): # Truncate large file to size limit (keep freshest entries of the file), copy file to a temporary location # and update file path in list of files to collect try: # Binary files cannot be truncated, don't include large binary files ext = os.path.splitext(file_path)[1] if ext in [".gz", ".zip", ".xz"]: _LOGGER.warning("Discarding large binary file %s", file_path) return None truncated_file_path = os.path.join(_TRUNCATED_FILES_DIR, file_path.replace(os.path.sep, "_")) if os.path.exists(truncated_file_path): original_file_mtime = os.path.getmtime(file_path) truncated_file_mtime = os.path.getmtime(truncated_file_path) # If the original file hasn't been updated since the truncated file, it means there were no changes # and we don't need to truncate it again. 
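                # Descriptive note: the mtime comparison below is a freshness heuristic; the existing
                # truncated copy is reused only when the original has not been modified since that copy
                # was produced, otherwise a new tail is taken further down.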
if original_file_mtime < truncated_file_mtime: return truncated_file_path # Get the last N bytes of the file with open(truncated_file_path, "w+") as fh: LogCollector._run_shell_command(["tail", "-c", str(_FILE_SIZE_LIMIT), file_path], stdout=fh) return truncated_file_path except OSError as e: _LOGGER.error("Failed to truncate large file: %s", ustr(e)) return None def _get_file_priority(self, file_entry): # The sooner the file appears in the must collect list, the bigger its priority. # Priority is higher the lower the number (0 is highest priority). try: return self._must_collect_files.index(file_entry) except ValueError: # Doesn't matter, file is not in the must collect list, assign a low priority return 999999999 def _get_priority_files_list(self, file_list): # Given a list of files to collect, determine if they show up in the must collect list and build a priority # queue. The queue will determine the order in which the files are collected, highest priority files first. priority_file_queue = [] for file_entry in file_list: priority = self._get_file_priority(file_entry) heappush(priority_file_queue, (priority, file_entry)) return priority_file_queue def _get_final_list_for_archive(self, priority_file_queue): # Given a priority queue of files to collect, add one by one while the archive size is under the size limit. # If a single file is over the file size limit, truncate it before adding it to the archive. _LOGGER.info("### Preparing list of files to add to archive ###") total_uncompressed_size = 0 final_files_to_collect = [] while priority_file_queue: file_path = heappop(priority_file_queue)[1] # (priority, file_path) file_size = min(os.path.getsize(file_path), _FILE_SIZE_LIMIT) if total_uncompressed_size + file_size > _UNCOMPRESSED_ARCHIVE_SIZE_LIMIT: _LOGGER.warning("Archive too big, done with adding files.") break if os.path.getsize(file_path) <= _FILE_SIZE_LIMIT: final_files_to_collect.append(file_path) _LOGGER.info("Adding file %s, size %s b", file_path, file_size) else: truncated_file_path = self._truncate_large_file(file_path) if truncated_file_path: _LOGGER.info("Adding truncated file %s, size %s b", truncated_file_path, file_size) final_files_to_collect.append(truncated_file_path) total_uncompressed_size += file_size _LOGGER.info("Uncompressed archive size is %s b", total_uncompressed_size) return final_files_to_collect def _create_list_of_files_to_collect(self): # The final list of files to be collected by zip is created in three steps: # 1) Parse given manifest file, expanding wildcards and keeping a list of files that exist on disk # 2) Assign those files a priority depending on whether they are in the must collect file list. # 3) In priority order, add files to the final list to be collected, until the size of the archive is under # the size limit. parsed_file_paths = self._process_manifest_file() prioritized_file_paths = self._get_priority_files_list(parsed_file_paths) files_to_collect = self._get_final_list_for_archive(prioritized_file_paths) return files_to_collect def collect_logs_and_get_archive(self): """ Public method that collects necessary log files in a compressed zip archive. :return: Returns the path of the collected compressed archive """ files_to_collect = [] try: # Clear previous run's output and create base directories if they don't exist already. 
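        # Illustrative caller sketch (not part of this module; assumes the process runs as root
        # with the agent configuration already loaded):
        #     collector = LogCollector(is_full_mode=False)
        #     archive_path = collector.collect_logs_and_get_archive()  # returns COMPRESSED_ARCHIVE_PATH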
self._create_base_dirs() LogCollector._reset_file(OUTPUT_RESULTS_FILE_PATH) start_time = datetime.utcnow() _LOGGER.info("Starting log collection at %s", start_time.strftime("%Y-%m-%dT%H:%M:%SZ")) _LOGGER.info("Using log collection mode %s", "full" if self._is_full_mode else "normal") files_to_collect = self._create_list_of_files_to_collect() _LOGGER.info("### Creating compressed archive ###") compressed_archive = None try: compressed_archive = zipfile.ZipFile(COMPRESSED_ARCHIVE_PATH, "w", compression=zipfile.ZIP_DEFLATED) max_errors = 8 error_count = 0 for file_to_collect in files_to_collect: try: archive_file_name = LogCollector._convert_file_name_to_archive_name(file_to_collect) compressed_archive.write(file_to_collect.encode("utf-8"), arcname=archive_file_name) except Exception as e: error_count += 1 if error_count >= max_errors: raise Exception("Too many errors, giving up. Last error: {0}".format(ustr(e))) else: _LOGGER.warning("Failed to add file %s to the archive: %s", file_to_collect, ustr(e)) compressed_archive_size = os.path.getsize(COMPRESSED_ARCHIVE_PATH) _LOGGER.info("Successfully compressed files. Compressed archive size is %s b", compressed_archive_size) end_time = datetime.utcnow() duration = end_time - start_time elapsed_ms = int(((duration.days * 24 * 60 * 60 + duration.seconds) * 1000) + (duration.microseconds / 1000.0)) _LOGGER.info("Finishing log collection at %s", end_time.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")) _LOGGER.info("Elapsed time: %s ms", elapsed_ms) compressed_archive.write(OUTPUT_RESULTS_FILE_PATH.encode("utf-8"), arcname="results.txt") finally: if compressed_archive is not None: compressed_archive.close() return COMPRESSED_ARCHIVE_PATH except Exception as e: msg = "Failed to collect logs: {0}".format(ustr(e)) _LOGGER.error(msg) raise finally: self._remove_uncollected_truncated_files(files_to_collect) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/logcollector_manifests.py000066400000000000000000000063521462617747000264650ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # MANIFEST_NORMAL = """echo,### Probing Directories ### ll,/var/log ll,$LIB_DIR echo,### Gathering Configuration Files ### copy,/etc/*-release copy,/etc/HOSTNAME copy,/etc/hostname copy,/etc/waagent.conf echo, echo,### Gathering Log Files ### copy,$AGENT_LOG* copy,/var/log/dmesg* copy,/var/log/syslog* copy,/var/log/auth* copy,$LOG_DIR/*/* copy,$LOG_DIR/*/*/* copy,$LOG_DIR/custom-script/handler.log echo, echo,### Gathering Extension Files ### copy,$LIB_DIR/ovf-env.xml copy,$LIB_DIR/waagent_status.json copy,$LIB_DIR/*/status/*.status copy,$LIB_DIR/*/config/*.settings copy,$LIB_DIR/*/config/HandlerState copy,$LIB_DIR/*/config/HandlerStatus copy,$LIB_DIR/error.json copy,$LIB_DIR/history/*.zip echo, """ MANIFEST_FULL = """echo,### Probing Directories ### ll,/var/log ll,$LIB_DIR ll,/etc/udev/rules.d echo,### Gathering Configuration Files ### copy,$LIB_DIR/provisioned copy,/etc/fstab copy,/etc/ssh/sshd_config copy,/boot/grub*/grub.c* copy,/boot/grub*/menu.lst copy,/etc/*-release copy,/etc/HOSTNAME copy,/etc/hostname copy,/etc/network/interfaces copy,/etc/network/interfaces.d/*.cfg copy,/etc/netplan/50-cloud-init.yaml copy,/etc/nsswitch.conf copy,/etc/resolv.conf copy,/run/systemd/resolve/stub-resolv.conf copy,/run/resolvconf/resolv.conf copy,/etc/sysconfig/iptables copy,/etc/sysconfig/network copy,/etc/sysconfig/network/ifcfg-eth* copy,/etc/sysconfig/network/routes copy,/etc/sysconfig/network-scripts/ifcfg-eth* copy,/etc/sysconfig/network-scripts/route-eth* copy,/etc/sysconfig/SuSEfirewall2 copy,/etc/ufw/ufw.conf copy,/etc/waagent.conf copy,/var/lib/dhcp/dhclient.eth0.leases copy,/var/lib/dhclient/dhclient-eth0.leases copy,/var/lib/wicked/lease-eth0-dhcp-ipv4.xml copy,/run/systemd/netif/leases/2 echo, echo,### Gathering Log Files ### copy,$AGENT_LOG* copy,/var/log/syslog* copy,/var/log/rsyslog* copy,/var/log/messages* copy,/var/log/kern* copy,/var/log/dmesg* copy,/var/log/dpkg* copy,/var/log/yum* copy,/var/log/cloud-init* copy,/var/log/boot* copy,/var/log/auth* copy,/var/log/secure* copy,$LOG_DIR/*/* copy,$LOG_DIR/*/*/* copy,$LOG_DIR/custom-script/handler.log copy,$LOG_DIR/run-command/handler.log echo, echo,### Gathering Extension Files ### copy,$LIB_DIR/ovf-env.xml copy,$LIB_DIR/*/status/*.status copy,$LIB_DIR/*/config/*.settings copy,$LIB_DIR/*/config/HandlerState copy,$LIB_DIR/*/config/HandlerStatus copy,$LIB_DIR/SharedConfig.xml copy,$LIB_DIR/ManagedIdentity-*.json copy,$LIB_DIR/*/error.json copy,$LIB_DIR/waagent_status.json copy,$LIB_DIR/history/*.zip echo, echo,### Gathering Disk Info ### diskinfo, echo,### Gathering Guest ProxyAgent Log Files ### copy,/var/log/azure-proxy-agent/* echo, """ Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/monitor.py000066400000000000000000000314511462617747000234110ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import datetime import os import threading import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.networkutil as networkutil from azurelinuxagent.ga.cgroup import MetricValue, MetricsCategory, MetricsCounter from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.common.errorstate import ErrorState from azurelinuxagent.common.event import add_event, WALAEventOperation, report_metric from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.interfaces import ThreadHandlerInterface from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.protocol.healthservice import HealthService from azurelinuxagent.common.protocol.imds import get_imds_client from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.utils.restutil import IOErrorCounter from azurelinuxagent.common.utils.textutil import hash_strings from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION from azurelinuxagent.ga.periodic_operation import PeriodicOperation def get_monitor_handler(): return MonitorHandler() class PollResourceUsage(PeriodicOperation): """ Periodic operation to poll the tracked cgroups for resource usage data. It also checks whether there are processes in the agent's cgroup that should not be there. """ def __init__(self): super(PollResourceUsage, self).__init__(conf.get_cgroup_check_period()) self.__log_metrics = conf.get_cgroup_log_metrics() self.__periodic_metrics = {} def _operation(self): tracked_metrics = CGroupsTelemetry.poll_all_tracked() for metric in tracked_metrics: key = metric.category + metric.counter + metric.instance if key not in self.__periodic_metrics or (self.__periodic_metrics[key] + metric.report_period) <= datetime.datetime.now(): report_metric(metric.category, metric.counter, metric.instance, metric.value, log_event=self.__log_metrics) self.__periodic_metrics[key] = datetime.datetime.now() CGroupConfigurator.get_instance().check_cgroups(tracked_metrics) class PollSystemWideResourceUsage(PeriodicOperation): def __init__(self): super(PollSystemWideResourceUsage, self).__init__(datetime.timedelta(hours=1)) self.__log_metrics = conf.get_cgroup_log_metrics() self.osutil = get_osutil() def poll_system_memory_metrics(self): used_mem, available_mem = self.osutil.get_used_and_available_system_memory() return [ MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.USED_MEM, "", used_mem), MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.AVAILABLE_MEM, "", available_mem) ] def _operation(self): metrics = self.poll_system_memory_metrics() for metric in metrics: report_metric(metric.category, metric.counter, metric.instance, metric.value, log_event=self.__log_metrics) class ResetPeriodicLogMessages(PeriodicOperation): """ Periodic operation to clean up the hash-tables maintained by the loggers. 
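Clearing these hash tables every 12 hours (the period used below) allows messages that were
de-duplicated as periodic to be emitted again.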
For reference, please check azurelinuxagent.common.logger.Logger and azurelinuxagent.common.event.EventLogger classes """ def __init__(self): super(ResetPeriodicLogMessages, self).__init__(datetime.timedelta(hours=12)) def _operation(self): logger.reset_periodic()
class ReportNetworkErrors(PeriodicOperation): def __init__(self): super(ReportNetworkErrors, self).__init__(datetime.timedelta(minutes=30)) def _operation(self): io_errors = IOErrorCounter.get_and_reset() hostplugin_errors = io_errors.get("hostplugin") protocol_errors = io_errors.get("protocol") other_errors = io_errors.get("other") if hostplugin_errors > 0 or protocol_errors > 0 or other_errors > 0: msg = "hostplugin:{0};protocol:{1};other:{2}".format(hostplugin_errors, protocol_errors, other_errors) add_event(op=WALAEventOperation.HttpErrors, message=msg)
class ReportNetworkConfigurationChanges(PeriodicOperation): """ Periodic operation to check and log changes in network configuration. """ def __init__(self): super(ReportNetworkConfigurationChanges, self).__init__(datetime.timedelta(minutes=1)) self.osutil = get_osutil() self.last_route_table_hash = b'' self.last_nic_state = {} def log_network_configuration(self): try: route_file = '/proc/net/route' if os.path.exists(route_file): lines = [] with open(route_file) as file_object: for line in file_object: lines.append(line) if len(lines) >= 100: lines.append("<TRUNCATED>") break [...] if datetime.datetime.utcnow() >= self._last_warning_time + self._LOG_WARNING_PERIOD: logger.warn(warning) self._last_warning_time = datetime.datetime.utcnow() self._last_warning = warning
def next_run_time(self): return self._next_run_time def _operation(self): """ Derived classes must override this with the definition of the operation they need to perform """ raise NotImplementedError()
@staticmethod def sleep_until_next_operation(operations): """ Takes a list of operations, finds the operation that should be executed next (that with the closest next_run_time) and sleeps until it is time to execute that operation. """ next_operation_time = min([op.next_run_time() for op in operations]) sleep_timedelta = next_operation_time - datetime.datetime.utcnow() # timedelta.total_seconds() is not available on Python 2.6, do the computation manually sleep_seconds = ((sleep_timedelta.days * 24 * 3600 + sleep_timedelta.seconds) * 10.0 ** 6 + sleep_timedelta.microseconds) / 10.0 ** 6 if sleep_seconds > 0: time.sleep(sleep_seconds)
Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/persist_firewall_rules.py000066400000000000000000000377421462617747000265170ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # import os import sys import azurelinuxagent.common.conf as conf from azurelinuxagent.common import logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil, systemd from azurelinuxagent.common.utils import shellutil, fileutil, textutil from azurelinuxagent.common.utils.networkutil import AddFirewallRules from azurelinuxagent.common.utils.shellutil import CommandError
class PersistFirewallRulesHandler(object): __SERVICE_FILE_CONTENT = """ # This unit file (Version={version}) was created by the Azure VM Agent. # Do not edit. [Unit] Description=Setup network rules for WALinuxAgent Before=network-pre.target Wants=network-pre.target DefaultDependencies=no ConditionPathExists={binary_path} [Service] Type=oneshot ExecStart={py_path} {binary_path} RemainAfterExit=yes [Install] WantedBy=network.target """
__BINARY_CONTENTS = """ # This python file was created by the Azure VM Agent. Please do not edit. import os if __name__ == '__main__': if os.path.exists("{egg_path}"): os.system("{py_path} {egg_path} --setup-firewall --dst_ip={wire_ip} --uid={user_id} {wait}") else: print("{egg_path} file not found, skipping execution of firewall setup for this boot") """
_AGENT_NETWORK_SETUP_NAME_FORMAT = "{0}-network-setup.service" BINARY_FILE_NAME = "waagent-network-setup.py" _FIREWALLD_RUNNING_CMD = ["firewall-cmd", "--state"] # The current version of the unit file; Update it whenever the unit file is modified to ensure Agent can dynamically # modify the unit file on VM too _UNIT_VERSION = "1.3"
@staticmethod def get_service_file_path(): osutil = get_osutil() service_name = PersistFirewallRulesHandler._AGENT_NETWORK_SETUP_NAME_FORMAT.format(osutil.get_service_name()) return os.path.join(osutil.get_systemd_unit_file_install_path(), service_name)
def __init__(self, dst_ip, uid): """ This class deals with ensuring that Firewall rules are persisted over system reboots. It first tries to use firewalld.service, if present, as it already has provisions for persistent rules. If not, it then creates a new agent-network-setup.service file and copies it over to osutil.get_systemd_unit_file_install_path() dynamically. On top of that, on every service restart it ensures that the WireIP is overwritten and the new IP is blocked as well.
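Illustrative construction (values are examples only; dst_ip is typically the WireServer address
and uid the agent user's id):
    PersistFirewallRulesHandler(dst_ip="168.63.129.16", uid=0).setup()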
""" osutil = get_osutil() self._network_setup_service_name = self._AGENT_NETWORK_SETUP_NAME_FORMAT.format(osutil.get_service_name()) self._is_systemd = systemd.is_systemd() self._systemd_file_path = osutil.get_systemd_unit_file_install_path() self._dst_ip = dst_ip self._uid = uid self._wait = osutil.get_firewall_will_wait() # The custom service will try to call the current agent executable to setup the firewall rules self._current_agent_executable_path = os.path.join(os.getcwd(), sys.argv[0]) @staticmethod def _is_firewall_service_running(): # Check if firewall-cmd can connect to the daemon # https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/sec-Check_if_firewalld_is_running.html # Eg: firewall-cmd --state # > running firewalld_state = PersistFirewallRulesHandler._FIREWALLD_RUNNING_CMD try: return shellutil.run_command(firewalld_state).rstrip() == "running" except Exception as error: logger.verbose("{0} command failed: {1}".format(' '.join(firewalld_state), ustr(error))) return False def setup(self): if not systemd.is_systemd(): logger.warn("Did not detect Systemd, unable to set {0}".format(self._network_setup_service_name)) return if self._is_firewall_service_running(): logger.info("Firewalld.service present on the VM, setting up permanent rules on the VM") # In case of a failure, this would throw. In such a case, we don't need to try to setup our custom service # because on system reboot, all iptable rules are reset by firewalld.service so it would be a no-op. self._setup_permanent_firewalld_rules() # Remove custom service if exists to avoid problems with firewalld try: fileutil.rm_files(*[self.get_service_file_path(), os.path.join(conf.get_lib_dir(), self.BINARY_FILE_NAME)]) except Exception as error: logger.info( "Unable to delete existing service {0}: {1}".format(self._network_setup_service_name, ustr(error))) return logger.info( "Firewalld service not running/unavailable, trying to set up {0}".format(self._network_setup_service_name)) self._setup_network_setup_service() def __verify_firewall_rules_enabled(self): # Check if firewall-rules have already been enabled # This function would also return False if the dest-ip is changed. So no need to check separately for that try: AddFirewallRules.check_firewalld_rule_applied(self._dst_ip, self._uid) except Exception as error: logger.verbose( "Check if Firewall rules already applied using firewalld.service failed: {0}".format(ustr(error))) return False return True def __remove_firewalld_rules(self): try: AddFirewallRules.remove_firewalld_rules(self._dst_ip, self._uid) except Exception as error: logger.warn( "failed to remove rule using firewalld.service: {0}".format(ustr(error))) def _setup_permanent_firewalld_rules(self): if self.__verify_firewall_rules_enabled(): logger.info("Firewall rules already set. No change needed.") return logger.info("Firewall rules not added yet, adding them now using firewalld.service") # Remove first if partial list present self.__remove_firewalld_rules() # Add rules if not already set AddFirewallRules.add_firewalld_rules(self._dst_ip, self._uid) logger.info("Successfully added the firewall commands using firewalld.service") def __verify_network_setup_service_enabled(self): # Check if the custom service has already been enabled cmd = ["systemctl", "is-enabled", self._network_setup_service_name] try: return shellutil.run_command(cmd).rstrip() == "enabled" except CommandError as error: msg = "{0} not enabled. 
Command: {1}, ExitCode: {2}\nStdout: {3}\nStderr: {4}".format( self._network_setup_service_name, ' '.join(cmd), error.returncode, error.stdout, error.stderr) except Exception as error: msg = "Ran into error, {0} not enabled. Error: {1}".format(self._network_setup_service_name, ustr(error)) logger.verbose(msg) return False def _setup_network_setup_service(self): # Even if the service is enabled, we need to overwrite the binary file with the current IP in case it changed. # This is to handle the case where the WireIP can change midway on service restarts. # Additionally, in case of auto-update this also updates the location of the new EGG file, ensuring that # the service is always run from the latest agent. self.__setup_binary_file() network_service_enabled = self.__verify_network_setup_service_enabled() if network_service_enabled and not self.__unit_file_version_modified(): logger.info("Service: {0} already enabled. No change needed.".format(self._network_setup_service_name)) self.__log_network_setup_service_logs() else: if not network_service_enabled: logger.info("Service: {0} not enabled. Adding it now".format(self._network_setup_service_name)) else: logger.info( "Unit file {0} version modified to {1}, setting it up again".format(self.get_service_file_path(), self._UNIT_VERSION)) # Create the unit file with default values self.__set_service_unit_file() # Reload systemd configurations when we set up the service for the first time to avoid systemctl warnings self.__reload_systemd_conf() logger.info("Successfully added and enabled the {0}".format(self._network_setup_service_name)) def __setup_binary_file(self): binary_file_path = os.path.join(conf.get_lib_dir(), self.BINARY_FILE_NAME) try: fileutil.write_file(binary_file_path, self.__BINARY_CONTENTS.format(egg_path=self._current_agent_executable_path, wire_ip=self._dst_ip, user_id=self._uid, wait=self._wait, py_path=sys.executable)) logger.info("Successfully updated the binary file {0} for firewall setup".format(binary_file_path)) except Exception: logger.warn( "Unable to set up the binary file, removing the service unit file {0} to ensure it is not run on system reboot".format( self.get_service_file_path())) self.__remove_file_without_raising(binary_file_path) self.__remove_file_without_raising(self.get_service_file_path()) raise def __set_service_unit_file(self): service_unit_file = self.get_service_file_path() binary_path = os.path.join(conf.get_lib_dir(), self.BINARY_FILE_NAME) try: fileutil.write_file(service_unit_file, self.__SERVICE_FILE_CONTENT.format(binary_path=binary_path, py_path=sys.executable, version=self._UNIT_VERSION)) fileutil.chmod(service_unit_file, 0o644) # Finally enable the service. This is needed to ensure the service is started on system boot cmd = ["systemctl", "enable", self._network_setup_service_name] try: shellutil.run_command(cmd) except CommandError as error: msg = ustr( "Unable to enable service: {0}; deleting service file: {1}. 
Command: {2}, Exit-code: {3}.\nstdout: {4}\nstderr: {5}").format( self._network_setup_service_name, service_unit_file, ' '.join(cmd), error.returncode, error.stdout, error.stderr) raise Exception(msg) except Exception: self.__remove_file_without_raising(service_unit_file) raise @staticmethod def __remove_file_without_raising(file_path): if os.path.exists(file_path): try: os.remove(file_path) except Exception as error: logger.warn("Unable to delete file: {0}; Error: {1}".format(file_path, ustr(error))) def __verify_network_setup_service_failed(self): # Check if the agent-network-setup.service failed in its last run # Note: # The `systemctl is-failed ` command would return "failed" and ExitCode: 0 if the service was actually # in a failed state. # For the rest of the cases (eg: active, in-active, dead, etc) it would return the state and a non-0 ExitCode. cmd = ["systemctl", "is-failed", self._network_setup_service_name] try: return shellutil.run_command(cmd).rstrip() == "failed" except CommandError as error: msg = "{0} not in a failed state. Command: {1}, ExitCode: {2}\nStdout: {3}\nStderr: {4}".format( self._network_setup_service_name, ' '.join(cmd), error.returncode, error.stdout, error.stderr) except Exception as error: msg = "Ran into error, {0} not failed. Error: {1}".format(self._network_setup_service_name, ustr(error)) logger.verbose(msg) return False def __log_network_setup_service_logs(self): # Get logs from journalctl - https://www.freedesktop.org/software/systemd/man/journalctl.html cmd = ["journalctl", "-u", self._network_setup_service_name, "-b", "--utc"] service_failed = self.__verify_network_setup_service_failed() try: stdout = shellutil.run_command(cmd) msg = ustr("Logs from the {0} since system boot:\n {1}").format(self._network_setup_service_name, stdout) logger.info(msg) except CommandError as error: msg = "Unable to fetch service logs, Command: {0} failed with ExitCode: {1}\nStdout: {2}\nStderr: {3}".format( ' '.join(cmd), error.returncode, error.stdout, error.stderr) logger.warn(msg) except Exception as e: msg = "Ran into unexpected error when getting logs for {0} service. 
Error: {1}".format( self._network_setup_service_name, textutil.format_exception(e)) logger.warn(msg) # Log service status and logs if we can fetch them from journalctl and send it to Kusto, # else just log the error of the failure of fetching logs add_event( op=WALAEventOperation.PersistFirewallRules, is_success=(not service_failed), message=msg, log_event=False) def __reload_systemd_conf(self): try: logger.info("Executing systemctl daemon-reload for setting up {0}".format(self._network_setup_service_name)) shellutil.run_command(["systemctl", "daemon-reload"]) except Exception as exception: logger.warn("Unable to reload systemctl configurations: {0}".format(ustr(exception))) def __get_unit_file_version(self): if not os.path.exists(self.get_service_file_path()): raise OSError("{0} not found".format(self.get_service_file_path())) match = fileutil.findre_in_file(self.get_service_file_path(), line_re="This unit file \\(Version=([\\d.]+)\\) was created by the Azure VM Agent.") if match is None: raise ValueError("Version tag not found in the unit file") return match.group(1).strip() def __unit_file_version_modified(self): """ Check if the unit file version changed from the expected version :return: True if unit file version changed else False """ try: unit_file_version = self.__get_unit_file_version() except Exception as error: logger.info("Unable to determine version of unit file: {0}, overwriting unit file".format(ustr(error))) # Since we can't determine the version, marking the file as modified to overwrite the unit file return True if unit_file_version != self._UNIT_VERSION: logger.info( "Unit file version: {0} does not match with expected version: {1}, overwriting unit file".format( unit_file_version, self._UNIT_VERSION)) return True logger.info( "Unit file version matches with expected version: {0}, not overwriting unit file".format(unit_file_version)) return False Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/remoteaccess.py000066400000000000000000000143411462617747000243760ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import os.path from datetime import datetime, timedelta import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import textutil from azurelinuxagent.common.utils.cryptutil import CryptUtil from azurelinuxagent.common.version import AGENT_NAME, CURRENT_VERSION REMOTE_USR_EXPIRATION_FORMAT = "%a, %d %b %Y %H:%M:%S %Z" DATE_FORMAT = "%Y-%m-%d" TRANSPORT_PRIVATE_CERT = "TransportPrivate.pem" REMOTE_ACCESS_ACCOUNT_COMMENT = "JIT_Account" MAX_TRY_ATTEMPT = 5 FAILED_ATTEMPT_THROTTLE = 1 def get_remote_access_handler(protocol): return RemoteAccessHandler(protocol) class RemoteAccessHandler(object): def __init__(self, protocol): self._os_util = get_osutil() self._protocol = protocol self._cryptUtil = CryptUtil(conf.get_openssl_cmd()) self._remote_access = None self._check_existing_jit_users = True def run(self): try: if self._os_util.jit_enabled: # Handle remote access if any. self._remote_access = self._protocol.client.get_remote_access() self._handle_remote_access() except Exception as e: msg = u"Exception processing goal state for remote access users: {0}".format(textutil.format_exception(e)) add_event(AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.RemoteAccessHandling, is_success=False, message=msg) def _get_existing_jit_users(self): all_users = self._os_util.get_users() return set(u[0] for u in all_users if self._is_jit_user(u[4])) def _handle_remote_access(self): if self._remote_access is not None: logger.info("Processing remote access users in goal state.") self._check_existing_jit_users = True existing_jit_users = self._get_existing_jit_users() goal_state_users = set(u.name for u in self._remote_access.user_list.users) for acc in self._remote_access.user_list.users: try: raw_expiration = acc.expiration account_expiration = datetime.strptime(raw_expiration, REMOTE_USR_EXPIRATION_FORMAT) now = datetime.utcnow() if acc.name not in existing_jit_users and now < account_expiration: self._add_user(acc.name, acc.encrypted_password, account_expiration) elif acc.name in existing_jit_users and now > account_expiration: # user account expired, delete it. logger.info("Remote access user '{0}' expired.", acc.name) self._remove_user(acc.name) except Exception as e: logger.error("Error processing remote access user '{0}' - {1}", acc.name, ustr(e)) for user in existing_jit_users: try: if user not in goal_state_users: # user explicitly removed self._remove_user(user) except Exception as e: logger.error("Error removing remote access user '{0}' - {1}", user, ustr(e)) else: # There are no JIT users in the goal state; that may mean that they were removed or that they # were never added. Enumerating the users on the current vm can be very slow and this path is hit # on each goal state; we use self._check_existing_jit_users to avoid enumerating the users # every single time. 
if self._check_existing_jit_users: logger.info("Looking for existing remote access users.") existing_jit_users = self._get_existing_jit_users() remove_user_errors = False for user in existing_jit_users: try: self._remove_user(user) except Exception as e: logger.error("Error removing remote access user '{0}' - {1}", user, ustr(e)) remove_user_errors = True if not remove_user_errors: self._check_existing_jit_users = False @staticmethod def _is_jit_user(comment): return comment == REMOTE_ACCESS_ACCOUNT_COMMENT def _add_user(self, username, encrypted_password, account_expiration): user_added = False try: expiration_date = (account_expiration + timedelta(days=1)).strftime(DATE_FORMAT) logger.info("Adding remote access user '{0}' with expiration date {1}", username, expiration_date) self._os_util.useradd(username, expiration_date, REMOTE_ACCESS_ACCOUNT_COMMENT) user_added = True logger.info("Adding remote access user '{0}' to sudoers", username) prv_key = os.path.join(conf.get_lib_dir(), TRANSPORT_PRIVATE_CERT) pwd = self._cryptUtil.decrypt_secret(encrypted_password, prv_key) self._os_util.chpasswd(username, pwd, conf.get_password_cryptid(), conf.get_password_crypt_salt_len()) self._os_util.conf_sudoer(username) except Exception: if user_added: self._remove_user(username) raise def _remove_user(self, username): logger.info("Removing remote access user '{0}'", username) self._os_util.del_account(username) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/rsm_version_updater.py000066400000000000000000000151411462617747000260120ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ import glob import os from azurelinuxagent.common import conf, logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import AgentUpgradeExitException, AgentUpdateError from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import CURRENT_VERSION, AGENT_NAME from azurelinuxagent.ga.ga_version_updater import GAVersionUpdater from azurelinuxagent.ga.guestagent import GuestAgent class RSMVersionUpdater(GAVersionUpdater): def __init__(self, gs_id, daemon_version): super(RSMVersionUpdater, self).__init__(gs_id) self._daemon_version = daemon_version @staticmethod def _get_all_agents_on_disk(): path = os.path.join(conf.get_lib_dir(), "{0}-*".format(AGENT_NAME)) return [GuestAgent.from_installed_agent(path=agent_dir) for agent_dir in glob.iglob(path) if os.path.isdir(agent_dir)] def _get_available_agents_on_disk(self): available_agents = [agent for agent in self._get_all_agents_on_disk() if agent.is_available] return sorted(available_agents, key=lambda agent: agent.version, reverse=True) def is_update_allowed_this_time(self, ext_gs_updated): """ An RSM update is allowed only when we have a new goal state """ return ext_gs_updated def is_rsm_update_enabled(self, agent_family, ext_gs_updated): """ Checks if there is a new goal state and decides whether to continue with the RSM update or switch to self-update. First it checks whether the agent supports GA versioning; if not, we return False to switch to self-update. If the VM is enabled for RSM upgrades we continue with the RSM update; otherwise we return False to switch to self-update. If isVersionFromRSM, isVMEnabledForRSMUpgrades, or the version is missing from the goal state, we ignore the update as we consider the goal state invalid. """ if ext_gs_updated: if not conf.get_enable_ga_versioning(): return False if agent_family.is_vm_enabled_for_rsm_upgrades is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing isVMEnabledForRSMUpgrades property. So, skipping agent update".format( self._gs_id)) elif not agent_family.is_vm_enabled_for_rsm_upgrades: return False else: if agent_family.is_version_from_rsm is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing isVersionFromRSM property. So, skipping agent update".format( self._gs_id)) if agent_family.version is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing version property. So, skipping agent update".format( self._gs_id)) return True def retrieve_agent_version(self, agent_family, goal_state): """ Get the agent version from the goal state """ self._version = FlexibleVersion(agent_family.version) def is_retrieved_version_allowed_to_update(self, agent_family): """ Once the version has been retrieved from the goal state, check whether we are allowed to update to it. The update is allowed only if the version came from an RSM request, is not below the daemon version, and is not the same as the current version. """ if not agent_family.is_version_from_rsm or self._version < self._daemon_version or self._version == CURRENT_VERSION: return False return True def log_new_agent_update_message(self): """ This function logs the update message after we have checked that the version is allowed to update. 
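        An example of the emitted message (illustrative version and goal-state id):

            New agent version:9.9.9.9 requested by RSM in Goal state incarnation_2,
            will update the agent before processing the goal state.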
""" msg = "New agent version:{0} requested by RSM in Goal state {1}, will update the agent before processing the goal state.".format( str(self._version), self._gs_id) logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) def proceed_with_update(self): """ upgrade/downgrade to the new version. Raises: AgentUpgradeExitException """ if self._version < CURRENT_VERSION: # In case of a downgrade, we mark the current agent as bad version to avoid starting it back up ever again # (the expectation here being that if we get request to a downgrade, # there's a good reason for not wanting the current version). prefix = "downgrade" try: # We should always have an agent directory for the CURRENT_VERSION agents_on_disk = self._get_available_agents_on_disk() current_agent = next(agent for agent in agents_on_disk if agent.version == CURRENT_VERSION) msg = "Marking the agent {0} as bad version since a downgrade was requested in the GoalState, " \ "suggesting that we really don't want to execute any extensions using this version".format( CURRENT_VERSION) logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) current_agent.mark_failure(is_fatal=True, reason=msg) except StopIteration: logger.warn( "Could not find a matching agent with current version {0} to blacklist, skipping it".format( CURRENT_VERSION)) else: # In case of an upgrade, we don't need to exclude anything as the daemon will automatically # start the next available highest version which would be the target version prefix = "upgrade" raise AgentUpgradeExitException( "Current Agent {0} completed all update checks, exiting current process to {1} to the new Agent version {2}".format(CURRENT_VERSION, prefix, self._version)) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/self_update_version_updater.py000066400000000000000000000202731462617747000275060ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ import datetime from azurelinuxagent.common import conf, logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import AgentUpgradeExitException, AgentUpdateError from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import CURRENT_VERSION from azurelinuxagent.ga.ga_version_updater import GAVersionUpdater class SelfUpdateType(object): """ Enum for the different modes of self-update """ Hotfix = "Hotfix" Regular = "Regular" class SelfUpdateVersionUpdater(GAVersionUpdater): def __init__(self, gs_id): super(SelfUpdateVersionUpdater, self).__init__(gs_id) self._last_attempted_manifest_download_time = datetime.datetime.min self._last_attempted_self_update_time = datetime.datetime.min @staticmethod def _get_largest_version(agent_manifest): """ Get the largest version from the agent manifest """ largest_version = FlexibleVersion("0.0.0.0") for pkg in agent_manifest.pkg_list.versions: pkg_version = FlexibleVersion(pkg.version) if pkg_version > largest_version: largest_version = pkg_version return largest_version @staticmethod def _get_agent_upgrade_type(version): # We follow semantic versioning for the agent: if major.minor.patch is the same, then only the build number has changed. # In this case, we consider it a Hotfix upgrade; otherwise we consider it a Regular upgrade. if version.major == CURRENT_VERSION.major and version.minor == CURRENT_VERSION.minor and version.patch == CURRENT_VERSION.patch: return SelfUpdateType.Hotfix return SelfUpdateType.Regular @staticmethod def _get_next_process_time(last_val, frequency, now): """ Get the next upgrade time """ return now if last_val == datetime.datetime.min else last_val + datetime.timedelta(seconds=frequency) def _is_new_agent_allowed_update(self): """ This method ensures that an update is allowed only once per (Hotfix/Regular) upgrade frequency """ now = datetime.datetime.utcnow() upgrade_type = self._get_agent_upgrade_type(self._version) if upgrade_type == SelfUpdateType.Hotfix: next_update_time = self._get_next_process_time(self._last_attempted_self_update_time, conf.get_self_update_hotfix_frequency(), now) else: next_update_time = self._get_next_process_time(self._last_attempted_self_update_time, conf.get_self_update_regular_frequency(), now) if self._version > CURRENT_VERSION: message = "Self-update discovered new {0} upgrade WALinuxAgent-{1}; will upgrade on or after {2}".format( upgrade_type, str(self._version), next_update_time.strftime(logger.Logger.LogTimeFormatInUTC)) logger.info(message) add_event(op=WALAEventOperation.AgentUpgrade, message=message, log_event=False) if next_update_time <= now: # Update the last upgrade check time even if no new agent is available for upgrade self._last_attempted_self_update_time = now return True return False def _should_agent_attempt_manifest_download(self): """ The agent should attempt to download the manifest only if it has not attempted a download in the last hour. If we allow the download, we update the last attempted manifest download time """ now = datetime.datetime.utcnow() if self._last_attempted_manifest_download_time != datetime.datetime.min: next_attempt_time = self._last_attempted_manifest_download_time + datetime.timedelta( seconds=conf.get_autoupdate_frequency()) else: next_attempt_time = now if next_attempt_time > now: return False self._last_attempted_manifest_download_time = now return True def is_update_allowed_this_time(self, ext_gs_updated): """ Checks whether we are allowed to 
download the manifest, as per the manifest download frequency """ if not self._should_agent_attempt_manifest_download(): return False return True def is_rsm_update_enabled(self, agent_family, ext_gs_updated): """ Checks if there is a new goal state and decides whether to continue with self-update or switch to the RSM update. If the VM is not enabled for RSM upgrades, or the agent does not support GA versioning, we continue with self-update; otherwise we return True to switch to the RSM update. If isVersionFromRSM is missing but isVMEnabledForRSMUpgrades is present in the goal state, we ignore the update as we consider the goal state invalid. """ if ext_gs_updated: if conf.get_enable_ga_versioning() and agent_family.is_vm_enabled_for_rsm_upgrades is not None and agent_family.is_vm_enabled_for_rsm_upgrades: if agent_family.is_version_from_rsm is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing isVersionFromRSM property. So, skipping agent update".format( self._gs_id)) else: if agent_family.version is None: raise AgentUpdateError( "Received invalid goal state:{0}, missing version property. So, skipping agent update".format( self._gs_id)) return True return False def retrieve_agent_version(self, agent_family, goal_state): """ Get the largest version from the agent manifest """ self._agent_manifest = goal_state.fetch_agent_manifest(agent_family.name, agent_family.uris) largest_version = self._get_largest_version(self._agent_manifest) self._version = largest_version def is_retrieved_version_allowed_to_update(self, agent_family): """ Checks that updates are spread out per the upgrade frequency (as specified in conf.get_self_update_hotfix_frequency() or conf.get_self_update_regular_frequency()) and that the retrieved version is above the current version; returns False when the update is not allowed. """ if not self._is_new_agent_allowed_update(): return False if self._version <= CURRENT_VERSION: return False return True def log_new_agent_update_message(self): """ This function logs the update message after we have checked that the version is allowed to update. """ msg = "Self-update is ready to upgrade to the new agent: {0} now, before processing the goal state: {1}".format( str(self._version), self._gs_id) logger.info(msg) add_event(op=WALAEventOperation.AgentUpgrade, message=msg, log_event=False) def proceed_with_update(self): """ Upgrade to the largest version. Downgrade is not supported. Raises: AgentUpgradeExitException """ if self._version > CURRENT_VERSION: # In case of an upgrade, we don't need to exclude anything as the daemon will automatically # start the next available highest version which would be the target version raise AgentUpgradeExitException( "Current Agent {0} completed all update checks, exiting current process to upgrade to the new Agent version {1}".format(CURRENT_VERSION, self._version)) Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/send_telemetry_events.py000066400000000000000000000162671462617747000263350ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import datetime import threading import time from azurelinuxagent.common import logger from azurelinuxagent.common.event import add_event, WALAEventOperation from azurelinuxagent.common.exception import ServiceStoppedError from azurelinuxagent.common.future import ustr, Queue, Empty from azurelinuxagent.ga.interfaces import ThreadHandlerInterface from azurelinuxagent.common.utils import textutil def get_send_telemetry_events_handler(protocol_util): return SendTelemetryEventsHandler(protocol_util) class SendTelemetryEventsHandler(ThreadHandlerInterface): """ This handler takes care of sending all telemetry out of the agent to the Wireserver. It sends out data as soon as there is any data available in the queue to send. """ _THREAD_NAME = "SendTelemetryHandler" _MAX_TIMEOUT = datetime.timedelta(seconds=5).seconds _MIN_EVENTS_TO_BATCH = 30 _MIN_BATCH_WAIT_TIME = datetime.timedelta(seconds=5) def __init__(self, protocol_util): self._protocol = protocol_util.get_protocol() self.should_run = True self._thread = None # We're using a Queue for handling the communication between threads. We plan to remove any dependency on the # filesystem in the future and use add_event to directly queue events into the queue rather than writing to # a file and then parsing it later. # Once we move add_event to directly queue events, we need to add a maxsize here to ensure some limitations are # being set (currently our limits are enforced by collector_threads but that would become obsolete once we # start enqueuing events directly). self._queue = Queue() @staticmethod def get_thread_name(): return SendTelemetryEventsHandler._THREAD_NAME def run(self): logger.info("Start SendTelemetryHandler service.") self.start() def is_alive(self): return self._thread is not None and self._thread.is_alive() def start(self): self._thread = threading.Thread(target=self._process_telemetry_thread) self._thread.setDaemon(True) self._thread.setName(self.get_thread_name()) self._thread.start() def stop(self): """ Stop server communication and join the thread to main thread. """ self.should_run = False if self.is_alive(): self.join() def join(self): self._queue.join() self._thread.join() def stopped(self): return not self.should_run def enqueue_event(self, event): # Add event to queue and set event if self.stopped(): raise ServiceStoppedError("{0} is stopped, not accepting any more events".format(self.get_thread_name())) # Queue.put() can block if the queue is full which can be an uninterruptible wait. Blocking for a max of # SendTelemetryEventsHandler._MAX_TIMEOUT seconds and raising a ServiceStoppedError to retry later. # Todo: Queue.put() will only raise a Full exception if a maxsize is set for the Queue. Once some size # limitations are set for the Queue, ensure to handle that correctly here. try: self._queue.put(event, timeout=SendTelemetryEventsHandler._MAX_TIMEOUT) except Exception as error: raise ServiceStoppedError( "Unable to enqueue due to: {0}, stopping any more enqueuing until the next run".format(ustr(error))) def _wait_for_event_in_queue(self): """ Wait for at least one event in the Queue or timeout after SendTelemetryEventsHandler._MAX_TIMEOUT seconds. In case of a timeout, set the event to None. :return: event if an event is added to the Queue or None to signify no events were added to the queue. This would raise in case of an error. 
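        A sketch of how the main loop consumes this helper (this mirrors _process_telemetry_thread
        below; the names are the real ones from this class):

            while not self.stopped() or not self._queue.empty():
                event = self._wait_for_event_in_queue()
                if event is not None:  # Queue had at least one event
                    self._send_events_in_queue(event)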
""" try: event = self._queue.get(timeout=SendTelemetryEventsHandler._MAX_TIMEOUT) self._queue.task_done() except Empty: # No elements in Queue, return None event = None return event def _process_telemetry_thread(self): logger.info("Successfully started the {0} thread".format(self.get_thread_name())) try: # On demand wait, start processing as soon as there is any data available in the queue. In worst case, # also keep checking every SendTelemetryEventsHandler._MAX_TIMEOUT secs to avoid uninterruptible waits. # Incase the service is stopped but we have events in queue, ensure we send them out before killing the thread. while not self.stopped() or not self._queue.empty(): first_event = self._wait_for_event_in_queue() if first_event: # Start processing queue only if first event is not None (i.e. Queue has atleast 1 event), # else do nothing self._send_events_in_queue(first_event) except Exception as error: err_msg = "An unknown error occurred in the {0} thread main loop, stopping thread.{1}".format( self.get_thread_name(), textutil.format_exception(error)) add_event(op=WALAEventOperation.UnhandledError, message=err_msg, is_success=False) def _send_events_in_queue(self, first_event): # Process everything in Queue start_time = datetime.datetime.utcnow() while not self.stopped() and (self._queue.qsize() + 1) < self._MIN_EVENTS_TO_BATCH and ( start_time + self._MIN_BATCH_WAIT_TIME) > datetime.datetime.utcnow(): # To promote batching, we either wait for atleast _MIN_EVENTS_TO_BATCH events or _MIN_BATCH_WAIT_TIME secs # before sending out the first request to wireserver. # If the thread is requested to stop midway, we skip batching and send whatever we have in the queue. logger.verbose("Waiting for events to batch. Total events so far: {0}, Time elapsed: {1} secs", self._queue.qsize()+1, (datetime.datetime.utcnow() - start_time).seconds) time.sleep(1) # Delete files after sending the data rather than deleting and sending self._protocol.report_event(self._get_events_in_queue(first_event)) def _get_events_in_queue(self, first_event): yield first_event while not self._queue.empty(): try: event = self._queue.get_nowait() self._queue.task_done() yield event except Exception as error: logger.error("Some exception when fetching event from queue: {0}".format(textutil.format_exception(error)))Azure-WALinuxAgent-2b21de5/azurelinuxagent/ga/update.py000066400000000000000000001574631462617747000232200ustar00rootroot00000000000000# Windows Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import glob import os import platform import re import shutil import signal import stat import subprocess import sys import time import uuid from datetime import datetime, timedelta from azurelinuxagent.common import conf from azurelinuxagent.common import logger from azurelinuxagent.common.protocol.imds import get_imds_client from azurelinuxagent.common.utils import fileutil, textutil from azurelinuxagent.common.agent_supported_feature import get_supported_feature_by_name, SupportedFeatureNames, \ get_agent_supported_features_list_for_crp from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator from azurelinuxagent.common.event import add_event, initialize_event_logger_vminfo_common_parameters, \ WALAEventOperation, EVENTS_DIRECTORY from azurelinuxagent.common.exception import ExitException, AgentUpgradeExitException, AgentMemoryExceededException from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil, systemd from azurelinuxagent.ga.persist_firewall_rules import PersistFirewallRulesHandler from azurelinuxagent.common.protocol.hostplugin import HostPluginProtocol, VmSettingsNotSupported from azurelinuxagent.common.protocol.restapi import VERSION_0 from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.utils.archive import StateArchiver, AGENT_STATUS_FILE from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.utils.networkutil import AddFirewallRules from azurelinuxagent.common.utils.shellutil import CommandError from azurelinuxagent.common.version import AGENT_LONG_NAME, AGENT_NAME, AGENT_DIR_PATTERN, CURRENT_AGENT, AGENT_VERSION, \ CURRENT_VERSION, DISTRO_NAME, DISTRO_VERSION, get_lis_version, \ has_logrotate, PY_VERSION_MAJOR, PY_VERSION_MINOR, PY_VERSION_MICRO, get_daemon_version from azurelinuxagent.ga.agent_update_handler import get_agent_update_handler from azurelinuxagent.ga.collect_logs import get_collect_logs_handler, is_log_collection_allowed from azurelinuxagent.ga.collect_telemetry_events import get_collect_telemetry_events_handler from azurelinuxagent.ga.env import get_env_handler from azurelinuxagent.ga.exthandlers import ExtHandlersHandler, list_agent_lib_directory, \ ExtensionStatusValue, ExtHandlerStatusValue from azurelinuxagent.ga.guestagent import GuestAgent from azurelinuxagent.ga.monitor import get_monitor_handler from azurelinuxagent.ga.send_telemetry_events import get_send_telemetry_events_handler AGENT_PARTITION_FILE = "partition" CHILD_HEALTH_INTERVAL = 15 * 60 CHILD_LAUNCH_INTERVAL = 5 * 60 CHILD_LAUNCH_RESTART_MAX = 3 CHILD_POLL_INTERVAL = 60 GOAL_STATE_PERIOD_EXTENSIONS_DISABLED = 5 * 60 ORPHAN_POLL_INTERVAL = 3 ORPHAN_WAIT_INTERVAL = 15 * 60 AGENT_SENTINEL_FILE = "current_version" # This file marks that the first goal state (after provisioning) has been completed, either because it converged or because we received another goal # state before it converged. The contents will be an instance of ExtensionsSummary. If the file does not exist then we have not finished processing # the goal state. 
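# A hedged sketch of how this marker file is consumed (illustrative only; it assumes the marker
# lives under conf.get_lib_dir(), which is what _initial_goal_state_file_path() resolves to):
#
#     is_initial = not os.path.exists(os.path.join(conf.get_lib_dir(), INITIAL_GOAL_STATE_FILE))
#     period = conf.get_initial_goal_state_period() if is_initial else conf.get_goal_state_period()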
INITIAL_GOAL_STATE_FILE = "initial_goal_state" READONLY_FILE_GLOBS = [ "*.crt", "*.p7m", "*.pem", "*.prv", "ovf-env.xml" ] class ExtensionsSummary(object): """ The extensions summary is a list of (extension name, extension status) tuples for the current goal state; it is used to report changes in the status of extensions and to keep track of when the goal state converges (i.e. when all extensions in the goal state reach a terminal state: success or error.) The summary is computed from the VmStatus reported to blob storage. """ def __init__(self, vm_status=None): if vm_status is None: self.summary = [] self.converged = True else: # take the name and status of the extension if is it not None, else use the handler's self.summary = [(o.name, o.status) for o in map(lambda h: h.extension_status if h.extension_status is not None else h, vm_status.vmAgent.extensionHandlers)] self.summary.sort(key=lambda s: s[0]) # sort by extension name to make comparisons easier self.converged = all(status in (ExtensionStatusValue.success, ExtensionStatusValue.error, ExtHandlerStatusValue.ready, ExtHandlerStatusValue.not_ready) for _, status in self.summary) def __eq__(self, other): return self.summary == other.summary def __ne__(self, other): return not (self == other) def __str__(self): return ustr(self.summary) def get_update_handler(): return UpdateHandler() class UpdateHandler(object): TELEMETRY_HEARTBEAT_PERIOD = timedelta(minutes=30) CHECK_MEMORY_USAGE_PERIOD = timedelta(seconds=conf.get_cgroup_check_period()) def __init__(self): self.osutil = get_osutil() self.protocol_util = get_protocol_util() self._is_running = True self.agents = [] self.child_agent = None self.child_launch_time = None self.child_launch_attempts = 0 self.child_process = None self.signal_handler = None self._last_telemetry_heartbeat = None self._heartbeat_id = str(uuid.uuid4()).upper() self._heartbeat_counter = 0 self._initial_attempt_check_memory_usage = True self._last_check_memory_usage_time = time.time() self._check_memory_usage_last_error_report = datetime.min self._cloud_init_completed = False # Only used when Extensions.WaitForCloudInit is enabled; note that this variable is always reset on service start. # VM Size is reported via the heartbeat, default it here. 
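        # (Descriptive note) _vm_size is populated lazily: _get_vm_size() below queries IMDS,
        # caches the value once the call succeeds, and reports "unknown" until then.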
self._vm_size = None # these members are used to avoid reporting errors too frequently self._heartbeat_update_goal_state_error_count = 0 self._update_goal_state_error_count = 0 self._update_goal_state_last_error_report = datetime.min self._report_status_last_failed_goal_state = None # incarnation of the last goal state that has been fully processed # (None if no goal state has been processed) self._last_incarnation = None # ID of the last extensions goal state that has been fully processed (incarnation for WireServer goal states or etag for HostGAPlugin goal states) # (None if no extensions goal state has been processed) self._last_extensions_gs_id = None # Goal state that is currently been processed (None if no goal state is being processed) self._goal_state = None # Whether the agent supports FastTrack (it does, as long as the HostGAPlugin supports the vmSettings API) self._supports_fast_track = False self._extensions_summary = ExtensionsSummary() self._is_initial_goal_state = not os.path.exists(self._initial_goal_state_file_path()) if not conf.get_extensions_enabled(): self._goal_state_period = GOAL_STATE_PERIOD_EXTENSIONS_DISABLED else: if self._is_initial_goal_state: self._goal_state_period = conf.get_initial_goal_state_period() else: self._goal_state_period = conf.get_goal_state_period() def run_latest(self, child_args=None): """ This method is called from the daemon to find and launch the most current, downloaded agent. Note: - Most events should be tagged to the launched agent (agent_version) """ if self.child_process is not None: raise Exception("Illegal attempt to launch multiple goal state Agent processes") if self.signal_handler is None: self.signal_handler = signal.signal(signal.SIGTERM, self.forward_signal) latest_agent = None if not conf.get_autoupdate_enabled() else self.get_latest_agent_greater_than_daemon( daemon_version=CURRENT_VERSION) if latest_agent is None: logger.info(u"Installed Agent {0} is the most current agent", CURRENT_AGENT) agent_cmd = "python -u {0} -run-exthandlers".format(sys.argv[0]) agent_dir = os.getcwd() agent_name = CURRENT_AGENT agent_version = CURRENT_VERSION else: logger.info(u"Determined Agent {0} to be the latest agent", latest_agent.name) agent_cmd = latest_agent.get_agent_cmd() agent_dir = latest_agent.get_agent_dir() agent_name = latest_agent.name agent_version = latest_agent.version if child_args is not None: agent_cmd = "{0} {1}".format(agent_cmd, child_args) try: # Launch the correct Python version for python-based agents cmds = textutil.safe_shlex_split(agent_cmd) if cmds[0].lower() == "python": cmds[0] = sys.executable agent_cmd = " ".join(cmds) self._evaluate_agent_health(latest_agent) self.child_process = subprocess.Popen( cmds, cwd=agent_dir, stdout=sys.stdout, stderr=sys.stderr, env=os.environ) logger.verbose(u"Agent {0} launched with command '{1}'", agent_name, agent_cmd) # Setting the poll interval to poll every second to reduce the agent provisioning time; # The daemon shouldn't wait for 60secs before starting the ext-handler in case the # ext-handler kills itself during agent-update during the first 15 mins (CHILD_HEALTH_INTERVAL) poll_interval = 1 ret = None start_time = time.time() while (time.time() - start_time) < CHILD_HEALTH_INTERVAL: time.sleep(poll_interval) try: ret = self.child_process.poll() except OSError: # if child_process has terminated, calling poll could raise an exception ret = -1 if ret is not None: break if ret is None or ret <= 0: msg = u"Agent {0} launched with command '{1}' is successfully running".format( 
agent_name, agent_cmd) logger.info(msg) add_event( AGENT_NAME, version=agent_version, op=WALAEventOperation.Enable, is_success=True, message=msg, log_event=False) if ret is None: # Wait for the process to exit if self.child_process.wait() > 0: msg = u"ExtHandler process {0} launched with command '{1}' exited with return code: {2}".format( agent_name, agent_cmd, ret) logger.warn(msg) else: msg = u"Agent {0} launched with command '{1}' failed with return code: {2}".format( agent_name, agent_cmd, ret) logger.warn(msg) add_event( AGENT_NAME, version=agent_version, op=WALAEventOperation.Enable, is_success=False, message=msg) except Exception as e: # Ignore child errors during termination if self.is_running: msg = u"Agent {0} launched with command '{1}' failed with exception: \n".format( agent_name, agent_cmd) logger.warn(msg) detailed_message = '{0} {1}'.format(msg, textutil.format_exception(e)) add_event( AGENT_NAME, version=agent_version, op=WALAEventOperation.Enable, is_success=False, message=detailed_message) if latest_agent is not None: latest_agent.mark_failure(is_fatal=True, reason=detailed_message) self.child_process = None return def run(self, debug=False): """ This is the main loop which watches for agent and extension updates. """ try: logger.info("{0} (Goal State Agent version {1})", AGENT_LONG_NAME, AGENT_VERSION) logger.info("OS: {0} {1}", DISTRO_NAME, DISTRO_VERSION) logger.info("Python: {0}.{1}.{2}", PY_VERSION_MAJOR, PY_VERSION_MINOR, PY_VERSION_MICRO) vm_arch = self.osutil.get_vm_arch() logger.info("CPU Arch: {0}", vm_arch) os_info_msg = u"Distro: {dist_name}-{dist_ver}; "\ u"OSUtil: {util_name}; "\ u"AgentService: {service_name}; "\ u"Python: {py_major}.{py_minor}.{py_micro}; "\ u"Arch: {vm_arch}; "\ u"systemd: {systemd}; "\ u"LISDrivers: {lis_ver}; "\ u"logrotate: {has_logrotate};".format( dist_name=DISTRO_NAME, dist_ver=DISTRO_VERSION, util_name=type(self.osutil).__name__, service_name=self.osutil.service_name, py_major=PY_VERSION_MAJOR, py_minor=PY_VERSION_MINOR, py_micro=PY_VERSION_MICRO, vm_arch=vm_arch, systemd=systemd.is_systemd(), lis_ver=get_lis_version(), has_logrotate=has_logrotate() ) logger.info(os_info_msg) # # Initialize the goal state; some components depend on information provided by the goal state and this # call ensures the required info is initialized (e.g. telemetry depends on the container ID.) # protocol = self.protocol_util.get_protocol(save_to_history=True) self._initialize_goal_state(protocol) # Initialize the common parameters for telemetry events initialize_event_logger_vminfo_common_parameters(protocol) # Send telemetry for the OS-specific info. 
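            # An example of the os_info_msg sent with this event (illustrative values only):
            #   Distro: ubuntu-22.04; OSUtil: UbuntuOSUtil; AgentService: walinuxagent;
            #   Python: 3.10.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: True;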
add_event(AGENT_NAME, op=WALAEventOperation.OSInfo, message=os_info_msg) self._log_openssl_info() # # Perform initialization tasks # from azurelinuxagent.ga.exthandlers import get_exthandlers_handler, migrate_handler_state exthandlers_handler = get_exthandlers_handler(protocol) migrate_handler_state() from azurelinuxagent.ga.remoteaccess import get_remote_access_handler remote_access_handler = get_remote_access_handler(protocol) agent_update_handler = get_agent_update_handler(protocol) self._ensure_no_orphans() self._emit_restart_event() self._emit_changes_in_default_configuration() self._ensure_partition_assigned() self._ensure_readonly_files() self._ensure_cgroups_initialized() self._ensure_extension_telemetry_state_configured_properly(protocol) self._ensure_firewall_rules_persisted(dst_ip=protocol.get_endpoint()) self._add_accept_tcp_firewall_rule_if_not_enabled(dst_ip=protocol.get_endpoint()) self._cleanup_legacy_goal_state_history() # Get all thread handlers telemetry_handler = get_send_telemetry_events_handler(self.protocol_util) all_thread_handlers = [ get_monitor_handler(), get_env_handler(), telemetry_handler, get_collect_telemetry_events_handler(telemetry_handler) ] if is_log_collection_allowed(): all_thread_handlers.append(get_collect_logs_handler()) # Launch all monitoring threads self._start_threads(all_thread_handlers) logger.info("Goal State Period: {0} sec. This indicates how often the agent checks for new goal states and reports status.", self._goal_state_period) while self.is_running: self._check_daemon_running(debug) self._check_threads_running(all_thread_handlers) self._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self._send_heartbeat_telemetry(protocol) self._check_agent_memory_usage() time.sleep(self._goal_state_period) except AgentUpgradeExitException as exitException: add_event(op=WALAEventOperation.AgentUpgrade, message=exitException.reason, log_event=False) logger.info(exitException.reason) except ExitException as exitException: logger.info(exitException.reason) except Exception as error: msg = u"Agent {0} failed with exception: {1}".format(CURRENT_AGENT, ustr(error)) self._set_sentinel(msg=msg) logger.warn(msg) logger.warn(textutil.format_exception(error)) sys.exit(1) # additional return here because sys.exit is mocked in unit tests return self._shutdown() sys.exit(0) @staticmethod def _log_openssl_info(): try: version = shellutil.run_command(["openssl", "version"]) message = "OpenSSL version: {0}".format(version) logger.info(message) add_event(op=WALAEventOperation.OpenSsl, message=message, is_success=True) except Exception as e: message = "Failed to get OpenSSL version: {0}".format(e) logger.info(message) add_event(op=WALAEventOperation.OpenSsl, message=message, is_success=False, log_event=False) # # Collect telemetry about the 'pkey' command. CryptUtil get_pubkey_from_prv() uses the 'pkey' command only as a fallback after trying 'rsa'. # 'pkey' also works for RSA keys, but it may not be available on older versions of OpenSSL. Check telemetry after a few releases and if there # are no versions of OpenSSL that do not support 'pkey' consider removing the use of 'rsa' altogether. 
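    # A rough shell equivalent of the probe below (illustrative, not part of the agent's code path):
    # running "openssl help pkey" succeeds when 'pkey' is supported and fails on older OpenSSL
    # builds, which is exactly the signal this telemetry records.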
# try: shellutil.run_command(["openssl", "help", "pkey"]) except Exception as e: message = "OpenSSL does not support the pkey command: {0}".format(e) logger.info(message) add_event(op=WALAEventOperation.OpenSsl, message=message, is_success=False, log_event=False) def _initialize_goal_state(self, protocol): # # Block until we can fetch the first goal state (self._try_update_goal_state() does its own logging and error handling). # while not self._try_update_goal_state(protocol): time.sleep(conf.get_goal_state_period()) # # If FastTrack is disabled we need to check if the current goal state (which will be retrieved using the WireServer and # hence will be a Fabric goal state) is outdated. # if not conf.get_enable_fast_track(): last_fast_track_timestamp = HostPluginProtocol.get_fast_track_timestamp() if last_fast_track_timestamp is not None: egs = protocol.client.get_goal_state().extensions_goal_state if egs.created_on_timestamp < last_fast_track_timestamp: egs.is_outdated = True logger.info("The current Fabric goal state is older than the most recent FastTrack goal state; will skip it.\nFabric: {0}\nFastTrack: {1}", egs.created_on_timestamp, last_fast_track_timestamp) def _wait_for_cloud_init(self): if conf.get_wait_for_cloud_init() and not self._cloud_init_completed: message = "Waiting for cloud-init to complete..." logger.info(message) add_event(op=WALAEventOperation.CloudInit, message=message) try: output = shellutil.run_command(["cloud-init", "status", "--wait"], timeout=conf.get_wait_for_cloud_init_timeout()) message = "cloud-init completed\n{0}".format(output) logger.info(message) add_event(op=WALAEventOperation.CloudInit, message=message) except Exception as e: message = "An error occurred while waiting for cloud-init; will proceed to execute VM extensions. Extensions that have conflicts with cloud-init may fail.\n{0}".format(ustr(e)) logger.error(message) add_event(op=WALAEventOperation.CloudInit, message=message, is_success=False, log_event=False) self._cloud_init_completed = True # Mark as completed even on error since we will proceed to execute extensions def _get_vm_size(self, protocol): """ Including VMSize is meant to capture the architecture of the VM (i.e. arm64 VMs will have arm64 included in their vmsize field and amd64 will have no architecture indicated). 
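        For example (illustrative values): if the IMDS compute endpoint reports a vmSize of
        "Standard_D2s_v3", that string is cached and included in the heartbeat; if the IMDS call
        fails, "unknown" is reported and the fetch is retried on a later heartbeat.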
""" if self._vm_size is None: imds_client = get_imds_client(protocol.get_endpoint()) try: imds_info = imds_client.get_compute() self._vm_size = imds_info.vmSize except Exception as e: err_msg = "Attempts to retrieve VM size information from IMDS are failing: {0}".format(textutil.format_exception(e)) logger.periodic_warn(logger.EVERY_SIX_HOURS, "[PERIODIC] {0}".format(err_msg)) return "unknown" return self._vm_size def _get_vm_arch(self): return platform.machine() def _check_daemon_running(self, debug): # Check that the parent process (the agent's daemon) is still running if not debug and self._is_orphaned: raise ExitException("Agent {0} is an orphan -- exiting".format(CURRENT_AGENT)) def _start_threads(self, all_thread_handlers): for thread_handler in all_thread_handlers: thread_handler.run() def _check_threads_running(self, all_thread_handlers): # Check that all the threads are still running for thread_handler in all_thread_handlers: if thread_handler.keep_alive() and not thread_handler.is_alive(): logger.warn("{0} thread died, restarting".format(thread_handler.get_thread_name())) thread_handler.start() def _try_update_goal_state(self, protocol): """ Attempts to update the goal state and returns True on success or False on failure, sending telemetry events about the failures. """ try: max_errors_to_log = 3 protocol.client.update_goal_state(silent=self._update_goal_state_error_count >= max_errors_to_log, save_to_history=True) self._goal_state = protocol.get_goal_state() if self._update_goal_state_error_count > 0: message = u"Fetching the goal state recovered from previous errors. Fetched {0} (certificates: {1})".format( self._goal_state.extensions_goal_state.id, self._goal_state.certs.summary) add_event(AGENT_NAME, op=WALAEventOperation.FetchGoalState, version=CURRENT_VERSION, is_success=True, message=message, log_event=False) logger.info(message) self._update_goal_state_error_count = 0 try: self._supports_fast_track = conf.get_enable_fast_track() and protocol.client.get_host_plugin().check_vm_settings_support() except VmSettingsNotSupported: self._supports_fast_track = False except Exception as e: self._update_goal_state_error_count += 1 self._heartbeat_update_goal_state_error_count += 1 if self._update_goal_state_error_count <= max_errors_to_log: message = u"Error fetching the goal state: {0}".format(textutil.format_exception(e)) logger.error(message) add_event(op=WALAEventOperation.FetchGoalState, is_success=False, message=message, log_event=False) self._update_goal_state_last_error_report = datetime.now() else: if self._update_goal_state_last_error_report + timedelta(hours=6) > datetime.now(): self._update_goal_state_last_error_report = datetime.now() message = u"Fetching the goal state is still failing: {0}".format(textutil.format_exception(e)) logger.error(message) add_event(op=WALAEventOperation.FetchGoalState, is_success=False, message=message, log_event=False) return False return True def _processing_new_incarnation(self): """ True if we are currently processing a new incarnation (i.e. 
WireServer goal state) """ return self._goal_state is not None and self._goal_state.incarnation != self._last_incarnation def _processing_new_extensions_goal_state(self): """ True if we are currently processing a new extensions goal state """ return self._goal_state is not None and self._goal_state.extensions_goal_state.id != self._last_extensions_gs_id and not self._goal_state.extensions_goal_state.is_outdated def _process_goal_state(self, exthandlers_handler, remote_access_handler, agent_update_handler): protocol = exthandlers_handler.protocol # update self._goal_state if not self._try_update_goal_state(protocol): agent_update_handler.run(self._goal_state, self._processing_new_extensions_goal_state()) # status reporting should be done even when the goal state is not updated self._report_status(exthandlers_handler, agent_update_handler) return # check for agent updates agent_update_handler.run(self._goal_state, self._processing_new_extensions_goal_state()) self._wait_for_cloud_init() try: if self._processing_new_extensions_goal_state(): if not self._extensions_summary.converged: message = "A new goal state was received, but not all the extensions in the previous goal state have completed: {0}".format(self._extensions_summary) logger.warn(message) add_event(op=WALAEventOperation.GoalState, message=message, is_success=False, log_event=False) if self._is_initial_goal_state: self._on_initial_goal_state_completed(self._extensions_summary) self._extensions_summary = ExtensionsSummary() exthandlers_handler.run() # check cgroup and disable if any extension started in agent cgroup after goal state processed. # Note: Monitor thread periodically checks this in addition to here. CGroupConfigurator.get_instance().check_cgroups(cgroup_metrics=[]) # report status before processing the remote access, since that operation can take a long time self._report_status(exthandlers_handler, agent_update_handler) if self._processing_new_incarnation(): remote_access_handler.run() # lastly, archive the goal state history (but do it only on new goal states - no need to do it on every iteration) if self._processing_new_extensions_goal_state(): UpdateHandler._archive_goal_state_history() finally: if self._goal_state is not None: self._last_incarnation = self._goal_state.incarnation self._last_extensions_gs_id = self._goal_state.extensions_goal_state.id @staticmethod def _archive_goal_state_history(): try: archiver = StateArchiver(conf.get_lib_dir()) archiver.archive() except Exception as exception: logger.warn("Error cleaning up the goal state history: {0}", ustr(exception)) @staticmethod def _cleanup_legacy_goal_state_history(): try: StateArchiver.purge_legacy_goal_state_history() except Exception as exception: logger.warn("Error removing legacy history files: {0}", ustr(exception)) def _report_status(self, exthandlers_handler, agent_update_handler): # report_ext_handlers_status does its own error handling and returns None if an error occurred vm_status = exthandlers_handler.report_ext_handlers_status( goal_state_changed=self._processing_new_extensions_goal_state(), vm_agent_update_status=agent_update_handler.get_vmagent_update_status(), vm_agent_supports_fast_track=self._supports_fast_track) if vm_status is not None: self._report_extensions_summary(vm_status) if self._goal_state is not None: status_blob_text = exthandlers_handler.protocol.get_status_blob_data() if status_blob_text is None: status_blob_text = "{}" self._goal_state.save_to_history(status_blob_text, AGENT_STATUS_FILE) if 
self._goal_state.extensions_goal_state.is_outdated: exthandlers_handler.protocol.client.get_host_plugin().clear_fast_track_state() def _report_extensions_summary(self, vm_status): try: extensions_summary = ExtensionsSummary(vm_status) if self._extensions_summary != extensions_summary: self._extensions_summary = extensions_summary message = "Extension status: {0}".format(self._extensions_summary) logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message, is_success=True) if self._extensions_summary.converged: message = "All extensions in the goal state have reached a terminal state: {0}".format(extensions_summary) logger.info(message) add_event(op=WALAEventOperation.GoalState, message=message, is_success=True) if self._is_initial_goal_state: self._on_initial_goal_state_completed(self._extensions_summary) except Exception as error: # report errors only once per goal state if self._report_status_last_failed_goal_state != self._goal_state.extensions_goal_state.id: self._report_status_last_failed_goal_state = self._goal_state.extensions_goal_state.id msg = u"Error logging the goal state summary: {0}".format(textutil.format_exception(error)) logger.warn(msg) add_event(op=WALAEventOperation.GoalState, is_success=False, message=msg) def _on_initial_goal_state_completed(self, extensions_summary): fileutil.write_file(self._initial_goal_state_file_path(), ustr(extensions_summary)) if conf.get_extensions_enabled() and self._goal_state_period != conf.get_goal_state_period(): self._goal_state_period = conf.get_goal_state_period() logger.info("Initial goal state completed, switched the goal state period to {0}", self._goal_state_period) self._is_initial_goal_state = False def forward_signal(self, signum, frame): if signum == signal.SIGTERM: self._shutdown() if self.child_process is None: return logger.info( u"Agent {0} forwarding signal {1} to {2}\n", CURRENT_AGENT, signum, self.child_agent.name if self.child_agent is not None else CURRENT_AGENT) self.child_process.send_signal(signum) if self.signal_handler not in (None, signal.SIG_IGN, signal.SIG_DFL): self.signal_handler(signum, frame) elif self.signal_handler is signal.SIG_DFL: if signum == signal.SIGTERM: self._shutdown() sys.exit(0) return @staticmethod def __get_daemon_version_for_update(): daemon_version = get_daemon_version() if daemon_version != FlexibleVersion(VERSION_0): return daemon_version # We return 0.0.0.0 if daemon version is not specified. In that case, # use the min version as 2.2.53 as we started setting the daemon version starting 2.2.53. return FlexibleVersion("2.2.53") def get_latest_agent_greater_than_daemon(self, daemon_version=None): """ If autoupdate is enabled, return the most current, downloaded, non-blacklisted agent which is not the current version (if any) and is greater than the `daemon_version`. Otherwise, return None (implying to use the installed agent). 
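        For example (illustrative versions only): with downloaded agents 2.10.0.8 and 2.11.1.4 on
        disk, a current version of 2.10.0.8 and a daemon version of 2.2.53, this returns the
        GuestAgent for 2.11.1.4.
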
If `daemon_version` is None, we fetch it from the environment variable set by the DaemonHandler """ self._find_agents() daemon_version = self.__get_daemon_version_for_update() if daemon_version is None else daemon_version # Fetch the downloaded agents that are different from the current version and greater than the daemon version available_agents = [agent for agent in self.agents if agent.is_available and agent.version != CURRENT_VERSION and agent.version > daemon_version] return available_agents[0] if len(available_agents) >= 1 else None def _emit_restart_event(self): try: if not self._is_clean_start: msg = u"Agent did not terminate cleanly: {0}".format( fileutil.read_file(self._sentinel_file_path())) logger.info(msg) add_event( AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.Restart, is_success=False, message=msg) except Exception: pass return @staticmethod def _emit_changes_in_default_configuration(): try: def log_event(msg): logger.info("******** {0} ********", msg) add_event(AGENT_NAME, op=WALAEventOperation.ConfigurationChange, message=msg) def log_if_int_changed_from_default(name, current, message=""): default = conf.get_int_default_value(name) if default != current: log_event("{0} changed from its default: {1}. New value: {2}. {3}".format(name, default, current, message)) def log_if_op_disabled(name, value): if not value: log_event("{0} is set to False, not processing the operation".format(name)) def log_if_agent_versioning_feature_disabled(): supports_ga_versioning = False for _, feature in get_agent_supported_features_list_for_crp().items(): if feature.name == SupportedFeatureNames.GAVersioningGovernance: supports_ga_versioning = True break if not supports_ga_versioning: msg = "Agent : {0} doesn't support GA Versioning".format(CURRENT_VERSION) log_event(msg) log_if_int_changed_from_default("Extensions.GoalStatePeriod", conf.get_goal_state_period(), "Changing this value affects how often extensions are processed and status for the VM is reported. Too small a value may report the VM as unresponsive") log_if_int_changed_from_default("Extensions.InitialGoalStatePeriod", conf.get_initial_goal_state_period(), "Changing this value affects how often extensions are processed and status for the VM is reported. Too small a value may report the VM as unresponsive") log_if_op_disabled("OS.EnableFirewall", conf.enable_firewall()) log_if_op_disabled("Extensions.Enabled", conf.get_extensions_enabled()) log_if_op_disabled("AutoUpdate.Enabled", conf.get_autoupdate_enabled()) log_if_op_disabled("AutoUpdate.UpdateToLatestVersion", conf.get_auto_update_to_latest_version()) if conf.is_present("AutoUpdate.Enabled") and conf.get_autoupdate_enabled() != conf.get_auto_update_to_latest_version(): msg = "AutoUpdate.Enabled property is **Deprecated** now but it's set to different value from AutoUpdate.UpdateToLatestVersion. 
Please consider removing it if added by mistake" logger.warn(msg) add_event(AGENT_NAME, op=WALAEventOperation.ConfigurationChange, message=msg) if conf.enable_firewall(): log_if_int_changed_from_default("OS.EnableFirewallPeriod", conf.get_enable_firewall_period()) if conf.get_autoupdate_enabled(): log_if_int_changed_from_default("Autoupdate.Frequency", conf.get_autoupdate_frequency()) if conf.get_enable_fast_track(): log_if_op_disabled("Debug.EnableFastTrack", conf.get_enable_fast_track()) if conf.get_lib_dir() != "/var/lib/waagent": log_event("lib dir is in an unexpected location: {0}".format(conf.get_lib_dir())) log_if_agent_versioning_feature_disabled() except Exception as e: logger.warn("Failed to log changes in configuration: {0}", ustr(e)) def _ensure_no_orphans(self, orphan_wait_interval=ORPHAN_WAIT_INTERVAL): pid_files, ignored = self._write_pid_file() # pylint: disable=W0612 for pid_file in pid_files: try: pid = fileutil.read_file(pid_file) wait_interval = orphan_wait_interval while self.osutil.check_pid_alive(pid): wait_interval -= ORPHAN_POLL_INTERVAL if wait_interval <= 0: logger.warn( u"{0} forcibly terminated orphan process {1}", CURRENT_AGENT, pid) os.kill(pid, signal.SIGKILL) break logger.info( u"{0} waiting for orphan process {1} to terminate", CURRENT_AGENT, pid) time.sleep(ORPHAN_POLL_INTERVAL) os.remove(pid_file) except Exception as e: logger.warn( u"Exception occurred waiting for orphan agent to terminate: {0}", ustr(e)) return def _ensure_partition_assigned(self): """ Assign the VM to a partition (0 - 99). Downloaded updates may be configured to run on only some VMs; the assigned partition determines eligibility. """ if not os.path.exists(self._partition_file): partition = ustr(int(datetime.utcnow().microsecond / 10000)) fileutil.write_file(self._partition_file, partition) add_event( AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.Partition, is_success=True, message=partition) def _ensure_readonly_files(self): for g in READONLY_FILE_GLOBS: for path in glob.iglob(os.path.join(conf.get_lib_dir(), g)): os.chmod(path, stat.S_IRUSR) def _ensure_cgroups_initialized(self): configurator = CGroupConfigurator.get_instance() configurator.initialize() def _evaluate_agent_health(self, latest_agent): """ Evaluate the health of the selected agent: If it is restarting too frequently, raise an Exception to force blacklisting. """ if latest_agent is None: self.child_agent = None return if self.child_agent is None or latest_agent.version != self.child_agent.version: self.child_agent = latest_agent self.child_launch_time = None self.child_launch_attempts = 0 if self.child_launch_time is None: self.child_launch_time = time.time() self.child_launch_attempts += 1 if (time.time() - self.child_launch_time) <= CHILD_LAUNCH_INTERVAL \ and self.child_launch_attempts >= CHILD_LAUNCH_RESTART_MAX: msg = u"Agent {0} restarted more than {1} times in {2} seconds".format( self.child_agent.name, CHILD_LAUNCH_RESTART_MAX, CHILD_LAUNCH_INTERVAL) raise Exception(msg) return def _filter_blacklisted_agents(self): self.agents = [agent for agent in self.agents if not agent.is_blacklisted] def _find_agents(self): """ Load all non-blacklisted agents currently on disk. 
""" try: self._set_and_sort_agents(self._load_agents()) self._filter_blacklisted_agents() except Exception as e: logger.warn(u"Exception occurred loading available agents: {0}", ustr(e)) return def _get_pid_parts(self): pid_file = conf.get_agent_pid_file_path() pid_dir = os.path.dirname(pid_file) pid_name = os.path.basename(pid_file) pid_re = re.compile("(\d+)_{0}".format(re.escape(pid_name))) # pylint: disable=W1401 return pid_dir, pid_name, pid_re def _get_pid_files(self): pid_dir, pid_name, pid_re = self._get_pid_parts() # pylint: disable=W0612 pid_files = [os.path.join(pid_dir, f) for f in os.listdir(pid_dir) if pid_re.match(f)] pid_files.sort(key=lambda f: int(pid_re.match(os.path.basename(f)).group(1))) return pid_files @property def is_running(self): return self._is_running @is_running.setter def is_running(self, value): self._is_running = value @property def _is_clean_start(self): return not os.path.isfile(self._sentinel_file_path()) @property def _is_orphaned(self): parent_pid = os.getppid() if parent_pid in (1, None): return True if not os.path.isfile(conf.get_agent_pid_file_path()): return True return fileutil.read_file(conf.get_agent_pid_file_path()) != ustr(parent_pid) def _load_agents(self): path = os.path.join(conf.get_lib_dir(), "{0}-*".format(AGENT_NAME)) return [GuestAgent.from_installed_agent(agent_dir) for agent_dir in glob.iglob(path) if os.path.isdir(agent_dir)] def _partition(self): return int(fileutil.read_file(self._partition_file)) @property def _partition_file(self): return os.path.join(conf.get_lib_dir(), AGENT_PARTITION_FILE) def _purge_agents(self): """ Remove from disk all directories and .zip files of unknown agents (without removing the current, running agent). """ path = os.path.join(conf.get_lib_dir(), "{0}-*".format(AGENT_NAME)) known_versions = [agent.version for agent in self.agents] if CURRENT_VERSION not in known_versions: logger.verbose( u"Running Agent {0} was not found in the agent manifest - adding to list", CURRENT_VERSION) known_versions.append(CURRENT_VERSION) for agent_path in glob.iglob(path): try: name = fileutil.trim_ext(agent_path, "zip") m = AGENT_DIR_PATTERN.match(name) if m is not None and FlexibleVersion(m.group(1)) not in known_versions: if os.path.isfile(agent_path): logger.info(u"Purging outdated Agent file {0}", agent_path) os.remove(agent_path) else: logger.info(u"Purging outdated Agent directory {0}", agent_path) shutil.rmtree(agent_path) except Exception as e: logger.warn(u"Purging {0} raised exception: {1}", agent_path, ustr(e)) return def _set_and_sort_agents(self, agents=None): if agents is None: agents = [] self.agents = agents self.agents.sort(key=lambda agent: agent.version, reverse=True) return def _set_sentinel(self, agent=CURRENT_AGENT, msg="Unknown cause"): try: fileutil.write_file( self._sentinel_file_path(), "[{0}] [{1}]".format(agent, msg)) except Exception as e: logger.warn( u"Exception writing sentinel file {0}: {1}", self._sentinel_file_path(), str(e)) return def _sentinel_file_path(self): return os.path.join(conf.get_lib_dir(), AGENT_SENTINEL_FILE) @staticmethod def _initial_goal_state_file_path(): return os.path.join(conf.get_lib_dir(), INITIAL_GOAL_STATE_FILE) def _shutdown(self): # Todo: Ensure all threads stopped when shutting down the main extension handler to ensure that the state of # all threads is clean. 
        self.is_running = False

        if not os.path.isfile(self._sentinel_file_path()):
            return

        try:
            os.remove(self._sentinel_file_path())
        except Exception as e:
            logger.warn(
                u"Exception removing sentinel file {0}: {1}",
                self._sentinel_file_path(),
                str(e))
        return

    def _write_pid_file(self):
        pid_files = self._get_pid_files()

        pid_dir, pid_name, pid_re = self._get_pid_parts()

        previous_pid_file = None if len(pid_files) <= 0 else pid_files[-1]
        pid_index = -1 \
            if previous_pid_file is None \
            else int(pid_re.match(os.path.basename(previous_pid_file)).group(1))
        pid_file = os.path.join(pid_dir, "{0}_{1}".format(pid_index + 1, pid_name))

        try:
            fileutil.write_file(pid_file, ustr(os.getpid()))
            logger.info(u"{0} running as process {1}", CURRENT_AGENT, ustr(os.getpid()))
        except Exception as e:
            pid_file = None
            logger.warn(
                u"Exception writing goal state agent {0} pid to {1}: {2}",
                CURRENT_AGENT,
                pid_file,
                ustr(e))

        return pid_files, pid_file

    def _send_heartbeat_telemetry(self, protocol):
        if self._last_telemetry_heartbeat is None:
            self._last_telemetry_heartbeat = datetime.utcnow() - UpdateHandler.TELEMETRY_HEARTBEAT_PERIOD

        if datetime.utcnow() >= (self._last_telemetry_heartbeat + UpdateHandler.TELEMETRY_HEARTBEAT_PERIOD):
            dropped_packets = self.osutil.get_firewall_dropped_packets(protocol.get_endpoint())
            auto_update_enabled = 1 if conf.get_autoupdate_enabled() else 0

            telemetry_msg = "{0};{1};{2};{3};{4}".format(self._heartbeat_counter, self._heartbeat_id, dropped_packets,
                                                         self._heartbeat_update_goal_state_error_count,
                                                         auto_update_enabled)
            debug_log_msg = "[DEBUG HeartbeatCounter: {0};HeartbeatId: {1};DroppedPackets: {2};" \
                            "UpdateGSErrors: {3};AutoUpdate: {4}]".format(self._heartbeat_counter,
                                                                          self._heartbeat_id, dropped_packets,
                                                                          self._heartbeat_update_goal_state_error_count,
                                                                          auto_update_enabled)

            # Write Heartbeat events/logs
            add_event(name=AGENT_NAME, version=CURRENT_VERSION, op=WALAEventOperation.HeartBeat, is_success=True,
                      message=telemetry_msg, log_event=False)
            logger.info(u"[HEARTBEAT] Agent {0} is running as the goal state agent {1}", CURRENT_AGENT, debug_log_msg)

            # Update/Reset the counters
            self._heartbeat_counter += 1
            self._heartbeat_update_goal_state_error_count = 0
            self._last_telemetry_heartbeat = datetime.utcnow()

    def _check_agent_memory_usage(self):
        """
        Checks the agent's current memory usage and safely exits the process if the agent has reached its memory limit.
        """
        try:
            if conf.get_enable_agent_memory_usage_check() and self._extensions_summary.converged:
                # We delay the first memory usage check so that the current agent is not blacklisted for restarting
                # too frequently (which would otherwise happen when the memory limit is reached repeatedly).
                if (self._initial_attempt_check_memory_usage and time.time() - self._last_check_memory_usage_time > CHILD_LAUNCH_INTERVAL) or \
                        (not self._initial_attempt_check_memory_usage and time.time() - self._last_check_memory_usage_time > conf.get_cgroup_check_period()):
                    self._last_check_memory_usage_time = time.time()
                    self._initial_attempt_check_memory_usage = False
                    CGroupConfigurator.get_instance().check_agent_memory_usage()
        except AgentMemoryExceededException as exception:
            msg = "Check on agent memory usage:\n{0}".format(ustr(exception))
            logger.info(msg)
            add_event(AGENT_NAME, op=WALAEventOperation.AgentMemory, is_success=True, message=msg)
            raise ExitException("Agent {0} reached its memory limit -- exiting".format(CURRENT_AGENT))
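        # Editorial note -- illustrative sketch, not part of the agent: the handler below
        # rate-limits its own error reporting so that a persistent failure produces at most
        # one warning per interval. The pattern, assuming a 6-hour interval:
        #
        #   if last_report == datetime.min or last_report + timedelta(hours=6) <= datetime.now():
        #       last_report = datetime.now()
        #       ...log the warning and emit the AgentMemory telemetry event...
        #
        # datetime.min acts as the "never reported" marker, so the very first error is always logged.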
        except Exception as exception:
            if self._check_memory_usage_last_error_report == datetime.min or (self._check_memory_usage_last_error_report + timedelta(hours=6)) <= datetime.now():
                self._check_memory_usage_last_error_report = datetime.now()
                msg = "Error checking the agent's memory usage: {0} --- [NOTE: Will not log the same error for 6 hours]".format(ustr(exception))
                logger.warn(msg)
                add_event(AGENT_NAME, op=WALAEventOperation.AgentMemory, is_success=False, message=msg)

    @staticmethod
    def _ensure_extension_telemetry_state_configured_properly(protocol):
        etp_enabled = get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).is_supported
        for name, path in list_agent_lib_directory(skip_agent_package=True):

            try:
                handler_instance = ExtHandlersHandler.get_ext_handler_instance_from_path(name=name, path=path, protocol=protocol)
            except Exception:
                # Ignore errors if any
                continue

            try:
                if handler_instance is not None:
                    # Recreate the HandlerEnvironment for existing extensions on startup.
                    # This ensures that existing extensions can start using the telemetry pipeline if they support
                    # it, and also ensures that extensions stop sending telemetry if the agent has to disable the feature.
                    handler_instance.create_handler_env()
                    events_dir = handler_instance.get_extension_events_dir()
                    # If ETP is enabled and the events directory doesn't exist for the handler, create it
                    if etp_enabled and not (os.path.exists(events_dir)):
                        fileutil.mkdir(events_dir, mode=0o700)
            except Exception as e:
                logger.warn(
                    "Unable to re-create HandlerEnvironment file on service startup. Error: {0}".format(ustr(e)))
                continue

        try:
            if not etp_enabled:
                # If the extension telemetry pipeline is disabled, delete all existing extension events directories,
                # because the agent will not be listening for those events.
                extension_event_dirs = glob.glob(os.path.join(conf.get_ext_log_dir(), "*", EVENTS_DIRECTORY))
                for ext_dir in extension_event_dirs:
                    shutil.rmtree(ext_dir, ignore_errors=True)
        except Exception as e:
            logger.warn("Error when trying to delete existing extension events directories. Error: {0}".format(ustr(e)))

    @staticmethod
    def _ensure_firewall_rules_persisted(dst_ip):

        if not conf.enable_firewall():
            logger.info("Not setting up persistent firewall rules as OS.EnableFirewall=False")
            return

        is_success = False
        logger.info("Starting setup for persistent firewall rules")
        try:
            PersistFirewallRulesHandler(dst_ip=dst_ip, uid=os.getuid()).setup()
            msg = "Persistent firewall rules set up successfully"
            is_success = True
            logger.info(msg)
        except Exception as error:
            msg = "Unable to set up the persistent firewall rules: {0}".format(ustr(error))
            logger.error(msg)

        add_event(
            op=WALAEventOperation.PersistFirewallRules,
            is_success=is_success,
            message=msg,
            log_event=False)

    def _add_accept_tcp_firewall_rule_if_not_enabled(self, dst_ip):

        if not conf.enable_firewall():
            return

        def _execute_run_command(command):
            # Helper to execute a run command; returns True if no exception was raised.
            # We primarily use it to check whether an iptables rule exists: True if it exists, False if not.
            try:
                shellutil.run_command(command)
                return True
            except CommandError as err:
                # Return code 1 is expected when using the check command. Raise if we encounter any other return code.
                if err.returncode != 1:
                    raise
            return False
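        # Editorial sketch (hypothetical rule, not the agent's actual command line): iptables'
        # "-C" flag tests whether a rule already exists in a chain, e.g.
        #
        #   iptables -w -C OUTPUT -d <wireserver-ip> -p tcp --destination-port 53 -j ACCEPT
        #
        # exits with 0 when the rule is present and 1 when it is not, which is why a
        # CommandError with returncode 1 is treated as "rule not present" rather than a failure.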
        try:
            wait = self.osutil.get_firewall_will_wait()

            # "-C" checks whether the iptables rule is available in the chain; it exits with
            # return code 1 if the rule does not exist.
            drop_rule = AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, dst_ip, wait=wait)

            if not _execute_run_command(drop_rule):
                # The DROP rule not existing indicates that none of the firewall rules have been set yet;
                # exiting here, as the environment thread will set up all the firewall rules.
                logger.info("DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.")
                return
            else:
                # The DROP rule exists in the iptables chain, so check whether the DNS TCP to wireserver rule exists; if not, add it.
                accept_tcp_rule = AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, dst_ip, wait=wait)
                if not _execute_run_command(accept_tcp_rule):
                    try:
                        logger.info(
                            "Firewall rule to allow DNS TCP request to wireserver for a non-root user unavailable. Setting it now.")
                        accept_tcp_rule = AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.INSERT_COMMAND, dst_ip, wait=wait)
                        shellutil.run_command(accept_tcp_rule)
                        logger.info(
                            "Successfully added firewall rule to allow non-root users to do a DNS TCP request to wireserver")
                    except CommandError as error:
                        msg = "Unable to set the non-root TCP access firewall rule: " \
                              "Run command execution for {0} failed with error: {1}. Return Code: {2}" \
                            .format(error.command, error.stderr, error.returncode)
                        logger.error(msg)
                else:
                    logger.info(
                        "Not setting the firewall rule to allow DNS TCP request to wireserver for a non-root user since it already exists")
        except Exception as e:
            msg = "Error while checking iptables rules: {0}".format(ustr(e))
            logger.error(msg)
Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/000077500000000000000000000000001462617747000213555ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/__init__.py000066400000000000000000000011661462617747000234720ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#
Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/deprovision/000077500000000000000000000000001462617747000237165ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/deprovision/__init__.py000066400000000000000000000013501462617747000260260ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.pa.deprovision.factory import get_deprovision_handler __all__ = ["get_deprovision_handler"] Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/deprovision/arch.py000066400000000000000000000025021462617747000252040ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision.default import DeprovisionHandler, \ DeprovisionAction class ArchDeprovisionHandler(DeprovisionHandler): def __init__(self): # pylint: disable=W0235 super(ArchDeprovisionHandler, self).__init__() def setup(self, deluser): warnings, actions = super(ArchDeprovisionHandler, self).setup(deluser) warnings.append("WARNING! /etc/machine-id will be removed.") files_to_del = ['/etc/machine-id'] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) return warnings, actions Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/deprovision/clearlinux.py000066400000000000000000000023501462617747000264360ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # # pylint: disable=W0611 import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision.default import DeprovisionHandler, \ DeprovisionAction # pylint: enable=W0611 class ClearLinuxDeprovisionHandler(DeprovisionHandler): def __init__(self, distro): # pylint: disable=W0231 self.distro = distro def setup(self, deluser): warnings, actions = super(ClearLinuxDeprovisionHandler, self).setup(deluser) # Probably should just wipe /etc and /var here return warnings, actions Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/deprovision/coreos.py000066400000000000000000000025111462617747000255610ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision.default import DeprovisionHandler, \ DeprovisionAction class CoreOSDeprovisionHandler(DeprovisionHandler): def __init__(self): # pylint: disable=W0235 super(CoreOSDeprovisionHandler, self).__init__() def setup(self, deluser): warnings, actions = super(CoreOSDeprovisionHandler, self).setup(deluser) warnings.append("WARNING! /etc/machine-id will be removed.") files_to_del = ['/etc/machine-id'] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) return warnings, actions Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/deprovision/default.py000066400000000000000000000267131462617747000257250ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob import os.path import re import signal import sys import azurelinuxagent.common.conf as conf import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common import version from azurelinuxagent.ga.cgroupconfigurator import _AGENT_DROP_IN_FILE_SLICE, _DROP_IN_FILE_CPU_ACCOUNTING, \ _DROP_IN_FILE_CPU_QUOTA, _DROP_IN_FILE_MEMORY_ACCOUNTING, LOGCOLLECTOR_SLICE from azurelinuxagent.common.exception import ProtocolError from azurelinuxagent.common.osutil import get_osutil, systemd from azurelinuxagent.ga.persist_firewall_rules import PersistFirewallRulesHandler from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.ga.exthandlers import HANDLER_COMPLETE_NAME_PATTERN def read_input(message): if sys.version_info[0] >= 3: return input(message) else: # This is not defined in python3, and the linter will thus # throw an undefined-variable error on this line. # Suppress it here. return raw_input(message) # pylint: disable=E0602 class DeprovisionAction(object): def __init__(self, func, args=None, kwargs=None): if args is None: args = [] if kwargs is None: kwargs = {} self.func = func self.args = args self.kwargs = kwargs def invoke(self): self.func(*self.args, **self.kwargs) class DeprovisionHandler(object): def __init__(self): self.osutil = get_osutil() self.protocol_util = get_protocol_util() self.actions_running = False signal.signal(signal.SIGINT, self.handle_interrupt_signal) def del_root_password(self, warnings, actions): warnings.append("WARNING! root password will be disabled. " "You will not be able to login as root.") actions.append(DeprovisionAction(self.osutil.del_root_password)) def del_user(self, warnings, actions): try: ovfenv = self.protocol_util.get_ovf_env() except ProtocolError: warnings.append("WARNING! ovf-env.xml is not found.") warnings.append("WARNING! Skip delete user.") return username = ovfenv.username warnings.append(("WARNING! 
{0} account and entire home directory " "will be deleted.").format(username)) actions.append(DeprovisionAction(self.osutil.del_account, [username])) def regen_ssh_host_key(self, warnings, actions): warnings.append("WARNING! All SSH host key pairs will be deleted.") actions.append(DeprovisionAction(fileutil.rm_files, [conf.get_ssh_key_glob()])) def stop_agent_service(self, warnings, actions): warnings.append("WARNING! The waagent service will be stopped.") actions.append(DeprovisionAction(self.osutil.stop_agent_service)) def del_dirs(self, warnings, actions): # pylint: disable=W0613 dirs = [conf.get_lib_dir(), conf.get_ext_log_dir()] actions.append(DeprovisionAction(fileutil.rm_dirs, dirs)) def del_files(self, warnings, actions): # pylint: disable=W0613 files = ['/root/.bash_history', conf.get_agent_log_file()] actions.append(DeprovisionAction(fileutil.rm_files, files)) # For OpenBSD actions.append(DeprovisionAction(fileutil.rm_files, ["/etc/random.seed", "/var/db/host.random", "/etc/isakmpd/local.pub", "/etc/isakmpd/private/local.key", "/etc/iked/private/local.key", "/etc/iked/local.pub"])) def del_resolv(self, warnings, actions): warnings.append("WARNING! /etc/resolv.conf will be deleted.") files_to_del = ["/etc/resolv.conf"] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) def del_dhcp_lease(self, warnings, actions): warnings.append("WARNING! Cached DHCP leases will be deleted.") dirs_to_del = ["/var/lib/dhclient", "/var/lib/dhcpcd", "/var/lib/dhcp"] actions.append(DeprovisionAction(fileutil.rm_dirs, dirs_to_del)) # For FreeBSD and OpenBSD actions.append(DeprovisionAction(fileutil.rm_files, ["/var/db/dhclient.leases.*"])) # For FreeBSD, NM controlled actions.append(DeprovisionAction(fileutil.rm_files, ["/var/lib/NetworkManager/dhclient-*.lease"])) # For Ubuntu >= 18.04, using systemd-networkd actions.append(DeprovisionAction(fileutil.rm_files, ["/run/systemd/netif/leases/*"])) def del_ext_handler_files(self, warnings, actions): # pylint: disable=W0613 ext_dirs = [d for d in os.listdir(conf.get_lib_dir()) if os.path.isdir(os.path.join(conf.get_lib_dir(), d)) and re.match(HANDLER_COMPLETE_NAME_PATTERN, d) is not None and not version.is_agent_path(d)] for ext_dir in ext_dirs: ext_base = os.path.join(conf.get_lib_dir(), ext_dir) files = glob.glob(os.path.join(ext_base, 'status', '*.status')) files += glob.glob(os.path.join(ext_base, 'config', '*.settings')) files += glob.glob(os.path.join(ext_base, 'config', 'HandlerStatus')) files += glob.glob(os.path.join(ext_base, 'mrseq')) if len(files) > 0: actions.append(DeprovisionAction(fileutil.rm_files, files)) def del_lib_dir_files(self, warnings, actions): # pylint: disable=W0613 known_files = [ 'HostingEnvironmentConfig.xml', 'Incarnation', 'partition', 'Protocol', 'SharedConfig.xml', 'WireServerEndpoint', 'published_hostname', 'fast_track.json', 'initial_goal_state', 'rsm_update.json' ] known_files_glob = [ 'Extensions.*.xml', 'ExtensionsConfig.*.xml', 'GoalState.*.xml' ] lib_dir = conf.get_lib_dir() files = [f for f in \ [os.path.join(lib_dir, kf) for kf in known_files] \ if os.path.isfile(f)] for p in known_files_glob: files += glob.glob(os.path.join(lib_dir, p)) if len(files) > 0: actions.append(DeprovisionAction(fileutil.rm_files, files)) def reset_hostname(self, warnings, actions): # pylint: disable=W0613 localhost = ["localhost.localdomain"] actions.append(DeprovisionAction(self.osutil.set_hostname, localhost)) actions.append(DeprovisionAction(self.osutil.set_dhcp_hostname, localhost)) def setup(self, deluser): warnings 
= [] actions = [] self.stop_agent_service(warnings, actions) if conf.get_regenerate_ssh_host_key(): self.regen_ssh_host_key(warnings, actions) self.del_dhcp_lease(warnings, actions) self.reset_hostname(warnings, actions) if conf.get_delete_root_password(): self.del_root_password(warnings, actions) self.del_dirs(warnings, actions) self.del_files(warnings, actions) self.del_resolv(warnings, actions) if deluser: self.del_user(warnings, actions) self.del_persist_firewall_rules(actions) self.remove_agent_cgroup_config(actions) return warnings, actions def setup_changed_unique_id(self): warnings = [] actions = [] self.del_dhcp_lease(warnings, actions) self.del_lib_dir_files(warnings, actions) self.del_ext_handler_files(warnings, actions) self.del_persist_firewall_rules(actions) self.remove_agent_cgroup_config(actions) return warnings, actions def run(self, force=False, deluser=False): warnings, actions = self.setup(deluser) self.do_warnings(warnings) if self.do_confirmation(force=force): self.do_actions(actions) def run_changed_unique_id(self): ''' Clean-up files and directories that may interfere when the VM unique identifier has changed. While users *should* manually deprovision a VM, the files removed by this routine will help keep the agent from getting confused (since incarnation and extension settings, among other items, will no longer be monotonically increasing). ''' warnings, actions = self.setup_changed_unique_id() self.do_warnings(warnings) self.do_actions(actions) def do_actions(self, actions): self.actions_running = True for action in actions: action.invoke() self.actions_running = False def do_confirmation(self, force=False): if force: return True confirm = read_input("Do you want to proceed (y/n)") return True if confirm.lower().startswith('y') else False def do_warnings(self, warnings): for warning in warnings: print(warning) def handle_interrupt_signal(self, signum, frame): # pylint: disable=W0613 if not self.actions_running: print("Deprovision is interrupted.") sys.exit(0) print ('Deprovisioning may not be interrupted.') return @staticmethod def del_persist_firewall_rules(actions): agent_network_service_path = PersistFirewallRulesHandler.get_service_file_path() actions.append(DeprovisionAction(fileutil.rm_files, [agent_network_service_path, os.path.join(conf.get_lib_dir(), PersistFirewallRulesHandler.BINARY_FILE_NAME)])) @staticmethod def remove_agent_cgroup_config(actions): # Get all service drop in file paths agent_drop_in_path = systemd.get_agent_drop_in_path() slice_path = os.path.join(agent_drop_in_path, _AGENT_DROP_IN_FILE_SLICE) cpu_accounting_path = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_ACCOUNTING) cpu_quota_path = os.path.join(agent_drop_in_path, _DROP_IN_FILE_CPU_QUOTA) mem_accounting_path = os.path.join(agent_drop_in_path, _DROP_IN_FILE_MEMORY_ACCOUNTING) # Get log collector slice unit_file_install_path = systemd.get_unit_file_install_path() log_collector_slice_path = os.path.join(unit_file_install_path, LOGCOLLECTOR_SLICE) actions.append(DeprovisionAction(fileutil.rm_files, [slice_path, cpu_accounting_path, cpu_quota_path, mem_accounting_path, log_collector_slice_path])) Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/deprovision/factory.py000066400000000000000000000033401462617747000257370ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from distutils.version import LooseVersion as Version # pylint: disable=no-name-in-module, import-error from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION, DISTRO_FULL_NAME from .arch import ArchDeprovisionHandler from .clearlinux import ClearLinuxDeprovisionHandler from .coreos import CoreOSDeprovisionHandler from .default import DeprovisionHandler from .ubuntu import UbuntuDeprovisionHandler, Ubuntu1804DeprovisionHandler def get_deprovision_handler(distro_name=DISTRO_NAME, distro_version=DISTRO_VERSION, distro_full_name=DISTRO_FULL_NAME): if distro_name == "arch": return ArchDeprovisionHandler() if distro_name == "ubuntu": if Version(distro_version) >= Version('18.04'): return Ubuntu1804DeprovisionHandler() else: return UbuntuDeprovisionHandler() if distro_name in ("flatcar", "coreos"): return CoreOSDeprovisionHandler() if "Clear Linux" in distro_full_name: return ClearLinuxDeprovisionHandler() # pylint: disable=E1120 return DeprovisionHandler() Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/deprovision/ubuntu.py000066400000000000000000000042141462617747000256130ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision.default import DeprovisionHandler, \ DeprovisionAction class UbuntuDeprovisionHandler(DeprovisionHandler): def __init__(self): # pylint: disable=W0235 super(UbuntuDeprovisionHandler, self).__init__() def del_resolv(self, warnings, actions): if os.path.realpath( '/etc/resolv.conf') != '/run/resolvconf/resolv.conf': warnings.append("WARNING! /etc/resolv.conf will be deleted.") files_to_del = ["/etc/resolv.conf"] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) else: warnings.append("WARNING! /etc/resolvconf/resolv.conf.d/tail " "and /etc/resolvconf/resolv.conf.d/original will " "be deleted.") files_to_del = ["/etc/resolvconf/resolv.conf.d/tail", "/etc/resolvconf/resolv.conf.d/original"] actions.append(DeprovisionAction(fileutil.rm_files, files_to_del)) class Ubuntu1804DeprovisionHandler(UbuntuDeprovisionHandler): def __init__(self): # pylint: disable=W0235 super(Ubuntu1804DeprovisionHandler, self).__init__() def del_resolv(self, warnings, actions): # no changes will be made to /etc/resolv.conf warnings.append("WARNING! 
/etc/resolv.conf will NOT be removed, this is a behavior change to earlier " "versions of Ubuntu.") Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/provision/000077500000000000000000000000001462617747000234055ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/provision/__init__.py000066400000000000000000000012751462617747000255230ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.pa.provision.factory import get_provision_handler Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/provision/cloudinit.py000066400000000000000000000143771462617747000257650ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import os.path import time from datetime import datetime import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil # pylint: disable=W0611 from azurelinuxagent.common.event import elapsed_milliseconds, WALAEventOperation # pylint: disable=W0611 from azurelinuxagent.common.exception import ProvisionError, ProtocolError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.util import OVF_FILE_NAME from azurelinuxagent.common.protocol.ovfenv import OvfEnv from azurelinuxagent.pa.provision.default import ProvisionHandler from azurelinuxagent.pa.provision.cloudinitdetect import cloud_init_is_enabled class CloudInitProvisionHandler(ProvisionHandler): def __init__(self): # pylint: disable=W0235 super(CloudInitProvisionHandler, self).__init__() def run(self): try: if super(CloudInitProvisionHandler, self).check_provisioned_file(): logger.info("Provisioning already completed, skipping.") return utc_start = datetime.utcnow() logger.info("Running CloudInit provisioning handler") self.wait_for_ovfenv() self.protocol_util.get_protocol() # Trigger protocol detection self.report_not_ready("Provisioning", "Starting") thumbprint = self.wait_for_ssh_host_key() # pylint: disable=W0612 self.write_provisioned() logger.info("Finished provisioning") self.report_ready() self.report_event("Provisioning with cloud-init succeeded ({0}s)".format(self._get_uptime_seconds()), is_success=True, duration=elapsed_milliseconds(utc_start)) except ProvisionError as e: msg = "Provisioning with cloud-init failed: {0} ({1}s)".format(ustr(e), self._get_uptime_seconds()) logger.error(msg) self.report_not_ready("ProvisioningFailed", ustr(e)) self.report_event(msg) return def wait_for_ovfenv(self, max_retry=1800, sleep_time=1): """ Wait for cloud-init to copy ovf-env.xml file from provision ISO """ ovf_file_path = os.path.join(conf.get_lib_dir(), OVF_FILE_NAME) logging_interval = 10 max_logging_interval = 320 for retry in range(0, max_retry): if os.path.isfile(ovf_file_path): try: ovf_env = OvfEnv(fileutil.read_file(ovf_file_path)) self.handle_provision_guest_agent(ovf_env.provision_guest_agent) return except ProtocolError as pe: raise ProvisionError("OVF xml could not be parsed " "[{0}]: {1}".format(ovf_file_path, ustr(pe))) else: if retry < max_retry - 1: if retry % logging_interval == 0: logger.info( "Waiting for cloud-init to copy ovf-env.xml to {0} " "[{1} retries remaining, " "sleeping {2}s between retries]".format(ovf_file_path, max_retry - retry, sleep_time)) if not cloud_init_is_enabled(): logger.warn("cloud-init does not appear to be enabled") logging_interval = min(logging_interval * 2, max_logging_interval) time.sleep(sleep_time) raise ProvisionError("Giving up, ovf-env.xml was not copied to {0} " "after {1}s".format(ovf_file_path, max_retry * sleep_time)) def wait_for_ssh_host_key(self, max_retry=1800, sleep_time=1): """ Wait for cloud-init to generate ssh host key """ keypair_type = conf.get_ssh_host_keypair_type() # pylint: disable=W0612 path = conf.get_ssh_key_public_path() logging_interval = 10 max_logging_interval = 320 for retry in range(0, max_retry): if os.path.isfile(path): logger.info("ssh host key found at: {0}".format(path)) try: thumbprint = self.get_ssh_host_key_thumbprint(chk_err=False) logger.info("Thumbprint obtained from : {0}".format(path)) return thumbprint except 
ProvisionError: logger.warn("Could not get thumbprint from {0}".format(path)) if retry < max_retry - 1: if retry % logging_interval == 0: logger.info("Waiting for ssh host key be generated at {0} " "[{1} attempts remaining, " "sleeping {2}s between retries]".format(path, max_retry - retry, sleep_time)) if not cloud_init_is_enabled(): logger.warn("cloud-init does not appear to be running") logging_interval = min(logging_interval * 2, max_logging_interval) time.sleep(sleep_time) raise ProvisionError("Giving up, ssh host key was not found at {0} " "after {1}s".format(path, max_retry * sleep_time)) Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/provision/cloudinitdetect.py000066400000000000000000000035261462617747000271500ustar00rootroot00000000000000"""Module for detecting the existence of cloud-init""" import subprocess import azurelinuxagent.common.logger as logger def _cloud_init_is_enabled_systemd(): """ Determine whether or not cloud-init is enabled on a systemd machine. Args: None Returns: bool: True if cloud-init is enabled, False if otherwise. """ try: systemctl_output = subprocess.check_output([ 'systemctl', 'is-enabled', 'cloud-init-local.service' ], stderr=subprocess.STDOUT).decode('utf-8').replace('\n', '') unit_is_enabled = systemctl_output == 'enabled' # pylint: disable=broad-except except Exception as exc: logger.info('Unable to get cloud-init enabled status from systemctl: {0}'.format(exc)) unit_is_enabled = False return unit_is_enabled def _cloud_init_is_enabled_service(): """ Determine whether or not cloud-init is enabled on a non-systemd machine. Args: None Returns: bool: True if cloud-init is enabled, False if otherwise. """ try: subprocess.check_output([ 'service', 'cloud-init', 'status' ], stderr=subprocess.STDOUT) unit_is_enabled = True # pylint: disable=broad-except except Exception as exc: logger.info('Unable to get cloud-init enabled status from service: {0}'.format(exc)) unit_is_enabled = False return unit_is_enabled def cloud_init_is_enabled(): """ Determine whether or not cloud-init is enabled. Args: None Returns: bool: True if cloud-init is enabled, False if otherwise. """ unit_is_enabled = _cloud_init_is_enabled_systemd() or _cloud_init_is_enabled_service() logger.info('cloud-init is enabled: {0}'.format(unit_is_enabled)) return unit_is_enabled Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/provision/default.py000066400000000000000000000266071462617747000254160ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # """ Provision handler """ import os import os.path import re import time from datetime import datetime import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.future import ustr from azurelinuxagent.common.event import add_event, WALAEventOperation, \ elapsed_milliseconds from azurelinuxagent.common.exception import ProvisionError, ProtocolError, \ OSUtilError from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.protocol.restapi import ProvisionStatus from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.version import AGENT_NAME from azurelinuxagent.pa.provision.cloudinitdetect import cloud_init_is_enabled CUSTOM_DATA_FILE = "CustomData" CLOUD_INIT_PATTERN = b".*/bin/cloud-init.*" CLOUD_INIT_REGEX = re.compile(CLOUD_INIT_PATTERN) PROVISIONED_FILE = 'provisioned' class ProvisionHandler(object): def __init__(self): self.osutil = get_osutil() self.protocol_util = get_protocol_util() def run(self): if not conf.get_provision_enabled(): logger.info("Provisioning is disabled, skipping.") self.write_provisioned() self.report_ready() return try: utc_start = datetime.utcnow() thumbprint = None # pylint: disable=W0612 if self.check_provisioned_file(): logger.info("Provisioning already completed, skipping.") return logger.info("Running default provisioning handler") if cloud_init_is_enabled(): raise ProvisionError("cloud-init appears to be installed and enabled, " "this is not expected, cannot continue") logger.info("Copying ovf-env.xml") ovf_env = self.protocol_util.copy_ovf_env() self.protocol_util.get_protocol() # Trigger protocol detection self.report_not_ready("Provisioning", "Starting") logger.info("Starting provisioning") self.provision(ovf_env) thumbprint = self.reg_ssh_host_key() self.osutil.restart_ssh_service() self.write_provisioned() self.report_event("Provisioning succeeded ({0}s)".format(self._get_uptime_seconds()), is_success=True, duration=elapsed_milliseconds(utc_start)) self.handle_provision_guest_agent(ovf_env.provision_guest_agent) self.report_ready() logger.info("Provisioning complete") except (ProtocolError, ProvisionError) as e: msg = "Provisioning failed: {0} ({1}s)".format(ustr(e), self._get_uptime_seconds()) logger.error(msg) self.report_not_ready("ProvisioningFailed", ustr(e)) self.report_event(msg, is_success=False) return @staticmethod def _get_uptime_seconds(): try: with open('/proc/uptime') as fh: uptime, _ = fh.readline().split() return uptime except: # pylint: disable=W0702 return 0 def reg_ssh_host_key(self): keypair_type = conf.get_ssh_host_keypair_type() if conf.get_regenerate_ssh_host_key(): fileutil.rm_files(conf.get_ssh_key_glob()) if conf.get_ssh_host_keypair_mode() == "auto": # pylint: disable=W0105 ''' The -A option generates all supported key types. This is supported since OpenSSH 5.9 (2011). ''' # pylint: enable=W0105 shellutil.run("ssh-keygen -A") else: keygen_cmd = "ssh-keygen -N '' -t {0} -f {1}" shellutil.run(keygen_cmd. 
format(keypair_type, conf.get_ssh_key_private_path())) return self.get_ssh_host_key_thumbprint() def get_ssh_host_key_thumbprint(self, chk_err=True): cmd = "ssh-keygen -lf {0}".format(conf.get_ssh_key_public_path()) ret = shellutil.run_get_output(cmd, chk_err=chk_err) if ret[0] == 0: return ret[1].rstrip().split()[1].replace(':', '') else: raise ProvisionError(("Failed to generate ssh host key: " "ret={0}, out= {1}").format(ret[0], ret[1])) @staticmethod def provisioned_file_path(): return os.path.join(conf.get_lib_dir(), PROVISIONED_FILE) @staticmethod def is_provisioned(): """ A VM is considered provisioned *anytime* the provisioning sentinel file exists and not provisioned *anytime* the file is absent. """ return os.path.isfile(ProvisionHandler.provisioned_file_path()) def check_provisioned_file(self): """ If the VM was provisioned using an agent that did not record the VM unique identifier, the provisioning file will be re-written to include the identifier. A warning is logged *if* the VM unique identifier has changed since VM was provisioned. Returns False if the VM has not been provisioned. """ if not ProvisionHandler.is_provisioned(): return False s = fileutil.read_file(ProvisionHandler.provisioned_file_path()).strip() if not self.osutil.is_current_instance_id(s): if len(s) > 0: msg = "VM is provisioned, but the VM unique identifier has changed. This indicates the VM may be " \ "created from an image that was not properly deprovisioned or generalized, which can result in " \ "unexpected behavior from the guest agent -- clearing cached state" logger.warn(msg) self.report_event(msg) from azurelinuxagent.pa.deprovision \ import get_deprovision_handler deprovision_handler = get_deprovision_handler() deprovision_handler.run_changed_unique_id() self.write_provisioned() self.report_ready() return True def write_provisioned(self): fileutil.write_file( ProvisionHandler.provisioned_file_path(), get_osutil().get_instance_id()) @staticmethod def write_agent_disabled(): logger.warn("Disabling guest agent in accordance with ovf-env.xml") fileutil.write_file(conf.get_disable_agent_file_path(), '') def handle_provision_guest_agent(self, provision_guest_agent): self.report_event(message=provision_guest_agent, is_success=True, duration=0, operation=WALAEventOperation.ProvisionGuestAgent) if provision_guest_agent and provision_guest_agent.lower() == 'false': self.write_agent_disabled() def provision(self, ovfenv): logger.info("Handle ovf-env.xml.") try: logger.info("Set hostname [{0}]".format(ovfenv.hostname)) self.osutil.set_hostname(ovfenv.hostname) logger.info("Publish hostname [{0}]".format(ovfenv.hostname)) self.osutil.publish_hostname(ovfenv.hostname) self.config_user_account(ovfenv) self.save_customdata(ovfenv) if conf.get_delete_root_password(): self.osutil.del_root_password() except OSUtilError as e: raise ProvisionError("Failed to provision: {0}".format(ustr(e))) def config_user_account(self, ovfenv): logger.info("Create user account if not exists") self.osutil.useradd(ovfenv.username) if ovfenv.user_password is not None: logger.info("Set user password.") crypt_id = conf.get_password_cryptid() salt_len = conf.get_password_crypt_salt_len() self.osutil.chpasswd(ovfenv.username, ovfenv.user_password, crypt_id=crypt_id, salt_len=salt_len) logger.info("Configure sudoer") self.osutil.conf_sudoer(ovfenv.username, nopasswd=ovfenv.user_password is None) logger.info("Configure sshd") self.osutil.conf_sshd(ovfenv.disable_ssh_password_auth) self.deploy_ssh_pubkeys(ovfenv) 
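        # Editorial sketch (hypothetical helper logic, not the distro-specific osutil
        # implementation): deploying a public key for the provisioned user typically
        # amounts to appending it to ~/.ssh/authorized_keys with restrictive permissions:
        #
        #   home = pwd.getpwnam(ovfenv.username).pw_dir
        #   path = os.path.join(home, ".ssh", "authorized_keys")
        #   fileutil.append_file(path, pubkey + "\n")
        #   os.chmod(path, 0o600)
        #
        # The real behavior lives in osutil.deploy_ssh_pubkey(), which also resolves
        # certificate-thumbprint key references from ovf-env.xml.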
self.deploy_ssh_keypairs(ovfenv) def save_customdata(self, ovfenv): customdata = ovfenv.customdata if customdata is None: return lib_dir = conf.get_lib_dir() if conf.get_decode_customdata() or conf.get_execute_customdata(): logger.info("Decode custom data") customdata = self.osutil.decode_customdata(customdata) logger.info("Save custom data") customdata_file = os.path.join(lib_dir, CUSTOM_DATA_FILE) fileutil.write_file(customdata_file, customdata) if conf.get_execute_customdata(): start = time.time() logger.info("Execute custom data") os.chmod(customdata_file, 0o700) shellutil.run(customdata_file) add_event(name=AGENT_NAME, duration=int(time.time() - start), is_success=True, op=WALAEventOperation.CustomData) def deploy_ssh_pubkeys(self, ovfenv): for pubkey in ovfenv.ssh_pubkeys: logger.info("Deploy ssh public key.") self.osutil.deploy_ssh_pubkey(ovfenv.username, pubkey) def deploy_ssh_keypairs(self, ovfenv): for keypair in ovfenv.ssh_keypairs: logger.info("Deploy ssh key pairs.") self.osutil.deploy_ssh_keypair(ovfenv.username, keypair) def report_event(self, message, is_success=False, duration=0, operation=WALAEventOperation.Provision): add_event(name=AGENT_NAME, message=message, duration=duration, is_success=is_success, op=operation) def report_not_ready(self, sub_status, description): status = ProvisionStatus(status="NotReady", subStatus=sub_status, description=description) try: protocol = self.protocol_util.get_protocol() protocol.report_provision_status(status) except ProtocolError as e: logger.error("Reporting NotReady failed: {0}", e) self.report_event(ustr(e)) def report_ready(self): status = ProvisionStatus(status="Ready") try: protocol = self.protocol_util.get_protocol() protocol.report_provision_status(status) except ProtocolError as e: logger.error("Reporting Ready failed: {0}", e) self.report_event(ustr(e)) Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/provision/factory.py000066400000000000000000000030441462617747000254270ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import azurelinuxagent.common.conf as conf from azurelinuxagent.common import logger from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION, \ DISTRO_FULL_NAME from .default import ProvisionHandler from .cloudinit import CloudInitProvisionHandler, cloud_init_is_enabled def get_provision_handler(distro_name=DISTRO_NAME, # pylint: disable=W0613 distro_version=DISTRO_VERSION, # pylint: disable=W0613 distro_full_name=DISTRO_FULL_NAME): # pylint: disable=W0613 provisioning_agent = conf.get_provisioning_agent() if provisioning_agent == 'cloud-init' or ( provisioning_agent == 'auto' and cloud_init_is_enabled()): logger.info('Using cloud-init for provisioning') return CloudInitProvisionHandler() logger.info('Using waagent for provisioning') return ProvisionHandler() Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/rdma/000077500000000000000000000000001462617747000223005ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/rdma/__init__.py000066400000000000000000000012631462617747000244130ustar00rootroot00000000000000# Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.pa.rdma.factory import get_rdma_handler Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/rdma/centos.py000066400000000000000000000243351462617747000241540ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob # pylint: disable=W0611 import os import re import time import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.pa.rdma.rdma import RDMAHandler class CentOSRDMAHandler(RDMAHandler): rdma_user_mode_package_name = 'microsoft-hyper-v-rdma' rdma_kernel_mode_package_name = 'kmod-microsoft-hyper-v-rdma' rdma_wrapper_package_name = 'msft-rdma-drivers' hyper_v_package_name = "hypervkvpd" hyper_v_package_name_new = "microsoft-hyper-v" version_major = None version_minor = None def __init__(self, distro_version): v = distro_version.split('.') if len(v) < 2: raise Exception('Unexpected centos version: %s' % distro_version) self.version_major, self.version_minor = v[0], v[1] def install_driver(self): """ Install the KVP daemon and the appropriate RDMA driver package for the RDMA firmware. 
""" # Check and install the KVP deamon if it not running time.sleep(10) # give some time for the hv_hvp_daemon to start up. kvpd_running = RDMAHandler.is_kvp_daemon_running() logger.info('RDMA: kvp daemon running: %s' % kvpd_running) if not kvpd_running: self.check_or_install_kvp_daemon() time.sleep(10) # wait for post-install reboot or kvp to come up # Find out RDMA firmware version and see if the existing package needs # updating or if the package is missing altogether (and install it) fw_version = self.get_rdma_version() if not fw_version: raise Exception('Cannot determine RDMA firmware version') logger.info("RDMA: found firmware version: {0}".format(fw_version)) fw_version = self.get_int_rdma_version(fw_version) installed_pkg = self.get_rdma_package_info() if installed_pkg: logger.info( 'RDMA: driver package present: {0}'.format(installed_pkg)) if self.is_rdma_package_up_to_date(installed_pkg, fw_version): logger.info('RDMA: driver package is up-to-date') return else: logger.info('RDMA: driver package needs updating') self.update_rdma_package(fw_version) else: logger.info('RDMA: driver package is NOT installed') self.update_rdma_package(fw_version) def is_rdma_package_up_to_date(self, pkg, fw_version): # Example match (pkg name, -, followed by 3 segments, fw_version and -): # - pkg=microsoft-hyper-v-rdma-4.1.0.142-20160323.x86_64 # - fw_version=142 pattern = '{0}-(\d+\.){{3,}}({1})-'.format(self.rdma_user_mode_package_name, fw_version) # pylint: disable=W1401 return re.match(pattern, pkg) @staticmethod def get_int_rdma_version(version): s = version.split('.') if len(s) == 0: raise Exception('Unexpected RDMA firmware version: "%s"' % version) return s[0] def get_rdma_package_info(self): """ Returns the installed rdma package name or None """ ret, output = shellutil.run_get_output( 'rpm -q %s' % self.rdma_user_mode_package_name, chk_err=False) if ret != 0: return None return output def update_rdma_package(self, fw_version): logger.info("RDMA: updating RDMA packages") self.refresh_repos() self.force_install_package(self.rdma_wrapper_package_name) self.install_rdma_drivers(fw_version) def force_install_package(self, pkg_name): """ Attempts to remove existing package and installs the package """ logger.info('RDMA: Force installing package: %s' % pkg_name) if self.uninstall_package(pkg_name) != 0: logger.info('RDMA: Erasing package failed but will continue') if self.install_package(pkg_name) != 0: raise Exception('Failed to install package "{0}"'.format(pkg_name)) logger.info('RDMA: installation completed: %s' % pkg_name) @staticmethod def uninstall_package(pkg_name): return shellutil.run('yum erase -y -q {0}'.format(pkg_name)) @staticmethod def install_package(pkg_name): return shellutil.run('yum install -y -q {0}'.format(pkg_name)) def refresh_repos(self): logger.info("RDMA: refreshing yum repos") if shellutil.run('yum clean all') != 0: raise Exception('Cleaning yum repositories failed') if shellutil.run('yum updateinfo') != 0: raise Exception('Failed to act on yum repo update information') logger.info("RDMA: repositories refreshed") def install_rdma_drivers(self, fw_version): """ Installs the drivers from /opt/rdma/rhel[Major][Minor] directory, particularly the microsoft-hyper-v-rdma-* kmod-* and (no debuginfo or src). Tries to uninstall them first. 
""" pkg_dir = '/opt/microsoft/rdma/rhel{0}{1}'.format( self.version_major, self.version_minor) logger.info('RDMA: pkgs dir: {0}'.format(pkg_dir)) if not os.path.isdir(pkg_dir): raise Exception('RDMA packages directory %s is missing' % pkg_dir) pkgs = os.listdir(pkg_dir) logger.info('RDMA: found %d files in package directory' % len(pkgs)) # Uninstal KVP daemon first (if exists) self.uninstall_kvp_driver_package_if_exists() # Install kernel mode driver (kmod-microsoft-hyper-v-rdma-*) kmod_pkg = self.get_file_by_pattern( pkgs, "%s-(\d+\.){3,}(%s)-\d{8}\.x86_64.rpm" % (self.rdma_kernel_mode_package_name, fw_version)) # pylint: disable=W1401 if not kmod_pkg: raise Exception("RDMA kernel mode package not found") kmod_pkg_path = os.path.join(pkg_dir, kmod_pkg) self.uninstall_pkg_and_install_from( 'kernel mode', self.rdma_kernel_mode_package_name, kmod_pkg_path) # Install user mode driver (microsoft-hyper-v-rdma-*) umod_pkg = self.get_file_by_pattern( pkgs, "%s-(\d+\.){3,}(%s)-\d{8}\.x86_64.rpm" % (self.rdma_user_mode_package_name, fw_version)) # pylint: disable=W1401 if not umod_pkg: raise Exception("RDMA user mode package not found") umod_pkg_path = os.path.join(pkg_dir, umod_pkg) self.uninstall_pkg_and_install_from( 'user mode', self.rdma_user_mode_package_name, umod_pkg_path) logger.info("RDMA: driver packages installed") if not self.load_driver_module() or not self.is_driver_loaded(): logger.info("RDMA: driver module is not loaded; reboot required") self.reboot_system() else: logger.info("RDMA: kernel module is loaded") @staticmethod def get_file_by_pattern(file_list, pattern): for l in file_list: if re.match(pattern, l): return l return None def uninstall_pkg_and_install_from(self, pkg_type, pkg_name, pkg_path): logger.info( "RDMA: Processing {0} driver: {1}".format(pkg_type, pkg_path)) logger.info("RDMA: Try to uninstall existing version: %s" % pkg_name) if self.uninstall_package(pkg_name) == 0: logger.info("RDMA: Successfully uninstaled %s" % pkg_name) logger.info( "RDMA: Installing {0} package from {1}".format(pkg_type, pkg_path)) if self.install_package(pkg_path) != 0: raise Exception( "Failed to install RDMA {0} package".format(pkg_type)) @staticmethod def is_package_installed(pkg): """Runs rpm -q and checks return code to find out if a package is installed""" return shellutil.run("rpm -q %s" % pkg, chk_err=False) == 0 def uninstall_kvp_driver_package_if_exists(self): logger.info('RDMA: deleting existing kvp driver packages') kvp_pkgs = [self.hyper_v_package_name, self.hyper_v_package_name_new] for kvp_pkg in kvp_pkgs: if not self.is_package_installed(kvp_pkg): logger.info( "RDMA: kvp package %s does not exist, skipping" % kvp_pkg) else: logger.info('RDMA: erasing kvp package "%s"' % kvp_pkg) if shellutil.run("yum erase -q -y %s" % kvp_pkg, chk_err=False) == 0: logger.info("RDMA: successfully erased package") else: logger.error("RDMA: failed to erase package") def check_or_install_kvp_daemon(self): """Checks if kvp daemon package is installed, if not installs the package and reboots the machine. 
""" logger.info("RDMA: Checking kvp daemon packages.") kvp_pkgs = [self.hyper_v_package_name, self.hyper_v_package_name_new] for pkg in kvp_pkgs: logger.info("RDMA: Checking if package %s installed" % pkg) installed = self.is_package_installed(pkg) if installed: raise Exception('RDMA: package %s is installed, but the kvp daemon is not running' % pkg) kvp_pkg_to_install=self.hyper_v_package_name logger.info("RDMA: no kvp drivers installed, will install '%s'" % kvp_pkg_to_install) logger.info("RDMA: trying to install kvp package '%s'" % kvp_pkg_to_install) if self.install_package(kvp_pkg_to_install) != 0: raise Exception("RDMA: failed to install kvp daemon package '%s'" % kvp_pkg_to_install) logger.info("RDMA: package '%s' successfully installed" % kvp_pkg_to_install) logger.info("RDMA: Machine will now be rebooted.") self.reboot_system() Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/rdma/factory.py000066400000000000000000000035461462617747000243310ustar00rootroot00000000000000# Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from distutils.version import LooseVersion as Version # pylint: disable=no-name-in-module, import-error import azurelinuxagent.common.logger as logger from azurelinuxagent.pa.rdma.rdma import RDMAHandler from azurelinuxagent.common.version import DISTRO_FULL_NAME, DISTRO_VERSION from .centos import CentOSRDMAHandler from .suse import SUSERDMAHandler from .ubuntu import UbuntuRDMAHandler def get_rdma_handler( distro_full_name=DISTRO_FULL_NAME, distro_version=DISTRO_VERSION ): """Return the handler object for RDMA driver handling""" if ( (distro_full_name == 'SUSE Linux Enterprise Server' or distro_full_name == 'SLES' or distro_full_name == 'SLE_HPC') and Version(distro_version) > Version('11') ): return SUSERDMAHandler() if distro_full_name in ('CentOS Linux', 'CentOS', 'Red Hat Enterprise Linux Server', 'AlmaLinux', 'CloudLinux', 'Rocky Linux'): return CentOSRDMAHandler(distro_version) if distro_full_name == 'Ubuntu': return UbuntuRDMAHandler() logger.info("No RDMA handler exists for distro='{0}' version='{1}'", distro_full_name, distro_version) return RDMAHandler() Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/rdma/rdma.py000066400000000000000000000534251462617747000236060ustar00rootroot00000000000000# Windows Azure Linux Agent # # Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# """ Handle packages and modules to enable RDMA for IB networking """ import os import re import time import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.fileutil as fileutil import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.utils.textutil import parse_doc, find, getattrib dapl_config_paths = [ '/etc/dat.conf', '/etc/rdma/dat.conf', '/usr/local/etc/dat.conf' ] def setup_rdma_device(nd_version, shared_conf): logger.verbose("Parsing SharedConfig XML contents for RDMA details") xml_doc = parse_doc(shared_conf.xml_text) if xml_doc is None: logger.error("Could not parse SharedConfig XML document") return instance_elem = find(xml_doc, "Instance") if not instance_elem: logger.error("Could not find in SharedConfig document") return rdma_ipv4_addr = getattrib(instance_elem, "rdmaIPv4Address") if not rdma_ipv4_addr: logger.error( "Could not find rdmaIPv4Address attribute on Instance element of SharedConfig.xml document") return rdma_mac_addr = getattrib(instance_elem, "rdmaMacAddress") if not rdma_mac_addr: logger.error( "Could not find rdmaMacAddress attribute on Instance element of SharedConfig.xml document") return # add colons to the MAC address (e.g. 00155D33FF1D -> # 00:15:5D:33:FF:1D) rdma_mac_addr = ':'.join([rdma_mac_addr[i:i + 2] for i in range(0, len(rdma_mac_addr), 2)]) logger.info("Found RDMA details. IPv4={0} MAC={1}".format( rdma_ipv4_addr, rdma_mac_addr)) # Set up the RDMA device with collected informatino RDMADeviceHandler(rdma_ipv4_addr, rdma_mac_addr, nd_version).start() logger.info("RDMA: device is set up") return class RDMAHandler(object): driver_module_name = 'hv_network_direct' nd_version = None def get_rdma_version(self): # pylint: disable=R1710 """Retrieve the firmware version information from the system. This depends on information provided by the Linux kernel.""" if self.nd_version: return self.nd_version kvp_key_size = 512 kvp_value_size = 2048 driver_info_source = '/var/lib/hyperv/.kvp_pool_0' base_kernel_err_msg = 'Kernel does not provide the necessary ' base_kernel_err_msg += 'information or the kvp daemon is not running.' if not os.path.isfile(driver_info_source): error_msg = 'RDMA: Source file "%s" does not exist. ' error_msg += base_kernel_err_msg logger.error(error_msg % driver_info_source) return with open(driver_info_source, "rb") as pool_file: while True: key = pool_file.read(kvp_key_size) value = pool_file.read(kvp_value_size) if key and value: key_0 = key.partition(b"\x00")[0] if key_0: key_0 = key_0.decode() value_0 = value.partition(b"\x00")[0] if value_0: value_0 = value_0.decode() if key_0 == "NdDriverVersion": self.nd_version = value_0 return self.nd_version else: break error_msg = 'RDMA: NdDriverVersion not found in "%s"' logger.error(error_msg % driver_info_source) return @staticmethod def is_kvp_daemon_running(): """Look for kvp daemon names in ps -ef output and return True/False """ # for centos, the hypervkvpd and the hv_kvp_daemon both are ok. 
# for suse, it uses hv_kvp_daemon kvp_daemon_names = ['hypervkvpd', 'hv_kvp_daemon'] exitcode, ps_out = shellutil.run_get_output("ps -ef") if exitcode != 0: raise Exception('RDMA: ps -ef failed: %s' % ps_out) for n in kvp_daemon_names: if n in ps_out: logger.info('RDMA: kvp daemon (%s) is running' % n) return True else: logger.verbose('RDMA: kvp daemon (%s) is not running' % n) return False def load_driver_module(self): """Load the kernel driver, this depends on the proper driver to be installed with the install_driver() method""" logger.info("RDMA: probing module '%s'" % self.driver_module_name) result = shellutil.run('modprobe --first-time %s' % self.driver_module_name) if result != 0: error_msg = 'Could not load "%s" kernel module. ' error_msg += 'Run "modprobe --first-time %s" as root for more details' logger.error( error_msg % (self.driver_module_name, self.driver_module_name) ) return False logger.info('RDMA: Loaded the kernel driver successfully.') return True def install_driver_if_needed(self): if self.nd_version: if conf.enable_check_rdma_driver(): self.install_driver() else: logger.info('RDMA: check RDMA driver is disabled, skip installing driver') else: logger.info('RDMA: skip installing driver when ndversion not present\n') def install_driver(self): """Install the driver. This is distribution specific and must be overwritten in the child implementation.""" logger.error('RDMAHandler.install_driver not implemented') def is_driver_loaded(self): """Check if the network module is loaded in kernel space""" cmd = 'lsmod | grep ^%s' % self.driver_module_name status, loaded_modules = shellutil.run_get_output(cmd) # pylint: disable=W0612 logger.info('RDMA: Checking if the module loaded.') if loaded_modules: logger.info('RDMA: module loaded.') return True logger.info('RDMA: module not loaded.') return False def reboot_system(self): """Reboot the system. This is required as the kernel module for the rdma driver cannot be unloaded with rmmod""" logger.info('RDMA: Rebooting system.') ret = shellutil.run('shutdown -r now') if ret != 0: logger.error('RDMA: Failed to reboot the system') dapl_config_paths = [ '/etc/dat.conf', '/etc/rdma/dat.conf', '/usr/local/etc/dat.conf'] class RDMADeviceHandler(object): """ Responsible for writing RDMA IP and MAC address to the /dev/hvnd_rdma interface. 
""" rdma_dev = '/dev/hvnd_rdma' sriov_dir = '/sys/class/infiniband' device_check_timeout_sec = 120 device_check_interval_sec = 1 ipoib_check_timeout_sec = 60 ipoib_check_interval_sec = 1 ipv4_addr = None mac_addr = None nd_version = None def __init__(self, ipv4_addr, mac_addr, nd_version): self.ipv4_addr = ipv4_addr self.mac_addr = mac_addr self.nd_version = nd_version def start(self): logger.info("RDMA: starting device processing.") self.process() logger.info("RDMA: completed device processing.") def process(self): try: if not self.nd_version: logger.info("RDMA: provisioning SRIOV RDMA device.") self.provision_sriov_rdma() else: logger.info("RDMA: provisioning Network Direct RDMA device.") self.provision_network_direct_rdma() except Exception as e: logger.error("RDMA: device processing failed: {0}".format(e)) def provision_network_direct_rdma(self): RDMADeviceHandler.update_dat_conf(dapl_config_paths, self.ipv4_addr) if not conf.enable_check_rdma_driver(): logger.info("RDMA: skip checking RDMA driver version") RDMADeviceHandler.update_network_interface(self.mac_addr, self.ipv4_addr) return skip_rdma_device = False module_name = "hv_network_direct" retcode, out = shellutil.run_get_output("modprobe -R %s" % module_name, chk_err=False) if retcode == 0: module_name = out.strip() else: logger.info("RDMA: failed to resolve module name. Use original name") retcode, out = shellutil.run_get_output("modprobe %s" % module_name) if retcode != 0: logger.error("RDMA: failed to load module %s" % module_name) return retcode, out = shellutil.run_get_output("modinfo %s" % module_name) if retcode == 0: version = re.search("version:\s+(\d+)\.(\d+)\.(\d+)\D", out, re.IGNORECASE) # pylint: disable=W1401 if version: v1 = int(version.groups(0)[0]) v2 = int(version.groups(0)[1]) if v1 > 4 or v1 == 4 and v2 > 0: logger.info("Skip setting /dev/hvnd_rdma on 4.1 or later") skip_rdma_device = True else: logger.info("RDMA: hv_network_direct driver version not present, assuming 4.0.x or older.") else: logger.warn("RDMA: failed to get module info on hv_network_direct.") if not skip_rdma_device: RDMADeviceHandler.wait_rdma_device( self.rdma_dev, self.device_check_timeout_sec, self.device_check_interval_sec) RDMADeviceHandler.write_rdma_config_to_device( self.rdma_dev, self.ipv4_addr, self.mac_addr) RDMADeviceHandler.update_network_interface(self.mac_addr, self.ipv4_addr) def provision_sriov_rdma(self): (key, value) = self.read_ipoib_data() if key: # provision multiple IP over IB addresses logger.info("RDMA: provisioning multiple IP over IB addresses") self.provision_sriov_multiple_ib(value) elif self.ipv4_addr: logger.info("RDMA: provisioning single IP over IB address") # provision a single IP over IB address RDMADeviceHandler.wait_any_rdma_device(self.sriov_dir, self.device_check_timeout_sec, self.device_check_interval_sec) RDMADeviceHandler.update_iboip_interface(self.ipv4_addr, self.ipoib_check_timeout_sec, self.ipoib_check_interval_sec) else: logger.info("RDMA: missing IP address") def read_ipoib_data(self) : # read from KVP pool 0 to figure out the IP over IB addresses kvp_key_size = 512 kvp_value_size = 2048 driver_info_source = '/var/lib/hyperv/.kvp_pool_0' if not os.path.isfile(driver_info_source): logger.error("RDMA: can't read KVP pool 0") return (None, None) key_0 = None value_0 = None with open(driver_info_source, "rb") as pool_file: while True: key = pool_file.read(kvp_key_size) value = pool_file.read(kvp_value_size) if key and value: key_0 = key.partition(b"\x00")[0] if key_0 : key_0 = key_0.decode() if 
key_0 == "IPoIB_Data": value_0 = value.partition(b"\x00")[0] if value_0 : value_0 = value_0.decode() break else: break if key_0 == "IPoIB_Data": return (key_0, value_0) return (None, None) def provision_sriov_multiple_ib(self, value) : mac_ip_array = [] values = value.split("|") num_ips = len(values) - 1 # values[0] tells how many IPs. Format - NUMPAIRS: match = re.match(r"NUMPAIRS:(\d+)", values[0]) if match: num = int(match.groups(0)[0]) if num != num_ips: logger.error("RDMA: multiple IPs reported num={0} actual number of IPs={1}".format(num, num_ips)) return else: logger.error("RDMA: failed to find number of IP addresses in {0}".format(values[0])) return for i in range(1, num_ips+1): # each MAC/IP entry is of format : match = re.match(r"([^:]+):(\d+\.\d+\.\d+\.\d+)", values[i]) if match: mac_addr = match.groups(0)[0] ipv4_addr = match.groups(0)[1] mac_ip_array.append((mac_addr, ipv4_addr)) else: logger.error("RDMA: failed to find MAC/IP address in {0}".format(values[i])) return # try to assign all MAC/IP addresses to IB interfaces # retry for up to 60 times, with 1 seconds delay between each retry = 60 while retry > 0: count = self.update_iboip_interfaces(mac_ip_array) if count == len(mac_ip_array): return time.sleep(1) retry -= 1 logger.error("RDMA: failed to set all IP over IB addresses") # Assign addresses to all IP over IB interfaces specified in mac_ip_array # Return the number of IP addresses successfully assigned def update_iboip_interfaces(self, mac_ip_array): net_dir = "/sys/class/net" nics = os.listdir(net_dir) count = 0 for nic in nics: # look for IBoIP interface of format ibXXX if not re.match(r"ib\w+", nic): continue mac_addr = None with open(os.path.join(net_dir, nic, "address")) as address_file: mac_addr = address_file.read() if not mac_addr: logger.error("RDMA: can't read address for device {0}".format(nic)) continue mac_addr = mac_addr.upper() match = re.match(r".+(\w\w):(\w\w):(\w\w):\w\w:\w\w:(\w\w):(\w\w):(\w\w)\n", mac_addr) if not match: logger.error("RDMA: failed to parse address for device {0} address {1}".format(nic, mac_addr)) continue # format an MAC address without : mac_addr = "" mac_addr = mac_addr.join(match.groups(0)) for mac_ip in mac_ip_array: if mac_ip[0] == mac_addr: ret = 0 try: # bring up the interface and set its IP address ip_command = ["ip", "link", "set", nic, "up"] shellutil.run_command(ip_command) ip_command = ["ip", "addr", "add", "{0}/16".format(mac_ip[1]), "dev", nic] shellutil.run_command(ip_command) except shellutil.CommandError as error: ret = error.returncode if ret == 0: logger.info("RDMA: set address {0} to device {1}".format(mac_ip[1], nic)) if ret and ret != 2: # return value 2 means the address is already set logger.error("RDMA: failed to set IP address {0} on device {1}".format(mac_ip[1], nic)) else: count += 1 break return count @staticmethod def update_iboip_interface(ipv4_addr, timeout_sec, check_interval_sec): logger.info("Wait for ib become available") total_retries = timeout_sec / check_interval_sec n = 0 found_ib = None while not found_ib and n < total_retries: ret, output = shellutil.run_get_output("ifconfig -a") if ret != 0: raise Exception("Failed to list network interfaces") found_ib = re.search(r"(ib\S+):", output, re.IGNORECASE) if found_ib: break time.sleep(check_interval_sec) n += 1 if not found_ib: raise Exception("ib is not available") ibname = found_ib.groups()[0] if shellutil.run("ifconfig {0} up".format(ibname)) != 0: raise Exception("Could not run ifconfig {0} up".format(ibname)) netmask = 16 
logger.info("RDMA: configuring IPv4 addr and netmask on ipoib interface") addr = '{0}/{1}'.format(ipv4_addr, netmask) if shellutil.run("ifconfig {0} {1}".format(ibname, addr)) != 0: raise Exception("Could not set addr to {0} on {1}".format(addr, ibname)) logger.info("RDMA: ipoib address and netmask configured on interface") @staticmethod def update_dat_conf(paths, ipv4_addr): """ Looks at paths for dat.conf file and updates the ip address for the infiniband interface. """ logger.info("Updating DAPL configuration file") for f in paths: logger.info("RDMA: trying {0}".format(f)) if not os.path.isfile(f): logger.info( "RDMA: DAPL config not found at {0}".format(f)) continue logger.info("RDMA: DAPL config is at: {0}".format(f)) cfg = fileutil.read_file(f) new_cfg = RDMADeviceHandler.replace_dat_conf_contents( cfg, ipv4_addr) fileutil.write_file(f, new_cfg) logger.info("RDMA: DAPL configuration is updated") return raise Exception("RDMA: DAPL configuration file not found at predefined paths") @staticmethod def replace_dat_conf_contents(cfg, ipv4_addr): old = "ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 \"\S+ 0\"" # pylint: disable=W1401 new = "ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 \"{0} 0\"".format( ipv4_addr) return re.sub(old, new, cfg) @staticmethod def write_rdma_config_to_device(path, ipv4_addr, mac_addr): data = RDMADeviceHandler.generate_rdma_config(ipv4_addr, mac_addr) logger.info( "RDMA: Updating device with configuration: {0}".format(data)) with open(path, "w") as f: logger.info("RDMA: Device opened for writing") f.write(data) logger.info("RDMA: Updated device with IPv4/MAC addr successfully") @staticmethod def generate_rdma_config(ipv4_addr, mac_addr): return 'rdmaMacAddress="{0}" rdmaIPv4Address="{1}"'.format(mac_addr, ipv4_addr) @staticmethod def wait_rdma_device(path, timeout_sec, check_interval_sec): logger.info("RDMA: waiting for device={0} timeout={1}s".format(path, timeout_sec)) total_retries = timeout_sec / check_interval_sec n = 0 while n < total_retries: if os.path.exists(path): logger.info("RDMA: device ready") return logger.verbose( "RDMA: device not ready, sleep {0}s".format(check_interval_sec)) time.sleep(check_interval_sec) n += 1 logger.error("RDMA device wait timed out") raise Exception("The device did not show up in {0} seconds ({1} retries)".format( timeout_sec, total_retries)) @staticmethod def wait_any_rdma_device(directory, timeout_sec, check_interval_sec): logger.info( "RDMA: waiting for any Infiniband device at directory={0} timeout={1}s".format( directory, timeout_sec)) total_retries = timeout_sec / check_interval_sec n = 0 while n < total_retries: r = os.listdir(directory) if r: logger.info("RDMA: device found in {0}".format(directory)) return logger.verbose( "RDMA: device not ready, sleep {0}s".format(check_interval_sec)) time.sleep(check_interval_sec) n += 1 logger.error("RDMA device wait timed out") raise Exception("The device did not show up in {0} seconds ({1} retries)".format( timeout_sec, total_retries)) @staticmethod def update_network_interface(mac_addr, ipv4_addr): netmask = 16 logger.info("RDMA: will update the network interface with IPv4/MAC") if_name = RDMADeviceHandler.get_interface_by_mac(mac_addr) logger.info("RDMA: network interface found: {0}", if_name) logger.info("RDMA: bringing network interface up") if shellutil.run("ifconfig {0} up".format(if_name)) != 0: raise Exception("Could not bring up RMDA interface: {0}".format(if_name)) logger.info("RDMA: configuring IPv4 addr and netmask on interface") 
addr = '{0}/{1}'.format(ipv4_addr, netmask) if shellutil.run("ifconfig {0} {1}".format(if_name, addr)) != 0: raise Exception("Could not set addr to {1} on {0}".format(if_name, addr)) logger.info("RDMA: network address and netmask configured on interface") @staticmethod def get_interface_by_mac(mac): ret, output = shellutil.run_get_output("ifconfig -a") if ret != 0: raise Exception("Failed to list network interfaces") output = output.replace('\n', '') match = re.search(r"(eth\d).*(HWaddr|ether) {0}".format(mac), output, re.IGNORECASE) if match is None: raise Exception("Failed to get ifname with mac: {0}".format(mac)) output = match.group(0) eths = re.findall(r"eth\d", output) if eths is None or len(eths) == 0: raise Exception("ifname with mac: {0} not found".format(mac)) return eths[-1] Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/rdma/suse.py000066400000000000000000000176311462617747000236410ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.pa.rdma.rdma import RDMAHandler from azurelinuxagent.common.version import DISTRO_VERSION from distutils.version import LooseVersion as Version class SUSERDMAHandler(RDMAHandler): def install_driver(self): # pylint: disable=R1710 """Install the appropriate driver package for the RDMA firmware""" if Version(DISTRO_VERSION) >= Version('15'): msg = 'SLE 15 and later only supports PCI pass through, no ' msg += 'special driver needed for IB interface' logger.info(msg) return True fw_version = self.get_rdma_version() if not fw_version: error_msg = 'RDMA: Could not determine firmware version. ' error_msg += 'Therefore, no driver will be installed.' logger.error(error_msg) return zypper_install = 'zypper -n in %s' zypper_install_noref = 'zypper -n --no-refresh in %s' zypper_lock = 'zypper addlock %s' zypper_remove = 'zypper -n rm %s' zypper_search = 'zypper -n se -s %s' zypper_unlock = 'zypper removelock %s' package_name = 'dummy' # Figure out the kernel that is running to find the proper kmp cmd = 'uname -r' status, kernel_release = shellutil.run_get_output(cmd) # pylint: disable=W0612 if 'default' in kernel_release: package_name = 'msft-rdma-kmp-default' info_msg = 'RDMA: Detected kernel-default' logger.info(info_msg) elif 'azure' in kernel_release: package_name = 'msft-rdma-kmp-azure' info_msg = 'RDMA: Detected kernel-azure' logger.info(info_msg) else: error_msg = 'RDMA: Could not detect kernel build, unable to ' error_msg += 'load kernel module. 
Kernel release: "%s"' logger.error(error_msg % kernel_release) return cmd = zypper_search % package_name status, repo_package_info = shellutil.run_get_output(cmd) driver_package_versions = [] driver_package_installed = False for entry in repo_package_info.split('\n'): if package_name in entry: sections = entry.split('|') if len(sections) < 4: error_msg = 'RDMA: Unexpected output from"%s": "%s"' logger.error(error_msg % (cmd, entry)) continue installed = sections[0].strip() version = sections[3].strip() driver_package_versions.append(version) if fw_version in version and installed.startswith('i'): info_msg = 'RDMA: Matching driver package "%s-%s" ' info_msg += 'is already installed, nothing to do.' logger.info(info_msg % (package_name, version)) return True if installed.startswith('i'): # A driver with a different version is installed driver_package_installed = True cmd = zypper_unlock % package_name result = shellutil.run(cmd) info_msg = 'Driver with different version installed ' info_msg += 'unlocked package "%s".' logger.info(info_msg % (package_name)) # If we get here the driver package is installed but the # version doesn't match or no package is installed requires_reboot = False if driver_package_installed: # Unloading the particular driver with rmmod does not work # We have to reboot after the new driver is installed if self.is_driver_loaded(): info_msg = 'RDMA: Currently loaded driver does not match the ' info_msg += 'firmware implementation, reboot will be required.' logger.info(info_msg) requires_reboot = True logger.info("RDMA: removing package %s" % package_name) cmd = zypper_remove % package_name shellutil.run(cmd) logger.info("RDMA: removed package %s" % package_name) logger.info("RDMA: looking for fw version %s in packages" % fw_version) for entry in driver_package_versions: if fw_version not in entry: logger.info("Package '%s' is not a match." % entry) else: logger.info("Package '%s' is a match. Installing." % entry) complete_name = '%s-%s' % (package_name, entry) cmd = zypper_install % complete_name result = shellutil.run(cmd) if result: error_msg = 'RDMA: Failed install of package "%s" ' error_msg += 'from available repositories.' logger.error(error_msg % complete_name) msg = 'RDMA: Successfully installed "%s" from ' msg += 'configured repositories' logger.info(msg % complete_name) # Lock the package so it does not accidentally get updated cmd = zypper_lock % package_name result = shellutil.run(cmd) info_msg = 'Applied lock to "%s"' % package_name logger.info(info_msg) if not self.load_driver_module() or requires_reboot: self.reboot_system() return True else: # pylint: disable=W0120 logger.info("RDMA: No suitable match in repos. 
Trying local.") local_packages = glob.glob('/opt/microsoft/rdma/*.rpm') for local_package in local_packages: logger.info("Examining: %s" % local_package) if local_package.endswith('.src.rpm'): continue if ( package_name in local_package and fw_version in local_package ): logger.info("RDMA: Installing: %s" % local_package) cmd = zypper_install_noref % local_package result = shellutil.run(cmd) if result and result != 106: error_msg = 'RDMA: Failed install of package "%s" ' error_msg += 'from local package cache' logger.error(error_msg % local_package) break msg = 'RDMA: Successfully installed "%s" from ' msg += 'local package cache' logger.info(msg % (local_package)) # Lock the package so it does not accidentally get updated cmd = zypper_lock % package_name result = shellutil.run(cmd) info_msg = 'Applied lock to "%s"' % package_name logger.info(info_msg) if not self.load_driver_module() or requires_reboot: self.reboot_system() return True else: error_msg = 'Unable to find driver package that matches ' error_msg += 'RDMA firmware version "%s"' % fw_version logger.error(error_msg) return Azure-WALinuxAgent-2b21de5/azurelinuxagent/pa/rdma/ubuntu.py000066400000000000000000000124271462617747000242020ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import glob # pylint: disable=W0611 import os import re import time # pylint: disable=W0611 import azurelinuxagent.common.conf as conf import azurelinuxagent.common.logger as logger import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.pa.rdma.rdma import RDMAHandler class UbuntuRDMAHandler(RDMAHandler): def install_driver(self): #Install the appropriate driver package for the RDMA firmware nd_version = self.get_rdma_version() if not nd_version: logger.error("RDMA: Could not determine firmware version. No driver will be installed") return #replace . with _, we are looking for number like 144_0 nd_version = re.sub('\.', '_', nd_version) # pylint: disable=W1401 #Check to see if we need to reconfigure driver status,module_name = shellutil.run_get_output('modprobe -R hv_network_direct', chk_err=False) if status != 0: logger.info("RDMA: modprobe -R hv_network_direct failed. Use module name hv_network_direct") module_name = "hv_network_direct" else: module_name = module_name.strip() logger.info("RDMA: current RDMA driver %s nd_version %s" % (module_name, nd_version)) if module_name == 'hv_network_direct_%s' % nd_version: logger.info("RDMA: driver is installed and ND version matched. Skip reconfiguring driver") return #Reconfigure driver if one is available status,output = shellutil.run_get_output('modinfo hv_network_direct_%s' % nd_version) if status == 0: logger.info("RDMA: driver with ND version is installed. Link to module name") self.update_modprobed_conf(nd_version) return #Driver not found. 
We need to check to see if we need to update kernel if not conf.enable_rdma_update(): logger.info("RDMA: driver update is disabled. Skip kernel update") return status,output = shellutil.run_get_output('uname -r') if status != 0: return if not re.search('-azure$', output): logger.error("RDMA: skip driver update on non-Azure kernel") return kernel_version = re.sub('-azure$', '', output) kernel_version = re.sub('-', '.', kernel_version) #Find the new kernel package version status,output = shellutil.run_get_output('apt-get update') if status != 0: return status,output = shellutil.run_get_output('apt-cache show --no-all-versions linux-azure') if status != 0: return r = re.search('Version: (\S+)', output) # pylint: disable=W1401 if not r: logger.error("RDMA: version not found in package linux-azure.") return package_version = r.groups()[0] #Remove the trailing .<number> from the package version package_version = re.sub("\.\d+$", "", package_version) # pylint: disable=W1401 logger.info('RDMA: kernel_version=%s package_version=%s' % (kernel_version, package_version)) kernel_version_array = [ int(x) for x in kernel_version.split('.') ] package_version_array = [ int(x) for x in package_version.split('.') ] if kernel_version_array < package_version_array: logger.info("RDMA: newer version available, update kernel and reboot") status,output = shellutil.run_get_output('apt-get -y install linux-azure') if status: logger.error("RDMA: kernel update failed") return self.reboot_system() else: logger.error("RDMA: no kernel update is available for ND version %s" % nd_version) def update_modprobed_conf(self, nd_version): #Update /etc/modprobe.d/vmbus-rdma.conf to point to the correct driver modprobed_file = '/etc/modprobe.d/vmbus-rdma.conf' lines = '' if not os.path.isfile(modprobed_file): logger.info("RDMA: %s not found, it will be created" % modprobed_file) else: with open(modprobed_file, 'r') as f: lines = f.read() r = re.search('alias hv_network_direct hv_network_direct_\S+', lines) # pylint: disable=W1401 if r: lines = re.sub('alias hv_network_direct hv_network_direct_\S+', 'alias hv_network_direct hv_network_direct_%s' % nd_version, lines) # pylint: disable=W1401 else: lines += '\nalias hv_network_direct hv_network_direct_%s\n' % nd_version with open('/etc/modprobe.d/vmbus-rdma.conf', 'w') as f: f.write(lines) logger.info("RDMA: hv_network_direct alias updated to ND %s" % nd_version) Azure-WALinuxAgent-2b21de5/bin/000077500000000000000000000000001462617747000163005ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/bin/py3/000077500000000000000000000000001462617747000170135ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/bin/py3/waagent000077500000000000000000000030471462617747000203730ustar00rootroot00000000000000#!/usr/bin/env python3 # # Azure Linux Agent # # Copyright 2015 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
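# Illustrative sketch (not agent code): the kernel-vs-package version check in
# UbuntuRDMAHandler above compares dotted versions as lists of integers, so
# "4.15.2" < "4.15.10" holds, which a plain string comparison would get wrong.
def _version_lt(a, b):
    return [int(x) for x in a.split('.')] < [int(x) for x in b.split('.')]
# e.g. _version_lt("4.15.2", "4.15.10") is True, while "4.15.2" < "4.15.10"
# compared as strings is False.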
# # Requires Python 2.6 and Openssl 1.0+ # # Implements parts of RFC 2131, 1541, 1497 and # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx # http://msdn.microsoft.com/en-us/library/cc227259%28PROT.13%29.aspx # import os import sys if sys.version_info[0] == 2: import imp else: import importlib if __name__ == '__main__' : import azurelinuxagent.agent as agent """ Invoke main method of agent """ agent.main() if __name__ == 'waagent': """ Load waagent2.0 to support old version of extensions """ if sys.version_info[0] == 3: raise ImportError("waagent2.0 doesn't support python3") bin_path = os.path.dirname(os.path.abspath(__file__)) agent20_path = os.path.join(bin_path, "waagent2.0") if not os.path.isfile(agent20_path): raise ImportError("Can't load waagent") agent20 = imp.load_source('waagent', agent20_path) __all__ = dir(agent20) Azure-WALinuxAgent-2b21de5/bin/waagent000077500000000000000000000030461462617747000176570ustar00rootroot00000000000000#!/usr/bin/env python # # Azure Linux Agent # # Copyright 2015 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6 and Openssl 1.0+ # # Implements parts of RFC 2131, 1541, 1497 and # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx # http://msdn.microsoft.com/en-us/library/cc227259%28PROT.13%29.aspx # import os import sys if sys.version_info[0] == 2: import imp else: import importlib if __name__ == '__main__' : import azurelinuxagent.agent as agent """ Invoke main method of agent """ agent.main() if __name__ == 'waagent': """ Load waagent2.0 to support old version of extensions """ if sys.version_info[0] == 3: raise ImportError("waagent2.0 doesn't support python3") bin_path = os.path.dirname(os.path.abspath(__file__)) agent20_path = os.path.join(bin_path, "waagent2.0") if not os.path.isfile(agent20_path): raise ImportError("Can't load waagent") agent20 = imp.load_source('waagent', agent20_path) __all__ = dir(agent20) Azure-WALinuxAgent-2b21de5/bin/waagent2.0000066400000000000000000007551201462617747000201030ustar00rootroot00000000000000#!/usr/bin/env python # # Azure Linux Agent # # Copyright 2015 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
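# Illustrative sketch (not agent code): what the imp.load_source('waagent',
# agent20_path) call in the launchers above does, together with the Python 3
# equivalent from importlib (shown for reference only; the launchers refuse to
# load waagent2.0 under Python 3).
def _load_module_from_path(name, path):
    import sys
    if sys.version_info[0] == 2:
        import imp
        return imp.load_source(name, path)  # executes the file, registers it under `name`
    import importlib.util
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the file in the new module object
    return module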
# # Requires Python 2.6 and Openssl 1.0+ # # Implements parts of RFC 2131, 1541, 1497 and # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx # http://msdn.microsoft.com/en-us/library/cc227259%28PROT.13%29.aspx # import crypt import random import array import base64 import httplib import os import os.path import platform import pwd import re import shutil import socket import SocketServer import struct import string import subprocess import sys import tempfile import textwrap import threading import time import traceback import xml.dom.minidom import fcntl import inspect import zipfile import json import datetime import xml.sax.saxutils from distutils.version import LooseVersion if not hasattr(subprocess,'check_output'): def check_output(*popenargs, **kwargs): r"""Backport from subprocess module from python 2.7""" if 'stdout' in kwargs: raise ValueError('stdout argument not allowed, it will be overridden.') process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs) output, unused_err = process.communicate() retcode = process.poll() if retcode: cmd = kwargs.get("args") if cmd is None: cmd = popenargs[0] raise subprocess.CalledProcessError(retcode, cmd, output=output) return output # Exception classes used by this module. class CalledProcessError(Exception): def __init__(self, returncode, cmd, output=None): self.returncode = returncode self.cmd = cmd self.output = output def __str__(self): return "Command '%s' returned non-zero exit status %d" % (self.cmd, self.returncode) subprocess.check_output=check_output subprocess.CalledProcessError=CalledProcessError GuestAgentName = "WALinuxAgent" GuestAgentLongName = "Azure Linux Agent" GuestAgentVersion = "WALinuxAgent-2.0.16" ProtocolVersion = "2012-11-30" #WARNING this value is used to confirm the correct fabric protocol. Config = None WaAgent = None DiskActivated = False Openssl = "openssl" Children = [] ExtensionChildren = [] VMM_STARTUP_SCRIPT_NAME='install' VMM_CONFIG_FILE_NAME='linuxosconfiguration.xml' global RulesFiles RulesFiles = [ "/lib/udev/rules.d/75-persistent-net-generator.rules", "/etc/udev/rules.d/70-persistent-net.rules" ] VarLibDhcpDirectories = ["/var/lib/dhclient", "/var/lib/dhcpcd", "/var/lib/dhcp"] EtcDhcpClientConfFiles = ["/etc/dhcp/dhclient.conf", "/etc/dhcp3/dhclient.conf"] global LibDir LibDir = "/var/lib/waagent" global provisioned provisioned=False global provisionError provisionError=None HandlerStatusToAggStatus = {"installed":"Installing", "enabled":"Ready", "uninstalled":"NotReady", "disabled":"NotReady"} WaagentConf = """\ # # Azure Linux Agent Configuration # Role.StateConsumer=None # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role configuration. Role.TopologyConsumer=None # Specified program is invoked with XML file argument specifying role topology. Provisioning.Enabled=y # Provisioning.DeleteRootPassword=y # Password authentication for root account will be unavailable. Provisioning.RegenerateSshHostKeyPair=y # Generate fresh host key pair. Provisioning.SshHostKeyPairType=rsa # Supported values are "rsa", "dsa" and "ecdsa". Provisioning.MonitorHostName=y # Monitor host name changes and publish changes via DHCP requests. ResourceDisk.Format=y # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Filesystem=ext4 # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. 
ResourceDisk.MountPoint=/mnt/resource # ResourceDisk.EnableSwap=n # Create and use swapfile on resource disk. ResourceDisk.SwapSizeMB=0 # Size of the swapfile. LBProbeResponder=y # Respond to load balancer probes if requested by Azure. Logs.Verbose=n # Enable verbose logs OS.RootDeviceScsiTimeout=300 # Root device timeout in seconds. OS.OpensslPath=None # If "None", the system default version is used. """ README_FILENAME="DATALOSS_WARNING_README.txt" README_FILECONTENT="""\ WARNING: THIS IS A TEMPORARY DISK. Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT. Please do not use this disk for storing any personal or application data. For additional details, please refer to the MSDN documentation at: http://msdn.microsoft.com/en-us/library/windowsazure/jj672979.aspx """ ############################################################ # BEGIN DISTRO CLASS DEFS ############################################################ ############################################################ # AbstractDistro ############################################################ class AbstractDistro(object): """ AbstractDistro defines a skeleton necessary for a concrete Distro class. Generic methods and attributes are kept here, distribution specific attributes and behavior are to be placed in the concrete child named distroDistro, where distro is the string returned by calling python platform.linux_distribution()[0]. So for CentOS the derived class is called 'centosDistro'. """ def __init__(self): """ Generic Attributes go here. These are based on 'majority rules'. This __init__() may be called or overridden by the child. """ self.agent_service_name = os.path.basename(sys.argv[0]) self.selinux=None self.service_cmd='/usr/sbin/service' self.ssh_service_restart_option='restart' self.ssh_service_name='ssh' self.ssh_config_file='/etc/ssh/sshd_config' self.hostname_file_path='/etc/hostname' self.dhcp_client_name='dhclient' self.requiredDeps = [ 'route', 'shutdown', 'ssh-keygen', 'useradd', 'usermod', 'openssl', 'sfdisk', 'fdisk', 'mkfs', 'sed', 'grep', 'sudo', 'parted' ] self.init_script_file='/etc/init.d/waagent' self.agent_package_name='WALinuxAgent' self.fileBlackList = [ "/root/.bash_history", "/var/log/waagent.log",'/etc/resolv.conf' ] self.agent_files_to_uninstall = ["/etc/waagent.conf", "/etc/logrotate.d/waagent"] self.grubKernelBootOptionsFile = '/etc/default/grub' self.grubKernelBootOptionsLine = 'GRUB_CMDLINE_LINUX_DEFAULT=' self.getpidcmd = 'pidof' self.mount_dvd_cmd = 'mount' self.sudoers_dir_base = '/etc' self.waagent_conf_file = WaagentConf self.shadow_file_mode=0600 self.shadow_file_path="/etc/shadow" self.dhcp_enabled = False def isSelinuxSystem(self): """ Checks and sets self.selinux = True if SELinux is available on system. """ if self.selinux == None: if Run("which getenforce",chk_err=False): self.selinux = False else: self.selinux = True return self.selinux def isSelinuxRunning(self): """ Calls shell command 'getenforce' and returns True if 'Enforcing'. """ if self.isSelinuxSystem(): return RunGetOutput("getenforce")[1].startswith("Enforcing") else: return False def setSelinuxEnforce(self,state): """ Calls shell command 'setenforce' with 'state' and returns resulting exit code. """ if self.isSelinuxSystem(): if state: s = '1' else: s='0' return Run("setenforce "+s) def setSelinuxContext(self,path,cn): """ Calls shell 'chcon' with 'path' and 'cn' context. Returns exit result. 
""" if self.isSelinuxSystem(): if not os.path.exists(path): Error("Path does not exist: {0}".format(path)) return 1 return Run('chcon ' + cn + ' ' + path) def setHostname(self,name): """ Shell call to hostname. Returns resulting exit code. """ return Run('hostname ' + name) def publishHostname(self,name): """ Set the contents of the hostname file to 'name'. Return 1 on failure. """ try: r=SetFileContents(self.hostname_file_path, name) for f in EtcDhcpClientConfFiles: if os.path.exists(f) and FindStringInFile(f,r'^[^#]*?send\s*host-name.*?(|gethostname[(,)])') == None : r=ReplaceFileContentsAtomic('/etc/dhcp/dhclient.conf', "send host-name \"" + name + "\";\n" + "\n".join(filter(lambda a: not a.startswith("send host-name"), GetFileContents('/etc/dhcp/dhclient.conf').split('\n')))) except: return 1 return r def installAgentServiceScriptFiles(self): """ Create the waagent support files for service installation. Called by registerAgentService() Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def registerAgentService(self): """ Calls installAgentService to create service files. Shell exec service registration commands. (e.g. chkconfig --add waagent) Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def uninstallAgentService(self): """ Call service subsystem to remove waagent script. Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def unregisterAgentService(self): """ Calls self.stopAgentService and call self.uninstallAgentService() """ self.stopAgentService() self.uninstallAgentService() def startAgentService(self): """ Service call to start the Agent service """ return Run(self.service_cmd + ' ' + self.agent_service_name + ' start') def stopAgentService(self): """ Service call to stop the Agent service """ return Run(self.service_cmd + ' ' + self.agent_service_name + ' stop',False) def restartSshService(self): """ Service call to re(start) the SSH service """ sshRestartCmd = self.service_cmd + " " + self.ssh_service_name + " " + self.ssh_service_restart_option retcode = Run(sshRestartCmd) if retcode > 0: Error("Failed to restart SSH service with return code:" + str(retcode)) return retcode def sshDeployPublicKey(self,fprint,path): """ Generic sshDeployPublicKey - over-ridden in some concrete Distro classes due to minor differences in openssl packages deployed """ error=0 SshPubKey = OvfEnv().OpensslToSsh(fprint) if SshPubKey != None: AppendFileContents(path, SshPubKey) else: Error("Failed: " + fprint + ".crt -> " + path) error = 1 return error def checkPackageInstalled(self,p): """ Query package database for prescence of an installed package. Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def checkPackageUpdateable(self,p): """ Online check if updated package of walinuxagent is available. Abstract Virtual Function. Over-ridden in concrete Distro classes. """ pass def deleteRootPassword(self): """ Generic root password removal. 
""" filepath="/etc/shadow" ReplaceFileContentsAtomic(filepath,"root:*LOCK*:14600::::::\n" + "\n".join(filter(lambda a: not a.startswith("root:"),GetFileContents(filepath).split('\n')))) os.chmod(filepath,self.shadow_file_mode) if self.isSelinuxSystem(): self.setSelinuxContext(filepath,'system_u:object_r:shadow_t:s0') Log("Root password deleted.") return 0 def changePass(self,user,password): Log("Change user password") crypt_id = Config.get("Provisioning.PasswordCryptId") if crypt_id is None: crypt_id = "6" salt_len = Config.get("Provisioning.PasswordCryptSaltLength") try: salt_len = int(salt_len) if salt_len < 0 or salt_len > 10: salt_len = 10 except (ValueError, TypeError): salt_len = 10 return self.chpasswd(user, password, crypt_id=crypt_id, salt_len=salt_len) def chpasswd(self, username, password, crypt_id=6, salt_len=10): passwd_hash = self.gen_password_hash(password, crypt_id, salt_len) cmd = "usermod -p '{0}' {1}".format(passwd_hash, username) ret, output = RunGetOutput(cmd, log_cmd=False) if ret != 0: return "Failed to set password for {0}: {1}".format(username, output) def gen_password_hash(self, password, crypt_id, salt_len): collection = string.ascii_letters + string.digits salt = ''.join(random.choice(collection) for _ in range(salt_len)) salt = "${0}${1}".format(crypt_id, salt) return crypt.crypt(password, salt) def load_ata_piix(self): return WaAgent.TryLoadAtapiix() def unload_ata_piix(self): """ Generic function to remove ata_piix.ko. """ return WaAgent.TryUnloadAtapiix() def deprovisionWarnUser(self): """ Generic user warnings used at deprovision. """ print("WARNING! Nameserver configuration in /etc/resolv.conf will be deleted.") def deprovisionDeleteFiles(self): """ Files to delete when VM is deprovisioned """ for a in VarLibDhcpDirectories: Run("rm -f " + a + "/*") # Clear LibDir, remove nameserver and root bash history for f in os.listdir(LibDir) + self.fileBlackList: try: os.remove(f) except: pass return 0 def uninstallDeleteFiles(self): """ Files to delete when agent is uninstalled. """ for f in self.agent_files_to_uninstall: try: os.remove(f) except: pass return 0 def checkDependencies(self): """ Generic dependency check. Return 1 unless all dependencies are satisfied. """ if self.checkPackageInstalled('NetworkManager'): Error(GuestAgentLongName + " is not compatible with network-manager.") return 1 try: m= __import__('pyasn1') except ImportError: Error(GuestAgentLongName + " requires python-pyasn1 for your Linux distribution.") return 1 for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 def packagedInstall(self,buildroot): """ Called from setup.py for use by RPM. Copies generated files waagent.conf, under the buildroot. """ if not os.path.exists(buildroot+'/etc'): os.mkdir(buildroot+'/etc') SetFileContents(buildroot+'/etc/waagent.conf', MyDistro.waagent_conf_file) if not os.path.exists(buildroot+'/etc/logrotate.d'): os.mkdir(buildroot+'/etc/logrotate.d') SetFileContents(buildroot+'/etc/logrotate.d/waagent', WaagentLogrotate) self.init_script_file=buildroot+self.init_script_file # this allows us to call installAgentServiceScriptFiles() if not os.path.exists(os.path.dirname(self.init_script_file)): os.mkdir(os.path.dirname(self.init_script_file)) self.installAgentServiceScriptFiles() def GetIpv4Address(self): """ Return the ip of the first active non-loopback interface. 
""" addr='' iface,addr=GetFirstActiveNetworkInterfaceNonLoopback() return addr def GetMacAddress(self): return GetMacAddress() def GetInterfaceName(self): return GetFirstActiveNetworkInterfaceNonLoopback()[0] def RestartInterface(self, iface, max_retry=3): for retry in range(1, max_retry + 1): ret = Run("ifdown " + iface + " && ifup " + iface) if ret == 0: return Log("Failed to restart interface: {0}, ret={1}".format(iface, ret)) if retry < max_retry: Log("Retry restart interface in 5 seconds") time.sleep(5) def CreateAccount(self,user, password, expiration, thumbprint): return CreateAccount(user, password, expiration, thumbprint) def DeleteAccount(self,user): return DeleteAccount(user) def ActivateResourceDisk(self): """ Format, mount, and if specified in the configuration set resource disk as swap. """ global DiskActivated format = Config.get("ResourceDisk.Format") if format == None or format.lower().startswith("n"): DiskActivated = True return device = DeviceForIdePort(1) if device == None: Error("ActivateResourceDisk: Unable to detect disk topology.") return device = "/dev/" + device mountlist = RunGetOutput("mount")[1] mountpoint = GetMountPoint(mountlist, device) if(mountpoint): Log("ActivateResourceDisk: " + device + "1 is already mounted.") else: mountpoint = Config.get("ResourceDisk.MountPoint") if mountpoint == None: mountpoint = "/mnt/resource" CreateDir(mountpoint, "root", 0755) fs = Config.get("ResourceDisk.Filesystem") if fs == None: fs = "ext3" partition = device + "1" #Check partition type Log("Detect GPT...") ret = RunGetOutput("parted {0} print".format(device)) if ret[0] == 0 and "gpt" in ret[1]: Log("GPT detected.") #GPT(Guid Partition Table) is used. #Get partitions. parts = filter(lambda x : re.match("^\s*[0-9]+", x), ret[1].split("\n")) #If there are more than 1 partitions, remove all partitions #and create a new one using the entire disk space. if len(parts) > 1: for i in range(1, len(parts) + 1): Run("parted {0} rm {1}".format(device, i)) Run("parted {0} mkpart primary 0% 100%".format(device)) Run("mkfs." + fs + " " + partition + " -F") else: existingFS = RunGetOutput("sfdisk -q -c " + device + " 1", chk_err=False)[1].rstrip() if existingFS == "7" and fs != "ntfs": Run("sfdisk -c " + device + " 1 83") Run("mkfs." + fs + " " + partition) if Run("mount " + partition + " " + mountpoint, chk_err=False): #If mount failed, try to format the partition and mount again Warn("Failed to mount resource disk. Retry mounting.") Run("mkfs." 
+ fs + " " + partition + " -F") if Run("mount " + partition + " " + mountpoint): Error("ActivateResourceDisk: Failed to mount resource disk (" + partition + ").") return Log("Resource disk (" + partition + ") is mounted at " + mountpoint + " with fstype " + fs) #Create README file under the root of resource disk SetFileContents(os.path.join(mountpoint,README_FILENAME), README_FILECONTENT) DiskActivated = True #Create swap space swap = Config.get("ResourceDisk.EnableSwap") if swap == None or swap.lower().startswith("n"): return sizeKB = int(Config.get("ResourceDisk.SwapSizeMB")) * 1024 if os.path.isfile(mountpoint + "/swapfile") and os.path.getsize(mountpoint + "/swapfile") != (sizeKB * 1024): os.remove(mountpoint + "/swapfile") if not os.path.isfile(mountpoint + "/swapfile"): Run("umask 0077 && dd if=/dev/zero of=" + mountpoint + "/swapfile bs=1024 count=" + str(sizeKB)) Run("mkswap " + mountpoint + "/swapfile") if not Run("swapon " + mountpoint + "/swapfile"): Log("Enabled " + str(sizeKB) + " KB of swap at " + mountpoint + "/swapfile") else: Error("ActivateResourceDisk: Failed to activate swap at " + mountpoint + "/swapfile") def Install(self): return Install() def mediaHasFilesystem(self,dsk): if len(dsk) == 0 : return False if Run("LC_ALL=C fdisk -l " + dsk + " | grep Disk"): return False return True def mountDVD(self,dvd,location): return RunGetOutput(self.mount_dvd_cmd + ' ' + dvd + ' ' + location) def GetHome(self): return GetHome() def getDhcpClientName(self): return self.dhcp_client_name def initScsiDiskTimeout(self): """ Set the SCSI disk timeout when the agent starts running """ self.setScsiDiskTimeout() def setScsiDiskTimeout(self): """ Iterate all SCSI disks(include hot-add) and set their timeout if their value are different from the OS.RootDeviceScsiTimeout """ try: scsiTimeout = Config.get("OS.RootDeviceScsiTimeout") for diskName in [disk for disk in os.listdir("/sys/block") if disk.startswith("sd")]: self.setBlockDeviceTimeout(diskName, scsiTimeout) except: pass def setBlockDeviceTimeout(self, device, timeout): """ Set SCSI disk timeout by set /sys/block/sd*/device/timeout """ if timeout != None and device: filePath = "/sys/block/" + device + "/device/timeout" if(GetFileContents(filePath).splitlines()[0].rstrip() != timeout): SetFileContents(filePath,timeout) Log("SetBlockDeviceTimeout: Update the device " + device + " with timeout " + timeout) def waitForSshHostKey(self, path): """ Provide a dummy waiting, since by default, ssh host key is created by waagent and the key should already been created. """ if(os.path.isfile(path)): return True else: Error("Can't find host key: {0}".format(path)) return False def isDHCPEnabled(self): return self.dhcp_enabled def stopDHCP(self): """ Stop the system DHCP client so that the agent can bind on its port. If the distro has set dhcp_enabled to True, it will need to provide an implementation of this method. """ raise NotImplementedError('stopDHCP method missing') def startDHCP(self): """ Start the system DHCP client. If the distro has set dhcp_enabled to True, it will need to provide an implementation of this method. """ raise NotImplementedError('startDHCP method missing') def translateCustomData(self, data): """ Translate the custom data from a Base64 encoding. Default to no-op. 
""" decodeCustomData = Config.get("Provisioning.DecodeCustomData") if decodeCustomData != None and decodeCustomData.lower().startswith("y"): return base64.b64decode(data) return data def getConfigurationPath(self): return "/etc/waagent.conf" def getProcessorCores(self): return int(RunGetOutput("grep 'processor.*:' /proc/cpuinfo |wc -l")[1]) def getTotalMemory(self): return int(RunGetOutput("grep MemTotal /proc/meminfo |awk '{print $2}'")[1])/1024 def getInterfaceNameByMac(self, mac): ret, output = RunGetOutput("ifconfig -a") if ret != 0: raise Exception("Failed to get network interface info") output = output.replace('\n', '') match = re.search(r"(eth\d).*(HWaddr|ether) {0}".format(mac), output, re.IGNORECASE) if match is None: raise Exception("Failed to get ifname with mac: {0}".format(mac)) output = match.group(0) eths = re.findall(r"eth\d", output) if eths is None or len(eths) == 0: raise Exception("Failed to get ifname with mac: {0}".format(mac)) return eths[-1] def configIpV4(self, ifName, addr, netmask=24): ret, output = RunGetOutput("ifconfig {0} up".format(ifName)) if ret != 0: raise Exception("Failed to bring up {0}: {1}".format(ifName, output)) ret, output = RunGetOutput("ifconfig {0} {1}/{2}".format(ifName, addr, netmask)) if ret != 0: raise Exception("Failed to config ipv4 for {0}: {1}".format(ifName, output)) def setDefaultGateway(self, gateway): Run("/sbin/route add default gw" + gateway, chk_err=False) def routeAdd(self, net, mask, gateway): Run("/sbin/route add -net " + net + " netmask " + mask + " gw " + gateway, chk_err=False) ############################################################ # GentooDistro ############################################################ gentoo_init_file = """\ #!/sbin/runscript command=/usr/sbin/waagent pidfile=/var/run/waagent.pid command_args=-daemon command_background=true name="Azure Linux Agent" depend() { need localmount use logger network after bootmisc modules } """ class gentooDistro(AbstractDistro): """ Gentoo distro concrete class """ def __init__(self): # super(gentooDistro,self).__init__() self.service_cmd='/sbin/service' self.ssh_service_name='sshd' self.hostname_file_path='/etc/conf.d/hostname' self.dhcp_client_name='dhcpcd' self.shadow_file_mode=0640 self.init_file=gentoo_init_file def publishHostname(self,name): try: if (os.path.isfile(self.hostname_file_path)): r=ReplaceFileContentsAtomic(self.hostname_file_path, "hostname=\"" + name + "\"\n" + "\n".join(filter(lambda a: not a.startswith("hostname="), GetFileContents(self.hostname_file_path).split("\n")))) except: return 1 return r def installAgentServiceScriptFiles(self): SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0755) def registerAgentService(self): self.installAgentServiceScriptFiles() return Run('rc-update add ' + self.agent_service_name + ' default') def uninstallAgentService(self): return Run('rc-update del ' + self.agent_service_name + ' default') def unregisterAgentService(self): self.stopAgentService() return self.uninstallAgentService() def checkPackageInstalled(self,p): if Run('eix -I ^' + p + '$',chk_err=False): return 0 else: return 1 def checkPackageUpdateable(self,p): if Run('eix -u ^' + p + '$',chk_err=False): return 0 else: return 1 def RestartInterface(self, iface): Run("/etc/init.d/net." + iface + " restart") ############################################################ # SuSEDistro ############################################################ suse_init_file = """\ #! 
/bin/sh # # Azure Linux Agent sysV init script # # Copyright 2013 Microsoft Corporation # Copyright SUSE LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # /etc/init.d/waagent # # and symbolic link # # /usr/sbin/rcwaagent # # System startup script for the waagent # ### BEGIN INIT INFO # Provides: AzureLinuxAgent # Required-Start: $network sshd # Required-Stop: $network sshd # Default-Start: 3 5 # Default-Stop: 0 1 2 6 # Description: Start the AzureLinuxAgent ### END INIT INFO PYTHON=/usr/bin/python WAZD_BIN=/usr/sbin/waagent WAZD_CONF=/etc/waagent.conf WAZD_PIDFILE=/var/run/waagent.pid test -x "$WAZD_BIN" || { echo "$WAZD_BIN not installed"; exit 5; } test -e "$WAZD_CONF" || { echo "$WAZD_CONF not found"; exit 6; } . /etc/rc.status # First reset status of this service rc_reset # Return values acc. to LSB for all commands but status: # 0 - success # 1 - misc error # 2 - invalid or excess args # 3 - unimplemented feature (e.g. reload) # 4 - insufficient privilege # 5 - program not installed # 6 - program not configured # # Note that starting an already running service, stopping # or restarting a not-running service as well as the restart # with force-reload (in case signalling is not supported) are # considered a success. case "$1" in start) echo -n "Starting AzureLinuxAgent" ## Start daemon with startproc(8). If this fails ## the echo return value is set appropriate. startproc -f ${PYTHON} ${WAZD_BIN} -daemon rc_status -v ;; stop) echo -n "Shutting down AzureLinuxAgent" ## Stop daemon with killproc(8) and if this fails ## set echo the echo return value. killproc -p ${WAZD_PIDFILE} ${PYTHON} ${WAZD_BIN} rc_status -v ;; try-restart) ## Stop the service and if this succeeds (i.e. the ## service was running before), start it again. $0 status >/dev/null && $0 restart rc_status ;; restart) ## Stop the service and regardless of whether it was ## running or not, start it again. $0 stop sleep 1 $0 start rc_status ;; force-reload|reload) rc_status ;; status) echo -n "Checking for service AzureLinuxAgent " ## Check status with checkproc(8), if process is running ## checkproc will return with exit status 0. checkproc -p ${WAZD_PIDFILE} ${PYTHON} ${WAZD_BIN} rc_status -v ;; probe) ;; *) echo "Usage: $0 {start|stop|status|try-restart|restart|force-reload|reload}" exit 1 ;; esac rc_exit """ class SuSEDistro(AbstractDistro): """ SuSE Distro concrete class Put SuSE specific behavior here... 
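For example (values taken from __init__ below), the DHCP client stopped and started around the agent's DHCP probes depends on the release:
    SuSEDistro().dhcp_client_name   # 'wickedd-dhcp4' on SLES >= 12 / openSUSE >= 13.2, else 'dhcpcd'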
""" def __init__(self): super(SuSEDistro,self).__init__() self.service_cmd='/sbin/service' self.ssh_service_name='sshd' self.kernel_boot_options_file='/boot/grub/menu.lst' self.hostname_file_path='/etc/HOSTNAME' self.requiredDeps += [ "/sbin/insserv" ] self.init_file=suse_init_file self.dhcp_client_name='dhcpcd' if ((DistInfo(fullname=1)[0] == 'SUSE Linux Enterprise Server' and DistInfo()[1] >= '12') or \ (DistInfo(fullname=1)[0] == 'openSUSE' and DistInfo()[1] >= '13.2')): self.dhcp_client_name='wickedd-dhcp4' self.grubKernelBootOptionsFile = '/boot/grub/menu.lst' self.grubKernelBootOptionsLine = 'kernel' self.getpidcmd='pidof ' self.dhcp_enabled=True def checkPackageInstalled(self,p): if Run("rpm -q " + p,chk_err=False): return 0 else: return 1 def checkPackageUpdateable(self,p): if Run("zypper list-updates | grep " + p,chk_err=False): return 1 else: return 0 def installAgentServiceScriptFiles(self): try: SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0744) except: pass def registerAgentService(self): self.installAgentServiceScriptFiles() return Run('insserv ' + self.agent_service_name) def uninstallAgentService(self): return Run('insserv -r ' + self.agent_service_name) def unregisterAgentService(self): self.stopAgentService() return self.uninstallAgentService() def startDHCP(self): Run("service " + self.dhcp_client_name + " start", chk_err=False) def stopDHCP(self): Run("service " + self.dhcp_client_name + " stop", chk_err=False) ############################################################ # redhatDistro ############################################################ redhat_init_file= """\ #!/bin/bash # # Init file for AzureLinuxAgent. # # chkconfig: 2345 60 80 # description: AzureLinuxAgent # # source function library . /etc/rc.d/init.d/functions RETVAL=0 FriendlyName="AzureLinuxAgent" WAZD_BIN=/usr/sbin/waagent start() { echo -n $"Starting $FriendlyName: " $WAZD_BIN -daemon & } stop() { echo -n $"Stopping $FriendlyName: " killproc -p /var/run/waagent.pid $WAZD_BIN RETVAL=$? echo return $RETVAL } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; reload) ;; report) ;; status) status $WAZD_BIN RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|status}" RETVAL=1 esac exit $RETVAL """ class redhatDistro(AbstractDistro): """ Redhat Distro concrete class Put Redhat specific behavior here... 
""" def __init__(self): super(redhatDistro,self).__init__() self.service_cmd='/sbin/service' self.ssh_service_restart_option='condrestart' self.ssh_service_name='sshd' self.hostname_file_path= None if DistInfo()[1] < '7.0' else '/etc/hostname' self.init_file=redhat_init_file self.grubKernelBootOptionsFile = '/boot/grub/menu.lst' self.grubKernelBootOptionsLine = 'kernel' def publishHostname(self,name): super(redhatDistro,self).publishHostname(name) if DistInfo()[1] < '7.0' : filepath = "/etc/sysconfig/network" if os.path.isfile(filepath): ReplaceFileContentsAtomic(filepath, "HOSTNAME=" + name + "\n" + "\n".join(filter(lambda a: not a.startswith("HOSTNAME"), GetFileContents(filepath).split('\n')))) ethernetInterface = MyDistro.GetInterfaceName() filepath = "/etc/sysconfig/network-scripts/ifcfg-" + ethernetInterface if os.path.isfile(filepath): ReplaceFileContentsAtomic(filepath, "DHCP_HOSTNAME=" + name + "\n" + "\n".join(filter(lambda a: not a.startswith("DHCP_HOSTNAME"), GetFileContents(filepath).split('\n')))) return 0 def installAgentServiceScriptFiles(self): SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0744) return 0 def registerAgentService(self): self.installAgentServiceScriptFiles() return Run('chkconfig --add waagent') def uninstallAgentService(self): return Run('chkconfig --del ' + self.agent_service_name) def unregisterAgentService(self): self.stopAgentService() return self.uninstallAgentService() def checkPackageInstalled(self,p): if Run("yum list installed " + p,chk_err=False): return 0 else: return 1 def checkPackageUpdateable(self,p): if Run("yum check-update | grep "+ p,chk_err=False): return 1 else: return 0 def checkDependencies(self): """ Generic dependency check. Return 1 unless all dependencies are satisfied. """ if DistInfo()[1] < '7.0' and self.checkPackageInstalled('NetworkManager'): Error(GuestAgentLongName + " is not compatible with network-manager.") return 1 try: m= __import__('pyasn1') except ImportError: Error(GuestAgentLongName + " requires python-pyasn1 for your Linux distribution.") return 1 for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 ############################################################ # centosDistro ############################################################ class centosDistro(redhatDistro): """ CentOS Distro concrete class Put CentOS specific behavior here... """ def __init__(self): super(centosDistro,self).__init__() ############################################################ # eulerosDistro ############################################################ class eulerosDistro(redhatDistro): """ EulerOS Distro concrete class Put EulerOS specific behavior here... """ def __init__(self): super(eulerosDistro,self).__init__() ############################################################ # oracleDistro ############################################################ class oracleDistro(redhatDistro): """ Oracle Distro concrete class Put Oracle specific behavior here... """ def __init__(self): super(oracleDistro, self).__init__() ############################################################ # asianuxDistro ############################################################ class asianuxDistro(redhatDistro): """ Asianux Distro concrete class Put Asianux specific behavior here... 
""" def __init__(self): super(asianuxDistro,self).__init__() ############################################################ # CoreOSDistro ############################################################ class CoreOSDistro(AbstractDistro): """ CoreOS Distro concrete class Put CoreOS specific behavior here... """ CORE_UID = 500 def __init__(self): super(CoreOSDistro,self).__init__() self.requiredDeps += [ "/usr/bin/systemctl" ] self.agent_service_name = 'waagent' self.init_script_file='/etc/systemd/system/waagent.service' self.fileBlackList.append("/etc/machine-id") self.dhcp_client_name='systemd-networkd' self.getpidcmd='pidof ' self.shadow_file_mode=0640 self.waagent_path='/usr/share/oem/bin' self.python_path='/usr/share/oem/python/bin' self.dhcp_enabled=True if 'PATH' in os.environ: os.environ['PATH'] = "{0}:{1}".format(os.environ['PATH'], self.python_path) else: os.environ['PATH'] = self.python_path if 'PYTHONPATH' in os.environ: os.environ['PYTHONPATH'] = "{0}:{1}".format(os.environ['PYTHONPATH'], self.waagent_path) else: os.environ['PYTHONPATH'] = self.waagent_path def checkPackageInstalled(self,p): """ There is no package manager in CoreOS. Return 1 since it must be preinstalled. """ return 1 def checkDependencies(self): for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 def checkPackageUpdateable(self,p): """ There is no package manager in CoreOS. Return 0 since it can't be updated via package. """ return 0 def startAgentService(self): return Run('systemctl start ' + self.agent_service_name) def stopAgentService(self): return Run('systemctl stop ' + self.agent_service_name) def restartSshService(self): """ SSH is socket activated on CoreOS. No need to restart it. """ return 0 def sshDeployPublicKey(self,fprint,path): """ We support PKCS8. """ if Run("ssh-keygen -i -m PKCS8 -f " + fprint + " >> " + path): return 1 else : return 0 def RestartInterface(self, iface): Run("systemctl restart systemd-networkd") def CreateAccount(self, user, password, expiration, thumbprint): """ Create a user account, with 'user', 'password', 'expiration', ssh keys and sudo permissions. Returns None if successful, error string on failure. """ userentry = None try: userentry = pwd.getpwnam(user) except: pass uidmin = None try: uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1]) except: pass if uidmin == None: uidmin = 100 if userentry != None and userentry[2] < uidmin and userentry[2] != self.CORE_UID: Error("CreateAccount: " + user + " is a system user. Will not set password.") return "Failed to set password for system user: " + user + " (0x06)." if userentry == None: command = "useradd --create-home --password '*' " + user if expiration != None: command += " --expiredate " + expiration.split('.')[0] if Run(command): Error("Failed to create user account: " + user) return "Failed to create user account: " + user + " (0x07)." else: Log("CreateAccount: " + user + " already exists. Will update password.") if password != None: self.changePass(user, password) try: if password == None: SetFileContents("/etc/sudoers.d/waagent", user + " ALL = (ALL) NOPASSWD: ALL\n") else: SetFileContents("/etc/sudoers.d/waagent", user + " ALL = (ALL) ALL\n") os.chmod("/etc/sudoers.d/waagent", 0440) except: Error("CreateAccount: Failed to configure sudo access for user.") return "Failed to configure sudo privileges (0x08)." 
home = MyDistro.GetHome() if thumbprint != None: dir = home + "/" + user + "/.ssh" CreateDir(dir, user, 0700) pub = dir + "/id_rsa.pub" prv = dir + "/id_rsa" Run("ssh-keygen -y -f " + thumbprint + ".prv > " + pub) SetFileContents(prv, GetFileContents(thumbprint + ".prv")) for f in [pub, prv]: os.chmod(f, 0600) ChangeOwner(f, user) SetFileContents(dir + "/authorized_keys", GetFileContents(pub)) ChangeOwner(dir + "/authorized_keys", user) Log("Created user account: " + user) return None def startDHCP(self): Run("systemctl start " + self.dhcp_client_name, chk_err=False) def stopDHCP(self): Run("systemctl stop " + self.dhcp_client_name, chk_err=False) def translateCustomData(self, data): return base64.b64decode(data) def getConfigurationPath(self): return "/usr/share/oem/waagent.conf" ############################################################ # debianDistro ############################################################ debian_init_file = """\ #!/bin/sh ### BEGIN INIT INFO # Provides: AzureLinuxAgent # Required-Start: $network $syslog # Required-Stop: $network $syslog # Should-Start: $network $syslog # Should-Stop: $network $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: AzureLinuxAgent # Description: AzureLinuxAgent ### END INIT INFO . /lib/lsb/init-functions OPTIONS="-daemon" WAZD_BIN=/usr/sbin/waagent WAZD_PID=/var/run/waagent.pid case "$1" in start) log_begin_msg "Starting AzureLinuxAgent..." pid=$( pidofproc $WAZD_BIN ) if [ -n "$pid" ] ; then log_begin_msg "Already running." log_end_msg 0 exit 0 fi start-stop-daemon --start --quiet --oknodo --background --exec $WAZD_BIN -- $OPTIONS log_end_msg $? ;; stop) log_begin_msg "Stopping AzureLinuxAgent..." start-stop-daemon --stop --quiet --oknodo --pidfile $WAZD_PID ret=$? rm -f $WAZD_PID log_end_msg $ret ;; force-reload) $0 restart ;; restart) $0 stop $0 start ;; status) status_of_proc $WAZD_BIN && exit 0 || exit $? ;; *) log_success_msg "Usage: /etc/init.d/waagent {start|stop|force-reload|restart|status}" exit 1 ;; esac exit 0 """ class debianDistro(AbstractDistro): """ debian Distro concrete class Put debian specific behavior here... """ def __init__(self): super(debianDistro,self).__init__() self.requiredDeps += [ "/usr/sbin/update-rc.d" ] self.init_file=debian_init_file self.agent_package_name='walinuxagent' self.dhcp_client_name='dhclient' self.getpidcmd='pidof ' self.shadow_file_mode=0640 def checkPackageInstalled(self,p): """ Check that the package is installed. Return 1 if installed, 0 if not installed. This method of using dpkg-query allows wildcards to be present in the package name. """ if not Run("dpkg-query -W -f='${Status}\n' '" + p + "' | grep ' installed' 2>&1",chk_err=False): return 1 else: return 0 def checkDependencies(self): """ Debian dependency check. python-pyasn1 is NOT needed. Return 1 unless all dependencies are satisfied. NOTE: using network*manager will catch either package name in Ubuntu or debian. """ if self.checkPackageInstalled('network*manager'): Error(GuestAgentLongName + " is not compatible with network-manager.") return 1 for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 def checkPackageUpdateable(self,p): if Run("apt-get update ; apt-get upgrade -us | grep " + p,chk_err=False): return 1 else: return 0 def installAgentServiceScriptFiles(self): """ If we are packaged - the service name is walinuxagent, do nothing. 
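Otherwise the sysV init script from debian_init_file above is written to init_script_file and marked executable (mode 0744).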
""" if self.agent_service_name == 'walinuxagent': return 0 try: SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0744) except OSError, e: ErrorWithPrefix('installAgentServiceScriptFiles','Exception: '+str(e)+' occured creating ' + self.init_script_file) return 1 return 0 def registerAgentService(self): if self.installAgentServiceScriptFiles() == 0: return Run('update-rc.d waagent defaults') else : return 1 def uninstallAgentService(self): return Run('update-rc.d -f ' + self.agent_service_name + ' remove') def unregisterAgentService(self): self.stopAgentService() return self.uninstallAgentService() def sshDeployPublicKey(self,fprint,path): """ We support PKCS8. """ if Run("ssh-keygen -i -m PKCS8 -f " + fprint + " >> " + path): return 1 else : return 0 ############################################################ # KaliDistro - WIP # Functioning on Kali 1.1.0a so far ############################################################ class KaliDistro(debianDistro): """ Kali Distro concrete class Put Kali specific behavior here... """ def __init__(self): super(KaliDistro,self).__init__() ############################################################ # UbuntuDistro ############################################################ ubuntu_upstart_file = """\ #walinuxagent - start Azure agent description "walinuxagent" author "Ben Howard " start on (filesystem and started rsyslog) pre-start script WALINUXAGENT_ENABLED=1 [ -r /etc/default/walinuxagent ] && . /etc/default/walinuxagent if [ "$WALINUXAGENT_ENABLED" != "1" ]; then exit 1 fi if [ ! -x /usr/sbin/waagent ]; then exit 1 fi #Load the udf module modprobe -b udf end script exec /usr/sbin/waagent -daemon """ class UbuntuDistro(debianDistro): """ Ubuntu Distro concrete class Put Ubuntu specific behavior here... """ def __init__(self): super(UbuntuDistro,self).__init__() self.init_script_file='/etc/init/waagent.conf' self.init_file=ubuntu_upstart_file self.fileBlackList = [ "/root/.bash_history", "/var/log/waagent.log"] self.dhcp_client_name=None self.getpidcmd='pidof ' def registerAgentService(self): return self.installAgentServiceScriptFiles() def uninstallAgentService(self): """ If we are packaged - the service name is walinuxagent, do nothing. """ if self.agent_service_name == 'walinuxagent': return 0 os.remove('/etc/init/' + self.agent_service_name + '.conf') def unregisterAgentService(self): """ If we are packaged - the service name is walinuxagent, do nothing. """ if self.agent_service_name == 'walinuxagent': return self.stopAgentService() return self.uninstallAgentService() def deprovisionWarnUser(self): """ Ubuntu specific warning string from Deprovision. """ print("WARNING! Nameserver configuration in /etc/resolvconf/resolv.conf.d/{tail,original} will be deleted.") def deprovisionDeleteFiles(self): """ Ubuntu uses resolv.conf by default, so removing /etc/resolv.conf will break resolvconf. Therefore, we check to see if resolvconf is in use, and if so, we remove the resolvconf artifacts. """ if os.path.realpath('/etc/resolv.conf') != '/run/resolvconf/resolv.conf': Log("resolvconf is not configured. 
Removing /etc/resolv.conf") self.fileBlackList.append('/etc/resolv.conf') else: Log("resolvconf is enabled; leaving /etc/resolv.conf intact") resolvConfD = '/etc/resolvconf/resolv.conf.d/' self.fileBlackList.extend([resolvConfD + 'tail', resolvConfD + 'original']) for f in os.listdir(LibDir)+self.fileBlackList: try: os.remove(f) except: pass return 0 def getDhcpClientName(self): if self.dhcp_client_name != None : return self.dhcp_client_name if DistInfo()[1] == '12.04' : self.dhcp_client_name='dhclient3' else : self.dhcp_client_name='dhclient' return self.dhcp_client_name def waitForSshHostKey(self, path): """ Wait until the ssh host key is generated by cloud init. """ for retry in range(0, 10): if(os.path.isfile(path)): return True time.sleep(1) Error("Can't find host key: {0}".format(path)) return False ############################################################ # LinuxMintDistro ############################################################ class LinuxMintDistro(UbuntuDistro): """ LinuxMint Distro concrete class Put LinuxMint specific behavior here... """ def __init__(self): super(LinuxMintDistro,self).__init__() ############################################################ # fedoraDistro ############################################################ fedora_systemd_service = """\ [Unit] Description=Azure Linux Agent After=network.target After=sshd.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/sbin/waagent -daemon [Install] WantedBy=multi-user.target """ class fedoraDistro(redhatDistro): """ FedoraDistro concrete class Put Fedora specific behavior here... """ def __init__(self): super(fedoraDistro,self).__init__() self.service_cmd = '/usr/bin/systemctl' self.hostname_file_path = '/etc/hostname' self.init_script_file = '/usr/lib/systemd/system/' + self.agent_service_name + '.service' self.init_file = fedora_systemd_service self.grubKernelBootOptionsFile = '/etc/default/grub' self.grubKernelBootOptionsLine = 'GRUB_CMDLINE_LINUX=' def publishHostname(self, name): SetFileContents(self.hostname_file_path, name + '\n') ethernetInterface = MyDistro.GetInterfaceName() filepath = "/etc/sysconfig/network-scripts/ifcfg-" + ethernetInterface if os.path.isfile(filepath): ReplaceFileContentsAtomic(filepath, "DHCP_HOSTNAME=" + name + "\n" + "\n".join(filter(lambda a: not a.startswith("DHCP_HOSTNAME"), GetFileContents(filepath).split('\n')))) return 0 def installAgentServiceScriptFiles(self): SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0644) return Run(self.service_cmd + ' daemon-reload') def registerAgentService(self): self.installAgentServiceScriptFiles() return Run(self.service_cmd + ' enable ' + self.agent_service_name) def uninstallAgentService(self): """ Call service subsystem to remove waagent script. 
""" return Run(self.service_cmd + ' disable ' + self.agent_service_name) def unregisterAgentService(self): """ Calls self.stopAgentService and call self.uninstallAgentService() """ self.stopAgentService() self.uninstallAgentService() def startAgentService(self): """ Service call to start the Agent service """ return Run(self.service_cmd + ' start ' + self.agent_service_name) def stopAgentService(self): """ Service call to stop the Agent service """ return Run(self.service_cmd + ' stop ' + self.agent_service_name, False) def restartSshService(self): """ Service call to re(start) the SSH service """ sshRestartCmd = self.service_cmd + " " + self.ssh_service_restart_option + " " + self.ssh_service_name retcode = Run(sshRestartCmd) if retcode > 0: Error("Failed to restart SSH service with return code:" + str(retcode)) return retcode def checkPackageInstalled(self, p): """ Query package database for prescence of an installed package. """ import rpm ts = rpm.TransactionSet() rpms = ts.dbMatch(rpm.RPMTAG_PROVIDES, p) return bool(len(rpms) > 0) def deleteRootPassword(self): return Run("/sbin/usermod root -p '!!'") def packagedInstall(self,buildroot): """ Called from setup.py for use by RPM. Copies generated files waagent.conf, under the buildroot. """ if not os.path.exists(buildroot+'/etc'): os.mkdir(buildroot+'/etc') SetFileContents(buildroot+'/etc/waagent.conf', MyDistro.waagent_conf_file) if not os.path.exists(buildroot+'/etc/logrotate.d'): os.mkdir(buildroot+'/etc/logrotate.d') SetFileContents(buildroot+'/etc/logrotate.d/WALinuxAgent', WaagentLogrotate) self.init_script_file=buildroot+self.init_script_file # this allows us to call installAgentServiceScriptFiles() if not os.path.exists(os.path.dirname(self.init_script_file)): os.mkdir(os.path.dirname(self.init_script_file)) self.installAgentServiceScriptFiles() def CreateAccount(self, user, password, expiration, thumbprint): super(fedoraDistro, self).CreateAccount(user, password, expiration, thumbprint) Run('/sbin/usermod ' + user + ' -G wheel') def DeleteAccount(self, user): Run('/sbin/usermod ' + user + ' -G ""') super(fedoraDistro, self).DeleteAccount(user) ############################################################ # FreeBSD ############################################################ FreeBSDWaagentConf = """\ # # Azure Linux Agent Configuration # Role.StateConsumer=None # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role configuration. Role.TopologyConsumer=None # Specified program is invoked with XML file argument specifying role topology. Provisioning.Enabled=y # Provisioning.DeleteRootPassword=y # Password authentication for root account will be unavailable. Provisioning.RegenerateSshHostKeyPair=y # Generate fresh host key pair. Provisioning.SshHostKeyPairType=rsa # Supported values are "rsa", "dsa" and "ecdsa". Provisioning.MonitorHostName=y # Monitor host name changes and publish changes via DHCP requests. ResourceDisk.Format=y # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Filesystem=ufs2 # ResourceDisk.MountPoint=/mnt/resource # ResourceDisk.EnableSwap=n # Create and use swapfile on resource disk. ResourceDisk.SwapSizeMB=0 # Size of the swapfile. LBProbeResponder=y # Respond to load balancer probes if requested by Azure. Logs.Verbose=n # Enable verbose logs OS.RootDeviceScsiTimeout=300 # Root device timeout in seconds. 
OS.OpensslPath=None # If "None", the system default version is used. """ bsd_init_file="""\ #! /bin/sh # PROVIDE: waagent # REQUIRE: DAEMON cleanvar sshd # BEFORE: LOGIN # KEYWORD: nojail . /etc/rc.subr export PATH=$PATH:/usr/local/bin name="waagent" rcvar="waagent_enable" command="/usr/sbin/${name}" command_interpreter="/usr/local/bin/python" waagent_flags=" daemon &" pidfile="/var/run/waagent.pid" load_rc_config $name run_rc_command "$1" """ bsd_activate_resource_disk_txt="""\ #!/usr/bin/env python import os import sys if sys.version_info[0] == 2: import imp else: import importlib # waagent has no '.py' therefore create waagent module import manually. __name__='setupmain' #prevent waagent.__main__ from executing waagent=imp.load_source('waagent','/tmp/waagent') waagent.LoggerInit('/var/log/waagent.log','/dev/console') from waagent import RunGetOutput,Run Config=waagent.ConfigurationProvider(None) format = Config.get("ResourceDisk.Format") if format == None or format.lower().startswith("n"): sys.exit(0) device_base = 'da1' device = "/dev/" + device_base for entry in RunGetOutput("mount")[1].split(): if entry.startswith(device + "s1"): waagent.Log("ActivateResourceDisk: " + device + "s1 is already mounted.") sys.exit(0) mountpoint = Config.get("ResourceDisk.MountPoint") if mountpoint == None: mountpoint = "/mnt/resource" waagent.CreateDir(mountpoint, "root", 0755) fs = Config.get("ResourceDisk.Filesystem") if waagent.FreeBSDDistro().mediaHasFilesystem(device) == False : Run("newfs " + device + "s1") if Run("mount " + device + "s1 " + mountpoint): waagent.Error("ActivateResourceDisk: Failed to mount resource disk (" + device + "s1).") sys.exit(0) waagent.Log("Resource disk (" + device + "s1) is mounted at " + mountpoint + " with fstype " + fs) waagent.SetFileContents(os.path.join(mountpoint,waagent.README_FILENAME), waagent.README_FILECONTENT) swap = Config.get("ResourceDisk.EnableSwap") if swap == None or swap.lower().startswith("n"): sys.exit(0) sizeKB = int(Config.get("ResourceDisk.SwapSizeMB")) * 1024 if os.path.isfile(mountpoint + "/swapfile") and os.path.getsize(mountpoint + "/swapfile") != (sizeKB * 1024): os.remove(mountpoint + "/swapfile") if not os.path.isfile(mountpoint + "/swapfile"): Run("umask 0077 && dd if=/dev/zero of=" + mountpoint + "/swapfile bs=1024 count=" + str(sizeKB)) if Run("mdconfig -a -t vnode -f " + mountpoint + "/swapfile -u 0"): waagent.Error("ActivateResourceDisk: Configuring swap - Failed to create md0") if not Run("swapon /dev/md0"): waagent.Log("Enabled " + str(sizeKB) + " KB of swap at " + mountpoint + "/swapfile") else: waagent.Error("ActivateResourceDisk: Failed to activate swap at " + mountpoint + "/swapfile") """ class FreeBSDDistro(AbstractDistro): """ """ def __init__(self): """ Generic Attributes go here. These are based on 'majority rules'. This __init__() may be called or overriden by the child. 
""" super(FreeBSDDistro,self).__init__() self.agent_service_name = os.path.basename(sys.argv[0]) self.selinux=False self.ssh_service_name='sshd' self.ssh_config_file='/etc/ssh/sshd_config' self.hostname_file_path='/etc/hostname' self.dhcp_client_name='dhclient' self.requiredDeps = [ 'route', 'shutdown', 'ssh-keygen', 'pw' , 'openssl', 'fdisk', 'sed', 'grep' , 'sudo'] self.init_script_file='/etc/rc.d/waagent' self.init_file=bsd_init_file self.agent_package_name='WALinuxAgent' self.fileBlackList = [ "/root/.bash_history", "/var/log/waagent.log",'/etc/resolv.conf' ] self.agent_files_to_uninstall = ["/etc/waagent.conf"] self.grubKernelBootOptionsFile = '/boot/loader.conf' self.grubKernelBootOptionsLine = '' self.getpidcmd = 'pgrep -n' self.mount_dvd_cmd = 'dd bs=2048 count=33 skip=295 if=' # custom data max len is 64k self.sudoers_dir_base = '/usr/local/etc' self.waagent_conf_file = FreeBSDWaagentConf def installAgentServiceScriptFiles(self): SetFileContents(self.init_script_file, self.init_file) os.chmod(self.init_script_file, 0777) AppendFileContents("/etc/rc.conf","waagent_enable='YES'\n") return 0 def registerAgentService(self): self.installAgentServiceScriptFiles() return Run("services_mkdb " + self.init_script_file) def sshDeployPublicKey(self,fprint,path): """ We support PKCS8. """ if Run("ssh-keygen -i -m PKCS8 -f " + fprint + " >> " + path): return 1 else : return 0 def deleteRootPassword(self): """ BSD root password removal. """ filepath="/etc/master.passwd" ReplaceStringInFile(filepath,r'root:.*?:','root::') #ReplaceFileContentsAtomic(filepath,"root:*LOCK*:14600::::::\n" # + "\n".join(filter(lambda a: not a.startswith("root:"),GetFileContents(filepath).split('\n')))) os.chmod(filepath,self.shadow_file_mode) if self.isSelinuxSystem(): self.setSelinuxContext(filepath,'system_u:object_r:shadow_t:s0') RunGetOutput("pwd_mkdb -u root /etc/master.passwd") Log("Root password deleted.") return 0 def changePass(self,user,password): return RunSendStdin("pw usermod " + user + " -h 0 ",password, log_cmd=False) def load_ata_piix(self): return 0 def unload_ata_piix(self): return 0 def checkDependencies(self): """ FreeBSD dependency check. Return 1 unless all dependencies are satisfied. """ for a in self.requiredDeps: if Run("which " + a + " > /dev/null 2>&1",chk_err=False): Error("Missing required dependency: " + a) return 1 return 0 def packagedInstall(self,buildroot): pass def GetInterfaceName(self): """ Return the ip of the active ethernet interface. """ iface,inet,mac=self.GetFreeBSDEthernetInfo() return iface def RestartInterface(self, iface): Run("service netif restart") def GetIpv4Address(self): """ Return the ip of the active ethernet interface. """ iface,inet,mac=self.GetFreeBSDEthernetInfo() return inet def GetMacAddress(self): """ Return the ip of the active ethernet interface. """ iface,inet,mac=self.GetFreeBSDEthernetInfo() l=mac.split(':') r=[] for i in l: r.append(string.atoi(i,16)) return r def GetFreeBSDEthernetInfo(self): """ There is no SIOCGIFCONF on freeBSD - just parse ifconfig. Returns strings: iface, inet4_addr, and mac or 'None,None,None' if unable to parse. We will sleep and retry as the network must be up. 
""" code,output=RunGetOutput("ifconfig",chk_err=False) Log(output) retries=10 cmd='ifconfig | grep -A2 -B2 ether | grep -B3 inet | grep -A4 UP ' code=1 while code > 0 : if code > 0 and retries == 0: Error("GetFreeBSDEthernetInfo - Failed to detect ethernet interface") return None, None, None code,output=RunGetOutput(cmd,chk_err=False) retries-=1 if code > 0 and retries > 0 : Log("GetFreeBSDEthernetInfo - Error: retry ethernet detection " + str(retries)) if retries == 9 : c,o=RunGetOutput("ifconfig | grep -A1 -B2 ether",chk_err=False) if c == 0: t=o.replace('\n',' ') t=t.split() i=t[0][:-1] Log(RunGetOutput('id')[1]) Run('dhclient '+i) time.sleep(10) j=output.replace('\n',' ') j=j.split() iface=j[0][:-1] for i in range(len(j)): if j[i] == 'inet' : inet=j[i+1] elif j[i] == 'ether' : mac=j[i+1] return iface, inet, mac def CreateAccount(self,user, password, expiration, thumbprint): """ Create a user account, with 'user', 'password', 'expiration', ssh keys and sudo permissions. Returns None if successful, error string on failure. """ userentry = None try: userentry = pwd.getpwnam(user) except: pass uidmin = None try: if os.path.isfile("/etc/login.defs"): uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1]) except: pass if uidmin == None: uidmin = 100 if userentry != None and userentry[2] < uidmin: Error("CreateAccount: " + user + " is a system user. Will not set password.") return "Failed to set password for system user: " + user + " (0x06)." if userentry == None: command = "pw useradd " + user + " -m" if expiration != None: command += " -e " + expiration.split('.')[0] if Run(command): Error("Failed to create user account: " + user) return "Failed to create user account: " + user + " (0x07)." else: Log("CreateAccount: " + user + " already exists. Will update password.") if password != None: self.changePass(user,password) try: # for older distros create sudoers.d if not os.path.isdir(MyDistro.sudoers_dir_base+'/sudoers.d/'): # create the /etc/sudoers.d/ directory os.mkdir(MyDistro.sudoers_dir_base+'/sudoers.d') # add the include of sudoers.d to the /etc/sudoers SetFileContents(MyDistro.sudoers_dir_base+'/sudoers',GetFileContents(MyDistro.sudoers_dir_base+'/sudoers')+'\n#includedir ' + MyDistro.sudoers_dir_base + '/sudoers.d\n') if password == None: SetFileContents(MyDistro.sudoers_dir_base+"/sudoers.d/waagent", user + " ALL = (ALL) NOPASSWD: ALL\n") else: SetFileContents(MyDistro.sudoers_dir_base+"/sudoers.d/waagent", user + " ALL = (ALL) ALL\n") os.chmod(MyDistro.sudoers_dir_base+"/sudoers.d/waagent", 0440) except: Error("CreateAccount: Failed to configure sudo access for user.") return "Failed to configure sudo privileges (0x08)." home = MyDistro.GetHome() if thumbprint != None: dir = home + "/" + user + "/.ssh" CreateDir(dir, user, 0700) pub = dir + "/id_rsa.pub" prv = dir + "/id_rsa" Run("ssh-keygen -y -f " + thumbprint + ".prv > " + pub) SetFileContents(prv, GetFileContents(thumbprint + ".prv")) for f in [pub, prv]: os.chmod(f, 0600) ChangeOwner(f, user) SetFileContents(dir + "/authorized_keys", GetFileContents(pub)) ChangeOwner(dir + "/authorized_keys", user) Log("Created user account: " + user) return None def DeleteAccount(self,user): """ Delete the 'user'. Clear utmp first, to avoid error. Removes the /etc/sudoers.d/waagent file. 
""" userentry = None try: userentry = pwd.getpwnam(user) except: pass if userentry == None: Error("DeleteAccount: " + user + " not found.") return uidmin = None try: if os.path.isfile("/etc/login.defs"): uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1]) except: pass if uidmin == None: uidmin = 100 if userentry[2] < uidmin: Error("DeleteAccount: " + user + " is a system user. Will not delete account.") return Run("> /var/run/utmp") #Delete utmp to prevent error if we are the 'user' deleted pid = subprocess.Popen(['rmuser', '-y', user], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE).pid try: os.remove(MyDistro.sudoers_dir_base+"/sudoers.d/waagent") except: pass return def ActivateResourceDiskNoThread(self): """ Format, mount, and if specified in the configuration set resource disk as swap. """ global DiskActivated Run('cp /usr/sbin/waagent /tmp/') SetFileContents('/tmp/bsd_activate_resource_disk.py',bsd_activate_resource_disk_txt) Run('chmod +x /tmp/bsd_activate_resource_disk.py') pid = subprocess.Popen(["/tmp/bsd_activate_resource_disk.py", ""]).pid Log("Spawning bsd_activate_resource_disk.py") DiskActivated = True return def Install(self): """ Install the agent service. Check dependencies. Create /etc/waagent.conf and move old version to /etc/waagent.conf.old Copy RulesFiles to /var/lib/waagent Create /etc/logrotate.d/waagent Set /etc/ssh/sshd_config ClientAliveInterval to 180 Call ApplyVNUMAWorkaround() """ if MyDistro.checkDependencies(): return 1 os.chmod(sys.argv[0], 0755) SwitchCwd() for a in RulesFiles: if os.path.isfile(a): if os.path.isfile(GetLastPathElement(a)): os.remove(GetLastPathElement(a)) shutil.move(a, ".") Warn("Moved " + a + " -> " + LibDir + "/" + GetLastPathElement(a) ) MyDistro.registerAgentService() if os.path.isfile("/etc/waagent.conf"): try: os.remove("/etc/waagent.conf.old") except: pass try: os.rename("/etc/waagent.conf", "/etc/waagent.conf.old") Warn("Existing /etc/waagent.conf has been renamed to /etc/waagent.conf.old") except: pass SetFileContents("/etc/waagent.conf", self.waagent_conf_file) if os.path.exists('/usr/local/etc/logrotate.d/'): SetFileContents("/usr/local/etc/logrotate.d/waagent", WaagentLogrotate) filepath = "/etc/ssh/sshd_config" ReplaceFileContentsAtomic(filepath, "\n".join(filter(lambda a: not a.startswith("ClientAliveInterval"), GetFileContents(filepath).split('\n'))) + "\nClientAliveInterval 180\n") Log("Configured SSH client probing to keep connections alive.") #ApplyVNUMAWorkaround() return 0 def mediaHasFilesystem(self,dsk): if Run('LC_ALL=C fdisk -p ' + dsk + ' | grep "invalid fdisk partition table found" ',False): return False return True def mountDVD(self,dvd,location): #At this point we cannot read a joliet option udf DVD in freebsd10 - so we 'dd' it into our location retcode,out = RunGetOutput(self.mount_dvd_cmd + dvd + ' of=' + location + '/ovf-env.xml') if retcode != 0: return retcode,out ovfxml = (GetFileContents(location+"/ovf-env.xml",asbin=False)) if ord(ovfxml[0]) > 128 and ord(ovfxml[1]) > 128 and ord(ovfxml[2]) > 128 : ovfxml = ovfxml[3:] # BOM is not stripped. First three bytes are > 128 and not unicode chars so we ignore them. 
ovfxml = ovfxml.strip(chr(0x00)) ovfxml = "".join(filter(lambda x: ord(x)<128, ovfxml)) ovfxml = re.sub(r'</Environment>.*\Z','',ovfxml,0,re.DOTALL) ovfxml += '</Environment>' SetFileContents(location+"/ovf-env.xml", ovfxml) return retcode,out def GetHome(self): return '/home' def initScsiDiskTimeout(self): """ Set the SCSI disk timeout by updating the kernel config """ timeout = Config.get("OS.RootDeviceScsiTimeout") if timeout: Run("sysctl kern.cam.da.default_timeout=" + timeout) def setScsiDiskTimeout(self): return def setBlockDeviceTimeout(self, device, timeout): return def getProcessorCores(self): return int(RunGetOutput("sysctl hw.ncpu | awk '{print $2}'")[1]) def getTotalMemory(self): return int(RunGetOutput("sysctl hw.realmem | awk '{print $2}'")[1])/1024 def setDefaultGateway(self, gateway): Run("/sbin/route add default " + gateway, chk_err=False) def routeAdd(self, net, mask, gateway): Run("/sbin/route add -net " + net + " " + mask + " " + gateway, chk_err=False) ############################################################ # END DISTRO CLASS DEFS ############################################################ # This lets us index into a string or an array of integers transparently. def Ord(a): """ Allows indexing into a string or an array of integers transparently. Generic utility function. """ if type(a) == type("a"): a = ord(a) return a def IsLinux(): """ Returns True if platform is Linux. Generic utility function. """ return (platform.uname()[0] == "Linux") def GetLastPathElement(path): """ Similar to basename. Generic utility function. """ return path.rsplit('/', 1)[1] def GetFileContents(filepath,asbin=False): """ Read and return contents of 'filepath'. """ mode='r' if asbin: mode+='b' c=None try: with open(filepath, mode) as F : c=F.read() except IOError, e: ErrorWithPrefix('GetFileContents','Reading from file ' + filepath + ' Exception is ' + str(e)) return None return c def SetFileContents(filepath, contents): """ Write 'contents' to 'filepath'. """ if type(contents) == str : contents=contents.encode('latin-1', 'ignore') try: with open(filepath, "wb+") as F : F.write(contents) except IOError, e: ErrorWithPrefix('SetFileContents','Writing to file ' + filepath + ' Exception is ' + str(e)) return None return 0 def AppendFileContents(filepath, contents): """ Append 'contents' to 'filepath'. """ if type(contents) == str : contents=contents.encode('latin-1') try: with open(filepath, "a+") as F : F.write(contents) except IOError, e: ErrorWithPrefix('AppendFileContents','Appending to file ' + filepath + ' Exception is ' + str(e)) return None return 0 def ReplaceFileContentsAtomic(filepath, contents): """ Write 'contents' to 'filepath' by creating a temp file, and replacing original.
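Illustrative usage (hypothetical hostname value):
    ReplaceFileContentsAtomic('/etc/hostname', 'myhost\n')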
""" handle, temp = tempfile.mkstemp(dir = os.path.dirname(filepath)) if type(contents) == str : contents=contents.encode('latin-1') try: os.write(handle, contents) except IOError, e: ErrorWithPrefix('ReplaceFileContentsAtomic','Writing to file ' + filepath + ' Exception is ' + str(e)) return None finally: os.close(handle) try: os.rename(temp, filepath) return None except IOError, e: ErrorWithPrefix('ReplaceFileContentsAtomic','Renaming ' + temp+ ' to ' + filepath + ' Exception is ' + str(e)) try: os.remove(filepath) except IOError, e: ErrorWithPrefix('ReplaceFileContentsAtomic','Removing '+ filepath + ' Exception is ' + str(e)) try: os.rename(temp,filepath) except IOError, e: ErrorWithPrefix('ReplaceFileContentsAtomic','Removing '+ filepath + ' Exception is ' + str(e)) return 1 return 0 def GetLineStartingWith(prefix, filepath): """ Return line from 'filepath' if the line startswith 'prefix' """ for line in GetFileContents(filepath).split('\n'): if line.startswith(prefix): return line return None def Run(cmd,chk_err=True): """ Calls RunGetOutput on 'cmd', returning only the return code. If chk_err=True then errors will be reported in the log. If chk_err=False then errors will be suppressed from the log. """ retcode,out=RunGetOutput(cmd,chk_err) return retcode def RunGetOutput(cmd, chk_err=True, log_cmd=True): """ Wrapper for subprocess.check_output. Execute 'cmd'. Returns return code and STDOUT, trapping expected exceptions. Reports exceptions to Error if chk_err parameter is True """ if log_cmd: LogIfVerbose(cmd) try: output=subprocess.check_output(cmd,stderr=subprocess.STDOUT,shell=True) except subprocess.CalledProcessError,e : if chk_err and log_cmd: Error('CalledProcessError. Error Code is ' + str(e.returncode) ) Error('CalledProcessError. Command string was ' + e.cmd ) Error('CalledProcessError. Command result was ' + (e.output[:-1]).decode('latin-1')) return e.returncode,e.output.decode('latin-1') return 0,output.decode('latin-1') def RunSendStdin(cmd, input, chk_err=True, log_cmd=True): """ Wrapper for subprocess.Popen. Execute 'cmd', sending 'input' to STDIN of 'cmd'. Returns return code and STDOUT, trapping expected exceptions. Reports exceptions to Error if chk_err parameter is True """ if log_cmd: LogIfVerbose(cmd+input) try: me=subprocess.Popen([cmd], shell=True, stdin=subprocess.PIPE,stderr=subprocess.STDOUT,stdout=subprocess.PIPE) output=me.communicate(input) except OSError , e : if chk_err and log_cmd: Error('CalledProcessError. Error Code is ' + str(me.returncode) ) Error('CalledProcessError. Command string was ' + cmd ) Error('CalledProcessError. Command result was ' + output[0].decode('latin-1')) return 1,output[0].decode('latin-1') if me.returncode is not 0 and chk_err is True and log_cmd: Error('CalledProcessError. Error Code is ' + str(me.returncode) ) Error('CalledProcessError. Command string was ' + cmd ) Error('CalledProcessError. Command result was ' + output[0].decode('latin-1')) return me.returncode,output[0].decode('latin-1') def GetNodeTextData(a): """ Filter non-text nodes from DOM tree """ for b in a.childNodes: if b.nodeType == b.TEXT_NODE: return b.data def GetHome(): """ Attempt to guess the $HOME location. Return the path string. """ home = None try: home = GetLineStartingWith("HOME", "/etc/default/useradd").split('=')[1].strip() except: pass if (home == None) or (home.startswith("/") == False): home = "/home" return home def ChangeOwner(filepath, user): """ Lookup user. Attempt chown 'filepath' to 'user'. 
""" p = None try: p = pwd.getpwnam(user) except: pass if p != None: if not os.path.exists(filepath): Error("Path does not exist: {0}".format(filepath)) else: os.chown(filepath, p[2], p[3]) def CreateDir(dirpath, user, mode): """ Attempt os.makedirs, catch all exceptions. Call ChangeOwner afterwards. """ try: os.makedirs(dirpath, mode) except: pass ChangeOwner(dirpath, user) def CreateAccount(user, password, expiration, thumbprint): """ Create a user account, with 'user', 'password', 'expiration', ssh keys and sudo permissions. Returns None if successful, error string on failure. """ userentry = None try: userentry = pwd.getpwnam(user) except: pass uidmin = None try: uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1]) except: pass if uidmin == None: uidmin = 100 if userentry != None and userentry[2] < uidmin: Error("CreateAccount: " + user + " is a system user. Will not set password.") return "Failed to set password for system user: " + user + " (0x06)." if userentry == None: command = "useradd -m " + user if expiration != None: command += " -e " + expiration.split('.')[0] if Run(command): Error("Failed to create user account: " + user) return "Failed to create user account: " + user + " (0x07)." else: Log("CreateAccount: " + user + " already exists. Will update password.") if password != None: MyDistro.changePass(user, password) try: # for older distros create sudoers.d if not os.path.isdir('/etc/sudoers.d/'): # create the /etc/sudoers.d/ directory os.mkdir('/etc/sudoers.d/') # add the include of sudoers.d to the /etc/sudoers SetFileContents('/etc/sudoers',GetFileContents('/etc/sudoers')+'\n#includedir /etc/sudoers.d\n') if password == None: SetFileContents("/etc/sudoers.d/waagent", user + " ALL = (ALL) NOPASSWD: ALL\n") else: SetFileContents("/etc/sudoers.d/waagent", user + " ALL = (ALL) ALL\n") os.chmod("/etc/sudoers.d/waagent", 0440) except: Error("CreateAccount: Failed to configure sudo access for user.") return "Failed to configure sudo privileges (0x08)." home = MyDistro.GetHome() if thumbprint != None: dir = home + "/" + user + "/.ssh" CreateDir(dir, user, 0700) pub = dir + "/id_rsa.pub" prv = dir + "/id_rsa" Run("ssh-keygen -y -f " + thumbprint + ".prv > " + pub) SetFileContents(prv, GetFileContents(thumbprint + ".prv")) for f in [pub, prv]: os.chmod(f, 0600) ChangeOwner(f, user) SetFileContents(dir + "/authorized_keys", GetFileContents(pub)) ChangeOwner(dir + "/authorized_keys", user) Log("Created user account: " + user) return None def DeleteAccount(user): """ Delete the 'user'. Clear utmp first, to avoid error. Removes the /etc/sudoers.d/waagent file. """ userentry = None try: userentry = pwd.getpwnam(user) except: pass if userentry == None: Error("DeleteAccount: " + user + " not found.") return uidmin = None try: uidmin = int(GetLineStartingWith("UID_MIN", "/etc/login.defs").split()[1]) except: pass if uidmin == None: uidmin = 100 if userentry[2] < uidmin: Error("DeleteAccount: " + user + " is a system user. Will not delete account.") return Run("> /var/run/utmp") #Delete utmp to prevent error if we are the 'user' deleted Run("userdel -f -r " + user) try: os.remove("/etc/sudoers.d/waagent") except: pass return def IsInRangeInclusive(a, low, high): """ Return True if 'a' in 'low' <= a >= 'high' """ return (a >= low and a <= high) def IsPrintable(ch): """ Return True if character is displayable. 
""" return IsInRangeInclusive(ch, Ord('A'), Ord('Z')) or IsInRangeInclusive(ch, Ord('a'), Ord('z')) or IsInRangeInclusive(ch, Ord('0'), Ord('9')) def HexDump(buffer, size): """ Return Hex formated dump of a 'buffer' of 'size'. """ if size < 0: size = len(buffer) result = "" for i in range(0, size): if (i % 16) == 0: result += "%06X: " % i byte = buffer[i] if type(byte) == str: byte = ord(byte.decode('latin1')) result += "%02X " % byte if (i & 15) == 7: result += " " if ((i + 1) % 16) == 0 or (i + 1) == size: j = i while ((j + 1) % 16) != 0: result += " " if (j & 7) == 7: result += " " j += 1 result += " " for j in range(i - (i % 16), i + 1): byte=buffer[j] if type(byte) == str: byte = ord(byte.decode('latin1')) k = '.' if IsPrintable(byte): k = chr(byte) result += k if (i + 1) != size: result += "\n" return result def SimpleLog(file_path,message): if not file_path or len(message) < 1: return t = time.localtime() t = "%04u/%02u/%02u %02u:%02u:%02u " % (t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec) lines=re.sub(re.compile(r'^(.)',re.MULTILINE),t+r'\1',message) with open(file_path, "a") as F : lines = filter(lambda x : x in string.printable, lines) F.write(lines.encode('ascii','ignore') + "\n") class Logger(object): """ The Agent's logging assumptions are: For Log, and LogWithPrefix all messages are logged to the self.file_path and to the self.con_path. Setting either path parameter to None skips that log. If Verbose is enabled, messages calling the LogIfVerbose method will be logged to file_path yet not to con_path. Error and Warn messages are normal log messages with the 'ERROR:' or 'WARNING:' prefix added. """ def __init__(self,filepath,conpath,verbose=False): """ Construct an instance of Logger. """ self.file_path=filepath self.con_path=conpath self.verbose=verbose def ThrottleLog(self,counter): """ Log everything up to 10, every 10 up to 100, then every 100. """ return (counter < 10) or ((counter < 100) and ((counter % 10) == 0)) or ((counter % 100) == 0) def LogToFile(self,message): """ Write 'message' to logfile. """ if self.file_path: try: with open(self.file_path, "a") as F : message = filter(lambda x : x in string.printable, message) F.write(message.encode('ascii','ignore') + "\n") except IOError, e: print e pass def LogToCon(self,message): """ Write 'message' to /dev/console. This supports serial port logging if the /dev/console is redirected to ttys0 in kernel boot options. """ if self.con_path: try: with open(self.con_path, "w") as C : message = filter(lambda x : x in string.printable, message) C.write(message.encode('ascii','ignore') + "\n") except IOError, e: pass def Log(self,message): """ Standard Log function. Logs to self.file_path, and con_path """ self.LogWithPrefix("", message) def LogWithPrefix(self,prefix, message): """ Prefix each line of 'message' with current time+'prefix'. """ t = time.localtime() t = "%04u/%02u/%02u %02u:%02u:%02u " % (t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec) t += prefix for line in message.split('\n'): line = t + line self.LogToFile(line) self.LogToCon(line) def NoLog(self,message): """ Don't Log. """ pass def LogIfVerbose(self,message): """ Only log 'message' if global Verbose is True. """ self.LogWithPrefixIfVerbose('',message) def LogWithPrefixIfVerbose(self,prefix, message): """ Only log 'message' if global Verbose is True. Prefix each line of 'message' with current time+'prefix'. 
""" if self.verbose == True: t = time.localtime() t = "%04u/%02u/%02u %02u:%02u:%02u " % (t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec) t += prefix for line in message.split('\n'): line = t + line self.LogToFile(line) self.LogToCon(line) def Warn(self,message): """ Prepend the text "WARNING:" to the prefix for each line in 'message'. """ self.LogWithPrefix("WARNING:", message) def Error(self,message): """ Call ErrorWithPrefix(message). """ ErrorWithPrefix("", message) def ErrorWithPrefix(self,prefix, message): """ Prepend the text "ERROR:" to the prefix for each line in 'message'. Errors written to logfile, and /dev/console """ self.LogWithPrefix("ERROR:", message) def LoggerInit(log_file_path,log_con_path,verbose=False): """ Create log object and export its methods to global scope. """ global Log,LogWithPrefix,LogIfVerbose,LogWithPrefixIfVerbose,Error,ErrorWithPrefix,Warn,NoLog,ThrottleLog,myLogger l=Logger(log_file_path,log_con_path,verbose) Log,LogWithPrefix,LogIfVerbose,LogWithPrefixIfVerbose,Error,ErrorWithPrefix,Warn,NoLog,ThrottleLog,myLogger = l.Log,l.LogWithPrefix,l.LogIfVerbose,l.LogWithPrefixIfVerbose,l.Error,l.ErrorWithPrefix,l.Warn,l.NoLog,l.ThrottleLog,l def Linux_ioctl_GetInterfaceMac(ifname): """ Return the mac-address bound to the socket. """ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) info = fcntl.ioctl(s.fileno(), 0x8927, struct.pack('256s', (ifname[:15]+('\0'*241)).encode('latin-1'))) return ''.join(['%02X' % Ord(char) for char in info[18:24]]) def GetFirstActiveNetworkInterfaceNonLoopback(): """ Return the interface name, and ip addr of the first active non-loopback interface. """ iface='' expected=16 # how many devices should I expect... is_64bits = sys.maxsize > 2**32 struct_size=40 if is_64bits else 32 # for 64bit the size is 40 bytes, for 32bits it is 32 bytes. s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) buff=array.array('B', b'\0' * (expected*struct_size)) retsize=(struct.unpack('iL', fcntl.ioctl(s.fileno(), 0x8912, struct.pack('iL',expected*struct_size,buff.buffer_info()[0]))))[0] if retsize == (expected*struct_size) : Warn('SIOCGIFCONF returned more than ' + str(expected) + ' up network interfaces.') s=buff.tostring() preferred_nic = Config.get("Network.Interface") for i in range(0,struct_size*expected,struct_size): iface=s[i:i+16].split(b'\0', 1)[0] if iface == b'lo': continue elif preferred_nic is None: break elif iface == preferred_nic: break return iface.decode('latin-1'), socket.inet_ntoa(s[i+20:i+24]) def GetIpv4Address(): """ Return the ip of the first active non-loopback interface. """ iface,addr=GetFirstActiveNetworkInterfaceNonLoopback() return addr def HexStringToByteArray(a): """ Return hex string packed into a binary struct. """ b = b"" for c in range(0, len(a) // 2): b += struct.pack("B", int(a[c * 2:c * 2 + 2], 16)) return b def GetMacAddress(): """ Convienience function, returns mac addr bound to first non-loobback interface. """ ifname='' while len(ifname) < 2 : ifname=GetFirstActiveNetworkInterfaceNonLoopback()[0] a = Linux_ioctl_GetInterfaceMac(ifname) return HexStringToByteArray(a) def DeviceForIdePort(n): """ Return device name attached to ide port 'n'. 
""" if n > 3: return None g0 = "00000000" if n > 1: g0 = "00000001" n = n - 2 device = None path = "/sys/bus/vmbus/devices/" for vmbus in os.listdir(path): guid = GetFileContents(path + vmbus + "/device_id").lstrip('{').split('-') if guid[0] == g0 and guid[1] == "000" + str(n): for root, dirs, files in os.walk(path + vmbus): if root.endswith("/block"): device = dirs[0] break else : #older distros for d in dirs: if ':' in d and "block" == d.split(':')[0]: device = d.split(':')[1] break break return device class HttpResourceGoneError(Exception): pass class Util(object): """ Http communication class. Base of GoalState, and Agent classes. """ RetryWaitingInterval=10 def __init__(self): self.Endpoint = None def _ParseUrl(self, url): secure = False host = self.Endpoint path = url port = None #"http[s]://hostname[:port][/]" if url.startswith("http://"): url = url[7:] if "/" in url: host = url[0: url.index("/")] path = url[url.index("/"):] else: host = url path = "/" elif url.startswith("https://"): secure = True url = url[8:] if "/" in url: host = url[0: url.index("/")] path = url[url.index("/"):] else: host = url path = "/" if host is None: raise ValueError("Host is invalid:{0}".format(url)) if(":" in host): pos = host.rfind(":") port = int(host[pos + 1:]) host = host[0:pos] return host, port, secure, path def GetHttpProxy(self, secure): """ Get http_proxy and https_proxy from environment variables. Username and password is not supported now. """ host = Config.get("HttpProxy.Host") port = Config.get("HttpProxy.Port") return (host, port) def _HttpRequest(self, method, host, path, port=None, data=None, secure=False, headers=None, proxyHost=None, proxyPort=None): resp = None conn = None try: if secure: port = 443 if port is None else port if proxyHost is not None and proxyPort is not None: conn = httplib.HTTPSConnection(proxyHost, proxyPort, timeout=10) conn.set_tunnel(host, port) #If proxy is used, full url is needed. path = "https://{0}:{1}{2}".format(host, port, path) else: conn = httplib.HTTPSConnection(host, port, timeout=10) else: port = 80 if port is None else port if proxyHost is not None and proxyPort is not None: conn = httplib.HTTPConnection(proxyHost, proxyPort, timeout=10) #If proxy is used, full url is needed. path = "http://{0}:{1}{2}".format(host, port, path) else: conn = httplib.HTTPConnection(host, port, timeout=10) if headers == None: conn.request(method, path, data) else: conn.request(method, path, data, headers) resp = conn.getresponse() except httplib.HTTPException, e: Error('HTTPException {0}, args:{1}'.format(e, repr(e.args))) except IOError, e: Error('Socket IOError {0}, args:{1}'.format(e, repr(e.args))) return resp def HttpRequest(self, method, url, data=None, headers=None, maxRetry=3, chkProxy=False): """ Sending http request to server On error, sleep 10 and maxRetry times. Return the output buffer or None. """ LogIfVerbose("HTTP Req: {0} {1}".format(method, url)) LogIfVerbose("HTTP Req: Data={0}".format(data)) LogIfVerbose("HTTP Req: Header={0}".format(headers)) try: host, port, secure, path = self._ParseUrl(url) except ValueError, e: Error("Failed to parse url:{0}".format(url)) return None #Check proxy proxyHost, proxyPort = (None, None) if chkProxy: proxyHost, proxyPort = self.GetHttpProxy(secure) #If httplib module is not built with ssl support. 
Fallback to http if secure and not hasattr(httplib, "HTTPSConnection"): Warn("httplib is not built with ssl support") secure = False proxyHost, proxyPort = self.GetHttpProxy(secure) #If httplib module doesn't support https tunnelling. Fallback to http if secure and \ proxyHost is not None and \ proxyPort is not None and \ not hasattr(httplib.HTTPSConnection, "set_tunnel"): Warn("httplib doesn't support https tunnelling(new in python 2.7)") secure = False proxyHost, proxyPort = self.GetHttpProxy(secure) resp = self._HttpRequest(method, host, path, port=port, data=data, secure=secure, headers=headers, proxyHost=proxyHost, proxyPort=proxyPort) for retry in range(0, maxRetry): if resp is not None and \ (resp.status == httplib.OK or \ resp.status == httplib.CREATED or \ resp.status == httplib.ACCEPTED): return resp; if resp is not None and resp.status == httplib.GONE: raise HttpResourceGoneError("Http resource gone.") Error("Retry={0}".format(retry)) Error("HTTP Req: {0} {1}".format(method, url)) Error("HTTP Req: Data={0}".format(data)) Error("HTTP Req: Header={0}".format(headers)) if resp is None: Error("HTTP Err: response is empty.".format(retry)) else: Error("HTTP Err: Status={0}".format(resp.status)) Error("HTTP Err: Reason={0}".format(resp.reason)) Error("HTTP Err: Header={0}".format(resp.getheaders())) Error("HTTP Err: Body={0}".format(resp.read())) time.sleep(self.__class__.RetryWaitingInterval) resp = self._HttpRequest(method, host, path, port=port, data=data, secure=secure, headers=headers, proxyHost=proxyHost, proxyPort=proxyPort) return None def HttpGet(self, url, headers=None, maxRetry=3, chkProxy=False): return self.HttpRequest("GET", url, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy) def HttpHead(self, url, headers=None, maxRetry=3, chkProxy=False): return self.HttpRequest("HEAD", url, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy) def HttpPost(self, url, data, headers=None, maxRetry=3, chkProxy=False): return self.HttpRequest("POST", url, data=data, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy) def HttpPut(self, url, data, headers=None, maxRetry=3, chkProxy=False): return self.HttpRequest("PUT", url, data=data, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy) def HttpDelete(self, url, headers=None, maxRetry=3, chkProxy=False): return self.HttpRequest("DELETE", url, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy) def HttpGetWithoutHeaders(self, url, maxRetry=3, chkProxy=False): """ Return data from an HTTP get on 'url'. """ resp = self.HttpGet(url, headers=None, maxRetry=maxRetry, chkProxy=chkProxy) return resp.read() if resp is not None else None def HttpGetWithHeaders(self, url, maxRetry=3, chkProxy=False): """ Return data from an HTTP get on 'url' with x-ms-agent-name and x-ms-version headers. """ resp = self.HttpGet(url, headers={ "x-ms-agent-name": GuestAgentName, "x-ms-version": ProtocolVersion }, maxRetry=maxRetry, chkProxy=chkProxy) return resp.read() if resp is not None else None def HttpSecureGetWithHeaders(self, url, transportCert, maxRetry=3, chkProxy=False): """ Return output of get using ssl cert. 
""" resp = self.HttpGet(url, headers={ "x-ms-agent-name": GuestAgentName, "x-ms-version": ProtocolVersion, "x-ms-cipher-name": "DES_EDE3_CBC", "x-ms-guest-agent-public-x509-cert": transportCert }, maxRetry=maxRetry, chkProxy=chkProxy) return resp.read() if resp is not None else None def HttpPostWithHeaders(self, url, data, maxRetry=3, chkProxy=False): headers = { "x-ms-agent-name": GuestAgentName, "Content-Type": "text/xml; charset=utf-8", "x-ms-version": ProtocolVersion } try: return self.HttpPost(url, data=data, headers=headers, maxRetry=maxRetry, chkProxy=chkProxy) except HttpResourceGoneError as e: Error("Failed to post: {0} {1}".format(url, e)) return None __StorageVersion="2014-02-14" def GetBlobType(url): restutil = Util() #Check blob type LogIfVerbose("Check blob type.") timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) blobPropResp = restutil.HttpHead(url, { "x-ms-date" : timestamp, 'x-ms-version' : __StorageVersion }, chkProxy=True); blobType = None if blobPropResp is None: Error("Can't get status blob type.") return None blobType = blobPropResp.getheader("x-ms-blob-type") LogIfVerbose("Blob type={0}".format(blobType)) return blobType def PutBlockBlob(url, data): restutil = Util() LogIfVerbose("Upload block blob") timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) ret = restutil.HttpPut(url, data, { "x-ms-date" : timestamp, "x-ms-blob-type" : "BlockBlob", "Content-Length": str(len(data)), "x-ms-version" : __StorageVersion }, chkProxy=True) if ret is None: Error("Failed to upload block blob for status.") return -1 return 0 def PutPageBlob(url, data): restutil = Util() LogIfVerbose("Replace old page blob") timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) #Align to 512 bytes pageBlobSize = ((len(data) + 511) / 512) * 512 ret = restutil.HttpPut(url, "", { "x-ms-date" : timestamp, "x-ms-blob-type" : "PageBlob", "Content-Length": "0", "x-ms-blob-content-length" : str(pageBlobSize), "x-ms-version" : __StorageVersion }, chkProxy=True) if ret is None: Error("Failed to clean up page blob for status") return -1 if url.index('?') < 0: url = "{0}?comp=page".format(url) else: url = "{0}&comp=page".format(url) LogIfVerbose("Upload page blob") pageMax = 4 * 1024 * 1024 #Max page size: 4MB start = 0 end = 0 while end < len(data): end = min(len(data), start + pageMax) contentSize = end - start #Align to 512 bytes pageEnd = ((end + 511) / 512) * 512 bufSize = pageEnd - start buf = bytearray(bufSize) buf[0 : contentSize] = data[start : end] ret = restutil.HttpPut(url, buffer(buf), { "x-ms-date" : timestamp, "x-ms-range" : "bytes={0}-{1}".format(start, pageEnd - 1), "x-ms-page-write" : "update", "x-ms-version" : __StorageVersion, "Content-Length": str(pageEnd - start) }, chkProxy=True) if ret is None: Error("Failed to upload page blob for status") return -1 start = end return 0 def UploadStatusBlob(url, data): LogIfVerbose("Upload status blob") LogIfVerbose("Status={0}".format(data)) blobType = GetBlobType(url) if blobType == "BlockBlob": return PutBlockBlob(url, data) elif blobType == "PageBlob": return PutPageBlob(url, data) else: Error("Unknown blob type: {0}".format(blobType)) return -1 class TCPHandler(SocketServer.BaseRequestHandler): """ Callback object for LoadBalancerProbeServer. Recv and send LB probe messages. 
""" def __init__(self,lb_probe): super(TCPHandler,self).__init__() self.lb_probe=lb_probe def GetHttpDateTimeNow(self): """ Return formatted gmtime "Date: Fri, 25 Mar 2011 04:53:10 GMT" """ return time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime()) def handle(self): """ Log LB probe messages, read the socket buffer, send LB probe response back to server. """ self.lb_probe.ProbeCounter = (self.lb_probe.ProbeCounter + 1) % 1000000 log = [NoLog, LogIfVerbose][ThrottleLog(self.lb_probe.ProbeCounter)] strCounter = str(self.lb_probe.ProbeCounter) if self.lb_probe.ProbeCounter == 1: Log("Receiving LB probes.") log("Received LB probe # " + strCounter) self.request.recv(1024) self.request.send("HTTP/1.1 200 OK\r\nContent-Length: 2\r\nContent-Type: text/html\r\nDate: " + self.GetHttpDateTimeNow() + "\r\n\r\nOK") class LoadBalancerProbeServer(object): """ Threaded object to receive and send LB probe messages. Load Balancer messages but be recv'd by the load balancing server, or this node may be shut-down. """ def __init__(self, port): self.ProbeCounter = 0 self.server = SocketServer.TCPServer((self.get_ip(), port), TCPHandler) self.server_thread = threading.Thread(target = self.server.serve_forever) self.server_thread.setDaemon(True) self.server_thread.start() def shutdown(self): self.server.shutdown() def get_ip(self): for retry in range(1,6): ip = MyDistro.GetIpv4Address() if ip == None : Log("LoadBalancerProbeServer: GetIpv4Address() returned None, sleeping 10 before retry " + str(retry+1) ) time.sleep(10) else: return ip class ConfigurationProvider(object): """ Parse amd store key:values in waagent.conf """ def __init__(self, walaConfigFile): self.values = dict() if 'MyDistro' not in globals(): global MyDistro MyDistro = GetMyDistro() if walaConfigFile is None: walaConfigFile = MyDistro.getConfigurationPath() if os.path.isfile(walaConfigFile) == False: raise Exception("Missing configuration in {0}".format(walaConfigFile)) try: for line in GetFileContents(walaConfigFile).split('\n'): if not line.startswith("#") and "=" in line: parts = line.split()[0].split('=') value = parts[1].strip("\" ") if value != "None": self.values[parts[0]] = value else: self.values[parts[0]] = None except: Error("Unable to parse {0}".format(walaConfigFile)) raise return def get(self, key): return self.values.get(key) class EnvMonitor(object): """ Montor changes to dhcp and hostname. If dhcp clinet process re-start has occurred, reset routes, dhcp with fabric. """ def __init__(self): self.shutdown = False self.HostName = socket.gethostname() self.server_thread = threading.Thread(target = self.monitor) self.server_thread.setDaemon(True) self.server_thread.start() self.published = False def monitor(self): """ Monitor dhcp client pid and hostname. If dhcp clinet process re-start has occurred, reset routes, dhcp with fabric. 
""" publish = Config.get("Provisioning.MonitorHostName") dhcpcmd = MyDistro.getpidcmd+ ' ' + MyDistro.getDhcpClientName() dhcppid = RunGetOutput(dhcpcmd)[1] while not self.shutdown: for a in RulesFiles: if os.path.isfile(a): if os.path.isfile(GetLastPathElement(a)): os.remove(GetLastPathElement(a)) shutil.move(a, ".") Log("EnvMonitor: Moved " + a + " -> " + LibDir) MyDistro.setScsiDiskTimeout() if publish != None and publish.lower().startswith("y"): try: if socket.gethostname() != self.HostName: Log("EnvMonitor: Detected host name change: " + self.HostName + " -> " + socket.gethostname()) self.HostName = socket.gethostname() WaAgent.UpdateAndPublishHostName(self.HostName) dhcppid = RunGetOutput(dhcpcmd)[1] self.published = True except: pass else: self.published = True pid = "" if not os.path.isdir("/proc/" + dhcppid.strip()): pid = RunGetOutput(dhcpcmd)[1] if pid != "" and pid != dhcppid: Log("EnvMonitor: Detected dhcp client restart. Restoring routing table.") WaAgent.RestoreRoutes() dhcppid = pid for child in Children: if child.poll() != None: Children.remove(child) time.sleep(5) def SetHostName(self, name): """ Generic call to MyDistro.setHostname(name). Complian to Log on error. """ if socket.gethostname() == name: self.published = True elif MyDistro.setHostname(name): Error("Error: SetHostName: Cannot set hostname to " + name) return ("Error: SetHostName: Cannot set hostname to " + name) def IsHostnamePublished(self): """ Return self.published """ return self.published def ShutdownService(self): """ Stop server comminucation and join the thread to main thread. """ self.shutdown = True self.server_thread.join() class Certificates(object): """ Object containing certificates of host and provisioned user. Parses and splits certificates into files. """ # # 2010-12-15 # 2 # Pkcs7BlobWithPfxContents # MIILTAY... # # def __init__(self): self.reinitialize() def reinitialize(self): """ Reset the Role, Incarnation """ self.Incarnation = None self.Role = None def Parse(self, xmlText): """ Parse multiple certificates into seperate files. """ self.reinitialize() SetFileContents("Certificates.xml", xmlText) dom = xml.dom.minidom.parseString(xmlText) for a in [ "CertificateFile", "Version", "Incarnation", "Format", "Data", ]: if not dom.getElementsByTagName(a): Error("Certificates.Parse: Missing " + a) return None node = dom.childNodes[0] if node.localName != "CertificateFile": Error("Certificates.Parse: root not CertificateFile") return None SetFileContents("Certificates.p7m", "MIME-Version: 1.0\n" + "Content-Disposition: attachment; filename=\"Certificates.p7m\"\n" + "Content-Type: application/x-pkcs7-mime; name=\"Certificates.p7m\"\n" + "Content-Transfer-Encoding: base64\n\n" + GetNodeTextData(dom.getElementsByTagName("Data")[0])) if Run(Openssl + " cms -decrypt -in Certificates.p7m -inkey TransportPrivate.pem -recip TransportCert.pem | " + Openssl + " pkcs12 -nodes -password pass: -out Certificates.pem"): Error("Certificates.Parse: Failed to extract certificates from CMS message.") return self # There may be multiple certificates in this package. Split them. 
file = open("Certificates.pem") pindex = 1 cindex = 1 output = open("temp.pem", "w") for line in file.readlines(): output.write(line) if re.match(r'[-]+END .*?(KEY|CERTIFICATE)[-]+$',line): output.close() if re.match(r'[-]+END .*?KEY[-]+$',line): os.rename("temp.pem", str(pindex) + ".prv") pindex += 1 else: os.rename("temp.pem", str(cindex) + ".crt") cindex += 1 output = open("temp.pem", "w") output.close() os.remove("temp.pem") keys = dict() index = 1 filename = str(index) + ".crt" while os.path.isfile(filename): thumbprint = (RunGetOutput(Openssl + " x509 -in " + filename + " -fingerprint -noout")[1]).rstrip().split('=')[1].replace(':', '').upper() pubkey=RunGetOutput(Openssl + " x509 -in " + filename + " -pubkey -noout")[1] keys[pubkey] = thumbprint os.rename(filename, thumbprint + ".crt") os.chmod(thumbprint + ".crt", 0600) MyDistro.setSelinuxContext(thumbprint + '.crt','unconfined_u:object_r:ssh_home_t:s0') index += 1 filename = str(index) + ".crt" index = 1 filename = str(index) + ".prv" while os.path.isfile(filename): pubkey = RunGetOutput(Openssl + " rsa -in " + filename + " -pubout 2> /dev/null ")[1] os.rename(filename, keys[pubkey] + ".prv") os.chmod(keys[pubkey] + ".prv", 0600) MyDistro.setSelinuxContext( keys[pubkey] + '.prv','unconfined_u:object_r:ssh_home_t:s0') index += 1 filename = str(index) + ".prv" return self class SharedConfig(object): """ Parse role endpoint server and goal state config. """ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # def __init__(self): self.reinitialize() def reinitialize(self): """ Reset members. """ self.RdmaMacAddress = None self.RdmaIPv4Address = None self.xmlText = None def Parse(self, xmlText): """ Parse and write configuration to file SharedConfig.xml. """ LogIfVerbose(xmlText) self.reinitialize() self.xmlText = xmlText dom = xml.dom.minidom.parseString(xmlText) for a in [ "SharedConfig", "Deployment", "Service", "ServiceInstance", "Incarnation", "Role", ]: if not dom.getElementsByTagName(a): Error("SharedConfig.Parse: Missing " + a) node = dom.childNodes[0] if node.localName != "SharedConfig": Error("SharedConfig.Parse: root not SharedConfig") nodes = dom.getElementsByTagName("Instance") if nodes is not None and len(nodes) != 0: node = nodes[0] if node.hasAttribute("rdmaMacAddress"): addr = node.getAttribute("rdmaMacAddress") self.RdmaMacAddress = addr[0:2] for i in range(1, 6): self.RdmaMacAddress += ":" + addr[2 * i : 2 *i + 2] if node.hasAttribute("rdmaIPv4Address"): self.RdmaIPv4Address = node.getAttribute("rdmaIPv4Address") return self def Save(self): LogIfVerbose("Save SharedConfig.xml") SetFileContents("SharedConfig.xml", self.xmlText) def InvokeTopologyConsumer(self): program = Config.get("Role.TopologyConsumer") if program != None: try: Children.append(subprocess.Popen([program, LibDir + "/SharedConfig.xml"])) except OSError, e : ErrorWithPrefix('Agent.Run','Exception: '+ str(e) +' occured launching ' + program ) def Process(self): global rdma_configured if not rdma_configured and self.RdmaMacAddress is not None and self.RdmaIPv4Address is not None: handler = RdmaHandler(self.RdmaMacAddress, self.RdmaIPv4Address) handler.start() rdma_configured = True self.InvokeTopologyConsumer() rdma_configured = False class RdmaError(Exception): pass class RdmaHandler(object): """ Handle rdma configuration. 
""" def __init__(self, mac, ip_addr, dev="/dev/hvnd_rdma", dat_conf_files=['/etc/dat.conf', '/etc/rdma/dat.conf', '/usr/local/etc/dat.conf']): self.mac = mac self.ip_addr = ip_addr self.dev = dev self.dat_conf_files = dat_conf_files self.data = ('rdmaMacAddress="{0}" rdmaIPv4Address="{1}"' '').format(self.mac, self.ip_addr) def start(self): """ Start a new thread to process rdma """ threading.Thread(target=self.process).start() def process(self): try: self.set_dat_conf() self.set_rdma_dev() self.set_rdma_ip() except RdmaError as e: Error("Failed to config rdma device: {0}".format(e)) def set_dat_conf(self): """ Agent needs to search all possible locations for dat.conf """ Log("Set dat.conf") for dat_conf_file in self.dat_conf_files: if not os.path.isfile(dat_conf_file): continue try: self.write_dat_conf(dat_conf_file) except IOError as e: raise RdmaError("Failed to write to dat.conf: {0}".format(e)) def write_dat_conf(self, dat_conf_file): Log("Write config to {0}".format(dat_conf_file)) old = ("ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 " "dapl.2.0 \"\S+ 0\"") new = ("ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 " "dapl.2.0 \"{0} 0\"").format(self.ip_addr) lines = GetFileContents(dat_conf_file) lines = re.sub(old, new, lines) SetFileContents(dat_conf_file, lines) def set_rdma_dev(self): """ Write config string to /dev/hvnd_rdma """ Log("Set /dev/hvnd_rdma") self.wait_rdma_dev() self.write_rdma_dev_conf() def write_rdma_dev_conf(self): Log("Write rdma config to {0}: {1}".format(self.dev, self.data)) try: with open(self.dev, "w") as c: c.write(self.data) except IOError, e: raise RdmaError("Error writing {0}, {1}".format(self.dev, e)) def wait_rdma_dev(self): Log("Wait for /dev/hvnd_rdma") retry = 0 while retry < 120: if os.path.exists(self.dev): return time.sleep(1) retry += 1 raise RdmaError("The device doesn't show up in 120 seconds") def set_rdma_ip(self): Log("Set ip addr for rdma") try: if_name = MyDistro.getInterfaceNameByMac(self.mac) #Azure is using 12 bits network mask for infiniband. MyDistro.configIpV4(if_name, self.ip_addr, 12) except Exception as e: raise RdmaError("Failed to config rdma device: {0}".format(e)) class ExtensionsConfig(object): """ Parse ExtensionsConfig, downloading and unpacking them to /var/lib/waagent. Install if true, remove if it is set to false. """ # # # # # # # {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"1BE9A13AA1321C7C515EF109746998BAB6D86FD1", #"protectedSettings":"MIIByAYJKoZIhvcNAQcDoIIBuTCCAbUCAQAxggFxMIIBbQIBADBVMEExPzA9BgoJkiaJk/IsZAEZFi9XaW5kb3dzIEF6dXJlIFNlcnZpY2UgTWFuYWdlbWVudCBmb3IgR #Xh0ZW5zaW9ucwIQZi7dw+nhc6VHQTQpCiiV2zANBgkqhkiG9w0BAQEFAASCAQCKr09QKMGhwYe+O4/a8td+vpB4eTR+BQso84cV5KCAnD6iUIMcSYTrn9aveY6v6ykRLEw8GRKfri2d6 #tvVDggUrBqDwIgzejGTlCstcMJItWa8Je8gHZVSDfoN80AEOTws9Fp+wNXAbSuMJNb8EnpkpvigAWU2v6pGLEFvSKC0MCjDTkjpjqciGMcbe/r85RG3Zo21HLl0xNOpjDs/qqikc/ri43Y76E/X #v1vBSHEGMFprPy/Hwo3PqZCnulcbVzNnaXN3qi/kxV897xGMPPC3IrO7Nc++AT9qRLFI0841JLcLTlnoVG1okPzK9w6ttksDQmKBSHt3mfYV+skqs+EOMDsGCSqGSIb3DQEHATAUBggqh #kiG9w0DBwQITgu0Nu3iFPuAGD6/QzKdtrnCI5425fIUy7LtpXJGmpWDUA==","publicSettings":{"port":"3000"}}}]} # # #https://ostcextensions.blob.core.test-cint.azure-test.net/vhds/eg-plugin7-vm.eg-plugin7-vm.eg-plugin7-vm.status?sr=b&sp=rw& #se=9999-01-01&sk=key1&sv=2012-02-12&sig=wRUIDN1x2GC06FWaetBP9sjjifOWvRzS2y2XBB4qoBU%3D def __init__(self): self.reinitialize() def reinitialize(self): """ Reset members. 
""" self.Extensions = None self.Plugins = None self.Util = None def Parse(self, xmlText): """ Write configuration to file ExtensionsConfig.xml. Log plugin specific activity to /var/log/azure/.//CommandExecution.log. If state is enabled: if the plugin is installed: if the new plugin's version is higher if DisallowMajorVersionUpgrade is false or if true, the version is a minor version do upgrade: download the new archive do the updateCommand. disable the old plugin and remove enable the new plugin if the new plugin's version is the same or lower: create the new .settings file from the configuration received do the enableCommand if the plugin is not installed: download/unpack archive and call the installCommand/Enable if state is disabled: call disableCommand if state is uninstall: call uninstallCommand remove old plugin directory. """ self.reinitialize() self.Util=Util() dom = xml.dom.minidom.parseString(xmlText) LogIfVerbose(xmlText) self.plugin_log_dir='/var/log/azure' if not os.path.exists(self.plugin_log_dir): os.mkdir(self.plugin_log_dir) try: self.Extensions=dom.getElementsByTagName("Extensions") pg = dom.getElementsByTagName("Plugins") if len(pg) > 0: self.Plugins = pg[0].getElementsByTagName("Plugin") else: self.Plugins = [] incarnation=self.Extensions[0].getAttribute("goalStateIncarnation") SetFileContents('ExtensionsConfig.'+incarnation+'.xml', xmlText) except Exception, e: Error('ERROR: Error parsing ExtensionsConfig: {0}.'.format(e)) return None for p in self.Plugins: if len(p.getAttribute("location"))<1: # this plugin is inside the PluginSettings continue p.setAttribute('restricted','false') previous_version = None version=p.getAttribute("version") name=p.getAttribute("name") plog_dir=self.plugin_log_dir+'/'+name +'/'+ version if not os.path.exists(plog_dir): os.makedirs(plog_dir) p.plugin_log=plog_dir+'/CommandExecution.log' handler=name + '-' + version if p.getAttribute("isJson") != 'true': Error("Plugin " + name+" version: " +version+" is not a JSON Extension. 
Skipping.") continue Log("Found Plugin: " + name + ' version: ' + version) if p.getAttribute("state") == 'disabled' or p.getAttribute("state") == 'uninstall': #disable zip_dir=LibDir+"/" + name + '-' + version mfile=None for root, dirs, files in os.walk(zip_dir): for f in files: if f in ('HandlerManifest.json'): mfile=os.path.join(root,f) if mfile != None: break if mfile == None : Error('HandlerManifest.json not found.') continue manifest = GetFileContents(mfile) p.setAttribute('manifestdata',manifest) if self.launchCommand(p.plugin_log,name,version,'disableCommand') == None : self.SetHandlerState(handler, 'Enabled') Error('Unable to disable '+name) SimpleLog(p.plugin_log,'ERROR: Unable to disable '+name) else : self.SetHandlerState(handler, 'Disabled') Log(name+' is disabled') SimpleLog(p.plugin_log,name+' is disabled') # uninstall if needed if p.getAttribute("state") == 'uninstall': if self.launchCommand(p.plugin_log,name,version,'uninstallCommand') == None : self.SetHandlerState(handler, 'Installed') Error('Unable to uninstall '+name) SimpleLog(p.plugin_log,'Unable to uninstall '+name) else : self.SetHandlerState(handler, 'NotInstalled') Log(name+' uninstallCommand completed .') # remove the plugin Run('rm -rf ' + LibDir + '/' + name +'-'+ version + '*') Log(name +'-'+ version + ' extension files deleted.') SimpleLog(p.plugin_log,name +'-'+ version + ' extension files deleted.') continue # state is enabled # if the same plugin exists and the version is newer or # does not exist then download and unzip the new plugin plg_dir=None latest_version_installed = LooseVersion("0.0") for item in os.listdir(LibDir): itemPath = os.path.join(LibDir, item) if os.path.isdir(itemPath) and name in item: try: #Split plugin dir name with '-' to get intalled plugin name and version sperator = item.rfind('-') if sperator < 0: continue installed_plg_name = item[0:sperator] installed_plg_version = LooseVersion(item[sperator + 1:]) #Check installed plugin name and compare installed version to get the latest version installed if installed_plg_name == name and installed_plg_version > latest_version_installed: plg_dir = itemPath previous_version = str(installed_plg_version) latest_version_installed = installed_plg_version except Exception as e: Warn("Invalid plugin dir name: {0} {1}".format(item, e)) continue if plg_dir == None or LooseVersion(version) > LooseVersion(previous_version) : location=p.getAttribute("location") Log("Downloading plugin manifest: " + name + " from " + location) SimpleLog(p.plugin_log,"Downloading plugin manifest: " + name + " from " + location) self.Util.Endpoint=location.split('/')[2] Log("Plugin server is: " + self.Util.Endpoint) SimpleLog(p.plugin_log,"Plugin server is: " + self.Util.Endpoint) manifest=self.Util.HttpGetWithoutHeaders(location, chkProxy=True) if manifest == None: Error("Unable to download plugin manifest" + name + " from primary location. Attempting with failover location.") SimpleLog(p.plugin_log,"Unable to download plugin manifest" + name + " from primary location. Attempting with failover location.") failoverlocation=p.getAttribute("failoverlocation") self.Util.Endpoint=failoverlocation.split('/')[2] Log("Plugin failover server is: " + self.Util.Endpoint) SimpleLog(p.plugin_log,"Plugin failover server is: " + self.Util.Endpoint) manifest=self.Util.HttpGetWithoutHeaders(failoverlocation, chkProxy=True) #if failoverlocation also fail what to do then? 
if manifest == None: AddExtensionEvent(name,WALAEventOperation.Download,False,0,version,"Download mainfest fail "+failoverlocation) Log("Plugin manifest " + name + " downloading failed from failover location.") SimpleLog(p.plugin_log,"Plugin manifest " + name + " downloading failed from failover location.") filepath=LibDir+"/" + name + '.' + incarnation + '.manifest' if os.path.splitext(location)[-1] == '.xml' : #if this is an xml file we may have a BOM if ord(manifest[0]) > 128 and ord(manifest[1]) > 128 and ord(manifest[2]) > 128: manifest=manifest[3:] SetFileContents(filepath,manifest) #Get the bundle url from the manifest p.setAttribute('manifestdata',manifest) man_dom = xml.dom.minidom.parseString(manifest) bundle_uri = "" for mp in man_dom.getElementsByTagName("Plugin"): if GetNodeTextData(mp.getElementsByTagName("Version")[0]) == version: bundle_uri = GetNodeTextData(mp.getElementsByTagName("Uri")[0]) break if len(mp.getElementsByTagName("DisallowMajorVersionUpgrade")): if GetNodeTextData(mp.getElementsByTagName("DisallowMajorVersionUpgrade")[0]) == 'true' and previous_version !=None and previous_version.split('.')[0] != version.split('.')[0] : Log('DisallowMajorVersionUpgrade is true, this major version is restricted from upgrade.') SimpleLog(p.plugin_log,'DisallowMajorVersionUpgrade is true, this major version is restricted from upgrade.') p.setAttribute('restricted','true') continue if len(bundle_uri) < 1 : Error("Unable to fetch Bundle URI from manifest for " + name + " v " + version) SimpleLog(p.plugin_log,"Unable to fetch Bundle URI from manifest for " + name + " v " + version) continue Log("Bundle URI = " + bundle_uri) SimpleLog(p.plugin_log,"Bundle URI = " + bundle_uri) # Download the zipfile archive and save as '.zip' bundle=self.Util.HttpGetWithoutHeaders(bundle_uri, chkProxy=True) if bundle == None: AddExtensionEvent(name,WALAEventOperation.Download,True,0,version,"Download zip fail "+bundle_uri) Error("Unable to download plugin bundle" + bundle_uri ) SimpleLog(p.plugin_log,"Unable to download plugin bundle" + bundle_uri ) continue AddExtensionEvent(name,WALAEventOperation.Download,True,0,version,"Download Success") b=bytearray(bundle) filepath=LibDir+"/" + os.path.basename(bundle_uri) + '.zip' SetFileContents(filepath,b) Log("Plugin bundle" + bundle_uri + "downloaded successfully length = " + str(len(bundle))) SimpleLog(p.plugin_log,"Plugin bundle" + bundle_uri + "downloaded successfully length = " + str(len(bundle))) # unpack the archive z=zipfile.ZipFile(filepath) zip_dir=LibDir+"/" + name + '-' + version z.extractall(zip_dir) Log('Extracted ' + bundle_uri + ' to ' + zip_dir) SimpleLog(p.plugin_log,'Extracted ' + bundle_uri + ' to ' + zip_dir) # zip no file perms in .zip so set all the scripts to +x Run( "find " + zip_dir +" -type f | xargs chmod u+x ") #write out the base64 config data so the plugin can process it. mfile=None for root, dirs, files in os.walk(zip_dir): for f in files: if f in ('HandlerManifest.json'): mfile=os.path.join(root,f) if mfile != None: break if mfile == None : Error('HandlerManifest.json not found.') SimpleLog(p.plugin_log,'HandlerManifest.json not found.') continue manifest = GetFileContents(mfile) p.setAttribute('manifestdata',manifest) # create the status and config dirs Run('mkdir -p ' + root + '/status') Run('mkdir -p ' + root + '/config') # write out the configuration data to goalStateIncarnation.settings file in the config path. 
config='' seqNo='0' if len(dom.getElementsByTagName("PluginSettings")) != 0 : pslist=dom.getElementsByTagName("PluginSettings")[0].getElementsByTagName("Plugin") for ps in pslist: if name == ps.getAttribute("name") and version == ps.getAttribute("version"): Log("Found RuntimeSettings for " + name + " V " + version) SimpleLog(p.plugin_log,"Found RuntimeSettings for " + name + " V " + version) config=GetNodeTextData(ps.getElementsByTagName("RuntimeSettings")[0]) seqNo=ps.getElementsByTagName("RuntimeSettings")[0].getAttribute("seqNo") break if config == '': Log("No RuntimeSettings for " + name + " V " + version) SimpleLog(p.plugin_log,"No RuntimeSettings for " + name + " V " + version) SetFileContents(root +"/config/" + seqNo +".settings", config ) #create HandlerEnvironment.json handler_env='[{ "name": "'+name+'", "seqNo": "'+seqNo+'", "version": 1.0, "handlerEnvironment": { "logFolder": "'+os.path.dirname(p.plugin_log)+'", "configFolder": "' + root + '/config", "statusFolder": "' + root + '/status", "heartbeatFile": "'+ root + '/heartbeat.log"}}]' SetFileContents(root+'/HandlerEnvironment.json',handler_env) self.SetHandlerState(handler, 'NotInstalled') cmd = '' getcmd='installCommand' if plg_dir != None and previous_version != None and LooseVersion(version) > LooseVersion(previous_version): previous_handler=name+'-'+previous_version if self.GetHandlerState(previous_handler) != 'NotInstalled': getcmd='updateCommand' # disable the old plugin if it exists if self.launchCommand(p.plugin_log,name,previous_version,'disableCommand') == None : self.SetHandlerState(previous_handler, 'Enabled') Error('Unable to disable old plugin '+name+' version ' + previous_version) SimpleLog(p.plugin_log,'Unable to disable old plugin '+name+' version ' + previous_version) else : self.SetHandlerState(previous_handler, 'Disabled') Log(name+' version ' + previous_version + ' is disabled') SimpleLog(p.plugin_log,name+' version ' + previous_version + ' is disabled') try: Log("Copy status file from old plugin dir to new") old_plg_dir = plg_dir new_plg_dir = os.path.join(LibDir, "{0}-{1}".format(name, version)) old_ext_status_dir = os.path.join(old_plg_dir, "status") new_ext_status_dir = os.path.join(new_plg_dir, "status") if os.path.isdir(old_ext_status_dir): for status_file in os.listdir(old_ext_status_dir): status_file_path = os.path.join(old_ext_status_dir, status_file) if os.path.isfile(status_file_path): shutil.copy2(status_file_path, new_ext_status_dir) mrseq_file = os.path.join(old_plg_dir, "mrseq") if os.path.isfile(mrseq_file): shutil.copy(mrseq_file, new_plg_dir) except Exception as e: Error("Failed to copy status file.") isupgradeSuccess = True if getcmd=='updateCommand': if self.launchCommand(p.plugin_log,name,version,getcmd,previous_version) == None : Error('Update failed for '+name+'-'+version) SimpleLog(p.plugin_log,'Update failed for '+name+'-'+version) isupgradeSuccess=False else : Log('Update complete'+name+'-'+version) SimpleLog(p.plugin_log,'Update complete'+name+'-'+version) # if we updated - call unistall for the old plugin if self.launchCommand(p.plugin_log,name,previous_version,'uninstallCommand') == None : self.SetHandlerState(previous_handler, 'Installed') Error('Uninstall failed for '+name+'-'+previous_version) SimpleLog(p.plugin_log,'Uninstall failed for '+name+'-'+previous_version) isupgradeSuccess=False else : self.SetHandlerState(previous_handler, 'NotInstalled') Log('Uninstall complete'+ previous_handler ) SimpleLog(p.plugin_log,'Uninstall complete'+ name +'-' + previous_version) try: 
#rm old plugin dir if os.path.isdir(plg_dir): shutil.rmtree(plg_dir) Log(name +'-'+ previous_version + ' extension files deleted.') SimpleLog(p.plugin_log,name +'-'+ previous_version + ' extension files deleted.') except Exception as e: Error("Failed to remove old plugin directory") AddExtensionEvent(name,WALAEventOperation.Upgrade,isupgradeSuccess,0,previous_version) else : # run install if self.launchCommand(p.plugin_log,name,version,getcmd) == None : self.SetHandlerState(handler, 'NotInstalled') Error('Installation failed for '+name+'-'+version) SimpleLog(p.plugin_log,'Installation failed for '+name+'-'+version) else : self.SetHandlerState(handler, 'Installed') Log('Installation completed for '+name+'-'+version) SimpleLog(p.plugin_log,'Installation completed for '+name+'-'+version) #end if plg_dir == none or version > = prev # change incarnation of settings file so it knows how to name status... zip_dir=LibDir+"/" + name + '-' + version mfile=None for root, dirs, files in os.walk(zip_dir): for f in files: if f in ('HandlerManifest.json'): mfile=os.path.join(root,f) if mfile != None: break if mfile == None : Error('HandlerManifest.json not found.') SimpleLog(p.plugin_log,'HandlerManifest.json not found.') continue manifest = GetFileContents(mfile) p.setAttribute('manifestdata',manifest) config='' seqNo='0' if len(dom.getElementsByTagName("PluginSettings")) != 0 : try: pslist=dom.getElementsByTagName("PluginSettings")[0].getElementsByTagName("Plugin") except: Error('Error parsing ExtensionsConfig.') SimpleLog(p.plugin_log,'Error parsing ExtensionsConfig.') continue for ps in pslist: if name == ps.getAttribute("name") and version == ps.getAttribute("version"): Log("Found RuntimeSettings for " + name + " V " + version) SimpleLog(p.plugin_log,"Found RuntimeSettings for " + name + " V " + version) config=GetNodeTextData(ps.getElementsByTagName("RuntimeSettings")[0]) seqNo=ps.getElementsByTagName("RuntimeSettings")[0].getAttribute("seqNo") break if config == '': Error("No RuntimeSettings for " + name + " V " + version) SimpleLog(p.plugin_log,"No RuntimeSettings for " + name + " V " + version) SetFileContents(root +"/config/" + seqNo +".settings", config ) # state is still enable if (self.GetHandlerState(handler) == 'NotInstalled'): # run install first if true if self.launchCommand(p.plugin_log,name,version,'installCommand') == None : self.SetHandlerState(handler, 'NotInstalled') Error('Installation failed for '+name+'-'+version) SimpleLog(p.plugin_log,'Installation failed for '+name+'-'+version) else : self.SetHandlerState(handler, 'Installed') Log('Installation completed for '+name+'-'+version) SimpleLog(p.plugin_log,'Installation completed for '+name+'-'+version) if (self.GetHandlerState(handler) != 'NotInstalled'): if self.launchCommand(p.plugin_log,name,version,'enableCommand') == None : self.SetHandlerState(handler, 'Installed') Error('Enable failed for '+name+'-'+version) SimpleLog(p.plugin_log,'Enable failed for '+name+'-'+version) else : self.SetHandlerState(handler, 'Enabled') Log('Enable completed for '+name+'-'+version) SimpleLog(p.plugin_log,'Enable completed for '+name+'-'+version) # this plugin processing is complete Log('Processing completed for '+name+'-'+version) SimpleLog(p.plugin_log,'Processing completed for '+name+'-'+version) #end plugin processing loop Log('Finished processing ExtensionsConfig.xml') try: SimpleLog(p.plugin_log,'Finished processing ExtensionsConfig.xml') except: pass return self def launchCommand(self,plugin_log,name,version,command,prev_version=None): 
        commandToEventOperation = {
            "installCommand": WALAEventOperation.Install,
            "uninstallCommand": WALAEventOperation.UnIsntall,  # (sic) matches the member name defined on WALAEventOperation
            "updateCommand": WALAEventOperation.Upgrade,
            "enableCommand": WALAEventOperation.Enable,
            "disableCommand": WALAEventOperation.Disable,
        }
        isSuccess = True
        start = datetime.datetime.now()
        r = self.__launchCommandWithoutEventLog(plugin_log, name, version, command, prev_version)
        if r == None:
            isSuccess = False
        Duration = int((datetime.datetime.now() - start).seconds)
        if commandToEventOperation.get(command):
            AddExtensionEvent(name, commandToEventOperation[command], isSuccess, Duration, version)
        return r

    def __launchCommandWithoutEventLog(self, plugin_log, name, version, command, prev_version=None):
        # get the manifest and read the command
        mfile = None
        zip_dir = LibDir + "/" + name + '-' + version
        for root, dirs, files in os.walk(zip_dir):
            for f in files:
                if f in ('HandlerManifest.json'):
                    mfile = os.path.join(root, f)
            if mfile != None:
                break
        if mfile == None:
            Error('HandlerManifest.json not found.')
            SimpleLog(plugin_log, 'HandlerManifest.json not found.')
            return None
        manifest = GetFileContents(mfile)
        try:
            jsn = json.loads(manifest)
        except:
            Error('Error parsing HandlerManifest.json.')
            SimpleLog(plugin_log, 'Error parsing HandlerManifest.json.')
            return None
        if type(jsn) == list:
            jsn = jsn[0]
        if jsn.has_key('handlerManifest'):
            cmd = jsn['handlerManifest'][command]
        else:
            Error('Key handlerManifest not found. Handler cannot be installed.')
            SimpleLog(plugin_log, 'Key handlerManifest not found. Handler cannot be installed.')
            return None  # without this return, 'cmd' below would be unbound
        if len(cmd) == 0:
            Error('Unable to read ' + command)
            SimpleLog(plugin_log, 'Unable to read ' + command)
            return None
        # for update we send the path of the old installation
        arg = ''
        if prev_version != None:
            arg = ' ' + LibDir + '/' + name + '-' + prev_version
        dirpath = os.path.dirname(mfile)
        LogIfVerbose('Command is ' + dirpath + '/' + cmd)
        # launch
        pid = None
        try:
            child = subprocess.Popen(dirpath + '/' + cmd + arg, shell=True, cwd=dirpath, stdout=subprocess.PIPE)
        except Exception as e:
            Error('Exception launching ' + cmd + str(e))
            SimpleLog(plugin_log, 'Exception launching ' + cmd + str(e))
            return None  # without this return, 'child' below would be unbound
        pid = child.pid
        if pid == None or pid < 1:
            ExtensionChildren.append((-1, root))
            Error('Error launching ' + cmd + '.')
            SimpleLog(plugin_log, 'Error launching ' + cmd + '.')
        else:
            ExtensionChildren.append((pid, root))
            Log("Spawned " + cmd + " PID " + str(pid))
            SimpleLog(plugin_log, "Spawned " + cmd + " PID " + str(pid))
        # wait until install/upgrade is finished
        timeout = 300  # 5 minutes
        retry = timeout / 5
        while retry > 0 and child.poll() == None:
            LogIfVerbose(cmd + ' still running with PID ' + str(pid))
            time.sleep(5)
            retry -= 1
        if retry == 0:
            Error('Process exceeded timeout of ' + str(timeout) + ' seconds. Terminating process ' + str(pid))
            SimpleLog(plugin_log, 'Process exceeded timeout of ' + str(timeout) + ' seconds. Terminating process ' + str(pid))
            os.kill(pid, 9)
            return None
        code = child.wait()
        if code == None or code != 0:
            Error('Process ' + str(pid) + ' returned non-zero exit code (' + str(code) + ')')
            SimpleLog(plugin_log, 'Process ' + str(pid) + ' returned non-zero exit code (' + str(code) + ')')
            return None
        Log(command + ' completed.')
        SimpleLog(plugin_log, command + ' completed.')
        return 0

    def ReportHandlerStatus(self):
        """
        Collect all status reports.
        """
        # { "version": "1.0", "timestampUTC": "2014-03-31T21:28:58Z",
        #   "aggregateStatus": {
        #     "guestAgentStatus": { "version": "2.0.4PRE", "status": "Ready", "formattedMessage": { "lang": "en-US", "message": "GuestAgent is running and accepting new configurations."
} }, # "handlerAggregateStatus": [{ # "handlerName": "ExampleHandlerLinux", "handlerVersion": "1.0", "status": "Ready", "runtimeSettingsStatus": { # "sequenceNumber": "2", "settingsStatus": { "timestampUTC": "2014-03-31T23:46:00Z", "status": { "name": "ExampleHandlerLinux", "operation": "Command Execution Finished", "configurationAppliedTime": "2014-03-31T23:46:00Z", "status": "success", "formattedMessage": { "lang": "en-US", "message": "Finished executing command" }, # "substatus": [ # { "name": "StdOut", "status": "success", "formattedMessage": { "lang": "en-US", "message": "Goodbye world!" } }, # { "name": "StdErr", "status": "success", "formattedMessage": { "lang": "en-US", "message": "" } } # ] # } } } } # ] # }} try: incarnation=self.Extensions[0].getAttribute("goalStateIncarnation") except: Error('Error parsing attribute "goalStateIncarnation". Unable to send status reports') return -1 status='' statuses='' for p in self.Plugins: if p.getAttribute("state") == 'uninstall' or p.getAttribute("restricted") == 'true' : continue version=p.getAttribute("version") name=p.getAttribute("name") if p.getAttribute("isJson") != 'true': LogIfVerbose("Plugin " + name+" version: " +version+" is not a JSON Extension. Skipping.") continue reportHeartbeat = False if len(p.getAttribute("manifestdata"))<1: Error("Failed to get manifestdata.") else: reportHeartbeat = json.loads(p.getAttribute("manifestdata"))[0]['handlerManifest']['reportHeartbeat'] if len(statuses)>0: statuses+=',' statuses+=self.GenerateAggStatus(name, version, reportHeartbeat) tstamp=time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) #header #agent state if provisioned == False: if provisionError == None : agent_state='Provisioning' agent_msg='Guest Agent is starting.' else: agent_state='Provisioning Error.' agent_msg=provisionError else: agent_state='Ready' agent_msg='GuestAgent is running and accepting new configurations.' status='{"version":"1.0","timestampUTC":"'+tstamp+'","aggregateStatus":{"guestAgentStatus":{"version":"'+GuestAgentVersion+'","status":"'+agent_state+'","formattedMessage":{"lang":"en-US","message":"'+agent_msg+'"}},"handlerAggregateStatus":['+statuses+']}}' try: uri=GetNodeTextData(self.Extensions[0].getElementsByTagName("StatusUploadBlob")[0]).replace('&','&') except: Error('Error parsing element "StatusUploadBlob". 
Unable to send status reports') return -1 LogIfVerbose('Status report '+status+' sent to ' + uri) return UploadStatusBlob(uri, status.encode("utf-8")) def GetCurrentSequenceNumber(self, plugin_base_dir): """ Get the settings file with biggest file number in config folder """ config_dir = os.path.join(plugin_base_dir, 'config') seq_no = 0 for subdir, dirs, files in os.walk(config_dir): for file in files: try: cur_seq_no = int(os.path.basename(file).split('.')[0]) if cur_seq_no > seq_no: seq_no = cur_seq_no except ValueError: continue return str(seq_no) def GenerateAggStatus(self, name, version, reportHeartbeat = False): """ Generate the status which Azure can understand by the status and heartbeat reported by extension """ plugin_base_dir = LibDir+'/'+name+'-'+version+'/' current_seq_no = self.GetCurrentSequenceNumber(plugin_base_dir) status_file=os.path.join(plugin_base_dir, 'status/', current_seq_no +'.status') heartbeat_file = os.path.join(plugin_base_dir, 'heartbeat.log') handler_state_file = os.path.join(plugin_base_dir, 'config', 'HandlerState') agg_state = 'NotReady' handler_state = None status_obj = None status_code = None formatted_message = None localized_message = None if os.path.exists(handler_state_file): handler_state = GetFileContents(handler_state_file).lower() if HandlerStatusToAggStatus.has_key(handler_state): agg_state = HandlerStatusToAggStatus[handler_state] if reportHeartbeat: if os.path.exists(heartbeat_file): d=int(time.time()-os.stat(heartbeat_file).st_mtime) if d > 600 : # not updated for more than 10 min agg_state = 'Unresponsive' else: try: heartbeat = json.loads(GetFileContents(heartbeat_file))[0]["heartbeat"] agg_state = heartbeat.get("status") status_code = heartbeat.get("code") formatted_message = heartbeat.get("formattedMessage") localized_message = heartbeat.get("message") except: Error("Incorrect heartbeat file. Ignore it. ") else: agg_state = 'Unresponsive' #get status file reported by extension if os.path.exists(status_file): # raw status generated by extension is an array, get the first item and remove the unnecessary element try: status_obj = json.loads(GetFileContents(status_file))[0] del status_obj["version"] except: Error("Incorrect status file. Will NOT settingsStatus in settings. 
") agg_status_obj = {"handlerName": name, "handlerVersion": version, "status": agg_state, "runtimeSettingsStatus" : {"sequenceNumber": current_seq_no}} if status_obj: agg_status_obj["runtimeSettingsStatus"]["settingsStatus"] = status_obj if status_code != None: agg_status_obj["code"] = status_code if formatted_message: agg_status_obj["formattedMessage"] = formatted_message if localized_message: agg_status_obj["message"] = localized_message agg_status_string = json.dumps(agg_status_obj) LogIfVerbose("Handler Aggregated Status:" + agg_status_string) return agg_status_string def SetHandlerState(self, handler, state=''): zip_dir=LibDir+"/" + handler mfile=None for root, dirs, files in os.walk(zip_dir): for f in files: if f in ('HandlerManifest.json'): mfile=os.path.join(root,f) if mfile != None: break if mfile == None : Error('SetHandlerState(): HandlerManifest.json not found, cannot set HandlerState.') return None Log("SetHandlerState: "+handler+", "+state) return SetFileContents(os.path.dirname(mfile)+'/config/HandlerState', state) def GetHandlerState(self, handler): handlerState = GetFileContents(handler+'/config/HandlerState') if (handlerState): return handlerState.rstrip('\r\n') else: return 'NotInstalled' class HostingEnvironmentConfig(object): """ Parse Hosting enviromnet config and store in HostingEnvironmentConfig.xml """ # # # # # # # # # # # # # # # # # # # # # # # # # # def __init__(self): self.reinitialize() def reinitialize(self): """ Reset Members. """ self.StoredCertificates = None self.Deployment = None self.Incarnation = None self.Role = None self.HostingEnvironmentSettings = None self.ApplicationSettings = None self.Certificates = None self.ResourceReferences = None def Parse(self, xmlText): """ Parse and create HostingEnvironmentConfig.xml. """ self.reinitialize() SetFileContents("HostingEnvironmentConfig.xml", xmlText) dom = xml.dom.minidom.parseString(xmlText) for a in [ "HostingEnvironmentConfig", "Deployment", "Service", "ServiceInstance", "Incarnation", "Role", ]: if not dom.getElementsByTagName(a): Error("HostingEnvironmentConfig.Parse: Missing " + a) return None node = dom.childNodes[0] if node.localName != "HostingEnvironmentConfig": Error("HostingEnvironmentConfig.Parse: root not HostingEnvironmentConfig") return None self.ApplicationSettings = dom.getElementsByTagName("Setting") self.Certificates = dom.getElementsByTagName("StoredCertificate") return self def DecryptPassword(self, e): """ Return decrypted password. """ SetFileContents("password.p7m", "MIME-Version: 1.0\n" + "Content-Disposition: attachment; filename=\"password.p7m\"\n" + "Content-Type: application/x-pkcs7-mime; name=\"password.p7m\"\n" + "Content-Transfer-Encoding: base64\n\n" + textwrap.fill(e, 64)) return RunGetOutput(Openssl + " cms -decrypt -in password.p7m -inkey Certificates.pem -recip Certificates.pem")[1] def ActivateResourceDisk(self): return MyDistro.ActivateResourceDisk() def Process(self): """ Execute ActivateResourceDisk in separate thread. Create the user account. Launch ConfigurationConsumer if specified in the config. 
""" no_thread = False if DiskActivated == False: for m in inspect.getmembers(MyDistro): if 'ActivateResourceDiskNoThread' in m: no_thread = True break if no_thread == True : MyDistro.ActivateResourceDiskNoThread() else : diskThread = threading.Thread(target = self.ActivateResourceDisk) diskThread.start() User = None Pass = None Expiration = None Thumbprint = None for b in self.ApplicationSettings: sname = b.getAttribute("name") svalue = b.getAttribute("value") if User != None and Pass != None: if User != "root" and User != "" and Pass != "": CreateAccount(User, Pass, Expiration, Thumbprint) else: Error("Not creating user account: " + User) for c in self.Certificates: csha1 = c.getAttribute("certificateId").split(':')[1].upper() if os.path.isfile(csha1 + ".prv"): Log("Private key with thumbprint: " + csha1 + " was retrieved.") if os.path.isfile(csha1 + ".crt"): Log("Public cert with thumbprint: " + csha1 + " was retrieved.") program = Config.get("Role.ConfigurationConsumer") if program != None: try: Children.append(subprocess.Popen([program, LibDir + "/HostingEnvironmentConfig.xml"])) except OSError, e : ErrorWithPrefix('HostingEnvironmentConfig.Process','Exception: '+ str(e) +' occured launching ' + program ) class GoalState(Util): """ Primary container for all configuration except OvfXml. Encapsulates http communication with endpoint server. Initializes and populates: self.HostingEnvironmentConfig self.SharedConfig self.ExtensionsConfig self.Certificates """ # # # 2010-12-15 # 1 # # Started # # 16001 # # # # c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 # # # MachineRole_IN_0 # Started # # http://10.115.153.40:80/machine/c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2/MachineRole%5FIN%5F0?comp=config&type=hostingEnvironmentConfig&incarnation=1 # http://10.115.153.40:80/machine/c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2/MachineRole%5FIN%5F0?comp=config&type=sharedConfig&incarnation=1 # http://10.115.153.40:80/machine/c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2/MachineRole%5FIN%5F0?comp=certificates&incarnation=1 # http://100.67.238.230:80/machine/9c87aa94-3bda-45e3-b2b7-0eb0fca7baff/1552dd64dc254e6884f8d5b8b68aa18f.eg%2Dplug%2Dvm?comp=config&type=extensionsConfig&incarnation=2 # http://100.67.238.230:80/machine/9c87aa94-3bda-45e3-b2b7-0eb0fca7baff/1552dd64dc254e6884f8d5b8b68aa18f.eg%2Dplug%2Dvm?comp=config&type=fullConfig&incarnation=2 # # # # # # # There is only one Role for VM images. # # Of primary interest is: # LBProbePorts -- an http server needs to run here # We also note Container/ContainerID and RoleInstance/InstanceId to form the health report. # And of course, Incarnation # def __init__(self, Agent): self.Agent = Agent self.Endpoint = Agent.Endpoint self.TransportCert = Agent.TransportCert self.reinitialize() def reinitialize(self): self.Incarnation = None # integer self.ExpectedState = None # "Started" self.HostingEnvironmentConfigUrl = None self.HostingEnvironmentConfigXml = None self.HostingEnvironmentConfig = None self.SharedConfigUrl = None self.SharedConfigXml = None self.SharedConfig = None self.CertificatesUrl = None self.CertificatesXml = None self.Certificates = None self.ExtensionsConfigUrl = None self.ExtensionsConfigXml = None self.ExtensionsConfig = None self.RoleInstanceId = None self.ContainerId = None self.LoadBalancerProbePort = None # integer, ?list of integers def Parse(self, xmlText): """ Request configuration data from endpoint server. Parse and populate contained configuration objects. 
Calls Certificates().Parse() Calls SharedConfig().Parse Calls ExtensionsConfig().Parse Calls HostingEnvironmentConfig().Parse """ self.reinitialize() LogIfVerbose(xmlText) node = xml.dom.minidom.parseString(xmlText).childNodes[0] if node.localName != "GoalState": Error("GoalState.Parse: root not GoalState") return None for a in node.childNodes: if a.nodeType == node.ELEMENT_NODE: if a.localName == "Incarnation": self.Incarnation = GetNodeTextData(a) elif a.localName == "Machine": for b in a.childNodes: if b.nodeType == node.ELEMENT_NODE: if b.localName == "ExpectedState": self.ExpectedState = GetNodeTextData(b) Log("ExpectedState: " + self.ExpectedState) elif b.localName == "LBProbePorts": for c in b.childNodes: if c.nodeType == node.ELEMENT_NODE and c.localName == "Port": self.LoadBalancerProbePort = int(GetNodeTextData(c)) elif a.localName == "Container": for b in a.childNodes: if b.nodeType == node.ELEMENT_NODE: if b.localName == "ContainerId": self.ContainerId = GetNodeTextData(b) Log("ContainerId: " + self.ContainerId) elif b.localName == "RoleInstanceList": for c in b.childNodes: if c.localName == "RoleInstance": for d in c.childNodes: if d.nodeType == node.ELEMENT_NODE: if d.localName == "InstanceId": self.RoleInstanceId = GetNodeTextData(d) Log("RoleInstanceId: " + self.RoleInstanceId) elif d.localName == "State": pass elif d.localName == "Configuration": for e in d.childNodes: if e.nodeType == node.ELEMENT_NODE: LogIfVerbose(e.localName) if e.localName == "HostingEnvironmentConfig": self.HostingEnvironmentConfigUrl = GetNodeTextData(e) LogIfVerbose("HostingEnvironmentConfigUrl:" + self.HostingEnvironmentConfigUrl) self.HostingEnvironmentConfigXml = self.HttpGetWithHeaders(self.HostingEnvironmentConfigUrl) self.HostingEnvironmentConfig = HostingEnvironmentConfig().Parse(self.HostingEnvironmentConfigXml) elif e.localName == "SharedConfig": self.SharedConfigUrl = GetNodeTextData(e) LogIfVerbose("SharedConfigUrl:" + self.SharedConfigUrl) self.SharedConfigXml = self.HttpGetWithHeaders(self.SharedConfigUrl) self.SharedConfig = SharedConfig().Parse(self.SharedConfigXml) self.SharedConfig.Save() elif e.localName == "ExtensionsConfig": self.ExtensionsConfigUrl = GetNodeTextData(e) LogIfVerbose("ExtensionsConfigUrl:" + self.ExtensionsConfigUrl) self.ExtensionsConfigXml = self.HttpGetWithHeaders(self.ExtensionsConfigUrl) elif e.localName == "Certificates": self.CertificatesUrl = GetNodeTextData(e) LogIfVerbose("CertificatesUrl:" + self.CertificatesUrl) self.CertificatesXml = self.HttpSecureGetWithHeaders(self.CertificatesUrl, self.TransportCert) self.Certificates = Certificates().Parse(self.CertificatesXml) if self.Incarnation == None: Error("GoalState.Parse: Incarnation missing") return None if self.ExpectedState == None: Error("GoalState.Parse: ExpectedState missing") return None if self.RoleInstanceId == None: Error("GoalState.Parse: RoleInstanceId missing") return None if self.ContainerId == None: Error("GoalState.Parse: ContainerId missing") return None SetFileContents("GoalState." 
+ self.Incarnation + ".xml", xmlText) return self def Process(self): """ Calls HostingEnvironmentConfig.Process() """ LogIfVerbose("Process goalstate") self.HostingEnvironmentConfig.Process() self.SharedConfig.Process() class OvfEnv(object): """ Read, and process provisioning info from provisioning file OvfEnv.xml """ # # # # # 1.0 # # LinuxProvisioningConfiguration # HostName # UserName # UserPassword # false # # # # EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 # $HOME/UserName/.ssh/authorized_keys # # # # # EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 # $HOME/UserName/.ssh/id_rsa # # # # # # # def __init__(self): self.reinitialize() def reinitialize(self): """ Reset members. """ self.WaNs = "http://schemas.microsoft.com/windowsazure" self.OvfNs = "http://schemas.dmtf.org/ovf/environment/1" self.MajorVersion = 1 self.MinorVersion = 0 self.ComputerName = None self.AdminPassword = None self.UserName = None self.UserPassword = None self.CustomData = None self.DisableSshPasswordAuthentication = True self.SshPublicKeys = [] self.SshKeyPairs = [] def Parse(self, xmlText, isDeprovision = False): """ Parse xml tree, retreiving user and ssh key information. Return self. """ self.reinitialize() LogIfVerbose(re.sub(".*?<", "*<", xmlText)) dom = xml.dom.minidom.parseString(xmlText) if len(dom.getElementsByTagNameNS(self.OvfNs, "Environment")) != 1: Error("Unable to parse OVF XML.") section = None newer = False for p in dom.getElementsByTagNameNS(self.WaNs, "ProvisioningSection"): for n in p.childNodes: if n.localName == "Version": verparts = GetNodeTextData(n).split('.') major = int(verparts[0]) minor = int(verparts[1]) if major > self.MajorVersion: newer = True if major != self.MajorVersion: break if minor > self.MinorVersion: newer = True section = p if newer == True: Warn("Newer provisioning configuration detected. 
Please consider updating waagent.") if section == None: Error("Could not find ProvisioningSection with major version=" + str(self.MajorVersion)) return None self.ComputerName = GetNodeTextData(section.getElementsByTagNameNS(self.WaNs, "HostName")[0]) self.UserName = GetNodeTextData(section.getElementsByTagNameNS(self.WaNs, "UserName")[0]) if isDeprovision == True: return self try: self.UserPassword = GetNodeTextData(section.getElementsByTagNameNS(self.WaNs, "UserPassword")[0]) except: pass CDSection=None try: CDSection=section.getElementsByTagNameNS(self.WaNs, "CustomData") if len(CDSection) > 0 : self.CustomData=GetNodeTextData(CDSection[0]) if len(self.CustomData)>0: SetFileContents(LibDir + '/CustomData', bytearray(MyDistro.translateCustomData(self.CustomData), 'utf-8')) Log('Wrote ' + LibDir + '/CustomData') else : Error(' contains no data!') except Exception, e: Error( str(e)+' occured creating ' + LibDir + '/CustomData') disableSshPass = section.getElementsByTagNameNS(self.WaNs, "DisableSshPasswordAuthentication") if len(disableSshPass) != 0: self.DisableSshPasswordAuthentication = (GetNodeTextData(disableSshPass[0]).lower() == "true") for pkey in section.getElementsByTagNameNS(self.WaNs, "PublicKey"): LogIfVerbose(repr(pkey)) fp = None path = None for c in pkey.childNodes: if c.localName == "Fingerprint": fp = GetNodeTextData(c).upper() LogIfVerbose(fp) if c.localName == "Path": path = GetNodeTextData(c) LogIfVerbose(path) self.SshPublicKeys += [[fp, path]] for keyp in section.getElementsByTagNameNS(self.WaNs, "KeyPair"): fp = None path = None LogIfVerbose(repr(keyp)) for c in keyp.childNodes: if c.localName == "Fingerprint": fp = GetNodeTextData(c).upper() LogIfVerbose(fp) if c.localName == "Path": path = GetNodeTextData(c) LogIfVerbose(path) self.SshKeyPairs += [[fp, path]] return self def PrepareDir(self, filepath): """ Create home dir for self.UserName Change owner and return path. """ home = MyDistro.GetHome() # Expand HOME variable if present in path path = os.path.normpath(filepath.replace("$HOME", home)) if (path.startswith("/") == False) or (path.endswith("/") == True): return None dir = path.rsplit('/', 1)[0] if dir != "": CreateDir(dir, "root", 0700) if path.startswith(os.path.normpath(home + "/" + self.UserName + "/")): ChangeOwner(dir, self.UserName) return path def NumberToBytes(self, i): """ Pack number into bytes. Retun as string. """ result = [] while i: result.append(chr(i & 0xFF)) i >>= 8 result.reverse() return ''.join(result) def BitsToString(self, a): """ Return string representation of bits in a. """ index=7 s = "" c = 0 for bit in a: c = c | (bit << index) index = index - 1 if index == -1: s = s + struct.pack('>B', c) c = 0 index = 7 return s def OpensslToSsh(self, file): """ Return base-64 encoded key appropriate for ssh. """ from pyasn1.codec.der import decoder as der_decoder try: f = open(file).read().replace('\n','').split("KEY-----")[1].split('-')[0] k=der_decoder.decode(self.BitsToString(der_decoder.decode(base64.b64decode(f))[0][1]))[0] n=k[0] e=k[1] keydata="" keydata += struct.pack('>I',len("ssh-rsa")) keydata += "ssh-rsa" keydata += struct.pack('>I',len(self.NumberToBytes(e))) keydata += self.NumberToBytes(e) keydata += struct.pack('>I',len(self.NumberToBytes(n)) + 1) keydata += "\0" keydata += self.NumberToBytes(n) except Exception, e: print("OpensslToSsh: Exception " + str(e)) return None return "ssh-rsa " + base64.b64encode(keydata) + "\n" def Process(self): """ Process all certificate and key info. DisableSshPasswordAuthentication if configured. 
CreateAccount(user) Wait for WaAgent.EnvMonitor.IsHostnamePublished(). Restart ssh service. """ error = None if self.ComputerName == None : return "Error: Hostname missing" error=WaAgent.EnvMonitor.SetHostName(self.ComputerName) if error: return error if self.DisableSshPasswordAuthentication: filepath = "/etc/ssh/sshd_config" # Disable RFC 4252 and RFC 4256 authentication schemes. ReplaceFileContentsAtomic(filepath, "\n".join(filter(lambda a: not (a.startswith("PasswordAuthentication") or a.startswith("ChallengeResponseAuthentication")), GetFileContents(filepath).split('\n'))) + "\nPasswordAuthentication no\nChallengeResponseAuthentication no\n") Log("Disabled SSH password-based authentication methods.") if self.AdminPassword != None: MyDistro.changePass('root',self.AdminPassword) if self.UserName != None: error = MyDistro.CreateAccount(self.UserName, self.UserPassword, None, None) sel = MyDistro.isSelinuxRunning() if sel : MyDistro.setSelinuxEnforce(0) home = MyDistro.GetHome() for pkey in self.SshPublicKeys: Log("Deploy public key:{0}".format(pkey[0])) if not os.path.isfile(pkey[0] + ".crt"): Error("PublicKey not found: " + pkey[0]) error = "Failed to deploy public key (0x09)." continue path = self.PrepareDir(pkey[1]) if path == None: Error("Invalid path: " + pkey[1] + " for PublicKey: " + pkey[0]) error = "Invalid path for public key (0x03)." continue Run(Openssl + " x509 -in " + pkey[0] + ".crt -noout -pubkey > " + pkey[0] + ".pub") MyDistro.setSelinuxContext(pkey[0] + '.pub','unconfined_u:object_r:ssh_home_t:s0') MyDistro.sshDeployPublicKey(pkey[0] + '.pub',path) MyDistro.setSelinuxContext(path,'unconfined_u:object_r:ssh_home_t:s0') if path.startswith(os.path.normpath(home + "/" + self.UserName + "/")): ChangeOwner(path, self.UserName) for keyp in self.SshKeyPairs: Log("Deploy key pair:{0}".format(keyp[0])) if not os.path.isfile(keyp[0] + ".prv"): Error("KeyPair not found: " + keyp[0]) error = "Failed to deploy key pair (0x0A)." continue path = self.PrepareDir(keyp[1]) if path == None: Error("Invalid path: " + keyp[1] + " for KeyPair: " + keyp[0]) error = "Invalid path for key pair (0x05)." 
continue SetFileContents(path, GetFileContents(keyp[0] + ".prv")) os.chmod(path, 0600) Run("ssh-keygen -y -f " + keyp[0] + ".prv > " + path + ".pub") MyDistro.setSelinuxContext(path,'unconfined_u:object_r:ssh_home_t:s0') MyDistro.setSelinuxContext(path + '.pub','unconfined_u:object_r:ssh_home_t:s0') if path.startswith(os.path.normpath(home + "/" + self.UserName + "/")): ChangeOwner(path, self.UserName) ChangeOwner(path + ".pub", self.UserName) if sel : MyDistro.setSelinuxEnforce(1) while not WaAgent.EnvMonitor.IsHostnamePublished(): time.sleep(1) MyDistro.restartSshService() return error class WALAEvent(object): def __init__(self): self.providerId="" self.eventId=1 self.OpcodeName="" self.KeywordName="" self.TaskName="" self.TenantName="" self.RoleName="" self.RoleInstanceName="" self.ContainerId="" self.ExecutionMode="IAAS" self.OSVersion="" self.GAVersion="" self.RAM=0 self.Processors=0 def ToXml(self): strEventid=u''.format(self.eventId) strProviderid=u''.format(self.providerId) strRecordFormat = u'' strRecordNoQuoteFormat = u'' strMtStr=u'mt:wstr' strMtUInt64=u'mt:uint64' strMtBool=u'mt:bool' strMtFloat=u'mt:float64' strEventsData=u"" for attName in self.__dict__: if attName in ["eventId","filedCount","providerId"]: continue attValue = self.__dict__[attName] if type(attValue) is int: strEventsData+=strRecordFormat.format(attName,attValue,strMtUInt64) continue if type(attValue) is str: attValue = xml.sax.saxutils.quoteattr(attValue) strEventsData+=strRecordNoQuoteFormat.format(attName,attValue,strMtStr) continue if str(type(attValue)).count("'unicode'") >0 : attValue = xml.sax.saxutils.quoteattr(attValue) strEventsData+=strRecordNoQuoteFormat.format(attName,attValue,strMtStr) continue if type(attValue) is bool: strEventsData+=strRecordFormat.format(attName,attValue,strMtBool) continue if type(attValue) is float: strEventsData+=strRecordFormat.format(attName,attValue,strMtFloat) continue Log("Warning: property "+attName+":"+str(type(attValue))+":type"+str(type(attValue))+"Can't convert to events data:"+":type not supported") return u"{0}{1}{2}".format(strProviderid,strEventid,strEventsData) def Save(self): eventfolder = LibDir+"/events" if not os.path.exists(eventfolder): os.mkdir(eventfolder) os.chmod(eventfolder,0700) if len(os.listdir(eventfolder)) > 1000: raise Exception("WriteToFolder:Too many file under "+eventfolder+" exit") filename = os.path.join(eventfolder,str(int(time.time()*1000000))) with open(filename+".tmp",'wb+') as hfile: hfile.write(self.ToXml().encode("utf-8")) os.rename(filename+".tmp",filename+".tld") class WALAEventOperation: HeartBeat="HeartBeat" Provision = "Provision" Install = "Install" UnIsntall = "UnInstall" Disable = "Disable" Enable = "Enable" Download = "Download" Upgrade = "Upgrade" Update = "Update" def AddExtensionEvent(name,op,isSuccess,duration=0,version="1.0",message="",type="",isInternal=False): event = ExtensionEvent() event.Name=name event.Version=version event.IsInternal=isInternal event.Operation=op event.OperationSuccess=isSuccess event.Message=message event.Duration=duration event.ExtensionType=type try: event.Save() except: Error("Error "+traceback.format_exc()) class ExtensionEvent(WALAEvent): def __init__(self): WALAEvent.__init__(self) self.eventId=1 self.providerId="69B669B9-4AF8-4C50-BDC4-6006FA76E975" self.Name="" self.Version="" self.IsInternal=False self.Operation="" self.OperationSuccess=True self.ExtensionType="" self.Message="" self.Duration=0 class WALAEventMonitor(WALAEvent): def __init__(self,postMethod): 
WALAEvent.__init__(self) self.post = postMethod self.sysInfo={} self.eventdir = LibDir+"/events" self.issysteminfoinitilized = False def StartEventsLoop(self): eventThread = threading.Thread(target = self.EventsLoop) eventThread.setDaemon(True) eventThread.start() def EventsLoop(self): LastReportHeartBeatTime = datetime.datetime.min try: while True: if (datetime.datetime.now()-LastReportHeartBeatTime) > \ datetime.timedelta(minutes=30): LastReportHeartBeatTime = datetime.datetime.now() AddExtensionEvent(op=WALAEventOperation.HeartBeat,name="WALA",isSuccess=True) self.postNumbersInOneLoop=0 self.CollectAndSendWALAEvents() time.sleep(60) except: Error("Exception in events loop:"+traceback.format_exc()) def SendEvent(self,providerid,events): dataFormat = u'{1}'\ '' data = dataFormat.format(providerid,events) self.post("/machine/?comp=telemetrydata", data) def CollectAndSendWALAEvents(self): if not os.path.exists(self.eventdir): return #Throtting, can't send more than 3 events in 15 seconds eventSendNumber=0 eventFiles = os.listdir(self.eventdir) events = {} for file in eventFiles: if not file.endswith(".tld"): continue with open(os.path.join(self.eventdir,file),"rb") as hfile: #if fail to open or delete the file, throw exception xmlStr = hfile.read().decode("utf-8",'ignore') os.remove(os.path.join(self.eventdir,file)) params="" eventid="" providerid="" #if exception happen during process an event, catch it and continue try: xmlStr = self.AddSystemInfo(xmlStr) for node in xml.dom.minidom.parseString(xmlStr.encode("utf-8")).childNodes[0].childNodes: if node.tagName == "Param": params+=node.toxml() if node.tagName == "Event": eventid=node.getAttribute("id") if node.tagName == "Provider": providerid = node.getAttribute("id") except: Error(traceback.format_exc()) continue if len(params)==0 or len(eventid)==0 or len(providerid)==0: Error("Empty filed in params:"+params+" event id:"+eventid+" provider id:"+providerid) continue eventstr = u''.format(eventid,params) if not events.get(providerid): events[providerid]="" if len(events[providerid]) >0 and len(events.get(providerid)+eventstr)>= 63*1024: eventSendNumber+=1 self.SendEvent(providerid,events.get(providerid)) if eventSendNumber %3 ==0: time.sleep(15) events[providerid]="" if len(eventstr) >= 63*1024: Error("Signle event too large abort "+eventstr[:300]) continue events[providerid]=events.get(providerid)+eventstr for key in events.keys(): if len(events[key]) > 0: eventSendNumber+=1 self.SendEvent(key,events[key]) if eventSendNumber%3 == 0: time.sleep(15) def AddSystemInfo(self,eventData): if not self.issysteminfoinitilized: self.issysteminfoinitilized=True try: self.sysInfo["OSVersion"]=platform.system()+":"+"-".join(DistInfo(1))+":"+platform.release() self.sysInfo["GAVersion"]=GuestAgentVersion self.sysInfo["RAM"]=MyDistro.getTotalMemory() self.sysInfo["Processors"]=MyDistro.getProcessorCores() sharedConfig = xml.dom.minidom.parse("/var/lib/waagent/SharedConfig.xml").childNodes[0] hostEnvConfig= xml.dom.minidom.parse("/var/lib/waagent/HostingEnvironmentConfig.xml").childNodes[0] gfiles = RunGetOutput("ls -t /var/lib/waagent/GoalState.*.xml")[1] goalStateConfi = xml.dom.minidom.parse(gfiles.split("\n")[0]).childNodes[0] self.sysInfo["TenantName"]=hostEnvConfig.getElementsByTagName("Deployment")[0].getAttribute("name") self.sysInfo["RoleName"]=hostEnvConfig.getElementsByTagName("Role")[0].getAttribute("name") self.sysInfo["RoleInstanceName"]=sharedConfig.getElementsByTagName("Instance")[0].getAttribute("id") 
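# ContainerId is taken from the newest cached goal state: the 'ls -t' above
# sorts GoalState.*.xml by modification time and only the first entry is parsed.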
self.sysInfo["ContainerId"]=goalStateConfi.getElementsByTagName("ContainerId")[0].childNodes[0].nodeValue except: Error(traceback.format_exc()) eventObject = xml.dom.minidom.parseString(eventData.encode("utf-8")).childNodes[0] for node in eventObject.childNodes: if node.tagName == "Param": name = node.getAttribute("Name") if self.sysInfo.get(name): node.setAttribute("Value",xml.sax.saxutils.escape(str(self.sysInfo[name]))) return eventObject.toxml() class Agent(Util): """ Primary object container for the provisioning process. """ def __init__(self): self.GoalState = None self.Endpoint = None self.LoadBalancerProbeServer = None self.HealthReportCounter = 0 self.TransportCert = "" self.EnvMonitor = None self.SendData = None self.DhcpResponse = None def CheckVersions(self): """ Query endpoint server for wire protocol version. Fail if our desired protocol version is not seen. """ # # # # 2010-12-15 # # # 2010-12-15 # 2010-28-10 # # global ProtocolVersion protocolVersionSeen = False node = xml.dom.minidom.parseString(self.HttpGetWithoutHeaders("/?comp=versions")).childNodes[0] if node.localName != "Versions": Error("CheckVersions: root not Versions") return False for a in node.childNodes: if a.nodeType == node.ELEMENT_NODE and a.localName == "Supported": for b in a.childNodes: if b.nodeType == node.ELEMENT_NODE and b.localName == "Version": v = GetNodeTextData(b) LogIfVerbose("Fabric supported wire protocol version: " + v) if v == ProtocolVersion: protocolVersionSeen = True if a.nodeType == node.ELEMENT_NODE and a.localName == "Preferred": v = GetNodeTextData(a.getElementsByTagName("Version")[0]) Log("Fabric preferred wire protocol version: " + v) if not protocolVersionSeen: Warn("Agent supported wire protocol version: " + ProtocolVersion + " was not advertised by Fabric.") else: Log("Negotiated wire protocol version: " + ProtocolVersion) return True def Unpack(self, buffer, offset, range): """ Unpack bytes into python values. """ result = 0 for i in range: result = (result << 8) | Ord(buffer[offset + i]) return result def UnpackLittleEndian(self, buffer, offset, length): """ Unpack little endian bytes into python values. """ return self.Unpack(buffer, offset, list(range(length - 1, -1, -1))) def UnpackBigEndian(self, buffer, offset, length): """ Unpack big endian bytes into python values. """ return self.Unpack(buffer, offset, list(range(0, length))) def HexDump3(self, buffer, offset, length): """ Dump range of buffer in formatted hex. """ return ''.join(['%02X' % Ord(char) for char in buffer[offset:offset + length]]) def HexDump2(self, buffer): """ Dump buffer in formatted hex. """ return self.HexDump3(buffer, 0, len(buffer)) def BuildDhcpRequest(self): """ Build DHCP request string. 
""" # # typedef struct _DHCP { # UINT8 Opcode; /* op: BOOTREQUEST or BOOTREPLY */ # UINT8 HardwareAddressType; /* htype: ethernet */ # UINT8 HardwareAddressLength; /* hlen: 6 (48 bit mac address) */ # UINT8 Hops; /* hops: 0 */ # UINT8 TransactionID[4]; /* xid: random */ # UINT8 Seconds[2]; /* secs: 0 */ # UINT8 Flags[2]; /* flags: 0 or 0x8000 for broadcast */ # UINT8 ClientIpAddress[4]; /* ciaddr: 0 */ # UINT8 YourIpAddress[4]; /* yiaddr: 0 */ # UINT8 ServerIpAddress[4]; /* siaddr: 0 */ # UINT8 RelayAgentIpAddress[4]; /* giaddr: 0 */ # UINT8 ClientHardwareAddress[16]; /* chaddr: 6 byte ethernet MAC address */ # UINT8 ServerName[64]; /* sname: 0 */ # UINT8 BootFileName[128]; /* file: 0 */ # UINT8 MagicCookie[4]; /* 99 130 83 99 */ # /* 0x63 0x82 0x53 0x63 */ # /* options -- hard code ours */ # # UINT8 MessageTypeCode; /* 53 */ # UINT8 MessageTypeLength; /* 1 */ # UINT8 MessageType; /* 1 for DISCOVER */ # UINT8 End; /* 255 */ # } DHCP; # # tuple of 244 zeros # (struct.pack_into would be good here, but requires Python 2.5) sendData = [0] * 244 transactionID = os.urandom(4) macAddress = MyDistro.GetMacAddress() # Opcode = 1 # HardwareAddressType = 1 (ethernet/MAC) # HardwareAddressLength = 6 (ethernet/MAC/48 bits) for a in range(0, 3): sendData[a] = [1, 1, 6][a] # fill in transaction id (random number to ensure response matches request) for a in range(0, 4): sendData[4 + a] = Ord(transactionID[a]) LogIfVerbose("BuildDhcpRequest: transactionId:%s,%04X" % (self.HexDump2(transactionID), self.UnpackBigEndian(sendData, 4, 4))) # fill in ClientHardwareAddress for a in range(0, 6): sendData[0x1C + a] = Ord(macAddress[a]) # DHCP Magic Cookie: 99, 130, 83, 99 # MessageTypeCode = 53 DHCP Message Type # MessageTypeLength = 1 # MessageType = DHCPDISCOVER # End = 255 DHCP_END for a in range(0, 8): sendData[0xEC + a] = [99, 130, 83, 99, 53, 1, 1, 255][a] return array.array("B", sendData) def IntegerToIpAddressV4String(self, a): """ Build DHCP request string. """ return "%u.%u.%u.%u" % ((a >> 24) & 0xFF, (a >> 16) & 0xFF, (a >> 8) & 0xFF, a & 0xFF) def RouteAdd(self, net, mask, gateway): """ Add specified route using /sbin/route add -net. """ net = self.IntegerToIpAddressV4String(net) mask = self.IntegerToIpAddressV4String(mask) gateway = self.IntegerToIpAddressV4String(gateway) Log("Route add: net={0}, mask={1}, gateway={2}".format(net, mask, gateway)) MyDistro.routeAdd(net, mask, gateway) def SetDefaultGateway(self, gateway): """ Set default gateway """ gateway = self.IntegerToIpAddressV4String(gateway) Log("Set default gateway: {0}".format(gateway)) MyDistro.setDefaultGateway(gateway) def HandleDhcpResponse(self, sendData, receiveBuffer): """ Parse DHCP response: Set default gateway. Set default routes. Retrieve endpoint server. Returns endpoint server or None on error. 
""" LogIfVerbose("HandleDhcpResponse") bytesReceived = len(receiveBuffer) if bytesReceived < 0xF6: Error("HandleDhcpResponse: Too few bytes received " + str(bytesReceived)) return None LogIfVerbose("BytesReceived: " + hex(bytesReceived)) LogWithPrefixIfVerbose("DHCP response:", HexDump(receiveBuffer, bytesReceived)) # check transactionId, cookie, MAC address # cookie should never mismatch # transactionId and MAC address may mismatch if we see a response meant from another machine for offsets in [list(range(4, 4 + 4)), list(range(0x1C, 0x1C + 6)), list(range(0xEC, 0xEC + 4))]: for offset in offsets: sentByte = Ord(sendData[offset]) receivedByte = Ord(receiveBuffer[offset]) if sentByte != receivedByte: LogIfVerbose("HandleDhcpResponse: sent cookie:" + self.HexDump3(sendData, 0xEC, 4)) LogIfVerbose("HandleDhcpResponse: rcvd cookie:" + self.HexDump3(receiveBuffer, 0xEC, 4)) LogIfVerbose("HandleDhcpResponse: sent transactionID:" + self.HexDump3(sendData, 4, 4)) LogIfVerbose("HandleDhcpResponse: rcvd transactionID:" + self.HexDump3(receiveBuffer, 4, 4)) LogIfVerbose("HandleDhcpResponse: sent ClientHardwareAddress:" + self.HexDump3(sendData, 0x1C, 6)) LogIfVerbose("HandleDhcpResponse: rcvd ClientHardwareAddress:" + self.HexDump3(receiveBuffer, 0x1C, 6)) LogIfVerbose("HandleDhcpResponse: transactionId, cookie, or MAC address mismatch") return None endpoint = None # # Walk all the returned options, parsing out what we need, ignoring the others. # We need the custom option 245 to find the the endpoint we talk to, # as well as, to handle some Linux DHCP client incompatibilities, # options 3 for default gateway and 249 for routes. And 255 is end. # i = 0xF0 # offset to first option while i < bytesReceived: option = Ord(receiveBuffer[i]) length = 0 if (i + 1) < bytesReceived: length = Ord(receiveBuffer[i + 1]) LogIfVerbose("DHCP option " + hex(option) + " at offset:" + hex(i) + " with length:" + hex(length)) if option == 255: LogIfVerbose("DHCP packet ended at offset " + hex(i)) break elif option == 249: # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx LogIfVerbose("Routes at offset:" + hex(i) + " with length:" + hex(length)) if length < 5: Error("Data too small for option " + str(option)) j = i + 2 while j < (i + length + 2): maskLengthBits = Ord(receiveBuffer[j]) maskLengthBytes = (((maskLengthBits + 7) & ~7) >> 3) mask = 0xFFFFFFFF & (0xFFFFFFFF << (32 - maskLengthBits)) j += 1 net = self.UnpackBigEndian(receiveBuffer, j, maskLengthBytes) net <<= (32 - maskLengthBytes * 8) net &= mask j += maskLengthBytes gateway = self.UnpackBigEndian(receiveBuffer, j, 4) j += 4 self.RouteAdd(net, mask, gateway) if j != (i + length + 2): Error("HandleDhcpResponse: Unable to parse routes") elif option == 3 or option == 245: if i + 5 < bytesReceived: if length != 4: Error("HandleDhcpResponse: Endpoint or Default Gateway not 4 bytes") return None gateway = self.UnpackBigEndian(receiveBuffer, i + 2, 4) IpAddress = self.IntegerToIpAddressV4String(gateway) if option == 3: self.SetDefaultGateway(gateway) name = "DefaultGateway" else: endpoint = IpAddress name = "Azure wire protocol endpoint" LogIfVerbose(name + ": " + IpAddress + " at " + hex(i)) else: Error("HandleDhcpResponse: Data too small for option " + str(option)) else: LogIfVerbose("Skipping DHCP option " + hex(option) + " at " + hex(i) + " with length " + hex(length)) i += length + 2 return endpoint def DoDhcpWork(self): """ Discover the wire server via DHCP option 245. And workaround incompatibility with Azure DHCP servers. 
""" ShortSleep = False # Sleep 1 second before retrying DHCP queries. ifname=None sleepDurations = [0, 10, 30, 60, 60] maxRetry = len(sleepDurations) lastTry = (maxRetry - 1) for retry in range(0, maxRetry): try: #Open DHCP port if iptables is enabled. Run("iptables -D INPUT -p udp --dport 68 -j ACCEPT",chk_err=False) # We supress error logging on error. Run("iptables -I INPUT -p udp --dport 68 -j ACCEPT",chk_err=False) # We supress error logging on error. strRetry = str(retry) prefix = "DoDhcpWork: try=" + strRetry LogIfVerbose(prefix) sendData = self.BuildDhcpRequest() LogWithPrefixIfVerbose("DHCP request:", HexDump(sendData, len(sendData))) sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) missingDefaultRoute = True try: if DistInfo()[0] == 'FreeBSD': missingDefaultRoute = True else: routes = RunGetOutput("route -n")[1] for line in routes.split('\n'): if line.startswith("0.0.0.0 ") or line.startswith("default "): missingDefaultRoute = False except: pass if missingDefaultRoute: # This is required because sending after binding to 0.0.0.0 fails with # network unreachable when the default gateway is not set up. ifname=MyDistro.GetInterfaceName() Log("DoDhcpWork: Missing default route - adding broadcast route for DHCP.") if DistInfo()[0] == 'FreeBSD': Run("route add -net 255.255.255.255 -iface " + ifname,chk_err=False) else: Run("route add 255.255.255.255 dev " + ifname,chk_err=False) if MyDistro.isDHCPEnabled(): MyDistro.stopDHCP() sock.bind(("0.0.0.0", 68)) sock.sendto(sendData, ("", 67)) sock.settimeout(10) Log("DoDhcpWork: Setting socket.timeout=10, entering recv") receiveBuffer = sock.recv(1024) endpoint = self.HandleDhcpResponse(sendData, receiveBuffer) if endpoint == None: LogIfVerbose("DoDhcpWork: No endpoint found") if endpoint != None or retry == lastTry: if endpoint != None: self.SendData = sendData self.DhcpResponse = receiveBuffer if retry == lastTry: LogIfVerbose("DoDhcpWork: try=" + strRetry) return endpoint sleepDuration = [sleepDurations[retry % len(sleepDurations)], 1][ShortSleep] LogIfVerbose("DoDhcpWork: sleep=" + str(sleepDuration)) time.sleep(sleepDuration) except Exception, e: ErrorWithPrefix(prefix, str(e)) ErrorWithPrefix(prefix, traceback.format_exc()) finally: sock.close() if missingDefaultRoute: #We added this route - delete it Log("DoDhcpWork: Removing broadcast route for DHCP.") if DistInfo()[0] == 'FreeBSD': Run("route del -net 255.255.255.255 -iface " + ifname,chk_err=False) else: Run("route del 255.255.255.255 dev " + ifname,chk_err=False) # We supress error logging on error. if MyDistro.isDHCPEnabled(): MyDistro.startDHCP() return None def UpdateAndPublishHostName(self, name): """ Set hostname locally and publish to iDNS """ Log("Setting host name: " + name) MyDistro.publishHostname(name) ethernetInterface = MyDistro.GetInterfaceName() MyDistro.RestartInterface(ethernetInterface) self.RestoreRoutes() def RestoreRoutes(self): """ If there is a DHCP response, then call HandleDhcpResponse. """ if self.SendData != None and self.DhcpResponse != None: self.HandleDhcpResponse(self.SendData, self.DhcpResponse) def UpdateGoalState(self): """ Retreive goal state information from endpoint server. Parse xml and initialize Agent.GoalState object. Return object or None on error. 
""" goalStateXml = None maxRetry = 9 log = NoLog for retry in range(1, maxRetry + 1): strRetry = str(retry) log("retry UpdateGoalState,retry=" + strRetry) goalStateXml = self.HttpGetWithHeaders("/machine/?comp=goalstate") if goalStateXml != None: break log = Log time.sleep(retry) if not goalStateXml: Error("UpdateGoalState failed.") return Log("Retrieved GoalState from Azure Fabric.") self.GoalState = GoalState(self).Parse(goalStateXml) return self.GoalState def ReportReady(self): """ Send health report 'Ready' to server. This signals the fabric that our provosion is completed, and the host is ready for operation. """ counter = (self.HealthReportCounter + 1) % 1000000 self.HealthReportCounter = counter healthReport = ("" + self.GoalState.Incarnation + "" + self.GoalState.ContainerId + "" + self.GoalState.RoleInstanceId + "Ready") a = self.HttpPostWithHeaders("/machine?comp=health", healthReport) if a != None: return a.getheader("x-ms-latest-goal-state-incarnation-number") return None def ReportNotReady(self, status, desc): """ Send health report 'Provisioning' to server. This signals the fabric that our provosion is starting. """ healthReport = ("" + self.GoalState.Incarnation + "" + self.GoalState.ContainerId + "" + self.GoalState.RoleInstanceId + "NotReady" + "
" + status + "" + desc + "
" + "
") a = self.HttpPostWithHeaders("/machine?comp=health", healthReport) if a != None: return a.getheader("x-ms-latest-goal-state-incarnation-number") return None def ReportRoleProperties(self, thumbprint): """ Send roleProperties and thumbprint to server. """ roleProperties = ("" + "" + self.GoalState.ContainerId + "" + "" + "" + self.GoalState.RoleInstanceId + "" + "" + "") a = self.HttpPostWithHeaders("/machine?comp=roleProperties", roleProperties) Log("Posted Role Properties. CertificateThumbprint=" + thumbprint) return a def LoadBalancerProbeServer_Shutdown(self): """ Shutdown the LoadBalancerProbeServer. """ if self.LoadBalancerProbeServer != None: self.LoadBalancerProbeServer.shutdown() self.LoadBalancerProbeServer = None def GenerateTransportCert(self): """ Create ssl certificate for https communication with endpoint server. """ Run(Openssl + " req -x509 -nodes -subj /CN=LinuxTransport -days 32768 -newkey rsa:2048 -keyout TransportPrivate.pem -out TransportCert.pem") cert = "" for line in GetFileContents("TransportCert.pem").split('\n'): if not "CERTIFICATE" in line: cert += line.rstrip() return cert def DoVmmStartup(self): """ Spawn the VMM startup script. """ Log("Starting Microsoft System Center VMM Initialization Process") pid = subprocess.Popen(["/bin/bash","/mnt/cdrom/secure/"+VMM_STARTUP_SCRIPT_NAME,"-p /mnt/cdrom/secure/ "]).pid time.sleep(5) sys.exit(0) def TryUnloadAtapiix(self): """ If global modloaded is True, then we loaded the ata_piix kernel module, unload it. """ if modloaded: Run("rmmod ata_piix.ko",chk_err=False) Log("Unloaded ata_piix.ko driver for ATAPI CD-ROM") def TryLoadAtapiix(self): """ Load the ata_piix kernel module if it exists. If successful, set global modloaded to True. If unable to load module leave modloaded False. """ global modloaded modloaded=False retcode,krn=RunGetOutput('uname -r') krn_pth='/lib/modules/'+krn.strip('\n')+'/kernel/drivers/ata/ata_piix.ko' if Run("lsmod | grep ata_piix",chk_err=False) == 0 : Log("Module " + krn_pth + " driver for ATAPI CD-ROM is already present.") return 0 if retcode: Error("Unable to provision: Failed to call uname -r") return "Unable to provision: Failed to call uname" if os.path.isfile(krn_pth): retcode,output=RunGetOutput("insmod " + krn_pth,chk_err=False) else: Log("Module " + krn_pth + " driver for ATAPI CD-ROM does not exist.") return 1 if retcode != 0: Error('Error calling insmod for '+ krn_pth + ' driver for ATAPI CD-ROM') return retcode time.sleep(1) # check 3 times if the mod is loaded for i in range(3): if Run('lsmod | grep ata_piix'): continue else : modloaded=True break if not modloaded: Error('Unable to load '+ krn_pth + ' driver for ATAPI CD-ROM') return 1 Log("Loaded " + krn_pth + " driver for ATAPI CD-ROM") # we have succeeded loading the ata_piix mod if it can be done. def SearchForVMMStartup(self): """ Search for a DVD/CDROM containing VMM's VMM_CONFIG_FILE_NAME. Call TryLoadAtapiix in case we must load the ata_piix module first. If VMM_CONFIG_FILE_NAME is found, call DoVmmStartup. Else, return to Azure Provisioning process. 
""" self.TryLoadAtapiix() if os.path.exists('/mnt/cdrom/secure') == False: CreateDir("/mnt/cdrom/secure", "root", 0700) mounted=False for dvds in [re.match(r'(sr[0-9]|hd[c-z]|cdrom[0-9]|cd[0-9]?)',x) for x in os.listdir('/dev/')]: if dvds == None: continue dvd = '/dev/'+dvds.group(0) if Run("LC_ALL=C fdisk -l " + dvd + " | grep Disk",chk_err=False): continue # Not mountable else: for retry in range(1,6): retcode,output=RunGetOutput("mount -v " + dvd + " /mnt/cdrom/secure") Log(output[:-1]) if retcode == 0: Log("mount succeeded on attempt #" + str(retry) ) mounted=True break if 'is already mounted on /mnt/cdrom/secure' in output: Log("Device " + dvd + " is already mounted on /mnt/cdrom/secure." + str(retry) ) mounted=True break Log("mount failed on attempt #" + str(retry) ) Log("mount loop sleeping 5...") time.sleep(5) if not mounted: # unable to mount continue if not os.path.isfile("/mnt/cdrom/secure/"+VMM_CONFIG_FILE_NAME): #nope - mount the next drive if mounted: Run("umount "+dvd,chk_err=False) mounted=False continue else : # it is the vmm startup self.DoVmmStartup() Log("VMM Init script not found. Provisioning for Azure") return def Provision(self): """ Responible for: Regenerate ssh keys, Mount, read, and parse ovfenv.xml from provisioning dvd rom Process the ovfenv.xml info Call ReportRoleProperties If configured, delete root password. Return None on success, error string on error. """ enabled = Config.get("Provisioning.Enabled") if enabled != None and enabled.lower().startswith("n"): return Log("Provisioning image started.") type = Config.get("Provisioning.SshHostKeyPairType") if type == None: type = "rsa" regenerateKeys = Config.get("Provisioning.RegenerateSshHostKeyPair") if regenerateKeys == None or regenerateKeys.lower().startswith("y"): Run("rm -f /etc/ssh/ssh_host_*key*") Run("ssh-keygen -N '' -t " + type + " -f /etc/ssh/ssh_host_" + type + "_key") MyDistro.restartSshService() #SetFileContents(LibDir + "/provisioned", "") dvd = None for dvds in [re.match(r'(sr[0-9]|hd[c-z]|cdrom[0-9]|cd[0-9]?)',x) for x in os.listdir('/dev/')]: if dvds == None : continue dvd = '/dev/'+dvds.group(0) if dvd == None: # No DVD device detected Error("No DVD device detected, unable to provision.") return "No DVD device detected, unable to provision." if MyDistro.mediaHasFilesystem(dvd) is False : out=MyDistro.load_ata_piix() if out: return out for i in range(10): # we may have to wait if os.path.exists(dvd): break Log("Waiting for DVD - sleeping 1 - "+str(i+1)+" try...") time.sleep(1) if os.path.exists('/mnt/cdrom/secure') == False: CreateDir("/mnt/cdrom/secure", "root", 0700) #begin mount loop - 5 tries - 5 sec wait between for retry in range(1,6): location='/mnt/cdrom/secure' retcode,output=MyDistro.mountDVD(dvd,location) Log(output[:-1]) if retcode == 0: Log("mount succeeded on attempt #" + str(retry) ) break if 'is already mounted on /mnt/cdrom/secure' in output: Log("Device " + dvd + " is already mounted on /mnt/cdrom/secure." + str(retry) ) break Log("mount failed on attempt #" + str(retry) ) Log("mount loop sleeping 5...") time.sleep(5) if not os.path.isfile("/mnt/cdrom/secure/ovf-env.xml"): Error("Unable to provision: Missing ovf-env.xml on DVD.") return "Failed to retrieve provisioning data (0x02)." ovfxml = (GetFileContents(u"/mnt/cdrom/secure/ovf-env.xml",asbin=False)) # use unicode here to ensure correct codec gets used. if ord(ovfxml[0]) > 128 and ord(ovfxml[1]) > 128 and ord(ovfxml[2]) > 128 : ovfxml = ovfxml[3:] # BOM is not stripped. 
First three bytes are > 128 and not unicode chars so we ignore them. ovfxml=ovfxml.strip(chr(0x00)) # we may have NULLs. ovfxml=ovfxml[ovfxml.find('.*?<", "*<", ovfxml)) Run("umount " + dvd,chk_err=False) MyDistro.unload_ata_piix() error = None if ovfxml != None: Log("Provisioning image using OVF settings in the DVD.") ovfobj = OvfEnv().Parse(ovfxml) if ovfobj != None: error = ovfobj.Process() if error : Error ("Provisioning image FAILED " + error) return ("Provisioning image FAILED " + error) Log("Ovf XML process finished") # This is done here because regenerated SSH host key pairs may be potentially overwritten when processing the ovfxml fingerprint = RunGetOutput("ssh-keygen -lf /etc/ssh/ssh_host_" + type + "_key.pub")[1].rstrip().split()[1].replace(':','') self.ReportRoleProperties(fingerprint) delRootPass = Config.get("Provisioning.DeleteRootPassword") if delRootPass != None and delRootPass.lower().startswith("y"): MyDistro.deleteRootPassword() Log("Provisioning image completed.") return error def Run(self): """ Called by 'waagent -daemon.' Main loop to process the goal state. State is posted every 25 seconds when provisioning has been completed. Search for VMM enviroment, start VMM script if found. Perform DHCP and endpoint server discovery by calling DoDhcpWork(). Check wire protocol versions. Set SCSI timeout on root device. Call GenerateTransportCert() to create ssl certs for server communication. Call UpdateGoalState(). If not provisioned, call ReportNotReady("Provisioning", "Starting") Call Provision(), set global provisioned = True if successful. Call goalState.Process() Start LBProbeServer if indicated in waagent.conf. Start the StateConsumer if indicated in waagent.conf. ReportReady if provisioning is complete. If provisioning failed, call ReportNotReady("ProvisioningFailed", provisionError) """ SetFileContents("/var/run/waagent.pid", str(os.getpid()) + "\n") reportHandlerStatusCount = 0 # Determine if we are in VMM. Spawn VMM_STARTUP_SCRIPT_NAME if found. 
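# After the VMM check below, Run() waits for a usable IPv4 address, discovers
# the wire server endpoint via DHCP, and then enters the goal state loop that
# posts status roughly every 25 seconds.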
self.SearchForVMMStartup() ipv4='' while ipv4 == '' or ipv4 == '0.0.0.0' : ipv4=MyDistro.GetIpv4Address() if ipv4 == '' or ipv4 == '0.0.0.0' : Log("Waiting for network.") time.sleep(10) Log("IPv4 address: " + ipv4) mac='' mac=MyDistro.GetMacAddress() if len(mac)>0 : Log("MAC address: " + ":".join(["%02X" % Ord(a) for a in mac])) # Consume Entropy in ACPI table provided by Hyper-V try: SetFileContents("/dev/random", GetFileContents("/sys/firmware/acpi/tables/OEM0")) except: pass Log("Probing for Azure environment.") self.Endpoint = self.DoDhcpWork() while self.Endpoint == None: Log("Azure environment not detected.") Log("Retry environment detection in 60 seconds") time.sleep(60) self.Endpoint = self.DoDhcpWork() Log("Discovered Azure endpoint: " + self.Endpoint) if not self.CheckVersions(): Error("Agent.CheckVersions failed") sys.exit(1) self.EnvMonitor = EnvMonitor() # Set SCSI timeout on SCSI disks MyDistro.initScsiDiskTimeout() global provisioned global provisionError global Openssl Openssl = Config.get("OS.OpensslPath") if Openssl == None: Openssl = "openssl" self.TransportCert = self.GenerateTransportCert() eventMonitor = None incarnation = None # goalStateIncarnationFromHealthReport currentPort = None # loadBalancerProbePort goalState = None # self.GoalState, instance of GoalState provisioned = os.path.exists(LibDir + "/provisioned") program = Config.get("Role.StateConsumer") provisionError = None lbProbeResponder = True setting = Config.get("LBProbeResponder") if setting != None and setting.lower().startswith("n"): lbProbeResponder = False while True: if (goalState == None) or (incarnation == None) or (goalState.Incarnation != incarnation): try: goalState = self.UpdateGoalState() except HttpResourceGoneError as e: Warn("Incarnation is out of date:{0}".format(e)) incarnation = None continue if goalState == None : Warn("Failed to fetch goalstate") continue if provisioned == False: self.ReportNotReady("Provisioning", "Starting") goalState.Process() if provisioned == False: provisionError = self.Provision() if provisionError == None : provisioned = True SetFileContents(LibDir + "/provisioned", "") lastCtime = "NOTFIND" try: walaConfigFile = MyDistro.getConfigurationPath() lastCtime = time.ctime(os.path.getctime(walaConfigFile)) except: pass #Get Ctime of wala config, can help identify the base image of this VM AddExtensionEvent(name="WALA",op=WALAEventOperation.Provision,isSuccess=True, message="WALA Config Ctime:"+lastCtime) executeCustomData = Config.get("Provisioning.ExecuteCustomData") if executeCustomData != None and executeCustomData.lower().startswith("y"): if os.path.exists(LibDir + '/CustomData'): Run('chmod +x ' + LibDir + '/CustomData') Run(LibDir + '/CustomData') else: Error(LibDir + '/CustomData does not exist.') # # only one port supported # restart server if new port is different than old port # stop server if no longer a port # goalPort = goalState.LoadBalancerProbePort if currentPort != goalPort: try: self.LoadBalancerProbeServer_Shutdown() currentPort = goalPort if currentPort != None and lbProbeResponder == True: self.LoadBalancerProbeServer = LoadBalancerProbeServer(currentPort) if self.LoadBalancerProbeServer == None : lbProbeResponder = False Log("Unable to create LBProbeResponder.") except Exception, e: Error("Failed to launch LBProbeResponder: {0}".format(e)) currentPort = None # Report SSH key fingerprint type = Config.get("Provisioning.SshHostKeyPairType") if type == None: type = "rsa" host_key_path = "/etc/ssh/ssh_host_" + type + "_key.pub" 
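# The fingerprint reported to the fabric is the second field of
# 'ssh-keygen -lf <pubkey>' with the colons stripped, e.g. an (illustrative)
# 'aa:bb:cc:dd' becomes 'aabbccdd'.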
if(MyDistro.waitForSshHostKey(host_key_path)): fingerprint = RunGetOutput("ssh-keygen -lf /etc/ssh/ssh_host_" + type + "_key.pub")[1].rstrip().split()[1].replace(':','') self.ReportRoleProperties(fingerprint) if program != None and DiskActivated == True: try: Children.append(subprocess.Popen([program, "Ready"])) except OSError, e : ErrorWithPrefix('SharedConfig.Parse','Exception: '+ str(e) +' occured launching ' + program ) program = None sleepToReduceAccessDenied = 3 time.sleep(sleepToReduceAccessDenied) if provisionError != None: incarnation = self.ReportNotReady("ProvisioningFailed", provisionError) else: incarnation = self.ReportReady() # Process our extensions. if goalState.ExtensionsConfig == None and goalState.ExtensionsConfigXml != None : reportHandlerStatusCount = 0 #Reset count when new goal state comes goalState.ExtensionsConfig = ExtensionsConfig().Parse(goalState.ExtensionsConfigXml) # report the status/heartbeat results of extension processing if goalState.ExtensionsConfig != None : ret = goalState.ExtensionsConfig.ReportHandlerStatus() if ret != 0: Error("Failed to report handler status") elif reportHandlerStatusCount % 1000 == 0: #Agent report handler status every 25 seconds. Reduce the log entries by adding a count Log("Successfully reported handler status") reportHandlerStatusCount += 1 if not eventMonitor: eventMonitor = WALAEventMonitor(self.HttpPostWithHeaders) eventMonitor.StartEventsLoop() time.sleep(25 - sleepToReduceAccessDenied) WaagentLogrotate = """\ /var/log/waagent.log { monthly rotate 6 notifempty missingok } """ def GetMountPoint(mountlist, device): """ Example of mountlist: /dev/sda1 on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) /dev/sdb1 on /mnt/resource type ext4 (rw) """ if (mountlist and device): for entry in mountlist.split('\n'): if(re.search(device, entry)): tokens = entry.split() #Return the 3rd column of this line return tokens[2] if len(tokens) > 2 else None return None def FindInLinuxKernelCmdline(option): """ Return match object if 'option' is present in the kernel boot options of the grub configuration. """ m=None matchs=r'^.*?'+MyDistro.grubKernelBootOptionsLine+r'.*?'+option+r'.*$' try: m=FindStringInFile(MyDistro.grubKernelBootOptionsFile,matchs) except IOError, e: Error('FindInLinuxKernelCmdline: Exception opening ' + MyDistro.grubKernelBootOptionsFile + 'Exception:' + str(e)) return m def AppendToLinuxKernelCmdline(option): """ Add 'option' to the kernel boot options of the grub configuration. """ if not FindInLinuxKernelCmdline(option): src=r'^(.*?'+MyDistro.grubKernelBootOptionsLine+r')(.*?)("?)$' rep=r'\1\2 '+ option + r'\3' try: ReplaceStringInFile(MyDistro.grubKernelBootOptionsFile,src,rep) except IOError, e : Error('AppendToLinuxKernelCmdline: Exception opening ' + MyDistro.grubKernelBootOptionsFile + 'Exception:' + str(e)) return 1 Run("update-grub",chk_err=False) return 0 def RemoveFromLinuxKernelCmdline(option): """ Remove 'option' to the kernel boot options of the grub configuration. 
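Illustrative example (assuming a distro whose grubKernelBootOptionsLine is
'GRUB_CMDLINE_LINUX'): removing 'numa=off' from
GRUB_CMDLINE_LINUX="console=ttyS0 numa=off" rewrites the line to
GRUB_CMDLINE_LINUX="console=ttyS0 " -- the regex groups preserve everything
around the option, including the closing quote.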
""" if FindInLinuxKernelCmdline(option): src=r'^(.*?'+MyDistro.grubKernelBootOptionsLine+r'.*?)('+option+r')(.*?)("?)$' rep=r'\1\3\4' try: ReplaceStringInFile(MyDistro.grubKernelBootOptionsFile,src,rep) except IOError, e : Error('RemoveFromLinuxKernelCmdline: Exception opening ' + MyDistro.grubKernelBootOptionsFile + 'Exception:' + str(e)) return 1 Run("update-grub",chk_err=False) return 0 def FindStringInFile(fname,matchs): """ Return match object if found in file. """ try: ms=re.compile(matchs) for l in (open(fname,'r')).readlines(): m=re.search(ms,l) if m: return m except: raise return None def ReplaceStringInFile(fname,src,repl): """ Replace 'src' with 'repl' in file. """ try: sr=re.compile(src) if FindStringInFile(fname,src): updated='' for l in (open(fname,'r')).readlines(): n=re.sub(sr,repl,l) updated+=n ReplaceFileContentsAtomic(fname,updated) except : raise return def ApplyVNUMAWorkaround(): """ If kernel version has NUMA bug, add 'numa=off' to kernel boot options. """ VersionParts = platform.release().replace('-', '.').split('.') if int(VersionParts[0]) > 2: return if int(VersionParts[1]) > 6: return if int(VersionParts[2]) > 37: return if AppendToLinuxKernelCmdline("numa=off") == 0 : Log("Your kernel version " + platform.release() + " has a NUMA-related bug: NUMA has been disabled.") else : "Error adding 'numa=off'. NUMA has not been disabled." def RevertVNUMAWorkaround(): """ Remove 'numa=off' from kernel boot options. """ if RemoveFromLinuxKernelCmdline("numa=off") == 0 : Log('NUMA has been re-enabled') else : Log('NUMA has not been re-enabled') def Install(): """ Install the agent service. Check dependencies. Create /etc/waagent.conf and move old version to /etc/waagent.conf.old Copy RulesFiles to /var/lib/waagent Create /etc/logrotate.d/waagent Set /etc/ssh/sshd_config ClientAliveInterval to 180 Call ApplyVNUMAWorkaround() """ if MyDistro.checkDependencies(): return 1 os.chmod(sys.argv[0], 0755) SwitchCwd() for a in RulesFiles: if os.path.isfile(a): if os.path.isfile(GetLastPathElement(a)): os.remove(GetLastPathElement(a)) shutil.move(a, ".") Warn("Moved " + a + " -> " + LibDir + "/" + GetLastPathElement(a) ) MyDistro.registerAgentService() if os.path.isfile("/etc/waagent.conf"): try: os.remove("/etc/waagent.conf.old") except: pass try: os.rename("/etc/waagent.conf", "/etc/waagent.conf.old") Warn("Existing /etc/waagent.conf has been renamed to /etc/waagent.conf.old") except: pass SetFileContents("/etc/waagent.conf", MyDistro.waagent_conf_file) SetFileContents("/etc/logrotate.d/waagent", WaagentLogrotate) filepath = "/etc/ssh/sshd_config" ReplaceFileContentsAtomic(filepath, "\n".join(filter(lambda a: not a.startswith("ClientAliveInterval"), GetFileContents(filepath).split('\n'))) + "\nClientAliveInterval 180\n") Log("Configured SSH client probing to keep connections alive.") ApplyVNUMAWorkaround() return 0 def GetMyDistro(dist_class_name=''): """ Return MyDistro object. NOTE: Logging is not initialized at this point. """ if dist_class_name == '': if 'Linux' in platform.system(): Distro=DistInfo()[0] else : # I know this is not Linux! if 'FreeBSD' in platform.system(): Distro=platform.system() Distro=Distro.strip('"') Distro=Distro.strip(' ') dist_class_name=Distro+'Distro' else: Distro=dist_class_name if not globals().has_key(dist_class_name): print Distro+' is not a supported distribution.' return None return globals()[dist_class_name]() # the distro class inside this module. 
def DistInfo(fullname=0): if 'FreeBSD' in platform.system(): release = re.sub('\-.*\Z', '', str(platform.release())) distinfo = ['FreeBSD', release] return distinfo if 'linux_distribution' in dir(platform): distinfo = list(platform.linux_distribution(full_distribution_name=fullname)) distinfo[0] = distinfo[0].strip() # remove trailing whitespace in distro name if os.path.exists("/etc/euleros-release"): distinfo[0] = "euleros" return distinfo else: return platform.dist() def PackagedInstall(buildroot): """ Called from setup.py for use by RPM. Generic implementation Creates directories and files /etc/waagent.conf, /etc/init.d/waagent, /usr/sbin/waagent, /etc/logrotate.d/waagent, /etc/sudoers.d/waagent under buildroot. Copies generated files waagent.conf, into place and exits. """ MyDistro=GetMyDistro() if MyDistro == None : sys.exit(1) MyDistro.packagedInstall(buildroot) def LibraryInstall(buildroot): pass def Uninstall(): """ Uninstall the agent service. Copy RulesFiles back to original locations. Delete agent-related files. Call RevertVNUMAWorkaround(). """ SwitchCwd() for a in RulesFiles: if os.path.isfile(GetLastPathElement(a)): try: shutil.move(GetLastPathElement(a), a) Warn("Moved " + LibDir + "/" + GetLastPathElement(a) + " -> " + a ) except: pass MyDistro.unregisterAgentService() MyDistro.uninstallDeleteFiles() RevertVNUMAWorkaround() return 0 def Deprovision(force, deluser): """ Remove user accounts created by provisioning. Disables root password if Provisioning.DeleteRootPassword = 'y' Stop agent service. Remove SSH host keys if they were generated by the provision. Set hostname to 'localhost.localdomain'. Delete cached system configuration files in /var/lib and /var/lib/waagent. """ #Append blank line at the end of file, so the ctime of this file is changed every time Run("echo ''>>"+ MyDistro.getConfigurationPath()) SwitchCwd() ovfxml = GetFileContents(LibDir+"/ovf-env.xml") ovfobj = None if ovfxml != None: ovfobj = OvfEnv().Parse(ovfxml, True) print("WARNING! The waagent service will be stopped.") print("WARNING! All SSH host key pairs will be deleted.") print("WARNING! Cached DHCP leases will be deleted.") MyDistro.deprovisionWarnUser() delRootPass = Config.get("Provisioning.DeleteRootPassword") if delRootPass != None and delRootPass.lower().startswith("y"): print("WARNING! root password will be disabled. You will not be able to login as root.") if ovfobj != None and deluser == True: print("WARNING! " + ovfobj.UserName + " account and entire home directory will be deleted.") if force == False and not raw_input('Do you want to proceed (y/n)? ').startswith('y'): return 1 MyDistro.stopAgentService() # Remove SSH host keys regenerateKeys = Config.get("Provisioning.RegenerateSshHostKeyPair") if regenerateKeys == None or regenerateKeys.lower().startswith("y"): Run("rm -f /etc/ssh/ssh_host_*key*") # Remove root password if delRootPass != None and delRootPass.lower().startswith("y"): MyDistro.deleteRootPassword() # Remove distribution specific networking configuration MyDistro.publishHostname('localhost.localdomain') MyDistro.deprovisionDeleteFiles() if deluser == True: MyDistro.DeleteAccount(ovfobj.UserName) return 0 def SwitchCwd(): """ Switch to cwd to /var/lib/waagent. Create if not present. """ CreateDir(LibDir, "root", 0700) os.chdir(LibDir) def Usage(): """ Print the arguments to waagent. 
""" print("usage: " + sys.argv[0] + " [-verbose] [-force] [-help|-install|-uninstall|-deprovision[+user]|-version|-serialconsole|-daemon]") return 0 def main(): """ Instantiate MyDistro, exit if distro class is not defined. Parse command-line arguments, exit with usage() on error. Instantiate ConfigurationProvider. Call appropriate non-daemon methods and exit. If daemon mode, enter Agent.Run() loop. """ if GuestAgentVersion == "": print("WARNING! This is a non-standard agent that does not include a valid version string.") if len(sys.argv) == 1: sys.exit(Usage()) LoggerInit('/var/log/waagent.log','/dev/console') global LinuxDistro LinuxDistro=DistInfo()[0] global MyDistro MyDistro=GetMyDistro() if MyDistro == None : sys.exit(1) args = [] conf_file = None global force force = False for a in sys.argv[1:]: if re.match("^([-/]*)(help|usage|\?)", a): sys.exit(Usage()) elif re.match("^([-/]*)version", a): print(GuestAgentVersion + " running on " + LinuxDistro) sys.exit(0) elif re.match("^([-/]*)verbose", a): myLogger.verbose = True elif re.match("^([-/]*)force", a): force = True elif re.match("^(?:[-/]*)conf=.+", a): conf_file = re.match("^(?:[-/]*)conf=(.+)", a).groups()[0] elif re.match("^([-/]*)(setup|install)", a): sys.exit(MyDistro.Install()) elif re.match("^([-/]*)(uninstall)", a): sys.exit(Uninstall()) else: args.append(a) global Config Config = ConfigurationProvider(conf_file) logfile = Config.get("Logs.File") if logfile is not None: myLogger.file_path = logfile logconsole = Config.get("Logs.Console") if logconsole is not None and logconsole.lower().startswith("n"): myLogger.con_path = None verbose = Config.get("Logs.Verbose") if verbose != None and verbose.lower().startswith("y"): myLogger.verbose=True global daemon daemon = False for a in args: if re.match("^([-/]*)deprovision\+user", a): sys.exit(Deprovision(force, True)) elif re.match("^([-/]*)deprovision", a): sys.exit(Deprovision(force, False)) elif re.match("^([-/]*)daemon", a): daemon = True elif re.match("^([-/]*)serialconsole", a): AppendToLinuxKernelCmdline("console=ttyS0 earlyprintk=ttyS0") Log("Configured kernel to use ttyS0 as the boot console.") sys.exit(0) else: print("Invalid command line parameter:" + a) sys.exit(1) if daemon == False: sys.exit(Usage()) global modloaded modloaded = False while True: try: SwitchCwd() Log(GuestAgentLongName + " Version: " + GuestAgentVersion) if IsLinux(): Log("Linux Distribution Detected : " + LinuxDistro) global WaAgent WaAgent = Agent() WaAgent.Run() except Exception, e: Error(traceback.format_exc()) Error("Exception: " + str(e)) Log("Restart agent in 15 seconds") time.sleep(15) if __name__ == '__main__' : main() Azure-WALinuxAgent-2b21de5/ci/000077500000000000000000000000001462617747000161235ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/ci/2.7.pylintrc000066400000000000000000000065331462617747000202260ustar00rootroot00000000000000# python2.7 uses pylint 1.9.5, whose docs can be found here: http://pylint.pycqa.org/en/1.9/technical_reference/features.html#messages # python3.4 uses pylint 2.3.1, whose docs can be found here: http://pylint.pycqa.org/en/pylint-2.3.1/technical_reference/features.html [MESSAGES CONTROL] disable=C, # (C) convention, for programming standard violation consider-using-dict-comprehension, # (R1717): *Consider using a dictionary comprehension* consider-using-in, # (R1714): *Consider merging these comparisons with "in" to %r* consider-using-set-comprehension, # (R1718): *Consider using a set comprehension* consider-using-with, # (R1732): *Emitted if a 
resource-allocating assignment or call may be replaced by a 'with' block* duplicate-code, # (R0801): *Similar lines in %s files* no-init, # (W0232): Class has no __init__ method no-else-break, # (R1723): *Unnecessary "%s" after "break"* no-else-continue, # (R1724): *Unnecessary "%s" after "continue"* no-else-raise, # (R1720): *Unnecessary "%s" after "raise"* no-else-return, # (R1705): *Unnecessary "%s" after "return"* no-self-use, # (R0201): *Method could be a function* protected-access, # (W0212): Access to a protected member of a client class simplifiable-if-expression, # (R1719): *The if expression can be replaced with %s* simplifiable-if-statement, # (R1703): *The if statement can be replaced with %s* super-with-arguments, # (R1725): *Consider using Python 3 style super() without arguments* too-few-public-methods, # (R0903): *Too few public methods (%s/%s)* too-many-ancestors, # (R0901): *Too many ancestors (%s/%s)* too-many-arguments, # (R0913): *Too many arguments (%s/%s)* too-many-boolean-expressions, # (R0916): *Too many boolean expressions in if statement (%s/%s)* too-many-branches, # (R0912): *Too many branches (%s/%s)* too-many-instance-attributes, # (R0902): *Too many instance attributes (%s/%s)* too-many-locals, # (R0914): *Too many local variables (%s/%s)* too-many-nested-blocks, # (R1702): *Too many nested blocks (%s/%s)* too-many-public-methods, # (R0904): *Too many public methods (%s/%s)* too-many-return-statements, # (R0911): *Too many return statements (%s/%s)* too-many-statements, # (R0915): *Too many statements (%s/%s)* useless-object-inheritance, # (R0205): *Class %r inherits from object, can be safely removed from bases in python3* useless-return, # (R1711): *Useless return at end of function or method* bad-continuation, # Buggy, **REMOVED in pylint-2.6.0** bad-option-value, # pylint does not recognize the error code/symbol (needed to supress breaking changes across pylint versions) bad-whitespace, # Used when a wrong number of spaces is used around an operator, bracket or block opener. broad-except, # Used when an except catches a too general exception, possibly burying unrelated errors. deprecated-lambda, # Used when a lambda is the first argument to “map” or “filter”. It could be clearer as a list comprehension or generator expression. (2.7 only) missing-docstring, # Used when a module, function, class or method has no docstring old-style-class, # Used when a class is defined that does not inherit from another class and does not inherit explicitly from “object”. 
(2.7 only) fixme, # Used when a warning note as FIXME or TODO is detected Azure-WALinuxAgent-2b21de5/ci/3.6.pylintrc000066400000000000000000000054761462617747000202330ustar00rootroot00000000000000# python 3.6+ uses the latest pylint version, whose docs can be found here: http://pylint.pycqa.org/en/stable/technical_reference/features.html [MESSAGES CONTROL] disable=C, # (C) convention, for programming standard violation broad-except, # (W0703): *Catching too general exception %s* consider-using-dict-comprehension, # (R1717): *Consider using a dictionary comprehension* consider-using-in, # (R1714): *Consider merging these comparisons with "in" to %r* consider-using-set-comprehension, # (R1718): *Consider using a set comprehension* consider-using-with, # (R1732): *Emitted if a resource-allocating assignment or call may be replaced by a 'with' block* duplicate-code, # (R0801): *Similar lines in %s files* fixme, # Used when a warning note as FIXME or TODO is detected no-else-break, # (R1723): *Unnecessary "%s" after "break"* no-else-continue, # (R1724): *Unnecessary "%s" after "continue"* no-else-raise, # (R1720): *Unnecessary "%s" after "raise"* no-else-return, # (R1705): *Unnecessary "%s" after "return"* no-init, # (W0232): Class has no __init__ method no-self-use, # (R0201): *Method could be a function* protected-access, # (W0212): Access to a protected member of a client class raise-missing-from, # (W0707): *Consider explicitly re-raising using the 'from' keyword* redundant-u-string-prefix, # The u prefix for strings is no longer necessary in Python >=3.0 simplifiable-if-expression, # (R1719): *The if expression can be replaced with %s* simplifiable-if-statement, # (R1703): *The if statement can be replaced with %s* super-with-arguments, # (R1725): *Consider using Python 3 style super() without arguments* too-few-public-methods, # (R0903): *Too few public methods (%s/%s)* too-many-ancestors, # (R0901): *Too many ancestors (%s/%s)* too-many-arguments, # (R0913): *Too many arguments (%s/%s)* too-many-boolean-expressions, # (R0916): *Too many boolean expressions in if statement (%s/%s)* too-many-branches, # (R0912): *Too many branches (%s/%s)* too-many-instance-attributes, # (R0902): *Too many instance attributes (%s/%s)* too-many-locals, # (R0914): *Too many local variables (%s/%s)* too-many-nested-blocks, # (R1702): *Too many nested blocks (%s/%s)* too-many-public-methods, # (R0904): *Too many public methods (%s/%s)* too-many-return-statements, # (R0911): *Too many return statements (%s/%s)* too-many-statements, # (R0915): *Too many statements (%s/%s)* unspecified-encoding, # (W1514): Using open without explicitly specifying an encoding use-a-generator, # (R1729): *Use a generator instead '%s(%s)'* useless-object-inheritance, # (R0205): *Class %r inherits from object, can be safely removed from bases in python3* useless-return, # (R1711): *Useless return at end of function or method* Azure-WALinuxAgent-2b21de5/ci/nosetests.sh000077500000000000000000000014201462617747000205060ustar00rootroot00000000000000#!/usr/bin/env bash set -u EXIT_CODE=0 echo "=========================================" echo "nosetests -a '!requires_sudo' output" echo "=========================================" nosetests -a '!requires_sudo' tests $NOSEOPTS || EXIT_CODE=$(($EXIT_CODE || $?)) echo EXIT_CODE no_sudo nosetests = $EXIT_CODE [[ -f .coverage ]] && \ sudo mv .coverage coverage.$(uuidgen).no_sudo.data echo "=========================================" echo "nosetests -a 'requires_sudo' output" echo 
"=========================================" sudo env "PATH=$PATH" nosetests -a 'requires_sudo' tests $NOSEOPTS || EXIT_CODE=$(($EXIT_CODE || $?)) echo EXIT_CODE with_sudo nosetests = $EXIT_CODE [[ -f .coverage ]] && \ sudo mv .coverage coverage.$(uuidgen).with_sudo.data exit "$EXIT_CODE" Azure-WALinuxAgent-2b21de5/config/000077500000000000000000000000001462617747000167755ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/66-azure-storage.rules000066400000000000000000000034451462617747000231000ustar00rootroot00000000000000# Azure specific rules. ACTION!="add|change", GOTO="walinuxagent_end" SUBSYSTEM!="block", GOTO="walinuxagent_end" ATTRS{ID_VENDOR}!="Msft", GOTO="walinuxagent_end" ATTRS{ID_MODEL}!="Virtual_Disk", GOTO="walinuxagent_end" # Match the known ID parts for root and resource disks. ATTRS{device_id}=="?00000000-0000-*", ENV{fabric_name}="root", GOTO="wa_azure_names" ATTRS{device_id}=="?00000000-0001-*", ENV{fabric_name}="resource", GOTO="wa_azure_names" # Gen2 disk. ATTRS{device_id}=="{f8b3781a-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi0", GOTO="azure_datadisk" # Create symlinks for data disks attached. ATTRS{device_id}=="{f8b3781b-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi1", GOTO="azure_datadisk" ATTRS{device_id}=="{f8b3781c-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi2", GOTO="azure_datadisk" ATTRS{device_id}=="{f8b3781d-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi3", GOTO="azure_datadisk" GOTO="walinuxagent_end" # Parse out the fabric n ame based off of scsi indicators. LABEL="azure_datadisk" ENV{DEVTYPE}=="partition", PROGRAM="/bin/sh -c 'readlink /sys/class/block/%k/../device|cut -d: -f4'", ENV{fabric_name}="$env{fabric_scsi_controller}/lun$result" ENV{DEVTYPE}=="disk", PROGRAM="/bin/sh -c 'readlink /sys/class/block/%k/device|cut -d: -f4'", ENV{fabric_name}="$env{fabric_scsi_controller}/lun$result" ENV{fabric_name}=="scsi0/lun0", ENV{fabric_name}="root" ENV{fabric_name}=="scsi0/lun1", ENV{fabric_name}="resource" # Don't create a symlink for the cd-rom. ENV{fabric_name}=="scsi0/lun2", GOTO="walinuxagent_end" # Create the symlinks. LABEL="wa_azure_names" ENV{DEVTYPE}=="disk", SYMLINK+="disk/azure/$env{fabric_name}" ENV{DEVTYPE}=="partition", SYMLINK+="disk/azure/$env{fabric_name}-part%n" LABEL="walinuxagent_end" Azure-WALinuxAgent-2b21de5/config/99-azure-product-uuid.rules000066400000000000000000000005271462617747000240640ustar00rootroot00000000000000SUBSYSTEM!="dmi", GOTO="product_uuid-exit" ATTR{sys_vendor}!="Microsoft Corporation", GOTO="product_uuid-exit" ATTR{product_name}!="Virtual Machine", GOTO="product_uuid-exit" TEST!="/sys/devices/virtual/dmi/id/product_uuid", GOTO="product_uuid-exit" RUN+="/bin/chmod 0444 /sys/devices/virtual/dmi/id/product_uuid" LABEL="product_uuid-exit" Azure-WALinuxAgent-2b21de5/config/alpine/000077500000000000000000000000001462617747000202455ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/alpine/waagent.conf000066400000000000000000000057401462617747000225500ustar00rootroot00000000000000# # Windows Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. 
Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=n # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=y # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Respond to load balancer probes if requested by Windows Azure. LBProbeResponder=y # Enable logging to serial console (y|n) # When stdout is not enough... # 'y' if not set Logs.Console=y # Enable verbose logging (y|n) Logs.Verbose=n # Preferred network interface to communicate with Azure platform Network.Interface=eth0 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # TODO: Update the wiki link and point to readme page or public facing doc # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/arch/000077500000000000000000000000001462617747000177125ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/arch/waagent.conf000066400000000000000000000064031462617747000222120ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=n # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. 
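# As a hypothetical illustration of what this decoding step does: assuming the
# payload was delivered Base64-encoded and saved to the agent's usual location
# (/var/lib/waagent/CustomData, treat the path as an assumption), it could be
# decoded by hand with:
#   base64 -d /var/lib/waagent/CustomData | head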
Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Respond to load balancer probes if requested by Windows Azure. LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/bigip/000077500000000000000000000000001462617747000200675ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/bigip/waagent.conf000066400000000000000000000062171462617747000223720ustar00rootroot00000000000000# # Windows Azure Linux Agent Configuration # # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.StateConsumer=None # Specified program is invoked with XML file argument specifying role # configuration. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role topology. Role.TopologyConsumer=None # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. 
Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. # waagent cannot do this on BIG-IP VE Provisioning.MonitorHostName=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Respond to load balancer probes if requested by Windows Azure. LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Specify location of waagent lib dir on BIG-IP Lib.Dir=/shared/vadc/azure/waagent/ # Specify location of sshd config file on BIG-IP OS.SshdConfigPath=/config/ssh/sshd_config # Disable RDMA management and set up OS.EnableRDMA=n # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/clearlinux/000077500000000000000000000000001462617747000211435ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/clearlinux/waagent.conf000066400000000000000000000055411462617747000234450ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.StateConsumer=None # Specified program is invoked with XML file argument specifying role # configuration. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role topology. Role.TopologyConsumer=None # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. 
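# An illustrative shell snippet (not part of this file) to see which host keys
# currently exist and would be replaced when this option is enabled:
#   for f in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$f"; done
# This prints the bit length, fingerprint, and type of each installed host key.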
Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/coreos/000077500000000000000000000000001462617747000202675ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/coreos/waagent.conf000066400000000000000000000066601462617747000225740ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=n # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=ed25519 # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. 
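# For reference, crypt id 6 corresponds to SHA-512. A hash equivalent to what
# these settings describe can be generated by hand (illustrative only; assumes
# OpenSSL 1.1.1 or later for the -6 option):
#   openssl passwd -6 -salt "$(openssl rand -hex 5)" 'example-password'
# The 10-character salt here matches the commented default length below.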
#Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Respond to load balancer probes if requested by Windows Azure. LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks OS.AllowHTTP=y # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/debian/000077500000000000000000000000001462617747000202175ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/debian/waagent.conf000066400000000000000000000071621462617747000225220ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=auto # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. 
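# Custom data is supplied at deployment time. A hypothetical Azure CLI example
# (resource and file names below are placeholders):
#   az vm create -g my-rg -n my-vm --image <image-urn> --custom-data ./startup.sh
# The payload is only executed by the agent when ExecuteCustomData is 'y' below.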
Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=n Azure-WALinuxAgent-2b21de5/config/devuan/000077500000000000000000000000001462617747000202575ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/devuan/waagent.conf000066400000000000000000000072331462617747000225610ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. 
Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=auto # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y # Enforce control groups limits on the agent and extensions CGroups.EnforceLimits=n # CGroups which are excluded from limits, comma separated CGroups.Excluded=customscript,runcommand Azure-WALinuxAgent-2b21de5/config/freebsd/000077500000000000000000000000001462617747000204075ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/freebsd/waagent.conf000066400000000000000000000065421462617747000227130ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. 
Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs' here. ResourceDisk.Filesystem=ufs # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=y # Size of the swapfile. ResourceDisk.SwapSizeMB=16384 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh OS.PasswordPath=/etc/master.passwd OS.SudoersDir=/usr/local/etc/sudoers.d # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/gaia/000077500000000000000000000000001462617747000176765ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/gaia/waagent.conf000066400000000000000000000070571462617747000222040ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=n # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=n # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. Provisioning.PasswordCryptId=1 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=y # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext3 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=y # Size of the swapfile. ResourceDisk.SwapSizeMB=1024 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=/var/lib/waagent/openssl # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images OS.EnableRDMA=n # Enable or disable goal state processing auto-update, default is enabled # When turned off, it reverts to the pre-installed agent that comes with image # AutoUpdate.Enabled is a legacy parameter used only for backwards compatibility. 
We encourage users to transition to new option AutoUpdate.UpdateToLatestVersion # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/iosxe/000077500000000000000000000000001462617747000201245ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/iosxe/waagent.conf000066400000000000000000000064671462617747000224360ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=n # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
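# To confirm which OpenSSL build the default ("None") resolves to on a given
# image, an illustrative check is:
#   command -v openssl && openssl version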
OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/mariner/000077500000000000000000000000001462617747000204325ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/mariner/waagent.conf000066400000000000000000000053521462617747000227340ustar00rootroot00000000000000# Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.StateConsumer=None # Specified program is invoked with XML file argument specifying role # configuration. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role topology. Role.TopologyConsumer=None # Enable instance creation Provisioning.Enabled=n # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa" and "ecdsa". Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. 
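# Whether a swapfile is actually active after boot can be verified with an
# illustrative command such as:
#   swapon --show   # or: free -m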
ResourceDisk.SwapSizeMB=0 # Enable verbose logging (y|n) Logs.Verbose=n # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is disabled # EnableOverProvisioning=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/nsbsd/000077500000000000000000000000001462617747000201065ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/nsbsd/waagent.conf000066400000000000000000000067121462617747000224110ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=n # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs' here. ResourceDisk.Filesystem=ufs # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) TODO set n Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh OS.PasswordPath=/etc/master.passwd OS.SudoersDir=/usr/local/etc/sudoers.d # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # Lib.Dir=/usr/Firewall/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # Extension.LogDir=/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it reverts to the pre-installed agent that comes with image # AutoUpdate.Enabled is a legacy parameter used only for backwards compatibility. We encourage users to transition to new option AutoUpdate.UpdateToLatestVersion # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is disabled # EnableOverProvisioning=n # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=n Azure-WALinuxAgent-2b21de5/config/openbsd/000077500000000000000000000000001462617747000204275ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/openbsd/waagent.conf000066400000000000000000000063721462617747000227340ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=auto # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. OpenBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ufs2 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=y # Max size of the swap partition in MB ResourceDisk.SwapSizeMB=65536 # Comma-separated list of mount options. See mount(8) for valid options. 
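# The options actually applied to the mounted resource disk can be inspected
# with an illustrative command such as:
#   mount | grep /mnt/resource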
ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=/usr/local/bin/eopenssl # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh OS.PasswordPath=/etc/master.passwd # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/photonos/000077500000000000000000000000001462617747000206465ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/photonos/waagent.conf000066400000000000000000000046321462617747000231500ustar00rootroot00000000000000# Microsoft Azure Linux Agent Configuration # # Specified program is invoked with the argument "Ready" when we report ready status # to the endpoint server. Role.StateConsumer=None # Specified program is invoked with XML file argument specifying role # configuration. Role.ConfigurationConsumer=None # Specified program is invoked with XML file argument specifying role topology. Role.TopologyConsumer=None # Enable instance creation Provisioning.Enabled=n # Rely on cloud-init to provision Provisioning.UseCloudInit=y # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa" and "ecdsa". Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=y # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. 
ResourceDisk.SwapSizeMB=0 # Enable verbose logging (y|n) Logs.Verbose=n # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is disabled # EnableOverProvisioning=n Azure-WALinuxAgent-2b21de5/config/suse/000077500000000000000000000000001462617747000177545ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/suse/waagent.conf000066400000000000000000000072201462617747000222520ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=btrfs # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=compress=lzo # Respond to load balancer probes if requested by Microsoft Azure. LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. 
OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable checking RDMA driver version and update # OS.CheckRdmaDriver=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/ubuntu/000077500000000000000000000000001462617747000203175ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/config/ubuntu/waagent.conf000066400000000000000000000070671462617747000226260ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=n # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=n # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=n # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Respond to load balancer probes if requested by Microsoft Azure. 
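# Responding to probes requires the agent to be running; an illustrative check
# (the service is commonly named 'walinuxagent' on Ubuntu, but the name varies
# by distro, so treat it as an assumption):
#   systemctl status walinuxagent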
LBProbeResponder=y # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable RDMA kernel update, this value is effective on Ubuntu # OS.UpdateRdmaDriver=y # Enable checking RDMA driver version and update # OS.CheckRdmaDriver=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y Azure-WALinuxAgent-2b21de5/config/waagent.conf000066400000000000000000000104051462617747000212720ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable extension handling. Do not disable this unless you do not need password reset, # backup, monitoring, or any extension handling whatsoever. Extensions.Enabled=y # How often (in seconds) to poll for new goal states Extensions.GoalStatePeriod=6 # Which provisioning agent to use. Supported values are "auto" (default), "waagent", # "cloud-init", or "disabled". Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # How often (in seconds) to monitor host name changes. Provisioning.MonitorHostNamePeriod=30 # Decode CustomData from Base64. Provisioning.DecodeCustomData=n # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. 
ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See mount(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable Console logging, default is y # Logs.Console=y # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=n # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # How often (in seconds) to set the root device timeout. OS.RootDeviceScsiTimeoutPeriod=30 # If "None", the system default version is used. OS.OpensslPath=None # Set the SSH ClientAliveInterval # OS.SshClientAliveInterval=180 # Set the path to SSH keys and configuration files OS.SshDir=/etc/ssh # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # Home.Dir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=y # Enable checking RDMA driver version and update # OS.CheckRdmaDriver=y # Enable or disable goal state processing auto-update, default is enabled # When turned off, it remains on latest version installed on the vm # Added this new option AutoUpdate.UpdateToLatestVersion in place of AutoUpdate.Enabled, and encourage users to transition to this new option # See wiki[https://github.com/Azure/WALinuxAgent/wiki/FAQ#autoupdateenabled-vs-autoupdateupdatetolatestversion] for more details # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services OS.EnableFirewall=y # How often (in seconds) to check the firewall rules OS.EnableFirewallPeriod=30 # How often (in seconds) to remove the udev rules for persistent network interface names (75-persistent-net-generator.rules and /etc/udev/rules.d/70-persistent-net.rules) OS.RemovePersistentNetRulesPeriod=30 # How often (in seconds) to monitor for DHCP client restarts OS.MonitorDhcpClientRestartPeriod=30 Azure-WALinuxAgent-2b21de5/config/waagent.logrotate000066400000000000000000000016141462617747000223470ustar00rootroot00000000000000/var/log/waagent.log { # Old versions of log files are compressed with gzip by default. compress # Rotate log files > 20 MB, and keep last 50 archived files. With an compression ratio ranging from 5-10%, The # archived files would take an average of around 50-100 MB. Even for extremely chatty agent, the average size of # the compressed files would not go beyond ~2 MB per day. size 20M rotate 50 # Add date as extension when rotating logs dateext # Formatting the date extension to YYYY-MM-DD-SSSSSSS format. logrotate does not provide hours, mins and seconds # option. Adding the %s (system clock epoch time) to differentiate rotated log files within the same day. 
dateformat -%Y-%m-%d-%s # Do not rotate the log if it is empty notifempty # If the log file is missing, go on to the next one without issuing an error message. missingok }Azure-WALinuxAgent-2b21de5/init/000077500000000000000000000000001462617747000164735ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/arch/000077500000000000000000000000001462617747000174105ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/arch/waagent.service000066400000000000000000000005371462617747000224250ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/bin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/init/azure-vmextensions.slice000066400000000000000000000002141462617747000233770ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Extensions DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes MemoryAccounting=yes Azure-WALinuxAgent-2b21de5/init/azure.slice000066400000000000000000000001471462617747000206440ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Agent and Extensions DefaultDependencies=no Before=slices.target Azure-WALinuxAgent-2b21de5/init/clearlinux/000077500000000000000000000000001462617747000206415ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/clearlinux/waagent.service000066400000000000000000000005661462617747000236600ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/usr/share/defaults/waagent/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/bin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/init/coreos/000077500000000000000000000000001462617747000177655ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/coreos/cloud-config.yml000066400000000000000000000023511462617747000230620ustar00rootroot00000000000000#cloud-config coreos: units: - name: etcd.service runtime: true drop-ins: - name: 10-oem.conf content: | [Service] Environment=ETCD_PEER_ELECTION_TIMEOUT=1200 - name: etcd2.service runtime: true drop-ins: - name: 10-oem.conf content: | [Service] Environment=ETCD_ELECTION_TIMEOUT=1200 - name: waagent.service command: start runtime: true content: | [Unit] Description=Microsoft Azure Agent Wants=network-online.target sshd-keygen.service After=network-online.target sshd-keygen.service [Service] Type=simple Restart=always RestartSec=5s ExecStart=/usr/share/oem/python/bin/python /usr/share/oem/bin/waagent -daemon - name: oem-cloudinit.service command: restart runtime: yes content: | [Unit] Description=Cloudinit from Azure metadata [Service] Type=oneshot ExecStart=/usr/bin/coreos-cloudinit --oem=azure oem: id: azure name: Microsoft Azure version-id: 2.1.4 home-url: https://azure.microsoft.com/ bug-report-url: https://github.com/coreos/bugs/issues 
Azure-WALinuxAgent-2b21de5/init/devuan/000077500000000000000000000000001462617747000177555ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/devuan/default/000077500000000000000000000000001462617747000214015ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/devuan/default/walinuxagent000066400000000000000000000001321462617747000240260ustar00rootroot00000000000000# To disable the Microsoft Azure Agent, set WALINUXAGENT_ENABLED=0 WALINUXAGENT_ENABLED=1 Azure-WALinuxAgent-2b21de5/init/devuan/walinuxagent000066400000000000000000000226701462617747000224150ustar00rootroot00000000000000#!/bin/bash # walinuxagent # script to start and stop the waagent daemon. # # This script takes into account the possibility that both daemon and # non-daemon instances of waagent may be running concurrently, # and attempts to ensure that any non-daemon instances are preserved # when the daemon instance is stopped. # ### BEGIN INIT INFO # Provides: walinuxagent # Required-Start: $remote_fs $syslog $network # Required-Stop: $remote_fs # X-Start-Before: cloud-init # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Microsoft Azure Linux Agent ### END INIT INFO DESC="Microsoft Azure Linux Agent" INTERPRETER="/usr/bin/python3" DAEMON='/usr/sbin/waagent' DAEMON_ARGS='-daemon' START_ARGS='--background' NAME='waagent' # set to 1 to enable a lot of debugging output DEBUG=0 . /lib/lsb/init-functions debugmsg() { # output a console message if DEBUG is set # (can be enabled dynamically by giving "debug" as an extra argument) if [ "x${DEBUG}" == "x1" ] ; then echo "[debug]: $1" >&2 fi return 0 } check_non_daemon_instances() { # check if there are any non-daemon instances of waagent running local NDPIDLIST i NDPIDCT declare -a NDPIDLIST debugmsg "check_non_daemon_instance: after init, #NDPIDLIST=${#NDPIDLIST[*]}" readarray -t NDPIDLIST < <( ps ax | grep "${INTERPRETER}" | grep "${DAEMON}" | grep -v -- "${DAEMON_ARGS}" | grep -v "grep" | awk '{ print $1 }') NDPIDCT=${#NDPIDLIST[@]} debugmsg "check_non_daemon_instances: NDPIDCT=${NDPIDCT}" debugmsg "check_non_daemon_instances: NDPIDLIST[0] = ${NDPIDLIST[0]}" if [ ${NDPIDCT} -gt 0 ] ; then debugmsg "check_non_daemon_instances: WARNING: non-daemon instances of waagent exist" else debugmsg "check_non_daemon_instances: no non-daemon instances of waagent are currently running" fi for (( i = 0 ; i < ${NDPIDCT} ; i++ )) ; do debugmsg "check_non_daemon_instances: WARNING: process ${NDPIDLIST[${i}]} is a non-daemon waagent instance" done return 0 } get_daemon_pid() { # (re)create PIDLIST, return the first entry local PID create_pidlist PID=${PIDLIST[0]} if [ -z "${PID}" ] ; then debugmsg "get_daemon_pid: : WARNING: no waagent daemon process found" fi echo "${PID}" } recheck_status() { # after an attempt to stop the daemon, re-check the status # and take any further actions required. # (NB: at the moment, we only re-check once. Possible improvement # would be to iterate the re-check up to a given maximum tries). local STATUS NEWSTATUS get_status STATUS=$? debugmsg "stop_waagent: status is now ${STATUS}" # ideal if stop has been successful: STATUS=1 - no daemon process case ${STATUS} in 0) # stop didn't work # what to do? maybe try kill -9 ? debugmsg "recheck_status: ERROR: unable to stop waagent" debugmsg "recheck_status: trying again with kill -9" kill_daemon_from_pid 1 # probably need to check status again? get_status NEW_STATUS=$? if [ "x${NEW_STATUS}" == "x1" ] ; then debugmsg "recheck_status: successfully stopped." 
log_end_msg 0 || true else # could probably do something more productive here debugmsg "recheck_status: unable to stop daemon - giving up" log_end_msg 1 || true exit 1 fi ;; 1) # THIS IS THE EXPECTED CASE: daemon is no longer running and debugmsg "recheck_status: waagent daemon stopped successfully." log_end_msg 0 || true ;; 2) # so weird that we can't figure out what's going on debugmsg "recheck_status: ERROR: unable to determine waagent status" debugmsg "recheck_status: manual intervention required" log_end_msg 1 || true exit 1 ;; esac } start_waagent() { # we use start-stop-daemon for starting waagent local STATUS get_status STATUS=$? # check the status value - take appropriate action debugmsg "start_waagent: STATUS=${STATUS}" case "${STATUS}" in 0) debugmsg "start_waagent: waagent is already running" log_daemon_msg "waagent is already running" log_end_msg 0 || true ;; 1) # not running (we ignore presence/absence of pidfile) # just start waagent debugmsg "start_waagent: waagent is not currently running" log_daemon_msg "Starting ${NAME} daemon" start-stop-daemon --start --quiet --background --name "${NAME}" --exec ${INTERPRETER} -- ${DAEMON} ${DAEMON_ARGS} log_end_msg $? || true ;; 2) # get_status can't figure out what's going on. # try doing a stop to clean up, then attempt to start waagent # will probably require manual intervention debugmsg "start_waagent: unable to determine current status" debugmsg "start_waagent: trying to stop waagent first, and then start it" stop_waagent log_daemon_msg "Starting ${NAME} daemon" start-stop-daemon --start --quiet --background --name ${NAME} --exec ${INTERPRETER} -- ${DAEMON} ${DAEMON_ARGS} log_end_msg $? || true ;; esac } kill_daemon_from_pidlist() { # check the pidlist for at least one waagent daemon process # if found, kill it directly from the entry in the pidlist # Ignore any pidfile. Avoid killing any non-daemon # waagent processes. # If called with "1" as first argument, use kill -9 rather than # normal kill local i PIDCT FORCE FORCE=0 if [ "x${1}" == "x1" ] ; then debugmsg "kill_daemon_from_pidlist: WARNING: using kill -9" FORCE=1 fi debugmsg "kill_daemon_from_pidlist: killing daemon using pid(s) in PIDLIST" PIDCT=${#PIDLIST[*]} if [ "${PIDCT}" -eq 0 ] ; then debugmsg "kill_daemon_from_pidlist: ERROR: no pids in PIDLIST" return 1 fi for (( i=0 ; i < ${PIDCT} ; i++ )) ; do debugmsg "kill_daemon_from_pidlist: killing waagent daemon process ${PIDLIST[${i}]}" if [ "x${FORCE}" == "x1" ] ; then kill -9 ${PIDLIST[${i}]} else kill ${PIDLIST[${i}]} fi done return 0 } stop_waagent() { # check the current status and if the waagent daemon is running, attempt # to stop it. # start-stop-daemon is avoided here local STATUS PID RC get_status STATUS=$? debugmsg "stop_waagent: current status = ${STATUS}" case "${STATUS}" in 0) # - ignore any pidfile - kill directly from process list log_daemon_msg "Stopping ${NAME} daemon (using process list)" kill_daemon_from_pidlist recheck_status ;; 1) # not running - we ignore any pidfile # REVISIT: should we check for a pidfile and remove if found? 
debugmsg "waagent is not running" log_daemon_msg "waagent is already stopped" log_end_msg 0 || true ;; 2) # weirdness - call for help debugmsg "ERROR: unable to determine waagent status - manual intervention required" log_daemon_msg "WARNING: unable to determine status of waagent daemon - manual intervention required" log_end_msg 1 || true ;; esac } check_daemons() { # check for running waagent daemon processes local ENTRY ps ax | grep "${INTERPRETER}" | grep "${DAEMON}" | grep -- "${DAEMON_ARGS}" | grep -v 'grep' | while read ENTRY ; do debugmsg "check_daemons(): ENTRY='${ENTRY}'" done return 0 } create_pidlist() { # initialise the list of waagent daemon processes # NB: there should only be one - both this script and waagent itself # attempt to avoid starting more than one daemon process. # However, we use an array just in case. readarray -t PIDLIST < <( ps ax | grep "${INTERPRETER}" | grep "${DAEMON}" | grep -- "${DAEMON_ARGS}" | grep -v 'grep' | awk '{ print $1 }') if [ "${#PIDLIST[*]}" -eq 0 ] ; then debugmsg "create_pidlist: WARNING: no waagent daemons found" elif [ "${#PIDLIST[*]}" -gt 1 ] ; then debugmsg "create_pidlist: WARNING: multiple waagent daemons running" fi return 0 } get_status() { # simplified status - ignoring any pidfile # Possibilities: # 0 - waagent daemon running # 1 - waagent daemon not running # 2 - status unclear # (NB: if we find that multiple daemons exist, we just ignore the fact. # It should be virtually impossible for this to happen) local FOUND RPID ENTRY STATUS DAEMON_RUNNING PIDCT PIDCT=0 DAEMON_RUNNING= RPID= ENTRY= # assume the worst STATUS=2 check_daemons create_pidlist # should only be one daemon running - but we check, just in case PIDCT=${#PIDLIST[@]} debugmsg "get_status: PIDCT=${PIDCT}" if [ ${PIDCT} -eq 0 ] ; then # not running STATUS=1 else # at least one daemon process is running if [ ${PIDCT} -gt 1 ] ; then debugmsg "get_status: WARNING: more than one waagent daemon running" debugmsg "get_status: (should not happen)" else debugmsg "get_status: only one daemon instance running - as expected" fi STATUS=0 fi return ${STATUS} } waagent_status() { # get the current status of the waagent daemon, and return it local STATUS get_status STATUS=$? debugmsg "waagent status = ${STATUS}" case ${STATUS} in 0) log_daemon_msg "waagent is running" ;; 1) log_daemon_msg "WARNING: waagent is not running" ;; 2) log_daemon_msg "WARNING: waagent status cannot be determined" ;; esac log_end_msg 0 || true return 0 } ######################################################################### # MAINLINE # Usage: "service [scriptname] [ start | stop | status | restart ] [ debug ] # (specifying debug as extra argument enables debugging output) ######################################################################### export PATH="${PATH}:+$PATH:}/usr/sbin:/sbin" declare -a PIDLIST if [ ! -z "$2" -a "$2" == "debug" ] ; then DEBUG=1 fi # pre-check for non-daemon (e.g. console) instances of waagent check_non_daemon_instances case "$1" in start) start_waagent ;; stop) stop_waagent ;; status) waagent_status ;; restart) stop_waagent start_waagent ;; esac exit 0 Azure-WALinuxAgent-2b21de5/init/freebsd/000077500000000000000000000000001462617747000201055ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/freebsd/waagent000077500000000000000000000005151462617747000214620ustar00rootroot00000000000000#!/bin/sh # PROVIDE: waagent # REQUIRE: sshd netif dhclient # KEYWORD: nojail . 
/etc/rc.subr PATH=$PATH:/usr/local/bin:/usr/local/sbin name="waagent" rcvar="waagent_enable" pidfile="/var/run/waagent.pid" command="/usr/local/sbin/${name}" command_interpreter="python" command_args="start" load_rc_config $name run_rc_command "$1" Azure-WALinuxAgent-2b21de5/init/gaia/000077500000000000000000000000001462617747000173745ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/gaia/waagent000077500000000000000000000014561462617747000207560ustar00rootroot00000000000000#!/bin/bash # # Init file for AzureLinuxAgent. # # chkconfig: 2345 60 80 # description: AzureLinuxAgent # # source function library . /etc/rc.d/init.d/functions RETVAL=0 FriendlyName="AzureLinuxAgent" WAZD_BIN=/usr/sbin/waagent.sh start() { echo -n $"Starting $FriendlyName: " $WAZD_BIN -start & success echo } stop() { echo -n $"Stopping $FriendlyName: " killproc -p /var/run/waagent.pid $WAZD_BIN RETVAL=$? echo return $RETVAL } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; reload) ;; report) ;; status) status $WAZD_BIN RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|status}" RETVAL=1 esac exit $RETVAL Azure-WALinuxAgent-2b21de5/init/mariner/000077500000000000000000000000001462617747000201305ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/mariner/waagent.service000066400000000000000000000006201462617747000231360ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=systemd-networkd-wait-online.service sshd.service sshd-keygen.service After=systemd-networkd-wait-online.service cloud-init.service ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/bin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/init/openbsd/000077500000000000000000000000001462617747000201255ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/openbsd/waagent000066400000000000000000000002311462617747000214720ustar00rootroot00000000000000#!/bin/sh daemon="python2.7 /usr/local/sbin/waagent -start" . /etc/rc.d/rc.subr pexp="python /usr/local/sbin/waagent -daemon" rc_reload=NO rc_cmd $1 Azure-WALinuxAgent-2b21de5/init/openwrt/000077500000000000000000000000001462617747000201715ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/openwrt/waagent000077500000000000000000000027061462617747000215520ustar00rootroot00000000000000#!/bin/sh /etc/rc.common # Init file for AzureLinuxAgent. # # Copyright 2018 Microsoft Corporation # Copyright 2018 Sonus Networks, Inc. (d.b.a. Ribbon Communications Operating Company) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # # description: AzureLinuxAgent # START=60 STOP=80 RETVAL=0 FriendlyName="AzureLinuxAgent" WAZD_BIN=/usr/sbin/waagent WAZD_CONF=/etc/waagent.conf WAZD_PIDFILE=/var/run/waagent.pid test -x "$WAZD_BIN" || { echo "$WAZD_BIN not installed"; exit 5; } test -e "$WAZD_CONF" || { echo "$WAZD_CONF not found"; exit 6; } start() { echo -n "Starting $FriendlyName: " $WAZD_BIN -start RETVAL=$? echo return $RETVAL } stop() { echo -n "Stopping $FriendlyName: " if [ -f "$WAZD_PIDFILE" ] then kill -9 `cat ${WAZD_PIDFILE}` rm ${WAZD_PIDFILE} RETVAL=$? echo return $RETVAL else echo "$FriendlyName already stopped." fi } Azure-WALinuxAgent-2b21de5/init/photonos/000077500000000000000000000000001462617747000203445ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/photonos/waagent.service000066400000000000000000000006211462617747000233530ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=systemd-networkd-wait-online.service sshd.service sshd-keygen.service After=systemd-networkd-wait-online.service cloud-init.service ConditionFileIsExecutable=/usr/bin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/bin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/init/redhat/000077500000000000000000000000001462617747000177425ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/redhat/py2/000077500000000000000000000000001462617747000204545ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/redhat/py2/waagent.service000066400000000000000000000006321462617747000234650ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/sbin/waagent -daemon Restart=always RestartSec=5 Slice=azure.slice CPUAccounting=yes MemoryAccounting=yes [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/init/redhat/waagent.service000066400000000000000000000006331462617747000227540ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always RestartSec=5 Slice=azure.slice CPUAccounting=yes MemoryAccounting=yes [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/init/sles/000077500000000000000000000000001462617747000174415ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/sles/waagent.service000066400000000000000000000005421462617747000224520ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/init/suse/000077500000000000000000000000001462617747000174525ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/suse/waagent000077500000000000000000000062011462617747000210250ustar00rootroot00000000000000#! 
/bin/sh # # Microsoft Azure Linux Agent sysV init script # # Copyright 2013 Microsoft Corporation # Copyright SUSE LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # /etc/init.d/waagent # # and symbolic link # # /usr/sbin/rcwaagent # # System startup script for the waagent # ### BEGIN INIT INFO # Provides: MicrosoftAzureLinuxAgent # Required-Start: $network sshd # Required-Stop: $network sshd # Default-Start: 3 5 # Default-Stop: 0 1 2 6 # Description: Start the MicrosoftAzureLinuxAgent ### END INIT INFO PYTHON=/usr/bin/python WAZD_BIN=/usr/sbin/waagent WAZD_CONF=/etc/waagent.conf WAZD_PIDFILE=/var/run/waagent.pid test -x "$WAZD_BIN" || { echo "$WAZD_BIN not installed"; exit 5; } test -e "$WAZD_CONF" || { echo "$WAZD_CONF not found"; exit 6; } . /etc/rc.status # First reset status of this service rc_reset # Return values acc. to LSB for all commands but status: # 0 - success # 1 - misc error # 2 - invalid or excess args # 3 - unimplemented feature (e.g. reload) # 4 - insufficient privilege # 5 - program not installed # 6 - program not configured # # Note that starting an already running service, stopping # or restarting a not-running service as well as the restart # with force-reload (in case signalling is not supported) are # considered a success. case "$1" in start) echo -n "Starting MicrosoftAzureLinuxAgent" ## Start daemon with startproc(8). If this fails ## the echo return value is set appropriate. startproc -f ${PYTHON} ${WAZD_BIN} -start rc_status -v ;; stop) echo -n "Shutting down MicrosoftAzureLinuxAgent" ## Stop daemon with killproc(8) and if this fails ## set echo the echo return value. killproc -p ${WAZD_PIDFILE} ${PYTHON} ${WAZD_BIN} rc_status -v ;; try-restart) ## Stop the service and if this succeeds (i.e. the ## service was running before), start it again. $0 status >/dev/null && $0 restart rc_status ;; restart) ## Stop the service and regardless of whether it was ## running or not, start it again. $0 stop sleep 1 $0 start rc_status ;; force-reload|reload) rc_status ;; status) echo -n "Checking for service MicrosoftAzureLinuxAgent " ## Check status with checkproc(8), if process is running ## checkproc will return with exit status 0. checkproc -p ${WAZD_PIDFILE} ${PYTHON} ${WAZD_BIN} rc_status -v ;; probe) ;; *) echo "Usage: $0 {start|stop|status|try-restart|restart|force-reload|reload}" exit 1 ;; esac rc_exit Azure-WALinuxAgent-2b21de5/init/ubuntu/000077500000000000000000000000001462617747000200155ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/init/ubuntu/walinuxagent000066400000000000000000000001321462617747000224420ustar00rootroot00000000000000# To disable the Microsoft Azure Agent, set WALINUXAGENT_ENABLED=0 WALINUXAGENT_ENABLED=1 Azure-WALinuxAgent-2b21de5/init/ubuntu/walinuxagent.conf000066400000000000000000000007321462617747000233740ustar00rootroot00000000000000description "Microsoft Azure Linux agent" author "Ben Howard " start on runlevel [2345] stop on runlevel [!2345] pre-start script [ -r /etc/default/walinuxagent ] && . 
/etc/default/walinuxagent if [ "$WALINUXAGENT_ENABLED" != "1" ]; then stop ; exit 0 fi if [ ! -x /usr/sbin/waagent ]; then stop ; exit 0 fi #Load the udf module modprobe -b udf end script exec /usr/sbin/waagent -daemon respawn Azure-WALinuxAgent-2b21de5/init/ubuntu/walinuxagent.service000066400000000000000000000011161462617747000241040ustar00rootroot00000000000000# # NOTE: # This file hosted on WALinuxAgent repository only for reference purposes. # Please refer to a recent image to find out the up-to-date systemd unit file. # [Unit] Description=Azure Linux Agent After=network-online.target cloud-init.service Wants=network-online.target sshd.service sshd-keygen.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always Slice=azure.slice CPUAccounting=yes MemoryAccounting=yes [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/init/waagent000077500000000000000000000014761462617747000200570ustar00rootroot00000000000000#!/bin/bash # # Init file for AzureLinuxAgent. # # chkconfig: 2345 60 80 # description: AzureLinuxAgent # # source function library . /etc/rc.d/init.d/functions RETVAL=0 FriendlyName="AzureLinuxAgent" WAZD_BIN=/usr/sbin/waagent start() { echo -n $"Starting $FriendlyName: " $WAZD_BIN -start RETVAL=$? echo return $RETVAL } stop() { echo -n $"Stopping $FriendlyName: " killproc -p /var/run/waagent.pid $WAZD_BIN RETVAL=$? echo return $RETVAL } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; reload) ;; report) ;; status) status $WAZD_BIN RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|restart|status}" RETVAL=1 esac exit $RETVAL Azure-WALinuxAgent-2b21de5/init/waagent.service000066400000000000000000000005411462617747000215030ustar00rootroot00000000000000[Unit] Description=Azure Linux Agent Wants=network-online.target sshd.service sshd-keygen.service After=network-online.target ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python -u /usr/sbin/waagent -daemon Restart=always RestartSec=5 [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/makepkg.py000077500000000000000000000103401462617747000175220ustar00rootroot00000000000000#!/usr/bin/env python3 import argparse import glob import logging import os.path import shutil import subprocess import sys from azurelinuxagent.common.version import AGENT_NAME, AGENT_VERSION, \ AGENT_LONG_VERSION from azurelinuxagent.ga.guestagent import AGENT_MANIFEST_FILE MANIFEST = '''[{{ "name": "{0}", "version": 1.0, "handlerManifest": {{ "installCommand": "", "uninstallCommand": "", "updateCommand": "", "enableCommand": "python -u {1} -run-exthandlers", "disableCommand": "", "rebootAfterInstall": false, "reportHeartbeat": false }} }}]''' PUBLISH_MANIFEST = ''' Microsoft.OSTCLinuxAgent {1} {0} VmRole Microsoft Azure Guest Agent for Linux IaaS true https://github.com/Azure/WALinuxAgent/blob/2.1/LICENSE.txt https://github.com/Azure/WALinuxAgent/blob/2.1/LICENSE.txt https://github.com/Azure/WALinuxAgent true Microsoft Linux ''' PUBLISH_MANIFEST_FILE = 'manifest.xml' def do(*args): try: return subprocess.check_output(args, stderr=subprocess.STDOUT) except subprocess.CalledProcessError as e: # pylint: disable=C0103 raise Exception("[{0}] failed:\n{1}\n{2}".format(" ".join(args), str(e), e.output)) def run(agent_family, output_directory, log): output_path = os.path.join(output_directory, "eggs") target_path = 
os.path.join(output_path, AGENT_LONG_VERSION) bin_path = os.path.join(target_path, "bin") egg_path = os.path.join(bin_path, AGENT_LONG_VERSION + ".egg") manifest_path = os.path.join(target_path, AGENT_MANIFEST_FILE) publish_manifest_path = os.path.join(target_path, PUBLISH_MANIFEST_FILE) pkg_name = os.path.join(output_path, AGENT_LONG_VERSION + ".zip") if os.path.isdir(target_path): shutil.rmtree(target_path) elif os.path.isfile(target_path): os.remove(target_path) if os.path.isfile(pkg_name): os.remove(pkg_name) os.makedirs(bin_path) log.info("Created {0} directory".format(target_path)) setup_path = os.path.join(os.path.dirname(__file__), "setup.py") args = ["python3", setup_path, "bdist_egg", "--dist-dir={0}".format(bin_path)] log.info("Creating egg {0}".format(egg_path)) do(*args) egg_name = os.path.join("bin", os.path.basename( glob.glob(os.path.join(bin_path, "*"))[0])) log.info("Writing {0}".format(manifest_path)) with open(manifest_path, mode='w') as manifest: manifest.write(MANIFEST.format(AGENT_NAME, egg_name)) log.info("Writing {0}".format(publish_manifest_path)) with open(publish_manifest_path, mode='w') as publish_manifest: publish_manifest.write(PUBLISH_MANIFEST.format(AGENT_VERSION, agent_family)) cwd = os.getcwd() os.chdir(target_path) try: log.info("Creating package {0}".format(pkg_name)) do("zip", "-r", pkg_name, egg_name) do("zip", "-j", pkg_name, AGENT_MANIFEST_FILE) do("zip", "-j", pkg_name, PUBLISH_MANIFEST_FILE) finally: os.chdir(cwd) log.info("Package {0} successfully created".format(pkg_name)) if __name__ == "__main__": logging.basicConfig(format='%(message)s', level=logging.INFO) parser = argparse.ArgumentParser() parser.add_argument('family', metavar='family', nargs='?', default='Test', help='Agent family') parser.add_argument('-o', '--output', default=os.getcwd(), help='Output directory') arguments = parser.parse_args() try: run(arguments.family, arguments.output, logging) except Exception as exception: logging.error(str(exception)) sys.exit(1) sys.exit(0) Azure-WALinuxAgent-2b21de5/requirements.txt000066400000000000000000000000461462617747000210140ustar00rootroot00000000000000distro; python_version >= '3.8' pyasn1Azure-WALinuxAgent-2b21de5/setup.py000077500000000000000000000332011462617747000172440ustar00rootroot00000000000000#!/usr/bin/env python # # Microsoft Azure Linux Agent setup.py # # Copyright 2013 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# import os import subprocess import sys import setuptools from setuptools import find_packages from setuptools.command.install import install as _install from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.version import AGENT_NAME, AGENT_VERSION, \ AGENT_DESCRIPTION, \ DISTRO_NAME, DISTRO_VERSION, DISTRO_FULL_NAME root_dir = os.path.dirname(os.path.abspath(__file__)) # pylint: disable=invalid-name os.chdir(root_dir) def set_files(data_files, dest=None, src=None): data_files.append((dest, src)) def set_bin_files(data_files, dest, src=None): if src is None: src = ["bin/waagent", "bin/waagent2.0"] data_files.append((dest, src)) def set_conf_files(data_files, dest="/etc", src=None): if src is None: src = ["config/waagent.conf"] data_files.append((dest, src)) def set_logrotate_files(data_files, dest="/etc/logrotate.d", src=None): if src is None: src = ["config/waagent.logrotate"] data_files.append((dest, src)) def set_sysv_files(data_files, dest="/etc/rc.d/init.d", src=None): if src is None: src = ["init/waagent"] data_files.append((dest, src)) def set_systemd_files(data_files, dest, src=None): if src is None: src = ["init/waagent.service"] data_files.append((dest, src)) def set_freebsd_rc_files(data_files, dest="/etc/rc.d/", src=None): if src is None: src = ["init/freebsd/waagent"] data_files.append((dest, src)) def set_openbsd_rc_files(data_files, dest="/etc/rc.d/", src=None): if src is None: src = ["init/openbsd/waagent"] data_files.append((dest, src)) def set_udev_files(data_files, dest="/etc/udev/rules.d/", src=None): if src is None: src = ["config/66-azure-storage.rules", "config/99-azure-product-uuid.rules"] data_files.append((dest, src)) def get_data_files(name, version, fullname): # pylint: disable=R0912 """ Determine data_files according to distro name, version and init system type """ data_files = [] osutil = get_osutil() systemd_dir_path = osutil.get_systemd_unit_file_install_path() agent_bin_path = osutil.get_agent_bin_path() if name in ('redhat', 'rhel', 'centos', 'almalinux', 'cloudlinux', 'rocky'): if version.startswith("8") or version.startswith("9"): # redhat8+ default to py3 set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) else: set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files) set_logrotate_files(data_files) set_udev_files(data_files) if version.startswith("8") or version.startswith("9"): # redhat 8+ uses systemd and python3 set_systemd_files(data_files, dest=systemd_dir_path, src=["init/redhat/waagent.service", "init/azure.slice", "init/azure-vmextensions.slice" ]) elif version.startswith("6"): set_sysv_files(data_files) else: # redhat7.0+ use systemd set_systemd_files(data_files, dest=systemd_dir_path, src=[ "init/redhat/py2/waagent.service", "init/azure.slice", "init/azure-vmextensions.slice" ]) if version.startswith("7.1"): # TODO this is a mitigation to systemctl bug on 7.1 set_sysv_files(data_files) elif name == 'arch': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/arch/waagent.conf"]) set_udev_files(data_files) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/arch/waagent.service"]) elif name in ('coreos', 'flatcar'): set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, dest="/usr/share/oem", src=["config/coreos/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) set_files(data_files, dest="/usr/share/oem", src=["init/coreos/cloud-config.yml"]) elif "Clear Linux" in fullname: 
set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, dest="/usr/share/defaults/waagent", src=["config/clearlinux/waagent.conf"]) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/clearlinux/waagent.service"]) elif name == 'mariner': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, dest="/etc", src=["config/mariner/waagent.conf"]) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/mariner/waagent.service"]) set_logrotate_files(data_files) set_udev_files(data_files) elif name == 'ubuntu': set_conf_files(data_files, src=["config/ubuntu/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) if version.startswith("12") or version.startswith("14"): # Ubuntu12.04/14.04 - uses upstart if version.startswith("12"): set_bin_files(data_files, dest=agent_bin_path) else: set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) set_files(data_files, dest="/etc/init", src=["init/ubuntu/walinuxagent.conf"]) set_files(data_files, dest='/etc/default', src=['init/ubuntu/walinuxagent']) else: set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) # Ubuntu15.04+ uses systemd set_systemd_files(data_files, dest=systemd_dir_path, src=[ "init/ubuntu/walinuxagent.service", "init/azure.slice", "init/azure-vmextensions.slice" ]) elif name == 'suse' or name == 'opensuse': # pylint: disable=R1714 set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/suse/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) if fullname == 'SUSE Linux Enterprise Server' and \ version.startswith('11') or \ fullname == 'openSUSE' and version.startswith( '13.1'): set_sysv_files(data_files, dest='/etc/init.d', src=["init/suse/waagent"]) else: # sles 12+ and openSUSE 13.2+ use systemd set_systemd_files(data_files, dest=systemd_dir_path) elif name == 'sles': # sles 15+ distro named as sles set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) set_conf_files(data_files, src=["config/suse/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files) # sles 15+ uses systemd and python3 set_systemd_files(data_files, dest=systemd_dir_path, src=["init/sles/waagent.service"]) elif name == 'freebsd': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/freebsd/waagent.conf"]) set_freebsd_rc_files(data_files) elif name == 'openbsd': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/openbsd/waagent.conf"]) set_openbsd_rc_files(data_files) elif name == 'debian': set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) set_conf_files(data_files, src=["config/debian/waagent.conf"]) set_logrotate_files(data_files) set_udev_files(data_files, dest="/lib/udev/rules.d") if debian_has_systemd(): set_systemd_files(data_files, dest=systemd_dir_path) elif name == 'devuan': set_bin_files(data_files, dest=agent_bin_path, src=["bin/py3/waagent", "bin/waagent2.0"]) set_files(data_files, dest="/etc/init.d", src=['init/devuan/walinuxagent']) set_files(data_files, dest="/etc/default", src=['init/devuan/default/walinuxagent']) set_conf_files(data_files, src=['config/devuan/waagent.conf']) set_logrotate_files(data_files) set_udev_files(data_files, dest="/lib/udev/rules.d") elif name == 'iosxe': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/iosxe/waagent.conf"]) 
set_logrotate_files(data_files) set_udev_files(data_files) set_systemd_files(data_files, dest=systemd_dir_path) if version.startswith("7.1"): # TODO this is a mitigation to systemctl bug on 7.1 set_sysv_files(data_files) elif name == 'openwrt': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files) set_logrotate_files(data_files) set_sysv_files(data_files, dest='/etc/init.d', src=["init/openwrt/waagent"]) elif name == 'photonos': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files, src=["config/photonos/waagent.conf"]) set_systemd_files(data_files, dest=systemd_dir_path, src=["init/photonos/waagent.service"]) elif name == 'fedora': set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files) set_logrotate_files(data_files) set_udev_files(data_files) set_systemd_files(data_files, dest=systemd_dir_path) else: # Use default setting set_bin_files(data_files, dest=agent_bin_path) set_conf_files(data_files) set_logrotate_files(data_files) set_udev_files(data_files) set_sysv_files(data_files) return data_files def debian_has_systemd(): try: return subprocess.check_output( ['cat', '/proc/1/comm']).strip().decode() == 'systemd' except subprocess.CalledProcessError: return False class install(_install): # pylint: disable=C0103 user_options = _install.user_options + [ ('lnx-distro=', None, 'target Linux distribution'), ('lnx-distro-version=', None, 'target Linux distribution version'), ('lnx-distro-fullname=', None, 'target Linux distribution full name'), ('register-service', None, 'register as startup service and start'), ('skip-data-files', None, 'skip data files installation'), ] def initialize_options(self): _install.initialize_options(self) # pylint: disable=attribute-defined-outside-init self.lnx_distro = DISTRO_NAME self.lnx_distro_version = DISTRO_VERSION self.lnx_distro_fullname = DISTRO_FULL_NAME self.register_service = False # All our data files are system-wide files that are not included in the egg; skip them when # creating an egg. self.skip_data_files = "bdist_egg" in sys.argv # pylint: enable=attribute-defined-outside-init def finalize_options(self): _install.finalize_options(self) if self.skip_data_files: return data_files = get_data_files(self.lnx_distro, self.lnx_distro_version, self.lnx_distro_fullname) self.distribution.data_files = data_files self.distribution.reinitialize_command('install_data', True) def run(self): _install.run(self) if self.register_service: osutil = get_osutil() osutil.register_agent_service() osutil.stop_agent_service() osutil.start_agent_service() # Note to packagers and users from source. # In version 3.5 of Python distribution information handling in the platform # module was deprecated. 
Depending on the Linux distribution the # implementation may be broken prior to Python 3.7 wher the functionality # will be removed from Python 3 requires = [] # pylint: disable=invalid-name if sys.version_info[0] >= 3 and sys.version_info[1] >= 7: requires = ['distro'] # pylint: disable=invalid-name modules = [] # pylint: disable=invalid-name if "bdist_egg" in sys.argv: modules.append("__main__") setuptools.setup( name=AGENT_NAME, version=AGENT_VERSION, long_description=AGENT_DESCRIPTION, author='Microsoft Corporation', author_email='walinuxagent@microsoft.com', platforms='Linux', url='https://github.com/Azure/WALinuxAgent', license='Apache License Version 2.0', packages=find_packages(exclude=["tests*", "dcr*"]), py_modules=modules, install_requires=requires, cmdclass={ 'install': install } ) Azure-WALinuxAgent-2b21de5/test-requirements.txt000066400000000000000000000012701462617747000217710ustar00rootroot00000000000000coverage mock==2.0.0; python_version == '2.6' mock==3.0.5; python_version >= '2.7' and python_version <= '3.5' mock==4.0.2; python_version >= '3.6' distro; python_version >= '3.8' nose nose-timer; python_version >= '2.7' # Pinning the wrapt requirement to 1.12.0 due to the bug - https://github.com/GrahamDumpleton/wrapt/issues/188 wrapt==1.12.0; python_version > '2.6' and python_version < '3.6' pylint; python_version > '2.6' and python_version < '3.6' pylint==2.8.3; python_version >= '3.6' # Requirements to run pylint on the end-to-end tests source code assertpy azure-core azure-identity azure-mgmt-compute>=22.1.0 azure-mgmt-network>=19.3.0 azure-mgmt-resource>=15.0.0 msrestazure pytz Azure-WALinuxAgent-2b21de5/tests/000077500000000000000000000000001462617747000166725ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/__init__.py000066400000000000000000000011651462617747000210060ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/common/000077500000000000000000000000001462617747000201625ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/common/__init__.py000066400000000000000000000011651462617747000222760ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/common/dhcp/000077500000000000000000000000001462617747000211005ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/common/dhcp/__init__.py000066400000000000000000000011651462617747000232140ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/common/dhcp/test_dhcp.py000066400000000000000000000120501462617747000234250ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import mock import azurelinuxagent.common.dhcp as dhcp import azurelinuxagent.common.osutil.default as osutil from tests.lib.tools import AgentTestCase, open_patch, patch class TestDHCP(AgentTestCase): DEFAULT_ROUTING_TABLE = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ eth0 00345B0A 00000000 0001 0 0 5 00000000 0 0 0 \n\ lo 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_wireserver_route_exists(self): # setup dhcp_handler = dhcp.get_dhcp_handler() self.assertTrue(dhcp_handler.endpoint is None) self.assertTrue(dhcp_handler.routes is None) self.assertTrue(dhcp_handler.gateway is None) # execute routing_table_with_wireserver_route = TestDHCP.DEFAULT_ROUTING_TABLE + \ "eth0 00000000 10813FA8 0003 0 0 5 00000000 0 0 0 \n" with patch("os.path.exists", return_value=True): open_file_mock = mock.mock_open(read_data=routing_table_with_wireserver_route) with patch(open_patch(), open_file_mock): self.assertTrue(dhcp_handler.wireserver_route_exists) # test self.assertTrue(dhcp_handler.endpoint is not None) self.assertTrue(dhcp_handler.routes is None) self.assertTrue(dhcp_handler.gateway is None) def test_wireserver_route_not_exists(self): # setup dhcp_handler = dhcp.get_dhcp_handler() self.assertTrue(dhcp_handler.endpoint is None) self.assertTrue(dhcp_handler.routes is None) self.assertTrue(dhcp_handler.gateway is None) # execute with patch("os.path.exists", return_value=True): open_file_mock = mock.mock_open(read_data=TestDHCP.DEFAULT_ROUTING_TABLE) with patch(open_patch(), open_file_mock): self.assertFalse(dhcp_handler.wireserver_route_exists) # test self.assertTrue(dhcp_handler.endpoint is None) self.assertTrue(dhcp_handler.routes is None) self.assertTrue(dhcp_handler.gateway is None) def 
test_dhcp_cache_exists(self): dhcp_handler = dhcp.get_dhcp_handler() dhcp_handler.osutil = osutil.DefaultOSUtil() with patch.object(osutil.DefaultOSUtil, 'get_dhcp_lease_endpoint', return_value=None): self.assertFalse(dhcp_handler.dhcp_cache_exists) self.assertEqual(dhcp_handler.endpoint, None) with patch.object(osutil.DefaultOSUtil, 'get_dhcp_lease_endpoint', return_value="foo"): self.assertTrue(dhcp_handler.dhcp_cache_exists) self.assertEqual(dhcp_handler.endpoint, "foo") def test_dhcp_skip_cache(self): handler = dhcp.get_dhcp_handler() handler.osutil = osutil.DefaultOSUtil() open_file_mock = mock.mock_open(read_data=TestDHCP.DEFAULT_ROUTING_TABLE) with patch('os.path.exists', return_value=False): with patch.object(osutil.DefaultOSUtil, 'get_dhcp_lease_endpoint')\ as patch_dhcp_cache: with patch.object(dhcp.DhcpHandler, 'send_dhcp_req') \ as patch_dhcp_send: endpoint = 'foo' patch_dhcp_cache.return_value = endpoint # endpoint comes from cache self.assertFalse(handler.skip_cache) with patch("os.path.exists", return_value=True): with patch(open_patch(), open_file_mock): handler.run() self.assertTrue(patch_dhcp_cache.call_count == 1) self.assertTrue(patch_dhcp_send.call_count == 0) self.assertTrue(handler.endpoint == endpoint) # reset handler.skip_cache = True handler.endpoint = None # endpoint comes from dhcp request self.assertTrue(handler.skip_cache) with patch("os.path.exists", return_value=True): with patch(open_patch(), open_file_mock): handler.run() self.assertTrue(patch_dhcp_cache.call_count == 1) self.assertTrue(patch_dhcp_send.call_count == 1) Azure-WALinuxAgent-2b21de5/tests/common/osutil/000077500000000000000000000000001462617747000215015ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/common/osutil/__init__.py000066400000000000000000000011651462617747000236150ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_alpine.py000066400000000000000000000022261462617747000243640ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.alpine import AlpineOSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestAlpineOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, AlpineOSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_arch.py000066400000000000000000000022101462617747000240220ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.arch import ArchUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestArchUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, ArchUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_bigip.py000066400000000000000000000270711462617747000242130ustar00rootroot00000000000000# Copyright 2016 F5 Networks Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import socket import time import unittest import azurelinuxagent.common.logger as logger import azurelinuxagent.common.osutil.bigip as osutil import azurelinuxagent.common.osutil.default as default import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.osutil.bigip import BigIpOSUtil from tests.lib.tools import AgentTestCase, patch from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestBigIpOSUtil_wait_until_mcpd_is_initialized(AgentTestCase): @patch.object(shellutil, "run", return_value=0) @patch.object(logger, "info", return_value=None) def test_success(self, *args): result = osutil.BigIpOSUtil._wait_until_mcpd_is_initialized( osutil.BigIpOSUtil() ) self.assertEqual(result, True) # There are two logger calls in the mcpd wait function. 
The second # occurs after mcpd is found to be "up" self.assertEqual(args[0].call_count, 2) @patch.object(shellutil, "run", return_value=1) @patch.object(logger, "info", return_value=None) @patch.object(time, "sleep", return_value=None) def test_failure(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil._wait_until_mcpd_is_initialized, osutil.BigIpOSUtil() ) class TestBigIpOSUtil_save_sys_config(AgentTestCase): @patch.object(shellutil, "run", return_value=0) @patch.object(logger, "error", return_value=None) def test_success(self, *args): result = osutil.BigIpOSUtil._save_sys_config(osutil.BigIpOSUtil()) self.assertEqual(result, 0) self.assertEqual(args[0].call_count, 0) @patch.object(shellutil, "run", return_value=1) @patch.object(logger, "error", return_value=None) def test_failure(self, *args): result = osutil.BigIpOSUtil._save_sys_config(osutil.BigIpOSUtil()) self.assertEqual(result, 1) self.assertEqual(args[0].call_count, 1) class TestBigIpOSUtil_useradd(AgentTestCase): @patch.object(osutil.BigIpOSUtil, 'get_userentry', return_value=None) @patch.object(shellutil, "run_command") def test_success(self, *args): args[0].return_value = (0, None) result = osutil.BigIpOSUtil.useradd( osutil.BigIpOSUtil(), 'foo', expiration=None ) self.assertEqual(result, 0) @patch.object(osutil.BigIpOSUtil, 'get_userentry', return_value=None) def test_user_already_exists(self, *args): args[0].return_value = 'admin' result = osutil.BigIpOSUtil.useradd( osutil.BigIpOSUtil(), 'admin', expiration=None ) self.assertEqual(result, None) @patch.object(shellutil, "run", return_value=1) def test_failure(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.useradd, osutil.BigIpOSUtil(), 'foo', expiration=None ) class TestBigIpOSUtil_chpasswd(AgentTestCase): @patch.object(shellutil, "run_command") @patch.object(osutil.BigIpOSUtil, 'get_userentry', return_value=True) @patch.object(osutil.BigIpOSUtil, 'is_sys_user', return_value=False) @patch.object(osutil.BigIpOSUtil, '_save_sys_config', return_value=None) def test_success(self, *args): result = osutil.BigIpOSUtil.chpasswd( osutil.BigIpOSUtil(), 'admin', 'password', crypt_id=6, salt_len=10 ) self.assertEqual(result, 0) self.assertEqual(args[0].call_count, 1) self.assertEqual(args[0].call_count, 1) @patch.object(osutil.BigIpOSUtil, 'is_sys_user', return_value=True) def test_is_sys_user(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.chpasswd, osutil.BigIpOSUtil(), 'admin', 'password', crypt_id=6, salt_len=10 ) @patch.object(shellutil, "run_get_output", return_value=(1, None)) @patch.object(osutil.BigIpOSUtil, 'is_sys_user', return_value=False) def test_failed_to_set_user_password(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.chpasswd, osutil.BigIpOSUtil(), 'admin', 'password', crypt_id=6, salt_len=10 ) @patch.object(shellutil, "run_get_output", return_value=(0, None)) @patch.object(osutil.BigIpOSUtil, 'is_sys_user', return_value=False) @patch.object(osutil.BigIpOSUtil, 'get_userentry', return_value=None) def test_failed_to_get_user_entry(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.chpasswd, osutil.BigIpOSUtil(), 'admin', 'password', crypt_id=6, salt_len=10 ) class TestBigIpOSUtil_get_dvd_device(AgentTestCase): @patch.object(os, "listdir", return_value=['tty1','cdrom0']) def test_success(self, *args): # pylint: 
disable=unused-argument result = osutil.BigIpOSUtil.get_dvd_device( osutil.BigIpOSUtil(), '/dev' ) self.assertEqual(result, '/dev/cdrom0') @patch.object(os, "listdir", return_value=['foo', 'bar']) def test_failure(self, *args): # pylint: disable=unused-argument self.assertRaises( OSUtilError, osutil.BigIpOSUtil.get_dvd_device, osutil.BigIpOSUtil(), '/dev' ) class TestBigIpOSUtil_restart_ssh_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.restart_ssh_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_stop_agent_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.stop_agent_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_start_agent_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.start_agent_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_register_agent_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.register_agent_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_unregister_agent_service(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): # pylint: disable=unused-argument result = osutil.BigIpOSUtil.unregister_agent_service( osutil.BigIpOSUtil() ) self.assertEqual(result, 0) class TestBigIpOSUtil_set_hostname(AgentTestCase): @patch.object(os.path, "exists", return_value=False) def test_success(self, *args): result = osutil.BigIpOSUtil.set_hostname( # pylint: disable=assignment-from-none osutil.BigIpOSUtil(), None ) self.assertEqual(args[0].call_count, 0) self.assertEqual(result, None) class TestBigIpOSUtil_set_dhcp_hostname(AgentTestCase): @patch.object(os.path, "exists", return_value=False) def test_success(self, *args): result = osutil.BigIpOSUtil.set_dhcp_hostname( # pylint: disable=assignment-from-none osutil.BigIpOSUtil(), None ) self.assertEqual(args[0].call_count, 0) self.assertEqual(result, None) class TestBigIpOSUtil_get_first_if(AgentTestCase): @patch.object(osutil.BigIpOSUtil, '_format_single_interface_name', return_value=b'eth0') def test_success(self, *args): # pylint: disable=unused-argument ifname, ipaddr = osutil.BigIpOSUtil().get_first_if() self.assertTrue(ifname.startswith('eth')) self.assertTrue(ipaddr is not None) try: socket.inet_aton(ipaddr) except socket.error: self.fail("not a valid ip address") @patch.object(osutil.BigIpOSUtil, '_format_single_interface_name', return_value=b'loenp0s3') def test_success_non_eth_interface(self, *args): # pylint: disable=unused-argument ifname, ipaddr = osutil.BigIpOSUtil().get_first_if() self.assertFalse(ifname.startswith('eth')) self.assertTrue(ipaddr is not None) try: socket.inet_aton(ipaddr) except socket.error: self.fail("not a valid ip address") class TestBigIpOSUtil_mount_dvd(AgentTestCase): @patch.object(shellutil, "run", return_value=0) @patch.object(time, "sleep", return_value=None) @patch.object(osutil.BigIpOSUtil, '_wait_until_mcpd_is_initialized', return_value=None) @patch.object(default.DefaultOSUtil, 'mount_dvd', return_value=None) def test_success(self, *args): osutil.BigIpOSUtil.mount_dvd(
osutil.BigIpOSUtil(), max_retry=6, chk_err=True ) self.assertEqual(args[0].call_count, 1) self.assertEqual(args[1].call_count, 1) class TestBigIpOSUtil_route_add(AgentTestCase): @patch.object(shellutil, "run", return_value=0) def test_success(self, *args): osutil.BigIpOSUtil.route_add( osutil.BigIpOSUtil(), '10.10.10.0', '255.255.255.0', '10.10.10.1' ) self.assertEqual(args[0].call_count, 1) class TestBigIpOSUtil_device_for_ide_port(AgentTestCase): @patch.object(time, "sleep", return_value=None) @patch.object(os.path, "exists", return_value=False) @patch.object(default.DefaultOSUtil, 'device_for_ide_port', return_value=None) def test_success_waiting(self, *args): osutil.BigIpOSUtil.device_for_ide_port( osutil.BigIpOSUtil(), '5' ) self.assertEqual(args[0].call_count, 1) self.assertEqual(args[1].call_count, 99) self.assertEqual(args[2].call_count, 99) @patch.object(time, "sleep", return_value=None) @patch.object(os.path, "exists", return_value=True) @patch.object(default.DefaultOSUtil, 'device_for_ide_port', return_value=None) def test_success_immediate(self, *args): osutil.BigIpOSUtil.device_for_ide_port( osutil.BigIpOSUtil(), '5' ) self.assertEqual(args[0].call_count, 1) self.assertEqual(args[1].call_count, 1) self.assertEqual(args[2].call_count, 0) class TestBigIpOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, BigIpOSUtil()) if __name__ == '__main__': unittest.main()Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_clearlinux.py000066400000000000000000000022401462617747000252560ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.clearlinux import ClearLinuxUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestClearLinuxUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, ClearLinuxUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_coreos.py000066400000000000000000000022221462617747000244020ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.coreos import CoreOSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestCoreOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, CoreOSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_default.py000066400000000000000000001704551462617747000245500ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import glob import os import socket import subprocess import tempfile import unittest import mock import azurelinuxagent.common.conf as conf import azurelinuxagent.common.osutil.default as osutil import azurelinuxagent.common.utils.shellutil as shellutil import azurelinuxagent.common.utils.textutil as textutil from azurelinuxagent.common.exception import OSUtilError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.utils.networkutil import AddFirewallRules from tests.lib.mock_environment import MockEnvironment from tests.lib.tools import AgentTestCase, patch, open_patch, load_data, data_dir, is_python_version_26_or_34, skip_if_predicate_true actual_get_proc_net_route = 'azurelinuxagent.common.osutil.default.DefaultOSUtil._get_proc_net_route' def fake_is_loopback(_, iface): return iface.startswith('lo') class TestOSUtil(AgentTestCase): def test_restart(self): # setup retries = 3 ifname = 'dummy' with patch.object(shellutil, "run") as run_patch: run_patch.return_value = 1 # execute osutil.DefaultOSUtil.restart_if(osutil.DefaultOSUtil(), ifname=ifname, retries=retries, wait=0) # assert self.assertEqual(run_patch.call_count, retries) self.assertEqual(run_patch.call_args_list[0][0][0], 'ifdown {0} && ifup {0}'.format(ifname)) def test_get_dvd_device_success(self): with patch.object(os, 'listdir', return_value=['cpu', 'cdrom0']): osutil.DefaultOSUtil().get_dvd_device() def test_get_dvd_device_failure(self): with patch.object(os, 'listdir', return_value=['cpu', 'notmatching']): try: osutil.DefaultOSUtil().get_dvd_device() self.fail('OSUtilError was not raised') except OSUtilError as ose: self.assertTrue('notmatching' in ustr(ose)) @patch('time.sleep') def test_mount_dvd_success(self, _): msg = 'message' with patch.object(osutil.DefaultOSUtil, 'get_dvd_device', return_value='/dev/cdrom'): with patch.object(shellutil, 'run_command', return_value=msg): with patch.object(os, 'makedirs'): try:
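# No exception is expected here: with run_command mocked to succeed, mount_dvd should complete without raising OSUtilError (the assertion below is the absence of that error).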
osutil.DefaultOSUtil().mount_dvd() except OSUtilError: self.fail("mounting failed") @patch('time.sleep') def test_mount_dvd_failure(self, _): msg = 'message' exception = shellutil.CommandError("mount dvd", 1, "", msg) with patch.object(osutil.DefaultOSUtil, 'get_dvd_device', return_value='/dev/cdrom'): with patch.object(shellutil, 'run_command', side_effect=exception) as patch_run: with patch.object(os, 'makedirs'): try: osutil.DefaultOSUtil().mount_dvd() self.fail('OSUtilError was not raised') except OSUtilError as ose: self.assertTrue(msg in ustr(ose)) self.assertEqual(patch_run.call_count, 5) def test_empty_proc_net_route(self): routing_table = "" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertEqual(len(osutil.DefaultOSUtil().read_route_table()), 0) def test_no_routes(self): routing_table = 'Iface\tDestination\tGateway \tFlags\tRefCnt\tUse\tMetric\tMask\t\tMTU\tWindow\tIRTT \n' mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): raw_route_list = osutil.DefaultOSUtil().read_route_table() self.assertEqual(len(osutil.DefaultOSUtil().get_list_of_routes(raw_route_list)), 0) def test_bogus_proc_net_route(self): routing_table = 'Iface\tDestination\tGateway \tFlags\t\tUse\tMetric\t\neth0\t00000000\t00000000\t0001\t\t0\t0\n' mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): raw_route_list = osutil.DefaultOSUtil().read_route_table() self.assertEqual(len(osutil.DefaultOSUtil().get_list_of_routes(raw_route_list)), 0) def test_valid_routes(self): routing_table = \ 'Iface\tDestination\tGateway \tFlags\tRefCnt\tUse\tMetric\tMask\t\tMTU\tWindow\tIRTT \n' \ 'eth0\t00000000\tC1BB910A\t0003\t0\t0\t0\t00000000\t0\t0\t0 \n' \ 'eth0\tC0BB910A\t00000000\t0001\t0\t0\t0\tC0FFFFFF\t0\t0\t0 \n' \ 'eth0\t10813FA8\tC1BB910A\t000F\t0\t0\t0\tFFFFFFFF\t0\t0\t0 \n' \ 'eth0\tFEA9FEA9\tC1BB910A\t0007\t0\t0\t0\tFFFFFFFF\t0\t0\t0 \n' \ 'docker0\t002BA8C0\t00000000\t0001\t0\t0\t10\t00FFFFFF\t0\t0\t0 \n' known_sha1_hash = b'\x1e\xd1k\xae[\xf8\x9b\x1a\x13\xd0\xbbT\xa4\xe3Y\xa3\xdd\x0b\xbd\xa9' mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): raw_route_list = osutil.DefaultOSUtil().read_route_table() self.assertEqual(len(raw_route_list), 6) self.assertEqual(textutil.hash_strings(raw_route_list), known_sha1_hash) route_list = osutil.DefaultOSUtil().get_list_of_routes(raw_route_list) self.assertEqual(len(route_list), 5) self.assertEqual(route_list[0].gateway_quad(), '10.145.187.193') self.assertEqual(route_list[1].gateway_quad(), '0.0.0.0') self.assertEqual(route_list[1].mask_quad(), '255.255.255.192') self.assertEqual(route_list[2].destination_quad(), '168.63.129.16') self.assertEqual(route_list[1].flags, 1) self.assertEqual(route_list[2].flags, 15) self.assertEqual(route_list[3].flags, 7) self.assertEqual(route_list[3].metric, 0) self.assertEqual(route_list[4].metric, 10) self.assertEqual(route_list[0].interface, 'eth0') self.assertEqual(route_list[4].interface, 'docker0') @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.get_primary_interface', return_value='eth0') @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil._get_all_interfaces', return_value={'eth0':'10.0.0.1'}) @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.is_loopback', fake_is_loopback) def test_get_first_if(self, get_all_interfaces_mock, get_primary_interface_mock): # pylint: disable=unused-argument """ Validate that the agent can find the first active non-loopback interface. 
This test case used to run live, but not all developers have an eth* interface. It is perfectly valid to have a br*, but this test does not account for that. """ ifname, ipaddr = osutil.DefaultOSUtil().get_first_if() self.assertEqual(ifname, 'eth0') self.assertEqual(ipaddr, '10.0.0.1') @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.get_primary_interface', return_value='bogus0') @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil._get_all_interfaces', return_value={'eth0':'10.0.0.1', 'lo': '127.0.0.1'}) @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.is_loopback', fake_is_loopback) def test_get_first_if_nosuchprimary(self, get_all_interfaces_mock, get_primary_interface_mock): # pylint: disable=unused-argument ifname, ipaddr = osutil.DefaultOSUtil().get_first_if() self.assertTrue(ifname.startswith('eth')) self.assertTrue(ipaddr is not None) try: socket.inet_aton(ipaddr) except socket.error: self.fail("not a valid ip address") def test_get_first_if_all_loopback(self): fake_ifaces = {'lo':'127.0.0.1'} with patch.object(osutil.DefaultOSUtil, 'get_primary_interface', return_value='bogus0'): with patch.object(osutil.DefaultOSUtil, '_get_all_interfaces', return_value=fake_ifaces): self.assertEqual(('', ''), osutil.DefaultOSUtil().get_first_if()) def test_get_all_interfaces(self): loopback_count = 0 non_loopback_count = 0 for iface in osutil.DefaultOSUtil()._get_all_interfaces(): if iface == 'lo': loopback_count += 1 else: non_loopback_count += 1 self.assertEqual(loopback_count, 1, 'Exactly 1 loopback network interface should exist') self.assertGreater(non_loopback_count, 0, 'At least 1 non-loopback network interface should exist') def test_isloopback(self): for iface in osutil.DefaultOSUtil()._get_all_interfaces(): if iface == 'lo': self.assertTrue(osutil.DefaultOSUtil().is_loopback(iface)) else: self.assertFalse(osutil.DefaultOSUtil().is_loopback(iface)) def test_isprimary(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ eth0 00000000 01345B0A 0003 0 0 5 00000000 0 0 0 \n\ eth0 00345B0A 00000000 0001 0 0 5 00000000 0 0 0 \n\ lo 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('lo')) self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('eth0')) def test_sriov(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n" \ "bond0 00000000 0100000A 0003 0 0 0 00000000 0 0 0 \n" \ "bond0 0000000A 00000000 0001 0 0 0 00000000 0 0 0 \n" \ "eth0 0000000A 00000000 0001 0 0 0 00000000 0 0 0 \n" \ "bond0 10813FA8 0100000A 0007 0 0 0 00000000 0 0 0 \n" \ "bond0 FEA9FEA9 0100000A 0007 0 0 0 00000000 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('eth0')) self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('bond0')) def test_multiple_default_routes(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ high 00000000 01345B0A 0003 0 0 5 00000000 0 0 0 \n\ low1 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('low1')) def test_multiple_interfaces(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ first 00000000 01345B0A
0003 0 0 1 00000000 0 0 0 \n\ secnd 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('first')) def test_interface_flags(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ nflg 00000000 01345B0A 0001 0 0 1 00000000 0 0 0 \n\ flgs 00000000 01345B0A 0003 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertTrue(osutil.DefaultOSUtil().is_primary_interface('flgs')) def test_no_interface(self): routing_table = "\ Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT \n\ ndst 00000001 01345B0A 0003 0 0 1 00000000 0 0 0 \n\ nflg 00000000 01345B0A 0001 0 0 1 00FCFFFF 0 0 0 \n" mo = mock.mock_open(read_data=routing_table) with patch(open_patch(), mo): self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('ndst')) self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('nflg')) self.assertFalse(osutil.DefaultOSUtil().is_primary_interface('invalid')) def test_no_primary_does_not_throw(self): with patch.object(osutil.DefaultOSUtil, 'get_primary_interface') \ as patch_primary: exception = False patch_primary.return_value = '' try: osutil.DefaultOSUtil().get_first_if()[0] except Exception as e: # pylint: disable=unused-variable print(textutil.format_exception(e)) exception = True self.assertFalse(exception) def test_dhcp_lease_default(self): self.assertTrue(osutil.DefaultOSUtil().get_dhcp_lease_endpoint() is None) def test_dhcp_lease_older_ubuntu(self): with patch.object(glob, "glob", return_value=['/var/lib/dhcp/dhclient.eth0.leases']): with patch(open_patch(), mock.mock_open(read_data=load_data("dhcp.leases"))): endpoint = get_osutil(distro_name='ubuntu', distro_version='12.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") endpoint = get_osutil(distro_name='ubuntu', distro_version='12.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") endpoint = get_osutil(distro_name='ubuntu', distro_version='14.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") endpoint = get_osutil(distro_name='ubuntu', distro_version='18.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is None) def test_dhcp_lease_newer_ubuntu(self): with patch.object(glob, "glob", return_value=['/run/systemd/netif/leases/2']): with patch(open_patch(), mock.mock_open(read_data=load_data("2"))): endpoint = get_osutil(distro_name='ubuntu', distro_version='18.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") endpoint = get_osutil(distro_name='ubuntu', distro_version='20.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.16") def test_dhcp_lease_custom_dns(self): """ Validate that the wireserver address is coming from option 245 (on default configurations the address is also available in the domain-name-servers option, but users may set up a custom dns server on their vnet) """ with patch.object(glob, "glob", 
return_value=['/var/lib/dhcp/dhclient.eth0.leases']): with patch(open_patch(), mock.mock_open(read_data=load_data("dhcp.leases.custom.dns"))): endpoint = get_osutil(distro_name='ubuntu', distro_version='14.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertEqual(endpoint, "168.63.129.16") def test_dhcp_lease_multi(self): with patch.object(glob, "glob", return_value=['/var/lib/dhcp/dhclient.eth0.leases']): with patch(open_patch(), mock.mock_open(read_data=load_data("dhcp.leases.multi"))): endpoint = get_osutil(distro_name='ubuntu', distro_version='12.04').get_dhcp_lease_endpoint() # pylint: disable=assignment-from-none self.assertTrue(endpoint is not None) self.assertEqual(endpoint, "168.63.129.2") def test_get_total_mem(self): """ Validate the returned value matches to the one retrieved by invoking shell command """ cmd = "grep MemTotal /proc/meminfo |awk '{print $2}'" ret = shellutil.run_get_output(cmd) if ret[0] == 0: self.assertEqual(int(ret[1]) / 1024, get_osutil().get_total_mem()) else: self.fail("Cannot retrieve total memory using shell command.") def test_get_processor_cores(self): """ Validate the returned value matches to the one retrieved by invoking shell command """ cmd = "grep 'processor.*:' /proc/cpuinfo |wc -l" ret = shellutil.run_get_output(cmd) if ret[0] == 0: self.assertEqual(int(ret[1]), get_osutil().get_processor_cores()) else: self.fail("Cannot retrieve number of process cores using shell command.") def test_conf_sshd(self): new_file = "\ Port 22\n\ Protocol 2\n\ ChallengeResponseAuthentication yes\n\ #PasswordAuthentication yes\n\ UsePAM yes\n\ " expected_output = "\ Port 22\n\ Protocol 2\n\ ChallengeResponseAuthentication no\n\ #PasswordAuthentication yes\n\ UsePAM yes\n\ PasswordAuthentication no\n\ ClientAliveInterval 180\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match(self): new_file = "\ Port 22\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ " expected_output = "\ Port 22\n\ ChallengeResponseAuthentication no\n\ PasswordAuthentication no\n\ ClientAliveInterval 180\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match_last(self): new_file = "\ Port 22\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ " expected_output = "\ Port 22\n\ PasswordAuthentication no\n\ ChallengeResponseAuthentication no\n\ ClientAliveInterval 180\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match_middle(self): new_file = "\ Port 22\n\ match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ match all\n\ #Other config\n\ " expected_output = "\ Port 22\n\ match host 192.168.1.1\n\ ChallengeResponseAuthentication 
yes\n\ match all\n\ #Other config\n\ PasswordAuthentication no\n\ ChallengeResponseAuthentication no\n\ ClientAliveInterval 180\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match_multiple(self): new_file = "\ Port 22\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.2\n\ ChallengeResponseAuthentication yes\n\ Match all\n\ #Other config\n\ " expected_output = "\ Port 22\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.2\n\ ChallengeResponseAuthentication yes\n\ Match all\n\ #Other config\n\ PasswordAuthentication no\n\ ChallengeResponseAuthentication no\n\ ClientAliveInterval 180\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_conf_sshd_with_match_multiple_first_last(self): new_file = "\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.2\n\ ChallengeResponseAuthentication yes\n\ " expected_output = "\ PasswordAuthentication no\n\ ChallengeResponseAuthentication no\n\ ClientAliveInterval 180\n\ Match host 192.168.1.1\n\ ChallengeResponseAuthentication yes\n\ Match host 192.168.1.2\n\ ChallengeResponseAuthentication yes\n\ " with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): osutil.DefaultOSUtil().conf_sshd(disable_password=True) patch_write.assert_called_once_with( conf.get_sshd_conf_file_path(), expected_output) def test_correct_instance_id(self): util = osutil.DefaultOSUtil() self.assertEqual( "12345678-1234-1234-1234-123456789012", util._correct_instance_id("78563412-3412-3412-1234-123456789012")) self.assertEqual( "D0DF4C54-4ECB-4A4B-9954-5BDF3ED5C3B8", util._correct_instance_id("544CDFD0-CB4E-4B4A-9954-5BDF3ED5C3B8")) self.assertEqual( "d0df4c54-4ecb-4a4b-9954-5bdf3ed5c3b8", util._correct_instance_id("544cdfd0-cb4e-4b4a-9954-5bdf3ed5c3b8")) @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="33C2F3B9-1399-429F-8EB3-BA656DF32502") def test_get_instance_id_from_file(self, mock_read, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual( util.get_instance_id(), "B9F3C233-9913-9F42-8EB3-BA656DF32502") @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="") def test_get_instance_id_empty_from_file(self, mock_read, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual( "", util.get_instance_id()) @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="Value") def test_get_instance_id_malformed_from_file(self, mock_read, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual( "Value", util.get_instance_id()) @patch('os.path.isfile', return_value=False) @patch('azurelinuxagent.common.utils.shellutil.run_get_output', return_value=[0, '33C2F3B9-1399-429F-8EB3-BA656DF32502']) def test_get_instance_id_from_dmidecode(self, mock_shell, 
mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual( util.get_instance_id(), "B9F3C233-9913-9F42-8EB3-BA656DF32502") @patch('os.path.isfile', return_value=False) @patch('azurelinuxagent.common.utils.shellutil.run_get_output', return_value=[1, 'Error Value']) def test_get_instance_id_missing(self, mock_shell, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual("", util.get_instance_id()) @patch('os.path.isfile', return_value=False) @patch('azurelinuxagent.common.utils.shellutil.run_get_output', return_value=[0, 'Unexpected Value']) def test_get_instance_id_unexpected(self, mock_shell, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() self.assertEqual("", util.get_instance_id()) @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file') def test_is_current_instance_id_from_file(self, mock_read, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() mock_read.return_value = "11111111-2222-3333-4444-556677889900" self.assertFalse(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_read.return_value = "B9F3C233-9913-9F42-8EB3-BA656DF32502" self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_read.return_value = "33C2F3B9-1399-429F-8EB3-BA656DF32502" self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_read.return_value = "b9f3c233-9913-9f42-8eb3-ba656df32502" self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_read.return_value = "33c2f3b9-1399-429f-8eb3-ba656df32502" self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) @patch('os.path.isfile', return_value=False) @patch('azurelinuxagent.common.utils.shellutil.run_get_output') def test_is_current_instance_id_from_dmidecode(self, mock_shell, mock_isfile): # pylint: disable=unused-argument util = osutil.DefaultOSUtil() mock_shell.return_value = [0, 'B9F3C233-9913-9F42-8EB3-BA656DF32502'] self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) mock_shell.return_value = [0, '33C2F3B9-1399-429F-8EB3-BA656DF32502'] self.assertTrue(util.is_current_instance_id( "B9F3C233-9913-9F42-8EB3-BA656DF32502")) @patch('azurelinuxagent.common.conf.get_sudoers_dir') def test_conf_sudoer(self, mock_dir): tmp_dir = tempfile.mkdtemp() mock_dir.return_value = tmp_dir util = osutil.DefaultOSUtil() # Assert the sudoer line is added if missing util.conf_sudoer("FooBar") waagent_sudoers = os.path.join(tmp_dir, 'waagent') self.assertTrue(os.path.isfile(waagent_sudoers)) count = -1 with open(waagent_sudoers, 'r') as f: count = len(f.readlines()) self.assertEqual(1, count) # Assert the line does not get added a second time util.conf_sudoer("FooBar") count = -1 with open(waagent_sudoers, 'r') as f: count = len(f.readlines()) print("WRITING TO {0}".format(waagent_sudoers)) self.assertEqual(1, count) @staticmethod def _command_to_string(command): return " ".join(command) if isinstance(command, list) else command @staticmethod @contextlib.contextmanager def _mock_iptables(version=osutil._IPTABLES_LOCKING_VERSION, destination='168.63.129.16'): """ Mock for the iptable commands used to set up the firewall. 
Returns a patch of subprocess.Popen augmented with these properties: * wait - True if the iptable commands use the -w option * destination - The target IP address * uid - The uid used for the -owner option * command_calls - A list of the iptable commands executed by the mock (the --version and -L commands are omitted) * set_command - By default all the mocked commands succeed and produce no output; this method can be used to override the return value and output of these commands (or to add other commands) """ mocked_commands = {} def set_command(command, output='', exit_code=0): command_string = TestOSUtil._command_to_string(command) mocked_commands[command_string] = (output.replace("'", "'\"'\"'"), exit_code) return command_string wait = "-w" if FlexibleVersion(version) >= osutil._IPTABLES_LOCKING_VERSION else "" uid = 42 version_command = set_command(osutil.get_iptables_version_command(), output=str(version)) list_command = set_command(osutil.get_firewall_list_command(wait), output="Mock Output") set_command(osutil.get_firewall_packets_command(wait)) set_command(AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, destination, wait=wait)) set_command(AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.APPEND_COMMAND, destination, wait=wait)) set_command(AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.CHECK_COMMAND, destination, uid, wait=wait)) set_command(AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.APPEND_COMMAND, destination, uid, wait=wait)) set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.APPEND_COMMAND, destination, wait=wait)) set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.INSERT_COMMAND, destination, wait=wait)) set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, destination, wait=wait)) # the agent assumes the rules have been deleted when these commands return 1 set_command(osutil.get_firewall_delete_conntrack_accept_command(wait, destination), exit_code=1) set_command(osutil.get_delete_accept_tcp_rule(wait, destination), exit_code=1) set_command(osutil.get_firewall_delete_owner_accept_command(wait, destination, uid), exit_code=1) set_command(osutil.get_firewall_delete_conntrack_drop_command(wait, destination), exit_code=1) command_calls = [] def mock_popen(command, *args, **kwargs): command_string = TestOSUtil._command_to_string(command) if command_string in mocked_commands: if command_string != version_command and command_string != list_command: command_calls.append(command_string) output, exit_code = mocked_commands[command_string] command = "echo '{0}' && exit {1}".format(output, exit_code) kwargs["shell"] = True return mock_popen.original(command, *args, **kwargs) mock_popen.original = subprocess.Popen with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", side_effect=mock_popen) as popen_patcher: with patch('os.getuid', return_value=uid): popen_patcher.wait = wait popen_patcher.destination = destination popen_patcher.uid = uid popen_patcher.set_command = set_command popen_patcher.command_calls = command_calls yield popen_patcher def test_get_firewall_dropped_packets_returns_zero_if_firewall_disabled(self): with patch.object(osutil, '_enable_firewall', False): util = osutil.DefaultOSUtil() self.assertEqual(0, util.get_firewall_dropped_packets("not used")) def test_get_firewall_dropped_packets_returns_negative_if_error(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): 
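# Make the packets-query command fail with exit code 1; get_firewall_dropped_packets is expected to catch the failure and report -1 instead of raising.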
mock_iptables.set_command(osutil.get_firewall_packets_command(mock_iptables.wait), exit_code=1) self.assertEqual(-1, osutil.DefaultOSUtil().get_firewall_dropped_packets()) def test_get_firewall_dropped_packets_should_ignore_transient_errors(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): mock_iptables.set_command(osutil.get_firewall_packets_command(mock_iptables.wait), exit_code=3, output="can't initialize iptables table `security': iptables who? (do you need to insmod?)") self.assertEqual(0, osutil.DefaultOSUtil().get_firewall_dropped_packets()) def test_get_firewall_dropped_packets_should_ignore_returncode_4(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): mock_iptables.set_command(osutil.get_firewall_packets_command(mock_iptables.wait), exit_code=4, output="iptables v1.8.2 (nf_tables): RULE_REPLACE failed (Invalid argument): rule in chain OUTPUT") self.assertEqual(0, osutil.DefaultOSUtil().get_firewall_dropped_packets()) def test_get_firewall_dropped_packets(self): destination = '168.63.129.16' with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): mock_iptables.set_command(osutil.get_firewall_packets_command(mock_iptables.wait), output=''' Chain OUTPUT (policy ACCEPT 104 packets, 43628 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT tcp -- any any anywhere 168.63.129.16 owner UID match daemon 32 1920 DROP tcp -- any any anywhere 168.63.129.16 ''') self.assertEqual(32, osutil.DefaultOSUtil().get_firewall_dropped_packets(destination)) def test_enable_firewall_should_set_up_the_firewall(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): # fail the rule check to force enable of the firewall mock_iptables.set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=0) mock_iptables.set_command(AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, mock_iptables.uid, wait=mock_iptables.wait), exit_code=0) mock_iptables.set_command(AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=1) success, _ = osutil.DefaultOSUtil().enable_firewall(dst_ip=mock_iptables.destination, uid=mock_iptables.uid) tcp_check_command = TestOSUtil._command_to_string(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait)) accept_check_command = TestOSUtil._command_to_string(AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, mock_iptables.uid, wait=mock_iptables.wait)) drop_check_command = TestOSUtil._command_to_string(AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait)) delete_conntrack_accept_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_conntrack_accept_command(mock_iptables.wait, mock_iptables.destination)) delete_accept_tcp_rule = TestOSUtil._command_to_string(osutil.get_delete_accept_tcp_rule(mock_iptables.wait, mock_iptables.destination)) delete_owner_accept_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_owner_accept_command(mock_iptables.wait, mock_iptables.destination, mock_iptables.uid)) 
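# The ten expected invocations asserted below follow the pattern check (x3), delete (x4), append (x3). As an illustrative sketch only (the exact arguments are built by AddFirewallRules and the osutil helpers, not by this comment), a check and its matching append are expected to differ only in the iptables operation flag, roughly '-C OUTPUT ...' for the check versus '-A OUTPUT ...' for the append.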
delete_conntrack_drop_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_conntrack_drop_command(mock_iptables.wait, mock_iptables.destination)) accept_tcp_command = TestOSUtil._command_to_string(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.APPEND_COMMAND, mock_iptables.destination, wait=mock_iptables.wait)) accept_command = TestOSUtil._command_to_string(AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.APPEND_COMMAND, mock_iptables.destination, mock_iptables.uid, wait=mock_iptables.wait)) drop_add_command = TestOSUtil._command_to_string(AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.APPEND_COMMAND, mock_iptables.destination, wait=mock_iptables.wait)) self.assertTrue(success, "Enabling the firewall was not successful") # Exactly 10 calls have to be made. # First the three rule checks (the drop check was mocked to fail), then the four delete calls, and then the three append calls self.assertEqual(len(mock_iptables.command_calls), 10, "Incorrect number of calls to iptables: [{0}]". format(mock_iptables.command_calls)) self.assertEqual(mock_iptables.command_calls[0], tcp_check_command, "The first command should check the tcp rule") self.assertEqual(mock_iptables.command_calls[1], accept_check_command, "The second command should check the accept rule") self.assertEqual(mock_iptables.command_calls[2], drop_check_command, "The third command should check the drop rule") self.assertEqual(mock_iptables.command_calls[3], delete_conntrack_accept_command, "The fourth command should delete the conntrack accept rule: {0}".format( mock_iptables.command_calls[3])) self.assertEqual(mock_iptables.command_calls[4], delete_accept_tcp_rule, "The fifth command should delete the dns tcp accept rule: {0}".format( mock_iptables.command_calls[4])) self.assertEqual(mock_iptables.command_calls[5], delete_owner_accept_command, "The sixth command should delete the owner accept rule: {0}".format( mock_iptables.command_calls[5])) self.assertEqual(mock_iptables.command_calls[6], delete_conntrack_drop_command, "The seventh command should delete the conntrack drop rule: {0}".format( mock_iptables.command_calls[6])) self.assertEqual(mock_iptables.command_calls[7], accept_tcp_command, "The eighth command should add the dns tcp accept rule") self.assertEqual(mock_iptables.command_calls[8], accept_command, "The ninth command should add the accept rule") self.assertEqual(mock_iptables.command_calls[9], drop_add_command, "The tenth command should add the drop rule") self.assertTrue(osutil._enable_firewall, "The firewall should not have been disabled") def test_enable_firewall_should_not_use_wait_when_iptables_does_not_support_it(self): with TestOSUtil._mock_iptables(version=osutil._IPTABLES_LOCKING_VERSION - 1) as mock_iptables: with patch.object(osutil, '_enable_firewall', True): # fail the rule check to force enable of the firewall mock_iptables.set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=1) mock_iptables.set_command(AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, mock_iptables.uid, wait=mock_iptables.wait), exit_code=1) mock_iptables.set_command(AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=1) success, _ = osutil.DefaultOSUtil().enable_firewall(dst_ip=mock_iptables.destination, uid=mock_iptables.uid) self.assertTrue(success, "Enabling the firewall was not
successful") # Exactly 10 calls have to be made. # First check the 3 rules, then delete the 4 old rules, # and then append the 3 iptables rules. self.assertEqual(len(mock_iptables.command_calls), 10, "Incorrect number of calls to iptables: [{0}]".format(mock_iptables.command_calls)) for command in mock_iptables.command_calls: self.assertNotIn("-w", command, "The -w option should not have been used in {0}".format(command)) self.assertTrue(osutil._enable_firewall, "The firewall should not have been disabled") def test_enable_firewall_should_not_set_firewall_if_all_the_rules_exist(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): tcp_check_command = mock_iptables.set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=0) accept_check_command = mock_iptables.set_command(AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, mock_iptables.uid, wait=mock_iptables.wait), exit_code=0) drop_check_command = mock_iptables.set_command(AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=0) success, _ = osutil.DefaultOSUtil().enable_firewall(dst_ip=mock_iptables.destination, uid=mock_iptables.uid) self.assertTrue(success, "Enabling the firewall was not successful") self.assertEqual(len(mock_iptables.command_calls), 3, "Incorrect number of calls to iptables: [{0}]". format(mock_iptables.command_calls)) self.assertEqual(mock_iptables.command_calls[0], tcp_check_command, "Unexpected command: {0}".format(mock_iptables.command_calls[0])) self.assertEqual(mock_iptables.command_calls[1], accept_check_command, "Unexpected command: {0}".format(mock_iptables.command_calls[1])) self.assertEqual(mock_iptables.command_calls[2], drop_check_command, "Unexpected command: {0}".format(mock_iptables.command_calls[2])) self.assertTrue(osutil._enable_firewall) def test_enable_firewall_should_check_for_invalid_iptables_options(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): # iptables uses the following exit codes # 0 - correct function # 1 - other errors # 2 - errors which appear to be caused by invalid or abused command # line parameters tcp_check_command = mock_iptables.set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=0) accept_check_command = mock_iptables.set_command(AddFirewallRules.get_wire_root_accept_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, mock_iptables.uid, wait=mock_iptables.wait), exit_code=0) drop_check_command = mock_iptables.set_command(AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=2) success, _ = osutil.DefaultOSUtil().enable_firewall(dst_ip=mock_iptables.destination, uid=mock_iptables.uid) delete_conntrack_accept_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_conntrack_accept_command(mock_iptables.wait, mock_iptables.destination)) delete_accept_tcp_rule = TestOSUtil._command_to_string(osutil.get_delete_accept_tcp_rule(mock_iptables.wait, mock_iptables.destination)) delete_owner_accept_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_owner_accept_command(mock_iptables.wait, mock_iptables.destination, mock_iptables.uid))
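# When a check command exits with code 2 (invalid or unsupported command-line options), the agent is expected to delete any existing rules and give up without re-adding them: 3 checks plus 4 deletes, with no append calls, leaving _enable_firewall turned off (asserted below).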
delete_conntrack_drop_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_conntrack_drop_command(mock_iptables.wait, mock_iptables.destination)) self.assertFalse(success, "Enable firewall should have failed") self.assertEqual(len(mock_iptables.command_calls), 7, "Incorrect number of calls to iptables: [{0}]". format(mock_iptables.command_calls)) self.assertEqual(mock_iptables.command_calls[0], tcp_check_command, "The first command should check the tcp rule: {0}".format(mock_iptables.command_calls[0])) self.assertEqual(mock_iptables.command_calls[1], accept_check_command, "The second command should check the accept rule: {0}".format(mock_iptables.command_calls[1])) self.assertEqual(mock_iptables.command_calls[2], drop_check_command, "The third command should check the drop rule: {0}".format(mock_iptables.command_calls[2])) self.assertEqual(mock_iptables.command_calls[3], delete_conntrack_accept_command, "The fourth command should delete the conntrack accept rule: {0}".format(mock_iptables.command_calls[3])) self.assertEqual(mock_iptables.command_calls[4], delete_accept_tcp_rule, "The fifth command should delete the dns tcp accept rule: {0}".format( mock_iptables.command_calls[4])) self.assertEqual(mock_iptables.command_calls[5], delete_owner_accept_command, "The sixth command should delete the owner accept rule: {0}".format(mock_iptables.command_calls[5])) self.assertEqual(mock_iptables.command_calls[6], delete_conntrack_drop_command, "The seventh command should delete the conntrack drop rule: {0}".format(mock_iptables.command_calls[6])) self.assertFalse(osutil._enable_firewall) def test_enable_firewall_skips_if_disabled(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', False): success, _ = osutil.DefaultOSUtil().enable_firewall(dst_ip=mock_iptables.destination, uid=mock_iptables.uid) self.assertFalse(success, "The firewall should not have been enabled") self.assertEqual(len(mock_iptables.command_calls), 0, "iptables should not have been invoked: [{0}]".
format(mock_iptables.command_calls)) self.assertFalse(osutil._enable_firewall) def test_remove_firewall(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): delete_commands = {} def mock_popen(command, *args, **kwargs): command_string = TestOSUtil._command_to_string(command) if AddFirewallRules.DELETE_COMMAND in command_string: # The agent invokes the delete commands continuously until they return 1 to indicate the rule has been removed # The mock returns 0 (success) the first time it is invoked and 1 (rule does not exist) thereafter if command_string not in delete_commands: exit_code = 0 delete_commands[command_string] = 1 else: exit_code = 1 delete_commands[command_string] += 1 command = "echo '' && exit {0}".format(exit_code) kwargs["shell"] = True return mock_popen.original(command, *args, **kwargs) mock_popen.original = subprocess.Popen with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", side_effect=mock_popen): success = osutil.DefaultOSUtil().remove_firewall(mock_iptables.destination, mock_iptables.uid, mock_iptables.wait) delete_conntrack_accept_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_conntrack_accept_command(mock_iptables.wait, mock_iptables.destination)) delete_accept_tcp_rule = TestOSUtil._command_to_string( osutil.get_delete_accept_tcp_rule(mock_iptables.wait, mock_iptables.destination)) delete_owner_accept_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_owner_accept_command(mock_iptables.wait, mock_iptables.destination, mock_iptables.uid)) delete_conntrack_drop_command = TestOSUtil._command_to_string(osutil.get_firewall_delete_conntrack_drop_command(mock_iptables.wait, mock_iptables.destination)) self.assertTrue(success, "Removing the firewall should have succeeded") self.assertEqual(len(delete_commands), 4, "Expected 4 delete commands: [{0}]".format(delete_commands)) # delete rules < 2.2.26 self.assertIn(delete_accept_tcp_rule, delete_commands, "The delete dns tcp accept command was not executed") self.assertEqual(delete_commands[delete_accept_tcp_rule], 2, "The delete dns tcp accept command should have been executed twice") self.assertIn(delete_conntrack_accept_command, delete_commands, "The delete conntrack accept command was not executed") self.assertEqual(delete_commands[delete_conntrack_accept_command], 2, "The delete conntrack accept command should have been executed twice") self.assertIn(delete_owner_accept_command, delete_commands, "The delete owner accept command was not executed") self.assertEqual(delete_commands[delete_owner_accept_command], 2, "The delete owner accept command should have been executed twice") # delete rules >= 2.2.26 self.assertIn(delete_conntrack_drop_command, delete_commands, "The delete conntrack drop command was not executed") self.assertEqual(delete_commands[delete_conntrack_drop_command], 2, "The delete conntrack drop command should have been executed twice") self.assertTrue(osutil._enable_firewall) def test_remove_firewall_should_not_retry_invalid_rule(self): with TestOSUtil._mock_iptables() as mock_iptables: with patch.object(osutil, '_enable_firewall', True): command = osutil.get_firewall_delete_conntrack_accept_command(mock_iptables.wait, mock_iptables.destination) # Note that the command is actually a valid rule, but we use the mock to report it as invalid (exit code 2) delete_conntrack_accept_command = mock_iptables.set_command(command, exit_code=2) success = osutil.DefaultOSUtil().remove_firewall(mock_iptables.destination,
mock_iptables.uid, mock_iptables.wait) self.assertFalse(success, "Removing the firewall should not have succeeded") self.assertEqual(len(mock_iptables.command_calls), 1, "Expected a single call to iptables: [{0}]". format(mock_iptables.command_calls)) self.assertEqual(mock_iptables.command_calls[0], delete_conntrack_accept_command, "Expected call to delete conntrack accept command: {0}".format(mock_iptables.command_calls[0])) self.assertFalse(osutil._enable_firewall) @skip_if_predicate_true(is_python_version_26_or_34, "Disabled on Python 2.6 and 3.4 for now. Need to revisit to fix it") def test_get_nic_state(self): state = osutil.DefaultOSUtil().get_nic_state() self.assertNotEqual(state, {}) self.assertGreater(len(state.keys()), 1) another_state = osutil.DefaultOSUtil().get_nic_state() name = list(another_state.keys())[0] another_state[name].add_ipv4("xyzzy") self.assertNotEqual(state, another_state) as_string = osutil.DefaultOSUtil().get_nic_state(as_string=True) self.assertNotEqual(as_string, '') def test_get_used_and_available_system_memory(self): memory_table = "\ total used free shared buff/cache available \n\ Mem: 8340144128 619352064 5236809728 1499136 2483982336 7426314240 \n\ Swap: 0 0 0 \n" with patch.object(shellutil, 'run_command', return_value=memory_table): used_mem, available_mem = osutil.DefaultOSUtil().get_used_and_available_system_memory() self.assertEqual(used_mem, 619352064/(1024**2), "The value didn't match") self.assertEqual(available_mem, 7426314240/(1024**2), "The value didn't match") def test_get_used_and_available_system_memory_error(self): msg = 'message' exception = shellutil.CommandError("free -d", 1, "", msg) with patch.object(shellutil, 'run_command', side_effect=exception) as patch_run: with self.assertRaises(shellutil.CommandError) as context_manager: osutil.DefaultOSUtil().get_used_and_available_system_memory() self.assertEqual(patch_run.call_count, 1) self.assertEqual(context_manager.exception.returncode, 1) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, osutil.DefaultOSUtil()) def test_get_dhcp_pid_should_return_an_empty_list_when_the_dhcp_client_is_not_running(self): original_run_command = shellutil.run_command def mock_run_command(cmd): # pylint: disable=unused-argument return original_run_command(["pidof", "non-existing-process"]) with patch("azurelinuxagent.common.utils.shellutil.run_command", side_effect=mock_run_command): pid_list = osutil.DefaultOSUtil().get_dhcp_pid() self.assertTrue(len(pid_list) == 0, "the return value is not an empty list: {0}".format(pid_list)) @patch('os.walk', return_value=[('host3/target3:0:1/3:0:1:0/block', ['sdb'], [])]) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value='{00000000-0001-8899-0000-000000000000}') @patch('os.listdir', return_value=['00000000-0001-8899-0000-000000000000']) @patch('os.path.exists', return_value=True) def test_device_for_ide_port_gen1_success( self, os_path_exists, # pylint: disable=unused-argument os_listdir, # pylint: disable=unused-argument fileutil_read_file, # pylint: disable=unused-argument os_walk): # pylint: disable=unused-argument dev = osutil.DefaultOSUtil().device_for_ide_port(1) self.assertEqual(dev, 'sdb', 'The returned device should be the resource disk') @patch('os.walk', return_value=[('host0/target0:0:0/0:0:0:1/block', ['sdb'], [])]) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value='{f8b3781a-1e82-4818-a1c3-63d806ec15bb}') @patch('os.listdir', 
return_value=['f8b3781a-1e82-4818-a1c3-63d806ec15bb']) @patch('os.path.exists', return_value=True) def test_device_for_ide_port_gen2_success( self, os_path_exists, # pylint: disable=unused-argument os_listdir, # pylint: disable=unused-argument fileutil_read_file, # pylint: disable=unused-argument os_walk): # pylint: disable=unused-argument dev = osutil.DefaultOSUtil().device_for_ide_port(1) self.assertEqual(dev, 'sdb', 'The returned device should be the resource disk') @patch('os.listdir', return_value=['00000000-0000-0000-0000-000000000000']) @patch('os.path.exists', return_value=True) def test_device_for_ide_port_none( self, os_path_exists, # pylint: disable=unused-argument os_listdir): # pylint: disable=unused-argument dev = osutil.DefaultOSUtil().device_for_ide_port(1) self.assertIsNone(dev, 'None should be returned if no resource disk found') def osutil_get_dhcp_pid_should_return_a_list_of_pids(test_instance, osutil_instance): """ This is a very basic test for osutil.get_dhcp_pid. It is simply meant to exercise the implementation of that method in case there are any basic errors, such as typos, etc. The test does not verify that the implementation returns the PID for the actual dhcp client; in fact, it uses a mock that invokes pidof to return the PID of an arbitrary process (the pidof process itself). Most implementations of get_dhcp_pid use pidof with the appropriate name for the dhcp client. The test is defined as a global function to make it easily accessible from the test suites for each distro. """ original_run_command = shellutil.run_command def mock_run_command(cmd): # pylint: disable=unused-argument return original_run_command(["pidof", "pidof"]) with patch("azurelinuxagent.common.utils.shellutil.run_command", side_effect=mock_run_command): pid = osutil_instance.get_dhcp_pid() test_instance.assertTrue(len(pid) != 0, "get_dhcp_pid did not return a PID") class TestGetPublishedHostname(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.__published_hostname = os.path.join(self.tmp_dir, "published_hostname") self.__patcher = patch('azurelinuxagent.common.osutil.default.conf.get_published_hostname', return_value=self.__published_hostname) self.__patcher.start() def tearDown(self): self.__patcher.stop() AgentTestCase.tearDown(self) def __get_published_hostname_contents(self): with open(self.__published_hostname, "r") as file_: return file_.read() def test_get_hostname_record_should_create_published_hostname(self): actual = osutil.DefaultOSUtil().get_hostname_record() expected = socket.gethostname() self.assertEqual(expected, actual, "get_hostname_record returned an incorrect hostname") self.assertTrue(os.path.exists(self.__published_hostname), "The published_hostname file was not created") self.assertEqual(expected, self.__get_published_hostname_contents(), "get_hostname_record returned an incorrect hostname") def test_get_hostname_record_should_use_existing_published_hostname(self): expected = "a-sample-hostname-used-for-testing" with open(self.__published_hostname, "w") as file_: file_.write(expected) actual = osutil.DefaultOSUtil().get_hostname_record() self.assertEqual(expected, actual, "get_hostname_record returned an incorrect hostname") self.assertEqual(expected, self.__get_published_hostname_contents(), "get_hostname_record returned an incorrect hostname") def test_get_hostname_record_should_initialize_the_host_name_using_cloud_init_info(self): with MockEnvironment(self.tmp_dir, files=[('/var/lib/cloud/data/set-hostname', os.path.join(data_dir, "cloud-init", 
"set-hostname"))]): actual = osutil.DefaultOSUtil().get_hostname_record() expected = "a-sample-set-hostname" self.assertEqual(expected, actual, "get_hostname_record returned an incorrect hostname") self.assertEqual(expected, self.__get_published_hostname_contents(), "get_hostname_record returned an incorrect hostname") if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_default_osutil.py000066400000000000000000000017231462617747000261400ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.4+ and Openssl 1.0+ # from azurelinuxagent.common.osutil.default import DefaultOSUtil, shellutil # pylint: disable=unused-import from tests.lib.tools import AgentTestCase, patch # pylint: disable=unused-import class DefaultOsUtilTestCase(AgentTestCase): def test_default_service_name(self): self.assertEqual(DefaultOSUtil().get_service_name(), "waagent") Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_factory.py000066400000000000000000000371221462617747000245660ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.4+ and Openssl 1.0+ # from azurelinuxagent.common.osutil.alpine import AlpineOSUtil from azurelinuxagent.common.osutil.arch import ArchUtil from azurelinuxagent.common.osutil.bigip import BigIpOSUtil from azurelinuxagent.common.osutil.clearlinux import ClearLinuxUtil from azurelinuxagent.common.osutil.coreos import CoreOSUtil from azurelinuxagent.common.osutil.debian import DebianOSBaseUtil, DebianOSModernUtil from azurelinuxagent.common.osutil.devuan import DevuanOSUtil from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.osutil.factory import _get_osutil from azurelinuxagent.common.osutil.freebsd import FreeBSDOSUtil from azurelinuxagent.common.osutil.gaia import GaiaOSUtil from azurelinuxagent.common.osutil.iosxe import IosxeOSUtil from azurelinuxagent.common.osutil.openbsd import OpenBSDOSUtil from azurelinuxagent.common.osutil.openwrt import OpenWRTOSUtil from azurelinuxagent.common.osutil.photonos import PhotonOSUtil from azurelinuxagent.common.osutil.redhat import RedhatOSUtil, Redhat6xOSUtil from azurelinuxagent.common.osutil.suse import SUSEOSUtil, SUSE11OSUtil from azurelinuxagent.common.osutil.ubuntu import UbuntuOSUtil, Ubuntu12OSUtil, Ubuntu14OSUtil, \ UbuntuSnappyOSUtil, Ubuntu16OSUtil, Ubuntu18OSUtil from tests.lib.tools import AgentTestCase, patch class TestOsUtilFactory(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) @patch("azurelinuxagent.common.logger.warn") def test_get_osutil_it_should_return_default(self, patch_logger): ret = _get_osutil(distro_name="", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, DefaultOSUtil)) self.assertEqual(patch_logger.call_count, 1) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_ubuntu(self): ret = _get_osutil(distro_name="ubuntu", distro_code_name="", distro_version="10.04", distro_full_name="") self.assertTrue(isinstance(ret, UbuntuOSUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") ret = _get_osutil(distro_name="ubuntu", distro_code_name="", distro_version="12.04", distro_full_name="") self.assertTrue(isinstance(ret, Ubuntu12OSUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") ret = _get_osutil(distro_name="ubuntu", distro_code_name="trusty", distro_version="14.04", distro_full_name="") self.assertTrue(isinstance(ret, Ubuntu14OSUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") ret = _get_osutil(distro_name="ubuntu", distro_code_name="xenial", distro_version="16.04", distro_full_name="") self.assertTrue(isinstance(ret, Ubuntu16OSUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") ret = _get_osutil(distro_name="ubuntu", distro_code_name="bionic", distro_version="18.04", distro_full_name="") self.assertTrue(isinstance(ret, Ubuntu18OSUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") ret = _get_osutil(distro_name="ubuntu", distro_code_name="focal", distro_version="20.04", distro_full_name="") self.assertTrue(isinstance(ret, Ubuntu18OSUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") ret = _get_osutil(distro_name="ubuntu", distro_code_name="focal", distro_version="24.04", distro_full_name="") self.assertTrue(isinstance(ret, Ubuntu18OSUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") ret = _get_osutil(distro_name="ubuntu", distro_code_name="", distro_version="10.04", distro_full_name="Snappy Ubuntu Core") self.assertTrue(isinstance(ret, 
UbuntuSnappyOSUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") def test_get_osutil_it_should_return_arch(self): ret = _get_osutil(distro_name="arch", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, ArchUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_clear_linux(self): ret = _get_osutil(distro_name="clear linux", distro_code_name="", distro_version="", distro_full_name="Clear Linux") self.assertTrue(isinstance(ret, ClearLinuxUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_alpine(self): ret = _get_osutil(distro_name="alpine", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, AlpineOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_kali(self): ret = _get_osutil(distro_name="kali", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, DebianOSBaseUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_coreos(self): ret = _get_osutil(distro_name="coreos", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, CoreOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="flatcar", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, CoreOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_suse(self): ret = _get_osutil(distro_name="suse", distro_code_name="", distro_version="10", distro_full_name="") self.assertTrue(isinstance(ret, SUSEOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="suse", distro_code_name="", distro_full_name="SUSE Linux Enterprise Server", distro_version="11") self.assertTrue(isinstance(ret, SUSE11OSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="suse", distro_code_name="", distro_full_name="openSUSE", distro_version="12") self.assertTrue(isinstance(ret, SUSE11OSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_debian(self): ret = _get_osutil(distro_name="debian", distro_code_name="", distro_full_name="", distro_version="7") self.assertTrue(isinstance(ret, DebianOSBaseUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="debian", distro_code_name="", distro_full_name="", distro_version="8") self.assertTrue(isinstance(ret, DebianOSModernUtil)) self.assertEqual(ret.get_service_name(), "walinuxagent") def test_get_osutil_it_should_return_devuan(self): ret = _get_osutil(distro_name="devuan", distro_code_name="", distro_full_name="", distro_version="4") self.assertTrue(isinstance(ret, DevuanOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_redhat(self): ret = _get_osutil(distro_name="redhat", distro_code_name="", distro_full_name="", distro_version="6") self.assertTrue(isinstance(ret, Redhat6xOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="rhel", distro_code_name="", distro_full_name="", distro_version="6") self.assertTrue(isinstance(ret, Redhat6xOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="centos", distro_code_name="", distro_full_name="", distro_version="6") self.assertTrue(isinstance(ret, Redhat6xOSUtil)) 
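# Every member of the Red Hat family (redhat/rhel/centos/oracle here, and almalinux/cloudlinux/rocky below) maps to the "waagent" service name, unlike Ubuntu and Debian >= 8, which use "walinuxagent" (asserted earlier in this suite).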
self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="oracle", distro_code_name="", distro_full_name="", distro_version="6") self.assertTrue(isinstance(ret, Redhat6xOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="redhat", distro_code_name="", distro_full_name="", distro_version="7") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="rhel", distro_code_name="", distro_full_name="", distro_version="7") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="centos", distro_code_name="", distro_full_name="", distro_version="7") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="oracle", distro_code_name="", distro_full_name="", distro_version="7") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="almalinux", distro_code_name="", distro_full_name="", distro_version="8") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="cloudlinux", distro_code_name="", distro_full_name="", distro_version="8") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") ret = _get_osutil(distro_name="rocky", distro_code_name="", distro_full_name="", distro_version="8") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_euleros(self): ret = _get_osutil(distro_name="euleros", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_uos(self): ret = _get_osutil(distro_name="uos", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, RedhatOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_freebsd(self): ret = _get_osutil(distro_name="freebsd", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, FreeBSDOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_openbsd(self): ret = _get_osutil(distro_name="openbsd", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, OpenBSDOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_bigip(self): ret = _get_osutil(distro_name="bigip", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, BigIpOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_gaia(self): ret = _get_osutil(distro_name="gaia", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, GaiaOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_iosxe(self): ret = _get_osutil(distro_name="iosxe", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, IosxeOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_openwrt(self): ret = _get_osutil(distro_name="openwrt", distro_code_name="", distro_version="", 
distro_full_name="") self.assertTrue(isinstance(ret, OpenWRTOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") def test_get_osutil_it_should_return_photonos(self): ret = _get_osutil(distro_name="photonos", distro_code_name="", distro_version="", distro_full_name="") self.assertTrue(isinstance(ret, PhotonOSUtil)) self.assertEqual(ret.get_service_name(), "waagent") Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_freebsd.py000066400000000000000000000113301462617747000245220ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest import azurelinuxagent.common.utils.shellutil as shellutil from azurelinuxagent.common.osutil.freebsd import FreeBSDOSUtil from azurelinuxagent.common.utils import textutil from tests.lib.tools import AgentTestCase, patch from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestFreeBSDOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, FreeBSDOSUtil()) def test_empty_proc_net_route(self): route_table = "" with patch.object(shellutil, 'run_command', return_value=route_table): # Header line only self.assertEqual(len(FreeBSDOSUtil().read_route_table()), 1) def test_no_routes(self): route_table = """Routing tables Internet: Destination Gateway Flags Netif Expire """ with patch.object(shellutil, 'run_command', return_value=route_table): raw_route_list = FreeBSDOSUtil().read_route_table() self.assertEqual(len(FreeBSDOSUtil().get_list_of_routes(raw_route_list)), 0) def test_bogus_proc_net_route(self): route_table = """Routing tables Internet: Destination Gateway Flags Netif Expire 1.1.1 0.0.0 """ with patch.object(shellutil, 'run_command', return_value=route_table): raw_route_list = FreeBSDOSUtil().read_route_table() self.assertEqual(len(FreeBSDOSUtil().get_list_of_routes(raw_route_list)), 0) def test_valid_routes(self): route_table = """Routing tables Internet: Destination Gateway Flags Netif Expire 0.0.0.0 10.145.187.193 UGS em0 10.145.187.192/26 0.0.0.0 US em0 168.63.129.16 10.145.187.193 UH em0 169.254.169.254 10.145.187.193 UHS em0 192.168.43.0 0.0.0.0 US vtbd0 """ with patch.object(shellutil, 'run_command', return_value=route_table): raw_route_list = FreeBSDOSUtil().read_route_table() self.assertEqual(len(raw_route_list), 6) route_list = FreeBSDOSUtil().get_list_of_routes(raw_route_list) self.assertEqual(len(route_list), 5) self.assertEqual(route_list[0].gateway_quad(), '10.145.187.193') self.assertEqual(route_list[1].gateway_quad(), '0.0.0.0') self.assertEqual(route_list[1].mask_quad(), '255.255.255.192') self.assertEqual(route_list[2].destination_quad(), '168.63.129.16') self.assertEqual(route_list[1].flags, 1) self.assertEqual(route_list[2].flags, 33) self.assertEqual(route_list[3].flags, 5) self.assertEqual((route_list[3].metric - 
route_list[4].metric), 1) self.assertEqual(route_list[0].interface, 'em0') self.assertEqual(route_list[4].interface, 'vtbd0') def test_get_first_if(self): """ Validate that the agent can find the first active non-loopback interface. This test case used to run live, but not all developers have an eth* interface. It is perfectly valid to have a br*, but this test does not account for that. """ freebsdosutil = FreeBSDOSUtil() with patch.object(freebsdosutil, '_get_net_info', return_value=('em0', '10.0.0.1', 'e5:f0:38:aa:da:52')): ifname, ipaddr = freebsdosutil.get_first_if() self.assertEqual(ifname, 'em0') self.assertEqual(ipaddr, '10.0.0.1') def test_no_primary_does_not_throw(self): freebsdosutil = FreeBSDOSUtil() with patch.object(freebsdosutil, '_get_net_info', return_value=('em0', '10.0.0.1', 'e5:f0:38:aa:da:52')): try: freebsdosutil.get_first_if()[0] except Exception as e: # pylint: disable=unused-variable print(textutil.format_exception(e)) exception = True # pylint: disable=unused-variable if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_nsbsd.py000066400000000000000000000065031462617747000242270ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from os import path from azurelinuxagent.common.osutil.nsbsd import NSBSDOSUtil from azurelinuxagent.common.utils.fileutil import read_file from tests.lib.tools import AgentTestCase, patch class TestNSBSDOSUtil(AgentTestCase): dhclient_pid_file = "/var/run/dhclient.pid" def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): with patch.object(NSBSDOSUtil, "resolver"): # instantiating NSBSDOSUtil requires a resolver original_isfile = path.isfile def mock_isfile(path): # pylint: disable=redefined-outer-name return True if path == self.dhclient_pid_file else original_isfile(path) original_read_file = read_file def mock_read_file(file, *args, **kwargs): # pylint: disable=redefined-builtin return "123" if file == self.dhclient_pid_file else original_read_file(file, *args, **kwargs) with patch("os.path.isfile", mock_isfile): with patch("azurelinuxagent.common.osutil.nsbsd.fileutil.read_file", mock_read_file): pid_list = NSBSDOSUtil().get_dhcp_pid() self.assertEqual(pid_list, [123]) def test_get_dhcp_pid_should_return_an_empty_list_when_the_dhcp_client_is_not_running(self): with patch.object(NSBSDOSUtil, "resolver"): # instantiating NSBSDOSUtil requires a resolver # # PID file does not exist # original_isfile = path.isfile def mock_isfile(path): # pylint: disable=redefined-outer-name return False if path == self.dhclient_pid_file else original_isfile(path) with patch("os.path.isfile", mock_isfile): pid_list = NSBSDOSUtil().get_dhcp_pid() self.assertEqual(pid_list, []) # # PID file is empty # original_isfile = path.isfile def mock_isfile(path): # pylint: disable=redefined-outer-name,function-redefined return 
True if path == self.dhclient_pid_file else original_isfile(path) original_read_file = read_file def mock_read_file(file, *args, **kwargs): # pylint: disable=redefined-builtin return "" if file == self.dhclient_pid_file else original_read_file(file, *args, **kwargs) with patch("os.path.isfile", mock_isfile): with patch("azurelinuxagent.common.osutil.nsbsd.fileutil.read_file", mock_read_file): pid_list = NSBSDOSUtil().get_dhcp_pid() self.assertEqual(pid_list, []) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_openbsd.py000066400000000000000000000022311462617747000245420ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.openbsd import OpenBSDOSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestAlpineOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, OpenBSDOSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_openwrt.py000066400000000000000000000022321462617747000246070ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.openwrt import OpenWRTOSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestOpenWRTOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, OpenWRTOSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_photonos.py000066400000000000000000000022311462617747000247610ustar00rootroot00000000000000# Copyright 2021 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.photonos import PhotonOSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestPhotonOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, PhotonOSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_redhat.py000066400000000000000000000022341462617747000243620ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.redhat import Redhat6xOSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestRedhat6xOSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, Redhat6xOSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_suse.py000066400000000000000000000022241462617747000240710ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.suse import SUSE11OSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestSUSE11OSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, SUSE11OSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/osutil/test_ubuntu.py000066400000000000000000000027351462617747000244430ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.osutil.ubuntu import Ubuntu12OSUtil, Ubuntu18OSUtil from tests.lib.tools import AgentTestCase from .test_default import osutil_get_dhcp_pid_should_return_a_list_of_pids class TestUbuntu12OSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, Ubuntu12OSUtil()) class TestUbuntu18OSUtil(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) def test_get_dhcp_pid_should_return_a_list_of_pids(self): osutil_get_dhcp_pid_should_return_a_list_of_pids(self, Ubuntu18OSUtil()) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/protocol/000077500000000000000000000000001462617747000220235ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/common/protocol/__init__.py000066400000000000000000000011651462617747000241370ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_datacontract.py000066400000000000000000000027031462617747000261050ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import unittest from azurelinuxagent.common.datacontract import get_properties, set_properties, DataContract, DataContractList class SampleDataContract(DataContract): def __init__(self): self.foo = None self.bar = DataContractList(int) class TestDataContract(unittest.TestCase): def test_get_properties(self): obj = SampleDataContract() obj.foo = "foo" obj.bar.append(1) data = get_properties(obj) self.assertEqual("foo", data["foo"]) self.assertEqual(list, type(data["bar"])) def test_set_properties(self): obj = SampleDataContract() data = { 'foo' : 1, 'baz': 'a' } set_properties('sample', obj, data) self.assertFalse(hasattr(obj, 'baz')) if __name__ == '__main__': unittest.main() test_extensions_goal_state_from_extensions_config.py000066400000000000000000000154231462617747000346120ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/common/protocol# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateChannel from tests.lib.mock_wire_protocol import wire_protocol_data, mock_wire_protocol from tests.lib.tools import AgentTestCase class ExtensionsGoalStateFromExtensionsConfigTestCase(AgentTestCase): def test_it_should_parse_in_vm_metadata(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_META_DATA) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual("555e551c-600e-4fb4-90ba-8ab8ec28eccc", extensions_goal_state.activity_id, "Incorrect activity Id") self.assertEqual("400de90b-522e-491f-9d89-ec944661f531", extensions_goal_state.correlation_id, "Incorrect correlation Id") self.assertEqual('2020-11-09T17:48:50.412125Z', extensions_goal_state.created_on_timestamp, "Incorrect GS Creation time") def test_it_should_use_default_values_when_in_vm_metadata_is_missing(self): data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf-no_gs_metadata.xml" with mock_wire_protocol(data_file) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual(AgentGlobals.GUID_ZERO, extensions_goal_state.activity_id, "Incorrect activity Id") self.assertEqual(AgentGlobals.GUID_ZERO, extensions_goal_state.correlation_id, "Incorrect correlation Id") self.assertEqual('1900-01-01T00:00:00.000000Z', extensions_goal_state.created_on_timestamp, "Incorrect GS Creation time") def test_it_should_use_default_values_when_in_vm_metadata_is_invalid(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_INVALID_VM_META_DATA) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual(AgentGlobals.GUID_ZERO, extensions_goal_state.activity_id, "Incorrect activity Id") self.assertEqual(AgentGlobals.GUID_ZERO, extensions_goal_state.correlation_id, "Incorrect correlation Id") self.assertEqual('1900-01-01T00:00:00.000000Z', extensions_goal_state.created_on_timestamp, "Incorrect GS Creation time") def 
test_it_should_parse_missing_status_upload_blob_as_none(self): data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "hostgaplugin/ext_conf-no_status_upload_blob.xml" with mock_wire_protocol(data_file) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertIsNone(extensions_goal_state.status_upload_blob, "Expected status upload blob to be None") self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, "Expected status upload blob to be Block") def test_it_should_default_to_block_blob_when_the_status_blob_type_is_not_valid(self): data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "hostgaplugin/ext_conf-invalid_blob_type.xml" with mock_wire_protocol(data_file) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, 'Expected BlockBlob for an invalid statusBlobType') def test_it_should_parse_empty_depends_on_as_dependency_level_0(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-empty_depends_on.json" data_file["ext_conf"] = "hostgaplugin/ext_conf-empty_depends_on.xml" with mock_wire_protocol(data_file) as protocol: extensions = protocol.get_goal_state().extensions_goal_state.extensions self.assertEqual(0, extensions[0].settings[0].dependencyLevel, "Incorrect dependencyLevel") def test_its_source_channel_should_be_wire_server(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual(GoalStateChannel.WireServer, extensions_goal_state.channel, "The channel is incorrect") def test_it_should_parse_is_version_from_rsm_properly(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: agent_families = protocol.get_goal_state().extensions_goal_state.agent_families for family in agent_families: self.assertIsNone(family.is_version_from_rsm, "is_version_from_rsm should be None") data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "hostgaplugin/ext_conf-agent_family_version.xml" with mock_wire_protocol(data_file) as protocol: agent_families = protocol.get_goal_state().extensions_goal_state.agent_families for family in agent_families: self.assertTrue(family.is_version_from_rsm, "is_version_from_rsm should be True") data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "hostgaplugin/ext_conf-rsm_version_properties_false.xml" with mock_wire_protocol(data_file) as protocol: agent_families = protocol.get_goal_state().extensions_goal_state.agent_families for family in agent_families: self.assertFalse(family.is_version_from_rsm, "is_version_from_rsm should be False") def test_it_should_parse_is_vm_enabled_for_rsm_upgrades(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: agent_families = protocol.get_goal_state().extensions_goal_state.agent_families for family in agent_families: self.assertIsNone(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be None") data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "hostgaplugin/ext_conf-agent_family_version.xml" with mock_wire_protocol(data_file) as protocol: agent_families = protocol.get_goal_state().extensions_goal_state.agent_families for family in agent_families: self.assertTrue(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be True") 
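# The RSM flags parsed from extensionsConfig are tri-state: a goal state that omits them surfaces None (first scenario above), and explicit true/false values surface as Python booleans (True above, False in the next scenario).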
data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "hostgaplugin/ext_conf-rsm_version_properties_false.xml" with mock_wire_protocol(data_file) as protocol: agent_families = protocol.get_goal_state().extensions_goal_state.agent_families for family in agent_families: self.assertFalse(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be False") Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_extensions_goal_state_from_vm_settings.py000066400000000000000000000320331462617747000335030ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. import json from azurelinuxagent.common.protocol.goal_state import GoalState from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateChannel from azurelinuxagent.common.protocol.extensions_goal_state_from_vm_settings import _CaseFoldedDict from tests.lib.mock_wire_protocol import wire_protocol_data, mock_wire_protocol from tests.lib.tools import AgentTestCase class ExtensionsGoalStateFromVmSettingsTestCase(AgentTestCase): def test_it_should_parse_vm_settings(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state def assert_property(name, value): self.assertEqual(value, getattr(extensions_goal_state, name), '{0} was not parsed correctly'.format(name)) assert_property("activity_id", "a33f6f53-43d6-4625-b322-1a39651a00c9") assert_property("correlation_id", "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e") assert_property("created_on_timestamp", "2021-11-16T13:22:50.620529Z") assert_property("status_upload_blob", "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w") assert_property("status_upload_blob_type", "BlockBlob") assert_property("required_features", ["MultipleExtensionsPerHandler"]) assert_property("on_hold", True) # # for the rest of the attributes, we check only 1 item in each container (but check the length of the container) # # agent families self.assertEqual(2, len(extensions_goal_state.agent_families), "Incorrect number of agent families. Got: {0}".format(extensions_goal_state.agent_families)) self.assertEqual("Prod", extensions_goal_state.agent_families[0].name, "Incorrect agent family.") self.assertEqual(2, len(extensions_goal_state.agent_families[0].uris), "Incorrect number of uris. Got: {0}".format(extensions_goal_state.agent_families[0].uris)) expected = "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" self.assertEqual(expected, extensions_goal_state.agent_families[0].uris[0], "Unexpected URI for the agent manifest.") # extensions self.assertEqual(5, len(extensions_goal_state.extensions), "Incorrect number of extensions. 
Got: {0}".format(extensions_goal_state.extensions)) self.assertEqual('Microsoft.Azure.Monitor.AzureMonitorLinuxAgent', extensions_goal_state.extensions[0].name, "Incorrect extension name") self.assertEqual(1, len(extensions_goal_state.extensions[0].settings[0].publicSettings), "Incorrect number of public settings") self.assertEqual(True, extensions_goal_state.extensions[0].settings[0].publicSettings["GCS_AUTO_CONFIG"], "Incorrect public settings") # dependency level (single-config) self.assertEqual(1, extensions_goal_state.extensions[2].settings[0].dependencyLevel, "Incorrect dependency level (single-config)") # dependency level (multi-config) self.assertEqual(1, extensions_goal_state.extensions[3].settings[1].dependencyLevel, "Incorrect dependency level (multi-config)") def test_it_should_parse_requested_version_properly(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: goal_state = GoalState(protocol.client) families = goal_state.extensions_goal_state.agent_families for family in families: self.assertIsNone(family.version, "Version should be None") data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-agent_family_version.json" with mock_wire_protocol(data_file) as protocol: protocol.mock_wire_data.set_etag(888) goal_state = GoalState(protocol.client) families = goal_state.extensions_goal_state.agent_families for family in families: self.assertEqual(family.version, "9.9.9.9", "Version should be 9.9.9.9") def test_it_should_parse_is_version_from_rsm_properly(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: goal_state = GoalState(protocol.client) families = goal_state.extensions_goal_state.agent_families for family in families: self.assertIsNone(family.is_version_from_rsm, "is_version_from_rsm should be None") data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-agent_family_version.json" with mock_wire_protocol(data_file) as protocol: protocol.mock_wire_data.set_etag(888) goal_state = GoalState(protocol.client) families = goal_state.extensions_goal_state.agent_families for family in families: self.assertTrue(family.is_version_from_rsm, "is_version_from_rsm should be True") data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-requested_version_properties_false.json" with mock_wire_protocol(data_file) as protocol: protocol.mock_wire_data.set_etag(888) goal_state = GoalState(protocol.client) families = goal_state.extensions_goal_state.agent_families for family in families: self.assertFalse(family.is_version_from_rsm, "is_version_from_rsm should be False") def test_it_should_parse_is_vm_enabled_for_rsm_upgrades_properly(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: goal_state = GoalState(protocol.client) families = goal_state.extensions_goal_state.agent_families for family in families: self.assertIsNone(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be None") data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-agent_family_version.json" with mock_wire_protocol(data_file) as protocol: protocol.mock_wire_data.set_etag(888) goal_state = GoalState(protocol.client) families = goal_state.extensions_goal_state.agent_families for family in families: self.assertTrue(family.is_vm_enabled_for_rsm_upgrades, 
"is_vm_enabled_for_rsm_upgrades should be True") data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-requested_version_properties_false.json" with mock_wire_protocol(data_file) as protocol: protocol.mock_wire_data.set_etag(888) goal_state = GoalState(protocol.client) families = goal_state.extensions_goal_state.agent_families for family in families: self.assertFalse(family.is_vm_enabled_for_rsm_upgrades, "is_vm_enabled_for_rsm_upgrades should be False") def test_it_should_parse_missing_status_upload_blob_as_none(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-no_status_upload_blob.json" with mock_wire_protocol(data_file) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertIsNone(extensions_goal_state.status_upload_blob, "Expected status upload blob to be None") self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, "Expected status upload blob to be Block") def test_it_should_parse_missing_agent_manifests_as_empty(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-no_manifests.json" with mock_wire_protocol(data_file) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual(1, len(extensions_goal_state.agent_families), "Expected exactly one agent manifest. Got: {0}".format(extensions_goal_state.agent_families)) self.assertListEqual([], extensions_goal_state.agent_families[0].uris, "Expected an empty list of agent manifests") def test_it_should_parse_missing_extension_manifests_as_empty(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-no_manifests.json" with mock_wire_protocol(data_file) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual(3, len(extensions_goal_state.extensions), "Incorrect number of extensions. 
Got: {0}".format(extensions_goal_state.extensions)) self.assertEqual([], extensions_goal_state.extensions[0].manifest_uris, "Expected an empty list of manifests for {0}".format(extensions_goal_state.extensions[0])) self.assertEqual([], extensions_goal_state.extensions[1].manifest_uris, "Expected an empty list of manifests for {0}".format(extensions_goal_state.extensions[1])) self.assertEqual( [ "https://umsakzkwhng2ft0jjptl.blob.core.windows.net/deeb2df6-c025-e6fb-b015-449ed6a676bc/deeb2df6-c025-e6fb-b015-449ed6a676bc_manifest.xml", "https://umsafmqfbv4hgrd1hqff.blob.core.windows.net/deeb2df6-c025-e6fb-b015-449ed6a676bc/deeb2df6-c025-e6fb-b015-449ed6a676bc_manifest.xml", ], extensions_goal_state.extensions[2].manifest_uris, "Incorrect list of manifests for {0}".format(extensions_goal_state.extensions[2])) def test_it_should_default_to_block_blob_when_the_status_blob_type_is_not_valid(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-invalid_blob_type.json" with mock_wire_protocol(data_file) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, 'Expected BlockBlob for an invalid statusBlobType') def test_its_source_channel_should_be_host_ga_plugin(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertEqual(GoalStateChannel.HostGAPlugin, extensions_goal_state.channel, "The channel is incorrect") class CaseFoldedDictionaryTestCase(AgentTestCase): def test_it_should_retrieve_items_ignoring_case(self): dictionary = json.loads('''{ "activityId": "2e7f8b5d-f637-4721-b757-cb190d49b4e9", "StatusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status" }, "gaFamilies": [ { "Name": "Prod", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] } ] }''') case_folded = _CaseFoldedDict.from_dict(dictionary) def test_retrieve_item(key, expected_value): """ Test for operators [] and in, and methods get() and has_key() """ try: self.assertEqual(expected_value, case_folded[key], "Operator [] retrieved incorrect value for '{0}'".format(key)) except KeyError: self.fail("Operator [] failed to retrieve '{0}'".format(key)) self.assertTrue(case_folded.has_key(key), "Method has_key() did not find '{0}'".format(key)) self.assertEqual(expected_value, case_folded.get(key), "Method get() retrieved incorrect value for '{0}'".format(key)) self.assertTrue(key in case_folded, "Operator in did not find key '{0}'".format(key)) test_retrieve_item("activityId", "2e7f8b5d-f637-4721-b757-cb190d49b4e9") test_retrieve_item("activityid", "2e7f8b5d-f637-4721-b757-cb190d49b4e9") test_retrieve_item("ACTIVITYID", "2e7f8b5d-f637-4721-b757-cb190d49b4e9") self.assertEqual("BlockBlob", case_folded["statusuploadblob"]["statusblobtype"], "Failed to retrieve item in nested dictionary") self.assertEqual("Prod", case_folded["gafamilies"][0]["name"], "Failed to retrieve item in nested array") 
Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_goal_state.py000066400000000000000000000717271462617747000255720ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. import contextlib import datetime import glob import os import re import time from azurelinuxagent.common import conf from azurelinuxagent.common.future import httpclient from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource, GoalStateChannel from azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config import ExtensionsGoalStateFromExtensionsConfig from azurelinuxagent.common.protocol.extensions_goal_state_from_vm_settings import ExtensionsGoalStateFromVmSettings from azurelinuxagent.common.protocol import hostplugin from azurelinuxagent.common.protocol.goal_state import GoalState, GoalStateInconsistentError, \ _GET_GOAL_STATE_MAX_ATTEMPTS, GoalStateProperties from azurelinuxagent.common.exception import ProtocolError from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils.archive import ARCHIVE_DIRECTORY_NAME from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib import wire_protocol_data from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.tools import AgentTestCase, patch, load_data class GoalStateTestCase(AgentTestCase, HttpRequestPredicates): def test_it_should_use_vm_settings_by_default(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: protocol.mock_wire_data.set_etag(888) extensions_goal_state = GoalState(protocol.client).extensions_goal_state self.assertTrue( isinstance(extensions_goal_state, ExtensionsGoalStateFromVmSettings), 'The extensions goal state should have been created from the vmSettings (got: {0})'.format(type(extensions_goal_state))) def _assert_is_extensions_goal_state_from_extensions_config(self, extensions_goal_state): self.assertTrue( isinstance(extensions_goal_state, ExtensionsGoalStateFromExtensionsConfig), 'The extensions goal state should have been created from the extensionsConfig (got: {0})'.format(type(extensions_goal_state))) def test_it_should_use_extensions_config_when_fast_track_is_disabled(self): with patch("azurelinuxagent.common.conf.get_enable_fast_track", return_value=False): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: self._assert_is_extensions_goal_state_from_extensions_config(GoalState(protocol.client).extensions_goal_state) def test_it_should_use_extensions_config_when_fast_track_is_not_supported(self): def http_get_handler(url, *_, **__): if self.is_host_plugin_vm_settings_request(url): return MockHttpResponse(httpclient.NOT_FOUND) return None with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS, http_get_handler=http_get_handler) as protocol: self._assert_is_extensions_goal_state_from_extensions_config(GoalState(protocol.client).extensions_goal_state) def test_it_should_use_extensions_config_when_the_host_ga_plugin_version_is_not_supported(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = "hostgaplugin/vm_settings-unsupported_version.json" with mock_wire_protocol(data_file) as protocol: self._assert_is_extensions_goal_state_from_extensions_config(GoalState(protocol.client).extensions_goal_state) def test_it_should_retry_get_vm_settings_on_resource_gone_error(self): # Requests to the hostgaplugin include the Container ID and the 
RoleConfigName as headers; when the hostgaplugin returns GONE (HTTP status 410) the agent # needs to get a new goal state and retry the request with updated values for the Container ID and RoleConfigName headers. with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: # Do not mock the vmSettings request at the level of azurelinuxagent.common.utils.restutil.http_request. The GONE status is handled # in the internal _http_request, which we mock below. protocol.do_not_mock = lambda method, url: method == "GET" and self.is_host_plugin_vm_settings_request(url) request_headers = [] # we expect a retry with new headers and use this array to persist the headers of each request def http_get_vm_settings(_method, _host, _relative_url, _timeout, **kwargs): request_headers.append(kwargs["headers"]) if len(request_headers) == 1: # Fail the first request with status GONE and update the mock data to return the new Container ID and RoleConfigName that should be # used in the headers of the retry request. protocol.mock_wire_data.set_container_id("GET_VM_SETTINGS_TEST_CONTAINER_ID") protocol.mock_wire_data.set_role_config_name("GET_VM_SETTINGS_TEST_ROLE_CONFIG_NAME") return MockHttpResponse(status=httpclient.GONE) # For this test we are interested only on the retry logic, so the second request (the retry) is not important; we use NOT_MODIFIED (304) for simplicity. return MockHttpResponse(status=httpclient.NOT_MODIFIED) with patch("azurelinuxagent.common.utils.restutil._http_request", side_effect=http_get_vm_settings): protocol.client.update_goal_state() self.assertEqual(2, len(request_headers), "We expected 2 requests for vmSettings: the original request and the retry request") self.assertEqual("GET_VM_SETTINGS_TEST_CONTAINER_ID", request_headers[1][hostplugin._HEADER_CONTAINER_ID], "The retry request did not include the expected header for the ContainerId") self.assertEqual("GET_VM_SETTINGS_TEST_ROLE_CONFIG_NAME", request_headers[1][hostplugin._HEADER_HOST_CONFIG_NAME], "The retry request did not include the expected header for the RoleConfigName") def test_fetch_goal_state_should_raise_on_incomplete_goal_state(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.mock_wire_data.data_files = wire_protocol_data.DATA_FILE_NOOP_GS protocol.mock_wire_data.reload() protocol.mock_wire_data.set_incarnation(2) with patch('time.sleep') as mock_sleep: with self.assertRaises(ProtocolError): GoalState(protocol.client) self.assertEqual(_GET_GOAL_STATE_MAX_ATTEMPTS, mock_sleep.call_count, "Unexpected number of retries") def test_fetching_the_goal_state_should_save_the_shared_config(self): # SharedConfig.xml is used by other components (Azsec and Singularity/HPC Infiniband); verify that we do not delete it with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: _ = GoalState(protocol.client) shared_config = os.path.join(conf.get_lib_dir(), 'SharedConfig.xml') self.assertTrue(os.path.exists(shared_config), "{0} should have been created".format(shared_config)) def test_fetching_the_goal_state_should_save_the_goal_state_to_the_history_directory(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: protocol.mock_wire_data.set_incarnation(999) protocol.mock_wire_data.set_etag(888) _ = GoalState(protocol.client, save_to_history=True) self._assert_directory_contents( self._find_history_subdirectory("999-888"), ["GoalState.xml", "ExtensionsConfig.xml", "VmSettings.json", "Certificates.json", "SharedConfig.xml", 
"HostingEnvironmentConfig.xml"]) @staticmethod def _get_history_directory(): return os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME) def _find_history_subdirectory(self, tag): matches = glob.glob(os.path.join(self._get_history_directory(), "*_{0}".format(tag))) self.assertTrue(len(matches) == 1, "Expected one history directory for tag {0}. Got: {1}".format(tag, matches)) return matches[0] def _assert_directory_contents(self, directory, expected_files): actual_files = os.listdir(directory) expected_files.sort() actual_files.sort() self.assertEqual(expected_files, actual_files, "The expected files were not saved to {0}".format(directory)) def test_update_should_create_new_history_subdirectories(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: protocol.mock_wire_data.set_incarnation(123) protocol.mock_wire_data.set_etag(654) goal_state = GoalState(protocol.client, save_to_history=True) self._assert_directory_contents( self._find_history_subdirectory("123-654"), ["GoalState.xml", "ExtensionsConfig.xml", "VmSettings.json", "Certificates.json", "SharedConfig.xml", "HostingEnvironmentConfig.xml"]) def http_get_handler(url, *_, **__): if HttpRequestPredicates.is_host_plugin_vm_settings_request(url): return MockHttpResponse(status=httpclient.NOT_MODIFIED) return None protocol.mock_wire_data.set_incarnation(234) protocol.set_http_handlers(http_get_handler=http_get_handler) goal_state.update() self._assert_directory_contents( self._find_history_subdirectory("234-654"), ["GoalState.xml", "ExtensionsConfig.xml", "Certificates.json", "SharedConfig.xml", "HostingEnvironmentConfig.xml"]) protocol.mock_wire_data.set_etag(987) protocol.set_http_handlers(http_get_handler=None) goal_state.update() self._assert_directory_contents( self._find_history_subdirectory("234-987"), ["VmSettings.json"]) def test_it_should_redact_the_protected_settings_when_saving_to_the_history_directory(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: protocol.mock_wire_data.set_incarnation(888) protocol.mock_wire_data.set_etag(888) goal_state = GoalState(protocol.client, save_to_history=True) extensions_goal_state = goal_state.extensions_goal_state protected_settings = [] for ext_handler in extensions_goal_state.extensions: for extension in ext_handler.settings: if extension.protectedSettings is not None: protected_settings.append(extension.protectedSettings) if len(protected_settings) == 0: raise Exception("The test goal state does not include any protected settings") history_directory = self._find_history_subdirectory("888-888") extensions_config_file = os.path.join(history_directory, "ExtensionsConfig.xml") vm_settings_file = os.path.join(history_directory, "VmSettings.json") for file_name in extensions_config_file, vm_settings_file: with open(file_name, "r") as stream: file_contents = stream.read() for settings in protected_settings: self.assertNotIn( settings, file_contents, "The protectedSettings should not have been saved to {0}".format(file_name)) matches = re.findall(r'"protectedSettings"\s*:\s*"\*\*\* REDACTED \*\*\*"', file_contents) self.assertEqual( len(matches), len(protected_settings), "Could not find the expected number of redacted settings in {0}.\nExpected {1}.\n{2}".format(file_name, len(protected_settings), file_contents)) def test_it_should_save_vm_settings_on_parse_errors(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: invalid_vm_settings_file = "hostgaplugin/vm_settings-parse_error.json" 
data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() data_file["vm_settings"] = invalid_vm_settings_file protocol.mock_wire_data = wire_protocol_data.WireProtocolData(data_file) with self.assertRaises(ProtocolError): # the parsing error will cause an exception _ = GoalState(protocol.client) # Do an extra call to update the goal state; this should save the vmsettings to the history directory # only once (self._find_history_subdirectory asserts 1 single match) time.sleep(0.1) # add a short delay to ensure that a new timestamp would be saved in the history folder protocol.mock_wire_data.set_etag(888) with self.assertRaises(ProtocolError): _ = GoalState(protocol.client) history_directory = self._find_history_subdirectory("888") vm_settings_file = os.path.join(history_directory, "VmSettings.json") self.assertTrue(os.path.exists(vm_settings_file), "{0} was not saved".format(vm_settings_file)) expected = load_data(invalid_vm_settings_file) actual = fileutil.read_file(vm_settings_file) self.assertEqual(expected, actual, "The vmSettings were not saved correctly") def test_should_not_save_to_the_history_by_default(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: _ = GoalState(protocol.client) # omit the save_to_history parameter history = self._get_history_directory() self.assertFalse(os.path.exists(history), "The history directory not should have been created") @staticmethod @contextlib.contextmanager def _create_protocol_ws_and_hgap_in_sync(): """ Creates a mock protocol in which the HostGAPlugin and the WireServer are in sync, both of them returning the same Fabric goal state. """ data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() with mock_wire_protocol(data_file) as protocol: timestamp = datetime.datetime.utcnow() incarnation = '111' etag = '111111' protocol.mock_wire_data.set_incarnation(incarnation, timestamp=timestamp) protocol.mock_wire_data.set_etag(etag, timestamp=timestamp) protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.Fabric) # Do a few checks on the mock data to ensure we catch changes in internal implementations # that may invalidate this setup. vm_settings, _ = protocol.client.get_host_plugin().fetch_vm_settings() if vm_settings.etag != etag: raise Exception("The HostGAPlugin is not in sync. Expected ETag {0}. Got {1}".format(etag, vm_settings.etag)) if vm_settings.source != GoalStateSource.Fabric: raise Exception("The HostGAPlugin should be returning a Fabric goal state. Got {0}".format(vm_settings.source)) goal_state = GoalState(protocol.client) if goal_state.incarnation != incarnation: raise Exception("The WireServer is not in sync. Expected incarnation {0}. Got {1}".format(incarnation, goal_state.incarnation)) if goal_state.extensions_goal_state.correlation_id != vm_settings.correlation_id: raise Exception( "The correlation ID in the WireServer and HostGAPlugin are not in sync. 
WS: {0} HGAP: {1}".format( goal_state.extensions_goal_state.correlation_id, vm_settings.correlation_id)) yield protocol def _assert_goal_state(self, goal_state, goal_state_id, channel=None, source=None): self.assertIn(goal_state_id, goal_state.extensions_goal_state.id, "Incorrect Goal State ID") if channel is not None: self.assertEqual(channel, goal_state.extensions_goal_state.channel, "Incorrect Goal State channel") if source is not None: self.assertEqual(source, goal_state.extensions_goal_state.source, "Incorrect Goal State source") def test_it_should_ignore_fabric_goal_states_from_the_host_ga_plugin(self): with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol: # # Verify __init__() # expected_incarnation = '111' # test setup initializes to this value timestamp = datetime.datetime.utcnow() + datetime.timedelta(seconds=15) protocol.mock_wire_data.set_etag('22222', timestamp) goal_state = GoalState(protocol.client) self._assert_goal_state(goal_state, expected_incarnation, channel=GoalStateChannel.WireServer) # # Verify update() # timestamp += datetime.timedelta(seconds=15) protocol.mock_wire_data.set_etag('333333', timestamp) goal_state.update() self._assert_goal_state(goal_state, expected_incarnation, channel=GoalStateChannel.WireServer) def test_it_should_use_fast_track_goal_states_from_the_host_ga_plugin(self): with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol: protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.FastTrack) # # Verify __init__() # expected_etag = '22222' timestamp = datetime.datetime.utcnow() + datetime.timedelta(seconds=15) protocol.mock_wire_data.set_etag(expected_etag, timestamp) goal_state = GoalState(protocol.client) self._assert_goal_state(goal_state, expected_etag, channel=GoalStateChannel.HostGAPlugin) # # Verify update() # expected_etag = '333333' timestamp += datetime.timedelta(seconds=15) protocol.mock_wire_data.set_etag(expected_etag, timestamp) goal_state.update() self._assert_goal_state(goal_state, expected_etag, channel=GoalStateChannel.HostGAPlugin) def test_it_should_use_the_most_recent_goal_state(self): with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol: goal_state = GoalState(protocol.client) # The most recent goal state is FastTrack timestamp = datetime.datetime.utcnow() + datetime.timedelta(seconds=15) protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.FastTrack) protocol.mock_wire_data.set_etag('222222', timestamp) goal_state.update() self._assert_goal_state(goal_state, '222222', channel=GoalStateChannel.HostGAPlugin, source=GoalStateSource.FastTrack) # The most recent goal state is Fabric timestamp += datetime.timedelta(seconds=15) protocol.mock_wire_data.set_incarnation('222', timestamp) goal_state.update() self._assert_goal_state(goal_state, '222', channel=GoalStateChannel.WireServer, source=GoalStateSource.Fabric) # The most recent goal state is Fabric, but it is coming from the HostGAPlugin (should be ignored) timestamp += datetime.timedelta(seconds=15) protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.Fabric) protocol.mock_wire_data.set_etag('333333', timestamp) goal_state.update() self._assert_goal_state(goal_state, '222', channel=GoalStateChannel.WireServer, source=GoalStateSource.Fabric) def test_it_should_mark_outdated_goal_states(self): with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol: goal_state = GoalState(protocol.client) initial_incarnation = goal_state.incarnation initial_timestamp = 
goal_state.extensions_goal_state.created_on_timestamp # Make the most recent goal state FastTrack timestamp = datetime.datetime.utcnow() + datetime.timedelta(seconds=15) protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.FastTrack) protocol.mock_wire_data.set_etag('444444', timestamp) goal_state.update() # Update the goal state after the HGAP plugin stops supporting vmSettings def http_get_handler(url, *_, **__): if self.is_host_plugin_vm_settings_request(url): return MockHttpResponse(httpclient.NOT_FOUND) return None protocol.set_http_handlers(http_get_handler=http_get_handler) goal_state.update() self._assert_goal_state(goal_state, initial_incarnation, channel=GoalStateChannel.WireServer, source=GoalStateSource.Fabric) self.assertEqual(initial_timestamp, goal_state.extensions_goal_state.created_on_timestamp, "The timestamp of the updated goal state is incorrect") self.assertTrue(goal_state.extensions_goal_state.is_outdated, "The updated goal state should be marked as outdated") def test_it_should_raise_when_the_tenant_certificate_is_missing(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() with mock_wire_protocol(data_file) as protocol: data_file["vm_settings"] = "hostgaplugin/vm_settings-missing_cert.json" protocol.mock_wire_data.reload() protocol.mock_wire_data.set_etag(888) with self.assertRaises(GoalStateInconsistentError) as context: _ = GoalState(protocol.client) expected_message = "Certificate 59A10F50FFE2A0408D3F03FE336C8FD5716CF25C needed by Microsoft.OSTCExtensions.VMAccessForLinux is missing from the goal state" self.assertIn(expected_message, str(context.exception)) def test_it_should_download_certs_on_a_new_fast_track_goal_state(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() with mock_wire_protocol(data_file) as protocol: goal_state = GoalState(protocol.client) cert = "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F" crt_path = os.path.join(self.tmp_dir, cert + ".crt") prv_path = os.path.join(self.tmp_dir, cert + ".prv") # Check that crt and prv files are downloaded after processing goal state self.assertTrue(os.path.isfile(crt_path)) self.assertTrue(os.path.isfile(prv_path)) # Remove .crt file os.remove(crt_path) if os.path.isfile(crt_path): raise Exception("{0}.crt was not removed.".format(cert)) # Update goal state and check that .crt was downloaded protocol.mock_wire_data.set_etag(888) goal_state.update() self.assertTrue(os.path.isfile(crt_path)) def test_it_should_download_certs_on_a_new_fabric_goal_state(self): data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy() with mock_wire_protocol(data_file) as protocol: protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.Fabric) goal_state = GoalState(protocol.client) cert = "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F" crt_path = os.path.join(self.tmp_dir, cert + ".crt") prv_path = os.path.join(self.tmp_dir, cert + ".prv") # Check that crt and prv files are downloaded after processing goal state self.assertTrue(os.path.isfile(crt_path)) self.assertTrue(os.path.isfile(prv_path)) # Remove .crt file os.remove(crt_path) if os.path.isfile(crt_path): raise Exception("{0}.crt was not removed.".format(cert)) # Update goal state and check that .crt was downloaded protocol.mock_wire_data.set_incarnation(999) goal_state.update() self.assertTrue(os.path.isfile(crt_path)) def test_it_should_refresh_the_goal_state_when_it_is_inconsistent(self): # # Some scenarios can produce inconsistent goal states. 
For example, during hibernation/resume, the Fabric goal state changes (the # tenant certificate is re-generated when the VM is restarted) *without* the incarnation changing. If a Fast Track goal state # comes after that, the extensions will need the new certificate. This test simulates that scenario by mocking the certificates # request and returning first a set of certificates (certs-2.xml) that do not match those needed by the extensions, and then a # set (certs.xml) that does match. The test then ensures that the goal state was refreshed and the correct certificates were # fetched. # data_files = [ "wire/certs-2.xml", "wire/certs.xml" ] def http_get_handler(url, *_, **__): if HttpRequestPredicates.is_certificates_request(url): http_get_handler.certificate_requests += 1 if http_get_handler.certificate_requests < len(data_files): data = load_data(data_files[http_get_handler.certificate_requests - 1]) return MockHttpResponse(status=200, body=data.encode('utf-8')) return None http_get_handler.certificate_requests = 0 with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: protocol.set_http_handlers(http_get_handler=http_get_handler) protocol.mock_wire_data.reset_call_counts() goal_state = GoalState(protocol.client) self.assertEqual(2, protocol.mock_wire_data.call_counts['goalstate'], "There should have been exactly 2 requests for the goal state (original + refresh)") self.assertEqual(2, http_get_handler.certificate_requests, "There should have been exactly 2 requests for the goal state certificates (original + refresh)") thumbprints = [c.thumbprint for c in goal_state.certs.cert_list.certificates] for extension in goal_state.extensions_goal_state.extensions: for settings in extension.settings: if settings.protectedSettings is not None: self.assertIn(settings.certificateThumbprint, thumbprints, "Certificate is missing from the goal state.") def test_it_should_raise_when_goal_state_properties_not_initialized(self): with GoalStateTestCase._create_protocol_ws_and_hgap_in_sync() as protocol: goal_state = GoalState( protocol.client, goal_state_properties=~GoalStateProperties.All) goal_state.update() with self.assertRaises(ProtocolError) as context: _ = goal_state.container_id expected_message = "ContainerId is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.role_config_name expected_message = "RoleConfig is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.role_instance_id expected_message = "RoleInstanceId is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.extensions_goal_state expected_message = "ExtensionsGoalState is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.hosting_env expected_message = "HostingEnvironment is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.certs expected_message = "Certificates is not in goal state properties" self.assertIn(expected_message, str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.shared_conf expected_message = "SharedConfig is not in goal state properties" self.assertIn(expected_message, 
str(context.exception)) with self.assertRaises(ProtocolError) as context: _ = goal_state.remote_access expected_message = "RemoteAccessInfo is not in goal state properties" self.assertIn(expected_message, str(context.exception)) goal_state = GoalState( protocol.client, goal_state_properties=GoalStateProperties.All & ~GoalStateProperties.HostingEnv) goal_state.update() _ = goal_state.container_id, goal_state.role_instance_id, goal_state.role_config_name, \ goal_state.extensions_goal_state, goal_state.certs, goal_state.shared_conf, goal_state.remote_access with self.assertRaises(ProtocolError) as context: _ = goal_state.hosting_env expected_message = "HostingEnvironment is not in goal state properties" self.assertIn(expected_message, str(context.exception)) Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_healthservice.py000066400000000000000000000261241462617747000262670ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ import json from azurelinuxagent.common.exception import HttpError from azurelinuxagent.common.protocol.healthservice import Observation, HealthService from azurelinuxagent.common.utils import restutil from tests.common.protocol.test_hostplugin import MockResponse from tests.lib.tools import AgentTestCase, patch class TestHealthService(AgentTestCase): def assert_status_code(self, status_code, expected_healthy): response = MockResponse('response', status_code) is_healthy = not restutil.request_failed_at_hostplugin(response) self.assertEqual(expected_healthy, is_healthy) def assert_observation(self, call_args, name, is_healthy, value, description): endpoint = call_args[0][0] content = call_args[0][1] jo = json.loads(content) api = jo['Api'] source = jo['Source'] version = jo['Version'] obs = jo['Observations'] fo = obs[0] obs_name = fo['ObservationName'] obs_healthy = fo['IsHealthy'] obs_value = fo['Value'] obs_description = fo['Description'] self.assertEqual('application/json', call_args[1]['headers']['Content-Type']) self.assertEqual('http://endpoint:80/HealthService', endpoint) self.assertEqual('reporttargethealth', api) self.assertEqual('WALinuxAgent', source) self.assertEqual('1.0', version) self.assertEqual(name, obs_name) self.assertEqual(value, obs_value) self.assertEqual(is_healthy, obs_healthy) self.assertEqual(description, obs_description) def assert_telemetry(self, call_args, response=''): args, kw_args = call_args # pylint: disable=unused-variable self.assertFalse(kw_args['is_success']) self.assertEqual('HealthObservation', kw_args['op']) obs = json.loads(kw_args['message']) self.assertEqual(obs['Value'], response) def test_observation_validity(self): try: Observation(name=None, is_healthy=True) self.fail('Empty observation name should raise ValueError') except ValueError: pass try: Observation(name='Name', is_healthy=None) self.fail('Empty measurement should raise ValueError') except ValueError: pass o = Observation(name='Name', is_healthy=True, value=None, 
description=None) self.assertEqual('', o.value) self.assertEqual('', o.description) long_str = 's' * 200 o = Observation(name=long_str, is_healthy=True, value=long_str, description=long_str) self.assertEqual(200, len(o.name)) self.assertEqual(200, len(o.value)) self.assertEqual(200, len(o.description)) self.assertEqual(64, len(o.as_obj['ObservationName'])) self.assertEqual(128, len(o.as_obj['Value'])) self.assertEqual(128, len(o.as_obj['Description'])) def test_observation_json(self): health_service = HealthService('endpoint') health_service.observations.append(Observation(name='name', is_healthy=True, value='value', description='description')) expected_json = '{"Source": "WALinuxAgent", ' \ '"Api": "reporttargethealth", ' \ '"Version": "1.0", ' \ '"Observations": [{' \ '"Value": "value", ' \ '"ObservationName": "name", ' \ '"Description": "description", ' \ '"IsHealthy": true' \ '}]}' expected = sorted(json.loads(expected_json).items()) actual = sorted(json.loads(health_service.as_json).items()) self.assertEqual(expected, actual) @patch('azurelinuxagent.common.event.add_event') @patch("azurelinuxagent.common.utils.restutil.http_post") def test_reporting(self, patch_post, patch_add_event): health_service = HealthService('endpoint') health_service.report_host_plugin_status(is_healthy=True, response='response') self.assertEqual(1, patch_post.call_count) self.assertEqual(0, patch_add_event.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_STATUS_OBSERVATION_NAME, is_healthy=True, value='response', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_status(is_healthy=False, response='error') self.assertEqual(2, patch_post.call_count) self.assertEqual(1, patch_add_event.call_count) self.assert_telemetry(call_args=patch_add_event.call_args, response='error') self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_STATUS_OBSERVATION_NAME, is_healthy=False, value='error', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_extension_artifact(is_healthy=True, source='source', response='response') self.assertEqual(3, patch_post.call_count) self.assertEqual(1, patch_add_event.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_ARTIFACT_OBSERVATION_NAME, is_healthy=True, value='response', description='source') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_extension_artifact(is_healthy=False, source='source', response='response') self.assertEqual(4, patch_post.call_count) self.assertEqual(2, patch_add_event.call_count) self.assert_telemetry(call_args=patch_add_event.call_args, response='response') self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_ARTIFACT_OBSERVATION_NAME, is_healthy=False, value='response', description='source') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_heartbeat(is_healthy=True) self.assertEqual(5, patch_post.call_count) self.assertEqual(2, patch_add_event.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_HEARTBEAT_OBSERVATION_NAME, is_healthy=True, value='', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_heartbeat(is_healthy=False) self.assertEqual(3, patch_add_event.call_count) 
self.assert_telemetry(call_args=patch_add_event.call_args) self.assertEqual(6, patch_post.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_HEARTBEAT_OBSERVATION_NAME, is_healthy=False, value='', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_versions(is_healthy=True, response='response') self.assertEqual(7, patch_post.call_count) self.assertEqual(3, patch_add_event.call_count) self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_VERSIONS_OBSERVATION_NAME, is_healthy=True, value='response', description='') self.assertEqual(0, len(health_service.observations)) health_service.report_host_plugin_versions(is_healthy=False, response='response') self.assertEqual(8, patch_post.call_count) self.assertEqual(4, patch_add_event.call_count) self.assert_telemetry(call_args=patch_add_event.call_args, response='response') self.assert_observation(call_args=patch_post.call_args, name=HealthService.HOST_PLUGIN_VERSIONS_OBSERVATION_NAME, is_healthy=False, value='response', description='') self.assertEqual(0, len(health_service.observations)) patch_post.side_effect = HttpError() health_service.report_host_plugin_versions(is_healthy=True, response='') self.assertEqual(9, patch_post.call_count) self.assertEqual(4, patch_add_event.call_count) self.assertEqual(0, len(health_service.observations)) def test_observation_length(self): health_service = HealthService('endpoint') # make 100 observations for i in range(0, 100): health_service._observe(is_healthy=True, name='{0}'.format(i)) # ensure we keep only 10 self.assertEqual(10, len(health_service.observations)) # ensure we keep the most recent 10 self.assertEqual('90', health_service.observations[0].name) self.assertEqual('99', health_service.observations[9].name) def test_status_codes(self): # healthy self.assert_status_code(status_code=200, expected_healthy=True) self.assert_status_code(status_code=201, expected_healthy=True) self.assert_status_code(status_code=302, expected_healthy=True) self.assert_status_code(status_code=400, expected_healthy=True) self.assert_status_code(status_code=416, expected_healthy=True) self.assert_status_code(status_code=419, expected_healthy=True) self.assert_status_code(status_code=429, expected_healthy=True) self.assert_status_code(status_code=502, expected_healthy=True) # unhealthy self.assert_status_code(status_code=500, expected_healthy=False) self.assert_status_code(status_code=501, expected_healthy=False) self.assert_status_code(status_code=503, expected_healthy=False) self.assert_status_code(status_code=504, expected_healthy=False) Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_hostplugin.py000066400000000000000000001440021462617747000256310ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import base64 import contextlib import datetime import json import os.path import sys import unittest from azurelinuxagent.common.protocol import hostplugin, restapi, wire from azurelinuxagent.common import conf from azurelinuxagent.common.errorstate import ErrorState from azurelinuxagent.common.exception import HttpError, ResourceGoneError, ProtocolError from azurelinuxagent.common.future import ustr, httpclient from azurelinuxagent.common.osutil.default import UUID_PATTERN from azurelinuxagent.common.protocol.hostplugin import API_VERSION, _VmSettingsErrorReporter, VmSettingsNotSupported, VmSettingsSupportStopped from azurelinuxagent.common.protocol.extensions_goal_state import GoalStateSource from azurelinuxagent.common.protocol.goal_state import GoalState from azurelinuxagent.common.utils import restutil from azurelinuxagent.common.version import AGENT_VERSION, AGENT_NAME from tests.lib.mock_wire_protocol import mock_wire_protocol, wire_protocol_data, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.wire_protocol_data import DATA_FILE, DATA_FILE_NO_EXT from tests.lib.tools import AgentTestCase, PY_VERSION_MAJOR, Mock, patch hostplugin_status_url = "http://168.63.129.16:32526/status" hostplugin_versions_url = "http://168.63.129.16:32526/versions" health_service_url = 'http://168.63.129.16:80/HealthService' hostplugin_logs_url = "http://168.63.129.16:32526/vmAgentLog" sas_url = "http://sas_url" wireserver_url = "168.63.129.16" block_blob_type = 'BlockBlob' page_blob_type = 'PageBlob' api_versions = '["2015-09-01"]' storage_version = "2014-02-14" faux_status = "{ 'dummy' : 'data' }" faux_status_b64 = base64.b64encode(bytes(bytearray(faux_status, encoding='utf-8'))) if PY_VERSION_MAJOR > 2: faux_status_b64 = faux_status_b64.decode('utf-8') class TestHostPlugin(HttpRequestPredicates, AgentTestCase): def _init_host(self): with mock_wire_protocol(DATA_FILE) as protocol: host_plugin = wire.HostPluginProtocol(wireserver_url) GoalState.update_host_plugin_headers(protocol.client) self.assertTrue(host_plugin.health_service is not None) return host_plugin def _init_status_blob(self): wire_protocol_client = wire.WireProtocol(wireserver_url).client status_blob = wire_protocol_client.status_blob status_blob.data = faux_status status_blob.vm_status = restapi.VMStatus(message="Ready", status="Ready") return status_blob def _relax_timestamp(self, headers): new_headers = [] for header in headers: header_value = header['headerValue'] if header['headerName'] == 'x-ms-date': timestamp = header['headerValue'] header_value = timestamp[:timestamp.rfind(":")] new_header = {header['headerName']: header_value} new_headers.append(new_header) return new_headers def _compare_data(self, actual, expected): # Remove seconds from the timestamps for testing purposes, that level or granularity introduces test flakiness actual['headers'] = self._relax_timestamp(actual['headers']) expected['headers'] = self._relax_timestamp(expected['headers']) for k in iter(expected.keys()): if k == 'content' or k == 'requestUri': if actual[k] != expected[k]: print("Mismatch: Actual '{0}'='{1}', " "Expected '{0}'='{2}'".format(k, actual[k], expected[k])) return False elif k == 'headers': for h in expected['headers']: if not (h in actual['headers']): print("Missing Header: '{0}'".format(h)) return False else: print("Unexpected Key: '{0}'".format(k)) return False return True def _hostplugin_data(self, blob_headers, content=None): headers = [] for 
name in iter(blob_headers.keys()): headers.append({ 'headerName': name, 'headerValue': blob_headers[name] }) data = { 'requestUri': sas_url, 'headers': headers } if not content is None: s = base64.b64encode(bytes(content)) if PY_VERSION_MAJOR > 2: s = s.decode('utf-8') data['content'] = s return data def _hostplugin_headers(self, goal_state): return { 'x-ms-version': '2015-09-01', 'Content-type': 'application/json', 'x-ms-containerid': goal_state.container_id, 'x-ms-host-config-name': goal_state.role_config_name } def _validate_hostplugin_args(self, args, goal_state, exp_method, exp_url, exp_data): args, kwargs = args self.assertEqual(exp_method, args[0]) self.assertEqual(exp_url, args[1]) self.assertTrue(self._compare_data(json.loads(args[2]), exp_data)) headers = kwargs['headers'] self.assertEqual(headers['x-ms-containerid'], goal_state.container_id) self.assertEqual(headers['x-ms-host-config-name'], goal_state.role_config_name) @staticmethod @contextlib.contextmanager def create_mock_protocol(): data_file = DATA_FILE_NO_EXT.copy() data_file["ext_conf"] = "wire/ext_conf_no_extensions-page_blob.xml" with mock_wire_protocol(data_file) as protocol: status = restapi.VMStatus(status="Ready", message="Guest Agent is running") protocol.client.status_blob.set_vm_status(status) # Also, they mock WireClient.update_goal_state() to verify how it is called protocol.client.update_goal_state = Mock() yield protocol @patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_versions") @patch("azurelinuxagent.common.protocol.hostplugin.restutil.http_get") @patch("azurelinuxagent.common.protocol.hostplugin.add_event") def assert_ensure_initialized(self, patch_event, patch_http_get, patch_report_health, response_body, response_status_code, should_initialize, should_report_healthy): host = hostplugin.HostPluginProtocol(endpoint='ws') host.is_initialized = False patch_http_get.return_value = MockResponse(body=response_body, reason='reason', status_code=response_status_code) return_value = host.ensure_initialized() self.assertEqual(return_value, host.is_available) self.assertEqual(should_initialize, host.is_initialized) init_events = [kwargs for _, kwargs in patch_event.call_args_list if kwargs['op'] == 'InitializeHostPlugin'] self.assertEqual(1, len(init_events), 'Expected exactly 1 InitializeHostPlugin event') self.assertEqual(should_initialize, init_events[0]['is_success']) self.assertEqual(1, patch_report_health.call_count) self.assertEqual(should_report_healthy, patch_report_health.call_args[1]['is_healthy']) actual_response = patch_report_health.call_args[1]['response'] if should_initialize: self.assertEqual('', actual_response) else: self.assertTrue('HTTP Failed' in actual_response) self.assertTrue(response_body in actual_response) self.assertTrue(ustr(response_status_code) in actual_response) def test_ensure_initialized(self): """ Test calls to ensure_initialized """ self.assert_ensure_initialized(response_body=api_versions, # pylint: disable=no-value-for-parameter response_status_code=200, should_initialize=True, should_report_healthy=True) self.assert_ensure_initialized(response_body='invalid ip', # pylint: disable=no-value-for-parameter response_status_code=400, should_initialize=False, should_report_healthy=True) self.assert_ensure_initialized(response_body='generic bad request', # pylint: disable=no-value-for-parameter response_status_code=400, should_initialize=False, should_report_healthy=True) self.assert_ensure_initialized(response_body='resource gone', # pylint: 
disable=no-value-for-parameter response_status_code=410, should_initialize=False, should_report_healthy=True) self.assert_ensure_initialized(response_body='generic error', # pylint: disable=no-value-for-parameter response_status_code=500, should_initialize=False, should_report_healthy=False) self.assert_ensure_initialized(response_body='upstream error', # pylint: disable=no-value-for-parameter response_status_code=502, should_initialize=False, should_report_healthy=True) @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.ensure_initialized", return_value=True) @patch("azurelinuxagent.common.protocol.wire.StatusBlob.upload", return_value=False) @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol._put_page_blob_status") def test_default_channel(self, patch_put, patch_upload, _): """ Status now defaults to HostPlugin. Validate that any errors on the public channel are ignored. Validate that the default channel is never changed as part of status upload. """ with self.create_mock_protocol() as wire_protocol: wire.HostPluginProtocol.is_default_channel = False wire_protocol.client.update_goal_state() # act wire_protocol.client.upload_status_blob() # assert direct route is not called self.assertEqual(0, patch_upload.call_count, "Direct channel was used") # assert host plugin route is called self.assertEqual(1, patch_put.call_count, "Host plugin was not used") # assert update goal state is only called once self.assertEqual(1, wire_protocol.client.update_goal_state.call_count, "Unexpected call count") # ensure the correct url is used self.assertEqual(sas_url, patch_put.call_args[0][0]) # ensure host plugin is not set as default self.assertFalse(wire.HostPluginProtocol.is_default_channel) @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.ensure_initialized", return_value=True) @patch("azurelinuxagent.common.protocol.wire.StatusBlob.upload", return_value=True) @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol._put_page_blob_status", side_effect=HttpError("503")) def test_fallback_channel_503(self, patch_put, patch_upload, _): """ When host plugin returns a 503, we should fall back to the direct channel """ with self.create_mock_protocol() as wire_protocol: wire.HostPluginProtocol.is_default_channel = False wire_protocol.client.update_goal_state() # act wire_protocol.client.upload_status_blob() # assert direct route is called self.assertEqual(1, patch_upload.call_count, "Direct channel was not used") # assert host plugin route is called self.assertEqual(1, patch_put.call_count, "Host plugin was not used") # assert update goal state is only called once self.assertEqual(1, wire_protocol.client.update_goal_state.call_count, "Update goal state unexpected call count") # ensure the correct url is used self.assertEqual(sas_url, patch_put.call_args[0][0]) # ensure host plugin is not set as default self.assertFalse(wire.HostPluginProtocol.is_default_channel) @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.ensure_initialized", return_value=True) @patch("azurelinuxagent.common.protocol.wire.StatusBlob.upload", return_value=True) @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol._put_page_blob_status", side_effect=ResourceGoneError("410")) @patch("azurelinuxagent.common.protocol.wire.WireClient.update_host_plugin_from_goal_state") def test_fallback_channel_410(self, patch_refresh_host_plugin, patch_put, patch_upload, _): """ When host plugin returns a 410, we should force the goal state update and 
return """ with self.create_mock_protocol() as wire_protocol: wire.HostPluginProtocol.is_default_channel = False wire_protocol.client.update_goal_state() # act wire_protocol.client.upload_status_blob() # assert direct route is not called self.assertEqual(0, patch_upload.call_count, "Direct channel was used") # assert host plugin route is called self.assertEqual(1, patch_put.call_count, "Host plugin was not used") # assert update goal state is called, then update_host_plugin_from_goal_state is called self.assertEqual(1, wire_protocol.client.update_goal_state.call_count, "Update goal state unexpected call count") self.assertEqual(1, patch_refresh_host_plugin.call_count, "Refresh host plugin unexpected call count") # ensure the correct url is used self.assertEqual(sas_url, patch_put.call_args[0][0]) # ensure host plugin is not set as default self.assertFalse(wire.HostPluginProtocol.is_default_channel) @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.ensure_initialized", return_value=True) @patch("azurelinuxagent.common.protocol.wire.StatusBlob.upload", return_value=False) @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol._put_page_blob_status", side_effect=HttpError("500")) def test_fallback_channel_failure(self, patch_put, patch_upload, _): """ When host plugin returns a 500, and direct fails, we should raise a ProtocolError """ with self.create_mock_protocol() as wire_protocol: wire.HostPluginProtocol.is_default_channel = False wire_protocol.client.update_goal_state() # act self.assertRaises(wire.ProtocolError, wire_protocol.client.upload_status_blob) # assert direct route is not called self.assertEqual(1, patch_upload.call_count, "Direct channel was not used") # assert host plugin route is called self.assertEqual(1, patch_put.call_count, "Host plugin was not used") # assert update goal state is called twice self.assertEqual(1, wire_protocol.client.update_goal_state.call_count, "Update goal state unexpected call count") # ensure the correct url is used self.assertEqual(sas_url, patch_put.call_args[0][0]) # ensure host plugin is not set as default self.assertFalse(wire.HostPluginProtocol.is_default_channel) @patch("azurelinuxagent.common.event.add_event") def test_put_status_error_reporting(self, patch_add_event): """ Validate the telemetry when uploading status fails """ wire.HostPluginProtocol.is_default_channel = False with patch.object(wire.StatusBlob, "upload", return_value=False): with self.create_mock_protocol() as wire_protocol: wire_protocol_client = wire_protocol.client put_error = wire.HttpError("put status http error") with patch.object(restutil, "http_put", side_effect=put_error): with patch.object(wire.HostPluginProtocol, "ensure_initialized", return_value=True): self.assertRaises(wire.ProtocolError, wire_protocol_client.upload_status_blob) # The agent tries to upload via HostPlugin and that fails due to # http_put having a side effect of "put_error" # # The agent tries to upload using a direct connection, and that succeeds. self.assertEqual(1, wire_protocol_client.status_blob.upload.call_count) # pylint: disable=no-member # The agent never touches the default protocol is this code path, so no change. 
self.assertFalse(wire.HostPluginProtocol.is_default_channel) # The agent never logs telemetry event for direct fallback self.assertEqual(1, patch_add_event.call_count) self.assertEqual('ReportStatus', patch_add_event.call_args[1]['op']) self.assertTrue('Falling back to direct' in patch_add_event.call_args[1]['message']) self.assertEqual(True, patch_add_event.call_args[1]['is_success']) def test_validate_http_request_when_uploading_status(self): """Validate correct set of data is sent to HostGAPlugin when reporting VM status""" with mock_wire_protocol(DATA_FILE) as protocol: test_goal_state = protocol.client._goal_state plugin = protocol.client.get_host_plugin() status_blob = protocol.client.status_blob status_blob.data = faux_status status_blob.vm_status = restapi.VMStatus(message="Ready", status="Ready") exp_method = 'PUT' exp_url = hostplugin_status_url exp_data = self._hostplugin_data( status_blob.get_block_blob_headers(len(faux_status)), bytearray(faux_status, encoding='utf-8')) with patch.object(restutil, "http_request") as patch_http: patch_http.return_value = Mock(status=httpclient.OK) with patch.object(plugin, 'get_api_versions') as patch_api: patch_api.return_value = API_VERSION plugin.put_vm_status(status_blob, sas_url, block_blob_type) self.assertTrue(patch_http.call_count == 2) # first call is to host plugin self._validate_hostplugin_args( patch_http.call_args_list[0], test_goal_state, exp_method, exp_url, exp_data) # second call is to health service self.assertEqual('POST', patch_http.call_args_list[1][0][0]) self.assertEqual(health_service_url, patch_http.call_args_list[1][0][1]) def test_validate_block_blob(self): with mock_wire_protocol(DATA_FILE) as protocol: host_client = protocol.client.get_host_plugin() self.assertFalse(host_client.is_initialized) self.assertTrue(host_client.api_versions is None) self.assertTrue(host_client.health_service is not None) status_blob = protocol.client.status_blob status_blob.data = faux_status status_blob.type = block_blob_type status_blob.vm_status = restapi.VMStatus(message="Ready", status="Ready") exp_method = 'PUT' exp_url = hostplugin_status_url exp_data = self._hostplugin_data( status_blob.get_block_blob_headers(len(faux_status)), bytearray(faux_status, encoding='utf-8')) with patch.object(restutil, "http_request") as patch_http: patch_http.return_value = Mock(status=httpclient.OK) with patch.object(wire.HostPluginProtocol, "get_api_versions") as patch_get: patch_get.return_value = api_versions host_client.put_vm_status(status_blob, sas_url) self.assertTrue(patch_http.call_count == 2) # first call is to host plugin self._validate_hostplugin_args( patch_http.call_args_list[0], protocol.get_goal_state(), exp_method, exp_url, exp_data) # second call is to health service self.assertEqual('POST', patch_http.call_args_list[1][0][0]) self.assertEqual(health_service_url, patch_http.call_args_list[1][0][1]) def test_validate_page_blobs(self): """Validate correct set of data is sent for page blobs""" with mock_wire_protocol(DATA_FILE) as protocol: test_goal_state = protocol.get_goal_state() host_client = protocol.client.get_host_plugin() self.assertFalse(host_client.is_initialized) self.assertTrue(host_client.api_versions is None) status_blob = protocol.client.status_blob status_blob.data = faux_status status_blob.type = page_blob_type status_blob.vm_status = restapi.VMStatus(message="Ready", status="Ready") exp_method = 'PUT' exp_url = hostplugin_status_url page_status = bytearray(status_blob.data, encoding='utf-8') page_size = 
int((len(page_status) + 511) / 512) * 512 page_status = bytearray(status_blob.data.ljust(page_size), encoding='utf-8') page = bytearray(page_size) page[0: page_size] = page_status[0: len(page_status)] mock_response = MockResponse('', httpclient.OK) with patch.object(restutil, "http_request", return_value=mock_response) as patch_http: with patch.object(wire.HostPluginProtocol, "get_api_versions") as patch_get: patch_get.return_value = api_versions host_client.put_vm_status(status_blob, sas_url) self.assertTrue(patch_http.call_count == 3) # first call is to host plugin exp_data = self._hostplugin_data( status_blob.get_page_blob_create_headers( page_size)) self._validate_hostplugin_args( patch_http.call_args_list[0], test_goal_state, exp_method, exp_url, exp_data) # second call is to health service self.assertEqual('POST', patch_http.call_args_list[1][0][0]) self.assertEqual(health_service_url, patch_http.call_args_list[1][0][1]) # last call is to host plugin exp_data = self._hostplugin_data( status_blob.get_page_blob_page_headers( 0, page_size), page) exp_data['requestUri'] += "?comp=page" self._validate_hostplugin_args( patch_http.call_args_list[2], test_goal_state, exp_method, exp_url, exp_data) def test_validate_http_request_for_put_vm_log(self): def http_put_handler(url, *args, **kwargs): # pylint: disable=inconsistent-return-statements if self.is_host_plugin_put_logs_request(url): http_put_handler.args, http_put_handler.kwargs = args, kwargs return MockResponse(body=b'', status_code=200) http_put_handler.args, http_put_handler.kwargs = [], {} with mock_wire_protocol(DATA_FILE, http_put_handler=http_put_handler) as protocol: test_goal_state = protocol.get_goal_state() expected_url = hostplugin.URI_FORMAT_PUT_LOG.format(wireserver_url, hostplugin.HOST_PLUGIN_PORT) expected_headers = {'x-ms-version': '2015-09-01', "x-ms-containerid": test_goal_state.container_id, "x-ms-vmagentlog-deploymentid": test_goal_state.role_config_name.split(".")[0], "x-ms-client-name": AGENT_NAME, "x-ms-client-version": AGENT_VERSION} host_client = protocol.client.get_host_plugin() self.assertFalse(host_client.is_initialized, "Host plugin should not be initialized!") content = b"test" host_client.put_vm_log(content) self.assertTrue(host_client.is_initialized, "Host plugin is not initialized!") urls = protocol.get_tracked_urls() self.assertEqual(expected_url, urls[0], "Unexpected request URL!") self.assertEqual(content, http_put_handler.args[0], "Unexpected content for HTTP PUT request!") headers = http_put_handler.kwargs['headers'] for k in expected_headers: self.assertTrue(k in headers, "Header {0} not found in headers!".format(k)) self.assertEqual(expected_headers[k], headers[k], "Request headers don't match!") # Special check for correlation id header value, check for pattern, not exact value self.assertTrue("x-ms-client-correlationid" in headers.keys(), "Correlation id not found in headers!") self.assertTrue(UUID_PATTERN.match(headers["x-ms-client-correlationid"]), "Correlation id is not in GUID form!") def test_put_vm_log_should_raise_an_exception_when_request_fails(self): def http_put_handler(url, *args, **kwargs): # pylint: disable=inconsistent-return-statements if self.is_host_plugin_put_logs_request(url): http_put_handler.args, http_put_handler.kwargs = args, kwargs return MockResponse(body=ustr('Gone'), status_code=410) http_put_handler.args, http_put_handler.kwargs = [], {} with mock_wire_protocol(DATA_FILE, http_put_handler=http_put_handler) as protocol: host_client = 
wire.HostPluginProtocol(wireserver_url) GoalState.update_host_plugin_headers(protocol.client) self.assertFalse(host_client.is_initialized, "Host plugin should not be initialized!") with self.assertRaises(HttpError) as context_manager: content = b"test" host_client.put_vm_log(content) self.assertIsInstance(context_manager.exception, HttpError) self.assertIn("410", ustr(context_manager.exception)) self.assertIn("Gone", ustr(context_manager.exception)) def test_validate_get_extension_artifacts(self): with mock_wire_protocol(DATA_FILE) as protocol: test_goal_state = protocol.get_goal_state() expected_url = hostplugin.URI_FORMAT_GET_EXTENSION_ARTIFACT.format(wireserver_url, hostplugin.HOST_PLUGIN_PORT) expected_headers = {'x-ms-version': '2015-09-01', "x-ms-containerid": test_goal_state.container_id, "x-ms-host-config-name": test_goal_state.role_config_name, "x-ms-artifact-location": sas_url} host_client = protocol.client.get_host_plugin() self.assertFalse(host_client.is_initialized) self.assertTrue(host_client.api_versions is None) self.assertTrue(host_client.health_service is not None) with patch.object(wire.HostPluginProtocol, "get_api_versions", return_value=api_versions) as patch_get: # pylint: disable=unused-variable actual_url, actual_headers = host_client.get_artifact_request(sas_url, use_verify_header=False) self.assertTrue(host_client.is_initialized) self.assertFalse(host_client.api_versions is None) self.assertEqual(expected_url, actual_url) for k in expected_headers: self.assertTrue(k in actual_headers) self.assertEqual(expected_headers[k], actual_headers[k]) def test_health(self): host_plugin = self._init_host() with patch("azurelinuxagent.common.utils.restutil.http_get") as patch_http_get: patch_http_get.return_value = MockResponse('', 200) result = host_plugin.get_health() self.assertEqual(1, patch_http_get.call_count) self.assertTrue(result) patch_http_get.return_value = MockResponse('', 500) result = host_plugin.get_health() self.assertFalse(result) patch_http_get.side_effect = IOError('client IO error') try: host_plugin.get_health() self.fail('IO error expected to be raised') except IOError: # expected pass def test_ensure_health_service_called(self): host_plugin = self._init_host() with patch("azurelinuxagent.common.utils.restutil.http_get", return_value=MockHttpResponse(200)) as patch_http_get: with patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_versions") as patch_report_versions: host_plugin.get_api_versions() self.assertEqual(1, patch_http_get.call_count) self.assertEqual(1, patch_report_versions.call_count) def test_put_status_healthy_signal(self): host_plugin = self._init_host() with patch("azurelinuxagent.common.utils.restutil.http_get") as patch_http_get: with patch("azurelinuxagent.common.utils.restutil.http_post") as patch_http_post: with patch("azurelinuxagent.common.utils.restutil.http_put") as patch_http_put: status_blob = self._init_status_blob() # get_api_versions patch_http_get.return_value = MockResponse(api_versions, 200) # put status blob patch_http_put.return_value = MockResponse(None, 201) host_plugin.put_vm_status(status_blob=status_blob, sas_url=sas_url) get_versions = [args for args in patch_http_get.call_args_list if args[0][0] == hostplugin_versions_url] self.assertEqual(1, len(get_versions), "Expected exactly 1 GET on {0}".format(hostplugin_versions_url)) self.assertEqual(2, patch_http_put.call_count) self.assertEqual(hostplugin_status_url, patch_http_put.call_args_list[0][0][0]) 
                    self.assertEqual(hostplugin_status_url, patch_http_put.call_args_list[1][0][0])

                    self.assertEqual(2, patch_http_post.call_count)

                    # signal for /versions
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[0][0][0])
                    jstr = patch_http_post.call_args_list[0][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginVersions', obj['Observations'][0]['ObservationName'])

                    # signal for /status
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[1][0][0])
                    jstr = patch_http_post.call_args_list[1][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginStatus', obj['Observations'][0]['ObservationName'])

    def test_put_status_unhealthy_signal_transient(self):
        host_plugin = self._init_host()

        with patch("azurelinuxagent.common.utils.restutil.http_get") as patch_http_get:
            with patch("azurelinuxagent.common.utils.restutil.http_post") as patch_http_post:
                with patch("azurelinuxagent.common.utils.restutil.http_put") as patch_http_put:
                    status_blob = self._init_status_blob()
                    # get_api_versions
                    patch_http_get.return_value = MockResponse(api_versions, 200)
                    # put status blob
                    patch_http_put.return_value = MockResponse(None, 500)

                    with self.assertRaises(HttpError):
                        host_plugin.put_vm_status(status_blob=status_blob, sas_url=sas_url)

                    get_versions = [args for args in patch_http_get.call_args_list if args[0][0] == hostplugin_versions_url]
                    self.assertEqual(1, len(get_versions), "Expected exactly 1 GET on {0}".format(hostplugin_versions_url))

                    self.assertEqual(1, patch_http_put.call_count)
                    self.assertEqual(hostplugin_status_url, patch_http_put.call_args[0][0])

                    self.assertEqual(2, patch_http_post.call_count)

                    # signal for /versions
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[0][0][0])
                    jstr = patch_http_post.call_args_list[0][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginVersions', obj['Observations'][0]['ObservationName'])

                    # signal for /status
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[1][0][0])
                    jstr = patch_http_post.call_args_list[1][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginStatus', obj['Observations'][0]['ObservationName'])

    def test_put_status_unhealthy_signal_permanent(self):
        host_plugin = self._init_host()

        with patch("azurelinuxagent.common.utils.restutil.http_get") as patch_http_get:
            with patch("azurelinuxagent.common.utils.restutil.http_post") as patch_http_post:
                with patch("azurelinuxagent.common.utils.restutil.http_put") as patch_http_put:
                    status_blob = self._init_status_blob()
                    # get_api_versions
                    patch_http_get.return_value = MockResponse(api_versions, 200)
                    # put status blob
                    patch_http_put.return_value = MockResponse(None, 500)

                    host_plugin.status_error_state.is_triggered = Mock(return_value=True)

                    with self.assertRaises(HttpError):
                        host_plugin.put_vm_status(status_blob=status_blob, sas_url=sas_url)

                    get_versions = [args for args in patch_http_get.call_args_list if args[0][0] == hostplugin_versions_url]
                    self.assertEqual(1, len(get_versions), "Expected exactly 1 GET on {0}".format(hostplugin_versions_url))

                    self.assertEqual(1, patch_http_put.call_count)
                    self.assertEqual(hostplugin_status_url, patch_http_put.call_args[0][0])

                    self.assertEqual(2, patch_http_post.call_count)

                    # signal for /versions
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[0][0][0])
                    jstr = patch_http_post.call_args_list[0][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertTrue(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginVersions', obj['Observations'][0]['ObservationName'])

                    # signal for /status
                    self.assertEqual(health_service_url, patch_http_post.call_args_list[1][0][0])
                    jstr = patch_http_post.call_args_list[1][0][1]
                    obj = json.loads(jstr)
                    self.assertEqual(1, len(obj['Observations']))
                    self.assertFalse(obj['Observations'][0]['IsHealthy'])
                    self.assertEqual('GuestAgentPluginStatus', obj['Observations'][0]['ObservationName'])

    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.should_report", return_value=True)
    @patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_extension_artifact")
    def test_report_fetch_health(self, patch_report_artifact, patch_should_report):
        host_plugin = self._init_host()
        host_plugin.report_fetch_health(uri='', is_healthy=True)
        self.assertEqual(0, patch_should_report.call_count)

        host_plugin.report_fetch_health(uri='http://169.254.169.254/extensionArtifact', is_healthy=True)
        self.assertEqual(0, patch_should_report.call_count)

        host_plugin.report_fetch_health(uri='http://168.63.129.16:32526/status', is_healthy=True)
        self.assertEqual(0, patch_should_report.call_count)

        self.assertEqual(None, host_plugin.fetch_last_timestamp)
        host_plugin.report_fetch_health(uri='http://168.63.129.16:32526/extensionArtifact', is_healthy=True)
        self.assertNotEqual(None, host_plugin.fetch_last_timestamp)
        self.assertEqual(1, patch_should_report.call_count)
        self.assertEqual(1, patch_report_artifact.call_count)

    @patch("azurelinuxagent.common.protocol.hostplugin.HostPluginProtocol.should_report", return_value=True)
    @patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_status")
    def test_report_status_health(self, patch_report_status, patch_should_report):
        host_plugin = self._init_host()
        self.assertEqual(None, host_plugin.status_last_timestamp)
        host_plugin.report_status_health(is_healthy=True)
        self.assertNotEqual(None, host_plugin.status_last_timestamp)
        self.assertEqual(1, patch_should_report.call_count)
        self.assertEqual(1, patch_report_status.call_count)

    def test_should_report(self):
        host_plugin = self._init_host()
        error_state = ErrorState(min_timedelta=datetime.timedelta(minutes=5))
        period = datetime.timedelta(minutes=1)
        last_timestamp = None

        # first measurement at 0s, should report
        is_healthy = True
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(True, actual)

        # second measurement at 30s, should not report
        last_timestamp = datetime.datetime.utcnow() - datetime.timedelta(seconds=30)
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(False, actual)

        # third measurement at 60s, should report
        last_timestamp = datetime.datetime.utcnow() - datetime.timedelta(seconds=60)
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(True, actual)

        # fourth measurement unhealthy, should report and increment counter
        is_healthy = False
        self.assertEqual(0, error_state.count)
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(1, error_state.count)
        self.assertEqual(True, actual)

        # fifth measurement, should not report and reset counter
        is_healthy = True
        last_timestamp = datetime.datetime.utcnow() - datetime.timedelta(seconds=30)
        self.assertEqual(1, error_state.count)
        actual = host_plugin.should_report(is_healthy, error_state, last_timestamp, period)
        self.assertEqual(0, error_state.count)
        self.assertEqual(False, actual)
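
# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the agent or its test suite): a minimal
# re-implementation of the reporting contract that test_should_report pins
# down above. All names here are hypothetical stand-ins; the real logic is
# HostPluginProtocol.should_report and ErrorState in the agent's sources.
# ----------------------------------------------------------------------------
import datetime


class _ErrorStateSketch(object):
    """Hypothetical stand-in for ErrorState: counts consecutive failures."""

    def __init__(self):
        self.count = 0

    def incr(self):
        self.count += 1

    def reset(self):
        self.count = 0


def _should_report_sketch(is_healthy, error_state, last_timestamp, period):
    """Report unhealthy signals immediately (and count them); report healthy
    signals at most once per 'period', resetting the failure counter."""
    if not is_healthy:
        error_state.incr()
        return True
    error_state.reset()
    if last_timestamp is None:
        return True  # the first measurement is always reported
    return datetime.datetime.utcnow() >= last_timestamp + period
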

class TestHostPluginVmSettings(HttpRequestPredicates, AgentTestCase):
    def test_it_should_raise_protocol_error_when_the_vm_settings_request_fails(self):
        def http_get_handler(url, *_, **__):
            if self.is_host_plugin_vm_settings_request(url):
                return MockHttpResponse(httpclient.INTERNAL_SERVER_ERROR, body="TEST ERROR")
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            with self.assertRaisesRegexCM(ProtocolError, r'GET vmSettings \[correlation ID: .* eTag: .*\]: \[HTTP Failed\] \[500: None].*TEST ERROR.*'):
                protocol.client.get_host_plugin().fetch_vm_settings()

    @staticmethod
    def _fetch_vm_settings_ignoring_errors(protocol):
        try:
            protocol.client.get_host_plugin().fetch_vm_settings()
        except (ProtocolError, VmSettingsNotSupported):
            pass

    def test_it_should_keep_track_of_errors_in_vm_settings_requests(self):
        mock_response = None

        def http_get_handler(url, *_, **__):
            if self.is_host_plugin_vm_settings_request(url):
                if isinstance(mock_response, Exception):
                    # E0702: Raising NoneType while only classes or instances are allowed (raising-bad-type) - Disabled: we never raise None
                    raise mock_response  # pylint: disable=raising-bad-type
                return mock_response
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS, http_get_handler=http_get_handler) as protocol:
            mock_response = MockHttpResponse(httpclient.INTERNAL_SERVER_ERROR)
            self._fetch_vm_settings_ignoring_errors(protocol)

            mock_response = MockHttpResponse(httpclient.BAD_REQUEST)
            self._fetch_vm_settings_ignoring_errors(protocol)
            self._fetch_vm_settings_ignoring_errors(protocol)

            mock_response = IOError("timed out")
            self._fetch_vm_settings_ignoring_errors(protocol)

            mock_response = httpclient.HTTPException()
            self._fetch_vm_settings_ignoring_errors(protocol)
            self._fetch_vm_settings_ignoring_errors(protocol)

            # force the summary by resetting its period and calling update_goal_state
            with patch("azurelinuxagent.common.protocol.hostplugin.add_event") as add_event:
                mock_response = None  # stop producing errors
                protocol.client._host_plugin._vm_settings_error_reporter._next_period = datetime.datetime.now()
                self._fetch_vm_settings_ignoring_errors(protocol)

            summary_text = [kwargs["message"] for _, kwargs in add_event.call_args_list if kwargs["op"] == "VmSettingsSummary"]

            self.assertEqual(1, len(summary_text), "Exactly 1 summary should have been produced. Got: {0} ".format(summary_text))

            summary = json.loads(summary_text[0])

            expected = {
                "requests": 6 + 2,  # two extra calls to update_goal_state (when creating the mock protocol and when forcing the summary)
                "errors": 6,
                "serverErrors": 1,
                "clientErrors": 2,
                "timeouts": 1,
                "failedRequests": 2
            }

            self.assertEqual(expected, summary, "The count of errors is incorrect")

    def test_it_should_limit_the_number_of_errors_it_reports(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            def http_get_handler(url, *_, **__):
                if self.is_host_plugin_vm_settings_request(url):
                    return MockHttpResponse(httpclient.BAD_GATEWAY)  # HostGAPlugin returns 502 for internal errors
                return None

            protocol.set_http_handlers(http_get_handler=http_get_handler)

            def get_telemetry_messages():
                return [kwargs["message"] for _, kwargs in add_event.call_args_list if kwargs["op"] == "VmSettings"]

            with patch("azurelinuxagent.common.protocol.hostplugin.add_event") as add_event:
                for _ in range(_VmSettingsErrorReporter._MaxErrors + 3):
                    self._fetch_vm_settings_ignoring_errors(protocol)

                telemetry_messages = get_telemetry_messages()
                self.assertEqual(_VmSettingsErrorReporter._MaxErrors, len(telemetry_messages), "The number of errors reported to telemetry is not the max allowed (got: {0})".format(telemetry_messages))

            # Reset the error reporter and verify that additional errors are reported
            protocol.client.get_host_plugin()._vm_settings_error_reporter._next_period = datetime.datetime.now()
            self._fetch_vm_settings_ignoring_errors(protocol)  # this triggers the reset

            with patch("azurelinuxagent.common.protocol.hostplugin.add_event") as add_event:
                self._fetch_vm_settings_ignoring_errors(protocol)

                telemetry_messages = get_telemetry_messages()
                self.assertEqual(1, len(telemetry_messages), "Expected additional errors to be reported to telemetry in the next period (got: {0})".format(telemetry_messages))

    def test_it_should_stop_issuing_vm_settings_requests_when_api_is_not_supported(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            def http_get_handler(url, *_, **__):
                if self.is_host_plugin_vm_settings_request(url):
                    return MockHttpResponse(httpclient.NOT_FOUND)  # HostGAPlugin returns 404 if the API is not supported
                return None

            protocol.set_http_handlers(http_get_handler=http_get_handler)

            def get_vm_settings_call_count():
                return len([url for url in protocol.get_tracked_urls() if "vmSettings" in url])

            self._fetch_vm_settings_ignoring_errors(protocol)
            self.assertEqual(1, get_vm_settings_call_count(), "There should have been an initial call to vmSettings.")

            self._fetch_vm_settings_ignoring_errors(protocol)
            self._fetch_vm_settings_ignoring_errors(protocol)
            self.assertEqual(1, get_vm_settings_call_count(), "Additional calls to update_goal_state should not have produced extra calls to vmSettings.")

            # reset the vmSettings check period; this should restart the calls to the API
            protocol.client._host_plugin._supports_vm_settings_next_check = datetime.datetime.now()
            protocol.client.update_goal_state()
            self.assertEqual(2, get_vm_settings_call_count(), "A second call to vmSettings was expected after the check period elapsed.")

    def test_it_should_raise_when_the_vm_settings_api_stops_being_supported(self):
        def http_get_handler(url, *_, **__):
            if self.is_host_plugin_vm_settings_request(url):
                return MockHttpResponse(httpclient.NOT_FOUND)  # HostGAPlugin returns 404 if the API is not supported
            return None

        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            host_ga_plugin = protocol.client.get_host_plugin()

            # Do an initial call to ensure the API is supported
            vm_settings, _ = host_ga_plugin.fetch_vm_settings()

            # Now return NOT_FOUND to indicate the API is not supported
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            with self.assertRaises(VmSettingsSupportStopped) as cm:
                host_ga_plugin.fetch_vm_settings()

            self.assertEqual(vm_settings.created_on_timestamp, cm.exception.timestamp)

    def test_it_should_save_the_timestamp_of_the_most_recent_fast_track_goal_state(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol:
            host_ga_plugin = protocol.client.get_host_plugin()

            vm_settings, _ = host_ga_plugin.fetch_vm_settings()

            state_file = os.path.join(conf.get_lib_dir(), "fast_track.json")
            self.assertTrue(os.path.exists(state_file), "The timestamp was not saved (can't find {0})".format(state_file))

            with open(state_file, "r") as state_file_:
                state = json.load(state_file_)

            self.assertEqual(vm_settings.created_on_timestamp, state["timestamp"], "{0} does not contain the expected timestamp".format(state_file))

            # A fabric goal state should remove the state file
            protocol.mock_wire_data.set_vm_settings_source(GoalStateSource.Fabric)
            protocol.mock_wire_data.set_etag(888)

            _ = host_ga_plugin.fetch_vm_settings()

            self.assertFalse(os.path.exists(state_file), "{0} was not removed by a Fabric goal state".format(state_file))


class MockResponse:
    def __init__(self, body, status_code, reason=''):
        self.body = body
        self.status = status_code
        self.reason = reason

    def read(self):
        return self.body if sys.version_info[0] == 2 else bytes(self.body, encoding='utf-8')


if __name__ == '__main__':
    unittest.main()
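
# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the repository): a minimal error reporter
# in the spirit of what TestHostPluginVmSettings above exercises. It buckets
# vmSettings failures, caps how many individual errors go to telemetry per
# period, and emits a single JSON summary when the period rolls over. The
# class name, the cap, and the 'emit' callback are hypothetical; the real
# implementation is _VmSettingsErrorReporter in
# azurelinuxagent.common.protocol.hostplugin.
# ----------------------------------------------------------------------------
import datetime
import json


class _VmSettingsErrorReporterSketch(object):
    _MAX_ERRORS_PER_PERIOD = 3  # assumed cap, for illustration only

    def __init__(self, emit, period=datetime.timedelta(hours=1)):
        self._emit = emit  # callable(op, message), stand-in for add_event
        self._period = period
        self._next_period = datetime.datetime.now() + period
        self._reset()

    def _reset(self):
        self._counts = {"requests": 0, "errors": 0, "serverErrors": 0,
                        "clientErrors": 0, "timeouts": 0, "failedRequests": 0}
        self._reported = 0

    def report_request(self):
        self._counts["requests"] += 1

    def report_error(self, category, message):
        self._counts["errors"] += 1
        self._counts[category] += 1
        if self._reported < self._MAX_ERRORS_PER_PERIOD:
            self._reported += 1
            self._emit("VmSettings", message)

    def report_summary_if_due(self):
        if datetime.datetime.now() >= self._next_period:
            self._emit("VmSettingsSummary", json.dumps(self._counts))
            self._next_period = datetime.datetime.now() + self._period
            self._reset()
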
# ============================================================================
# File: Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_image_info_matcher.py
# ============================================================================
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+

import unittest

from azurelinuxagent.common.protocol.imds import ImageInfoMatcher


class TestImageInfoMatcher(unittest.TestCase):
    def test_image_does_not_exist(self):
        doc = '{}'

        test_subject = ImageInfoMatcher(doc)
        self.assertFalse(test_subject.is_match("Red Hat", "RHEL", "6.3", ""))

    def test_image_exists_by_sku(self):
        doc = '''{
            "CANONICAL": {
                "UBUNTUSERVER": {
                    "16.04-LTS": { "Match": ".*" }
                }
            }
        }'''

        test_subject = ImageInfoMatcher(doc)
        self.assertTrue(test_subject.is_match("Canonical", "UbuntuServer", "16.04-LTS", ""))
        self.assertTrue(test_subject.is_match("Canonical", "UbuntuServer", "16.04-LTS", "16.04.201805090"))
        self.assertFalse(test_subject.is_match("Canonical", "UbuntuServer", "14.04.0-LTS", "16.04.201805090"))

    def test_image_exists_by_version(self):
        doc = '''{
            "REDHAT": {
                "RHEL": {
                    "Minimum": "6.3"
                }
            }
        }'''

        test_subject = ImageInfoMatcher(doc)
        self.assertFalse(test_subject.is_match("RedHat", "RHEL", "6.1", ""))
        self.assertFalse(test_subject.is_match("RedHat", "RHEL", "6.2", ""))
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.3", ""))
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.4", ""))
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.5", ""))
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "7.0", ""))
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "7.1", ""))

    def test_image_exists_by_version01(self):
        """
        Test case to ensure the matcher exhaustively searches all cases.

        REDHAT/RHEL with a SKU >= 6.3 is less precise than REDHAT/RHEL/7-LVM with
        any version. Both should return a successful match.
        """
        doc = '''{
            "REDHAT": {
                "RHEL": {
                    "Minimum": "6.3",
                    "7-LVM": { "Match": ".*" }
                }
            }
        }'''

        test_subject = ImageInfoMatcher(doc)
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.3", ""))
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "7-LVM", ""))

    def test_ignores_case(self):
        doc = '''{
            "CANONICAL": {
                "UBUNTUSERVER": {
                    "16.04-LTS": { "Match": ".*" }
                }
            }
        }'''

        test_subject = ImageInfoMatcher(doc)
        self.assertTrue(test_subject.is_match("canonical", "ubuntuserver", "16.04-lts", ""))
        self.assertFalse(test_subject.is_match("canonical", "ubuntuserver", "14.04.0-lts", "16.04.201805090"))

    def test_list_operator(self):
        doc = '''{
            "CANONICAL": {
                "UBUNTUSERVER": {
                    "List": [ "14.04.0-LTS", "14.04.1-LTS" ]
                }
            }
        }'''

        test_subject = ImageInfoMatcher(doc)
        self.assertTrue(test_subject.is_match("Canonical", "UbuntuServer", "14.04.0-LTS", ""))
        self.assertTrue(test_subject.is_match("Canonical", "UbuntuServer", "14.04.1-LTS", ""))
        self.assertFalse(test_subject.is_match("Canonical", "UbuntuServer", "22.04-LTS", ""))

    def test_invalid_version(self):
        doc = '''{
            "REDHAT": {
                "RHEL": {
                    "Minimum": "6.3"
                }
            }
        }'''

        test_subject = ImageInfoMatcher(doc)
        self.assertFalse(test_subject.is_match("RedHat", "RHEL", "16.04-LTS", ""))

        # This is *expected* behavior as opposed to desirable. The specification is
        # controlled by the agent, so there is no reason to use these values, but if
        # one does this is expected behavior.
        #
        # FlexibleVersion chops off all leading zeros.
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", "6.04", ""))
        # FlexibleVersion coerces everything to a string
        self.assertTrue(test_subject.is_match("RedHat", "RHEL", 6.04, ""))


if __name__ == '__main__':
    unittest.main()
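
# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the repository): a behavioral reduction of
# the spec format exercised by TestImageInfoMatcher above -- case-insensitive
# publisher/offer lookup with three operators: "Minimum" (a numeric version
# floor), "List" (exact SKU membership) and "Match" (a regex applied to the
# version). All helper names are hypothetical; the real class is
# azurelinuxagent.common.protocol.imds.ImageInfoMatcher.
# ----------------------------------------------------------------------------
import json
import re


def _version_tuple_sketch(value):
    """Tiny stand-in for FlexibleVersion: accept only dot-separated numeric
    fields, so '6.04' parses but '16.04-LTS' raises, as the tests expect."""
    fields = str(value).split(".")
    if not fields or not all(field.isdigit() for field in fields):
        raise ValueError("not a plain numeric version: {0}".format(value))
    return tuple(int(field) for field in fields)


def _is_match_sketch(doc, publisher, offer, sku, version):
    """Return True when (publisher, offer, sku, version) satisfies the spec."""
    node = json.loads(doc).get(publisher.upper(), {}).get(offer.upper())
    if node is None:
        return False
    if "Minimum" in node:
        try:
            if _version_tuple_sketch(sku) >= _version_tuple_sketch(node["Minimum"]):
                return True
        except ValueError:
            pass  # the SKU is not a plain version; fall through to other operators
    if sku in node.get("List", []):
        return True
    sku_node = node.get(sku.upper())
    if isinstance(sku_node, dict) and "Match" in sku_node:
        return re.match(sku_node["Match"], version) is not None
    return False
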
# ============================================================================
# File: Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_imds.py
# ============================================================================
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+

import json
import os
import unittest

from azurelinuxagent.common.protocol import imds
from azurelinuxagent.common.datacontract import set_properties
from azurelinuxagent.common.exception import HttpError, ResourceGoneError
from azurelinuxagent.common.future import ustr, httpclient
from azurelinuxagent.common.utils import restutil
from tests.lib.mock_wire_protocol import MockHttpResponse
from tests.lib.tools import AgentTestCase, data_dir, MagicMock, Mock, patch


def get_mock_compute_response():
    return MockHttpResponse(status=httpclient.OK, body='''{
        "location": "westcentralus",
        "name": "unit_test",
        "offer": "UnitOffer",
        "osType": "Linux",
        "placementGroupId": "",
        "platformFaultDomain": "0",
        "platformUpdateDomain": "0",
        "publisher": "UnitPublisher",
        "resourceGroupName": "UnitResourceGroupName",
        "sku": "UnitSku",
        "subscriptionId": "e4402c6c-2804-4a0a-9dee-d61918fc4d28",
        "tags": "Key1:Value1;Key2:Value2",
        "vmId": "f62f23fb-69e2-4df0-a20b-cb5c201a3e7a",
        "version": "UnitVersion",
        "vmSize": "Standard_D1_v2"
    }'''.encode('utf-8'))


class TestImds(AgentTestCase):
    @patch("azurelinuxagent.common.protocol.imds.restutil.http_get")
    def test_get(self, mock_http_get):
        mock_http_get.return_value = get_mock_compute_response()

        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        test_subject.get_compute()

        self.assertEqual(1, mock_http_get.call_count)
        positional_args, kw_args = mock_http_get.call_args

        self.assertEqual('http://169.254.169.254/metadata/instance/compute?api-version=2018-02-01', positional_args[0])
        self.assertTrue('User-Agent' in kw_args['headers'])
        self.assertTrue('Metadata' in kw_args['headers'])
        self.assertEqual(True, kw_args['headers']['Metadata'])

    @patch("azurelinuxagent.common.protocol.imds.restutil.http_get")
    def test_get_bad_request(self, mock_http_get):
        mock_http_get.return_value = MockHttpResponse(status=restutil.httpclient.BAD_REQUEST)

        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        self.assertRaises(HttpError, test_subject.get_compute)

    @patch("azurelinuxagent.common.protocol.imds.restutil.http_get")
    def test_get_internal_service_error(self, mock_http_get):
        mock_http_get.return_value = MockHttpResponse(status=restutil.httpclient.INTERNAL_SERVER_ERROR)

        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        self.assertRaises(HttpError, test_subject.get_compute)

    @patch("azurelinuxagent.common.protocol.imds.restutil.http_get")
    def test_get_empty_response(self, mock_http_get):
        mock_http_get.return_value = MockHttpResponse(status=httpclient.OK, body=''.encode('utf-8'))

        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        self.assertRaises(ValueError, test_subject.get_compute)

    def test_deserialize_ComputeInfo(self):
        s = '''{
            "location": "westcentralus",
            "name": "unit_test",
            "offer": "UnitOffer",
            "osType": "Linux",
            "placementGroupId": "",
            "platformFaultDomain": "0",
            "platformUpdateDomain": "0",
            "publisher": "UnitPublisher",
            "resourceGroupName": "UnitResourceGroupName",
            "sku": "UnitSku",
            "subscriptionId": "e4402c6c-2804-4a0a-9dee-d61918fc4d28",
            "tags": "Key1:Value1;Key2:Value2",
            "vmId": "f62f23fb-69e2-4df0-a20b-cb5c201a3e7a",
            "version": "UnitVersion",
            "vmSize": "Standard_D1_v2",
            "vmScaleSetName": "MyScaleSet",
            "zone": "In"
        }'''

        data = json.loads(s)

        compute_info = imds.ComputeInfo()
        set_properties("compute", compute_info, data)

        self.assertEqual('westcentralus', compute_info.location)
        self.assertEqual('unit_test', compute_info.name)
        self.assertEqual('UnitOffer', compute_info.offer)
        self.assertEqual('Linux', compute_info.osType)
        self.assertEqual('', compute_info.placementGroupId)
        self.assertEqual('0', compute_info.platformFaultDomain)
        self.assertEqual('0', compute_info.platformUpdateDomain)
        self.assertEqual('UnitPublisher', compute_info.publisher)
        self.assertEqual('UnitResourceGroupName', compute_info.resourceGroupName)
        self.assertEqual('UnitSku', compute_info.sku)
        self.assertEqual('e4402c6c-2804-4a0a-9dee-d61918fc4d28', compute_info.subscriptionId)
        self.assertEqual('Key1:Value1;Key2:Value2', compute_info.tags)
        self.assertEqual('f62f23fb-69e2-4df0-a20b-cb5c201a3e7a', compute_info.vmId)
        self.assertEqual('UnitVersion', compute_info.version)
        self.assertEqual('Standard_D1_v2', compute_info.vmSize)
        self.assertEqual('MyScaleSet', compute_info.vmScaleSetName)
        self.assertEqual('In', compute_info.zone)

        self.assertEqual('UnitPublisher:UnitOffer:UnitSku:UnitVersion', compute_info.image_info)

    def test_is_custom_image(self):
        image_origin = self._setup_image_origin_assert("", "", "", "")
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_CUSTOM, image_origin)

    def test_is_endorsed_CentOS(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.6", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.9", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.0", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7-LVM", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS", "7-RAW", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "6.5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("OpenLogic", "CentOS-HPC", "7.4", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("OpenLogic", "CentOS", "6.1", ""))

    def test_is_endorsed_CoreOS(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "494.4.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "899.17.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "1688.5.3"))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "stable", "494.3.0"))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "alpha", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("CoreOS", "CoreOS", "beta", ""))

    def test_is_endorsed_Debian(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("credativ", "Debian", "9", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("credativ", "Debian", "9-DAILY", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("credativ", "Debian", "10-DAILY", ""))

    def test_is_endorsed_Rhel(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.7", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.8", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "6.9", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.0", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7-LVM", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL", "7-RAW", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-HANA", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP", "7.4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("RedHat", "RHEL-SAP-APPS", "7.4", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("RedHat", "RHEL", "6.6", ""))

    def test_is_endorsed_SuSE(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "11-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "11-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES", "12-SP5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-BYOS", "12-SP5", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP1", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP2", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP3", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP4", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("SuSE", "SLES-SAP", "12-SP5", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("SuSE", "SLES", "11-SP3", ""))

    def test_is_endorsed_UbuntuServer(self):
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.0-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.1-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.2-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.3-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.4-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.5-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.6-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.7-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "14.04.8-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "16.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "18.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "20.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_ENDORSED, self._setup_image_origin_assert("Canonical", "UbuntuServer", "22.04-LTS", ""))

        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "12.04-LTS", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "17.10", ""))
        self.assertEqual(imds.IMDS_IMAGE_ORIGIN_PLATFORM, self._setup_image_origin_assert("Canonical", "UbuntuServer", "18.04-DAILY-LTS", ""))

    @staticmethod
    def _setup_image_origin_assert(publisher, offer, sku, version):
        s = '''{{
            "publisher": "{0}",
            "offer": "{1}",
            "sku": "{2}",
            "version": "{3}"
        }}'''.format(publisher, offer, sku, version)

        data = json.loads(s)
        compute_info = imds.ComputeInfo()
        set_properties("compute", compute_info, data)

        return compute_info.image_origin
    def test_response_validation(self):
        # invalid json or empty response
        self._assert_validation(http_status_code=200,
                                http_response='',
                                expected_valid=False,
                                expected_response='JSON parsing failed')

        self._assert_validation(http_status_code=200,
                                http_response=None,
                                expected_valid=False,
                                expected_response='JSON parsing failed')

        self._assert_validation(http_status_code=200,
                                http_response='{ bad json ',
                                expected_valid=False,
                                expected_response='JSON parsing failed')

        # 500 response
        self._assert_validation(http_status_code=500,
                                http_response='error response',
                                expected_valid=False,
                                expected_response='IMDS error in /metadata/instance: [HTTP Failed] [500: reason] error response')

        # 429 response - throttling does not mean service is unhealthy
        self._assert_validation(http_status_code=429,
                                http_response='server busy',
                                expected_valid=True,
                                expected_response='[HTTP Failed] [429: reason] server busy')

        # 404 response - error responses do not mean service is unhealthy
        self._assert_validation(http_status_code=404,
                                http_response='not found',
                                expected_valid=True,
                                expected_response='[HTTP Failed] [404: reason] not found')

        # valid json
        self._assert_validation(http_status_code=200,
                                http_response=self._imds_response('valid'),
                                expected_valid=True,
                                expected_response='')
        # unicode
        self._assert_validation(http_status_code=200,
                                http_response=self._imds_response('unicode'),
                                expected_valid=True,
                                expected_response='')

    def test_field_validation(self):
        # TODO: compute fields (#1249)
        self._assert_field('network', 'interface', 'ipv4', 'ipAddress', 'privateIpAddress')
        self._assert_field('network', 'interface', 'ipv4', 'ipAddress')
        self._assert_field('network', 'interface', 'ipv4')
        self._assert_field('network', 'interface', 'macAddress')
        self._assert_field('network')

    def _assert_field(self, *fields):
        response = self._imds_response('valid')
        response_obj = json.loads(ustr(response, encoding="utf-8"))

        # assert empty value
        self._update_field(response_obj, fields, '')
        altered_response = json.dumps(response_obj).encode()
        self._assert_validation(http_status_code=200,
                                http_response=altered_response,
                                expected_valid=False,
                                expected_response='Empty field: [{0}]'.format(fields[-1]))

        # assert missing value
        self._update_field(response_obj, fields, None)
        altered_response = json.dumps(response_obj).encode()
        self._assert_validation(http_status_code=200,
                                http_response=altered_response,
                                expected_valid=False,
                                expected_response='Missing field: [{0}]'.format(fields[-1]))

    def _update_field(self, obj, fields, val):
        if isinstance(obj, list):
            self._update_field(obj[0], fields, val)
        else:
            f = fields[0]
            if len(fields) == 1:
                if val is None:
                    del obj[f]
                else:
                    obj[f] = val
            else:
                self._update_field(obj[f], fields[1:], val)

    @staticmethod
    def _imds_response(f):
        path = os.path.join(data_dir, "imds", "{0}.json".format(f))
        with open(path, "rb") as fh:
            return fh.read()

    def _assert_validation(self, http_status_code, http_response, expected_valid, expected_response):
        test_subject = imds.ImdsClient(restutil.KNOWN_WIRESERVER_IP)
        with patch("azurelinuxagent.common.utils.restutil.http_get") as mock_http_get:
            mock_http_get.return_value = MockHttpResponse(status=http_status_code,
                                                          reason='reason',
                                                          body=http_response)
            validate_response = test_subject.validate()

        self.assertEqual(1, mock_http_get.call_count)
        positional_args, kw_args = mock_http_get.call_args

        self.assertTrue('User-Agent' in kw_args['headers'])
        self.assertEqual(restutil.HTTP_USER_AGENT_HEALTH, kw_args['headers']['User-Agent'])
        self.assertTrue('Metadata' in kw_args['headers'])
        self.assertEqual(True, kw_args['headers']['Metadata'])
        self.assertEqual('http://169.254.169.254/metadata/instance?api-version=2018-02-01', positional_args[0])

        self.assertEqual(expected_valid, validate_response[0])
        self.assertTrue(expected_response in validate_response[1],
                        "Expected: '{0}', Actual: '{1}'".format(expected_response, validate_response[1]))

    def test_endpoint_fallback(self):
        # http error status codes are tested in test_response_validation, none of which
        # should trigger a fallback. This is confirmed because _assert_validation counts
        # http GET calls, enforces a single GET call (a fallback would cause 2), and
        # checks the url called.

        test_subject = imds.ImdsClient("foo.bar")

        # ensure user-agent gets set correctly
        for is_health, expected_useragent in [(False, restutil.HTTP_USER_AGENT), (True, restutil.HTTP_USER_AGENT_HEALTH)]:
            # set a different resource path for the health query to make debugging the unit test easier
            resource_path = 'something/health' if is_health else 'something'

            for has_primary_ioerror in (False, True):
                # secondary endpoint unreachable
                test_subject._http_get = Mock(side_effect=self._mock_http_get)
                self._mock_imds_setup(primary_ioerror=has_primary_ioerror, secondary_ioerror=True)
                result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
                self.assertFalse(result.success) if has_primary_ioerror else self.assertTrue(result.success)  # pylint: disable=expression-not-assigned
                self.assertFalse(result.service_error)
                if has_primary_ioerror:
                    self.assertEqual('IMDS error in /metadata/{0}: Unable to connect to endpoint'.format(resource_path), result.response)
                else:
                    self.assertEqual('Mock success response', result.response)
                for _, kwargs in test_subject._http_get.call_args_list:
                    self.assertTrue('User-Agent' in kwargs['headers'])
                    self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
                self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)

                # IMDS success
                test_subject._http_get = Mock(side_effect=self._mock_http_get)
                self._mock_imds_setup(primary_ioerror=has_primary_ioerror)
                result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
                self.assertTrue(result.success)
                self.assertFalse(result.service_error)
                self.assertEqual('Mock success response', result.response)
                for _, kwargs in test_subject._http_get.call_args_list:
                    self.assertTrue('User-Agent' in kwargs['headers'])
                    self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
                self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)

                # IMDS throttled
                test_subject._http_get = Mock(side_effect=self._mock_http_get)
                self._mock_imds_setup(primary_ioerror=has_primary_ioerror, throttled=True)
                result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
                self.assertFalse(result.success)
                self.assertFalse(result.service_error)
                self.assertEqual('IMDS error in /metadata/{0}: Throttled'.format(resource_path), result.response)
                for _, kwargs in test_subject._http_get.call_args_list:
                    self.assertTrue('User-Agent' in kwargs['headers'])
                    self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
                self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)

                # IMDS gone error
                test_subject._http_get = Mock(side_effect=self._mock_http_get)
                self._mock_imds_setup(primary_ioerror=has_primary_ioerror, gone_error=True)
                result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
                self.assertFalse(result.success)
                self.assertTrue(result.service_error)
                self.assertEqual('IMDS error in /metadata/{0}: HTTP Failed with Status Code 410: Gone'.format(resource_path), result.response)
                for _, kwargs in test_subject._http_get.call_args_list:
                    self.assertTrue('User-Agent' in kwargs['headers'])
                    self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
                self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)

                # IMDS bad request
                test_subject._http_get = Mock(side_effect=self._mock_http_get)
                self._mock_imds_setup(primary_ioerror=has_primary_ioerror, bad_request=True)
                result = test_subject.get_metadata(resource_path=resource_path, is_health=is_health)
                self.assertFalse(result.success)
                self.assertFalse(result.service_error)
                self.assertEqual('IMDS error in /metadata/{0}: [HTTP Failed] [404: reason] Mock not found'.format(resource_path), result.response)
                for _, kwargs in test_subject._http_get.call_args_list:
                    self.assertTrue('User-Agent' in kwargs['headers'])
                    self.assertEqual(expected_useragent, kwargs['headers']['User-Agent'])
                self.assertEqual(2 if has_primary_ioerror else 1, test_subject._http_get.call_count)

    def _mock_imds_setup(self, primary_ioerror=False, secondary_ioerror=False, gone_error=False, throttled=False, bad_request=False):
        self._mock_imds_expect_fallback = primary_ioerror  # pylint: disable=attribute-defined-outside-init
        self._mock_imds_primary_ioerror = primary_ioerror  # pylint: disable=attribute-defined-outside-init
        self._mock_imds_secondary_ioerror = secondary_ioerror  # pylint: disable=attribute-defined-outside-init
        self._mock_imds_gone_error = gone_error  # pylint: disable=attribute-defined-outside-init
        self._mock_imds_throttled = throttled  # pylint: disable=attribute-defined-outside-init
        self._mock_imds_bad_request = bad_request  # pylint: disable=attribute-defined-outside-init

    def _mock_http_get(self, *_, **kwargs):
        if "foo.bar" == kwargs['endpoint'] and not self._mock_imds_expect_fallback:
            raise Exception("Unexpected endpoint called")
        if self._mock_imds_primary_ioerror and "169.254.169.254" == kwargs['endpoint']:
            raise HttpError("[HTTP Failed] GET http://{0}/metadata/{1} -- IOError timed out -- 6 attempts made"
                            .format(kwargs['endpoint'], kwargs['resource_path']))
        if self._mock_imds_secondary_ioerror and "foo.bar" == kwargs['endpoint']:
            raise HttpError("[HTTP Failed] GET http://{0}/metadata/{1} -- IOError timed out -- 6 attempts made"
                            .format(kwargs['endpoint'], kwargs['resource_path']))
        if self._mock_imds_gone_error:
            raise ResourceGoneError("Resource is gone")
        if self._mock_imds_throttled:
            raise HttpError("[HTTP Retry] GET http://{0}/metadata/{1} -- Status Code 429 -- 25 attempts made"
                            .format(kwargs['endpoint'], kwargs['resource_path']))

        resp = MagicMock()
        resp.reason = 'reason'
        if self._mock_imds_bad_request:
            resp.status = httpclient.NOT_FOUND
            resp.read.return_value = 'Mock not found'
        else:
            resp.status = httpclient.OK
            resp.read.return_value = 'Mock success response'
        return resp


if __name__ == '__main__':
    unittest.main()
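
# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the repository): the endpoint-fallback
# shape that test_endpoint_fallback above pins down -- the secondary endpoint
# is tried only when the primary is unreachable (an I/O-level failure), never
# for HTTP-level errors such as 404 or 429. 'http_get' and the endpoint
# arguments are hypothetical stand-ins for the ImdsClient internals.
# ----------------------------------------------------------------------------
def _get_metadata_with_fallback_sketch(http_get, resource_path, headers,
                                       primary="169.254.169.254",
                                       secondary=None):
    endpoints = [primary] if secondary is None else [primary, secondary]
    last_error = None
    for endpoint in endpoints:
        try:
            # HTTP error statuses are returned to the caller, not retried here
            return http_get(endpoint=endpoint, resource_path=resource_path,
                            headers=headers)
        except IOError as error:
            last_error = error  # unreachable: fall through to the next endpoint
    raise IOError("Unable to connect to endpoint: {0}".format(last_error))
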
# ============================================================================
# File: Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_metadata_server_migration_util.py
# ============================================================================
# Copyright 2020 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import tempfile
import unittest

import azurelinuxagent.common.osutil.default as osutil  # pylint: disable=unused-import
import azurelinuxagent.common.protocol.metadata_server_migration_util as migration_util

from azurelinuxagent.common.protocol.metadata_server_migration_util import _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME, \
    _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME, \
    _LEGACY_METADATA_SERVER_P7B_FILE_NAME, \
    _KNOWN_METADATASERVER_IP
from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP
from tests.lib.tools import AgentTestCase, patch, MagicMock


class TestMetadataServerMigrationUtil(AgentTestCase):
    @patch('azurelinuxagent.common.conf.get_lib_dir')
    def test_is_metadata_server_artifact_present(self, mock_get_lib_dir):
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        metadata_server_transport_cert_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME)
        open(metadata_server_transport_cert_file, 'w').close()
        mock_get_lib_dir.return_value = dir
        self.assertTrue(migration_util.is_metadata_server_artifact_present())

    @patch('azurelinuxagent.common.conf.get_lib_dir')
    def test_is_metadata_server_artifact_not_present(self, mock_get_lib_dir):
        mock_get_lib_dir.return_value = tempfile.gettempdir()
        self.assertFalse(migration_util.is_metadata_server_artifact_present())

    @patch('azurelinuxagent.common.conf.enable_firewall')
    @patch('azurelinuxagent.common.conf.get_lib_dir')
    def test_cleanup_metadata_server_artifacts_does_not_throw_with_no_metadata_certs(self, mock_get_lib_dir, mock_enable_firewall):
        mock_get_lib_dir.return_value = tempfile.gettempdir()
        mock_enable_firewall.return_value = False
        osutil = MagicMock()  # pylint: disable=redefined-outer-name
        migration_util.cleanup_metadata_server_artifacts(osutil)

    @patch('azurelinuxagent.common.conf.enable_firewall')
    @patch('azurelinuxagent.common.conf.get_lib_dir')
    @patch('os.getuid')
    def test_cleanup_metadata_server_artifacts_firewall_enabled(self, mock_os_getuid, mock_get_lib_dir, mock_enable_firewall):
        # Setup Certificate Files
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        metadata_server_transport_prv_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME)
        metadata_server_transport_cert_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME)
        metadata_server_p7b_file = os.path.join(dir, _LEGACY_METADATA_SERVER_P7B_FILE_NAME)
        open(metadata_server_transport_prv_file, 'w').close()
        open(metadata_server_transport_cert_file, 'w').close()
        open(metadata_server_p7b_file, 'w').close()

        # Setup Mocks
        mock_get_lib_dir.return_value = dir
        mock_enable_firewall.return_value = True
        fixed_uid = 0
        mock_os_getuid.return_value = fixed_uid
        osutil = MagicMock()  # pylint: disable=redefined-outer-name
        osutil.enable_firewall.return_value = (MagicMock(), MagicMock())

        # Run
        migration_util.cleanup_metadata_server_artifacts(osutil)

        # Assert files deleted
        self.assertFalse(os.path.exists(metadata_server_transport_prv_file))
        self.assertFalse(os.path.exists(metadata_server_transport_cert_file))
        self.assertFalse(os.path.exists(metadata_server_p7b_file))

        # Assert Firewall rule calls
        osutil.remove_firewall.assert_called_once_with(dst_ip=_KNOWN_METADATASERVER_IP, uid=fixed_uid, wait=osutil.get_firewall_will_wait())
        osutil.enable_firewall.assert_called_once_with(dst_ip=KNOWN_WIRESERVER_IP, uid=fixed_uid)

    @patch('azurelinuxagent.common.conf.enable_firewall')
    @patch('azurelinuxagent.common.conf.get_lib_dir')
    @patch('os.getuid')
    def test_cleanup_metadata_server_artifacts_firewall_disabled(self, mock_os_getuid, mock_get_lib_dir, mock_enable_firewall):
        # Setup Certificate Files
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        metadata_server_transport_prv_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME)
        metadata_server_transport_cert_file = os.path.join(dir, _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME)
        metadata_server_p7b_file = os.path.join(dir, _LEGACY_METADATA_SERVER_P7B_FILE_NAME)
        open(metadata_server_transport_prv_file, 'w').close()
        open(metadata_server_transport_cert_file, 'w').close()
        open(metadata_server_p7b_file, 'w').close()

        # Setup Mocks
        mock_get_lib_dir.return_value = dir
        mock_enable_firewall.return_value = False
        fixed_uid = 0
        mock_os_getuid.return_value = fixed_uid
        osutil = MagicMock()  # pylint: disable=redefined-outer-name

        # Run
        migration_util.cleanup_metadata_server_artifacts(osutil)

        # Assert files deleted
        self.assertFalse(os.path.exists(metadata_server_transport_prv_file))
        self.assertFalse(os.path.exists(metadata_server_transport_cert_file))
        self.assertFalse(os.path.exists(metadata_server_p7b_file))

        # Assert Firewall rule calls
        osutil.remove_firewall.assert_called_once_with(dst_ip=_KNOWN_METADATASERVER_IP, uid=fixed_uid, wait=osutil.get_firewall_will_wait())
        osutil.enable_firewall.assert_not_called()

    # Cleanup certificate files
    def tearDown(self):
        # pylint: disable=redefined-builtin
        dir = tempfile.gettempdir()
        for file in [_LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME, \
                     _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME, \
                     _LEGACY_METADATA_SERVER_P7B_FILE_NAME]:
            path = os.path.join(dir, file)
            if os.path.exists(path):
                os.remove(path)
        # pylint: enable=redefined-builtin
        super(TestMetadataServerMigrationUtil, self).tearDown()


if __name__ == '__main__':
    unittest.main()
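
# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the repository): the cleanup flow that the
# tests above assert for cleanup_metadata_server_artifacts -- delete the
# legacy MDS certificate files, always remove the MDS firewall rule, and
# re-create the wireserver rule only when the firewall is enabled in the
# configuration. The function name is hypothetical, and the file names, IPs
# and osutil surface are passed in because the real values live in the
# agent's own modules.
# ----------------------------------------------------------------------------
import os


def _cleanup_metadata_server_artifacts_sketch(lib_dir, legacy_file_names,
                                              osutil, firewall_enabled, uid,
                                              metadataserver_ip, wireserver_ip):
    for name in legacy_file_names:
        path = os.path.join(lib_dir, name)
        if os.path.exists(path):
            os.remove(path)  # missing files are simply skipped, so this never throws
    osutil.remove_firewall(dst_ip=metadataserver_ip, uid=uid,
                           wait=osutil.get_firewall_will_wait())
    if firewall_enabled:
        osutil.enable_firewall(dst_ip=wireserver_ip, uid=uid)
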
# ============================================================================
# File: Azure-WALinuxAgent-2b21de5/tests/common/protocol/test_protocol_util.py
# ============================================================================
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import tempfile
import unittest
from errno import ENOENT
from threading import Thread

from azurelinuxagent.common.exception import ProtocolError, DhcpError, OSUtilError
from azurelinuxagent.common.protocol.goal_state import TRANSPORT_CERT_FILE_NAME, TRANSPORT_PRV_FILE_NAME
from azurelinuxagent.common.protocol.metadata_server_migration_util import _METADATA_PROTOCOL_NAME, \
    _LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME, \
    _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME, \
    _LEGACY_METADATA_SERVER_P7B_FILE_NAME
from azurelinuxagent.common.protocol.util import get_protocol_util, ProtocolUtil, PROTOCOL_FILE_NAME, \
    WIRE_PROTOCOL_NAME, ENDPOINT_FILE_NAME
from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP
from tests.lib.tools import AgentTestCase, MagicMock, Mock, patch, clear_singleton_instances


@patch("time.sleep")
class TestProtocolUtil(AgentTestCase):

    MDS_CERTIFICATES = [_LEGACY_METADATA_SERVER_TRANSPORT_PRV_FILE_NAME, \
                        _LEGACY_METADATA_SERVER_TRANSPORT_CERT_FILE_NAME, \
                        _LEGACY_METADATA_SERVER_P7B_FILE_NAME]
    WIRESERVER_CERTIFICATES = [TRANSPORT_CERT_FILE_NAME, TRANSPORT_PRV_FILE_NAME]

    def setUp(self):
        super(TestProtocolUtil, self).setUp()
        # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not
        # reuse a previous state
        clear_singleton_instances(ProtocolUtil)

    # Cleanup certificate files, protocol file, and endpoint files
    def tearDown(self):
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        for path in [os.path.join(dir, mds_cert) for mds_cert in TestProtocolUtil.MDS_CERTIFICATES]:
            if os.path.exists(path):
                os.remove(path)
        for path in [os.path.join(dir, ws_cert) for ws_cert in TestProtocolUtil.WIRESERVER_CERTIFICATES]:
            if os.path.exists(path):
                os.remove(path)
        protocol_path = os.path.join(dir, PROTOCOL_FILE_NAME)
        if os.path.exists(protocol_path):
            os.remove(protocol_path)
        endpoint_path = os.path.join(dir, ENDPOINT_FILE_NAME)
        if os.path.exists(endpoint_path):
            os.remove(endpoint_path)
        super(TestProtocolUtil, self).tearDown()

    def test_get_protocol_util_should_return_same_object_for_same_thread(self, _):
        protocol_util1 = get_protocol_util()
        protocol_util2 = get_protocol_util()
        self.assertEqual(protocol_util1, protocol_util2)

    def test_get_protocol_util_should_return_different_object_for_different_thread(self, _):
        protocol_util_instances = []
        errors = []

        def get_protocol_util_instance():
            try:
                protocol_util_instances.append(get_protocol_util())
            except Exception as e:
                errors.append(e)

        t1 = Thread(target=get_protocol_util_instance)
        t2 = Thread(target=get_protocol_util_instance)
        t1.start()
        t2.start()
        t1.join()
        t2.join()

        self.assertEqual(len(protocol_util_instances), 2, "Could not create the expected number of protocols. Errors: [{0}]".format(errors))
        self.assertNotEqual(protocol_util_instances[0], protocol_util_instances[1], "The instances created by different threads should be different")

    @patch("azurelinuxagent.common.protocol.util.WireProtocol")
    def test_detect_protocol(self, WireProtocol, _):
        WireProtocol.return_value = MagicMock()

        protocol_util = get_protocol_util()
        protocol_util.dhcp_handler = MagicMock()
        protocol_util.dhcp_handler.endpoint = "foo.bar"

        # Test wire protocol is available
        protocol = protocol_util.get_protocol()
        self.assertEqual(WireProtocol.return_value, protocol)

        # Test wire protocol is not available
        protocol_util.clear_protocol()
        WireProtocol.return_value.detect.side_effect = ProtocolError()

        self.assertRaises(ProtocolError, protocol_util.get_protocol)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    @patch("azurelinuxagent.common.protocol.util.WireProtocol")
    def test_detect_protocol_no_dhcp(self, WireProtocol, mock_get_lib_dir, _):
        WireProtocol.return_value.detect = Mock()
        mock_get_lib_dir.return_value = self.tmp_dir

        protocol_util = get_protocol_util()

        protocol_util.osutil = MagicMock()
        protocol_util.osutil.is_dhcp_available.return_value = False

        protocol_util.dhcp_handler = MagicMock()
        protocol_util.dhcp_handler.endpoint = None
        protocol_util.dhcp_handler.run = Mock()

        endpoint_file = protocol_util._get_wireserver_endpoint_file_path()  # pylint: disable=unused-variable

        # Test wire protocol when no endpoint file has been written
        protocol_util._detect_protocol(save_to_history=False)
        self.assertEqual(KNOWN_WIRESERVER_IP, protocol_util.get_wireserver_endpoint())

        # Test wire protocol on dhcp failure
        protocol_util.osutil.is_dhcp_available.return_value = True
        protocol_util.dhcp_handler.run.side_effect = DhcpError()

        self.assertRaises(ProtocolError, lambda: protocol_util._detect_protocol(save_to_history=False))

    @patch("azurelinuxagent.common.protocol.util.WireProtocol")
    def test_get_protocol(self, WireProtocol, _):
        WireProtocol.return_value = MagicMock()

        protocol_util = get_protocol_util()
        protocol_util.get_wireserver_endpoint = Mock()
        protocol_util._detect_protocol = MagicMock()
        protocol_util._save_protocol("WireProtocol")

        protocol = protocol_util.get_protocol()

        self.assertEqual(WireProtocol.return_value, protocol)
        protocol_util.get_wireserver_endpoint.assert_any_call()

    @patch('azurelinuxagent.common.conf.get_lib_dir')
    @patch('azurelinuxagent.common.conf.enable_firewall')
    def test_get_protocol_wireserver_to_wireserver_update_removes_metadataserver_artifacts(self, mock_enable_firewall, mock_get_lib_dir, _):
        """
        This is for testing that an agent update from WireServer to WireServer protocol
        will clean up leftover MDS certificates (left behind by a previous Metadata Server
        to WireServer update; the intermediate updated agent does not clean up MDS
        certificates) and reset firewall rules. We don't test that WireServer certificates,
        the protocol file, or the endpoint file were created, because we already expect
        them to exist since we are updating from a WireServer agent.
        """
        # Setup Protocol file with WireProtocol
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        filename = os.path.join(dir, PROTOCOL_FILE_NAME)
        with open(filename, "w") as f:
            f.write(WIRE_PROTOCOL_NAME)

        # Setup MDS Certificates
        mds_cert_paths = [os.path.join(dir, mds_cert) for mds_cert in TestProtocolUtil.MDS_CERTIFICATES]
        for mds_cert_path in mds_cert_paths:
            open(mds_cert_path, "w").close()

        # Setup mocks
        mock_get_lib_dir.return_value = dir
        mock_enable_firewall.return_value = True
        protocol_util = get_protocol_util()
        protocol_util.osutil = MagicMock()
        protocol_util.osutil.enable_firewall.return_value = (MagicMock(), MagicMock())
        protocol_util.dhcp_handler = MagicMock()
        protocol_util.dhcp_handler.endpoint = KNOWN_WIRESERVER_IP

        # Run
        protocol_util.get_protocol()

        # Check MDS Certs do not exist
        for mds_cert_path in mds_cert_paths:
            self.assertFalse(os.path.exists(mds_cert_path))

        # Check firewall rules were reset
        protocol_util.osutil.remove_firewall.assert_called_once()
        protocol_util.osutil.enable_firewall.assert_called_once()

    @patch('azurelinuxagent.common.conf.get_lib_dir')
    @patch('azurelinuxagent.common.conf.enable_firewall')
    @patch('azurelinuxagent.common.protocol.wire.WireClient')
    def test_get_protocol_metadataserver_to_wireserver_update_removes_metadataserver_artifacts(self, mock_wire_client, mock_enable_firewall, mock_get_lib_dir, _):
        """
        This is for testing that an agent upgrade from MetadataServer to WireServer
        protocol will clean up leftover MDS certificates and reset firewall rules.
        Also check that WireServer certificates are present, and that the protocol
        and endpoint files are written appropriately.
        """
        # Setup Protocol file with MetadataProtocol
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        protocol_filename = os.path.join(dir, PROTOCOL_FILE_NAME)
        with open(protocol_filename, "w") as f:
            f.write(_METADATA_PROTOCOL_NAME)

        # Setup MDS Certificates
        mds_cert_paths = [os.path.join(dir, mds_cert) for mds_cert in TestProtocolUtil.MDS_CERTIFICATES]
        for mds_cert_path in mds_cert_paths:
            open(mds_cert_path, "w").close()

        # Setup mocks
        mock_get_lib_dir.return_value = dir
        mock_enable_firewall.return_value = True
        protocol_util = get_protocol_util()
        protocol_util.osutil = MagicMock()
        protocol_util.osutil.enable_firewall.return_value = (MagicMock(), MagicMock())
        mock_wire_client.return_value = MagicMock()
        protocol_util.dhcp_handler = MagicMock()
        protocol_util.dhcp_handler.endpoint = KNOWN_WIRESERVER_IP

        # Run
        protocol_util.get_protocol()

        # Check MDS Certs do not exist
        for mds_cert_path in mds_cert_paths:
            self.assertFalse(os.path.exists(mds_cert_path))

        # Check that WireServer Certs exist
        ws_cert_paths = [os.path.join(dir, ws_cert) for ws_cert in TestProtocolUtil.WIRESERVER_CERTIFICATES]
        for ws_cert_path in ws_cert_paths:
            self.assertTrue(os.path.isfile(ws_cert_path))

        # Check firewall rules were reset
        protocol_util.osutil.remove_firewall.assert_called_once()
        protocol_util.osutil.enable_firewall.assert_called_once()

        # Check Protocol File is updated to WireProtocol
        with open(os.path.join(dir, PROTOCOL_FILE_NAME), "r") as f:
            self.assertEqual(f.read(), WIRE_PROTOCOL_NAME)

        # Check Endpoint file is updated to WireServer IP
        with open(os.path.join(dir, ENDPOINT_FILE_NAME), 'r') as f:
            self.assertEqual(f.read(), KNOWN_WIRESERVER_IP)

    @patch('azurelinuxagent.common.conf.get_lib_dir')
    @patch('azurelinuxagent.common.conf.enable_firewall')
    @patch('azurelinuxagent.common.protocol.wire.WireClient')
    def test_get_protocol_new_wireserver_agent_generates_certificates(self, mock_wire_client, mock_enable_firewall, mock_get_lib_dir, _):
        """
        This is for testing that a new WireServer Linux Agent generates the
        appropriate certificates, protocol file, and endpoint file.
        """
        # Setup mocks
        dir = tempfile.gettempdir()  # pylint: disable=redefined-builtin
        mock_get_lib_dir.return_value = dir
        mock_enable_firewall.return_value = True
        protocol_util = get_protocol_util()
        protocol_util.osutil = MagicMock()
        mock_wire_client.return_value = MagicMock()
        protocol_util.dhcp_handler = MagicMock()
        protocol_util.dhcp_handler.endpoint = KNOWN_WIRESERVER_IP

        # Run
        protocol_util.get_protocol()

        # Check that WireServer Certs exist
        ws_cert_paths = [os.path.join(dir, ws_cert) for ws_cert in TestProtocolUtil.WIRESERVER_CERTIFICATES]
        for ws_cert_path in ws_cert_paths:
            self.assertTrue(os.path.isfile(ws_cert_path))

        # Check firewall rules were not reset
        protocol_util.osutil.remove_firewall.assert_not_called()
        protocol_util.osutil.enable_firewall.assert_not_called()

        # Check Protocol File is updated to WireProtocol
        with open(os.path.join(dir, PROTOCOL_FILE_NAME), "r") as f:
            self.assertEqual(f.read(), WIRE_PROTOCOL_NAME)

        # Check Endpoint file is updated to WireServer IP
        with open(os.path.join(dir, ENDPOINT_FILE_NAME), 'r') as f:
            self.assertEqual(f.read(), KNOWN_WIRESERVER_IP)

    @patch("azurelinuxagent.common.protocol.util.fileutil")
    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_endpoint_file_states(self, mock_get_lib_dir, mock_fileutil, _):
        mock_get_lib_dir.return_value = self.tmp_dir
        protocol_util = get_protocol_util()
        endpoint_file = protocol_util._get_wireserver_endpoint_file_path()

        # Test get endpoint for io error
        mock_fileutil.read_file.side_effect = IOError()
        ep = protocol_util.get_wireserver_endpoint()
        self.assertEqual(ep, KNOWN_WIRESERVER_IP)

        # Test get endpoint when file not found
        mock_fileutil.read_file.side_effect = IOError(ENOENT, 'File not found')
        ep = protocol_util.get_wireserver_endpoint()
        self.assertEqual(ep, KNOWN_WIRESERVER_IP)

        # Test get endpoint for empty file
        mock_fileutil.read_file.return_value = ""
        ep = protocol_util.get_wireserver_endpoint()
        self.assertEqual(ep, KNOWN_WIRESERVER_IP)

        # Test set endpoint for io error
        mock_fileutil.write_file.side_effect = IOError()
        ep = protocol_util.get_wireserver_endpoint()
        self.assertRaises(OSUtilError, protocol_util._set_wireserver_endpoint, 'abc')

        # Test clear endpoint for io error
        with open(endpoint_file, "w+") as ep_fd:
            ep_fd.write("")

        with patch('os.remove') as mock_remove:
            protocol_util._clear_wireserver_endpoint()
            self.assertEqual(1, mock_remove.call_count)
            self.assertEqual(endpoint_file, mock_remove.call_args_list[0][0][0])

        # Test clear endpoint when file not found
        with patch('os.remove') as mock_remove:
            mock_remove = Mock(side_effect=IOError(ENOENT, 'File not found'))
            protocol_util._clear_wireserver_endpoint()
            mock_remove.assert_not_called()

    def test_protocol_file_states(self, _):
        protocol_util = get_protocol_util()
        protocol_util._clear_wireserver_endpoint = Mock()

        protocol_file = protocol_util._get_protocol_file_path()

        # Test clear protocol for io error
        with open(protocol_file, "w+") as proto_fd:
            proto_fd.write("")

        with patch('os.remove') as mock_remove:
            protocol_util.clear_protocol()
            self.assertEqual(1, protocol_util._clear_wireserver_endpoint.call_count)
            self.assertEqual(1, mock_remove.call_count)
            self.assertEqual(protocol_file, mock_remove.call_args_list[0][0][0])

        # Test clear protocol when file not found
        protocol_util._clear_wireserver_endpoint.reset_mock()

        with patch('os.remove') as mock_remove:
            protocol_util.clear_protocol()
            self.assertEqual(1, protocol_util._clear_wireserver_endpoint.call_count)
            self.assertEqual(1, mock_remove.call_count)
            self.assertEqual(protocol_file, mock_remove.call_args_list[0][0][0])


if __name__ == '__main__':
    unittest.main()
event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Duration, duration)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.ExtensionType, evt_type)) return event @contextlib.contextmanager def create_mock_protocol(): with mock_wire_protocol(DATA_FILE_NO_EXT) as protocol: yield protocol @patch("time.sleep") @patch("azurelinuxagent.common.protocol.wire.CryptUtil") @patch("azurelinuxagent.common.protocol.healthservice.HealthService._report") class TestWireProtocol(AgentTestCase, HttpRequestPredicates): def setUp(self): super(TestWireProtocol, self).setUp() HostPluginProtocol.is_default_channel = False def _test_getters(self, test_data, certsMustBePresent, __, MockCryptUtil, _): MockCryptUtil.side_effect = test_data.mock_crypt_util with patch.object(restutil, 'http_get', test_data.mock_http_get): protocol = WireProtocol(WIRESERVER_URL) protocol.detect() protocol.get_vminfo() protocol.get_certs() ext_handlers = protocol.get_goal_state().extensions_goal_state.extensions for ext_handler in ext_handlers: protocol.get_goal_state().fetch_extension_manifest(ext_handler.name, ext_handler.manifest_uris) crt1 = os.path.join(self.tmp_dir, '38B85D88F03D1A8E1C671EB169274C09BC4D4703.crt') crt2 = os.path.join(self.tmp_dir, 'BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F.crt') prv2 = os.path.join(self.tmp_dir, 'BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F.prv') if certsMustBePresent: self.assertTrue(os.path.isfile(crt1)) self.assertTrue(os.path.isfile(crt2)) self.assertTrue(os.path.isfile(prv2)) else: self.assertFalse(os.path.isfile(crt1)) self.assertFalse(os.path.isfile(crt2)) self.assertFalse(os.path.isfile(prv2)) self.assertEqual("1", protocol.get_goal_state().incarnation) @staticmethod def _get_telemetry_events_generator(event_list): def _yield_events(): for telemetry_event in event_list: yield telemetry_event return _yield_events() def test_getters(self, *args): """Normal case""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) self._test_getters(test_data, True, *args) def test_getters_no_ext(self, *args): """Provision with agent is not checked""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_NO_EXT) self._test_getters(test_data, True, *args) def test_getters_ext_no_settings(self, *args): """Extensions without any settings""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_NO_SETTINGS) self._test_getters(test_data, True, *args) def test_getters_ext_no_public(self, *args): """Extensions without any public settings""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_NO_PUBLIC) self._test_getters(test_data, True, *args) def test_getters_ext_no_cert_format(self, *args): """Certificate format not specified""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_NO_CERT_FORMAT) self._test_getters(test_data, True, *args) def test_getters_ext_cert_format_not_pfx(self, *args): """Certificate format is not Pkcs7BlobWithPfxContents specified""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_CERT_FORMAT_NOT_PFX) self._test_getters(test_data, False, *args) @patch("azurelinuxagent.common.protocol.healthservice.HealthService.report_host_plugin_extension_artifact") def test_getters_with_stale_goal_state(self, patch_report, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) test_data.emulate_stale_goal_state = True self._test_getters(test_data, True, *args) # Ensure 
HostPlugin was invoked self.assertEqual(1, test_data.call_counts["/versions"]) self.assertEqual(2, test_data.call_counts["extensionArtifact"]) # Ensure the expected number of HTTP calls were made # -- Tracking calls to retrieve GoalState is problematic since it is # fetched often; however, the dependent documents, such as the # HostingEnvironmentConfig, will be retrieved the expected number self.assertEqual(1, test_data.call_counts["hostingEnvironmentConfig"]) self.assertEqual(1, patch_report.call_count) def test_call_storage_kwargs(self, *args): # pylint: disable=unused-argument with patch.object(restutil, 'http_get') as http_patch: http_req = restutil.http_get url = testurl headers = {} # no kwargs -- Default to True WireClient.call_storage_service(http_req) # kwargs, no use_proxy -- Default to True WireClient.call_storage_service(http_req, url, headers) # kwargs, use_proxy None -- Default to True WireClient.call_storage_service(http_req, url, headers, use_proxy=None) # kwargs, use_proxy False -- Keep False WireClient.call_storage_service(http_req, url, headers, use_proxy=False) # kwargs, use_proxy True -- Keep True WireClient.call_storage_service(http_req, url, headers, use_proxy=True) # assert self.assertTrue(http_patch.call_count == 5) for i in range(0, 5): c = http_patch.call_args_list[i][-1]['use_proxy'] self.assertTrue(c == (True if i != 3 else False)) def test_status_blob_parsing(self, *args): # pylint: disable=unused-argument with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state self.assertIsInstance(extensions_goal_state, ExtensionsGoalStateFromExtensionsConfig) self.assertEqual(extensions_goal_state.status_upload_blob, 'https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?' 
'sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&' 'sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo') self.assertEqual(protocol.get_goal_state().extensions_goal_state.status_upload_blob_type, u'BlockBlob') def test_get_host_ga_plugin(self, *args): # pylint: disable=unused-argument with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: host_plugin = protocol.client.get_host_plugin() goal_state = protocol.client.get_goal_state() self.assertEqual(goal_state.container_id, host_plugin.container_id) self.assertEqual(goal_state.role_config_name, host_plugin.role_config_name) def test_upload_status_blob_should_use_the_host_channel_by_default(self, *_): def http_put_handler(url, *_, **__): # pylint: disable=inconsistent-return-statements if protocol.get_endpoint() in url and url.endswith('/status'): return MockHttpResponse(200) with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_put_handler=http_put_handler) as protocol: HostPluginProtocol.is_default_channel = False protocol.client.status_blob.vm_status = VMStatus(message="Ready", status="Ready") protocol.client.upload_status_blob() urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 1, 'Expected one post request to the host: [{0}]'.format(urls)) def test_upload_status_blob_host_ga_plugin(self, *_): with create_mock_protocol() as protocol: protocol.client.status_blob.vm_status = VMStatus(message="Ready", status="Ready") with patch.object(HostPluginProtocol, "ensure_initialized", return_value=True): with patch.object(StatusBlob, "upload", return_value=False) as patch_default_upload: with patch.object(HostPluginProtocol, "_put_block_blob_status") as patch_http: HostPluginProtocol.is_default_channel = False protocol.client.upload_status_blob() patch_default_upload.assert_not_called() patch_http.assert_called_once_with(testurl, protocol.client.status_blob) self.assertFalse(HostPluginProtocol.is_default_channel) def test_upload_status_blob_reports_prepare_error(self, *_): with create_mock_protocol() as protocol: protocol.client.status_blob.vm_status = VMStatus(message="Ready", status="Ready") with patch.object(StatusBlob, "prepare", side_effect=Exception) as mock_prepare: self.assertRaises(ProtocolError, protocol.client.upload_status_blob) self.assertEqual(1, mock_prepare.call_count) def test_get_in_vm_artifacts_profile_blob_not_available(self, *_): data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_in_vm_empty_artifacts_profile.xml" with mock_wire_protocol(data_file) as protocol: self.assertFalse(protocol.get_goal_state().extensions_goal_state.on_hold) def test_it_should_set_on_hold_to_false_when_the_in_vm_artifacts_profile_is_not_valid(self, *_): with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol: extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold self.assertTrue(extensions_on_hold, "Extensions should be on hold in the test data") def http_get_handler(url, *_, **kwargs): if self.is_in_vm_artifacts_profile_request(url) or self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs): return mock_response return None protocol.set_http_handlers(http_get_handler=http_get_handler) mock_response = MockHttpResponse(200, body=None) protocol.client.reset_goal_state() extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold self.assertFalse(extensions_on_hold, "Extensions should not be on hold when the in-vm artifacts profile response body is None") mock_response = MockHttpResponse(200, ' '.encode('utf-8')) 
protocol.client.reset_goal_state() extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold self.assertFalse(extensions_on_hold, "Extensions should not be on hold when the in-vm artifacts profile response is an empty string") mock_response = MockHttpResponse(200, '{ }'.encode('utf-8')) protocol.client.reset_goal_state() extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold self.assertFalse(extensions_on_hold, "Extensions should not be on hold when the in-vm artifacts profile response is an empty json object") with patch("azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config.add_event") as add_event: mock_response = MockHttpResponse(200, 'invalid json'.encode('utf-8')) protocol.client.reset_goal_state() extensions_on_hold = protocol.get_goal_state().extensions_goal_state.on_hold self.assertFalse(extensions_on_hold, "Extensions should not be on hold when the in-vm artifacts profile response is not valid json") events = [kwargs for _, kwargs in add_event.call_args_list if kwargs['op'] == WALAEventOperation.ArtifactsProfileBlob] self.assertEqual(1, len(events), "Expected 1 event for operation ArtifactsProfileBlob. Got: {0}".format(events)) self.assertFalse(events[0]['is_success'], "Expected ArtifactsProfileBlob's success to be False") self.assertTrue("Can't parse the artifacts profile blob" in events[0]['message'], "Expected 'Can't parse the artifacts profile blob as the reason for the operation failure. Got: {0}".format(events[0]['message'])) @patch("socket.gethostname", return_value="hostname") @patch("time.gmtime", return_value=time.localtime(1485543256)) def test_report_vm_status(self, *args): # pylint: disable=unused-argument status = 'status' message = 'message' client = WireProtocol(WIRESERVER_URL).client actual = StatusBlob(client=client) actual.set_vm_status(VMStatus(status=status, message=message)) timestamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) formatted_msg = { 'lang': 'en-US', 'message': message } v1_ga_status = { 'version': str(CURRENT_VERSION), 'status': status, 'formattedMessage': formatted_msg } v1_ga_guest_info = { 'computerName': socket.gethostname(), 'osName': DISTRO_NAME, 'osVersion': DISTRO_VERSION, 'version': str(CURRENT_VERSION), } v1_agg_status = { 'guestAgentStatus': v1_ga_status, 'handlerAggregateStatus': [] } supported_features = [] for _, feature in get_agent_supported_features_list_for_crp().items(): supported_features.append( { "Key": feature.name, "Value": feature.version } ) v1_vm_status = { 'version': '1.1', 'timestampUTC': timestamp, 'aggregateStatus': v1_agg_status, 'guestOSInfo': v1_ga_guest_info, 'supportedFeatures': supported_features } self.assertEqual(json.dumps(v1_vm_status), actual.to_json()) def test_it_should_report_supported_features_in_status_blob_if_supported(self, *_): with mock_wire_protocol(DATA_FILE) as protocol: def mock_http_put(url, *args, **__): if not HttpRequestPredicates.is_host_plugin_status_request(url): # Skip reading the HostGA request data as its encoded protocol.aggregate_status = json.loads(args[0]) protocol.aggregate_status = {} protocol.set_http_handlers(http_put_handler=mock_http_put) exthandlers_handler = get_exthandlers_handler(protocol) with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", True): with patch("azurelinuxagent.common.agent_supported_feature._GAVersioningGovernanceFeature.is_supported", True): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() 
self.assertIsNotNone(protocol.aggregate_status, "Aggregate status should not be None") self.assertIn("supportedFeatures", protocol.aggregate_status, "supported features not reported") multi_config_feature = get_supported_feature_by_name(SupportedFeatureNames.MultiConfig) found = False for feature in protocol.aggregate_status['supportedFeatures']: if feature['Key'] == multi_config_feature.name and feature['Value'] == multi_config_feature.version: found = True break self.assertTrue(found, "Multi-config name should be present in supportedFeatures") ga_versioning_feature = get_supported_feature_by_name(SupportedFeatureNames.GAVersioningGovernance) found = False for feature in protocol.aggregate_status['supportedFeatures']: if feature['Key'] == ga_versioning_feature.name and feature['Value'] == ga_versioning_feature.version: found = True break self.assertTrue(found, "ga versioning name should be present in supportedFeatures") # Feature should not be reported if not present with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", False): with patch("azurelinuxagent.common.agent_supported_feature._GAVersioningGovernanceFeature.is_supported", False): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertIsNotNone(protocol.aggregate_status, "Aggregate status should not be None") if "supportedFeatures" not in protocol.aggregate_status: # In the case Multi-config and GA Versioning only features available, 'supportedFeatures' should not be # reported in the status blob as its not supported as of now. # Asserting no other feature was available to report back to crp self.assertEqual(0, len(get_agent_supported_features_list_for_crp()), "supportedFeatures should be available if there are more features") return # If there are other features available, confirm MultiConfig and GA versioning was not reported multi_config_feature = get_supported_feature_by_name(SupportedFeatureNames.MultiConfig) found = False for feature in protocol.aggregate_status['supportedFeatures']: if feature['Key'] == multi_config_feature.name and feature['Value'] == multi_config_feature.version: found = True break self.assertFalse(found, "Multi-config name should not be present in supportedFeatures") ga_versioning_feature = get_supported_feature_by_name(SupportedFeatureNames.GAVersioningGovernance) found = False for feature in protocol.aggregate_status['supportedFeatures']: if feature['Key'] == ga_versioning_feature.name and feature['Value'] == ga_versioning_feature.version: found = True break self.assertFalse(found, "ga versioning name should not be present in supportedFeatures") @patch("azurelinuxagent.common.utils.restutil.http_request") def test_send_encoded_event(self, mock_http_request, *args): mock_http_request.return_value = MockHttpResponse(200) event_str = u'a test string' client = WireProtocol(WIRESERVER_URL).client client.send_encoded_event("foo", event_str.encode('utf-8')) first_call = mock_http_request.call_args_list[0] args, kwargs = first_call method, url, body_received, timeout = args # pylint: disable=unused-variable headers = kwargs['headers'] # the headers should include utf-8 encoding... 
self.assertTrue("utf-8" in headers['Content-Type']) # the body is encoded, decode and check for equality self.assertIn(event_str, body_received.decode('utf-8')) @patch("azurelinuxagent.common.protocol.wire.WireClient.send_encoded_event") def test_report_event_small_event(self, patch_send_event, *args): # pylint: disable=unused-argument event_list = [] client = WireProtocol(WIRESERVER_URL).client event_str = random_generator(10) event_list.append(get_event(message=event_str)) event_str = random_generator(100) event_list.append(get_event(message=event_str)) event_str = random_generator(1000) event_list.append(get_event(message=event_str)) event_str = random_generator(10000) event_list.append(get_event(message=event_str)) client.report_event(self._get_telemetry_events_generator(event_list)) # It merges the messages into one message self.assertEqual(patch_send_event.call_count, 1) @patch("azurelinuxagent.common.protocol.wire.WireClient.send_encoded_event") def test_report_event_multiple_events_to_fill_buffer(self, patch_send_event, *args): # pylint: disable=unused-argument event_list = [] client = WireProtocol(WIRESERVER_URL).client event_str = random_generator(2 ** 15) event_list.append(get_event(message=event_str)) event_list.append(get_event(message=event_str)) client.report_event(self._get_telemetry_events_generator(event_list)) # It merges the messages into one message self.assertEqual(patch_send_event.call_count, 2) @patch("azurelinuxagent.common.protocol.wire.WireClient.send_encoded_event") def test_report_event_large_event(self, patch_send_event, *args): # pylint: disable=unused-argument event_list = [] event_str = random_generator(2 ** 18) event_list.append(get_event(message=event_str)) client = WireProtocol(WIRESERVER_URL).client client.report_event(self._get_telemetry_events_generator(event_list)) self.assertEqual(patch_send_event.call_count, 0) class TestWireClient(HttpRequestPredicates, AgentTestCase): def test_get_ext_conf_without_extensions_should_retrieve_vmagent_manifests_info(self, *args): # pylint: disable=unused-argument # Basic test for extensions_goal_state when extensions are not present in the config. The test verifies that # extensions_goal_state fetches the correct data by comparing the returned data with the test data provided the # mock_wire_protocol. with mock_wire_protocol(wire_protocol_data.DATA_FILE_NO_EXT) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state ext_handlers_names = [ext_handler.name for ext_handler in extensions_goal_state.extensions] self.assertEqual(0, len(extensions_goal_state.extensions), "Unexpected number of extension handlers in the extension config: [{0}]".format(ext_handlers_names)) vmagent_families = [manifest.name for manifest in extensions_goal_state.agent_families] self.assertEqual(0, len(extensions_goal_state.agent_families), "Unexpected number of vmagent manifests in the extension config: [{0}]".format(vmagent_families)) self.assertFalse(extensions_goal_state.on_hold, "Extensions On Hold is expected to be False") def test_get_ext_conf_with_extensions_should_retrieve_ext_handlers_and_vmagent_manifests_info(self): # Basic test for extensions_goal_state when extensions are present in the config. The test verifies that extensions_goal_state # fetches the correct data by comparing the returned data with the test data provided the mock_wire_protocol. 
with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: extensions_goal_state = protocol.get_goal_state().extensions_goal_state ext_handlers_names = [ext_handler.name for ext_handler in extensions_goal_state.extensions] self.assertEqual(1, len(extensions_goal_state.extensions), "Unexpected number of extension handlers in the extension config: [{0}]".format(ext_handlers_names)) vmagent_families = [manifest.name for manifest in extensions_goal_state.agent_families] self.assertEqual(2, len(extensions_goal_state.agent_families), "Unexpected number of vmagent manifests in the extension config: [{0}]".format(vmagent_families)) self.assertEqual("https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw" "&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo", extensions_goal_state.status_upload_blob, "Unexpected value for status upload blob URI") self.assertEqual("BlockBlob", extensions_goal_state.status_upload_blob_type, "Unexpected status upload blob type in the extension config") self.assertFalse(extensions_goal_state.on_hold, "Extensions On Hold is expected to be False") def test_download_zip_package_should_expand_and_delete_the_package(self): extension_url = 'https://fake_host/fake_extension.zip' target_file = os.path.join(self.tmp_dir, 'fake_extension.zip') target_directory = os.path.join(self.tmp_dir, "fake_extension") def http_get_handler(url, *_, **__): if url == extension_url or self.is_host_plugin_extension_artifact_request(url): return MockHttpResponse(200, body=load_bin_data("ga/fake_extension.zip")) return None with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol: protocol.client.download_zip_package("extension package", [extension_url], target_file, target_directory, use_verify_header=False) self.assertTrue(os.path.exists(target_directory), "The extension package was not downloaded") self.assertFalse(os.path.exists(target_file), "The extension package was not deleted") def test_download_zip_package_should_not_invoke_host_channel_when_direct_channel_succeeds(self): extension_url = 'https://fake_host/fake_extension.zip' target_file = os.path.join(self.tmp_dir, 'fake_extension.zip') target_directory = os.path.join(self.tmp_dir, "fake_extension") def http_get_handler(url, *_, **__): if url == extension_url: return MockHttpResponse(200, body=load_bin_data("ga/fake_extension.zip")) if self.is_host_plugin_extension_artifact_request(url): self.fail('The host channel should not have been used') return None with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol: HostPluginProtocol.is_default_channel = False protocol.client.download_zip_package("extension package", [extension_url], target_file, target_directory, use_verify_header=False) urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 1, "Unexpected number of HTTP requests: [{0}]".format(urls)) self.assertEqual(urls[0], extension_url, "The extension should have been downloaded over the direct channel") self.assertTrue(os.path.exists(target_directory), "The extension package was not downloaded") self.assertFalse(HostPluginProtocol.is_default_channel, "The host channel should not have been set as the default") def test_download_zip_package_should_use_host_channel_when_direct_channel_fails_and_set_host_as_default(self): extension_url = 'https://fake_host/fake_extension.zip' target_file = os.path.join(self.tmp_dir, 'fake_extension.zip') target_directory = 
os.path.join(self.tmp_dir, "fake_extension") def http_get_handler(url, *_, **kwargs): if url == extension_url: return HttpError("Exception to fake an error on the direct channel") if self.is_host_plugin_extension_request(url, kwargs, extension_url): return MockHttpResponse(200, body=load_bin_data("ga/fake_extension.zip")) return None with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol: HostPluginProtocol.is_default_channel = False protocol.client.download_zip_package("extension package", [extension_url], target_file, target_directory, use_verify_header=False) urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 2, "Unexpected number of HTTP requests: [{0}]".format(urls)) self.assertEqual(urls[0], extension_url, "The first attempt should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The retry attempt should have been over the host channel") self.assertTrue(os.path.exists(target_directory), 'The extension package was not downloaded') self.assertTrue(HostPluginProtocol.is_default_channel, "The host channel should have been set as the default") def test_download_zip_package_should_retry_the_host_channel_after_refreshing_host_plugin(self): extension_url = 'https://fake_host/fake_extension.zip' target_file = os.path.join(self.tmp_dir, 'fake_extension.zip') target_directory = os.path.join(self.tmp_dir, "fake_extension") def http_get_handler(url, *_, **kwargs): if url == extension_url: return HttpError("Exception to fake an error on the direct channel") if self.is_host_plugin_extension_request(url, kwargs, extension_url): # fake a stale goal state then succeed once the goal state has been refreshed if http_get_handler.goal_state_requests == 0: http_get_handler.goal_state_requests += 1 return ResourceGoneError("Exception to fake a stale goal") return MockHttpResponse(200, body=load_bin_data("ga/fake_extension.zip")) if self.is_goal_state_request(url): protocol.track_url(url) # track requests for the goal state return None http_get_handler.goal_state_requests = 0 with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: HostPluginProtocol.is_default_channel = False try: # initialization of the host plugin triggers a request for the goal state; do it here before we start tracking those requests. 
protocol.client.get_host_plugin() protocol.set_http_handlers(http_get_handler=http_get_handler) protocol.client.download_zip_package("extension package", [extension_url], target_file, target_directory, use_verify_header=False) urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 4, "Unexpected number of HTTP requests: [{0}]".format(urls)) self.assertEqual(urls[0], extension_url, "The first attempt should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second attempt should have been over the host channel") self.assertTrue(self.is_goal_state_request(urls[2]), "The host channel should have been refreshed the goal state") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The third attempt should have been over the host channel") self.assertTrue(os.path.exists(target_directory), 'The extension package was not downloaded') self.assertTrue(HostPluginProtocol.is_default_channel, "The host channel should have been set as the default") finally: HostPluginProtocol.is_default_channel = False def test_download_zip_package_should_not_change_default_channel_when_all_channels_fail(self): extension_url = 'https://fake_host/fake_extension.zip' target_file = os.path.join(self.tmp_dir, "fake_extension.zip") target_directory = os.path.join(self.tmp_dir, "fake_extension") def http_get_handler(url, *_, **kwargs): if url == extension_url or self.is_host_plugin_extension_request(url, kwargs, extension_url): return MockHttpResponse(status=404, body=b"content not found", reason="Not Found") if self.is_goal_state_request(url): protocol.track_url(url) # keep track of goal state requests return None with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: HostPluginProtocol.is_default_channel = False # initialization of the host plugin triggers a request for the goal state; do it here before we start tracking those requests. 
protocol.client.get_host_plugin() protocol.set_http_handlers(http_get_handler=http_get_handler) with self.assertRaises(ExtensionDownloadError): protocol.client.download_zip_package("extension package", [extension_url], target_file, target_directory, use_verify_header=False) urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 2, "Unexpected number of HTTP requests: [{0}]".format(urls)) self.assertEqual(urls[0], extension_url, "The first attempt should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second attempt should have been over the host channel") self.assertFalse(os.path.exists(target_file), "The extension package was downloaded and it shouldn't have") self.assertFalse(HostPluginProtocol.is_default_channel, "The host channel should not have been set as the default") def test_invalid_zip_should_raise_an_error(self): extension_url = 'https://fake_host/fake_extension.zip' target_file = os.path.join(self.tmp_dir, "fake_extension.zip") target_directory = os.path.join(self.tmp_dir, "fake_extension") def http_get_handler(url, *_, **kwargs): if url == extension_url or self.is_host_plugin_extension_request(url, kwargs, extension_url): return MockHttpResponse(status=200, body=b"NOT A ZIP") return None with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.set_http_handlers(http_get_handler=http_get_handler) with self.assertRaises(ExtensionDownloadError): protocol.client.download_zip_package("extension package", [extension_url], target_file, target_directory, use_verify_header=False) self.assertFalse(os.path.exists(target_file), "The extension package should have been deleted") self.assertFalse(os.path.exists(target_directory), "The extension directory should not have been created") def test_fetch_manifest_should_not_invoke_host_channel_when_direct_channel_succeeds(self): manifest_url = 'https://fake_host/fake_manifest.xml' manifest_xml = '' def http_get_handler(url, *_, **__): if url == manifest_url: return MockHttpResponse(200, manifest_xml.encode('utf-8')) if url.endswith('/extensionArtifact'): self.fail('The Host GA Plugin should not have been invoked') return None with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol: HostPluginProtocol.is_default_channel = False manifest = protocol.client.fetch_manifest("test", [manifest_url], use_verify_header=False) urls = protocol.get_tracked_urls() self.assertEqual(manifest, manifest_xml, 'The expected manifest was not downloaded') self.assertEqual(len(urls), 1, "Unexpected number of HTTP requests: [{0}]".format(urls)) self.assertEqual(urls[0], manifest_url, "The manifest should have been downloaded over the direct channel") self.assertFalse(HostPluginProtocol.is_default_channel, "The default channel should not have changed") def test_fetch_manifest_should_use_host_channel_when_direct_channel_fails_and_set_it_to_default(self): manifest_url = 'https://fake_host/fake_manifest.xml' manifest_xml = '' def http_get_handler(url, *_, **kwargs): if url == manifest_url: return ResourceGoneError("Exception to fake an error on the direct channel") if self.is_host_plugin_extension_request(url, kwargs, manifest_url): return MockHttpResponse(200, body=manifest_xml.encode('utf-8')) return None with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol: HostPluginProtocol.is_default_channel = False try: manifest = protocol.client.fetch_manifest("test", [manifest_url], 
use_verify_header=False) urls = protocol.get_tracked_urls() self.assertEqual(manifest, manifest_xml, 'The expected manifest was not downloaded') self.assertEqual(len(urls), 2, "Unexpected number of HTTP requests: [{0}]".format(urls)) self.assertEqual(urls[0], manifest_url, "The first attempt should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The retry should have been over the host channel") self.assertTrue(HostPluginProtocol.is_default_channel, "The host should have been set as the default channel") finally: HostPluginProtocol.is_default_channel = False # Reset default channel def test_fetch_manifest_should_retry_the_host_channel_after_refreshing_the_host_plugin_and_set_the_host_as_default(self): manifest_url = 'https://fake_host/fake_manifest.xml' manifest_xml = '' def http_get_handler(url, *_, **kwargs): if url == manifest_url: return HttpError("Exception to fake an error on the direct channel") if self.is_host_plugin_extension_request(url, kwargs, manifest_url): # fake a stale goal state then succeed once the goal state has been refreshed if http_get_handler.goal_state_requests == 0: http_get_handler.goal_state_requests += 1 return ResourceGoneError("Exception to fake a stale goal state") return MockHttpResponse(200, manifest_xml.encode('utf-8')) elif self.is_goal_state_request(url): protocol.track_url(url) # keep track of goal state requests return None http_get_handler.goal_state_requests = 0 with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: HostPluginProtocol.is_default_channel = False try: # initialization of the host plugin triggers a request for the goal state; do it here before we start tracking those requests. protocol.client.get_host_plugin() protocol.set_http_handlers(http_get_handler=http_get_handler) manifest = protocol.client.fetch_manifest("test", [manifest_url], use_verify_header=False) urls = protocol.get_tracked_urls() self.assertEqual(manifest, manifest_xml) self.assertEqual(len(urls), 4, "Unexpected number of HTTP requests: [{0}]".format(urls)) self.assertEqual(urls[0], manifest_url, "The first attempt should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second attempt should have been over the host channel") self.assertTrue(self.is_goal_state_request(urls[2]), "The host channel should have been refreshed the goal state") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The third attempt should have been over the host channel") self.assertTrue(HostPluginProtocol.is_default_channel, "The host should have been set as the default channel") finally: HostPluginProtocol.is_default_channel = False # Reset default channel def test_fetch_manifest_should_update_goal_state_and_not_change_default_channel_if_host_fails(self): manifest_url = 'https://fake_host/fake_manifest.xml' def http_get_handler(url, *_, **kwargs): if url == manifest_url or self.is_host_plugin_extension_request(url, kwargs, manifest_url): return ResourceGoneError("Exception to fake an error on either channel") elif self.is_goal_state_request(url): protocol.track_url(url) # keep track of goal state requests return None # Everything fails. Goal state should have been updated and host channel should not have been set as default. 
with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: HostPluginProtocol.is_default_channel = False # initialization of the host plugin triggers a request for the goal state; do it here before we start # tracking those requests. protocol.client.get_host_plugin() protocol.set_http_handlers(http_get_handler=http_get_handler) with self.assertRaises(ExtensionDownloadError): protocol.client.fetch_manifest("test", [manifest_url], use_verify_header=False) urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 4, "Unexpected number of HTTP requests: [{0}]".format(urls)) self.assertEqual(urls[0], manifest_url, "The first attempt should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second attempt should have been over the host channel") self.assertTrue(self.is_goal_state_request(urls[2]), "The host channel should have been refreshed the goal state") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The third attempt should have been over the host channel") self.assertFalse(HostPluginProtocol.is_default_channel, "The host should not have been set as the default channel") self.assertEqual(HostPluginProtocol.is_default_channel, False) def test_get_artifacts_profile_should_not_invoke_host_channel_when_direct_channel_succeeds(self): def http_get_handler(url, *_, **__): if self.is_in_vm_artifacts_profile_request(url): protocol.track_url(url) return None with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol: protocol.set_http_handlers(http_get_handler=http_get_handler) HostPluginProtocol.is_default_channel = False protocol.client.reset_goal_state() urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 1, "Unexpected HTTP requests: [{0}]".format(urls)) self.assertFalse(HostPluginProtocol.is_default_channel, "The host should not have been set as the default channel") def test_get_artifacts_profile_should_use_host_channel_when_direct_channel_fails(self): def http_get_handler(url, *_, **kwargs): if self.is_in_vm_artifacts_profile_request(url): return HttpError("Exception to fake an error on the direct channel") if self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs): protocol.track_url(url) return None with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol: protocol.set_http_handlers(http_get_handler=http_get_handler) HostPluginProtocol.is_default_channel = False try: protocol.client.reset_goal_state() urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 2, "Invalid number of requests: [{0}]".format(urls)) self.assertTrue(self.is_in_vm_artifacts_profile_request(urls[0]), "The first request should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second request should have been over the host channel") self.assertTrue(HostPluginProtocol.is_default_channel, "The default channel should have changed to the host") finally: HostPluginProtocol.is_default_channel = False def test_get_artifacts_profile_should_retry_the_host_channel_after_refreshing_the_host_plugin(self): def http_get_handler(url, *_, **kwargs): if self.is_in_vm_artifacts_profile_request(url): return HttpError("Exception to fake an error on the direct channel") if self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs): if http_get_handler.host_plugin_calls == 0: http_get_handler.host_plugin_calls += 1 return ResourceGoneError("Exception to fake a stale goal state") 
protocol.track_url(url) if self.is_goal_state_request(url) and http_get_handler.host_plugin_calls == 1: protocol.track_url(url) return None http_get_handler.host_plugin_calls = 0 with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol: HostPluginProtocol.is_default_channel = False try: # initialization of the host plugin triggers a request for the goal state; do it here before we start tracking those requests. protocol.client.get_host_plugin() protocol.set_http_handlers(http_get_handler=http_get_handler) protocol.client.reset_goal_state() urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 4, "Invalid number of requests: [{0}]".format(urls)) self.assertTrue(self.is_in_vm_artifacts_profile_request(urls[0]), "The first request should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second request should have been over the host channel") self.assertTrue(self.is_goal_state_request(urls[2]), "The goal state should have been refreshed before retrying the host channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The retry request should have been over the host channel") self.assertTrue(HostPluginProtocol.is_default_channel, "The default channel should have changed to the host") finally: HostPluginProtocol.is_default_channel = False def test_get_artifacts_profile_should_refresh_the_host_plugin_and_not_change_default_channel_if_host_plugin_fails(self): def http_get_handler(url, *_, **kwargs): if self.is_in_vm_artifacts_profile_request(url): return HttpError("Exception to fake an error on the direct channel") if self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs): http_get_handler.host_plugin_calls += 1 return ResourceGoneError("Exception to fake a stale goal state") if self.is_goal_state_request(url) and http_get_handler.host_plugin_calls == 1: protocol.track_url(url) return None http_get_handler.host_plugin_calls = 0 with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol: HostPluginProtocol.is_default_channel = False # initialization of the host plugin triggers a request for the goal state; do it here before we start tracking those requests. 
protocol.client.get_host_plugin() protocol.set_http_handlers(http_get_handler=http_get_handler) protocol.client.reset_goal_state() urls = protocol.get_tracked_urls() self.assertEqual(len(urls), 4, "Invalid number of requests: [{0}]".format(urls)) self.assertTrue(self.is_in_vm_artifacts_profile_request(urls[0]), "The first request should have been over the direct channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[1]), "The second request should have been over the host channel") self.assertTrue(self.is_goal_state_request(urls[2]), "The goal state should have been refreshed before retrying the host channel") self.assertTrue(self.is_host_plugin_extension_artifact_request(urls[3]), "The retry request should have been over the host channel") self.assertFalse(HostPluginProtocol.is_default_channel, "The default channel should not have changed") @staticmethod def _set_and_fail_helper_channel_functions(fail_direct=False, fail_host=False): def direct_func(*_): direct_func.counter += 1 if direct_func.fail: raise Exception("Direct channel failed") return "direct" def host_func(*_): host_func.counter += 1 if host_func.fail: raise Exception("Host channel failed") return "host" direct_func.counter = 0 direct_func.fail = fail_direct host_func.counter = 0 host_func.fail = fail_host return direct_func, host_func def test_download_using_appropriate_channel_should_not_invoke_secondary_when_primary_channel_succeeds(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: # Scenario #1: Direct channel default HostPluginProtocol.is_default_channel = False direct_func, host_func = self._set_and_fail_helper_channel_functions() # Assert we're only calling the primary channel (direct) and that it succeeds. for iteration in range(5): ret = protocol.client._download_using_appropriate_channel(direct_func, host_func) self.assertEqual("direct", ret) self.assertEqual(iteration + 1, direct_func.counter) self.assertEqual(0, host_func.counter) self.assertFalse(HostPluginProtocol.is_default_channel) # Scenario #2: Host channel default HostPluginProtocol.is_default_channel = True direct_func, host_func = self._set_and_fail_helper_channel_functions() # Assert we're only calling the primary channel (host) and that it succeeds. 
for iteration in range(5): ret = protocol.client._download_using_appropriate_channel(direct_func, host_func) self.assertEqual("host", ret) self.assertEqual(0, direct_func.counter) self.assertEqual(iteration + 1, host_func.counter) self.assertTrue(HostPluginProtocol.is_default_channel) def test_download_using_appropriate_channel_should_not_change_default_channel_if_none_succeeds(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: # Scenario #1: Direct channel is default HostPluginProtocol.is_default_channel = False direct_func, host_func = self._set_and_fail_helper_channel_functions(fail_direct=True, fail_host=True) # Assert we keep trying both channels, but the default channel doesn't change for iteration in range(5): with self.assertRaises(HttpError): protocol.client._download_using_appropriate_channel(direct_func, host_func) self.assertEqual(iteration + 1, direct_func.counter) self.assertEqual(iteration + 1, host_func.counter) self.assertFalse(HostPluginProtocol.is_default_channel) # Scenario #2: Host channel is default HostPluginProtocol.is_default_channel = True direct_func, host_func = self._set_and_fail_helper_channel_functions(fail_direct=True, fail_host=True) # Assert we keep trying both channels, but the default channel doesn't change for iteration in range(5): with self.assertRaises(HttpError): protocol.client._download_using_appropriate_channel(direct_func, host_func) self.assertEqual(iteration + 1, direct_func.counter) self.assertEqual(iteration + 1, host_func.counter) self.assertTrue(HostPluginProtocol.is_default_channel) def test_download_using_appropriate_channel_should_change_default_channel_when_secondary_succeeds(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: # Scenario #1: Direct channel is default HostPluginProtocol.is_default_channel = False direct_func, host_func = self._set_and_fail_helper_channel_functions(fail_direct=True, fail_host=False) # Assert we've called both channels and the default channel changed ret = protocol.client._download_using_appropriate_channel(direct_func, host_func) self.assertEqual("host", ret) self.assertEqual(1, direct_func.counter) self.assertEqual(1, host_func.counter) self.assertTrue(HostPluginProtocol.is_default_channel) # If host keeps succeeding, assert we keep calling only that channel and not changing the default. for iteration in range(5): ret = protocol.client._download_using_appropriate_channel(direct_func, host_func) self.assertEqual("host", ret) self.assertEqual(1, direct_func.counter) self.assertEqual(1 + iteration + 1, host_func.counter) self.assertTrue(HostPluginProtocol.is_default_channel) # Scenario #2: Host channel is default HostPluginProtocol.is_default_channel = True direct_func, host_func = self._set_and_fail_helper_channel_functions(fail_direct=False, fail_host=True) # Assert we've called both channels and the default channel changed ret = protocol.client._download_using_appropriate_channel(direct_func, host_func) self.assertEqual("direct", ret) self.assertEqual(1, direct_func.counter) self.assertEqual(1, host_func.counter) self.assertFalse(HostPluginProtocol.is_default_channel) # If direct keeps succeeding, assert we keep calling only that channel and not changing the default. 
for iteration in range(5): ret = protocol.client._download_using_appropriate_channel(direct_func, host_func) self.assertEqual("direct", ret) self.assertEqual(1 + iteration + 1, direct_func.counter) self.assertEqual(1, host_func.counter) self.assertFalse(HostPluginProtocol.is_default_channel) class UpdateGoalStateTestCase(HttpRequestPredicates, AgentTestCase): """ Tests for WireClient.update_goal_state() and WireClient.reset_goal_state() """ def test_it_should_update_the_goal_state_and_the_host_plugin_when_the_incarnation_changes(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.client.get_host_plugin() # if the incarnation changes the behavior is the same for forced and non-forced updates for forced in [True, False]: protocol.mock_wire_data.reload() # start each iteration of the test with fresh mock data # # Update the mock data with random values; include at least one field from each of the components # in the goal state to ensure the entire state was updated. Note that numeric entities, e.g. incarnation, are # actually represented as strings in the goal state. # # Note that the shared config is not parsed by the agent, so we modify the XML data directly. Also, the # certificates are encrypted and it is hard to update a single field; instead, we update the entire list with # empty. # new_incarnation = str(uuid.uuid4()) new_container_id = str(uuid.uuid4()) new_role_config_name = str(uuid.uuid4()) new_hosting_env_deployment_name = str(uuid.uuid4()) new_shared_conf = WireProtocolData.replace_xml_attribute_value(protocol.mock_wire_data.shared_config, "Deployment", "name", str(uuid.uuid4())) new_sequence_number = 12345 if 'Pkcs7BlobWithPfxContents' not in protocol.mock_wire_data.certs: raise Exception('This test requires a non-empty certificate list') protocol.mock_wire_data.set_incarnation(new_incarnation) protocol.mock_wire_data.set_container_id(new_container_id) protocol.mock_wire_data.set_role_config_name(new_role_config_name) protocol.mock_wire_data.set_hosting_env_deployment_name(new_hosting_env_deployment_name) protocol.mock_wire_data.shared_config = new_shared_conf protocol.mock_wire_data.set_extensions_config_sequence_number(new_sequence_number) protocol.mock_wire_data.certs = r''' 2012-11-30 12 CertificatesNonPfxPackage NotPFXData ''' if forced: protocol.client.reset_goal_state() else: protocol.client.update_goal_state() sequence_number = protocol.get_goal_state().extensions_goal_state.extensions[0].settings[0].sequenceNumber self.assertEqual(protocol.client.get_goal_state().incarnation, new_incarnation) self.assertEqual(protocol.client.get_hosting_env().deployment_name, new_hosting_env_deployment_name) self.assertEqual(protocol.client.get_shared_conf().xml_text, new_shared_conf) self.assertEqual(sequence_number, new_sequence_number) self.assertEqual(len(protocol.client.get_certs().cert_list.certificates), 0) self.assertEqual(protocol.client.get_host_plugin().container_id, new_container_id) self.assertEqual(protocol.client.get_host_plugin().role_config_name, new_role_config_name) def test_non_forced_update_should_not_update_the_goal_state_but_should_update_the_host_plugin_when_the_incarnation_does_not_change(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.client.get_host_plugin() # The container id, role config name and shared config can change without the incarnation changing; capture the initial # goal state and then change those fields. 
container_id = protocol.client.get_goal_state().container_id role_config_name = protocol.client.get_goal_state().role_config_name new_container_id = str(uuid.uuid4()) new_role_config_name = str(uuid.uuid4()) protocol.mock_wire_data.set_container_id(new_container_id) protocol.mock_wire_data.set_role_config_name(new_role_config_name) protocol.mock_wire_data.shared_config = WireProtocolData.replace_xml_attribute_value( protocol.mock_wire_data.shared_config, "Deployment", "name", str(uuid.uuid4())) protocol.client.update_goal_state() self.assertEqual(protocol.client.get_goal_state().container_id, container_id) self.assertEqual(protocol.client.get_goal_state().role_config_name, role_config_name) self.assertEqual(protocol.client.get_host_plugin().container_id, new_container_id) self.assertEqual(protocol.client.get_host_plugin().role_config_name, new_role_config_name) def test_forced_update_should_update_the_goal_state_and_the_host_plugin_when_the_incarnation_does_not_change(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.client.get_host_plugin() # The container id, role config name and shared config can change without the incarnation changing incarnation = protocol.client.get_goal_state().incarnation new_container_id = str(uuid.uuid4()) new_role_config_name = str(uuid.uuid4()) new_shared_conf = WireProtocolData.replace_xml_attribute_value( protocol.mock_wire_data.shared_config, "Deployment", "name", str(uuid.uuid4())) protocol.mock_wire_data.set_container_id(new_container_id) protocol.mock_wire_data.set_role_config_name(new_role_config_name) protocol.mock_wire_data.shared_config = new_shared_conf protocol.client.reset_goal_state() self.assertEqual(protocol.client.get_goal_state().incarnation, incarnation) self.assertEqual(protocol.client.get_shared_conf().xml_text, new_shared_conf) self.assertEqual(protocol.client.get_host_plugin().container_id, new_container_id) self.assertEqual(protocol.client.get_host_plugin().role_config_name, new_role_config_name) def test_reset_should_init_provided_goal_state_properties(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.client.reset_goal_state(goal_state_properties=GoalStateProperties.All & ~GoalStateProperties.Certificates) with self.assertRaises(ProtocolError) as context: _ = protocol.client.get_certs() expected_message = "Certificates is not in goal state properties" self.assertIn(expected_message, str(context.exception)) def test_reset_should_init_the_goal_state(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: new_container_id = str(uuid.uuid4()) new_role_config_name = str(uuid.uuid4()) protocol.mock_wire_data.set_container_id(new_container_id) protocol.mock_wire_data.set_role_config_name(new_role_config_name) protocol.client.reset_goal_state() self.assertEqual(protocol.client.get_goal_state().container_id, new_container_id) self.assertEqual(protocol.client.get_goal_state().role_config_name, new_role_config_name) class UpdateHostPluginFromGoalStateTestCase(AgentTestCase): """ Tests for WireClient.update_host_plugin_from_goal_state() """ def test_it_should_update_the_host_plugin_with_or_without_incarnation_changes(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: protocol.client.get_host_plugin() # the behavior should be the same whether the incarnation changes or not for incarnation_change in [True, False]: protocol.mock_wire_data.reload() # start each iteration of the test with fresh mock data new_container_id = str(uuid.uuid4()) 
new_role_config_name = str(uuid.uuid4()) if incarnation_change: protocol.mock_wire_data.set_incarnation(str(uuid.uuid4())) protocol.mock_wire_data.set_container_id(new_container_id) protocol.mock_wire_data.set_role_config_name(new_role_config_name) protocol.mock_wire_data.shared_config = WireProtocolData.replace_xml_attribute_value( protocol.mock_wire_data.shared_config, "Deployment", "name", str(uuid.uuid4())) protocol.client.update_host_plugin_from_goal_state() self.assertEqual(protocol.client.get_host_plugin().container_id, new_container_id) self.assertEqual(protocol.client.get_host_plugin().role_config_name, new_role_config_name) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/test_agent_supported_feature.py000066400000000000000000000100361462617747000265110ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.agent_supported_feature import SupportedFeatureNames, \ get_agent_supported_features_list_for_crp, get_supported_feature_by_name, \ get_agent_supported_features_list_for_extensions from tests.lib.tools import AgentTestCase, patch class TestAgentSupportedFeature(AgentTestCase): def test_it_should_return_features_properly(self): with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", True): self.assertIn(SupportedFeatureNames.MultiConfig, get_agent_supported_features_list_for_crp(), "Multi-config should be fetched in crp_supported_features") with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", False): self.assertNotIn(SupportedFeatureNames.MultiConfig, get_agent_supported_features_list_for_crp(), "Multi-config should not be fetched in crp_supported_features as not supported") self.assertEqual(SupportedFeatureNames.MultiConfig, get_supported_feature_by_name(SupportedFeatureNames.MultiConfig).name, "Invalid/Wrong feature returned") # Raise error if feature name not found with self.assertRaises(NotImplementedError): get_supported_feature_by_name("ABC") def test_it_should_return_extension_supported_features_properly(self): with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True): self.assertIn(SupportedFeatureNames.ExtensionTelemetryPipeline, get_agent_supported_features_list_for_extensions(), "ETP should be in supported features list") with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", False): self.assertNotIn(SupportedFeatureNames.ExtensionTelemetryPipeline, get_agent_supported_features_list_for_extensions(), "ETP should not be in supported features list") self.assertEqual(SupportedFeatureNames.ExtensionTelemetryPipeline, get_supported_feature_by_name(SupportedFeatureNames.ExtensionTelemetryPipeline).name, "Invalid/Wrong feature returned") def test_it_should_return_ga_versioning_governance_feature_properly(self): with 
patch("azurelinuxagent.common.agent_supported_feature._GAVersioningGovernanceFeature.is_supported", True): self.assertIn(SupportedFeatureNames.GAVersioningGovernance, get_agent_supported_features_list_for_crp(), "GAVersioningGovernance should be fetched in crp_supported_features") with patch("azurelinuxagent.common.agent_supported_feature._GAVersioningGovernanceFeature.is_supported", False): self.assertNotIn(SupportedFeatureNames.GAVersioningGovernance, get_agent_supported_features_list_for_crp(), "GAVersioningGovernance should not be fetched in crp_supported_features as not supported") self.assertEqual(SupportedFeatureNames.GAVersioningGovernance, get_supported_feature_by_name(SupportedFeatureNames.GAVersioningGovernance).name, "Invalid/Wrong feature returned") # Raise error if feature name not found with self.assertRaises(NotImplementedError): get_supported_feature_by_name("ABC") Azure-WALinuxAgent-2b21de5/tests/common/test_conf.py000066400000000000000000000217651462617747000225330ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os.path import azurelinuxagent.common.conf as conf from azurelinuxagent.common.utils import fileutil from tests.lib.tools import AgentTestCase, data_dir class TestConf(AgentTestCase): # Note: # -- These values *MUST* match those from data/test_waagent.conf EXPECTED_CONFIGURATION = { "Extensions.Enabled": True, "Extensions.WaitForCloudInit": False, "Extensions.WaitForCloudInitTimeout": 3600, "Provisioning.Agent": "auto", "Provisioning.DeleteRootPassword": True, "Provisioning.RegenerateSshHostKeyPair": True, "Provisioning.SshHostKeyPairType": "rsa", "Provisioning.MonitorHostName": True, "Provisioning.DecodeCustomData": False, "Provisioning.ExecuteCustomData": False, "Provisioning.PasswordCryptId": '6', "Provisioning.PasswordCryptSaltLength": 10, "Provisioning.AllowResetSysUser": False, "ResourceDisk.Format": True, "ResourceDisk.Filesystem": "ext4", "ResourceDisk.MountPoint": "/mnt/resource", "ResourceDisk.EnableSwap": False, "ResourceDisk.EnableSwapEncryption": False, "ResourceDisk.SwapSizeMB": 0, "ResourceDisk.MountOptions": None, "Logs.Verbose": False, "OS.EnableFIPS": True, "OS.RootDeviceScsiTimeout": '300', "OS.OpensslPath": '/usr/bin/openssl', "OS.SshClientAliveInterval": 42, "OS.SshDir": "/notareal/path", "HttpProxy.Host": None, "HttpProxy.Port": None, "DetectScvmmEnv": False, "Lib.Dir": "/var/lib/waagent", "DVD.MountPoint": "/mnt/cdrom/secure", "Pid.File": "/var/run/waagent.pid", "Extension.LogDir": "/var/log/azure", "OS.HomeDir": "/home", "OS.EnableRDMA": False, "OS.UpdateRdmaDriver": False, "OS.CheckRdmaDriver": False, "AutoUpdate.Enabled": True, "AutoUpdate.GAFamily": "Prod", "AutoUpdate.UpdateToLatestVersion": True, "EnableOverProvisioning": True, "OS.AllowHTTP": False, "OS.EnableFirewall": False } def setUp(self): AgentTestCase.setUp(self) self.conf = conf.ConfigurationProvider() conf.load_conf_from_file( os.path.join(data_dir, "test_waagent.conf"), 
self.conf) def test_get_should_return_default_when_key_is_not_found(self): self.assertEqual("The Default Value", self.conf.get("this-key-does-not-exist", "The Default Value")) self.assertEqual("The Default Value", self.conf.get("this-key-does-not-exist", lambda: "The Default Value")) def test_get_switch_should_return_default_when_key_is_not_found(self): self.assertEqual(True, self.conf.get_switch("this-key-does-not-exist", True)) self.assertEqual(True, self.conf.get_switch("this-key-does-not-exist", lambda: True)) def test_get_int_should_return_default_when_key_is_not_found(self): self.assertEqual(123456789, self.conf.get_int("this-key-does-not-exist", 123456789)) self.assertEqual(123456789, self.conf.get_int("this-key-does-not-exist", lambda: 123456789)) def test_key_value_handling(self): self.assertEqual("Value1", self.conf.get("FauxKey1", "Bad")) self.assertEqual("Value2 Value2", self.conf.get("FauxKey2", "Bad")) self.assertEqual("delalloc,rw,noatime,nobarrier,users,mode=777", self.conf.get("FauxKey3", "Bad")) def test_get_ssh_dir(self): self.assertTrue(conf.get_ssh_dir(self.conf).startswith("/notareal/path")) def test_get_sshd_conf_file_path(self): self.assertTrue(conf.get_sshd_conf_file_path( self.conf).startswith("/notareal/path")) def test_get_ssh_key_glob(self): self.assertTrue(conf.get_ssh_key_glob( self.conf).startswith("/notareal/path")) def test_get_ssh_key_private_path(self): self.assertTrue(conf.get_ssh_key_private_path( self.conf).startswith("/notareal/path")) def test_get_ssh_key_public_path(self): self.assertTrue(conf.get_ssh_key_public_path( self.conf).startswith("/notareal/path")) def test_get_fips_enabled(self): self.assertTrue(conf.get_fips_enabled(self.conf)) def test_get_provision_agent(self): self.assertTrue(conf.get_provisioning_agent(self.conf) == 'auto') def test_get_configuration(self): configuration = conf.get_configuration(self.conf) self.assertTrue(len(configuration.keys()) > 0) for k in TestConf.EXPECTED_CONFIGURATION.keys(): self.assertEqual( TestConf.EXPECTED_CONFIGURATION[k], configuration[k], k) def test_get_agent_disabled_file_path(self): self.assertEqual(conf.get_disable_agent_file_path(self.conf), os.path.join(self.tmp_dir, conf.DISABLE_AGENT_FILE)) def test_write_agent_disabled(self): """ write_agent_disabled() should create an empty disable_agent file """ from azurelinuxagent.pa.provision.default import ProvisionHandler disable_file_path = conf.get_disable_agent_file_path(self.conf) self.assertFalse(os.path.exists(disable_file_path)) ProvisionHandler.write_agent_disabled() self.assertTrue(os.path.exists(disable_file_path)) self.assertEqual('', fileutil.read_file(disable_file_path)) def test_get_extensions_enabled(self): self.assertTrue(conf.get_extensions_enabled(self.conf)) def test_get_auto_update_to_latest_version(self): # update flags not set self.assertTrue(conf.get_auto_update_to_latest_version(self.conf)) config = conf.ConfigurationProvider() # AutoUpdate.Enabled is set to 'n' conf.load_conf_from_file( os.path.join(data_dir, "config/waagent_auto_update_disabled.conf"), config) self.assertFalse(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'n'") # AutoUpdate.Enabled is set to 'y' conf.load_conf_from_file( os.path.join(data_dir, "config/waagent_auto_update_enabled.conf"), config) self.assertTrue(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'y'") # AutoUpdate.UpdateToLatestVersion is set to 'n' conf.load_conf_from_file( os.path.join(data_dir,
"config/waagent_update_to_latest_version_disabled.conf"), config) self.assertFalse(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'n'") # AutoUpdate.UpdateToLatestVersion is set to 'y' conf.load_conf_from_file( os.path.join(data_dir, "config/waagent_update_to_latest_version_enabled.conf"), config) self.assertTrue(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'y'") # AutoUpdate.Enabled is set to 'y' and AutoUpdate.UpdateToLatestVersion is set to 'n' conf.load_conf_from_file( os.path.join(data_dir, "config/waagent_auto_update_enabled_update_to_latest_version_disabled.conf"), config) self.assertFalse(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'n'") # AutoUpdate.Enabled is set to 'n' and AutoUpdate.UpdateToLatestVersion is set to 'y' conf.load_conf_from_file( os.path.join(data_dir, "config/waagent_auto_update_disabled_update_to_latest_version_enabled.conf"), config) self.assertTrue(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'y'") # AutoUpdate.Enabled is set to 'n' and AutoUpdate.UpdateToLatestVersion is set to 'n' conf.load_conf_from_file( os.path.join(data_dir, "config/waagent_auto_update_disabled_update_to_latest_version_disabled.conf"), config) self.assertFalse(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'n'") # AutoUpdate.Enabled is set to 'y' and AutoUpdate.UpdateToLatestVersion is set to 'y' conf.load_conf_from_file( os.path.join(data_dir, "config/waagent_auto_update_enabled_update_to_latest_version_enabled.conf"), config) self.assertTrue(conf.get_auto_update_to_latest_version(config), "AutoUpdate.UpdateToLatestVersion should be 'y'") Azure-WALinuxAgent-2b21de5/tests/common/test_errorstate.py000066400000000000000000000074051462617747000237730ustar00rootroot00000000000000import unittest from datetime import timedelta, datetime from azurelinuxagent.common.errorstate import ErrorState from tests.lib.tools import Mock, patch class TestErrorState(unittest.TestCase): def test_errorstate00(self): """ If ErrorState is never incremented, it will never trigger. """ test_subject = ErrorState(timedelta(seconds=10000)) self.assertFalse(test_subject.is_triggered()) self.assertEqual(0, test_subject.count) self.assertEqual('unknown', test_subject.fail_time) def test_errorstate01(self): """ If ErrorState is never incremented and the timedelta is zero, it will not trigger. """ test_subject = ErrorState(timedelta(seconds=0)) self.assertFalse(test_subject.is_triggered()) self.assertEqual(0, test_subject.count) self.assertEqual('unknown', test_subject.fail_time) def test_errorstate02(self): """ If ErrorState is incremented and its timedelta is zero, it triggers immediately. """ test_subject = ErrorState(timedelta(seconds=0)) test_subject.incr() self.assertTrue(test_subject.is_triggered()) self.assertEqual(1, test_subject.count) self.assertEqual('0.0 min', test_subject.fail_time) @patch('azurelinuxagent.common.errorstate.datetime') def test_errorstate03(self, mock_time): """ ErrorState will not trigger until 1. ErrorState has been incr() at least once. 2. The timedelta from the first incr() has elapsed.
""" test_subject = ErrorState(timedelta(minutes=15)) for x in range(1, 10): mock_time.utcnow = Mock(return_value=datetime.utcnow() + timedelta(minutes=x)) test_subject.incr() self.assertFalse(test_subject.is_triggered()) mock_time.utcnow = Mock(return_value=datetime.utcnow() + timedelta(minutes=30)) test_subject.incr() self.assertTrue(test_subject.is_triggered()) self.assertEqual('29.0 min', test_subject.fail_time) def test_errorstate04(self): """ If ErrorState is reset the timestamp of the last incr() is reset to None. """ test_subject = ErrorState(timedelta(minutes=15)) self.assertTrue(test_subject.timestamp is None) test_subject.incr() self.assertTrue(test_subject.timestamp is not None) test_subject.reset() self.assertTrue(test_subject.timestamp is None) def test_errorstate05(self): """ Test the fail_time for various scenarios """ test_subject = ErrorState(timedelta(minutes=15)) self.assertEqual('unknown', test_subject.fail_time) test_subject.incr() self.assertEqual('0.0 min', test_subject.fail_time) test_subject.timestamp = datetime.utcnow() - timedelta(seconds=60) self.assertEqual('1.0 min', test_subject.fail_time) test_subject.timestamp = datetime.utcnow() - timedelta(seconds=73) self.assertEqual('1.22 min', test_subject.fail_time) test_subject.timestamp = datetime.utcnow() - timedelta(seconds=120) self.assertEqual('2.0 min', test_subject.fail_time) test_subject.timestamp = datetime.utcnow() - timedelta(seconds=60 * 59) self.assertEqual('59.0 min', test_subject.fail_time) test_subject.timestamp = datetime.utcnow() - timedelta(seconds=60 * 60) self.assertEqual('1.0 hr', test_subject.fail_time) test_subject.timestamp = datetime.utcnow() - timedelta(seconds=60 * 95) self.assertEqual('1.58 hr', test_subject.fail_time) test_subject.timestamp = datetime.utcnow() - timedelta(seconds=60 * 60 * 3) self.assertEqual('3.0 hr', test_subject.fail_time) Azure-WALinuxAgent-2b21de5/tests/common/test_event.py000066400000000000000000001415711462617747000227250ustar00rootroot00000000000000# coding=utf-8 # # Copyright 2017 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # from __future__ import print_function import json import os import platform import re import shutil import threading import xml.dom from datetime import datetime, timedelta from mock import MagicMock from azurelinuxagent.common.utils import textutil, fileutil from azurelinuxagent.common import event, logger from azurelinuxagent.common.AgentGlobals import AgentGlobals from azurelinuxagent.common.event import add_event, add_periodic, add_log_event, elapsed_milliseconds, \ WALAEventOperation, parse_xml_event, parse_json_event, AGENT_EVENT_FILE_EXTENSION, EVENTS_DIRECTORY, \ TELEMETRY_EVENT_EVENT_ID, TELEMETRY_EVENT_PROVIDER_ID, TELEMETRY_LOG_EVENT_ID, TELEMETRY_LOG_PROVIDER_ID, \ report_metric from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.telemetryevent import CommonTelemetryEventSchema, GuestAgentGenericLogsSchema, \ GuestAgentExtensionEventsSchema, GuestAgentPerfCounterEventsSchema from azurelinuxagent.common.version import CURRENT_AGENT, CURRENT_VERSION, AGENT_EXECUTION_MODE from azurelinuxagent.ga.collect_telemetry_events import _CollectAndEnqueueEvents from tests.lib import wire_protocol_data from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.tools import AgentTestCase, data_dir, load_data, patch, skip_if_predicate_true, is_python_version_26_or_34 from tests.lib.event_logger_tools import EventLoggerTools class TestEvent(HttpRequestPredicates, AgentTestCase): # These are the Operation/Category for events produced by the tests below (as opposed by events produced by the agent itself) _Message = "ThisIsATestEventMessage" _Operation = "ThisIsATestEventOperation" _Category = "ThisIsATestMetricCategory" def setUp(self): AgentTestCase.setUp(self) self.event_dir = os.path.join(self.tmp_dir, EVENTS_DIRECTORY) EventLoggerTools.initialize_event_logger(self.event_dir) threading.current_thread().setName("TestEventThread") osutil = get_osutil() self.expected_common_parameters = { # common parameters computed at event creation; the timestamp (stored as the opcode name) is not included # here and is checked separately from these parameters CommonTelemetryEventSchema.GAVersion: CURRENT_AGENT, CommonTelemetryEventSchema.ContainerId: AgentGlobals.get_container_id(), CommonTelemetryEventSchema.EventTid: threading.current_thread().ident, CommonTelemetryEventSchema.EventPid: os.getpid(), CommonTelemetryEventSchema.TaskName: threading.current_thread().getName(), CommonTelemetryEventSchema.KeywordName: json.dumps({"CpuArchitecture": platform.machine()}), # common parameters computed from the OS platform CommonTelemetryEventSchema.OSVersion: EventLoggerTools.get_expected_os_version(), CommonTelemetryEventSchema.ExecutionMode: AGENT_EXECUTION_MODE, CommonTelemetryEventSchema.RAM: int(osutil.get_total_mem()), CommonTelemetryEventSchema.Processors: osutil.get_processor_cores(), # common parameters from the goal state CommonTelemetryEventSchema.TenantName: 'db00a7755a5e4e8a8fe4b19bc3b330c3', CommonTelemetryEventSchema.RoleName: 'MachineRole', CommonTelemetryEventSchema.RoleInstanceName: 'b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0', # common parameters CommonTelemetryEventSchema.Location: EventLoggerTools.mock_imds_data['location'], CommonTelemetryEventSchema.SubscriptionId: EventLoggerTools.mock_imds_data['subscriptionId'], CommonTelemetryEventSchema.ResourceGroupName: 
EventLoggerTools.mock_imds_data['resourceGroupName'], CommonTelemetryEventSchema.VMId: EventLoggerTools.mock_imds_data['vmId'], CommonTelemetryEventSchema.ImageOrigin: EventLoggerTools.mock_imds_data['image_origin'], } self.expected_extension_events_params = { GuestAgentExtensionEventsSchema.IsInternal: False, GuestAgentExtensionEventsSchema.ExtensionType: "" } @staticmethod def _report_events(protocol, event_list): def _yield_events(): for telemetry_event in event_list: yield telemetry_event protocol.client.report_event(_yield_events()) @staticmethod def _collect_events(): def append_event(e): for p in e.parameters: if p.name == 'Operation' and p.value == TestEvent._Operation \ or p.name == 'Category' and p.value == TestEvent._Category \ or p.name == 'Message' and p.value == TestEvent._Message \ or p.name == 'Context1' and p.value == TestEvent._Message: event_list.append(e) event_list = [] send_telemetry_events = MagicMock() send_telemetry_events.enqueue_event = MagicMock(wraps=append_event) event_collector = _CollectAndEnqueueEvents(send_telemetry_events) event_collector.process_events() return event_list def _collect_event_files(self): files = [os.path.join(self.event_dir, f) for f in os.listdir(self.event_dir)] return [f for f in files if fileutil.findre_in_file(f, TestEvent._Operation)] @staticmethod def _is_guest_extension_event(event): # pylint: disable=redefined-outer-name return event.eventId == TELEMETRY_EVENT_EVENT_ID and event.providerId == TELEMETRY_EVENT_PROVIDER_ID @staticmethod def _is_telemetry_log_event(event): # pylint: disable=redefined-outer-name return event.eventId == TELEMETRY_LOG_EVENT_ID and event.providerId == TELEMETRY_LOG_PROVIDER_ID def test_parse_xml_event(self, *args): # pylint: disable=unused-argument data_str = load_data('ext/event_from_extension.xml') event = parse_xml_event(data_str) # pylint: disable=redefined-outer-name self.assertIsNotNone(event) self.assertNotEqual(0, len(event.parameters)) self.assertTrue(all(param is not None for param in event.parameters)) def test_parse_json_event(self, *args): # pylint: disable=unused-argument data_str = load_data('ext/event.json') event = parse_json_event(data_str) # pylint: disable=redefined-outer-name self.assertIsNotNone(event) self.assertNotEqual(0, len(event.parameters)) self.assertTrue(all(param is not None for param in event.parameters)) def test_add_event_should_use_the_container_id_from_the_most_recent_goal_state(self): def create_event_and_return_container_id(): # pylint: disable=inconsistent-return-statements event.add_event(name='Event', op=TestEvent._Operation) event_list = self._collect_events() self.assertEqual(len(event_list), 1, "Could not find the event created by add_event") for p in event_list[0].parameters: if p.name == CommonTelemetryEventSchema.ContainerId: return p.value self.fail("Could not find the Container ID on the event") with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: contained_id = create_event_and_return_container_id() # The expected value comes from DATA_FILE self.assertEqual(contained_id, 'c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2', "Incorrect container ID") protocol.mock_wire_data.set_container_id('AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE') protocol.client.update_goal_state() contained_id = create_event_and_return_container_id() self.assertEqual(contained_id, 'AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE', "Incorrect container ID") protocol.mock_wire_data.set_container_id('11111111-2222-3333-4444-555555555555') protocol.client.update_goal_state() contained_id =
create_event_and_return_container_id() self.assertEqual(contained_id, '11111111-2222-3333-4444-555555555555', "Incorrect container ID") def test_add_event_should_handle_event_errors(self): with patch("azurelinuxagent.common.utils.fileutil.mkdir", side_effect=OSError): with patch('azurelinuxagent.common.logger.periodic_error') as mock_logger_periodic_error: add_event('test', message='test event', op=TestEvent._Operation) # The event shouldn't have been created self.assertTrue(len(self._collect_event_files()) == 0) # The exception should have been caught and logged args = mock_logger_periodic_error.call_args exception_message = args[0][1] self.assertIn("[EventError] Failed to create events folder", exception_message) def test_event_status_event_marked(self): es = event.__event_status__ self.assertFalse(es.event_marked("Foo", "1.2", "FauxOperation")) es.mark_event_status("Foo", "1.2", "FauxOperation", True) self.assertTrue(es.event_marked("Foo", "1.2", "FauxOperation")) event.__event_status__ = event.EventStatus() event.init_event_status(self.tmp_dir) es = event.__event_status__ self.assertTrue(es.event_marked("Foo", "1.2", "FauxOperation")) def test_event_status_defaults_to_success(self): es = event.__event_status__ self.assertTrue(es.event_succeeded("Foo", "1.2", "FauxOperation")) def test_event_status_records_status(self): es = event.EventStatus() es.mark_event_status("Foo", "1.2", "FauxOperation", True) self.assertTrue(es.event_succeeded("Foo", "1.2", "FauxOperation")) es.mark_event_status("Foo", "1.2", "FauxOperation", False) self.assertFalse(es.event_succeeded("Foo", "1.2", "FauxOperation")) def test_event_status_preserves_state(self): es = event.__event_status__ es.mark_event_status("Foo", "1.2", "FauxOperation", False) self.assertFalse(es.event_succeeded("Foo", "1.2", "FauxOperation")) event.__event_status__ = event.EventStatus() event.init_event_status(self.tmp_dir) es = event.__event_status__ self.assertFalse(es.event_succeeded("Foo", "1.2", "FauxOperation")) def test_should_emit_event_ignores_unknown_operations(self): event.__event_status__ = event.EventStatus() self.assertTrue(event.should_emit_event("Foo", "1.2", "FauxOperation", True)) self.assertTrue(event.should_emit_event("Foo", "1.2", "FauxOperation", False)) # Marking the event has no effect event.mark_event_status("Foo", "1.2", "FauxOperation", True) self.assertTrue(event.should_emit_event("Foo", "1.2", "FauxOperation", True)) self.assertTrue(event.should_emit_event("Foo", "1.2", "FauxOperation", False)) def test_should_emit_event_handles_known_operations(self): event.__event_status__ = event.EventStatus() # Known operations always initially "fire" for op in event.__event_status_operations__: self.assertTrue(event.should_emit_event("Foo", "1.2", op, True)) self.assertTrue(event.should_emit_event("Foo", "1.2", op, False)) # Note a success event... for op in event.__event_status_operations__: event.mark_event_status("Foo", "1.2", op, True) # Subsequent success events should not fire, but failures will for op in event.__event_status_operations__: self.assertFalse(event.should_emit_event("Foo", "1.2", op, True)) self.assertTrue(event.should_emit_event("Foo", "1.2", op, False)) # Note a failure event... 
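# (mark_event_status() flips the persisted success/failure state: repeated results with
# the same outcome are de-duplicated, while transitions between success and failure are
# always reportable. The loop below verifies the failure edge of that state machine.)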
for op in event.__event_status_operations__: event.mark_event_status("Foo", "1.2", op, False) # Subsequent success events fire and failure do not for op in event.__event_status_operations__: self.assertTrue(event.should_emit_event("Foo", "1.2", op, True)) self.assertFalse(event.should_emit_event("Foo", "1.2", op, False)) @patch('azurelinuxagent.common.event.EventLogger') @patch('azurelinuxagent.common.logger.error') @patch('azurelinuxagent.common.logger.warn') @patch('azurelinuxagent.common.logger.info') def test_should_log_errors_if_failed_operation_and_empty_event_dir(self, mock_logger_info, mock_logger_warn, mock_logger_error, mock_reporter): mock_reporter.event_dir = None add_event("dummy name", version=CURRENT_VERSION, op=WALAEventOperation.Download, is_success=False, message="dummy event message", reporter=mock_reporter) self.assertEqual(1, mock_logger_error.call_count) self.assertEqual(1, mock_logger_warn.call_count) self.assertEqual(0, mock_logger_info.call_count) args = mock_logger_error.call_args[0] self.assertEqual(('dummy name', 'Download', 'dummy event message', 0), args[1:]) @patch('azurelinuxagent.common.event.EventLogger') @patch('azurelinuxagent.common.logger.error') @patch('azurelinuxagent.common.logger.warn') @patch('azurelinuxagent.common.logger.info') def test_should_log_errors_if_failed_operation_and_not_empty_event_dir(self, mock_logger_info, mock_logger_warn, mock_logger_error, mock_reporter): mock_reporter.event_dir = "dummy" with patch("azurelinuxagent.common.event.should_emit_event", return_value=True) as mock_should_emit_event: with patch("azurelinuxagent.common.event.mark_event_status"): with patch("azurelinuxagent.common.event.EventLogger._add_event"): add_event("dummy name", version=CURRENT_VERSION, op=WALAEventOperation.Download, is_success=False, message="dummy event message") self.assertEqual(1, mock_should_emit_event.call_count) self.assertEqual(1, mock_logger_error.call_count) self.assertEqual(0, mock_logger_warn.call_count) self.assertEqual(0, mock_logger_info.call_count) args = mock_logger_error.call_args[0] self.assertEqual(('dummy name', 'Download', 'dummy event message', 0), args[1:]) @patch('azurelinuxagent.common.event.EventLogger.add_event') def test_periodic_emits_if_not_previously_sent(self, mock_event): event.__event_logger__.reset_periodic() event.add_periodic(logger.EVERY_DAY, "FauxEvent") self.assertEqual(1, mock_event.call_count) @patch('azurelinuxagent.common.event.EventLogger.add_event') def test_periodic_does_not_emit_if_previously_sent(self, mock_event): event.__event_logger__.reset_periodic() event.add_periodic(logger.EVERY_DAY, "FauxEvent") self.assertEqual(1, mock_event.call_count) event.add_periodic(logger.EVERY_DAY, "FauxEvent") self.assertEqual(1, mock_event.call_count) @patch('azurelinuxagent.common.event.EventLogger.add_event') def test_periodic_emits_if_forced(self, mock_event): event.__event_logger__.reset_periodic() event.add_periodic(logger.EVERY_DAY, "FauxEvent") self.assertEqual(1, mock_event.call_count) event.add_periodic(logger.EVERY_DAY, "FauxEvent", force=True) self.assertEqual(2, mock_event.call_count) @patch('azurelinuxagent.common.event.EventLogger.add_event') def test_periodic_emits_after_elapsed_delta(self, mock_event): event.__event_logger__.reset_periodic() event.add_periodic(logger.EVERY_DAY, "FauxEvent") self.assertEqual(1, mock_event.call_count) event.add_periodic(logger.EVERY_DAY, "FauxEvent") self.assertEqual(1, mock_event.call_count) h = hash("FauxEvent"+WALAEventOperation.Unknown+ustr(True)) 
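# The de-duplication key for periodic events is the hash of name + operation + str(is_success);
# recomputing it here lets the test back-date the "last sent" timestamp so that the next
# add_periodic() call falls outside the delta and fires again.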
event.__event_logger__.periodic_events[h] = \ datetime.now() - logger.EVERY_DAY - logger.EVERY_HOUR event.add_periodic(logger.EVERY_DAY, "FauxEvent") self.assertEqual(2, mock_event.call_count) @patch('azurelinuxagent.common.event.EventLogger.add_event') def test_periodic_forwards_args(self, mock_event): event.__event_logger__.reset_periodic() event.add_periodic(logger.EVERY_DAY, "FauxEvent", op=WALAEventOperation.Log, is_success=True, duration=0, version=str(CURRENT_VERSION), message="FauxEventMessage", log_event=True, force=False) mock_event.assert_called_once_with("FauxEvent", op=WALAEventOperation.Log, is_success=True, duration=0, version=str(CURRENT_VERSION), message="FauxEventMessage", log_event=True) @patch("azurelinuxagent.common.event.datetime") @patch('azurelinuxagent.common.event.EventLogger.add_event') def test_periodic_forwards_args_default_values(self, mock_event, mock_datetime): # pylint: disable=unused-argument event.__event_logger__.reset_periodic() event.add_periodic(logger.EVERY_DAY, "FauxEvent", message="FauxEventMessage") mock_event.assert_called_once_with("FauxEvent", op=WALAEventOperation.Unknown, is_success=True, duration=0, version=str(CURRENT_VERSION), message="FauxEventMessage", log_event=True) @patch("azurelinuxagent.common.event.EventLogger.add_event") def test_add_event_default_variables(self, mock_add_event): add_event('test', message='test event') mock_add_event.assert_called_once_with('test', duration=0, is_success=True, log_event=True, message='test event', op=WALAEventOperation.Unknown, version=str(CURRENT_VERSION)) def test_collect_events_should_delete_event_files(self): add_event(name='Event1', op=TestEvent._Operation) add_event(name='Event1', op=TestEvent._Operation) add_event(name='Event3', op=TestEvent._Operation) event_files = self._collect_event_files() self.assertEqual(3, len(event_files), "Did not find all the event files that were created") event_list = self._collect_events() event_files = os.listdir(self.event_dir) self.assertEqual(len(event_list), 3, "Did not collect all the events that were created") self.assertEqual(len(event_files), 0, "The event files were not deleted") def test_save_event(self): add_event('test', message='test event', op=TestEvent._Operation) self.assertTrue(len(self._collect_event_files()) == 1) # checking the extension of the file created. for filename in os.listdir(self.event_dir): self.assertTrue(filename.endswith(AGENT_EVENT_FILE_EXTENSION), 'Event file does not have the correct extension ({0}): {1}'.format(AGENT_EVENT_FILE_EXTENSION, filename)) @staticmethod def _get_event_message(evt): for p in evt.parameters: if p.name == GuestAgentExtensionEventsSchema.Message: return p.value return None def test_collect_events_should_be_able_to_process_events_with_non_ascii_characters(self): self._create_test_event_file("custom_script_nonascii_characters.tld") event_list = self._collect_events() self.assertEqual(len(event_list), 1) self.assertEqual(TestEvent._get_event_message(event_list[0]), u'World\u05e2\u05d9\u05d5\u05ea \u05d0\u05d7\u05e8\u05d5\u05ea\u0906\u091c') @skip_if_predicate_true(is_python_version_26_or_34, "Disabled on Python 2.6 and 3.4 for now. 
Need to revisit to fix it") def test_collect_events_should_ignore_invalid_event_files(self): self._create_test_event_file("custom_script_1.tld") # a valid event self._create_test_event_file("custom_script_utf-16.tld") self._create_test_event_file("custom_script_invalid_json.tld") os.chmod(self._create_test_event_file("custom_script_no_read_access.tld"), 0o200) self._create_test_event_file("custom_script_2.tld") # another valid event with patch("azurelinuxagent.common.event.add_event") as mock_add_event: event_list = self._collect_events() self.assertEqual(len(event_list), 2) self.assertTrue(all(TestEvent._get_event_message(evt) == "A test telemetry message." for evt in event_list), "The valid events were not found") invalid_events = [] total_dropped_count = 0 for args, kwargs in mock_add_event.call_args_list: # pylint: disable=unused-variable match = re.search(r"DroppedEventsCount: (\d+)", kwargs['message']) if match is not None: invalid_events.append(kwargs['op']) total_dropped_count += int(match.groups()[0]) self.assertEqual(3, total_dropped_count, "Total dropped events don't match") self.assertIn(WALAEventOperation.CollectEventErrors, invalid_events, "{0} errors not reported".format(WALAEventOperation.CollectEventErrors)) self.assertIn(WALAEventOperation.CollectEventUnicodeErrors, invalid_events, "{0} errors not reported".format(WALAEventOperation.CollectEventUnicodeErrors)) def test_save_event_rollover(self): # We keep 1000 events only; the older ones are removed. num_of_events = 999 add_event('test', message='first event') # this brings the total number of events to num_of_events + 1 for i in range(num_of_events): add_event('test', message='test event {0}'.format(i)) num_of_events += 1 # account for the 'first event' added above events = os.listdir(self.event_dir) events.sort() self.assertTrue(len(events) == num_of_events, "{0} events found, {1} expected".format(len(events), num_of_events)) first_event = os.path.join(self.event_dir, events[0]) with open(first_event) as first_fh: first_event_text = first_fh.read() self.assertTrue('first event' in first_event_text) add_event('test', message='last event') # Adding the above event displaces the first_event events = os.listdir(self.event_dir) events.sort() self.assertTrue(len(events) == num_of_events, "{0} events found, {1} expected".format(len(events), num_of_events)) first_event = os.path.join(self.event_dir, events[0]) with open(first_event) as first_fh: first_event_text = first_fh.read() self.assertFalse('first event' in first_event_text, "'first event' should have been rolled over, but was found in {0}".format(first_event_text)) self.assertTrue('test event 0' in first_event_text) last_event = os.path.join(self.event_dir, events[-1]) with open(last_event) as last_fh: last_event_text = last_fh.read() self.assertTrue('last event' in last_event_text) def test_save_event_cleanup(self): for i in range(0, 2000): evt = os.path.join(self.event_dir, '{0}.tld'.format(ustr(1491004920536531 + i))) with open(evt, 'w') as fh: fh.write('{0}{1}'.format(TestEvent._Operation, i)) test_events = self._collect_event_files() self.assertTrue(len(test_events) == 2000, "{0} events found, 2000 expected".format(len(test_events))) add_event('test', message='last event', op=TestEvent._Operation) events = os.listdir(self.event_dir) self.assertTrue(len(events) == 1000, "{0} events found, 1000 expected".format(len(events))) def test_elapsed_milliseconds(self): utc_start = datetime.utcnow() + timedelta(days=1) self.assertEqual(0, elapsed_milliseconds(utc_start)) def _assert_event_includes_all_parameters_in_the_telemetry_schema(self,
actual_event, expected_parameters, assert_timestamp): # add the common parameters to the set of expected parameters all_expected_parameters = self.expected_common_parameters.copy() if self._is_guest_extension_event(actual_event): all_expected_parameters.update(self.expected_extension_events_params.copy()) all_expected_parameters.update(expected_parameters) # convert the event parameters to a dictionary; do not include the timestamp, # which is verified using assert_timestamp() event_parameters = {} timestamp = None for p in actual_event.parameters: if p.name == CommonTelemetryEventSchema.OpcodeName: # the timestamp is stored in the opcode name timestamp = p.value else: event_parameters[p.name] = p.value if self._is_telemetry_log_event(actual_event): # Remove Context2 from event parameters and verify that the timestamp is correct telemetry_log_event_timestamp = event_parameters.pop(GuestAgentGenericLogsSchema.Context2, None) self.assertIsNotNone(telemetry_log_event_timestamp, "Context2 should be filled with a timestamp") assert_timestamp(telemetry_log_event_timestamp) self.maxDiff = None # the dictionary diffs can be quite large; display the whole thing self.assertDictEqual(event_parameters, all_expected_parameters) self.assertIsNotNone(timestamp, "The event does not have a timestamp (Opcode)") assert_timestamp(timestamp) @staticmethod def _datetime_to_event_timestamp(dt): return dt.strftime(logger.Logger.LogTimeFormatInUTC) def _test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self, create_event_function, expected_parameters): """ Helper to tests methods that create events (e.g. add_event, add_log_event, etc). """ # execute the method that creates the event, capturing the time range of the execution timestamp_lower = TestEvent._datetime_to_event_timestamp(datetime.utcnow()) create_event_function() timestamp_upper = TestEvent._datetime_to_event_timestamp(datetime.utcnow()) event_list = self._collect_events() self.assertEqual(len(event_list), 1) # verify the event parameters self._assert_event_includes_all_parameters_in_the_telemetry_schema( event_list[0], expected_parameters, assert_timestamp=lambda timestamp: self.assertTrue(timestamp_lower <= timestamp <= timestamp_upper, "The event timestamp (opcode) is incorrect") ) def test_add_event_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self): self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema( create_event_function=lambda: add_event( name="TestEvent", op=TestEvent._Operation, is_success=True, duration=1234, version="1.2.3.4", message="Test Message"), expected_parameters={ GuestAgentExtensionEventsSchema.Name: 'TestEvent', GuestAgentExtensionEventsSchema.Version: '1.2.3.4', GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation, GuestAgentExtensionEventsSchema.OperationSuccess: True, GuestAgentExtensionEventsSchema.Message: 'Test Message', GuestAgentExtensionEventsSchema.Duration: 1234, GuestAgentExtensionEventsSchema.ExtensionType: ''}) def test_add_periodic_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self): self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema( create_event_function=lambda: add_periodic( delta=logger.EVERY_MINUTE, name="TestPeriodicEvent", op=TestEvent._Operation, is_success=False, duration=4321, version="4.3.2.1", message="Test Periodic Message"), expected_parameters={ 
GuestAgentExtensionEventsSchema.Name: 'TestPeriodicEvent', GuestAgentExtensionEventsSchema.Version: '4.3.2.1', GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation, GuestAgentExtensionEventsSchema.OperationSuccess: False, GuestAgentExtensionEventsSchema.Message: 'Test Periodic Message', GuestAgentExtensionEventsSchema.Duration: 4321, GuestAgentExtensionEventsSchema.ExtensionType: ''}) @skip_if_predicate_true(lambda: True, "Enable this test when SEND_LOGS_TO_TELEMETRY is enabled") def test_add_log_event_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self): self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema( create_event_function=lambda: add_log_event(logger.LogLevel.INFO, 'A test INFO log event'), expected_parameters={ GuestAgentGenericLogsSchema.EventName: 'Log', GuestAgentGenericLogsSchema.CapabilityUsed: 'INFO', GuestAgentGenericLogsSchema.Context1: 'log event', GuestAgentGenericLogsSchema.Context3: '' }) def test_add_log_event_should_always_create_events_when_forced(self): self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema( create_event_function=lambda: add_log_event(logger.LogLevel.WARNING, TestEvent._Message, forced=True), expected_parameters={ GuestAgentGenericLogsSchema.EventName: 'Log', GuestAgentGenericLogsSchema.CapabilityUsed: 'WARNING', GuestAgentGenericLogsSchema.Context1: TestEvent._Message, GuestAgentGenericLogsSchema.Context3: '' }) def test_add_log_event_should_not_create_event_if_not_allowed_and_not_forced(self): add_log_event(logger.LogLevel.WARNING, 'A test WARNING log event') event_list = self._collect_events() self.assertEqual(len(event_list), 0, "No events should be created if not forced and not allowed") def test_report_metric_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema(self): self._test_create_event_function_should_create_events_that_have_all_the_parameters_in_the_telemetry_schema( create_event_function=lambda: report_metric(TestEvent._Category, "%idle", "total", 12.34), expected_parameters={ GuestAgentPerfCounterEventsSchema.Category: TestEvent._Category, GuestAgentPerfCounterEventsSchema.Counter: '%idle', GuestAgentPerfCounterEventsSchema.Instance: 'total', GuestAgentPerfCounterEventsSchema.Value: 12.34 }) def _create_test_event_file(self, source_file): source_file_path = os.path.join(data_dir, "events", source_file) target_file_path = os.path.join(self.event_dir, source_file) shutil.copy(source_file_path, target_file_path) return target_file_path @staticmethod def _get_file_creation_timestamp(file): # pylint: disable=redefined-builtin return TestEvent._datetime_to_event_timestamp(datetime.fromtimestamp(os.path.getmtime(file))) def test_collect_events_should_add_all_the_parameters_in_the_telemetry_schema_to_legacy_agent_events(self): # Agents <= 2.2.46 use *.tld as the extension for event files (newer agents use "*.waagent.tld") and they populate # only a subset of fields; the rest are added by the current agent when events are collected. 
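# The canned legacy_agent.tld file used below mimics such an agent; the assertions verify
# that collection back-fills every field the current telemetry schema requires (see the
# expected_parameters dictionary that follows).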
self._create_test_event_file("legacy_agent.tld") event_list = self._collect_events() self.assertEqual(len(event_list), 1) self._assert_event_includes_all_parameters_in_the_telemetry_schema( event_list[0], expected_parameters={ GuestAgentExtensionEventsSchema.Name: "WALinuxAgent", GuestAgentExtensionEventsSchema.Version: "9.9.9", GuestAgentExtensionEventsSchema.IsInternal: False, GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation, GuestAgentExtensionEventsSchema.OperationSuccess: True, GuestAgentExtensionEventsSchema.Message: "The cgroup filesystem is ready to use", GuestAgentExtensionEventsSchema.Duration: 1234, GuestAgentExtensionEventsSchema.ExtensionType: "ALegacyExtensionType", CommonTelemetryEventSchema.GAVersion: "WALinuxAgent-1.1.1", CommonTelemetryEventSchema.ContainerId: "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE", CommonTelemetryEventSchema.EventTid: 98765, CommonTelemetryEventSchema.EventPid: 4321, CommonTelemetryEventSchema.TaskName: "ALegacyTask", CommonTelemetryEventSchema.KeywordName: "ALegacyKeywordName"}, assert_timestamp=lambda timestamp: self.assertEqual(timestamp, '1970-01-01 12:00:00', "The event timestamp (opcode) is incorrect") ) def test_collect_events_should_use_the_file_creation_time_for_legacy_agent_events_missing_a_timestamp(self): test_file = self._create_test_event_file("legacy_agent_no_timestamp.tld") event_creation_time = TestEvent._get_file_creation_timestamp(test_file) event_list = self._collect_events() self.assertEqual(len(event_list), 1) self._assert_event_includes_all_parameters_in_the_telemetry_schema( event_list[0], expected_parameters={ GuestAgentExtensionEventsSchema.Name: "WALinuxAgent", GuestAgentExtensionEventsSchema.Version: "9.9.9", GuestAgentExtensionEventsSchema.IsInternal: False, GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation, GuestAgentExtensionEventsSchema.OperationSuccess: True, GuestAgentExtensionEventsSchema.Message: "The cgroup filesystem is ready to use", GuestAgentExtensionEventsSchema.Duration: 1234, GuestAgentExtensionEventsSchema.ExtensionType: "ALegacyExtensionType", CommonTelemetryEventSchema.GAVersion: "WALinuxAgent-1.1.1", CommonTelemetryEventSchema.ContainerId: "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE", CommonTelemetryEventSchema.EventTid: 98765, CommonTelemetryEventSchema.EventPid: 4321, CommonTelemetryEventSchema.TaskName: "ALegacyTask", CommonTelemetryEventSchema.KeywordName: "ALegacyKeywordName"}, assert_timestamp=lambda timestamp: self.assertEqual(timestamp, event_creation_time, "The event timestamp (opcode) is incorrect") ) def _assert_extension_event_includes_all_parameters_in_the_telemetry_schema(self, event_file): # Extensions drop their events as *.tld files on the events directory. They populate only a subset of fields, # and the rest are added by the agent when events are collected. 
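# A CustomScript 2.0.4 event is used as the canonical sample below: the *.tld file holds
# only the extension-supplied fields, and the assertion checks the values the agent is
# expected to fill in at collection time (GAVersion, ContainerId, timestamps, and so on).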
test_file = self._create_test_event_file(event_file) event_creation_time = TestEvent._get_file_creation_timestamp(test_file) event_list = self._collect_events() self.assertEqual(len(event_list), 1) self._assert_event_includes_all_parameters_in_the_telemetry_schema( event_list[0], expected_parameters={ GuestAgentExtensionEventsSchema.Name: 'Microsoft.Azure.Extensions.CustomScript', GuestAgentExtensionEventsSchema.Version: '2.0.4', GuestAgentExtensionEventsSchema.Operation: TestEvent._Operation, GuestAgentExtensionEventsSchema.OperationSuccess: True, GuestAgentExtensionEventsSchema.Message: 'A test telemetry message.', GuestAgentExtensionEventsSchema.Duration: 150000, GuestAgentExtensionEventsSchema.ExtensionType: 'json'}, assert_timestamp=lambda timestamp: self.assertEqual(timestamp, event_creation_time, "The event timestamp (opcode) is incorrect") ) def test_collect_events_should_add_all_the_parameters_in_the_telemetry_schema_to_extension_events(self): self._assert_extension_event_includes_all_parameters_in_the_telemetry_schema('custom_script_1.tld') def test_collect_events_should_ignore_extra_parameters_in_extension_events(self): self._assert_extension_event_includes_all_parameters_in_the_telemetry_schema('custom_script_extra_parameters.tld') def test_report_event_should_encode_call_stack_correctly(self): """ The Message in some telemetry events that include call stacks is being truncated in Kusto. While the issue doesn't seem to be in the agent itself, this test verifies that the Message of the event we send in the HTTP request matches the Message we read from the event's file. """ def get_event_message_from_event_file(event_file): with open(event_file, "rb") as fd: event_data = fd.read().decode("utf-8") # event files are UTF-8 encoded telemetry_event = json.loads(event_data) for p in telemetry_event['parameters']: if p['name'] == GuestAgentExtensionEventsSchema.Message: return p['value'] raise ValueError('Could not find the Message for the telemetry event in {0}'.format(event_file)) def get_event_message_from_http_request_body(event_body): # The XML for the event is sent over as a CDATA element ("Event") in the request's body http_request_body = event_body if (event_body is None or type(event_body) is ustr) else textutil.str_to_encoded_ustr(event_body) request_body_xml_doc = textutil.parse_doc(http_request_body) event_node = textutil.find(request_body_xml_doc, "Event") if event_node is None: raise ValueError('Could not find the Event node in the XML document') if len(event_node.childNodes) != 1: raise ValueError('The Event node in the XML document should have exactly 1 child') event_node_first_child = event_node.childNodes[0] if event_node_first_child.nodeType != xml.dom.Node.CDATA_SECTION_NODE: raise ValueError('The Event node contents should be CDATA') event_node_cdata = event_node_first_child.nodeValue # The CDATA will contain a sequence of "<Param Name='...' Value='...'/>" nodes, which # correspond to the parameters of the telemetry event. Wrap those into a "<Helper>" node # and extract the "Message" event_xml_text = '<Helper>{0}</Helper>'.format(event_node_cdata) event_xml_doc = textutil.parse_doc(event_xml_text) helper_node = textutil.find(event_xml_doc, "Helper") for child in helper_node.childNodes: if child.getAttribute('Name') == GuestAgentExtensionEventsSchema.Message: return child.getAttribute('Value') raise ValueError('Could not find the Message for the telemetry event. Request body: {0}'.format(http_request_body))
def http_post_handler(url, body, **__): if self.is_telemetry_request(url): http_post_handler.request_body = body return MockHttpResponse(status=200) return None http_post_handler.request_body = None with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_post_handler=http_post_handler) as protocol: event_file_path = self._create_test_event_file("event_with_callstack.waagent.tld") expected_message = get_event_message_from_event_file(event_file_path) event_list = self._collect_events() self._report_events(protocol, event_list) event_message = get_event_message_from_http_request_body(http_post_handler.request_body) self.assertEqual(event_message, expected_message, "The Message in the HTTP request does not match the Message in the event's *.tld file") def test_report_event_should_encode_events_correctly(self): def http_post_handler(url, body, **__): if self.is_telemetry_request(url): http_post_handler.request_body = body return MockHttpResponse(status=200) return None http_post_handler.request_body = None with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_post_handler=http_post_handler) as protocol: test_messages = [ 'Non-English message - 此文字不是英文的', "Ξεσκεπάζω τὴν ψυχοφθόρα βδελυγμία", "The quick brown fox jumps over the lazy dog", "El pingüino Wenceslao hizo kilómetros bajo exhaustiva lluvia y frío, añoraba a su querido cachorro.", "Portez ce vieux whisky au juge blond qui fume sur son île intérieure, à côté de l'alcôve ovoïde, où les bûches", "se consument dans l'âtre, ce qui lui permet de penser à la cænogenèse de l'être dont il est question", "dans la cause ambiguë entendue à Moÿ, dans un capharnaüm qui, pense-t-il, diminue çà et là la qualité de son œuvre.", "D'fhuascail Íosa, Úrmhac na hÓighe Beannaithe, pór Éava agus Ádhaimh", "Árvíztűrő tükörfúrógép", "Kæmi ný öxi hér ykist þjófum nú bæði víl og ádrepa", "Sævör grét áðan því úlpan var ónýt", "いろはにほへとちりぬるを わかよたれそつねならむ うゐのおくやまけふこえて あさきゆめみしゑひもせす", "? דג סקרן שט בים מאוכזב ולפתע מצא לו חברה איך הקליטה", "Pchnąć w tę łódź jeża lub ośm skrzyń fig", "Normal string event" ] for msg in test_messages: add_event('TestEventEncoding', message=msg, op=TestEvent._Operation) event_list = self._collect_events() self._report_events(protocol, event_list) # In Py2, encode() produces a str and in Py3 it produces a bytes string. # type(bytes) == type(str) for Py2 so this check is mainly for Py3 to ensure that the event is encoded properly.
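# A rough sketch of the encoding equivalence relied on below (assuming a Py3 interpreter):
#   u'此文字不是英文的'.encode('utf-8')         # -> bytes
#   textutil.str_to_encoded_ustr(msg)          # -> unicode text, safe to .encode()
# which is why each expected message is re-encoded to UTF-8 before the substring check
# against the raw request body.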
self.assertIsInstance(http_post_handler.request_body, bytes, "The Event request body should be encoded") self.assertIn(textutil.str_to_encoded_ustr(msg).encode('utf-8'), http_post_handler.request_body, "Encoded message not found in body") class TestMetrics(AgentTestCase): @patch('azurelinuxagent.common.event.EventLogger.save_event') def test_report_metric(self, mock_event): event.report_metric("cpu", "%idle", "_total", 10.0) self.assertEqual(1, mock_event.call_count) event_json = mock_event.call_args[0][0] self.assertIn(event.TELEMETRY_EVENT_PROVIDER_ID, event_json) self.assertIn("%idle", event_json) event_dictionary = json.loads(event_json) self.assertEqual(event_dictionary['providerId'], event.TELEMETRY_EVENT_PROVIDER_ID) for parameter in event_dictionary["parameters"]: if parameter['name'] == GuestAgentPerfCounterEventsSchema.Counter: self.assertEqual(parameter['value'], '%idle') break else: self.fail("Counter '%idle' not found in event parameters: {0}".format(repr(event_dictionary))) def test_cleanup_message(self): ev_logger = event.EventLogger() self.assertEqual(None, ev_logger._clean_up_message(None)) self.assertEqual("", ev_logger._clean_up_message("")) self.assertEqual("Daemon Activate resource disk failure", ev_logger._clean_up_message( "Daemon Activate resource disk failure")) self.assertEqual("[M.A.E.CS-2.0.7] Target handler state", ev_logger._clean_up_message( '2019/10/07 21:54:16.629444 INFO [M.A.E.CS-2.0.7] Target handler state')) self.assertEqual("[M.A.E.CS-2.0.7] Initializing extension M.A.E.CS-2.0.7", ev_logger._clean_up_message( '2019/10/07 21:54:17.284385 INFO [M.A.E.CS-2.0.7] Initializing extension M.A.E.CS-2.0.7')) self.assertEqual("ExtHandler ProcessGoalState completed [incarnation 4; 4197 ms]", ev_logger._clean_up_message( "2019/10/07 21:55:38.474861 INFO ExtHandler ProcessGoalState completed [incarnation 4; 4197 ms]")) self.assertEqual("Daemon Azure Linux Agent Version:2.2.43", ev_logger._clean_up_message( "2019/10/07 21:52:28.615720 INFO Daemon Azure Linux Agent Version:2.2.43")) self.assertEqual('Daemon Cgroup controller "memory" is not mounted. Failed to create a cgroup for the VM Agent;' ' resource usage will not be tracked', ev_logger._clean_up_message('Daemon Cgroup controller "memory" is not mounted. 
Failed to ' 'create a cgroup for the VM Agent; resource usage will not be ' 'tracked')) self.assertEqual('ExtHandler Root directory /sys/fs/cgroup/memory/walinuxagent.extensions does not exist.', ev_logger._clean_up_message("2019/10/08 23:45:05.691037 WARNING ExtHandler Root directory " "/sys/fs/cgroup/memory/walinuxagent.extensions does not exist.")) self.assertEqual("LinuxAzureDiagnostic started to handle.", ev_logger._clean_up_message("2019/10/07 22:02:40 LinuxAzureDiagnostic started to handle.")) self.assertEqual("VMAccess started to handle.", ev_logger._clean_up_message("2019/10/07 21:56:58 VMAccess started to handle.")) self.assertEqual( '[PERIODIC] ExtHandler Root directory /sys/fs/cgroup/memory/walinuxagent.extensions does not exist.', ev_logger._clean_up_message("2019/10/08 23:45:05.691037 WARNING [PERIODIC] ExtHandler Root directory " "/sys/fs/cgroup/memory/walinuxagent.extensions does not exist.")) self.assertEqual("[PERIODIC] LinuxAzureDiagnostic started to handle.", ev_logger._clean_up_message( "2019/10/07 22:02:40 [PERIODIC] LinuxAzureDiagnostic started to handle.")) self.assertEqual("[PERIODIC] VMAccess started to handle.", ev_logger._clean_up_message("2019/10/07 21:56:58 [PERIODIC] VMAccess started to handle.")) self.assertEqual('[PERIODIC] Daemon Cgroup controller "memory" is not mounted. Failed to create a cgroup for ' 'the VM Agent; resource usage will not be tracked', ev_logger._clean_up_message('[PERIODIC] Daemon Cgroup controller "memory" is not mounted. ' 'Failed to create a cgroup for the VM Agent; resource usage will ' 'not be tracked')) self.assertEqual('The time should be in UTC', ev_logger._clean_up_message( '2019-11-26T18:15:06.866746Z INFO The time should be in UTC')) self.assertEqual('The time should be in UTC', ev_logger._clean_up_message( '2019-11-26T18:15:06.866746Z The time should be in UTC')) self.assertEqual('[PERIODIC] The time should be in UTC', ev_logger._clean_up_message( '2019-11-26T18:15:06.866746Z INFO [PERIODIC] The time should be in UTC')) self.assertEqual('[PERIODIC] The time should be in UTC', ev_logger._clean_up_message( '2019-11-26T18:15:06.866746Z [PERIODIC] The time should be in UTC')) Azure-WALinuxAgent-2b21de5/tests/common/test_logger.py000066400000000000000000000711031462617747000230540ustar00rootroot00000000000000# Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import json # pylint: disable=unused-import import os import tempfile from datetime import datetime, timedelta from azurelinuxagent.common.event import __event_logger__, add_log_event, MAX_NUMBER_OF_EVENTS, EVENTS_DIRECTORY import azurelinuxagent.common.logger as logger from azurelinuxagent.common.utils import fileutil from tests.lib.tools import AgentTestCase, MagicMock, patch, skip_if_predicate_true _MSG_INFO = "This is our test info logging message {0} {1}" _MSG_WARN = "This is our test warn logging message {0} {1}" _MSG_ERROR = "This is our test error logging message {0} {1}" _MSG_VERBOSE = "This is our test verbose logging message {0} {1}" _DATA = ["arg1", "arg2"] class TestLogger(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.lib_dir = tempfile.mkdtemp() self.event_dir = os.path.join(self.lib_dir, EVENTS_DIRECTORY) fileutil.mkdir(self.event_dir) self.log_file = tempfile.mkstemp(prefix="logfile-")[1] logger.reset_periodic() def tearDown(self): AgentTestCase.tearDown(self) logger.reset_periodic() logger.DEFAULT_LOGGER.appenders *= 0 fileutil.rm_dirs(self.event_dir) @patch('azurelinuxagent.common.logger.Logger.verbose') @patch('azurelinuxagent.common.logger.Logger.warn') @patch('azurelinuxagent.common.logger.Logger.error') @patch('azurelinuxagent.common.logger.Logger.info') def test_periodic_emits_if_not_previously_sent(self, mock_info, mock_error, mock_warn, mock_verbose): logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, logger.LogLevel.INFO, *_DATA) self.assertEqual(1, mock_info.call_count) logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, logger.LogLevel.ERROR, *_DATA) self.assertEqual(1, mock_error.call_count) logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, logger.LogLevel.WARNING, *_DATA) self.assertEqual(1, mock_warn.call_count) logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, logger.LogLevel.VERBOSE, *_DATA) self.assertEqual(1, mock_verbose.call_count) @patch('azurelinuxagent.common.logger.Logger.verbose') @patch('azurelinuxagent.common.logger.Logger.warn') @patch('azurelinuxagent.common.logger.Logger.error') @patch('azurelinuxagent.common.logger.Logger.info') def test_periodic_does_not_emit_if_previously_sent(self, mock_info, mock_error, mock_warn, mock_verbose): # The count does not increase from 1 - the first time it sends the data. 
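# Each periodic_* call is de-duplicated on the hash of its message format string: once the
# hash is present in DEFAULT_LOGGER.periodic_messages, further calls within the delta are
# dropped. A minimal sketch of the gating (an illustration, not the actual implementation):
#   if hash(msg) not in self.periodic_messages or \
#           datetime.now() >= self.periodic_messages[hash(msg)] + delta:
#       self.log(level, msg, *args)
#       self.periodic_messages[hash(msg)] = datetime.now()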
logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA) self.assertIn(hash(_MSG_INFO), logger.DEFAULT_LOGGER.periodic_messages) self.assertEqual(1, mock_info.call_count) logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA) self.assertIn(hash(_MSG_INFO), logger.DEFAULT_LOGGER.periodic_messages) self.assertEqual(1, mock_info.call_count) logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA) self.assertIn(hash(_MSG_WARN), logger.DEFAULT_LOGGER.periodic_messages) self.assertEqual(1, mock_warn.call_count) logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA) self.assertIn(hash(_MSG_WARN), logger.DEFAULT_LOGGER.periodic_messages) self.assertEqual(1, mock_warn.call_count) logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA) self.assertIn(hash(_MSG_ERROR), logger.DEFAULT_LOGGER.periodic_messages) self.assertEqual(1, mock_error.call_count) logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA) self.assertIn(hash(_MSG_ERROR), logger.DEFAULT_LOGGER.periodic_messages) self.assertEqual(1, mock_error.call_count) logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA) self.assertIn(hash(_MSG_VERBOSE), logger.DEFAULT_LOGGER.periodic_messages) self.assertEqual(1, mock_verbose.call_count) logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA) self.assertIn(hash(_MSG_VERBOSE), logger.DEFAULT_LOGGER.periodic_messages) self.assertEqual(1, mock_verbose.call_count) self.assertEqual(4, len(logger.DEFAULT_LOGGER.periodic_messages)) @patch('azurelinuxagent.common.logger.Logger.verbose') @patch('azurelinuxagent.common.logger.Logger.warn') @patch('azurelinuxagent.common.logger.Logger.error') @patch('azurelinuxagent.common.logger.Logger.info') def test_periodic_emits_after_elapsed_delta(self, mock_info, mock_error, mock_warn, mock_verbose): logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA) self.assertEqual(1, mock_info.call_count) logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA) self.assertEqual(1, mock_info.call_count) logger.DEFAULT_LOGGER.periodic_messages[hash(_MSG_INFO)] = datetime.now() - \ logger.EVERY_DAY - logger.EVERY_HOUR logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA) self.assertEqual(2, mock_info.call_count) logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA) self.assertEqual(1, mock_warn.call_count) logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA) self.assertEqual(1, mock_warn.call_count) logger.DEFAULT_LOGGER.periodic_messages[hash(_MSG_WARN)] = datetime.now() - \ logger.EVERY_DAY - logger.EVERY_HOUR logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA) self.assertEqual(2, mock_warn.call_count) logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA) self.assertEqual(1, mock_error.call_count) logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA) self.assertEqual(1, mock_error.call_count) logger.DEFAULT_LOGGER.periodic_messages[hash(_MSG_ERROR)] = datetime.now() - \ logger.EVERY_DAY - logger.EVERY_HOUR logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA) self.assertEqual(2, mock_error.call_count) logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA) self.assertEqual(1, mock_verbose.call_count) logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA) self.assertEqual(1, mock_verbose.call_count) logger.DEFAULT_LOGGER.periodic_messages[hash(_MSG_VERBOSE)] = datetime.now() - \ logger.EVERY_DAY - logger.EVERY_HOUR logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA) self.assertEqual(2, mock_verbose.call_count) @patch('azurelinuxagent.common.logger.Logger.verbose')
@patch('azurelinuxagent.common.logger.Logger.warn') @patch('azurelinuxagent.common.logger.Logger.error') @patch('azurelinuxagent.common.logger.Logger.info') def test_periodic_forwards_message_and_args(self, mock_info, mock_error, mock_warn, mock_verbose): logger.periodic_info(logger.EVERY_DAY, _MSG_INFO, *_DATA) mock_info.assert_called_once_with(_MSG_INFO, *_DATA) logger.periodic_error(logger.EVERY_DAY, _MSG_ERROR, *_DATA) mock_error.assert_called_once_with(_MSG_ERROR, *_DATA) logger.periodic_warn(logger.EVERY_DAY, _MSG_WARN, *_DATA) mock_warn.assert_called_once_with(_MSG_WARN, *_DATA) logger.periodic_verbose(logger.EVERY_DAY, _MSG_VERBOSE, *_DATA) mock_verbose.assert_called_once_with(_MSG_VERBOSE, *_DATA) def test_logger_should_log_in_utc(self): file_name = "test.log" file_path = os.path.join(self.tmp_dir, file_name) test_logger = logger.Logger() test_logger.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=file_path) before_write_utc = datetime.utcnow() test_logger.info("The time should be in UTC") with open(file_path, "r") as log_file: log = log_file.read() try: time_in_file = datetime.strptime(log.split(logger.LogLevel.STRINGS[logger.LogLevel.INFO])[0].strip(), logger.Logger.LogTimeFormatInUTC) except ValueError: self.fail("Ensure timestamp follows ISO-8601 format + 'Z' for UTC") # If the time difference is > 5 secs, there's a high probability that time_in_file is in a different TZ self.assertTrue((time_in_file - before_write_utc) <= timedelta(seconds=5)) @patch("azurelinuxagent.common.logger.datetime") def test_logger_should_log_micro_seconds(self, mock_dt): # datetime.isoformat() skips the microseconds when microsecond == 0; this test ensures that they are always set file_name = "test.log" file_path = os.path.join(self.tmp_dir, file_name) test_logger = logger.Logger() test_logger.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=file_path) ts_with_no_ms = datetime.utcnow().replace(microsecond=0) mock_dt.utcnow = MagicMock(return_value=ts_with_no_ms) test_logger.info("The time should contain microseconds") with open(file_path, "r") as log_file: log = log_file.read() try: time_in_file = datetime.strptime(log.split(logger.LogLevel.STRINGS[logger.LogLevel.INFO])[0].strip(), logger.Logger.LogTimeFormatInUTC) except ValueError: self.fail("Ensure timestamp follows ISO-8601 format and has microseconds in it") self.assertEqual(ts_with_no_ms, time_in_file, "Timestamps don't match") def test_telemetry_logger(self): mock = MagicMock() appender = logger.TelemetryAppender(logger.LogLevel.WARNING, mock) appender.write(logger.LogLevel.WARNING, "--unit-test-WARNING--") mock.assert_called_with(logger.LogLevel.WARNING, "--unit-test-WARNING--") mock.reset_mock() appender.write(logger.LogLevel.ERROR, "--unit-test-ERROR--") mock.assert_called_with(logger.LogLevel.ERROR, "--unit-test-ERROR--") mock.reset_mock() appender.write(logger.LogLevel.INFO, "--unit-test-INFO--") mock.assert_not_called() mock.reset_mock() for i in range(5): # pylint: disable=unused-variable appender.write(logger.LogLevel.ERROR, "--unit-test-ERROR--") appender.write(logger.LogLevel.INFO, "--unit-test-INFO--") self.assertEqual(5, mock.call_count) # Only ERROR should be called. @patch('azurelinuxagent.common.event.EventLogger.save_event') def test_telemetry_logger_not_on_by_default(self, mock_save): appender = logger.TelemetryAppender(logger.LogLevel.WARNING, add_log_event) appender.write(logger.LogLevel.WARNING, 'Cgroup controller "memory" is not mounted. 
' 'Failed to create a cgroup for extension ' 'Microsoft.OSTCExtensions.DummyExtension-1.2.3.4') self.assertEqual(0, mock_save.call_count) @patch("azurelinuxagent.common.logger.StdoutAppender.write") @patch("azurelinuxagent.common.logger.TelemetryAppender.write") @patch("azurelinuxagent.common.logger.ConsoleAppender.write") @patch("azurelinuxagent.common.logger.FileAppender.write") def test_add_appender(self, mock_file_write, mock_console_write, mock_telem_write, mock_stdout_write): lg = logger.Logger(logger.DEFAULT_LOGGER, "TestLogger1") lg.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file) lg.add_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event) lg.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path="/dev/null") lg.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.WARNING, path=None) counter = 0 for appender in lg.appenders: if isinstance(appender, logger.FileAppender): counter += 1 elif isinstance(appender, logger.TelemetryAppender): counter += 1 elif isinstance(appender, logger.ConsoleAppender): counter += 1 elif isinstance(appender, logger.StdoutAppender): counter += 1 # All 4 appenders should have been included. self.assertEqual(4, counter) # The write for all the loggers will get called, but the levels are honored in the individual write method # itself. Each appender has its own test to validate the writing of the log message for different levels. # For Reference: tests.common.test_logger.TestAppender lg.warn("Test Log") self.assertEqual(1, mock_file_write.call_count) self.assertEqual(1, mock_console_write.call_count) self.assertEqual(1, mock_telem_write.call_count) self.assertEqual(1, mock_stdout_write.call_count) lg.info("Test Log") self.assertEqual(2, mock_file_write.call_count) self.assertEqual(2, mock_console_write.call_count) self.assertEqual(2, mock_telem_write.call_count) self.assertEqual(2, mock_stdout_write.call_count) lg.error("Test Log") self.assertEqual(3, mock_file_write.call_count) self.assertEqual(3, mock_console_write.call_count) self.assertEqual(3, mock_telem_write.call_count) self.assertEqual(3, mock_stdout_write.call_count) @patch("azurelinuxagent.common.logger.StdoutAppender.write") @patch("azurelinuxagent.common.logger.TelemetryAppender.write") @patch("azurelinuxagent.common.logger.ConsoleAppender.write") @patch("azurelinuxagent.common.logger.FileAppender.write") def test_set_prefix(self, mock_file_write, mock_console_write, mock_telem_write, mock_stdout_write): lg = logger.Logger(logger.DEFAULT_LOGGER) prefix = "YoloLogger" lg.set_prefix(prefix) self.assertEqual(lg.prefix, prefix) lg.add_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file) lg.add_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event) lg.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path="/dev/null") lg.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.WARNING, path=None) lg.error("Test Log") self.assertIn(prefix, mock_file_write.call_args[0][1]) self.assertIn(prefix, mock_console_write.call_args[0][1]) self.assertIn(prefix, mock_telem_write.call_args[0][1]) self.assertIn(prefix, mock_stdout_write.call_args[0][1]) @patch("azurelinuxagent.common.logger.StdoutAppender.write") @patch("azurelinuxagent.common.logger.TelemetryAppender.write") @patch("azurelinuxagent.common.logger.ConsoleAppender.write") @patch("azurelinuxagent.common.logger.FileAppender.write") def test_nested_logger(self, mock_file_write, mock_console_write, 
mock_telem_write, mock_stdout_write): """ The purpose of this test is to verify that a logger is created correctly when it is passed another logger, and that the appenders correctly receive the logged messages. This is how the ExtHandlerInstance logger works. We initialize the default logger (logger), then create a new logger (lg) from it, and then log using both logger and lg to verify that both log entries flow through. """ parent_prefix = "ParentLogger" child_prefix = "ChildLogger" logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file) logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event) logger.add_logger_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path="/dev/null") logger.add_logger_appender(logger.AppenderType.STDOUT, logger.LogLevel.WARNING) logger.set_prefix(parent_prefix) lg = logger.Logger(logger.DEFAULT_LOGGER, child_prefix) lg.error("Test Log") self.assertEqual(1, mock_file_write.call_count) self.assertEqual(1, mock_console_write.call_count) self.assertEqual(1, mock_telem_write.call_count) self.assertEqual(1, mock_stdout_write.call_count) self.assertIn(child_prefix, mock_file_write.call_args[0][1]) self.assertIn(child_prefix, mock_console_write.call_args[0][1]) self.assertIn(child_prefix, mock_telem_write.call_args[0][1]) self.assertIn(child_prefix, mock_stdout_write.call_args[0][1]) logger.error("Test Log") self.assertEqual(2, mock_file_write.call_count) self.assertEqual(2, mock_console_write.call_count) self.assertEqual(2, mock_telem_write.call_count) self.assertEqual(2, mock_stdout_write.call_count) self.assertIn(parent_prefix, mock_file_write.call_args[0][1]) self.assertIn(parent_prefix, mock_console_write.call_args[0][1]) self.assertIn(parent_prefix, mock_telem_write.call_args[0][1]) self.assertIn(parent_prefix, mock_stdout_write.call_args[0][1]) @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_telemetry_logger_add_log_event(self, mock_lib_dir, *_): mock_lib_dir.return_value = self.lib_dir __event_logger__.event_dir = self.event_dir prefix = "YoloLogger" logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event) logger.set_prefix(prefix) logger.warn('Test Log - Warning') event_files = os.listdir(__event_logger__.event_dir) self.assertEqual(1, len(event_files)) log_file_event = os.path.join(__event_logger__.event_dir, event_files[0]) try: with open(log_file_event) as logfile: logcontent = logfile.read() # Checking the contents of the event file. self.assertIn("Test Log - Warning", logcontent) except Exception as e: self.assertFalse(True, "The log file looks like it isn't correctly set up for this test. Take a look. " # pylint: disable=redundant-unittest-assert "{0}".format(e)) @skip_if_predicate_true(lambda: True, "Enable this test when SEND_LOGS_TO_TELEMETRY is enabled") @patch("azurelinuxagent.common.logger.StdoutAppender.write") @patch("azurelinuxagent.common.logger.ConsoleAppender.write") @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True) def test_telemetry_logger_verify_maximum_recursion_depths_doesnt_happen(self, *_): logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path="/dev/null") logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event) for i in range(MAX_NUMBER_OF_EVENTS): logger.warn('Test Log - {0} - 1 - Warning'.format(i)) exception_caught = False # #1035 was caused by too many files being written in an error condition. Adding even one more here broke # the camel's back earlier - it would go into an infinite recursion as telemetry would call log, which in turn # would call telemetry, and so on. # The description of the fix is given in the comments @ azurelinuxagent.common.logger.Logger#log.write_log. try: for i in range(10): logger.warn('Test Log - {0} - 2 - Warning'.format(i)) except RuntimeError: exception_caught = True self.assertFalse(exception_caught, msg="Caught a Runtime Error. This should not have been raised.") @skip_if_predicate_true(lambda: True, "Enable this test when SEND_LOGS_TO_TELEMETRY is enabled") @patch("azurelinuxagent.common.logger.StdoutAppender.write") @patch("azurelinuxagent.common.logger.ConsoleAppender.write") @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_telemetry_logger_check_all_file_logs_written_when_events_gt_MAX_NUMBER_OF_EVENTS(self, mock_lib_dir, *_): mock_lib_dir.return_value = self.lib_dir __event_logger__.event_dir = self.event_dir no_of_log_statements = MAX_NUMBER_OF_EVENTS + 100 exception_caught = False prefix = "YoloLogger" logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file) logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event) logger.set_prefix(prefix) # Calling logger.warn no_of_log_statements times would cause the telemetry appender to write # 1000 events into the events dir, and then drop the remaining events. It should not raise a RuntimeError try: for i in range(0, no_of_log_statements): logger.warn('Test Log - {0} - 1 - Warning'.format(i)) except RuntimeError: exception_caught = True self.assertFalse(exception_caught, msg="Caught a Runtime Error. This should not have been raised.") self.assertEqual(MAX_NUMBER_OF_EVENTS, len(os.listdir(__event_logger__.event_dir))) try: with open(self.log_file) as logfile: logcontent = logfile.readlines() # Checking the last log entry. # Subtracting 1 as range is exclusive of the upper bound self.assertIn("WARNING {1} Test Log - {0} - 1 - Warning".format(no_of_log_statements - 1, prefix), logcontent[-1]) # Checking the 1001st log entry. We know that the 1001st entry would generate a PERIODIC message about too many # events, which should be captured in the log file as well. self.assertRegex(logcontent[1001], r"(.*WARNING\s*{0}\s*\[PERIODIC\]\s*Too many files under:.*{1}, " r"current count\:\s*\d+,\s*removing oldest\s*.*)".format(prefix, self.event_dir)) except Exception as e: self.assertFalse(True, "The log file looks like it isn't correctly set up for this test. " # pylint: disable=redundant-unittest-assert "Take a look. 
{0}".format(e)) class TestAppender(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.lib_dir = tempfile.mkdtemp() self.event_dir = os.path.join(self.lib_dir, EVENTS_DIRECTORY) fileutil.mkdir(self.event_dir) self.log_file = tempfile.mkstemp(prefix="logfile-")[1] logger.reset_periodic() def tearDown(self): AgentTestCase.tearDown(self) logger.reset_periodic() fileutil.rm_dirs(self.event_dir) logger.DEFAULT_LOGGER.appenders *= 0 @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True) @patch("azurelinuxagent.common.logger.sys.stdout.write") @patch("azurelinuxagent.common.event.EventLogger.add_log_event") def test_no_appenders_added(self, mock_add_log_event, mock_sys_stdout, *_): # Validating no logs are written in any appender logger.verbose("test-verbose") logger.info("test-info") logger.warn("test-warn") logger.error("test-error") # Validating Console and File logs with open(self.log_file) as logfile: logcontent = logfile.readlines() self.assertEqual(0, len(logcontent)) # Validating telemetry call self.assertEqual(0, mock_add_log_event.call_count) # Validating stdout call self.assertEqual(0, mock_sys_stdout.call_count) def test_console_appender(self): logger.add_logger_appender(logger.AppenderType.CONSOLE, logger.LogLevel.WARNING, path=self.log_file) logger.verbose("test-verbose") with open(self.log_file) as logfile: logcontent = logfile.readlines() # Levels are honored and Verbose should not be written. self.assertEqual(0, len(logcontent)) logger.info("test-info") with open(self.log_file) as logfile: logcontent = logfile.readlines() # Levels are honored and Info should not be written. self.assertEqual(0, len(logcontent)) # As console has a mode of w, it'll always only have 1 line only. logger.warn("test-warn") with open(self.log_file) as logfile: logcontent = logfile.readlines() self.assertEqual(1, len(logcontent)) self.assertRegex(logcontent[0], r"(.*WARNING\s\w+\s*test-warn.*)") logger.error("test-error") with open(self.log_file) as logfile: logcontent = logfile.readlines() # Levels are honored and Info, Verbose should not be written. self.assertEqual(1, len(logcontent)) self.assertRegex(logcontent[0], r"(.*ERROR\s\w+\s*test-error.*)") def test_file_appender(self): logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, path=self.log_file) logger.verbose("test-verbose") logger.info("test-info") logger.warn("test-warn") logger.error("test-error") with open(self.log_file) as logfile: logcontent = logfile.readlines() # Levels are honored and Verbose should not be written. 
self.assertEqual(3, len(logcontent)) self.assertRegex(logcontent[0], r"(.*INFO\s\w+\s*test-info.*)") self.assertRegex(logcontent[1], r"(.*WARNING\s\w+\s*test-warn.*)") self.assertRegex(logcontent[2], r"(.*ERROR\s\w+\s*test-error.*)") @patch("azurelinuxagent.common.event.send_logs_to_telemetry", return_value=True) @patch("azurelinuxagent.common.event.EventLogger.add_log_event") def test_telemetry_appender(self, mock_add_log_event, *_): logger.add_logger_appender(logger.AppenderType.TELEMETRY, logger.LogLevel.WARNING, path=add_log_event) logger.verbose("test-verbose") logger.info("test-info") logger.warn("test-warn") logger.error("test-error") self.assertEqual(2, mock_add_log_event.call_count) @patch("azurelinuxagent.common.logger.sys.stdout.write") def test_stdout_appender(self, mock_sys_stdout): logger.add_logger_appender(logger.AppenderType.STDOUT, logger.LogLevel.ERROR) logger.verbose("test-verbose") logger.info("test-info") logger.warn("test-warn") logger.error("test-error") # Validating only test-error gets logged and not others. self.assertEqual(1, mock_sys_stdout.call_count) def test_console_output_enabled_should_return_true_when_there_are_console_appenders(self): my_logger = logger.Logger() my_logger.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.INFO, None) my_logger.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.INFO, None) self.assertTrue(my_logger.console_output_enabled(), "Console output should be enabled, appenders = {0}".format(my_logger.appenders)) def test_console_output_enabled_should_return_false_when_there_are_no_console_appenders(self): my_logger = logger.Logger() my_logger.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.INFO, None) self.assertFalse(my_logger.console_output_enabled(), "Console output should not be enabled, appenders = {0}".format(my_logger.appenders)) def test_disable_console_output_should_remove_all_console_appenders(self): my_logger = logger.Logger() my_logger.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.INFO, None) my_logger.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.INFO, None) my_logger.add_appender(logger.AppenderType.STDOUT, logger.LogLevel.INFO, None) my_logger.add_appender(logger.AppenderType.CONSOLE, logger.LogLevel.INFO, None) my_logger.disable_console_output() self.assertTrue( len(my_logger.appenders) == 2 and all(isinstance(a, logger.StdoutAppender) for a in my_logger.appenders), "The console appender was not removed: {0}".format(my_logger.appenders)) Azure-WALinuxAgent-2b21de5/tests/common/test_singletonperthread.py000066400000000000000000000154631462617747000255050ustar00rootroot00000000000000import uuid from multiprocessing import Queue from threading import Thread, currentThread from azurelinuxagent.common.singletonperthread import SingletonPerThread from tests.lib.tools import AgentTestCase, clear_singleton_instances class TestClassToTestSingletonPerThread(SingletonPerThread): """ Since these tests deal with testing in a multithreaded environment, we employ the use of multiprocessing.Queue() to ensure that the data is consistent. 
This test class uses a uuid to identify an object instead of directly using object reference because Queue.get() returns a different object reference than what is put in it even though the object is same (which is verified using uuid in this test class) Eg: obj1 = WireClient("obj1") obj1 <__main__.WireClient object at 0x7f5e78476198> q = Queue() q.put(obj1) test1 = q.get() test1 <__main__.WireClient object at 0x7f5e78430630> test1.endpoint == obj1.endpoint True """ def __init__(self): # Set the name of the object to the current thread name self.name = currentThread().getName() # Unique identifier for a class object self.uuid = str(uuid.uuid4()) class TestSingletonPerThread(AgentTestCase): THREAD_NAME_1 = 'thread-1' THREAD_NAME_2 = 'thread-2' def setUp(self): super(TestSingletonPerThread, self).setUp() # In a multi-threaded environment, exceptions thrown in the child thread will not be propagated to the parent # thread. In order to achieve that, adding all exceptions to a Queue and then checking that in parent thread. self.errors = Queue() clear_singleton_instances(TestClassToTestSingletonPerThread) def _setup_multithread_and_execute(self, func1, args1, func2, args2, t1_name=None, t2_name=None): t1 = Thread(target=func1, args=args1) t2 = Thread(target=func2, args=args2) t1.setName(t1_name if t1_name else self.THREAD_NAME_1) t2.setName(t2_name if t2_name else self.THREAD_NAME_2) t1.start() t2.start() t1.join() t2.join() errs = [] while not self.errors.empty(): errs.append(self.errors.get()) if len(errs) > 0: raise Exception("Errors: %s" % ' , '.join(errs)) @staticmethod def _get_test_class_instance(q, err): try: obj = TestClassToTestSingletonPerThread() q.put(obj) except Exception as e: err.put(str(e)) def _parse_instances_and_return_thread_objects(self, instances, t1_name=None, t2_name=None): obj1, obj2 = instances.get(), instances.get() def check_obj(name): if obj1.name == name: return obj1 elif obj2.name == name: return obj2 else: return None t1_object = check_obj(t1_name if t1_name else self.THREAD_NAME_1) t2_object = check_obj(t2_name if t2_name else self.THREAD_NAME_2) return t1_object, t2_object def test_it_should_have_only_one_instance_for_same_thread(self): obj1 = TestClassToTestSingletonPerThread() obj2 = TestClassToTestSingletonPerThread() self.assertEqual(obj1.uuid, obj2.uuid) def test_it_should_have_multiple_instances_for_multiple_threads(self): instances = Queue() self._setup_multithread_and_execute(func1=self._get_test_class_instance, args1=(instances, self.errors), func2=self._get_test_class_instance, args2=(instances, self.errors)) self.assertEqual(2, instances.qsize()) # Assert that there are 2 objects in the queue obj1, obj2 = instances.get(), instances.get() self.assertNotEqual(obj1.uuid, obj2.uuid) def test_it_should_return_existing_instance_for_new_thread_with_same_name(self): instances = Queue() self._setup_multithread_and_execute(func1=self._get_test_class_instance, args1=(instances, self.errors), func2=self._get_test_class_instance, args2=(instances, self.errors)) t1_obj, t2_obj = self._parse_instances_and_return_thread_objects(instances) new_instances = Queue() # The 2nd call is to get new objects with the same thread name to verify if the objects are same self._setup_multithread_and_execute(func1=self._get_test_class_instance, args1=(new_instances, self.errors), func2=self._get_test_class_instance, args2=(new_instances, self.errors)) new_t1_obj, new_t2_obj = self._parse_instances_and_return_thread_objects(new_instances) self.assertEqual(t1_obj.name, 
new_t1_obj.name) self.assertEqual(t1_obj.uuid, new_t1_obj.uuid) self.assertEqual(t2_obj.name, new_t2_obj.name) self.assertEqual(t2_obj.uuid, new_t2_obj.uuid) def test_singleton_object_should_match_thread_name(self): instances = Queue() t1_name = str(uuid.uuid4()) t2_name = str(uuid.uuid4()) test_class_obj_name = lambda t_name: "%s__%s" % (TestClassToTestSingletonPerThread.__name__, t_name) self._setup_multithread_and_execute(func1=self._get_test_class_instance, args1=(instances, self.errors), func2=self._get_test_class_instance, args2=(instances, self.errors), t1_name=t1_name, t2_name=t2_name) singleton_instances = TestClassToTestSingletonPerThread._instances # pylint: disable=no-member # Assert instance names are consistent with the thread names self.assertIn(test_class_obj_name(t1_name), singleton_instances) self.assertIn(test_class_obj_name(t2_name), singleton_instances) # Assert that the objects match their respective threads # This function matches objects with their thread names and returns the respective object or None if not found t1_obj, t2_obj = self._parse_instances_and_return_thread_objects(instances, t1_name, t2_name) # Ensure that objects for both the threads were found self.assertIsNotNone(t1_obj) self.assertIsNotNone(t2_obj) # Ensure that the objects match with their respective thread objects self.assertEqual(singleton_instances[test_class_obj_name(t1_name)].uuid, t1_obj.uuid) self.assertEqual(singleton_instances[test_class_obj_name(t2_name)].uuid, t2_obj.uuid) Azure-WALinuxAgent-2b21de5/tests/common/test_telemetryevent.py000066400000000000000000000055101462617747000246500ustar00rootroot00000000000000# Copyright 2019 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # from azurelinuxagent.common.telemetryevent import TelemetryEvent, TelemetryEventParam, GuestAgentExtensionEventsSchema, \ CommonTelemetryEventSchema from tests.lib.tools import AgentTestCase def get_test_event(name="DummyExtension", op="Unknown", is_success=True, duration=0, version="foo", evt_type="", is_internal=False, message="DummyMessage", eventId=1): event = TelemetryEvent(eventId, "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX") event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Name, name)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Version, str(version))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.IsInternal, is_internal)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Operation, op)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.OperationSuccess, is_success)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Message, message)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Duration, duration)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.ExtensionType, evt_type)) return event class TestTelemetryEvent(AgentTestCase): def test_contains_works_for_TelemetryEvent(self): test_event = get_test_event(message="Dummy Event") self.assertTrue(GuestAgentExtensionEventsSchema.Name in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.Version in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.IsInternal in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.Operation in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.OperationSuccess in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.Message in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.Duration in test_event) self.assertTrue(GuestAgentExtensionEventsSchema.ExtensionType in test_event) self.assertFalse(CommonTelemetryEventSchema.GAVersion in test_event) self.assertFalse(CommonTelemetryEventSchema.ContainerId in test_event)Azure-WALinuxAgent-2b21de5/tests/common/test_version.py000066400000000000000000000247661462617747000232770ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # from __future__ import print_function import os import textwrap import mock import azurelinuxagent.common.conf as conf from azurelinuxagent.common.future import ustr from azurelinuxagent.common.event import EVENTS_DIRECTORY from azurelinuxagent.common.version import set_current_agent, \ AGENT_LONG_VERSION, AGENT_VERSION, AGENT_NAME, AGENT_NAME_PATTERN, \ get_f5_platform, get_distro, get_lis_version, PY_VERSION_MAJOR, \ PY_VERSION_MINOR, get_daemon_version, set_daemon_version, __DAEMON_VERSION_ENV_VARIABLE as DAEMON_VERSION_ENV_VARIABLE from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from tests.lib.tools import AgentTestCase, open_patch, patch def freebsd_system(): return ["FreeBSD"] def freebsd_system_release(x, y, z): # pylint: disable=unused-argument return "10.0" def openbsd_system(): return ["OpenBSD"] def openbsd_system_release(x, y, z): # pylint: disable=unused-argument return "20.0" def default_system(): return [""] def default_system_no_linux_distro(): return '', '', '' def default_system_exception(): raise Exception def is_platform_dist_supported(): # platform.dist() and platform.linux_distribution() are deprecated from Python 3.8+ if PY_VERSION_MAJOR == 3 and PY_VERSION_MINOR >= 8: return False return True class TestAgentVersion(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) @mock.patch('platform.system', side_effect=freebsd_system) @mock.patch('re.sub', side_effect=freebsd_system_release) def test_distro_is_correct_format_when_freebsd(self, platform_system_name, mock_variable): # pylint: disable=unused-argument osinfo = get_distro() freebsd_list = ['freebsd', "10.0", '', 'freebsd'] self.assertListEqual(freebsd_list, osinfo) @mock.patch('platform.system', side_effect=openbsd_system) @mock.patch('re.sub', side_effect=openbsd_system_release) def test_distro_is_correct_format_when_openbsd(self, platform_system_name, mock_variable): # pylint: disable=unused-argument osinfo = get_distro() openbsd_list = ['openbsd', "20.0", '', 'openbsd'] self.assertListEqual(openbsd_list, osinfo) @mock.patch('platform.system', side_effect=default_system) def test_distro_is_correct_format_when_default_case(self, *args): # pylint: disable=unused-argument default_list = ['', '', '', ''] unknown_list = ['unknown', 'FFFF', '', ''] if is_platform_dist_supported(): with patch('platform.dist', side_effect=default_system_no_linux_distro): osinfo = get_distro() self.assertListEqual(default_list, osinfo) else: # platform.dist() is deprecated in Python 3.7+ and would throw, resulting in an unknown distro osinfo = get_distro() self.assertListEqual(unknown_list, osinfo) @mock.patch('platform.system', side_effect=default_system) def test_distro_is_correct_for_exception_case(self, *args): # pylint: disable=unused-argument default_list = ['unknown', 'FFFF', '', ''] if is_platform_dist_supported(): with patch('platform.dist', side_effect=default_system_exception): osinfo = get_distro() else: # platform.dist() is deprecated in Python 3.7+ so we can't patch it, but it would throw # as well, resulting in the same unknown distro osinfo = get_distro() self.assertListEqual(default_list, osinfo) def test_get_lis_version_should_return_a_string(self): """ On a Hyper-V guest with the LIS drivers installed as a module, this function should return a string with the version, like '4.3.5'. Anywhere else it should return 'Absent', and possibly 'Failed' if an exception was raised, so we check that it returns a string. 
""" lis_version = get_lis_version() self.assertIsInstance(lis_version, ustr) def test_get_daemon_version_should_return_the_version_that_was_previously_set(self): set_daemon_version("1.2.3.4") try: self.assertEqual( FlexibleVersion("1.2.3.4"), get_daemon_version(), "The daemon version should be 1.2.3.4. Environment={0}".format(os.environ) ) finally: os.environ.pop(DAEMON_VERSION_ENV_VARIABLE) def test_get_daemon_version_should_return_zero_when_the_version_has_not_been_set(self): self.assertEqual( FlexibleVersion("0.0.0.0"), get_daemon_version(), "The daemon version should not be defined. Environment={0}".format(os.environ) ) class TestCurrentAgentName(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) @patch("os.getcwd", return_value="/default/install/directory") def test_extract_name_finds_installed(self, mock_cwd): # pylint: disable=unused-argument current_agent, current_version = set_current_agent() self.assertEqual(AGENT_LONG_VERSION, current_agent) self.assertEqual(AGENT_VERSION, str(current_version)) @patch("os.getcwd", return_value="/") def test_extract_name_root_finds_installed(self, mock_cwd): # pylint: disable=unused-argument current_agent, current_version = set_current_agent() self.assertEqual(AGENT_LONG_VERSION, current_agent) self.assertEqual(AGENT_VERSION, str(current_version)) @patch("os.getcwd") def test_extract_name_in_path_finds_installed(self, mock_cwd): path = os.path.join(conf.get_lib_dir(), EVENTS_DIRECTORY) mock_cwd.return_value = path current_agent, current_version = set_current_agent() self.assertEqual(AGENT_LONG_VERSION, current_agent) self.assertEqual(AGENT_VERSION, str(current_version)) @patch("os.getcwd") def test_extract_name_finds_latest_agent(self, mock_cwd): path = os.path.join(conf.get_lib_dir(), "{0}-{1}".format( AGENT_NAME, "1.2.3")) mock_cwd.return_value = path agent = os.path.basename(path) version = AGENT_NAME_PATTERN.match(agent).group(1) current_agent, current_version = set_current_agent() self.assertEqual(agent, current_agent) self.assertEqual(version, str(current_version)) class TestGetF5Platforms(AgentTestCase): def test_get_f5_platform_bigip_12_1_1(self): version_file = textwrap.dedent(""" Product: BIG-IP Version: 12.1.1 Build: 0.0.184 Sequence: 12.1.1.0.0.184.0 BaseBuild: 0.0.184 Edition: Final Date: Thu Aug 11 17:09:01 PDT 2016 Built: 160811170901 Changelist: 1874858 JobID: 705993""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'bigip') self.assertTrue(platform[1] == '12.1.1') self.assertTrue(platform[2] == 'bigip') self.assertTrue(platform[3] == 'BIG-IP') def test_get_f5_platform_bigip_12_1_0_hf1(self): version_file = textwrap.dedent(""" Product: BIG-IP Version: 12.1.0 Build: 1.0.1447 Sequence: 12.1.0.1.0.1447.0 BaseBuild: 0.0.1434 Edition: Hotfix HF1 Date: Wed Jun 8 13:41:59 PDT 2016 Built: 160608134159 Changelist: 1773831 JobID: 673467""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'bigip') self.assertTrue(platform[1] == '12.1.0') self.assertTrue(platform[2] == 'bigip') self.assertTrue(platform[3] == 'BIG-IP') def test_get_f5_platform_bigip_12_0_0(self): version_file = textwrap.dedent(""" Product: BIG-IP Version: 12.0.0 Build: 0.0.606 Sequence: 12.0.0.0.0.606.0 BaseBuild: 0.0.606 Edition: Final Date: Fri Aug 21 13:29:22 PDT 2015 Built: 150821132922 Changelist: 1486072 JobID: 536212""") mocked_open = 
mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'bigip') self.assertTrue(platform[1] == '12.0.0') self.assertTrue(platform[2] == 'bigip') self.assertTrue(platform[3] == 'BIG-IP') def test_get_f5_platform_iworkflow_2_0_1(self): version_file = textwrap.dedent(""" Product: iWorkflow Version: 2.0.1 Build: 0.0.9842 Sequence: 2.0.1.0.0.9842.0 BaseBuild: 0.0.9842 Edition: Final Date: Sat Oct 1 22:52:08 PDT 2016 Built: 161001225208 Changelist: 1924048 JobID: 734712""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'iworkflow') self.assertTrue(platform[1] == '2.0.1') self.assertTrue(platform[2] == 'iworkflow') self.assertTrue(platform[3] == 'iWorkflow') def test_get_f5_platform_bigiq_5_1_0(self): version_file = textwrap.dedent(""" Product: BIG-IQ Version: 5.1.0 Build: 0.0.631 Sequence: 5.1.0.0.0.631.0 BaseBuild: 0.0.631 Edition: Final Date: Thu Sep 15 19:55:43 PDT 2016 Built: 160915195543 Changelist: 1907534 JobID: 726344""") mocked_open = mock.mock_open(read_data=version_file) with patch(open_patch(), mocked_open): platform = get_f5_platform() self.assertTrue(platform[0] == 'bigiq') self.assertTrue(platform[1] == '5.1.0') self.assertTrue(platform[2] == 'bigiq') self.assertTrue(platform[3] == 'BIG-IQ') Azure-WALinuxAgent-2b21de5/tests/common/utils/000077500000000000000000000000001462617747000213225ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/common/utils/__init__.py000066400000000000000000000011651462617747000234360ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/common/utils/test_archive.py000066400000000000000000000200701462617747000243530ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. 
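# These tests cover the goal-state history utilities: StateArchiver.archive() zips all but the most recent
# goal state under <lib_dir>/history, and GoalStateHistory purges the oldest entries once more than
# _MAX_ARCHIVED_STATES accumulate (see the individual tests below for the exact expectations).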
import os import tempfile import zipfile from datetime import datetime, timedelta import azurelinuxagent.common.logger as logger from azurelinuxagent.common import conf from azurelinuxagent.common.utils import fileutil, timeutil from azurelinuxagent.common.utils.archive import GoalStateHistory, StateArchiver, _MAX_ARCHIVED_STATES, ARCHIVE_DIRECTORY_NAME from tests.lib.tools import AgentTestCase, patch debug = False if os.environ.get('DEBUG') == '1': debug = True # Enable verbose logger to stdout if debug: logger.add_logger_appender(logger.AppenderType.STDOUT, logger.LogLevel.VERBOSE) class TestArchive(AgentTestCase): def setUp(self): super(TestArchive, self).setUp() prefix = "{0}_".format(self.__class__.__name__) self.tmp_dir = tempfile.mkdtemp(prefix=prefix) def _write_file(self, filename, contents=None): full_name = os.path.join(conf.get_lib_dir(), filename) fileutil.mkdir(os.path.dirname(full_name)) with open(full_name, 'w') as file_handler: data = contents if contents is not None else filename file_handler.write(data) return full_name @property def history_dir(self): return os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME) @staticmethod def _parse_archive_name(name): # Name can be a directory or a zip # '0000-00-00T00:00:00.000000_incarnation_0' # '0000-00-00T00:00:00.000000_incarnation_0.zip' timestamp_str, incarnation_ext = name.split("_incarnation_") incarnation_no_ext = os.path.splitext(incarnation_ext)[0] return timestamp_str, incarnation_no_ext def test_archive_should_zip_all_but_the_latest_goal_state_in_the_history_folder(self): test_files = [ 'GoalState.xml', 'Prod.manifest.xml', 'Prod.agentsManifest', 'Microsoft.Azure.Extensions.CustomScript.xml' ] # these directories match the pattern that StateArchiver.archive() searches for test_directories = [] for i in range(0, 3): timestamp = (datetime.utcnow() + timedelta(minutes=i)).isoformat() directory = os.path.join(self.history_dir, "{0}_incarnation_{1}".format(timestamp, i)) for current_file in test_files: self._write_file(os.path.join(directory, current_file)) test_directories.append(directory) test_subject = StateArchiver(conf.get_lib_dir()) # NOTE: StateArchiver sorts the state directories by creation time, but the test files are created too fast and the # time resolution is too coarse, so instead we mock getctime to simply return the path of the file with patch("azurelinuxagent.common.utils.archive.os.path.getctime", side_effect=lambda path: path): test_subject.archive() for directory in test_directories[0:2]: zip_file = directory + ".zip" self.assertTrue(os.path.exists(zip_file), "{0} was not archived (could not find {1})".format(directory, zip_file)) missing_file = self.assert_zip_contains(zip_file, test_files) self.assertEqual(None, missing_file, missing_file) self.assertFalse(os.path.exists(directory), "{0} was not removed after being archived".format(directory)) self.assertTrue(os.path.exists(test_directories[2]), "{0}, the latest goal state, should not have been removed".format(test_directories[2])) def test_goal_state_history_init_should_purge_old_items(self): """ GoalStateHistory.__init__ should _purge all but the newest _MAX_ARCHIVED_STATES files or directories. The oldest timestamps are purged first. This test case creates a mixture of archive files and directories. It creates 6 more values than _MAX_ARCHIVED_STATES to ensure that the 6 oldest archives are cleaned up. It asserts that the files and directories are properly deleted from the disk. 
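Note that the history directory holds a mixture of zipped and unzipped goal states, and both kinds count toward the _MAX_ARCHIVED_STATES limit.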
""" count = 6 total = _MAX_ARCHIVED_STATES + count start = datetime.now() timestamps = [] for i in range(0, total): timestamp = start + timedelta(seconds=i) timestamps.append(timestamp) if i % 2 == 0: filename = os.path.join('history', "{0}_0".format(timestamp.isoformat()), 'Prod.manifest.xml') else: filename = os.path.join('history', "{0}_0.zip".format(timestamp.isoformat())) self._write_file(filename) self.assertEqual(total, len(os.listdir(self.history_dir))) # NOTE: The purge method sorts the items by creation time, but the test files are created too fast and the # time resolution is too coarse, so instead we mock getctime to simply return the path of the file with patch("azurelinuxagent.common.utils.archive.os.path.getctime", side_effect=lambda path: path): GoalStateHistory(datetime.utcnow(), 'test') archived_entries = os.listdir(self.history_dir) self.assertEqual(_MAX_ARCHIVED_STATES, len(archived_entries)) archived_entries.sort() for i in range(0, _MAX_ARCHIVED_STATES): timestamp = timestamps[i + count].isoformat() if i % 2 == 0: filename = "{0}_0".format(timestamp) else: filename = "{0}_0.zip".format(timestamp) self.assertTrue(filename in archived_entries, "'{0}' is not in the list of unpurged entires".format(filename)) def test_purge_legacy_goal_state_history(self): with patch("azurelinuxagent.common.conf.get_lib_dir", return_value=self.tmp_dir): # SharedConfig.xml is used by other components (Azsec and Singularity/HPC Infiniband); verify that we do not delete it shared_config = os.path.join(self.tmp_dir, 'SharedConfig.xml') legacy_files = [ 'GoalState.2.xml', 'VmSettings.2.json', 'Prod.2.manifest.xml', 'ExtensionsConfig.2.xml', 'Microsoft.Azure.Extensions.CustomScript.1.xml', 'HostingEnvironmentConfig.xml', 'RemoteAccess.xml', 'waagent_status.1.json' ] legacy_files = [os.path.join(self.tmp_dir, f) for f in legacy_files] self._write_file(shared_config) for f in legacy_files: self._write_file(f) StateArchiver.purge_legacy_goal_state_history() self.assertTrue(os.path.exists(shared_config), "{0} should not have been removed".format(shared_config)) for f in legacy_files: self.assertFalse(os.path.exists(f), "Legacy file {0} was not removed".format(f)) @staticmethod def parse_isoformat(timestamp_str): return datetime.strptime(timestamp_str, '%Y-%m-%dT%H:%M:%S.%f') @staticmethod def assert_is_iso8601(timestamp_str): try: TestArchive.parse_isoformat(timestamp_str) except: raise AssertionError("the value '{0}' is not an ISO8601 formatted timestamp".format(timestamp_str)) def assert_datetime_close_to(self, time1, time2, within): if time1 <= time2: diff = time2 - time1 else: diff = time1 - time2 secs = timeutil.total_seconds(within - diff) if secs < 0: self.fail("the timestamps are outside of the tolerance of by {0} seconds".format(secs)) @staticmethod def assert_zip_contains(zip_filename, files): ziph = None try: # contextmanager for zipfile.ZipFile doesn't exist for py2.6, manually closing it ziph = zipfile.ZipFile(zip_filename, 'r') zip_files = [x.filename for x in ziph.filelist] for current_file in files: if current_file not in zip_files: return "'{0}' was not found in {1}".format(current_file, zip_filename) return None finally: if ziph is not None: ziph.close()Azure-WALinuxAgent-2b21de5/tests/common/utils/test_crypt_util.py000066400000000000000000000074421462617747000251400ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import unittest import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import CryptError from azurelinuxagent.common.utils.cryptutil import CryptUtil from tests.lib.tools import AgentTestCase, data_dir, load_data, is_python_version_26, skip_if_predicate_true class TestCryptoUtilOperations(AgentTestCase): def test_decrypt_encrypted_text(self): encrypted_string = load_data("wire/encrypted.enc") prv_key = os.path.join(self.tmp_dir, "TransportPrivate.pem") with open(prv_key, 'w+') as c: c.write(load_data("wire/sample.pem")) secret = ']aPPEv}uNg1FPnl?' crypto = CryptUtil(conf.get_openssl_cmd()) decrypted_string = crypto.decrypt_secret(encrypted_string, prv_key) self.assertEqual(secret, decrypted_string, "decrypted string does not match expected") def test_decrypt_encrypted_text_missing_private_key(self): encrypted_string = load_data("wire/encrypted.enc") prv_key = os.path.join(self.tmp_dir, "TransportPrivate.pem") crypto = CryptUtil(conf.get_openssl_cmd()) self.assertRaises(CryptError, crypto.decrypt_secret, encrypted_string, "abc" + prv_key) @skip_if_predicate_true(is_python_version_26, "Disabled on Python 2.6") def test_decrypt_encrypted_text_wrong_private_key(self): encrypted_string = load_data("wire/encrypted.enc") prv_key = os.path.join(self.tmp_dir, "wrong.pem") with open(prv_key, 'w+') as c: c.write(load_data("wire/trans_prv")) crypto = CryptUtil(conf.get_openssl_cmd()) self.assertRaises(CryptError, crypto.decrypt_secret, encrypted_string, prv_key) def test_decrypt_encrypted_text_text_not_encrypted(self): encrypted_string = "abc@123" prv_key = os.path.join(self.tmp_dir, "TransportPrivate.pem") with open(prv_key, 'w+') as c: c.write(load_data("wire/sample.pem")) crypto = CryptUtil(conf.get_openssl_cmd()) self.assertRaises(CryptError, crypto.decrypt_secret, encrypted_string, prv_key) def test_get_pubkey_from_crt(self): crypto = CryptUtil(conf.get_openssl_cmd()) prv_key = os.path.join(data_dir, "wire", "trans_prv") expected_pub_key = os.path.join(data_dir, "wire", "trans_pub") with open(expected_pub_key) as fh: self.assertEqual(fh.read(), crypto.get_pubkey_from_prv(prv_key)) def test_get_pubkey_from_prv(self): crypto = CryptUtil(conf.get_openssl_cmd()) def do_test(prv_key, expected_pub_key): prv_key = os.path.join(data_dir, "wire", prv_key) expected_pub_key = os.path.join(data_dir, "wire", expected_pub_key) with open(expected_pub_key) as fh: self.assertEqual(fh.read(), crypto.get_pubkey_from_prv(prv_key)) do_test("rsa-key.pem", "rsa-key.pub.pem") do_test("ec-key.pem", "ec-key.pub.pem") def test_get_pubkey_from_crt_invalid_file(self): crypto = CryptUtil(conf.get_openssl_cmd()) prv_key = os.path.join(data_dir, "wire", "trans_prv_does_not_exist") self.assertRaises(IOError, crypto.get_pubkey_from_prv, prv_key) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/utils/test_extension_process_util.py000066400000000000000000000367161462617747000275570ustar00rootroot00000000000000# Copyright Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may 
not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import shutil import subprocess import tempfile from azurelinuxagent.ga.cgroup import CpuCgroup from azurelinuxagent.common.exception import ExtensionError, ExtensionErrorCodes from azurelinuxagent.common.future import ustr from azurelinuxagent.ga.extensionprocessutil import format_stdout_stderr, read_output, \ wait_for_process_completion_or_timeout, handle_process_completion from tests.lib.tools import AgentTestCase, patch, data_dir class TestProcessUtils(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.tmp_dir = tempfile.mkdtemp() self.stdout = tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") self.stderr = tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") self.stdout.write("The quick brown fox jumps over the lazy dog.".encode("utf-8")) self.stderr.write("The five boxing wizards jump quickly.".encode("utf-8")) def tearDown(self): self.stderr.close() self.stdout.close() super(TestProcessUtils, self).tearDown() def test_wait_for_process_completion_or_timeout_should_terminate_cleanly(self): process = subprocess.Popen( "date", shell=True, cwd=self.tmp_dir, env={}, stdout=subprocess.PIPE, stderr=subprocess.PIPE) timed_out, ret, _ = wait_for_process_completion_or_timeout(process=process, timeout=5, cpu_cgroup=None) self.assertEqual(timed_out, False) self.assertEqual(ret, 0) def test_wait_for_process_completion_or_timeout_should_kill_process_on_timeout(self): timeout = 5 process = subprocess.Popen( # pylint: disable=subprocess-popen-preexec-fn "sleep 1m", shell=True, cwd=self.tmp_dir, env={}, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=os.setsid) # We don't actually mock the kill, just wrap it so we can assert its call count with patch('azurelinuxagent.ga.extensionprocessutil.os.killpg', wraps=os.killpg) as patch_kill: with patch('time.sleep') as mock_sleep: timed_out, ret, _ = wait_for_process_completion_or_timeout(process=process, timeout=timeout, cpu_cgroup=None) # We're mocking sleep to avoid prolonging the test execution time, but we still want to make sure # we're "waiting" the correct amount of time before killing the process self.assertEqual(mock_sleep.call_count, timeout) self.assertEqual(patch_kill.call_count, 1) self.assertEqual(timed_out, True) self.assertEqual(ret, None) def test_handle_process_completion_should_return_nonzero_when_process_fails(self): process = subprocess.Popen( "ls folder_does_not_exist", shell=True, cwd=self.tmp_dir, env={}, stdout=subprocess.PIPE, stderr=subprocess.PIPE) timed_out, ret, _ = wait_for_process_completion_or_timeout(process=process, timeout=5, cpu_cgroup=None) self.assertEqual(timed_out, False) self.assertEqual(ret, 2) def test_handle_process_completion_should_return_process_output(self): command = "echo 'dummy stdout' && 1>&2 echo 'dummy stderr'" with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout: with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr: process = subprocess.Popen(command, # pylint: disable=subprocess-popen-preexec-fn shell=True, cwd=self.tmp_dir, env={}, 
stdout=stdout, stderr=stderr, preexec_fn=os.setsid) process_output = handle_process_completion(process=process, command=command, timeout=5, stdout=stdout, stderr=stderr, error_code=42) expected_output = "[stdout]\ndummy stdout\n\n\n[stderr]\ndummy stderr\n" self.assertEqual(process_output, expected_output) def test_handle_process_completion_should_raise_on_timeout(self): command = "sleep 1m" timeout = 20 with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout: with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr: with patch('time.sleep') as mock_sleep: with self.assertRaises(ExtensionError) as context_manager: process = subprocess.Popen(command, # pylint: disable=subprocess-popen-preexec-fn shell=True, cwd=self.tmp_dir, env={}, stdout=stdout, stderr=stderr, preexec_fn=os.setsid) handle_process_completion(process=process, command=command, timeout=timeout, stdout=stdout, stderr=stderr, error_code=42) # We're mocking sleep to avoid prolonging the test execution time, but we still want to make sure # we're "waiting" the correct amount of time before killing the process and raising an exception # Due to an extra call to sleep at some point in the call stack which only happens sometimes, # we are relaxing this assertion to allow +/- 2 sleep calls. self.assertTrue(abs(mock_sleep.call_count - timeout) <= 2) self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginHandlerScriptTimedout) self.assertIn("Timeout({0})".format(timeout), ustr(context_manager.exception)) self.assertNotIn("CPUThrottledTime({0}secs)".format(timeout), ustr(context_manager.exception)) #Extension not started in cpuCgroup def test_handle_process_completion_should_log_throttled_time_on_timeout(self): command = "sleep 1m" timeout = 20 with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout: with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr: with patch('time.sleep') as mock_sleep: with self.assertRaises(ExtensionError) as context_manager: test_file = os.path.join(self.tmp_dir, "cpu.stat") shutil.copyfile(os.path.join(data_dir, "cgroups", "cpu.stat_t0"), test_file) # throttled_time = 50 cgroup = CpuCgroup("test", self.tmp_dir) process = subprocess.Popen(command, # pylint: disable=subprocess-popen-preexec-fn shell=True, cwd=self.tmp_dir, env={}, stdout=stdout, stderr=stderr, preexec_fn=os.setsid) handle_process_completion(process=process, command=command, timeout=timeout, stdout=stdout, stderr=stderr, error_code=42, cpu_cgroup=cgroup) # We're mocking sleep to avoid prolonging the test execution time, but we still want to make sure # we're "waiting" the correct amount of time before killing the process and raising an exception # Due to an extra call to sleep at some point in the call stack which only happens sometimes, # we are relaxing this assertion to allow +/- 2 sleep calls. 
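# A sketch of the expected value, assuming (per the cpu.stat_t0 comment above) that the fixture reports
# 50ns of throttled time: 50 / 1e9 = 5e-08 seconds should appear in the CPUThrottledTime message below.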
self.assertTrue(abs(mock_sleep.call_count - timeout) <= 2) self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginHandlerScriptTimedout) self.assertIn("Timeout({0})".format(timeout), ustr(context_manager.exception)) throttled_time = float(50 / 1E9) self.assertIn("CPUThrottledTime({0}secs)".format(throttled_time), ustr(context_manager.exception)) def test_handle_process_completion_should_raise_on_nonzero_exit_code(self): command = "ls folder_does_not_exist" error_code = 42 with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout: with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr: with self.assertRaises(ExtensionError) as context_manager: process = subprocess.Popen(command, # pylint: disable=subprocess-popen-preexec-fn shell=True, cwd=self.tmp_dir, env={}, stdout=stdout, stderr=stderr, preexec_fn=os.setsid) handle_process_completion(process=process, command=command, timeout=4, stdout=stdout, stderr=stderr, error_code=error_code) self.assertEqual(context_manager.exception.code, error_code) self.assertIn("Non-zero exit code:", ustr(context_manager.exception)) def test_read_output_should_return_no_content(self): with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 0): expected = "" actual = read_output(self.stdout, self.stderr) self.assertEqual(expected, actual) def test_read_output_should_truncate_the_content(self): with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 50): expected = "[stdout]\nr the lazy dog.\n\n" \ "[stderr]\ns jump quickly." actual = read_output(self.stdout, self.stderr) self.assertEqual(expected, actual) def test_read_output_should_not_truncate_the_content(self): with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 90): expected = "[stdout]\nThe quick brown fox jumps over the lazy dog.\n\n" \ "[stderr]\nThe five boxing wizards jump quickly." actual = read_output(self.stdout, self.stderr) self.assertEqual(expected, actual) def test_format_stdout_stderr00(self): """ If stdout and stderr are both smaller than the max length, the full representation should be displayed. """ stdout = "The quick brown fox jumps over the lazy dog." stderr = "The five boxing wizards jump quickly." expected = "[stdout]\n{0}\n\n[stderr]\n{1}".format(stdout, stderr) with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 1000): actual = format_stdout_stderr(stdout, stderr) self.assertEqual(expected, actual) def test_format_stdout_stderr01(self): """ If stdout and stderr both exceed the max length, then both stdout and stderr are trimmed equally. """ stdout = "The quick brown fox jumps over the lazy dog." stderr = "The five boxing wizards jump quickly." # noinspection SpellCheckingInspection expected = '[stdout]\ns over the lazy dog.\n\n[stderr]\nizards jump quickly.' with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 60): actual = format_stdout_stderr(stdout, stderr) self.assertEqual(expected, actual) self.assertEqual(60, len(actual)) def test_format_stdout_stderr02(self): """ If stderr is much larger than stdout, stderr is allowed to borrow space from stdout's quota. """ stdout = "empty" stderr = "The five boxing wizards jump quickly." expected = '[stdout]\nempty\n\n[stderr]\ns jump quickly.' 
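# Quota arithmetic for the 40-char cap used below: the fixed markers ('[stdout]\n', '\n\n', '[stderr]\n')
# take 20 chars, stdout needs only 5 ('empty'), so stderr borrows the remaining 15 chars and keeps the
# tail of its message ('s jump quickly.'): 20 + 5 + 15 = 40.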
with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 40): actual = format_stdout_stderr(stdout, stderr) self.assertEqual(expected, actual) self.assertEqual(40, len(actual)) def test_format_stdout_stderr03(self): """ If stdout is much larger than stderr, stdout is allowed to borrow space from stderr's quota. """ stdout = "The quick brown fox jumps over the lazy dog." stderr = "empty" expected = '[stdout]\nr the lazy dog.\n\n[stderr]\nempty' with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 40): actual = format_stdout_stderr(stdout, stderr) self.assertEqual(expected, actual) self.assertEqual(40, len(actual)) def test_format_stdout_stderr04(self): """ If the max length is not sufficient to even hold the stdout and stderr markers an empty string is returned. """ stdout = "The quick brown fox jumps over the lazy dog." stderr = "The five boxing wizards jump quickly." expected = '' with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 4): actual = format_stdout_stderr(stdout, stderr) self.assertEqual(expected, actual) self.assertEqual(0, len(actual)) def test_format_stdout_stderr05(self): """ If stdout and stderr are empty, an empty template is returned. """ expected = '[stdout]\n\n\n[stderr]\n' with patch('azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN', 1000): actual = format_stdout_stderr('', '') self.assertEqual(expected, actual) Azure-WALinuxAgent-2b21de5/tests/common/utils/test_file_util.py000066400000000000000000000255001462617747000247110ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import errno import glob import os import random import shutil import string import tempfile import unittest import uuid import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.common.future import ustr from tests.lib.tools import AgentTestCase, patch class TestFileOperations(AgentTestCase): def test_read_write_file(self): test_file=os.path.join(self.tmp_dir, self.test_file) content = ustr(uuid.uuid4()) fileutil.write_file(test_file, content) content_read = fileutil.read_file(test_file) self.assertEqual(content, content_read) os.remove(test_file) def test_write_file_content_is_None(self): """ write_file throws when content is None. No file is created. 
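The five test_format_stdout_stderr cases above collectively pin down a quota-borrowing truncation scheme: each stream gets half of the available budget, a short stream donates its unused quota to the longer one, and the tail of each stream is kept. The sketch below reproduces that observable behavior for illustration only; it is not the agent's implementation (which lives in azurelinuxagent.ga.extensionprocessutil), and the helper name is invented.

def format_output_sketch(stdout, stderr, max_len=1000):
    # Illustrative sketch only -- mimics the behavior the tests above assert.
    template = "[stdout]\n{0}\n\n[stderr]\n{1}"
    markers_len = len(template.format('', ''))
    if max_len < markers_len:
        return ""  # not enough room even for the markers (test case 04)
    budget = max_len - markers_len
    if len(stdout) + len(stderr) <= budget:
        return template.format(stdout, stderr)  # everything fits (test cases 00 and 05)
    half = budget // 2
    # Each stream may borrow the other's unused quota; keep the *tail* of each stream.
    stdout_quota = max(half, budget - len(stderr))
    stderr_quota = budget - min(len(stdout), stdout_quota)
    return template.format(stdout[-stdout_quota:] if stdout_quota else '',
                           stderr[-stderr_quota:] if stderr_quota else '')

Checking this sketch against the cases above: with max_len=40, stdout="empty" and the 37-character stderr, it yields '[stdout]\nempty\n\n[stderr]\ns jump quickly.' (exactly 40 characters), matching test case 02.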
""" try: test_file=os.path.join(self.tmp_dir, self.test_file) fileutil.write_file(test_file, None) self.fail("expected write_file to throw an exception") except: # pylint: disable=bare-except self.assertEqual(False, os.path.exists(test_file)) def test_rw_utf8_file(self): test_file=os.path.join(self.tmp_dir, self.test_file) content = u"\u6211" fileutil.write_file(test_file, content, encoding="utf-8") content_read = fileutil.read_file(test_file) self.assertEqual(content, content_read) os.remove(test_file) def test_remove_bom(self): test_file=os.path.join(self.tmp_dir, self.test_file) data = b'\xef\xbb\xbfhehe' fileutil.write_file(test_file, data, asbin=True) data = fileutil.read_file(test_file, remove_bom=True) self.assertNotEqual(0xbb, ord(data[0])) def test_append_file(self): test_file=os.path.join(self.tmp_dir, self.test_file) content = ustr(uuid.uuid4()) fileutil.append_file(test_file, content) content_read = fileutil.read_file(test_file) self.assertEqual(content, content_read) os.remove(test_file) def test_findre_in_file(self): fp = tempfile.mktemp() with open(fp, 'w') as f: f.write( ''' First line Second line Third line with more words ''' ) self.assertNotEqual( None, fileutil.findre_in_file(fp, ".*rst line$")) self.assertNotEqual( None, fileutil.findre_in_file(fp, ".*ond line$")) self.assertNotEqual( None, fileutil.findre_in_file(fp, ".*with more.*")) self.assertNotEqual( None, fileutil.findre_in_file(fp, "^Third.*")) self.assertEqual( None, fileutil.findre_in_file(fp, "^Do not match.*")) def test_findstr_in_file(self): fp = tempfile.mktemp() with open(fp, 'w') as f: f.write( ''' First line Second line Third line with more words ''' ) self.assertTrue(fileutil.findstr_in_file(fp, "First line")) self.assertTrue(fileutil.findstr_in_file(fp, "Second line")) self.assertTrue( fileutil.findstr_in_file(fp, "Third line with more words")) self.assertFalse(fileutil.findstr_in_file(fp, "Not a line")) def test_get_last_path_element(self): filepath = '/tmp/abc.def' filename = fileutil.base_name(filepath) self.assertEqual('abc.def', filename) filepath = '/tmp/abc' filename = fileutil.base_name(filepath) self.assertEqual('abc', filename) def test_remove_files(self): random_word = lambda : ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5)) #Create 10 test files test_file = os.path.join(self.tmp_dir, self.test_file) test_file2 = os.path.join(self.tmp_dir, 'another_file') test_files = [test_file + random_word() for _ in range(5)] + \ [test_file2 + random_word() for _ in range(5)] for file in test_files: # pylint: disable=redefined-builtin open(file, 'a').close() #Remove files using fileutil.rm_files test_file_pattern = test_file + '*' test_file_pattern2 = test_file2 + '*' fileutil.rm_files(test_file_pattern, test_file_pattern2) self.assertEqual(0, len(glob.glob(os.path.join(self.tmp_dir, test_file_pattern)))) self.assertEqual(0, len(glob.glob(os.path.join(self.tmp_dir, test_file_pattern2)))) def test_remove_dirs(self): dirs = [] for n in range(0,5): dirs.append(tempfile.mkdtemp()) for d in dirs: for n in range(0, random.choice(range(0,10))): fileutil.write_file(os.path.join(d, "test"+str(n)), "content") for n in range(0, random.choice(range(0,10))): dd = os.path.join(d, "testd"+str(n)) os.mkdir(dd) for nn in range(0, random.choice(range(0,10))): os.symlink(dd, os.path.join(dd, "sym"+str(nn))) for n in range(0, random.choice(range(0,10))): os.symlink(d, os.path.join(d, "sym"+str(n))) fileutil.rm_dirs(*dirs) for d in dirs: self.assertEqual(len(os.listdir(d)), 0) def 
test_get_all_files(self): random_word = lambda: ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5)) # Create 10 test files at the root dir and 10 other in the sub dir test_file = os.path.join(self.tmp_dir, self.test_file) test_file2 = os.path.join(self.tmp_dir, 'another_file') expected_files = [test_file + random_word() for _ in range(5)] + \ [test_file2 + random_word() for _ in range(5)] test_subdir = os.path.join(self.tmp_dir, 'test_dir') os.mkdir(test_subdir) test_file_in_subdir = os.path.join(test_subdir, self.test_file) test_file_in_subdir2 = os.path.join(test_subdir, 'another_file') expected_files.extend([test_file_in_subdir + random_word() for _ in range(5)] + \ [test_file_in_subdir2 + random_word() for _ in range(5)]) for file in expected_files: # pylint: disable=redefined-builtin open(file, 'a').close() # Get All files using fileutil.get_all_files actual_files = fileutil.get_all_files(self.tmp_dir) self.assertEqual(set(expected_files), set(actual_files)) @patch('os.path.isfile') def test_update_conf_file(self, _): new_file = "\ DEVICE=eth0\n\ ONBOOT=yes\n\ BOOTPROTO=dhcp\n\ TYPE=Ethernet\n\ USERCTL=no\n\ PEERDNS=yes\n\ IPV6INIT=no\n\ NM_CONTROLLED=yes\n" existing_file = "\ DEVICE=eth0\n\ ONBOOT=yes\n\ BOOTPROTO=dhcp\n\ TYPE=Ethernet\n\ DHCP_HOSTNAME=existing\n\ USERCTL=no\n\ PEERDNS=yes\n\ IPV6INIT=no\n\ NM_CONTROLLED=yes\n" bad_file = "\ DEVICE=eth0\n\ ONBOOT=yes\n\ BOOTPROTO=dhcp\n\ TYPE=Ethernet\n\ USERCTL=no\n\ PEERDNS=yes\n\ IPV6INIT=no\n\ NM_CONTROLLED=yes\n\ DHCP_HOSTNAME=no_new_line" updated_file = "\ DEVICE=eth0\n\ ONBOOT=yes\n\ BOOTPROTO=dhcp\n\ TYPE=Ethernet\n\ USERCTL=no\n\ PEERDNS=yes\n\ IPV6INIT=no\n\ NM_CONTROLLED=yes\n\ DHCP_HOSTNAME=test\n" path = 'path' with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=new_file): fileutil.update_conf_file(path, 'DHCP_HOSTNAME', 'DHCP_HOSTNAME=test') patch_write.assert_called_once_with(path, updated_file) with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=existing_file): fileutil.update_conf_file(path, 'DHCP_HOSTNAME', 'DHCP_HOSTNAME=test') patch_write.assert_called_once_with(path, updated_file) with patch.object(fileutil, 'write_file') as patch_write: with patch.object(fileutil, 'read_file', return_value=bad_file): fileutil.update_conf_file(path, 'DHCP_HOSTNAME', 'DHCP_HOSTNAME=test') patch_write.assert_called_once_with(path, updated_file) def test_clean_ioerror_ignores_missing(self): e = IOError() e.errno = errno.ENOSPC # Send no paths fileutil.clean_ioerror(e) # Send missing file(s) / directories fileutil.clean_ioerror(e, paths=['/foo/not/here', None, '/bar/not/there']) def test_clean_ioerror_ignores_unless_ioerror(self): try: d = tempfile.mkdtemp() fd, f = tempfile.mkstemp() os.close(fd) fileutil.write_file(f, 'Not empty') # Send non-IOError exception e = Exception() fileutil.clean_ioerror(e, paths=[d, f]) self.assertTrue(os.path.isdir(d)) self.assertTrue(os.path.isfile(f)) # Send unrecognized IOError e = IOError() e.errno = errno.EFAULT self.assertFalse(e.errno in fileutil.KNOWN_IOERRORS) fileutil.clean_ioerror(e, paths=[d, f]) self.assertTrue(os.path.isdir(d)) self.assertTrue(os.path.isfile(f)) finally: shutil.rmtree(d) os.remove(f) def test_clean_ioerror_removes_files(self): fd, f = tempfile.mkstemp() os.close(fd) fileutil.write_file(f, 'Not empty') e = IOError() e.errno = errno.ENOSPC fileutil.clean_ioerror(e, paths=[f]) self.assertFalse(os.path.isdir(f)) 
        self.assertFalse(os.path.isfile(f))

    def test_clean_ioerror_removes_directories(self):
        d1 = tempfile.mkdtemp()
        d2 = tempfile.mkdtemp()
        for n in ['foo', 'bar']:
            fileutil.write_file(os.path.join(d2, n), 'Not empty')

        e = IOError()
        e.errno = errno.ENOSPC
        fileutil.clean_ioerror(e, paths=[d1, d2])
        self.assertFalse(os.path.isdir(d1))
        self.assertFalse(os.path.isfile(d1))
        self.assertFalse(os.path.isdir(d2))
        self.assertFalse(os.path.isfile(d2))

    def test_clean_ioerror_handles_a_range_of_errors(self):
        for err in fileutil.KNOWN_IOERRORS:
            e = IOError()
            e.errno = err

            d = tempfile.mkdtemp()
            fileutil.clean_ioerror(e, paths=[d])
            self.assertFalse(os.path.isdir(d))
            self.assertFalse(os.path.isfile(d))


if __name__ == '__main__':
    unittest.main()
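The clean_ioerror tests above imply its contract: paths are removed only when the exception is an IOError whose errno is in a known set of disk-related errors, both files and directories are handled, and missing paths or None entries are ignored. Below is a minimal sketch of that pattern; the exact membership of fileutil.KNOWN_IOERRORS is not shown by these tests (they only prove ENOSPC is in the set and EFAULT is not), so the set here is an assumption, and the function is illustrative rather than the agent's code.

import errno
import os
import shutil

# Assumed membership -- only ENOSPC (in) and EFAULT (out) are pinned down by the tests.
KNOWN_IOERRORS = [errno.EIO, errno.ENOMEM, errno.ENOSPC]

def clean_ioerror_sketch(exception, paths=None):
    # Best-effort cleanup: act only on recognized disk errors.
    if not isinstance(exception, IOError) or exception.errno not in KNOWN_IOERRORS:
        return
    for path in paths or []:
        if path is None:
            continue
        try:
            if os.path.isdir(path):
                shutil.rmtree(path)  # directories and their contents
            elif os.path.isfile(path):
                os.remove(path)      # plain files
        except OSError:
            pass                     # cleanup must never raise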
Azure-WALinuxAgent-2b21de5/tests/common/utils/test_flexible_version.py
import random  # pylint: disable=unused-import
import re
import unittest

from azurelinuxagent.common.utils.flexible_version import FlexibleVersion


class TestFlexibleVersion(unittest.TestCase):

    def setUp(self):
        self.v = FlexibleVersion()

    def test_compile_separator(self):
        tests = [
            '.',
            '',
            '-'
        ]
        for t in tests:
            t_escaped = re.escape(t)
            t_re = re.compile(t_escaped)
            self.assertEqual((t_escaped, t_re), self.v._compile_separator(t))
        self.assertEqual(('', re.compile('')), self.v._compile_separator(None))
        return

    def test_compile_pattern(self):
        self.v._compile_pattern()
        tests = {
            '1': True, '1.2': True, '1.2.3': True, '1.2.3.4': True, '1.2.3.4.5': True,
            '1alpha': True, '1.alpha': True, '1-alpha': True,
            '1alpha0': True, '1.alpha0': True, '1-alpha0': True,
            '1.2alpha': True, '1.2.alpha': True, '1.2-alpha': True,
            '1.2alpha0': True, '1.2.alpha0': True, '1.2-alpha0': True,
            '1beta': True, '1.beta': True, '1-beta': True,
            '1beta0': True, '1.beta0': True, '1-beta0': True,
            '1.2beta': True, '1.2.beta': True, '1.2-beta': True,
            '1.2beta0': True, '1.2.beta0': True, '1.2-beta0': True,
            '1rc': True, '1.rc': True, '1-rc': True,
            '1rc0': True, '1.rc0': True, '1-rc0': True,
            '1.2rc': True, '1.2.rc': True, '1.2-rc': True,
            '1.2rc0': True, '1.2.rc0': True, '1.2-rc0': True,
            '1.2.3.4alpha5': True,
            ' 1': False, 'beta': False, '1delta0': False, '': False
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                self.v.version_re.match(test) is not None,
                "test: {0} expected: {1} ".format(test, expectation))
        return

    def test_compile_pattern_sep(self):
        self.v.sep = '-'
        self.v._compile_pattern()
        tests = {
            '1': True, '1-2': True, '1-2-3': True, '1-2-3-4': True, '1-2-3-4-5': True,
            '1alpha': True, '1-alpha': True, '1alpha0': True, '1-alpha0': True,
            '1-2alpha': True, '1-2.alpha': True, '1-2-alpha': True,
            '1-2alpha0': True, '1-2.alpha0': True, '1-2-alpha0': True,
            '1beta': True, '1-beta': True, '1beta0': True, '1-beta0': True,
            '1-2beta': True, '1-2.beta': True, '1-2-beta': True,
            '1-2beta0': True, '1-2.beta0': True, '1-2-beta0': True,
            '1rc': True, '1-rc': True, '1rc0': True, '1-rc0': True,
            '1-2rc': True, '1-2.rc': True, '1-2-rc': True,
            '1-2rc0': True, '1-2.rc0': True, '1-2-rc0': True,
            '1-2-3-4alpha5': True,
            ' 1': False, 'beta': False, '1delta0': False, '': False
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                self.v.version_re.match(test) is not None,
                "test: {0} expected: {1} ".format(test, expectation))
        return

    def test_compile_pattern_prerel(self):
        self.v.prerel_tags = ('a', 'b', 'c')
        self.v._compile_pattern()
        tests = {
            '1': True, '1.2': True, '1.2.3': True, '1.2.3.4': True, '1.2.3.4.5': True,
            '1a': True, '1.a': True, '1-a': True, '1a0': True, '1.a0': True, '1-a0': True,
            '1.2a': True, '1.2.a': True, '1.2-a': True, '1.2a0': True, '1.2.a0': True, '1.2-a0': True,
            '1b': True, '1.b': True, '1-b': True, '1b0': True, '1.b0': True, '1-b0': True,
            '1.2b': True, '1.2.b': True, '1.2-b': True, '1.2b0': True, '1.2.b0': True, '1.2-b0': True,
            '1c': True, '1.c': True, '1-c': True, '1c0': True, '1.c0': True, '1-c0': True,
            '1.2c': True, '1.2.c': True, '1.2-c': True, '1.2c0': True, '1.2.c0': True, '1.2-c0': True,
            '1.2.3.4a5': True,
            ' 1': False, '1.2.3.4alpha5': False, 'beta': False, '1delta0': False, '': False
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                self.v.version_re.match(test) is not None,
                "test: {0} expected: {1} ".format(test, expectation))
        return

    def test_ensure_compatible_separators(self):
        v1 = FlexibleVersion('1.2.3')
        v2 = FlexibleVersion('1-2-3', sep='-')
        try:
            v1 == v2  # pylint: disable=pointless-statement
            self.assertTrue(False, "Incompatible separators failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ValueError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            # pylint: disable=redundant-unittest-assert
            self.assertTrue(False, "Incompatible separators raised an unexpected exception: {0}" \
                .format(t))
            # pylint: enable=redundant-unittest-assert
        return

    def test_ensure_compatible_prerel(self):
        v1 = FlexibleVersion('1.2.3', prerel_tags=('alpha', 'beta', 'rc'))
        v2 = FlexibleVersion('1.2.3', prerel_tags=('a', 'b', 'c'))
        try:
            v1 == v2  # pylint: disable=pointless-statement
            self.assertTrue(False, "Incompatible prerel_tags failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ValueError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            # pylint: disable=redundant-unittest-assert
            self.assertTrue(False, "Incompatible prerel_tags raised an unexpected exception: {0}" \
                .format(t))
            # pylint: enable=redundant-unittest-assert
        return

    def test_ensure_compatible_prerel_length(self):
        v1 = FlexibleVersion('1.2.3', prerel_tags=('a', 'b', 'c'))
        v2 = FlexibleVersion('1.2.3', prerel_tags=('a', 'b'))
        try:
            v1 == v2  # pylint: disable=pointless-statement
            self.assertTrue(False, "Incompatible prerel_tags failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ValueError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            # pylint: disable=redundant-unittest-assert
            self.assertTrue(False, "Incompatible prerel_tags raised an unexpected exception: {0}" \
                .format(t))
            # pylint: enable=redundant-unittest-assert
        return

    def test_ensure_compatible_prerel_order(self):
        v1 = FlexibleVersion('1.2.3', prerel_tags=('a', 'b'))
        v2 = FlexibleVersion('1.2.3', prerel_tags=('b', 'a'))
        try:
            v1 == v2  # pylint: disable=pointless-statement
            self.assertTrue(False, "Incompatible prerel_tags failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ValueError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            # pylint: disable=redundant-unittest-assert
            self.assertTrue(False, "Incompatible prerel_tags raised an unexpected exception: {0}" \
                .format(t))
            # pylint: enable=redundant-unittest-assert
        return

    def test_major(self):
        tests = {
            '1': 1,
            '1.2': 1,
            '1.2.3': 1,
            '1.2.3.4': 1
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                FlexibleVersion(test).major)
        return

    def test_minor(self):
        tests = {
            '1': 0,
            '1.2': 2,
            '1.2.3': 2,
            '1.2.3.4': 2
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                FlexibleVersion(test).minor)
        return

    def test_patch(self):
        tests = {
            '1': 0,
            '1.2': 0,
            '1.2.3': 3,
            '1.2.3.4': 3
        }
        for test in iter(tests):
            expectation = tests[test]
            self.assertEqual(
                expectation,
                FlexibleVersion(test).patch)
        return

    def test_parse(self):
        tests = {
            "1.2.3.4": ((1, 2, 3, 4), None),
            "1.2.3.4alpha5": ((1, 2, 3, 4), ('alpha', 5)),
            "1.2.3.4-alpha5": ((1, 2, 3, 4), ('alpha', 5)),
            "1.2.3.4.alpha5": ((1, 2, 3, 4), ('alpha', 5))
        }
        for test in iter(tests):
            expectation = tests[test]
            self.v._parse(test)
            self.assertEqual(expectation, (self.v.version, self.v.prerelease))
        return

    def test_decrement(self):
        src_v = FlexibleVersion('1.0.0.0.10')
        dst_v = FlexibleVersion(str(src_v))
        for i in range(1, 10):
            dst_v -= 1
            self.assertEqual(i, src_v.version[-1] - dst_v.version[-1])
        return

    def test_decrement_disallows_below_zero(self):
        try:
            FlexibleVersion('1.0') - 1  # pylint: disable=expression-not-assigned
            self.assertTrue(False, "Decrement failed to raise an exception")  # pylint: disable=redundant-unittest-assert
        except ArithmeticError:
            pass
        except Exception as e:
            t = e.__class__.__name__
            self.assertTrue(False, "Decrement raised an unexpected exception: {0}".format(t))  # pylint: disable=redundant-unittest-assert
        return

    def test_increment(self):
        src_v = FlexibleVersion('1.0.0.0.0')
        dst_v = FlexibleVersion(str(src_v))
        for i in range(1, 10):
            dst_v += 1
            self.assertEqual(i, dst_v.version[-1] - src_v.version[-1])
        return

    def test_str(self):
        tests = [
            '1', '1.2', '1.2.3', '1.2.3.4', '1.2.3.4.5',
            '1alpha', '1.alpha', '1-alpha', '1alpha0', '1.alpha0', '1-alpha0',
            '1.2alpha', '1.2.alpha', '1.2-alpha', '1.2alpha0', '1.2.alpha0', '1.2-alpha0',
            '1beta', '1.beta', '1-beta', '1beta0', '1.beta0', '1-beta0',
            '1.2beta', '1.2.beta', '1.2-beta', '1.2beta0', '1.2.beta0', '1.2-beta0',
            '1rc', '1.rc', '1-rc', '1rc0', '1.rc0', '1-rc0',
            '1.2rc', '1.2.rc', '1.2-rc', '1.2rc0', '1.2.rc0', '1.2-rc0',
            '1.2.3.4alpha5',
        ]
        for test in tests:
            self.assertEqual(test, str(FlexibleVersion(test)))
        return

    def test_creation_from_flexible_version(self):
        tests = [
            '1', '1.2', '1.2.3', '1.2.3.4', '1.2.3.4.5',
            '1alpha', '1.alpha', '1-alpha', '1alpha0', '1.alpha0', '1-alpha0',
            '1.2alpha', '1.2.alpha', '1.2-alpha', '1.2alpha0', '1.2.alpha0', '1.2-alpha0',
            '1beta', '1.beta', '1-beta', '1beta0', '1.beta0', '1-beta0',
            '1.2beta', '1.2.beta', '1.2-beta', '1.2beta0', '1.2.beta0', '1.2-beta0',
            '1rc', '1.rc', '1-rc', '1rc0', '1.rc0', '1-rc0',
            '1.2rc', '1.2.rc', '1.2-rc', '1.2rc0', '1.2.rc0', '1.2-rc0',
            '1.2.3.4alpha5',
        ]
        for test in tests:
            v = FlexibleVersion(test)
            self.assertEqual(test, str(FlexibleVersion(v)))
        return

    def test_repr(self):
        v = FlexibleVersion('1,2,3rc4', ',', ['lol', 'rc'])
        expected = "FlexibleVersion ('1,2,3rc4', ',', ('lol', 'rc'))"
        self.assertEqual(expected, repr(v))

    def test_order(self):
        test0 = ["1.7.0", "1.7.0rc0", "1.11.0"]
        expected0 = ['1.7.0rc0', '1.7.0', '1.11.0']
        self.assertEqual(expected0, list(map(str, sorted([FlexibleVersion(v) for v in test0]))))

        test1 = [
            '2.0.2rc2', '2.2.0beta3', '2.0.10', '2.1.0alpha42',
            '2.0.2beta4', '2.1.1', '2.0.1', '2.0.2rc3',
            '2.2.0', '2.0.0', '3.0.1', '2.1.0rc1'
        ]
        expected1 = [
            '2.0.0', '2.0.1', '2.0.2beta4', '2.0.2rc2',
            '2.0.2rc3', '2.0.10', '2.1.0alpha42', '2.1.0rc1',
            '2.1.1', '2.2.0beta3', '2.2.0', '3.0.1'
        ]
        self.assertEqual(expected1, list(map(str, sorted([FlexibleVersion(v) for v in test1]))))

        self.assertEqual(FlexibleVersion("1.0.0.0.0.0.0.0"), FlexibleVersion("1"))

        self.assertFalse(FlexibleVersion("1.0") > FlexibleVersion("1.0"))
        self.assertFalse(FlexibleVersion("1.0") < FlexibleVersion("1.0"))

        self.assertTrue(FlexibleVersion("1.0") < FlexibleVersion("1.1"))
        self.assertTrue(FlexibleVersion("1.9") < FlexibleVersion("1.10"))
        self.assertTrue(FlexibleVersion("1.9.9") < FlexibleVersion("1.10.0"))
        self.assertTrue(FlexibleVersion("1.0.0.0") < FlexibleVersion("1.2.0.0"))

        self.assertTrue(FlexibleVersion("1.1") > FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.10") > FlexibleVersion("1.9"))
        self.assertTrue(FlexibleVersion("1.10.0") > FlexibleVersion("1.9.9"))
        self.assertTrue(FlexibleVersion("1.2.0.0") > FlexibleVersion("1.0.0.0"))

        self.assertTrue(FlexibleVersion("1.0") <= FlexibleVersion("1.1"))
        self.assertTrue(FlexibleVersion("1.1") > FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.1") >= FlexibleVersion("1.0"))

        self.assertTrue(FlexibleVersion("1.0") == FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.0") >= FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.0") <= FlexibleVersion("1.0"))

        self.assertFalse(FlexibleVersion("1.0") != FlexibleVersion("1.0"))
        self.assertTrue(FlexibleVersion("1.1") != FlexibleVersion("1.0"))
        return


if __name__ == '__main__':
    unittest.main()
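For readers skimming test_order, the ordering rules it pins down are worth restating as plain usage. Every line below restates an assertion already made in the test file above:

from azurelinuxagent.common.utils.flexible_version import FlexibleVersion

# Numeric components compare numerically, not lexically.
assert FlexibleVersion("1.9") < FlexibleVersion("1.10")
# Trailing zero components are insignificant.
assert FlexibleVersion("1.0.0.0.0.0.0.0") == FlexibleVersion("1")
# A prerelease sorts before the corresponding release...
assert FlexibleVersion("1.7.0rc0") < FlexibleVersion("1.7.0")
# ...and prerelease tags order as given (alpha < rc in the default tag list).
assert FlexibleVersion("2.1.0alpha42") < FlexibleVersion("2.1.0rc1")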
Azure-WALinuxAgent-2b21de5/tests/common/utils/test_network_util.py
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import subprocess

from mock.mock import patch

import azurelinuxagent.common.utils.networkutil as networkutil
from tests.lib.tools import AgentTestCase


class TestNetworkOperations(AgentTestCase):

    def test_route_entry(self):
        interface = "eth0"
        mask = "C0FFFFFF"         # 255.255.255.192
        destination = "C0BB910A"  # 10.145.187.192
        gateway = "C1BB910A"      # 10.145.187.193
        flags = "1"
        metric = "0"

        expected = 'Iface: eth0\tDestination: 10.145.187.192\tGateway: 10.145.187.193\tMask: 255.255.255.192\tFlags: 0x0001\tMetric: 0'
        expected_json = '{"Iface": "eth0", "Destination": "10.145.187.192", "Gateway": "10.145.187.193", "Mask": "255.255.255.192", "Flags": "0x0001", "Metric": "0"}'

        entry = networkutil.RouteEntry(interface, destination, gateway, mask, flags, metric)

        self.assertEqual(str(entry), expected)
        self.assertEqual(entry.to_json(), expected_json)

    def test_nic_link_only(self):
        nic = networkutil.NetworkInterfaceCard("test0", "link info")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info" }')

    def test_nic_ipv4(self):
        nic = networkutil.NetworkInterfaceCard("test0", "link info")
        nic.add_ipv4("ipv4-1")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info", "ipv4": ["ipv4-1"] }')
        nic.add_ipv4("ipv4-2")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info", "ipv4": ["ipv4-1","ipv4-2"] }')

    def test_nic_ipv6(self):
        nic = networkutil.NetworkInterfaceCard("test0", "link info")
        nic.add_ipv6("ipv6-1")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info", "ipv6": ["ipv6-1"] }')
        nic.add_ipv6("ipv6-2")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link info", "ipv6": ["ipv6-1","ipv6-2"] }')

    def test_nic_ordinary(self):
        nic = networkutil.NetworkInterfaceCard("test0", "link INFO")
        nic.add_ipv6("ipv6-1")
        nic.add_ipv4("ipv4-1")
        self.assertEqual(str(nic), '{ "name": "test0", "link": "link INFO", "ipv4": ["ipv4-1"], "ipv6": ["ipv6-1"] }')


class TestAddFirewallRules(AgentTestCase):

    def test_it_should_add_firewall_rules(self):
        test_dst_ip = "1.2.3.4"
        test_uid = 9999
        test_wait = "-w"
        original_popen = subprocess.Popen
        commands_called = []

        def mock_popen(command, *args, **kwargs):
            if "iptables" in command:
                commands_called.append(command)
                command = ["echo", "iptables"]
            return original_popen(command, *args, **kwargs)

        with patch("azurelinuxagent.common.utils.shellutil.subprocess.Popen", side_effect=mock_popen):
            networkutil.AddFirewallRules.add_iptables_rules(test_wait, test_dst_ip, test_uid)

        self.assertTrue(all(test_dst_ip in cmd for cmd in commands_called), "Dest IP was not set correctly in iptables")
        self.assertTrue(all(test_wait in cmd for cmd in commands_called), "The wait was not set properly")
        self.assertTrue(all(str(test_uid) in cmd for cmd in commands_called if ("ACCEPT" in cmd and "--uid-owner" in cmd)),
                        "The UID is not set for the accept command")

    def test_it_should_raise_if_invalid_data(self):
        with self.assertRaises(Exception) as context_manager:
            networkutil.AddFirewallRules.add_iptables_rules(wait="", dst_ip="", uid=9999)
        self.assertIn("Destination IP should not be empty", str(context_manager.exception))

        with self.assertRaises(Exception) as context_manager:
            networkutil.AddFirewallRules.add_iptables_rules(wait="", dst_ip="1.2.3.4", uid="")
        self.assertIn("User ID should not be empty", str(context_manager.exception))
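The hex constants in test_route_entry use the little-endian encoding found in /proc/net/route. A standalone snippet, independent of networkutil, that verifies the conversions the test expects:

import socket
import struct

def route_hex_to_ip(field):
    # /proc/net/route stores IPv4 fields as little-endian hexadecimal
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

print(route_hex_to_ip("C0BB910A"))  # 10.145.187.192  (destination)
print(route_hex_to_ip("C1BB910A"))  # 10.145.187.193  (gateway)
print(route_hex_to_ip("C0FFFFFF"))  # 255.255.255.192 (mask)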
Azure-WALinuxAgent-2b21de5/tests/common/utils/test_passwords.txt
김치
करी
hamburger
café

Azure-WALinuxAgent-2b21de5/tests/common/utils/test_rest_util.py
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import unittest

from azurelinuxagent.common.exception import HttpError, ResourceGoneError, InvalidContainerError
import azurelinuxagent.common.utils.restutil as restutil
from azurelinuxagent.common.utils.restutil import HTTP_USER_AGENT
from azurelinuxagent.common.future import httpclient, ustr
from tests.lib.tools import AgentTestCase, call, Mock, MagicMock, patch


class TestIOErrorCounter(AgentTestCase):

    def test_increment_hostplugin(self):
        restutil.IOErrorCounter.reset()
        restutil.IOErrorCounter.set_protocol_endpoint()
        restutil.IOErrorCounter.increment(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT)

        counts = restutil.IOErrorCounter.get_and_reset()
        self.assertEqual(1, counts["hostplugin"])
        self.assertEqual(0, counts["protocol"])
        self.assertEqual(0, counts["other"])

    def test_increment_protocol(self):
        restutil.IOErrorCounter.reset()
        restutil.IOErrorCounter.set_protocol_endpoint()
        restutil.IOErrorCounter.increment(restutil.KNOWN_WIRESERVER_IP, 80)

        counts = restutil.IOErrorCounter.get_and_reset()
        self.assertEqual(0, counts["hostplugin"])
        self.assertEqual(1, counts["protocol"])
        self.assertEqual(0, counts["other"])

    def test_increment_other(self):
        restutil.IOErrorCounter.reset()
        restutil.IOErrorCounter.set_protocol_endpoint()
        restutil.IOErrorCounter.increment('169.254.169.254', 80)

        counts = restutil.IOErrorCounter.get_and_reset()
        self.assertEqual(0, counts["hostplugin"])
        self.assertEqual(0, counts["protocol"])
        self.assertEqual(1, counts["other"])

    def test_get_and_reset(self):
        restutil.IOErrorCounter.reset()
        restutil.IOErrorCounter.set_protocol_endpoint()
        restutil.IOErrorCounter.increment(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT)
        restutil.IOErrorCounter.increment(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT)
        restutil.IOErrorCounter.increment(restutil.KNOWN_WIRESERVER_IP, 80)
        restutil.IOErrorCounter.increment('169.254.169.254', 80)
        restutil.IOErrorCounter.increment('169.254.169.254', 80)

        counts = restutil.IOErrorCounter.get_and_reset()
        self.assertEqual(2, counts.get("hostplugin"))
        self.assertEqual(1, counts.get("protocol"))
        self.assertEqual(2, counts.get("other"))
        self.assertEqual(
            {"hostplugin": 0, "protocol": 0, "other": 0},
            restutil.IOErrorCounter._counts)


class TestHttpOperations(AgentTestCase):

    def test_parse_url(self):
        test_uri = "http://abc.def/ghi#hash?jkl=mn"
        host, port, secure, rel_uri = restutil._parse_url(test_uri)  # pylint: disable=unused-variable
        self.assertEqual("abc.def", host)
        self.assertEqual("/ghi#hash?jkl=mn", rel_uri)

        test_uri = "http://abc.def/"
        host, port, secure, rel_uri = restutil._parse_url(test_uri)
        self.assertEqual("abc.def", host)
        self.assertEqual("/", rel_uri)
        self.assertEqual(False, secure)

        test_uri = "https://abc.def/ghi?jkl=mn"
        host, port, secure, rel_uri = restutil._parse_url(test_uri)
        self.assertEqual(True, secure)

        test_uri = "http://abc.def:80/"
        host, port, secure, rel_uri = restutil._parse_url(test_uri)
        self.assertEqual("abc.def", host)

        host, port, secure, rel_uri = restutil._parse_url("")
        self.assertEqual(None, host)
        self.assertEqual(rel_uri, "")

        host, port, secure, rel_uri = restutil._parse_url("None")
        self.assertEqual(None, host)
        self.assertEqual(rel_uri, "None")

    def test_trim_url_parameters(self):
        test_uri = "http://abc.def/ghi#hash?jkl=mn"
        rel_uri = restutil._trim_url_parameters(test_uri)
        self.assertEqual("http://abc.def/ghi", rel_uri)

        test_uri = "https://abc.def/ghi?jkl=mn"
        rel_uri = restutil._trim_url_parameters(test_uri)
        self.assertEqual("https://abc.def/ghi", rel_uri)

        test_uri = "https://abc.def:8443/ghi?jkl=mn"
        rel_uri = restutil._trim_url_parameters(test_uri)
        self.assertEqual("https://abc.def:8443/ghi", rel_uri)

        rel_uri = restutil._trim_url_parameters("")
        self.assertEqual(rel_uri, "")

        rel_uri = restutil._trim_url_parameters("None")
        self.assertEqual(rel_uri, "None")

    def test_cleanup_sas_tokens_from_urls_for_normal_cases(self):
        test_url = "http://abc.def/ghi#hash?jkl=mn"
        filtered_url = restutil.redact_sas_tokens_in_urls(test_url)
        self.assertEqual(test_url, filtered_url)

        test_url = "http://abc.def:80/"
        filtered_url = restutil.redact_sas_tokens_in_urls(test_url)
        self.assertEqual(test_url, filtered_url)

        test_url = "http://abc.def/"
        filtered_url = restutil.redact_sas_tokens_in_urls(test_url)
        self.assertEqual(test_url, filtered_url)

        test_url = "https://abc.def/ghi?jkl=mn"
        filtered_url = restutil.redact_sas_tokens_in_urls(test_url)
        self.assertEqual(test_url, filtered_url)

    def test_cleanup_sas_tokens_from_urls_containing_sas_tokens(self):
        # Contains pairs of URLs (RawURL, RedactedURL)
        urls_tuples = [
            ("https://abc.def.xyz.123.net/functiontest/yokawasa.png?sig"
             "=sXBjML1Fpk9UnTBtajo05ZTFSk0LWFGvARZ6WlVcAog%3D&srt=o&ss=b&"
             "spr=https&sp=rl&sv=2016-05-31&se=2017-07-01T00%3A21%3A38Z&"
             "st=2017-07-01T23%3A16%3A38Z",
             "https://abc.def.xyz.123.net/functiontest/yokawasa.png?sig"
             "=" + restutil.REDACTED_TEXT +
             "&srt=o&ss=b&spr=https&sp=rl&sv=2016-05-31&se=2017-07-01T00"
             "%3A21%3A38Z&st=2017-07-01T23%3A16%3A38Z"),
            ("https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se=2018-07"
             "-26T02:20:44Z&st=2018-07-25T18:20:44Z&spr=https,"
             "http&sig=DavQgRtl99DsEPv9Xeb63GnLXCuaLYw5ay%2BE1cFckQY%3D",
             "https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se"
             "=2018-07-26T02:20:44Z&st=2018-07-25T18:20:44Z&spr=https,"
             "http&sig=" + restutil.REDACTED_TEXT),
            ("https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se=2018-07"
             "-26T02:20:44Z&st=2018-07-25T18:20:44Z&spr=https,"
             "http&sig=ttSCKmyjiDEeIzT9q7HtYYgbCRIXuesFSOhNEab52NM%3D",
             "https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se"
             "=2018-07-26T02:20:44Z&st=2018-07-25T18:20:44Z&spr=https,"
             "http&sig=" + restutil.REDACTED_TEXT),
            ("https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se=2018-07"
             "-26T02:20:42Z&st=2018-07-25T18:20:44Z&spr=https,"
             "http&sig=X0imGmcj5KcBPFcqlfYjIZakzGrzONGbRv5JMOnGrwc%3D",
             "https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se"
             "=2018-07-26T02:20:42Z&st=2018-07-25T18:20:44Z&spr=https,"
             "http&sig=" + restutil.REDACTED_TEXT),
            ("https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se=2018-07"
             "-26T02:20:42Z&st=2018-07-25T18:20:44Z&spr=https,"
             "http&sig=9hfxYvaZzrMahtGO1OgMUiFGnDOtZXulZ3skkv1eVBg%3D",
             "https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se"
             "=2018-07-26T02:20:42Z&st=2018-07-25T18:20:44Z&spr=https,"
             "http&sig=" + restutil.REDACTED_TEXT),
            ("https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se=2018-07"
             "-26T02:20:42Z&st=2018-07-25T18:20:44Z&spr=https"
             "&sig=cmluQEHnOGsVK9NDm83ruuPdPWNQcerfjOAbkspNZXU%3D",
             "https://abc.def.xyz.123.net/?sv=2017-11-09&ss=b&srt=o&sp=r&se"
             "=2018-07-26T02:20:42Z&st=2018-07-25T18:20:44Z&spr=https&sig"
             "=" + restutil.REDACTED_TEXT)
        ]

        for x in urls_tuples:
            self.assertEqual(restutil.redact_sas_tokens_in_urls(x[0]), x[1])

    @patch('azurelinuxagent.common.conf.get_httpproxy_port')
    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_none_is_default(self, mock_host, mock_port):
        mock_host.return_value = None
        mock_port.return_value = None
        h, p = restutil._get_http_proxy()
        self.assertEqual(None, h)
        self.assertEqual(None, p)

    @patch('azurelinuxagent.common.conf.get_httpproxy_port')
    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_configuration_overrides_env(self, mock_host, mock_port):
        mock_host.return_value = "host"
        mock_port.return_value = None
        h, p = restutil._get_http_proxy()
        self.assertEqual("host", h)
        self.assertEqual(None, p)
        self.assertEqual(1, mock_host.call_count)
        self.assertEqual(1, mock_port.call_count)

    @patch('azurelinuxagent.common.conf.get_httpproxy_port')
    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_configuration_requires_host(self, mock_host, mock_port):
        mock_host.return_value = None
        mock_port.return_value = None
        h, p = restutil._get_http_proxy()
        self.assertEqual(None, h)
        self.assertEqual(None, p)
        self.assertEqual(1, mock_host.call_count)
        self.assertEqual(0, mock_port.call_count)

    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_http_uses_httpproxy(self, mock_host):
        mock_host.return_value = None
        with patch.dict(os.environ, {
                'http_proxy': 'http://foo.com:80',
                'https_proxy': 'https://bar.com:443'}):
            h, p = restutil._get_http_proxy()
            self.assertEqual("foo.com", h)
            self.assertEqual(80, p)

    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_https_uses_httpsproxy(self, mock_host):
        mock_host.return_value = None
        with patch.dict(os.environ, {
                'http_proxy': 'http://foo.com:80',
                'https_proxy': 'https://bar.com:443'}):
            h, p = restutil._get_http_proxy(secure=True)
            self.assertEqual("bar.com", h)
            self.assertEqual(443, p)

    @patch('azurelinuxagent.common.conf.get_httpproxy_host')
    def test_get_http_proxy_ignores_user_in_httpproxy(self, mock_host):
        mock_host.return_value = None
        with patch.dict(os.environ, {
                'http_proxy': 'http://user:pw@foo.com:80'}):
            h, p = restutil._get_http_proxy()
            self.assertEqual("foo.com", h)
            self.assertEqual(80, p)

    def test_get_no_proxy_with_values_set(self):
        no_proxy_list = ["foo.com", "www.google.com"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)}):
            no_proxy_from_environment = restutil.get_no_proxy()

            self.assertEqual(len(no_proxy_list), len(no_proxy_from_environment))

            for i, j in zip(no_proxy_from_environment, no_proxy_list):
                self.assertEqual(i, j)

    def test_get_no_proxy_with_incorrect_variable_set(self):
        no_proxy_list = ["foo.com", "www.google.com", "", ""]
        no_proxy_list_cleaned = [entry for entry in no_proxy_list if entry]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)}):
            no_proxy_from_environment = restutil.get_no_proxy()

            self.assertEqual(len(no_proxy_list_cleaned), len(no_proxy_from_environment))

            for i, j in zip(no_proxy_from_environment, no_proxy_list_cleaned):
                print(i, j)
                self.assertEqual(i, j)

    def test_get_no_proxy_with_ip_addresses_set(self):
        no_proxy_var = "10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6,10.0.0.7,10.0.0.8,10.0.0.9,10.0.0.10,"
        no_proxy_list = ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.4', '10.0.0.5',
                         '10.0.0.6', '10.0.0.7', '10.0.0.8', '10.0.0.9', '10.0.0.10']

        with patch.dict(os.environ, {
                'no_proxy': no_proxy_var}):
            no_proxy_from_environment = restutil.get_no_proxy()

            self.assertEqual(len(no_proxy_list), len(no_proxy_from_environment))

            for i, j in zip(no_proxy_from_environment, no_proxy_list):
                self.assertEqual(i, j)

    def test_get_no_proxy_default(self):
        no_proxy_generator = restutil.get_no_proxy()
        self.assertIsNone(no_proxy_generator)

    def test_is_ipv4_address(self):
        self.assertTrue(restutil.is_ipv4_address('8.8.8.8'))
        self.assertFalse(restutil.is_ipv4_address('localhost.localdomain'))
        self.assertFalse(restutil.is_ipv4_address('2001:4860:4860::8888'))  # ipv6 tests

    def test_is_valid_cidr(self):
        self.assertTrue(restutil.is_valid_cidr('192.168.1.0/24'))
        self.assertFalse(restutil.is_valid_cidr('8.8.8.8'))
        self.assertFalse(restutil.is_valid_cidr('192.168.1.0/a'))
        self.assertFalse(restutil.is_valid_cidr('192.168.1.0/128'))
        self.assertFalse(restutil.is_valid_cidr('192.168.1.0/-1'))
        self.assertFalse(restutil.is_valid_cidr('192.168.1.999/24'))

    def test_address_in_network(self):
        self.assertTrue(restutil.address_in_network('192.168.1.1', '192.168.1.0/24'))
        self.assertFalse(restutil.address_in_network('172.16.0.1', '192.168.1.0/24'))

    def test_dotted_netmask(self):
        self.assertEqual(restutil.dotted_netmask(0), '0.0.0.0')
        self.assertEqual(restutil.dotted_netmask(8), '255.0.0.0')
        self.assertEqual(restutil.dotted_netmask(16), '255.255.0.0')
        self.assertEqual(restutil.dotted_netmask(24), '255.255.255.0')
        self.assertEqual(restutil.dotted_netmask(32), '255.255.255.255')
        self.assertRaises(ValueError, restutil.dotted_netmask, 33)

    def test_bypass_proxy(self):
        no_proxy_list = ["foo.com", "www.google.com", "168.63.129.16", "Microsoft.com"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)}):
            self.assertFalse(restutil.bypass_proxy("http://bar.com"))
            self.assertTrue(restutil.bypass_proxy("http://foo.com"))
            self.assertTrue(restutil.bypass_proxy("http://168.63.129.16"))
            self.assertFalse(restutil.bypass_proxy("http://baz.com"))
            self.assertFalse(restutil.bypass_proxy("http://10.1.1.1"))
            self.assertTrue(restutil.bypass_proxy("http://www.microsoft.com"))

    @patch("azurelinuxagent.common.future.httpclient.HTTPSConnection")
    @patch("azurelinuxagent.common.future.httpclient.HTTPConnection")
    def test_http_request_direct(self, HTTPConnection, HTTPSConnection):
        mock_conn = \
            MagicMock(getresponse=Mock(return_value=Mock(read=Mock(return_value="TheResults"))))

        HTTPConnection.return_value = mock_conn

        resp = restutil._http_request("GET", "foo", "/bar", 10)

        HTTPConnection.assert_has_calls([
            call("foo", 80, timeout=10)
        ])
        HTTPSConnection.assert_not_called()
        mock_conn.request.assert_has_calls([
            call(method="GET", url="/bar", body=None,
                 headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
        ])
        self.assertEqual(1, mock_conn.getresponse.call_count)
        self.assertNotEqual(None, resp)
        self.assertEqual("TheResults", resp.read())

    @patch("azurelinuxagent.common.future.httpclient.HTTPSConnection")
    @patch("azurelinuxagent.common.future.httpclient.HTTPConnection")
    def test_http_request_direct_secure(self, HTTPConnection, HTTPSConnection):
        mock_conn = \
            MagicMock(getresponse=Mock(return_value=Mock(read=Mock(return_value="TheResults"))))

        HTTPSConnection.return_value = mock_conn

        resp = restutil._http_request("GET", "foo", "/bar", 10, secure=True)

        HTTPConnection.assert_not_called()
        HTTPSConnection.assert_has_calls([
            call("foo", 443, timeout=10)
        ])
        mock_conn.request.assert_has_calls([
            call(method="GET", url="/bar", body=None,
                 headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
        ])
        self.assertEqual(1, mock_conn.getresponse.call_count)
        self.assertNotEqual(None, resp)
        self.assertEqual("TheResults", resp.read())
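    # NOTE: the no_proxy handling exercised by the no_proxy/bypass_proxy tests in this
    # class follows the common curl/requests convention -- exact host names and CIDR
    # blocks both suppress the proxy. For example (restating assertions from these tests):
    #
    #     os.environ['no_proxy'] = "foo.com,10.0.0.1/24"
    #     restutil.bypass_proxy("http://foo.com")    # True  (exact host match)
    #     restutil.bypass_proxy("http://10.0.0.1")   # True  (address inside the CIDR block)
    #     restutil.bypass_proxy("http://10.0.1.1")   # False (address outside the CIDR block)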
    @patch("azurelinuxagent.common.future.httpclient.HTTPSConnection")
    @patch("azurelinuxagent.common.future.httpclient.HTTPConnection")
    def test_http_request_proxy(self, HTTPConnection, HTTPSConnection):
        mock_conn = \
            MagicMock(getresponse=Mock(return_value=Mock(read=Mock(return_value="TheResults"))))

        HTTPConnection.return_value = mock_conn

        resp = restutil._http_request("GET", "foo", "/bar", 10,
                                      proxy_host="foo.bar", proxy_port=23333)

        HTTPConnection.assert_has_calls([
            call("foo.bar", 23333, timeout=10)
        ])
        HTTPSConnection.assert_not_called()
        mock_conn.request.assert_has_calls([
            call(method="GET", url="http://foo:80/bar", body=None,
                 headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
        ])
        self.assertEqual(1, mock_conn.getresponse.call_count)
        self.assertNotEqual(None, resp)
        self.assertEqual("TheResults", resp.read())

    @patch("azurelinuxagent.common.utils.restutil._get_http_proxy")
    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_proxy_with_no_proxy_check(self, _http_request, sleep, mock_get_http_proxy):  # pylint: disable=unused-argument
        mock_http_resp = MagicMock()
        mock_http_resp.read = Mock(return_value="hehe")
        _http_request.return_value = mock_http_resp
        mock_get_http_proxy.return_value = "host", 1234  # Return a host/port combination

        no_proxy_list = ["foo.com", "www.google.com", "168.63.129.16"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)}):
            # Test http get
            resp = restutil.http_get("http://foo.com", use_proxy=True)
            self.assertEqual("hehe", resp.read())
            self.assertEqual(0, mock_get_http_proxy.call_count)

            # Test http get
            resp = restutil.http_get("http://bar.com", use_proxy=True)
            self.assertEqual("hehe", resp.read())
            self.assertEqual(1, mock_get_http_proxy.call_count)

    def test_proxy_conditions_with_no_proxy(self):
        should_use_proxy = True
        should_not_use_proxy = False
        use_proxy = True

        no_proxy_list = ["foo.com", "www.google.com", "168.63.129.16"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)}):
            host = "10.0.0.1"
            self.assertEqual(should_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "foo.com"
            self.assertEqual(should_not_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "www.google.com"
            self.assertEqual(should_not_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "168.63.129.16"
            self.assertEqual(should_not_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "www.bar.com"
            self.assertEqual(should_use_proxy, use_proxy and not restutil.bypass_proxy(host))

        no_proxy_list = ["10.0.0.1/24"]
        with patch.dict(os.environ, {
                'no_proxy': ",".join(no_proxy_list)}):
            host = "www.bar.com"
            self.assertEqual(should_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.0.1"
            self.assertEqual(should_not_use_proxy, use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.1.1"
            self.assertEqual(should_use_proxy, use_proxy and not restutil.bypass_proxy(host))

        # When no_proxy is empty
        with patch.dict(os.environ, {
                'no_proxy': ""}):
            host = "10.0.0.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "foo.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "www.google.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "168.63.129.16"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "www.bar.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.0.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.1.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

        # When os.environ is empty - no global variables defined.
        with patch.dict(os.environ, {}):
            host = "10.0.0.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "foo.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "www.google.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "168.63.129.16"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "www.bar.com"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.0.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

            host = "10.0.1.1"
            self.assertTrue(use_proxy and not restutil.bypass_proxy(host))

    @patch("azurelinuxagent.common.future.httpclient.HTTPSConnection")
    @patch("azurelinuxagent.common.future.httpclient.HTTPConnection")
    def test_http_request_proxy_secure(self, HTTPConnection, HTTPSConnection):
        mock_conn = \
            MagicMock(getresponse=Mock(return_value=Mock(read=Mock(return_value="TheResults"))))

        HTTPSConnection.return_value = mock_conn

        resp = restutil._http_request("GET", "foo", "/bar", 10,
                                      proxy_host="foo.bar", proxy_port=23333,
                                      secure=True)

        HTTPConnection.assert_not_called()
        HTTPSConnection.assert_has_calls([
            call("foo.bar", 23333, timeout=10)
        ])
        mock_conn.request.assert_has_calls([
            call(method="GET", url="https://foo:443/bar", body=None,
                 headers={'User-Agent': HTTP_USER_AGENT, 'Connection': 'close'})
        ])
        self.assertEqual(1, mock_conn.getresponse.call_count)
        self.assertNotEqual(None, resp)
        self.assertEqual("TheResults", resp.read())

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_with_retry(self, _http_request, sleep):  # pylint: disable=unused-argument
        mock_http_resp = MagicMock()
        mock_http_resp.read = Mock(return_value="hehe")
        _http_request.return_value = mock_http_resp

        # Test http get
        resp = restutil.http_get("http://foo.bar")
        self.assertEqual("hehe", resp.read())

        # Test https get
        resp = restutil.http_get("https://foo.bar")
        self.assertEqual("hehe", resp.read())

        # Test http failure
        _http_request.side_effect = httpclient.HTTPException("Http failure")
        self.assertRaises(restutil.HttpError, restutil.http_get, "http://foo.bar")

        # Test http failure
        _http_request.side_effect = IOError("IO failure")
        self.assertRaises(restutil.HttpError, restutil.http_get, "http://foo.bar")

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_status_codes(self, _http_request, _sleep):
        _http_request.side_effect = [
            Mock(status=httpclient.SERVICE_UNAVAILABLE),
            Mock(status=httpclient.OK)
        ]

        restutil.http_get("https://foo.bar")
        self.assertEqual(2, _http_request.call_count)
        self.assertEqual(1, _sleep.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_passed_status_codes(self, _http_request, _sleep):
        # Ensure the code is not part of the standard set
        self.assertFalse(httpclient.UNAUTHORIZED in restutil.RETRY_CODES)

        _http_request.side_effect = [
            Mock(status=httpclient.UNAUTHORIZED),
            Mock(status=httpclient.OK)
        ]

        restutil.http_get("https://foo.bar", retry_codes=[httpclient.UNAUTHORIZED])
        self.assertEqual(2, _http_request.call_count)
        self.assertEqual(1, _sleep.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_with_fibonacci_delay(self, _http_request, _sleep):
        # Ensure the code is not a throttle code
        self.assertFalse(httpclient.BAD_GATEWAY in restutil.THROTTLE_CODES)

        _http_request.side_effect = [
            Mock(status=httpclient.BAD_GATEWAY)
            for i in range(restutil.DEFAULT_RETRIES)
        ] + [Mock(status=httpclient.OK)]

        restutil.http_get("https://foo.bar", max_retry=restutil.DEFAULT_RETRIES + 1)

        self.assertEqual(restutil.DEFAULT_RETRIES + 1, _http_request.call_count)
        self.assertEqual(restutil.DEFAULT_RETRIES, _sleep.call_count)
        self.assertEqual(
            [call(restutil._compute_delay(i + 1, restutil.DELAY_IN_SECONDS))
             for i in range(restutil.DEFAULT_RETRIES)],
            _sleep.call_args_list)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_with_constant_delay_when_throttled(self, _http_request, _sleep):
        # Ensure the code is a throttle code
        self.assertTrue(httpclient.SERVICE_UNAVAILABLE in restutil.THROTTLE_CODES)

        _http_request.side_effect = [
            Mock(status=httpclient.SERVICE_UNAVAILABLE)
            for i in range(restutil.DEFAULT_RETRIES)  # pylint: disable=unused-variable
        ] + [Mock(status=httpclient.OK)]

        restutil.http_get("https://foo.bar", max_retry=restutil.DEFAULT_RETRIES + 1)

        self.assertEqual(restutil.DEFAULT_RETRIES + 1, _http_request.call_count)
        self.assertEqual(restutil.DEFAULT_RETRIES, _sleep.call_count)
        self.assertEqual(
            [call(1) for i in range(restutil.DEFAULT_RETRIES)],
            _sleep.call_args_list)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_for_safe_minimum_number_when_throttled(self, _http_request, _sleep):
        # Ensure the code is a throttle code
        self.assertTrue(httpclient.SERVICE_UNAVAILABLE in restutil.THROTTLE_CODES)

        _http_request.side_effect = [
            Mock(status=httpclient.SERVICE_UNAVAILABLE)
            for i in range(restutil.THROTTLE_RETRIES - 1)  # pylint: disable=unused-variable
        ] + [Mock(status=httpclient.OK)]

        restutil.http_get("https://foo.bar", max_retry=1)

        self.assertEqual(restutil.THROTTLE_RETRIES, _http_request.call_count)
        self.assertEqual(restutil.THROTTLE_RETRIES - 1, _sleep.call_count)
        self.assertEqual(
            [call(1) for i in range(restutil.THROTTLE_RETRIES - 1)],
            _sleep.call_args_list)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_raises_for_resource_gone(self, _http_request, _sleep):
        _http_request.side_effect = [
            Mock(status=httpclient.GONE)
        ]

        self.assertRaises(ResourceGoneError, restutil.http_get, "https://foo.bar")
        self.assertEqual(1, _http_request.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_raises_for_invalid_container_configuration(self, _http_request, _sleep):
        def read():
            return b'{ "errorCode": "InvalidContainerConfiguration", "message": "Invalid request." }'

        _http_request.side_effect = [
            Mock(status=httpclient.BAD_REQUEST, reason='Bad Request', read=read)
        ]

        self.assertRaises(InvalidContainerError, restutil.http_get, "https://foo.bar")
        self.assertEqual(1, _http_request.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_raises_for_invalid_role_configuration(self, _http_request, _sleep):
        def read():
            return b'{ "errorCode": "RequestRoleConfigFileNotFound", "message": "Invalid request." }'

        _http_request.side_effect = [
            Mock(status=httpclient.GONE, reason='Resource Gone', read=read)
        ]

        self.assertRaises(ResourceGoneError, restutil.http_get, "https://foo.bar")
        self.assertEqual(1, _http_request.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_exceptions(self, _http_request, _sleep):
        # Testing each exception is difficult because they have varying
        # signatures; for now, test one and ensure the set is unchanged
        recognized_exceptions = [
            httpclient.NotConnected,
            httpclient.IncompleteRead,
            httpclient.ImproperConnectionState,
            httpclient.BadStatusLine
        ]
        self.assertEqual(recognized_exceptions, restutil.RETRY_EXCEPTIONS)

        _http_request.side_effect = [
            httpclient.IncompleteRead(''),
            Mock(status=httpclient.OK)
        ]

        restutil.http_get("https://foo.bar")
        self.assertEqual(2, _http_request.call_count)
        self.assertEqual(1, _sleep.call_count)

    @patch("time.sleep")
    @patch("azurelinuxagent.common.utils.restutil._http_request")
    def test_http_request_retries_ioerrors(self, _http_request, _sleep):
        ioerror = IOError()
        ioerror.errno = 42

        _http_request.side_effect = [
            ioerror,
            Mock(status=httpclient.OK)
        ]

        restutil.http_get("https://foo.bar")
        self.assertEqual(2, _http_request.call_count)
        self.assertEqual(1, _sleep.call_count)

    def test_request_failed(self):
        self.assertTrue(restutil.request_failed(None))

        resp = Mock()
        for status in restutil.OK_CODES:
            resp.status = status
            self.assertFalse(restutil.request_failed(resp))

        self.assertFalse(httpclient.BAD_REQUEST in restutil.OK_CODES)
        resp.status = httpclient.BAD_REQUEST
        self.assertTrue(restutil.request_failed(resp))

        self.assertFalse(
            restutil.request_failed(
                resp, ok_codes=[httpclient.BAD_REQUEST]))

    def test_request_succeeded(self):
        self.assertFalse(restutil.request_succeeded(None))

        resp = Mock()
        for status in restutil.OK_CODES:
            resp.status = status
            self.assertTrue(restutil.request_succeeded(resp))

        self.assertFalse(httpclient.BAD_REQUEST in restutil.OK_CODES)
        resp.status = httpclient.BAD_REQUEST
        self.assertFalse(restutil.request_succeeded(resp))

        self.assertTrue(
            restutil.request_succeeded(
                resp, ok_codes=[httpclient.BAD_REQUEST]))

    def test_read_response_error(self):
        """
        Validate the read_response_error method handles encoding correctly
        """
        responses = ['message', b'message', '\x80message\x80']
        response = MagicMock()
        response.status = 'status'
        response.reason = 'reason'
        with patch.object(response, 'read') as patch_response:
            for s in responses:
                patch_response.return_value = s
                result = restutil.read_response_error(response)
                print("RESPONSE: {0}".format(s))
                print("RESULT: {0}".format(result))
                print("PRESENT: {0}".format('[status: reason]' in result))
                self.assertTrue('[status: reason]' in result)
                self.assertTrue('message' in result)

    def test_read_response_bytes(self):
        response_bytes = '7b:0a:20:20:20:20:22:65:72:72:6f:72:43:6f:64:65:22:' \
                         '3a:20:22:54:68:65:20:62:6c:6f:62:20:74:79:70:65:20:' \
                         '69:73:20:69:6e:76:61:6c:69:64:20:66:6f:72:20:74:68:' \
                         '69:73:20:6f:70:65:72:61:74:69:6f:6e:2e:22:2c:0a:20:' \
                         '20:20:20:22:6d:65:73:73:61:67:65:22:3a:20:22:c3:af:' \
                         'c2:bb:c2:bf:3c:3f:78:6d:6c:20:76:65:72:73:69:6f:6e:' \
                         '3d:22:31:2e:30:22:20:65:6e:63:6f:64:69:6e:67:3d:22:' \
                         '75:74:66:2d:38:22:3f:3e:3c:45:72:72:6f:72:3e:3c:43:' \
                         '6f:64:65:3e:49:6e:76:61:6c:69:64:42:6c:6f:62:54:79:' \
                         '70:65:3c:2f:43:6f:64:65:3e:3c:4d:65:73:73:61:67:65:' \
                         '3e:54:68:65:20:62:6c:6f:62:20:74:79:70:65:20:69:73:' \
                         '20:69:6e:76:61:6c:69:64:20:66:6f:72:20:74:68:69:73:' \
                         '20:6f:70:65:72:61:74:69:6f:6e:2e:0a:52:65:71:75:65:' \
                         '73:74:49:64:3a:63:37:34:32:39:30:63:62:2d:30:30:30:' \
                         '31:2d:30:30:62:35:2d:30:36:64:61:2d:64:64:36:36:36:' \
                         '61:30:30:30:22:2c:0a:20:20:20:20:22:64:65:74:61:69:' \
                         '6c:73:22:3a:20:22:22:0a:7d'.split(':')

        expected_response = '[HTTP Failed] [status: reason] {\n    "errorCode": "The blob ' \
                            'type is invalid for this operation.",\n    ' \
                            '"message": "' \
                            '<?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidBlobType</Code><Message>The ' \
                            'blob type is invalid for this operation.\n' \
                            'RequestId:c74290cb-0001-00b5-06da-dd666a000",' \
                            '\n    "details": ""\n}'

        response_string = ''.join(chr(int(b, 16)) for b in response_bytes)
        response = MagicMock()
        response.status = 'status'
        response.reason = 'reason'
        with patch.object(response, 'read') as patch_response:
            patch_response.return_value = response_string
            result = restutil.read_response_error(response)
            self.assertEqual(result, expected_response)
            try:
                raise HttpError("{0}".format(result))
            except HttpError as e:
                self.assertTrue(result in ustr(e))


if __name__ == '__main__':
    unittest.main()
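test_http_request_retries_with_fibonacci_delay above asserts that retry N sleeps for restutil._compute_delay(N, DELAY_IN_SECONDS), while throttled responses use a constant one-second delay instead. The sketch below shows a Fibonacci-style backoff consistent with that test's name; the exact scaling inside restutil._compute_delay and the 5-second base delay are assumptions, not the library's confirmed values.

def compute_delay_sketch(retry_attempt, delay=5):
    # Grow the base delay along the Fibonacci sequence with each attempt.
    fib = (1, 1)
    for _ in range(retry_attempt):
        fib = (fib[1], fib[0] + fib[1])
    return delay * fib[0]

print([compute_delay_sketch(n) for n in range(1, 6)])  # [5, 10, 15, 25, 40]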
Azure-WALinuxAgent-2b21de5/tests/common/utils/test_shell_util.py
# -*- coding: utf-8 -*-
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#

import os
import signal
import subprocess
import sys
import tempfile
import threading
import unittest

from azurelinuxagent.common.future import ustr
import azurelinuxagent.common.utils.shellutil as shellutil
from tests.lib.tools import AgentTestCase, patch, skip_if_predicate_true
from tests.lib.miscellaneous_tools import wait_for, format_processes


class ShellQuoteTestCase(AgentTestCase):
    def test_shellquote(self):
        self.assertEqual("\'foo\'", shellutil.quote("foo"))
        self.assertEqual("\'foo bar\'", shellutil.quote("foo bar"))
        self.assertEqual("'foo'\\''bar'", shellutil.quote("foo\'bar"))


class RunTestCase(AgentTestCase):
    def test_it_should_return_the_exit_code_of_the_command(self):
        exit_code = shellutil.run("exit 123")
        self.assertEqual(123, exit_code)

    def test_it_should_be_a_pass_thru_to_run_get_output(self):
        with patch.object(shellutil, "run_get_output", return_value=(0, "")) as mock_run_get_output:
            shellutil.run("echo hello word!", chk_err=False, expected_errors=[1, 2, 3])

        self.assertEqual(mock_run_get_output.call_count, 1)

        args, kwargs = mock_run_get_output.call_args
        self.assertEqual(args[0], "echo hello word!")
        self.assertEqual(kwargs["chk_err"], False)
        self.assertEqual(kwargs["expected_errors"], [1, 2, 3])


class RunGetOutputTestCase(AgentTestCase):
    def test_run_get_output(self):
        output = shellutil.run_get_output(u"ls /")
        self.assertNotEqual(None, output)
        self.assertEqual(0, output[0])

        err = shellutil.run_get_output(u"ls /not-exists")
        self.assertNotEqual(0, err[0])

        err = shellutil.run_get_output(u"ls 我")
        self.assertNotEqual(0, err[0])

    def test_it_should_log_the_command(self):
        command = "echo hello world!"

        with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger:
            shellutil.run_get_output(command)

        self.assertEqual(mock_logger.verbose.call_count, 1)

        args, kwargs = mock_logger.verbose.call_args  # pylint: disable=unused-variable
        command_in_message = args[1]
        self.assertEqual(command_in_message, command)

    def test_it_should_log_command_failures_as_errors(self):
        return_code = 99
        command = "exit {0}".format(return_code)

        with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger:
            shellutil.run_get_output(command, log_cmd=False)

        self.assertEqual(mock_logger.error.call_count, 1)

        args, _ = mock_logger.error.call_args
        message = args[0]  # message is similar to "Command: [exit 99], return code: [99], result: []"
        self.assertIn("[{0}]".format(command), message)
        self.assertIn("[{0}]".format(return_code), message)

        self.assertEqual(mock_logger.info.call_count, 0, "Did not expect any info messages. Got: {0}".format(mock_logger.info.call_args_list))
        self.assertEqual(mock_logger.warn.call_count, 0, "Did not expect any warnings. Got: {0}".format(mock_logger.warn.call_args_list))

    def test_it_should_log_expected_errors_as_info(self):
        return_code = 99
        command = "exit {0}".format(return_code)

        with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger:
            shellutil.run_get_output(command, log_cmd=False, expected_errors=[return_code])

        self.assertEqual(mock_logger.info.call_count, 1)

        args, _ = mock_logger.info.call_args
        message = args[0]  # message is similar to "Command: [exit 99], return code: [99], result: []"
        self.assertIn("[{0}]".format(command), message)
        self.assertIn("[{0}]".format(return_code), message)

        self.assertEqual(mock_logger.warn.call_count, 0, "Did not expect any warnings. Got: {0}".format(mock_logger.warn.call_args_list))
        self.assertEqual(mock_logger.error.call_count, 0, "Did not expect any errors. Got: {0}".format(mock_logger.error.call_args_list))

    def test_it_should_log_unexpected_errors_as_errors(self):
        return_code = 99
        command = "exit {0}".format(return_code)

        with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger:
            shellutil.run_get_output(command, log_cmd=False, expected_errors=[return_code + 1])

        self.assertEqual(mock_logger.error.call_count, 1)

        args, _ = mock_logger.error.call_args
        message = args[0]  # message is similar to "Command: [exit 99], return code: [99], result: []"
        self.assertIn("[{0}]".format(command), message)
        self.assertIn("[{0}]".format(return_code), message)

        self.assertEqual(mock_logger.info.call_count, 0, "Did not expect any info messages. Got: {0}".format(mock_logger.info.call_args_list))
        self.assertEqual(mock_logger.warn.call_count, 0, "Did not expect any warnings. Got: {0}".format(mock_logger.warn.call_args_list))


class RunCommandTestCase(AgentTestCase):
    """
    Tests for shellutil.run_command/run_pipe
    """
    def __create_tee_script(self, return_code=0):
        """
        Creates a Python script that tees its stdin to stdout and stderr
        """
        tee_script = os.path.join(self.tmp_dir, "tee.py")

        AgentTestCase.create_script(tee_script, """
import sys

for line in sys.stdin:
    sys.stdout.write(line)
    sys.stderr.write(line)

exit({0})
""".format(return_code))

        return tee_script

    def test_run_command_should_execute_the_command(self):
        command = ["echo", "-n", "A TEST STRING"]
        ret = shellutil.run_command(command)
        self.assertEqual(ret, "A TEST STRING")

    def test_run_command_should_use_popen_arg_list(self):
        with patch("azurelinuxagent.common.utils.shellutil.subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
            command = ["echo", "-n", "A TEST STRING"]
            ret = shellutil.run_command(command)

            self.assertEqual(ret, "A TEST STRING")
            self.assertEqual(popen_patch.call_count, 1)

            args, kwargs = popen_patch.call_args
            self.assertTrue(any(arg for arg in args[0] if "A TEST STRING" in arg), "command not being used")
            self.assertEqual(kwargs['env'].get(shellutil.PARENT_PROCESS_NAME), shellutil.AZURE_GUEST_AGENT, "Env flag not being used")

    def test_run_pipe_should_execute_a_pipe_with_two_commands(self):
        # Output the same string 3 times and then remove duplicates
        test_string = "A TEST STRING\n"
        pipe = [["echo", "-n", "-e", test_string * 3], ["uniq"]]
        output = shellutil.run_pipe(pipe)
        self.assertEqual(output, test_string)

    def test_run_pipe_should_execute_a_pipe_with_more_than_two_commands(self):
        #
        # The test pipe splits the output of "ls" in lines and then greps for "."
        #
        # Sample output of "ls -d .":
        #     drwxrwxr-x 13 nam nam 4096 Nov 13 16:54 .
        #
        pipe = [["ls", "-ld", "."], ["sed", "-r", "s/\\s+/\\n/g"], ["grep", "\\."]]
        output = shellutil.run_pipe(pipe)
        self.assertEqual(".\n", output, "The pipe did not produce the expected output. Got: {0}".format(output))

    def __it_should_raise_an_exception_when_the_command_fails(self, action):
        with self.assertRaises(shellutil.CommandError) as context_manager:
            action()
        exception = context_manager.exception
        self.assertIn("tee.py", str(exception), "The CommandError does not include the expected command")
        self.assertEqual(1, exception.returncode, "Unexpected return value from the test pipe")
        self.assertEqual("TEST_STRING\n", exception.stdout, "Unexpected stdout from the test pipe")
        self.assertEqual("TEST_STRING\n", exception.stderr, "Unexpected stderr from the test pipe")

    def test_run_command_should_raise_an_exception_when_the_command_fails(self):
        tee_script = self.__create_tee_script(return_code=1)
        self.__it_should_raise_an_exception_when_the_command_fails(
            lambda: shellutil.run_command(tee_script, input="TEST_STRING\n"))

    def test_run_pipe_should_raise_an_exception_when_the_last_command_fails(self):
        tee_script = self.__create_tee_script(return_code=1)
        self.__it_should_raise_an_exception_when_the_command_fails(
            lambda: shellutil.run_pipe([["echo", "-n", "TEST_STRING\n"], [tee_script]]))

    def __it_should_raise_an_exception_when_it_cannot_execute_the_command(self, action):
        with self.assertRaises(Exception) as context_manager:
            action()
        exception = context_manager.exception
        self.assertIn("No such file or directory", str(exception))

    def test_run_command_should_raise_an_exception_when_it_cannot_execute_the_command(self):
        self.__it_should_raise_an_exception_when_it_cannot_execute_the_command(
            lambda: shellutil.run_command("nonexistent_command"))

    @skip_if_predicate_true(lambda: sys.version_info[0] == 2, "Timeouts are not supported on Python 2")
    def test_run_command_should_raise_an_exception_when_the_command_times_out(self):
        with self.assertRaises(shellutil.CommandError) as context:
            shellutil.run_command(["sleep", "5"], timeout=1)
        self.assertIn("command timeout", context.exception.stderr, "The command did not time out")

    def test_run_pipe_should_raise_an_exception_when_it_cannot_execute_the_pipe(self):
        self.__it_should_raise_an_exception_when_it_cannot_execute_the_command(
            lambda: shellutil.run_pipe([["ls", "-ld", "."], ["nonexistent_command"], ["wc", "-l"]]))

    def __it_should_not_log_by_default(self, action):
        with patch("azurelinuxagent.common.utils.shellutil.logger", autospec=True) as mock_logger:
            try:
                action()
            except Exception:
                pass
            self.assertEqual(mock_logger.warn.call_count, 0, "Did not expect any WARNINGS; Got: {0}".format(mock_logger.warn.call_args))
            self.assertEqual(mock_logger.error.call_count, 0, "Did not expect any ERRORS; Got: {0}".format(mock_logger.error.call_args))

    def test_run_command_it_should_not_log_by_default(self):
        self.__it_should_not_log_by_default(
            lambda: shellutil.run_command(["ls", "nonexistent_file"]))  # Raises a CommandError
        self.__it_should_not_log_by_default(
            lambda: shellutil.run_command("nonexistent_command"))  # Raises an OSError

    def test_run_pipe_it_should_not_log_by_default(self):
        self.__it_should_not_log_by_default(
            lambda: shellutil.run_pipe([["date"], [self.__create_tee_script(return_code=1)]]))  # Raises a CommandError
        self.__it_should_not_log_by_default(
            lambda: shellutil.run_pipe([["date"], ["nonexistent_command"]]))  # Raises an OSError

    def __it_should_log_an_error_when_log_error_is_set(self, action, command):
        with patch("azurelinuxagent.common.utils.shellutil.logger.error") as mock_log_error:
            try:
                action()
            except Exception:
                pass
            self.assertEqual(mock_log_error.call_count, 1)
            args, _ = mock_log_error.call_args
            self.assertTrue(any(command in str(a) for a in
args), "The command was not logged") self.assertTrue(any("2" in str(a) for a in args), "The command's return code was not logged") # errno 2: No such file or directory def test_run_command_should_log_an_error_when_log_error_is_set(self): self.__it_should_log_an_error_when_log_error_is_set( lambda: shellutil.run_command(["ls", "file-does-not-exist"], log_error=True), # Raises a CommandError command="ls") self.__it_should_log_an_error_when_log_error_is_set( lambda: shellutil.run_command("command-does-not-exist", log_error=True), # Raises a CommandError command="command-does-not-exist") def test_run_command_should_raise_when_both_the_input_and_stdin_parameters_are_specified(self): with tempfile.TemporaryFile() as input_file: with self.assertRaises(ValueError): shellutil.run_command(["cat"], input='0123456789ABCDEF', stdin=input_file) def test_run_command_should_read_the_command_input_from_the_input_parameter_when_it_is_a_string(self): command_input = 'TEST STRING' output = shellutil.run_command(["cat"], input=command_input) self.assertEqual(output, command_input, "The command did not process its input correctly; the output should match the input") def test_run_command_should_read_stdin_from_the_input_parameter_when_it_is_a_sequence_of_bytes(self): command_input = 'TEST BYTES' output = shellutil.run_command(["cat"], input=command_input) self.assertEqual(output, command_input, "The command did not process its input correctly; the output should match the input") def __it_should_read_the_command_input_from_the_stdin_parameter(self, action): command_input = 'TEST STRING\n' with tempfile.TemporaryFile() as input_file: input_file.write(command_input.encode()) input_file.seek(0) output = action(stdin=input_file) self.assertEqual(output, command_input, "The command did not process its input correctly; the output should match the input") def test_run_command_should_read_the_command_input_from_the_stdin_parameter(self): self.__it_should_read_the_command_input_from_the_stdin_parameter( lambda stdin: shellutil.run_command(["cat"], stdin=stdin)) def test_run_pipe_should_read_the_command_input_from_the_stdin_parameter(self): self.__it_should_read_the_command_input_from_the_stdin_parameter( lambda stdin: shellutil.run_pipe([["cat"], ["sort"]], stdin=stdin)) def __it_should_write_the_command_output_to_the_stdout_parameter(self, action): with tempfile.TemporaryFile() as output_file: captured_output = action(stdout=output_file) output_file.seek(0) command_output = ustr(output_file.read(), encoding='utf-8', errors='backslashreplace') self.assertEqual(command_output, "TEST STRING\n", "The command did not produce the correct output; the output should match the input") self.assertEqual("", captured_output, "No output should have been captured since it was redirected to a file. 
Output: [{0}]".format(captured_output)) def test_run_command_should_write_the_command_output_to_the_stdout_parameter(self): self.__it_should_write_the_command_output_to_the_stdout_parameter( lambda stdout: shellutil.run_command(["echo", "TEST STRING"], stdout=stdout)) def test_run_pipe_should_write_the_command_output_to_the_stdout_parameter(self): self.__it_should_write_the_command_output_to_the_stdout_parameter( lambda stdout: shellutil.run_pipe([["echo", "TEST STRING"], ["sort"]], stdout=stdout)) def __it_should_write_the_command_error_output_to_the_stderr_parameter(self, action): with tempfile.TemporaryFile() as output_file: action(stderr=output_file) output_file.seek(0) command_error_output = ustr(output_file.read(), encoding='utf-8', errors="backslashreplace") self.assertEqual("TEST STRING\n", command_error_output, "stderr was not redirected to the output file correctly") def test_run_command_should_write_the_command_error_output_to_the_stderr_parameter(self): self.__it_should_write_the_command_error_output_to_the_stderr_parameter( lambda stderr: shellutil.run_command(self.__create_tee_script(), input="TEST STRING\n", stderr=stderr)) def test_run_pipe_should_write_the_command_error_output_to_the_stderr_parameter(self): self.__it_should_write_the_command_error_output_to_the_stderr_parameter( lambda stderr: shellutil.run_pipe([["echo", "TEST STRING"], [self.__create_tee_script()]], stderr=stderr)) def test_run_pipe_should_capture_the_stderr_of_all_the_commands_in_the_pipe(self): with self.assertRaises(shellutil.CommandError) as context_manager: shellutil.run_pipe([ ["echo", "TEST STRING"], [self.__create_tee_script()], [self.__create_tee_script()], [self.__create_tee_script(return_code=1)]]) self.assertEqual("TEST STRING\n" * 3, context_manager.exception.stderr, "Expected 3 copies of the test string since there are 3 commands in the pipe") def test_run_command_should_return_a_string_by_default(self): output = shellutil.run_command(self.__create_tee_script(), input="TEST STRING") self.assertTrue(isinstance(output, ustr), "The return value should be a string. Got: '{0}'".format(type(output))) def test_run_pipe_should_return_a_string_by_default(self): output = shellutil.run_pipe([["echo", "TEST STRING"], [self.__create_tee_script()]]) self.assertTrue(isinstance(output, ustr), "The return value should be a string. Got: '{0}'".format(type(output))) def test_run_command_should_return_a_bytes_object_when_encode_output_is_false(self): output = shellutil.run_command(self.__create_tee_script(), input="TEST STRING", encode_output=False) self.assertTrue(isinstance(output, bytes), "The return value should be a bytes object. Got: '{0}'".format(type(output))) def test_run_pipe_should_return_a_bytes_object_when_encode_output_is_false(self): output = shellutil.run_pipe([["echo", "TEST STRING"], [self.__create_tee_script()]], encode_output=False) self.assertTrue(isinstance(output, bytes), "The return value should be a bytes object. 
Got: '{0}'".format(type(output))) def test_run_command_run_pipe_run_get_output_should_keep_track_of_the_running_commands(self): # The children processes run this script, which creates a file with the PIDs of the script and its parent and then sleeps for a long time child_script = os.path.join(self.tmp_dir, "write_pids.py") AgentTestCase.create_script(child_script, """ import os import sys import time with open(sys.argv[1], "w") as pid_file: pid_file.write("{0} {1}".format(os.getpid(), os.getppid())) time.sleep(120) """) threads = [] try: child_processes = [] parent_processes = [] try: # each of these files will contain the PIDs of the command that created it and its parent pid_files = [os.path.join(self.tmp_dir, "pids.txt.{0}".format(i)) for i in range(4)] # we test these functions in shellutil commands_to_execute = [ # run_get_output must be the first in this list; see the code to fetch the PIDs a few lines below lambda: shellutil.run_get_output("{0} {1}".format(child_script, pid_files[0])), lambda: shellutil.run_command([child_script, pid_files[1]]), lambda: shellutil.run_pipe([[child_script, pid_files[2]], [child_script, pid_files[3]]]), ] # start each command on a separate thread (since we need to examine the processes running the commands while they are running) def invoke(command): try: command() except shellutil.CommandError as command_error: if command_error.returncode != -9: # test cleanup terminates the commands, so this is expected raise for cmd in commands_to_execute: thread = threading.Thread(target=invoke, args=(cmd,)) thread.start() threads.append(thread) # now fetch the PIDs in the files created by the commands, but wait until they are created if not wait_for(lambda: all(os.path.exists(file) and os.path.getsize(file) > 0 for file in pid_files)): raise Exception("The child processes did not start within the allowed timeout") for sig_file in pid_files: with open(sig_file, "r") as read_handle: pids = read_handle.read().split() child_processes.append(int(pids[0])) parent_processes.append(int(pids[1])) # the first item to in the PIDs we fetched corresponds to run_get_output, which invokes the command using the # shell, so in that case we need to use the parent's pid (i.e. 
the shell that we started) started_commands = parent_processes[0:1] + child_processes[1:] # wait for all the commands to start def all_commands_running(): all_commands_running.running_commands = shellutil.get_running_commands() return len(all_commands_running.running_commands) >= len(commands_to_execute) + 1 # +1 because run_pipe starts 2 commands all_commands_running.running_commands = [] if not wait_for(all_commands_running): self.fail("shellutil.get_running_commands() did not report the expected number of commands after the allowed timeout.\nExpected: {0}\nGot: {1}".format( format_processes(started_commands), format_processes(all_commands_running.running_commands))) started_commands.sort() all_commands_running.running_commands.sort() self.assertEqual( started_commands, all_commands_running.running_commands, "shellutil.get_running_commands() did not return the expected commands.\nExpected: {0}\nGot: {1}".format( format_processes(started_commands), format_processes(all_commands_running.running_commands))) finally: # terminate the child processes, since they are blocked for pid in child_processes: os.kill(pid, signal.SIGKILL) # once the processes complete, their PIDs should go away def no_commands_running(): no_commands_running.running_commands = shellutil.get_running_commands() return len(no_commands_running.running_commands) == 0 no_commands_running.running_commands = [] if not wait_for(no_commands_running): self.fail("shellutil.get_running_commands() should return empty after the commands complete. Got: {0}".format( format_processes(no_commands_running.running_commands))) finally: for thread in threads: thread.join(timeout=5) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/common/utils/test_text_util.py000066400000000000000000000214101462617747000247520ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import hashlib import os import unittest from distutils.version import LooseVersion as Version # pylint: disable=no-name-in-module,import-error import azurelinuxagent.common.utils.textutil as textutil from azurelinuxagent.common.future import ustr from tests.lib.tools import AgentTestCase class TestTextUtil(AgentTestCase): def test_get_password_hash(self): with open(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'test_passwords.txt'), 'rb') as in_file: for data in in_file: # Remove bom on bytes data before it is converted into string. 
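                # (the UTF-8 byte-order mark is the three-byte sequence EF BB BF)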
data = textutil.remove_bom(data)
                data = ustr(data, encoding='utf-8')
                password_hash = textutil.gen_password_hash(data, 6, 10)
                self.assertNotEqual(None, password_hash)

    def test_replace_non_ascii(self):
        data = ustr(b'\xef\xbb\xbfhehe', encoding='utf-8')
        self.assertEqual('hehe', textutil.replace_non_ascii(data))

        data = "abcd\xa0e\xf0fghijk\xbblm"
        self.assertEqual("abcdefghijklm", textutil.replace_non_ascii(data))

        data = "abcd\xa0e\xf0fghijk\xbblm"
        self.assertEqual("abcdXeXfghijkXlm", textutil.replace_non_ascii(data, replace_char='X'))

        self.assertEqual('', textutil.replace_non_ascii(None))

    def test_remove_bom(self):
        # Test that the BOM is removed
        data = ustr(b'\xef\xbb\xbfhehe', encoding='utf-8')
        data = textutil.remove_bom(data)
        self.assertNotEqual(0xbb, data[0])

        # The BOM is a sequence of three bytes; if the length of the input is shorter
        # than three bytes, remove_bom should not do anything
        data = u"\xa7"
        data = textutil.remove_bom(data)
        self.assertEqual(data, data[0])

        data = u"\xa7\xef"
        data = textutil.remove_bom(data)
        self.assertEqual(u"\xa7", data[0])
        self.assertEqual(u"\xef", data[1])

        # Test that a string without a BOM is not affected
        data = u"hehe"
        data = textutil.remove_bom(data)
        self.assertEqual(u"h", data[0])

        data = u""
        data = textutil.remove_bom(data)
        self.assertEqual(u"", data)

        data = u" "
        data = textutil.remove_bom(data)
        self.assertEqual(u" ", data)

    def test_version_compare(self):
        self.assertTrue(Version("1.0") < Version("1.1"))
        self.assertTrue(Version("1.9") < Version("1.10"))
        self.assertTrue(Version("1.9.9") < Version("1.10.0"))
        self.assertTrue(Version("1.0.0.0") < Version("1.2.0.0"))

        self.assertTrue(Version("1.0") <= Version("1.1"))
        self.assertTrue(Version("1.1") > Version("1.0"))
        self.assertTrue(Version("1.1") >= Version("1.0"))

        self.assertTrue(Version("1.0") == Version("1.0"))
        self.assertTrue(Version("1.0") >= Version("1.0"))
        self.assertTrue(Version("1.0") <= Version("1.0"))

        self.assertTrue(Version("1.9") < "1.10")
        self.assertTrue("1.9" < Version("1.10"))

    def test_get_bytes_from_pem(self):
        content = ("-----BEGIN CERTIFICATE-----\n"
                   "certificate\n"
                   "-----END CERTIFICATE----\n")
        base64_bytes = textutil.get_bytes_from_pem(content)
        self.assertEqual("certificate", base64_bytes)

        content = ("-----BEGIN PRIVATE KEY-----\n"
                   "private key\n"
                   "-----END PRIVATE Key-----\n")
        base64_bytes = textutil.get_bytes_from_pem(content)
        self.assertEqual("private key", base64_bytes)

    def test_swap_hexstring(self):
        data = [
            ['12', 1, '21'],
            ['12', 2, '12'],
            ['12', 3, '012'],
            ['12', 4, '0012'],

            ['123', 1, '321'],
            ['123', 2, '2301'],
            ['123', 3, '123'],
            ['123', 4, '0123'],

            ['1234', 1, '4321'],
            ['1234', 2, '3412'],
            ['1234', 3, '234001'],
            ['1234', 4, '1234'],

            ['abcdef12', 1, '21fedcba'],
            ['abcdef12', 2, '12efcdab'],
            ['abcdef12', 3, 'f12cde0ab'],
            ['abcdef12', 4, 'ef12abcd'],

            ['aBcdEf12', 1, '21fEdcBa'],
            ['aBcdEf12', 2, '12EfcdaB'],
            ['aBcdEf12', 3, 'f12cdE0aB'],
            ['aBcdEf12', 4, 'Ef12aBcd']
        ]

        for t in data:
            self.assertEqual(t[2], textutil.swap_hexstring(t[0], width=t[1]))

    def test_compress(self):
        result = textutil.compress('[stdout]\nHello World\n\n[stderr]\n\n')
        self.assertEqual('eJyLLi5JyS8tieXySM3JyVcIzy/KSeHiigaKphYVxXJxAQDAYQr2', result)

    def test_hash_empty_list(self):
        result = textutil.hash_strings([])
        self.assertEqual(b'\xda9\xa3\xee^kK\r2U\xbf\xef\x95`\x18\x90\xaf\xd8\x07\t', result)

    def test_hash_list(self):
        test_list = ["abc", "123"]
        result_from_list = textutil.hash_strings(test_list)

        test_string = "".join(test_list)
        hash_from_string = hashlib.sha1()
        hash_from_string.update(test_string.encode())
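        # hashing the strings one at a time through hash_strings should match
        # hashing their direct concatenation, as asserted below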
self.assertEqual(result_from_list, hash_from_string.digest()) self.assertEqual(hash_from_string.hexdigest(), '6367c48dd193d56ea7b0baad25b19455e529f5ee') def test_empty_strings(self): self.assertTrue(textutil.is_str_none_or_whitespace(None)) self.assertTrue(textutil.is_str_none_or_whitespace(' ')) self.assertTrue(textutil.is_str_none_or_whitespace('\t')) self.assertTrue(textutil.is_str_none_or_whitespace('\n')) self.assertTrue(textutil.is_str_none_or_whitespace(' \t')) self.assertTrue(textutil.is_str_none_or_whitespace(' \r\n')) self.assertTrue(textutil.is_str_empty(None)) self.assertTrue(textutil.is_str_empty(' ')) self.assertTrue(textutil.is_str_empty('\t')) self.assertTrue(textutil.is_str_empty('\n')) self.assertTrue(textutil.is_str_empty(' \t')) self.assertTrue(textutil.is_str_empty(' \r\n')) self.assertFalse(textutil.is_str_none_or_whitespace(u' \x01 ')) self.assertFalse(textutil.is_str_none_or_whitespace(u'foo')) self.assertFalse(textutil.is_str_none_or_whitespace('bar')) self.assertFalse(textutil.is_str_empty(u' \x01 ')) self.assertFalse(textutil.is_str_empty(u'foo')) self.assertFalse(textutil.is_str_empty('bar')) hex_null_1 = u'\x00' hex_null_2 = u' \x00 ' self.assertFalse(textutil.is_str_none_or_whitespace(hex_null_1)) self.assertFalse(textutil.is_str_none_or_whitespace(hex_null_2)) self.assertTrue(textutil.is_str_empty(hex_null_1)) self.assertTrue(textutil.is_str_empty(hex_null_2)) self.assertNotEqual(textutil.is_str_none_or_whitespace(hex_null_1), textutil.is_str_empty(hex_null_1)) self.assertNotEqual(textutil.is_str_none_or_whitespace(hex_null_2), textutil.is_str_empty(hex_null_2)) def test_format_memory_value(self): """ Test formatting of memory amounts into human-readable units """ self.assertEqual(2048, textutil.format_memory_value('kilobytes', 2)) self.assertEqual(0, textutil.format_memory_value('kilobytes', 0)) self.assertEqual(2048000, textutil.format_memory_value('kilobytes', 2000)) self.assertEqual(2048 * 1024, textutil.format_memory_value('megabytes', 2)) self.assertEqual((1024 + 512) * 1024 * 1024, textutil.format_memory_value('gigabytes', 1.5)) self.assertRaises(ValueError, textutil.format_memory_value, 'KiloBytes', 1) self.assertRaises(TypeError, textutil.format_memory_value, 'bytes', None) def test_format_exception(self): """ Test formatting of exception into human-readable format """ def raise_exception(count=3): if count <= 1: raise Exception("Test Exception") raise_exception(count - 1) msg = "" try: raise_exception() except Exception as e: msg = textutil.format_exception(e) self.assertIn("Test Exception", msg) # Raise exception at count 1 after two nested calls since count starts at 3 self.assertEqual(2, msg.count("raise_exception(count - 1)")) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/daemon/000077500000000000000000000000001462617747000201355ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/daemon/__init__.py000066400000000000000000000011651462617747000222510ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/daemon/test_daemon.py000066400000000000000000000134341462617747000230160ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import unittest from multiprocessing import Process import azurelinuxagent.common.conf as conf from azurelinuxagent.daemon.main import OPENSSL_FIPS_ENVIRONMENT, get_daemon_handler from azurelinuxagent.pa.provision.default import ProvisionHandler from tests.lib.tools import AgentTestCase, Mock, patch class MockDaemonCall(object): def __init__(self, daemon_handler, count): self.daemon_handler = daemon_handler self.count = count def __call__(self, *args, **kw): self.count = self.count - 1 # Stop daemon after restarting for n times if self.count <= 0: self.daemon_handler.running = False raise Exception("Mock unhandled exception") class TestDaemon(AgentTestCase): @patch("time.sleep") def test_daemon_restart(self, mock_sleep): # Mock daemon function daemon_handler = get_daemon_handler() mock_daemon = Mock(side_effect=MockDaemonCall(daemon_handler, 2)) daemon_handler.daemon = mock_daemon daemon_handler.check_pid = Mock() daemon_handler.run() mock_sleep.assert_any_call(15) self.assertEqual(2, daemon_handler.daemon.call_count) @patch("time.sleep") @patch("azurelinuxagent.daemon.main.conf") @patch("azurelinuxagent.daemon.main.sys.exit") def test_check_pid(self, mock_exit, mock_conf, _): daemon_handler = get_daemon_handler() mock_pid_file = os.path.join(self.tmp_dir, "pid") mock_conf.get_agent_pid_file_path = Mock(return_value=mock_pid_file) daemon_handler.check_pid() self.assertTrue(os.path.isfile(mock_pid_file)) daemon_handler.check_pid() mock_exit.assert_any_call(0) @patch("azurelinuxagent.daemon.main.DaemonHandler.check_pid") @patch("azurelinuxagent.common.conf.get_fips_enabled", return_value=True) def test_set_openssl_fips(self, _, __): daemon_handler = get_daemon_handler() daemon_handler.running = False with patch.dict("os.environ"): daemon_handler.run() self.assertTrue(OPENSSL_FIPS_ENVIRONMENT in os.environ) self.assertEqual('1', os.environ[OPENSSL_FIPS_ENVIRONMENT]) @patch("azurelinuxagent.daemon.main.DaemonHandler.check_pid") @patch("azurelinuxagent.common.conf.get_fips_enabled", return_value=False) def test_does_not_set_openssl_fips(self, _, __): daemon_handler = get_daemon_handler() daemon_handler.running = False with patch.dict("os.environ"): daemon_handler.run() self.assertFalse(OPENSSL_FIPS_ENVIRONMENT in os.environ) @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent') @patch('azurelinuxagent.ga.update.UpdateHandler.run_latest') @patch('azurelinuxagent.pa.provision.default.ProvisionHandler.run') def test_daemon_agent_enabled(self, patch_run_provision, patch_run_latest, gpa): # pylint: disable=unused-argument """ Agent should run normally when no 
disable_agent is found """ with patch('azurelinuxagent.pa.provision.get_provision_handler', return_value=ProvisionHandler()): # DaemonHandler._initialize_telemetry requires communication with WireServer and IMDS; since we # are not using telemetry in this test we mock it out with patch('azurelinuxagent.daemon.main.DaemonHandler._initialize_telemetry'): self.assertFalse(os.path.exists(conf.get_disable_agent_file_path())) daemon_handler = get_daemon_handler() def stop_daemon(child_args): # pylint: disable=unused-argument daemon_handler.running = False patch_run_latest.side_effect = stop_daemon daemon_handler.run() self.assertEqual(1, patch_run_provision.call_count) self.assertEqual(1, patch_run_latest.call_count) @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent') @patch('azurelinuxagent.ga.update.UpdateHandler.run_latest', side_effect=AgentTestCase.fail) @patch('azurelinuxagent.pa.provision.default.ProvisionHandler.run', side_effect=ProvisionHandler.write_agent_disabled) def test_daemon_agent_disabled(self, _, patch_run_latest, gpa): # pylint: disable=unused-argument """ Agent should provision, then sleep forever when disable_agent is found """ with patch('azurelinuxagent.pa.provision.get_provision_handler', return_value=ProvisionHandler()): # file is created by provisioning handler self.assertFalse(os.path.exists(conf.get_disable_agent_file_path())) daemon_handler = get_daemon_handler() # we need to assert this thread will sleep forever, so fork it daemon = Process(target=daemon_handler.run) daemon.start() daemon.join(timeout=5) self.assertTrue(daemon.is_alive()) daemon.terminate() # disable_agent was written, run_latest was not called self.assertTrue(os.path.exists(conf.get_disable_agent_file_path())) self.assertEqual(0, patch_run_latest.call_count) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/daemon/test_resourcedisk.py000066400000000000000000000174501462617747000242570ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
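#
# The mount-string tests below expect commands of the form shown here -- a sketch
# inferred from the expected values in the tests themselves, not an authoritative
# spec of ResourceDiskHandler.get_mount_string:
#
#     mount -t <fs> [-o <options>] <partition> <mountpoint>
#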
#
# Requires Python 2.6+ and Openssl 1.0+
#
import os
import stat
import sys
import unittest

from tests.lib.tools import AgentTestCase, patch, DEFAULT
from azurelinuxagent.daemon.resourcedisk import get_resourcedisk_handler
from azurelinuxagent.daemon.resourcedisk.default import ResourceDiskHandler
from azurelinuxagent.common.utils import shellutil


class TestResourceDisk(AgentTestCase):
    def test_mount_flags_empty(self):
        partition = '/dev/sdb1'
        mountpoint = '/mnt/resource'
        options = None
        expected = 'mount -t ext3 /dev/sdb1 /mnt/resource'
        rdh = ResourceDiskHandler()
        mount_string = rdh.get_mount_string(options, partition, mountpoint)
        self.assertEqual(expected, mount_string)

    def test_mount_flags_many(self):
        partition = '/dev/sdb1'
        mountpoint = '/mnt/resource'
        options = 'noexec,noguid,nodev'
        expected = 'mount -t ext3 -o noexec,noguid,nodev /dev/sdb1 /mnt/resource'
        rdh = ResourceDiskHandler()
        mount_string = rdh.get_mount_string(options, partition, mountpoint)
        self.assertEqual(expected, mount_string)

    @patch('azurelinuxagent.common.utils.shellutil.run_get_output')
    @patch('azurelinuxagent.common.utils.shellutil.run')
    @patch('azurelinuxagent.daemon.resourcedisk.default.ResourceDiskHandler.mkfile')
    @patch('azurelinuxagent.daemon.resourcedisk.default.os.path.isfile', return_value=False)
    @patch(
        'azurelinuxagent.daemon.resourcedisk.default.ResourceDiskHandler.check_existing_swap_file',
        return_value=False)
    def test_create_swap_space(
            self,
            mock_check_existing_swap_file,  # pylint: disable=unused-argument
            mock_isfile,  # pylint: disable=unused-argument
            mock_mkfile,  # pylint: disable=unused-argument
            mock_run,
            mock_run_get_output):
        mount_point = '/mnt/resource'
        size_mb = 128
        rdh = ResourceDiskHandler()

        def rgo_side_effect(*args, **kwargs):  # pylint: disable=unused-argument
            if args[0] == 'swapon -s':
                return (0, 'Filename\t\t\t\tType\t\tSize\tUsed\tPriority\n/mnt/resource/swapfile \tfile \t131068\t0\t-2\n')
            return DEFAULT

        def run_side_effect(*args, **kwargs):  # pylint: disable=unused-argument
            # We have to change the default mock behavior to return a falsey value
            # (instead of the default truthy of the mock), because we are really
            # testing that the exit code of the swapon command is 0. 
if 'swapon' in args[0]: return 0 return None mock_run_get_output.side_effect = rgo_side_effect mock_run.side_effect = run_side_effect rdh.create_swap_space( mount_point=mount_point, size_mb=size_mb ) def test_mkfile(self): # setup test_file = os.path.join(self.tmp_dir, 'test_file') file_size = 1024 * 128 if os.path.exists(test_file): os.remove(test_file) # execute get_resourcedisk_handler().mkfile(test_file, file_size) # assert assert os.path.exists(test_file) # only the owner should have access mode = os.stat(test_file).st_mode & ( stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO) assert mode == stat.S_IRUSR | stat.S_IWUSR # cleanup os.remove(test_file) def test_mkfile_dd_fallback(self): with patch.object(shellutil, "run") as run_patch: # setup run_patch.return_value = 1 test_file = os.path.join(self.tmp_dir, 'test_file') file_size = 1024 * 128 # execute if sys.version_info >= (3, 3): with patch("os.posix_fallocate", side_effect=Exception('failure')): get_resourcedisk_handler().mkfile(test_file, file_size) else: get_resourcedisk_handler().mkfile(test_file, file_size) # assert assert run_patch.call_count > 1 assert "fallocate" in run_patch.call_args_list[0][0][0] assert "dd if" in run_patch.call_args_list[-1][0][0] def test_mkfile_xfs_fs(self): # setup test_file = os.path.join(self.tmp_dir, 'test_file') file_size = 1024 * 128 if os.path.exists(test_file): os.remove(test_file) # execute resource_disk_handler = get_resourcedisk_handler() resource_disk_handler.fs = 'xfs' with patch.object(shellutil, "run") as run_patch: resource_disk_handler.mkfile(test_file, file_size) # assert if sys.version_info >= (3, 3): with patch("os.posix_fallocate") as posix_fallocate: self.assertEqual(0, posix_fallocate.call_count) assert run_patch.call_count == 1 assert "dd if" in run_patch.call_args_list[0][0][0] def test_change_partition_type(self): resource_handler = get_resourcedisk_handler() # test when sfdisk --part-type does not exist with patch.object(shellutil, "run_get_output", side_effect=[[1, ''], [0, '']]) as run_patch: resource_handler.change_partition_type( suppress_message=True, option_str='') # assert assert run_patch.call_count == 2 assert "sfdisk --part-type" in run_patch.call_args_list[0][0][0] assert "sfdisk -c" in run_patch.call_args_list[1][0][0] # test when sfdisk --part-type exists with patch.object(shellutil, "run_get_output", side_effect=[[0, '']]) as run_patch: resource_handler.change_partition_type( suppress_message=True, option_str='') # assert assert run_patch.call_count == 1 assert "sfdisk --part-type" in run_patch.call_args_list[0][0][0] def test_check_existing_swap_file(self): test_file = os.path.join(self.tmp_dir, 'test_swap_file') file_size = 1024 * 128 if os.path.exists(test_file): os.remove(test_file) with open(test_file, "wb") as file: # pylint: disable=redefined-builtin file.write(bytearray(file_size)) os.chmod(test_file, stat.S_ISUID | stat.S_ISGID | stat.S_IRUSR | stat.S_IWUSR | stat.S_IRWXG | stat.S_IRWXO) # 0o6677 def swap_on(_): # mimic the output of "swapon -s" return [ "Filename Type Size Used Priority", "{0} partition 16498684 0 -2".format(test_file) ] with patch.object(shellutil, "run_get_output", side_effect=swap_on): get_resourcedisk_handler().check_existing_swap_file( test_file, test_file, file_size) # it should remove access from group, others mode = os.stat(test_file).st_mode & (stat.S_ISUID | stat.S_ISGID | stat.S_IRWXU | stat.S_IWUSR | stat.S_IRWXG | stat.S_IRWXO) # 0o6777 assert mode == stat.S_ISUID | stat.S_ISGID | stat.S_IRUSR | stat.S_IWUSR # 0o6600 
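        # i.e. the setuid/setgid bits and owner read/write survive, while all
        # group/other access has been stripped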
os.remove(test_file) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/daemon/test_scvmm.py000066400000000000000000000066171462617747000227050ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # # Implements parts of RFC 2131, 1541, 1497 and # http://msdn.microsoft.com/en-us/library/cc227282%28PROT.10%29.aspx # http://msdn.microsoft.com/en-us/library/cc227259%28PROT.13%29.aspx import os import unittest import mock import azurelinuxagent.daemon.scvmm as scvmm from azurelinuxagent.common import conf from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.utils import fileutil from tests.lib.tools import AgentTestCase, Mock, patch class TestSCVMM(AgentTestCase): def test_scvmm_detection_with_file(self): # setup conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir) conf.get_detect_scvmm_env = Mock(return_value=True) scvmm_file = os.path.join(self.tmp_dir, scvmm.VMM_CONF_FILE_NAME) fileutil.write_file(scvmm_file, "") with patch.object(scvmm.ScvmmHandler, 'start_scvmm_agent') as po: with patch('os.listdir', return_value=["sr0", "sr1", "sr2"]): with patch('time.sleep', return_value=0): # execute failed = False try: scvmm.get_scvmm_handler().run() except: # pylint: disable=bare-except failed = True # assert self.assertTrue(failed) self.assertTrue(po.call_count == 1) # cleanup os.remove(scvmm_file) def test_scvmm_detection_with_multiple_cdroms(self): # setup conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir) conf.get_detect_scvmm_env = Mock(return_value=True) # execute with mock.patch.object(DefaultOSUtil, 'mount_dvd') as patch_mount: with patch('os.listdir', return_value=["sr0", "sr1", "sr2"]): scvmm.ScvmmHandler().detect_scvmm_env() # assert assert patch_mount.call_count == 3 assert patch_mount.call_args_list[0][1]['dvd_device'] == '/dev/sr0' assert patch_mount.call_args_list[1][1]['dvd_device'] == '/dev/sr1' assert patch_mount.call_args_list[2][1]['dvd_device'] == '/dev/sr2' def test_scvmm_detection_without_file(self): # setup conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir) conf.get_detect_scvmm_env = Mock(return_value=True) scvmm_file = os.path.join(self.tmp_dir, scvmm.VMM_CONF_FILE_NAME) if os.path.exists(scvmm_file): os.remove(scvmm_file) with mock.patch.object(scvmm.ScvmmHandler, 'start_scvmm_agent') as patch_start: # execute scvmm.ScvmmHandler().detect_scvmm_env() # assert patch_start.assert_not_called() if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/data/000077500000000000000000000000001462617747000176035ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/2000066400000000000000000000006361462617747000176740ustar00rootroot00000000000000# This is private data. Do not parse. 
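# (note: OPTION_245 below is the Azure wireserver address encoded as hex --
# 0xa8.0x3f.0x81.0x10 == 168.63.129.16, matching SERVER_ADDRESS)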
ADDRESS=10.0.0.69 NETMASK=255.255.255.0 ROUTER=10.0.0.1 SERVER_ADDRESS=168.63.129.16 NEXT_SERVER=168.63.129.16 T1=4294967295 T2=4294967295 LIFETIME=4294967295 DNS=168.63.129.16 DOMAINNAME=2rdlxelcdvjkok2emfc.bx.internal.cloudapp.net ROUTES=0.0.0.0/0,10.0.0.1 168.63.129.16/32,10.0.0.1 169.254.169.254/32,10.0.0.1 CLIENTID=ff0406a3a3000201120dc9092eccd2344 OPTION_245=a83f8110 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/000077500000000000000000000000001462617747000212655ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpu.stat000066400000000000000000000000561462617747000227520ustar00rootroot00000000000000nr_periods 1 nr_throttled 1 throttled_time 50 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpu.stat_t0000066400000000000000000000000561462617747000233550ustar00rootroot00000000000000nr_periods 1 nr_throttled 1 throttled_time 50 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpu.stat_t1000066400000000000000000000001011462617747000233450ustar00rootroot00000000000000nr_periods 66927 nr_throttled 25803 throttled_time 2075541442327 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpu_mount/000077500000000000000000000000001462617747000232765ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpu_mount/cpuacct.stat000066400000000000000000000000311462617747000256070ustar00rootroot00000000000000user 50000 system 100000 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpuacct.stat000066400000000000000000000000301462617747000235750ustar00rootroot00000000000000user 42380 system 21383 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpuacct.stat_t0000066400000000000000000000000301462617747000242000ustar00rootroot00000000000000user 42380 system 21383 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpuacct.stat_t1000066400000000000000000000000301462617747000242010ustar00rootroot00000000000000user 42390 system 21390 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/cpuacct.stat_t2000066400000000000000000000000301462617747000242020ustar00rootroot00000000000000user 42417 system 21428 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/dummy_proc_cmdline000066400000000000000000000000751462617747000250630ustar00rootroot00000000000000python-ubin/WALinuxAgent-2.2.45-py2.7.egg-run-exthandlersAzure-WALinuxAgent-2b21de5/tests/data/cgroups/dummy_proc_comm000066400000000000000000000000061462617747000243750ustar00rootroot00000000000000pythonAzure-WALinuxAgent-2b21de5/tests/data/cgroups/dummy_proc_statm000066400000000000000000000000361462617747000245750ustar00rootroot00000000000000980608 81022 30304 4 0 93606 0Azure-WALinuxAgent-2b21de5/tests/data/cgroups/memory_mount/000077500000000000000000000000001462617747000240175ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/cgroups/memory_mount/memory.max_usage_in_bytes000066400000000000000000000000071462617747000311130ustar00rootroot000000000000001000000Azure-WALinuxAgent-2b21de5/tests/data/cgroups/memory_mount/memory.stat000066400000000000000000000012771462617747000262330ustar00rootroot00000000000000cache 50000 rss 100000 rss_huge 4194304 shmem 8192 mapped_file 540672 dirty 0 writeback 0 swap 20000 pgpgin 42584 pgpgout 24188 pgfault 71983 pgmajfault 402 inactive_anon 32854016 active_anon 12288 inactive_file 47472640 active_file 1290240 unevictable 0 hierarchical_memory_limit 9223372036854771712 hierarchical_memsw_limit 9223372036854771712 total_cache 48771072 total_rss 32845824 total_rss_huge 4194304 total_shmem 8192 total_mapped_file 540672 total_dirty 0 total_writeback 0 total_swap 0 total_pgpgin 42584 total_pgpgout 24188 
total_pgfault 71983 total_pgmajfault 402 total_inactive_anon 32854016 total_active_anon 12288 total_inactive_file 47472640 total_active_file 1290240 total_unevictable 0Azure-WALinuxAgent-2b21de5/tests/data/cgroups/missing_memory_counters/000077500000000000000000000000001462617747000262505ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/cgroups/missing_memory_counters/memory.stat000066400000000000000000000012511462617747000304540ustar00rootroot00000000000000cache 50000 rss_huge 4194304 shmem 8192 mapped_file 540672 dirty 0 writeback 0 pgpgin 42584 pgpgout 24188 pgfault 71983 pgmajfault 402 inactive_anon 32854016 active_anon 12288 inactive_file 47472640 active_file 1290240 unevictable 0 hierarchical_memory_limit 9223372036854771712 hierarchical_memsw_limit 9223372036854771712 total_cache 48771072 total_rss 32845824 total_rss_huge 4194304 total_shmem 8192 total_mapped_file 540672 total_dirty 0 total_writeback 0 total_swap 0 total_pgpgin 42584 total_pgpgout 24188 total_pgfault 71983 total_pgmajfault 402 total_inactive_anon 32854016 total_active_anon 12288 total_inactive_file 47472640 total_active_file 1290240 total_unevictable 0Azure-WALinuxAgent-2b21de5/tests/data/cgroups/proc_pid_cgroup000066400000000000000000000014311462617747000243650ustar00rootroot0000000000000012:devices:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 11:perf_event:/ 10:rdma:/ 9:blkio:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 8:net_cls,net_prio:/ 7:freezer:/ 6:hugetlb:/ 5:memory:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 4:cpuset:/ 3:cpu,cpuacct:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 2:pids:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 1:name=systemd:/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope 0::/system.slice/Microsoft.A.Sample.Extension_1.0.1_aeac05dc-8c24-4542-95f2-a0d6be1c5ba7.scope Azure-WALinuxAgent-2b21de5/tests/data/cgroups/proc_self_cgroup000066400000000000000000000006121462617747000245420ustar00rootroot0000000000000012:blkio:/system.slice/walinuxagent.service 11:cpu,cpuacct:/system.slice/walinuxagent.service 10:devices:/system.slice/walinuxagent.service 9:pids:/system.slice/walinuxagent.service 8:memory:/system.slice/walinuxagent.service 7:freezer:/ 6:hugetlb:/ 5:perf_event:/ 4:net_cls,net_prio:/ 3:cpuset:/ 2:rdma:/ 1:name=systemd:/system.slice/walinuxagent.service 0::/system.slice/walinuxagent.service Azure-WALinuxAgent-2b21de5/tests/data/cgroups/proc_stat_t0000066400000000000000000000047371462617747000236240ustar00rootroot00000000000000cpu 1242996 18547 360544 3839094 7843 0 27848 0 0 0 cpu0 305718 4380 129988 2580650 5164 0 11050 0 0 0 cpu1 316342 4742 80358 416652 1001 0 8333 0 0 0 cpu2 311233 4916 75691 419501 720 0 4268 0 0 0 cpu3 309702 4508 74507 422289 956 0 4196 0 0 0 intr 78869641 14 5118 0 0 0 0 0 0 1 17275956 0 0 977671 0 0 0 0 0 0 0 1285764 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 24614913 290 40 7069 56 9208 8660 8574436 0 0 0 0 0 0 0 0 0 0 0 0 10238 107667 2597 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 145952189 btime 1575311513 processes 49655 procs_running 2 procs_blocked 0 softirq 36699327 11497379 8043346 4993 1111315 54 54 1755674 7556345 0 6730167 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/proc_stat_t1000066400000000000000000000047371462617747000236250ustar00rootroot00000000000000cpu 1251670 18548 368052 3877822 7926 0 28103 0 0 0 cpu0 307860 4380 135060 2583922 5171 0 11173 0 0 0 cpu1 318474 4742 81276 428425 1019 0 8384 0 0 0 cpu2 313521 4916 76464 431225 748 0 4297 0 0 0 cpu3 311814 4508 75250 434248 986 0 4247 0 0 0 intr 81910494 14 5118 0 0 0 0 0 0 1 18573886 0 0 977671 0 0 0 0 0 0 0 1291486 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 25913782 290 40 7152 56 9383 8954 8604958 0 0 0 0 0 0 0 0 0 0 0 0 10367 107667 2672 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 150920611 btime 1575311513 processes 49693 procs_running 4 procs_blocked 0 softirq 37131388 11544829 8149486 5023 1127332 54 54 1776947 7667169 0 6860494 Azure-WALinuxAgent-2b21de5/tests/data/cgroups/proc_stat_t2000066400000000000000000000047411462617747000236210ustar00rootroot00000000000000cpu 1293131 18554 388040 3961230 8246 0 28928 0 0 0 cpu0 317730 4381 146935 2591197 5184 0 11535 0 0 0 cpu1 329039 4746 84213 453402 1178 0 8587 0 0 0 cpu2 324213 4917 79154 456426 826 0 4410 0 0 0 cpu3 322147 4509 77737 460203 1057 0 4395 0 0 0 intr 89815534 14 5118 0 0 0 0 0 0 1 21544211 0 0 977671 0 0 0 0 0 0 0 1306547 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 29282036 290 40 7566 56 10241 9584 8929545 0 0 0 0 0 0 0 0 0 0 0 0 11076 107667 2867 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 162816290 btime 1575311513 processes 49900 procs_running 2 procs_blocked 0 softirq 38518020 11917848 8439326 5100 1170153 54 54 1834614 7944771 0 7206100 
Azure-WALinuxAgent-2b21de5/tests/data/cgroups/sys_fs_cgroup_unified_cgroup.controllers000066400000000000000000000000521462617747000315210ustar00rootroot00000000000000io memory pids perf_event rdma cpu freezerAzure-WALinuxAgent-2b21de5/tests/data/cloud-init/000077500000000000000000000000001462617747000216525ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/cloud-init/set-hostname000066400000000000000000000001261462617747000242030ustar00rootroot00000000000000{ "fqdn": "a-sample-set-hostname.domain.com", "hostname": "a-sample-set-hostname" } Azure-WALinuxAgent-2b21de5/tests/data/config/000077500000000000000000000000001462617747000210505ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/config/waagent_auto_update_disabled.conf000066400000000000000000000004721462617747000275710ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled # AutoUpdate.UpdateToLatestVersion=y waagent_auto_update_disabled_update_to_latest_version_disabled.conf000066400000000000000000000004701462617747000365240ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/config# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=n waagent_auto_update_disabled_update_to_latest_version_enabled.conf000066400000000000000000000004701462617747000363470ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/config# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=y Azure-WALinuxAgent-2b21de5/tests/data/config/waagent_auto_update_enabled.conf000066400000000000000000000004721462617747000274140ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=y # Enable or disable goal state processing auto-update, default is enabled # AutoUpdate.UpdateToLatestVersion=y waagent_auto_update_enabled_update_to_latest_version_disabled.conf000066400000000000000000000004701462617747000363470ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/config# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=y # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=n waagent_auto_update_enabled_update_to_latest_version_enabled.conf000066400000000000000000000004701462617747000361720ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/config# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. 
# Deprecated now but keep it for backward compatibility AutoUpdate.Enabled=y # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=y Azure-WALinuxAgent-2b21de5/tests/data/config/waagent_update_to_latest_version_disabled.conf000066400000000000000000000004721462617747000323640ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility # AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=n Azure-WALinuxAgent-2b21de5/tests/data/config/waagent_update_to_latest_version_enabled.conf000066400000000000000000000004721462617747000322070ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility # AutoUpdate.Enabled=n # Enable or disable goal state processing auto-update, default is enabled AutoUpdate.UpdateToLatestVersion=y Azure-WALinuxAgent-2b21de5/tests/data/dhcp000066400000000000000000000005101462617747000204400ustar00rootroot00000000000000ƪ] >` >* >]88RD008CFA06B61CcSc56 >* > >"test-cs12.h1.internal.cloudapp.net:;3 >Azure-WALinuxAgent-2b21de5/tests/data/dhcp.leases000066400000000000000000000035721462617747000217260ustar00rootroot00000000000000lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers invalid; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:10; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 never; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers expired; option dhcp-renewal-time 4294967295; option unknown-245 a8:3f:81:10; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 4 2015/06/16 16:58:54; rebind 4 2015/06/16 16:58:54; expire 4 2015/06/16 16:58:54; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers 168.63.129.16; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:10; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2152/07/23 23:27:10; } Azure-WALinuxAgent-2b21de5/tests/data/dhcp.leases.custom.dns000066400000000000000000000035641462617747000240230ustar00rootroot00000000000000lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option 
dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers invalid; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:01; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 never; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers expired; option dhcp-renewal-time 4294967295; option unknown-245 a8:3f:81:02; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 4 2015/06/16 16:58:54; rebind 4 2015/06/16 16:58:54; expire 4 2015/06/16 16:58:54; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers 8.8.8.8; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:10; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2152/07/23 23:27:10; } Azure-WALinuxAgent-2b21de5/tests/data/dhcp.leases.multi000066400000000000000000000037161462617747000230570ustar00rootroot00000000000000lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers first; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:01; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2152/07/23 23:27:10; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers second; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:02; option dhcp-rebinding-time 4294967295; option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2152/07/23 23:27:10; } lease { interface "eth0"; fixed-address 10.0.1.4; server-name "RDE41D2D9BB18C"; option subnet-mask 255.255.255.0; option dhcp-lease-time 4294967295; option routers 10.0.1.1; option dhcp-message-type 5; option dhcp-server-identifier 168.63.129.16; option domain-name-servers expired; option dhcp-renewal-time 4294967295; option rfc3442-classless-static-routes 0,10,0,1,1,32,168,63,129,16,10,0,1,1; option unknown-245 a8:3f:81:03; option dhcp-rebinding-time 4294967295; 
option domain-name "qylsde3bnlhu5dstzf3bav5inc.fx.internal.cloudapp.net"; renew 0 2152/07/23 23:27:10; rebind 0 2152/07/23 23:27:10; expire 0 2012/07/23 23:27:10; } Azure-WALinuxAgent-2b21de5/tests/data/events/000077500000000000000000000000001462617747000211075ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/1478123456789000.tld000066400000000000000000000006271462617747000234010ustar00rootroot00000000000000{"eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [{"name": "Name", "value": "Test Event"}, {"name": "Version", "value": "2.2.0"}, {"name": "IsInternal", "value": false}, {"name": "Operation", "value": "Some Operation"}, {"name": "OperationSuccess", "value": true}, {"name": "Message", "value": ""}, {"name": "Duration", "value": 0}, {"name": "ExtensionType", "value": ""}]}Azure-WALinuxAgent-2b21de5/tests/data/events/1478123456789001.tld000066400000000000000000000006701462617747000234000ustar00rootroot00000000000000{"eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [{"name": "Name", "value": "Linux Event"}, {"name": "Version", "value": "2.2.0"}, {"name": "IsInternal", "value": false}, {"name": "Operation", "value": "Linux Operation"}, {"name": "OperationSuccess", "value": false}, {"name": "Message", "value": "Linux Message"}, {"name": "Duration", "value": 42}, {"name": "ExtensionType", "value": "Linux Event Type"}]}Azure-WALinuxAgent-2b21de5/tests/data/events/1479766858966718.tld000066400000000000000000000007571462617747000234460ustar00rootroot00000000000000{"eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [{"name": "Name", "value": "WALinuxAgent"}, {"name": "Version", "value": "2.3.0.1"}, {"name": "IsInternal", "value": false}, {"name": "Operation", "value": "Enable"}, {"name": "OperationSuccess", "value": true}, {"name": "Message", "value": "Agent WALinuxAgent-2.3.0.1 launched with command 'python install.py' is successfully running"}, {"name": "Duration", "value": 0}, {"name": "ExtensionType", "value": ""}]}Azure-WALinuxAgent-2b21de5/tests/data/events/collect_and_send_events_unreadable_data/000077500000000000000000000000001462617747000311265ustar00rootroot00000000000000IncorrectExtension.tmp000077500000000000000000000017101462617747000354200ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/collect_and_send_events_unreadable_data { "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux" }, { "name": "Version", "value": "1.11.5" }, { "name": "Operation", "value": "Install" }, { "name": "OperationSuccess", "value": false }, { "name": "Message", "value": "HelloWorld" }, { "name": "Duration", "value": 300000 } ] }UnreadableFile.tld000077500000000000000000000017101462617747000344200ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/collect_and_send_events_unreadable_data { "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux" }, { "name": "Version", "value": "1.11.5" }, { "name": "Operation", "value": "Install" }, { "name": "OperationSuccess", "value": false }, { "name": "Message", "value": "HelloWorld" }, { "name": "Duration", "value": 300000 } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/custom_script_1.tld000066400000000000000000000012351462617747000247330ustar00rootroot00000000000000{ "eventId": 1, 
"providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/custom_script_2.tld000066400000000000000000000012351462617747000247340ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/custom_script_extra_parameters.tld000066400000000000000000000027411462617747000301440ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." }, { "name": "Duration", "value": 150000 }, { "name": "IsInternal", "value": true }, { "name": "ExtensionType", "value": "XML" }, { "name": "GAVersion", "value": "WALinuxAgent-9.9.99" }, { "name": "ContainerId", "value": "11111111-2222-3333-4444-555555555555" }, { "name": "OpcodeName", "value": "2099-12-31 11:59:59.505791" }, { "name": "EventTid", "value": 54321 }, { "name": "EventPid", "value": 98765 }, { "name": "TaskName", "value": "NOT_A_VALID_TASK" }, { "name": "KeywordName", "value": "NOT_A_VALID_KEYWORD" } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/custom_script_invalid_json.tld000066400000000000000000000012731462617747000272540ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", value: "THIS IS NOT VALID JSON - this object's name is not quoted" }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/custom_script_no_read_access.tld000066400000000000000000000012351462617747000275230ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." 
}, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/custom_script_nonascii_characters.tld000066400000000000000000000012421462617747000305730ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "Worldעיות אחרותआज" }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/custom_script_utf-16.tld000066400000000000000000000024741462617747000256230ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "Microsoft.Azure.Extensions.CustomScript" }, { "name": "Version", "value": "2.0.4" }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "A test telemetry message." }, { "name": "Duration", "value": 150000 } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/event_with_callstack.waagent.tld000066400000000000000000000042751462617747000274460ustar00rootroot00000000000000{"eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [{"name": "Name", "value": "WALinuxAgent"}, {"name": "Version", "value": "2.2.46"}, {"name": "Operation", "value": "ThisIsATestEventOperation"}, {"name": "OperationSuccess", "value": false}, {"name": "Message", "value": "An error occurred while retrieving the goal state: Traceback (most recent call last):\n File \"bin/WALinuxAgent-2.2.47/azurelinuxagent/common/protocol/wire.py\", line 715, in try_update_goal_state\n self.update_goal_state()\n File \"bin/WALinuxAgent-2.2.47/azurelinuxagent/common/protocol/wire.py\", line 708, in update_goal_state\n WireClient._UpdateType.GoalStateForced if forced else WireClient._UpdateType.GoalState)\n File \"bin/WALinuxAgent-2.2.47/azurelinuxagent/common/protocol/wire.py\", line 771, in _update_from_goal_state\n raise ProtocolError(\"Exceeded max retry updating goal state\")\nazurelinuxagent.common.exception.ProtocolError: [ProtocolError] Exceeded max retry updating goal state\n"}, {"name": "Duration", "value": 0}, {"name": "GAVersion", "value": "WALinuxAgent-2.2.46"}, {"name": "ContainerId", "value": "ebd8bf98-8a26-4607-b82f-41333b65b587"}, {"name": "OpcodeName", "value": "2020-03-19T15:30:15.326391Z"}, {"name": "EventTid", "value": 140524239595328}, {"name": "EventPid", "value": 3264}, {"name": "TaskName", "value": "ExtHandler"}, {"name": "KeywordName", "value": ""}, {"name": "ExtensionType", "value": ""}, {"name": "IsInternal", "value": false}, {"name": "OSVersion", "value": "Linux:ubuntu-18.04-bionic:5.0.0-1032-azure"}, {"name": "ExecutionMode", "value": "IAAS"}, {"name": "RAM", "value": 7976}, {"name": "Processors", "value": 2}, {"name": "VMName", "value": "_nam-u18"}, {"name": "TenantName", "value": "1feb87ac-f8b5-427e-a5b3-563b34a3931a"}, {"name": "RoleName", "value": "_nam-u18"}, {"name": "RoleInstanceName", "value": "1feb87ac-f8b5-427e-a5b3-563b34a3931a._nam-u18"}, {"name": "Location", "value": "northcentralus"}, {"name": "SubscriptionId", "value": "2588be01-bc36-4aa0-bf22-db6efc2b58ae"}, {"name": "ResourceGroupName", "value": "narrieta-rg"}, {"name": "VMId", "value": "06a6333f-410e-4a8f-a93a-c5b6075dfdad"}, {"name": "ImageOrigin", "value": 2}], "file_type": 
""}Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/000077500000000000000000000000001462617747000245075ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/different_cases/000077500000000000000000000000001462617747000276335ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/different_cases/1591918616.json000066400000000000000000000021641462617747000316270ustar00rootroot00000000000000[ { "eventlevel": "INFO", "Message": "Files downloaded. Asynchronously executing command: 'SecureCommand_11'", "Version": "1", "TASKNAME": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "tIMEsTAMP": "2019-12-12T01:21:05.1960563Z" }, { "EventLevel": "INFO", "MESSAGE": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "version": "1", "TaskName": "Downloading files", "EventPID": "3228", "EVENTTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/empty_message/000077500000000000000000000000001462617747000273515ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/empty_message/1592350454.json000066400000000000000000000011501462617747000313260ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": null, "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/extra_parameters/000077500000000000000000000000001462617747000300555ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/extra_parameters/1592273009.json000066400000000000000000000030551462617747000320400ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Files downloaded. 
Asynchronously executing command: 'SecureCommand_11'", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "SomethingNewButNotCool": "This is a random new param" }, { "EL": "INFO", "msg": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "ver": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "Time": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "Hello World", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "SomethingVeryWeird": "Weirdly weird but satisfying" } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/int_type/000077500000000000000000000000001462617747000263425ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/int_type/1519934744.json000066400000000000000000000005071462617747000303350ustar00rootroot00000000000000{ "EventLevel": "INFO", "Message": "Accept int value for eventpid and eventtid", "Version": "1", "TaskName": "Downloading files", "EventPid": 3228, "EventTid": 1, "OpErAtiOnID": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2023-03-13T01:21:05.1960563Z" }Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/large_messages/000077500000000000000000000000001462617747000274705ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/large_messages/1591921510.json000066400000000000000000002132161462617747000314510ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "[MESSAGE_START] Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. Asynchronously executing command: 'SecureCommand_11 Files downloaded. 
Asynchronously executing command: 'SecureCommand_11 [MESSAGE_END]", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/malformed_files/000077500000000000000000000000001462617747000276375ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/malformed_files/1592008079.json000066400000000000000000000003721462617747000316240ustar00rootroot00000000000000[ { "EventLevel": "" , "Message": null, "Version": "", "TaskName": null, "EventPid": null, "EventTid": null, "OperationId": null, "TimeStamp": null, "BadEvent": true } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/malformed_files/1594857360.tld000066400000000000000000000005131462617747000314420ustar00rootroot00000000000000{ "EventLevel": "INFO", "Message": "Enabling Handler", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z", "BadEvent": true }Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/malformed_files/bad_json_files/000077500000000000000000000000001462617747000326005ustar00rootroot000000000000001591816395.json000066400000000000000000000000301462617747000345040ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/malformed_files/bad_json_files{ "BadEvent": true ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/malformed_files/bad_name_file.json000066400000000000000000000022041462617747000332550ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1", "TaskName": "Extension Info", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "TimeStamp": "2019-12-12T01:11:38.2298194Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "Version": "1", "TaskName": "Extension Info", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "TimeStamp": "2019-12-12T01:11:38.2318168Z", "BadEvent": true } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/missing_parameters/000077500000000000000000000000001462617747000304035ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/missing_parameters/1592273793.json000066400000000000000000000045411462617747000324010ustar00rootroot00000000000000[ { "EventLevel": "INFO", "msg": "Random names for param keys should generate MissingKeyError, message missing", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "msg": "Random names for param keys should generate MissingKeyError, message missing", "Version": "1", "TaskName": "Downloading files", 
"EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: EventPid missing", "Version": "1", "TaskName": "Downloading files", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: EventPid missing", "Version": "1", "TaskName": "Downloading files", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: Version missing", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: Version missing", "EventPid": "3228", "TaskName": "Downloading files", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true }, { "EventLevel": "INFO", "Message": "MissingKeyError: Version missing", "EventPid": "3228", "TaskName": "Downloading files", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z", "BadEvent": true } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/mix_files/000077500000000000000000000000001462617747000264665ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/mix_files/1591835369.json000066400000000000000000000000261462617747000304600ustar00rootroot00000000000000{ "BadEvent": true }Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/mix_files/1591835848.json000066400000000000000000000073511462617747000304720ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Comamnd Executed: enable", "Version": "1", "TaskName": "Handler Command", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Enabling Handler", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Successfully enabled TLS.", "Version": "1", "TaskName": "TLS", "EventPid": "3228", "EventTid": "1", 
"OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1106498Z" }, { "EventLevel": "INFO", "Message": "Handler successfully enabled", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1731503Z" }, { "EventLevel": "INFO", "Message": "Loading configuration for sequence number 11", "Version": "1", "TaskName": "Sequence Number", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1731503Z" }, { "EventLevel": "INFO", "Message": "HandlerSettings = ProtectedSettingsCertThumbprint: C8F2B56B0E79B592334BA0AFFFE172EB3DEF6753, ProtectedSettings: {MIIB0AYJKoZIhvcNAQcDoIIBwTCCAb0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEGPedX2PQQm0SE3jOKJHGtUwDQYJKoZIhvcNAQEBBQAEggEAqCIhARxqObDt9esAHszPzPonOyNyDpS2f/bsaA6TiZXyUg08JARIAZYAvR3TMJaH+Q3IkTC2tcQHmlBFVx7v9Dxm0RtwGM3CJve8Xq/Jf2X6gsq79nKPiFrZ1BRDp/TB6lZdC6ahIiRA+DOeL9p1wap8R7j67s+oQiEVi0nI0zqSOPH+nXsKNhi2xaW466zdgXdfsy2amp9pO/p9/mg+W/qyMFKcnFI8d26bqaWxYBQVYqxWnXSUE7Ul7hHEKyXQeF2QwE7QVBEMJIgLyx6u58M2FYtoWopU37gOMk8MWfTgIumP0WZOPHS/n6AffPgaypStiu9Q3HYTYnqH28OtjTBLBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECCzHZJQzFhdSgCgV25G1ZQKeyJPTLZu+u8bIjtPQ5yOOLFQZj8XrJj4HhQNTndIDwyNz}, PublicSettings: {}", "Version": "1", "TaskName": "Handler Configuration", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:16.7047734Z" }, { "BadEvent": true } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/mix_files/1591835859.json000066400000000000000000000005061462617747000304670ustar00rootroot00000000000000 { "EventLevel": "INFO", "Message": "Downloading files specified in configuration...", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:52.9247262Z" } Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/special_chars/000077500000000000000000000000001462617747000273075ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/special_chars/1591918939.json000066400000000000000000000005201462617747000313050ustar00rootroot00000000000000{ "EventLevel": "INFO", "Message": "Non-English message - 此文字不是英文的", "Version": "1", "TaskName": "Downloading files", "EventPid": "3228", "EventTid": "1", "OpErAtiOnID": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:21:05.1960563Z" }Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/well_formed_files/000077500000000000000000000000001462617747000301705ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/well_formed_files/1591905451.json000066400000000000000000000073031462617747000321550ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", 
StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "Version": "1", "TaskName": "Extension Info", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Comamnd Executed: enable", "Version": "1", "TaskName": "Handler Command", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Enabling Handler", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "Successfully enabled TLS.", "Version": "1", "TaskName": "TLS", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1106498Z" }, { "EventLevel": "INFO", "Message": "Handler successfully enabled", "Version": "1", "TaskName": "Handler Operation", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1731503Z" }, { "EventLevel": "INFO", "Message": "Loading configuration for sequence number 11", "Version": "1", "TaskName": "Sequence Number", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:05.1731503Z" }, { "EventLevel": "INFO", "Message": "HandlerSettings = ProtectedSettingsCertThumbprint: C8F2B56B0E79B592334BA0AFFFE172EB3DEF6753, ProtectedSettings: {MIIB0AYJKoZIhvcNAQcDoIIBwTCCAb0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEGPedX2PQQm0SE3jOKJHGtUwDQYJKoZIhvcNAQEBBQAEggEAqCIhARxqObDt9esAHszPzPonOyNyDpS2f/bsaA6TiZXyUg08JARIAZYAvR3TMJaH+Q3IkTC2tcQHmlBFVx7v9Dxm0RtwGM3CJve8Xq/Jf2X6gsq79nKPiFrZ1BRDp/TB6lZdC6ahIiRA+DOeL9p1wap8R7j67s+oQiEVi0nI0zqSOPH+nXsKNhi2xaW466zdgXdfsy2amp9pO/p9/mg+W/qyMFKcnFI8d26bqaWxYBQVYqxWnXSUE7Ul7hHEKyXQeF2QwE7QVBEMJIgLyx6u58M2FYtoWopU37gOMk8MWfTgIumP0WZOPHS/n6AffPgaypStiu9Q3HYTYnqH28OtjTBLBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECCzHZJQzFhdSgCgV25G1ZQKeyJPTLZu+u8bIjtPQ5yOOLFQZj8XrJj4HhQNTndIDwyNz}, PublicSettings: {}", "Version": "1", "TaskName": "Handler Configuration", "EventPid": "3228", "EventTid": "1", "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e", "TimeStamp": "2019-12-12T01:20:16.7047734Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/well_formed_files/1592355539.json000066400000000000000000000052361462617747000321670ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "Starting IaaS ScriptHandler Extension v1", "Version": "1", "TaskName": "Extension Info", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2298194Z" }, { "EventLevel": "INFO", "Message": "HandlerEnvironment = Version: 1, HandlerEnvironment: [LogFolder: \"C:\\WindowsAzure\\Logs\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\", ConfigFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\RuntimeSettings\", StatusFolder: \"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\", HeartbeatFile: 
\"C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.3\\Status\\HeartBeat.Json\"]", "Version": "1", "TaskName": "Extension Info", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2318168Z" }, { "EventLevel": "INFO", "Message": "Comamnd Executed: enable", "Version": "1", "TaskName": "Handler Command", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2328180Z" }, { "EventLevel": "INFO", "Message": "Enabling Handler", "Version": "1", "TaskName": "Handler Operation", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2338180Z" }, { "EventLevel": "INFO", "Message": "Successfully enabled TLS.", "Version": "1", "TaskName": "TLS", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2368192Z" }, { "EventLevel": "INFO", "Message": "Handler successfully enabled", "Version": "1", "TaskName": "Handler Operation", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2448190Z" }, { "EventLevel": "INFO", "Message": "Loading configuration for sequence number 11", "Version": "1", "TaskName": "Sequence Number", "EventPid": "5676", "EventTid": "1", "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", "Timestamp": "2019-12-12T01:11:38.2498176Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/events/extension_events/well_formed_files/9999999999.json000066400000000000000000000043521462617747000322400ustar00rootroot00000000000000[ { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" }, { "EventLevel": "INFO", "Message": "This is the latest event", "Version": "1", "TaskName": "Extension Info", "EventPid": "999", "EventTid": "999", "OperationId": "", "TimeStamp": "2019-12-12T01:20:05.0950244Z" } 
]Azure-WALinuxAgent-2b21de5/tests/data/events/legacy_agent.tld000066400000000000000000000027231462617747000242420ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "WALinuxAgent" }, { "name": "Version", "value": "9.9.9" }, { "name": "IsInternal", "value": false }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "The cgroup filesystem is ready to use" }, { "name": "Duration", "value": 1234 }, { "name": "ExtensionType", "value": "ALegacyExtensionType" }, { "name": "GAVersion", "value": "WALinuxAgent-1.1.1" }, { "name": "ContainerId", "value": "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE" }, { "name": "OpcodeName", "value": "1970-01-01 12:00:00" }, { "name": "EventTid", "value": 98765 }, { "name": "EventPid", "value": 4321 }, { "name": "TaskName", "value": "ALegacyTask" }, { "name": "KeywordName", "value": "ALegacyKeywordName" } ] }Azure-WALinuxAgent-2b21de5/tests/data/events/legacy_agent_no_timestamp.tld000066400000000000000000000025611462617747000270210ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "WALinuxAgent" }, { "name": "Version", "value": "9.9.9" }, { "name": "IsInternal", "value": false }, { "name": "Operation", "value": "ThisIsATestEventOperation" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "The cgroup filesystem is ready to use" }, { "name": "Duration", "value": 1234 }, { "name": "ExtensionType", "value": "ALegacyExtensionType" }, { "name": "GAVersion", "value": "WALinuxAgent-1.1.1" }, { "name": "ContainerId", "value": "AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE" }, { "name": "EventTid", "value": 98765 }, { "name": "EventPid", "value": 4321 }, { "name": "TaskName", "value": "ALegacyTask" }, { "name": "KeywordName", "value": "ALegacyKeywordName" } ] }Azure-WALinuxAgent-2b21de5/tests/data/ext/000077500000000000000000000000001462617747000204035ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/ext/dsc_event.json000066400000000000000000000015631462617747000232550ustar00rootroot00000000000000{ "eventId":"1", "parameters":[ { "name":"Name", "value":"Microsoft.Azure.GuestConfiguration.DSCAgent" }, { "name":"Version", "value":"1.18.0" }, { "name":"IsInternal", "value":true }, { "name":"Operation", "value":"GuestConfigAgent.Scenario" }, { "name":"OperationSuccess", "value":true }, { "name":"Message", "value":"[2019-11-05 10:06:52.688] [PID 11487] [TID 11513] [Timer Manager] [INFO] [89f9cf47-c02d-4774-b21a-abdf2beb3cd9] Run pull refresh for timer 'dsc_refresh_timer'\n" }, { "name":"Duration", "value":0 }, { "name":"ExtentionType", "value":"" } ], "providerId":"69B669B9-4AF8-4C50-BDC4-6006FA76E975" }Azure-WALinuxAgent-2b21de5/tests/data/ext/event.json000066400000000000000000000013501462617747000224160ustar00rootroot00000000000000{ "eventId":1, "providerId":"69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters":[ { "name":"Name", "value":"CustomScript" }, { "name":"Version", "value":"1.4.1.0" }, { "name":"IsInternal", "value":false }, { "name":"Operation", "value":"RunScript" }, { "name":"OperationSuccess", "value":true }, { "name":"Message", "value":"(01302)Script is finished. 
---stdout--- hello ---errout--- " }, { "name":"Duration", "value":0 }, { "name":"ExtensionType", "value":"" } ] }Azure-WALinuxAgent-2b21de5/tests/data/ext/event_from_agent.json000066400000000000000000000037511462617747000246260ustar00rootroot00000000000000{ "eventId": 1, "providerId": "69B669B9-4AF8-4C50-BDC4-6006FA76E975", "parameters": [ { "name": "Name", "value": "dummy_name" }, { "name": "Version", "value": "2.2.46" }, { "name": "Operation", "value": "Unknown" }, { "name": "OperationSuccess", "value": true }, { "name": "Message", "value": "" }, { "name": "Duration", "value": 0 }, { "name": "GAVersion", "value": "WALinuxAgent-2.2.46" }, { "name": "ContainerId", "value": "UNINITIALIZED" }, { "name": "OpcodeName", "value": "2020-01-31T00:06:54.074757Z" }, { "name": "EventTid", "value": 139628215564096 }, { "name": "EventPid", "value": 10681 }, { "name": "TaskName", "value": "MainThread" }, { "name": "KeywordName", "value": "" }, { "name": "ExtensionType", "value": "" }, { "name": "IsInternal", "value": false }, { "name": "OSVersion", "value": "TEST_OSVersion" }, { "name": "ExecutionMode", "value": "TEST_ExecutionMode" }, { "name": "RAM", "value": 512 }, { "name": "Processors", "value": 2 }, { "name": "VMName", "value": "TEST_VMName" }, { "name": "TenantName", "value": "TEST_TenantName" }, { "name": "RoleName", "value": "TEST_RoleName" }, { "name": "RoleInstanceName", "value": "TEST_RoleInstanceName" }, { "name": "Location", "value": "TEST_Location" }, { "name": "SubscriptionId", "value": "TEST_SubscriptionId" }, { "name": "ResourceGroupName", "value": "TEST_ResourceGroupName" }, { "name": "VMId", "value": "TEST_VMId" }, { "name": "ImageOrigin", "value": 1 } ], "file_type": "json" }Azure-WALinuxAgent-2b21de5/tests/data/ext/event_from_extension.xml000066400000000000000000000023071462617747000253670ustar00rootroot00000000000000 Azure-WALinuxAgent-2b21de5/tests/data/ext/sample-status-invalid-format-emptykey-line7.json000066400000000000000000000100241462617747000316700ustar00rootroot00000000000000[ { "status": { "status": "success", "code": 1, "snapshotInfo": null, "" "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In lobortis elementum sapien, non commodo odio semper ac." 
}, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": null, "code": "0", "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/ext/sample-status-invalid-json-format.json000066400000000000000000000101271462617747000277620ustar00rootroot00000000000000[ { "_comment": "This is an invalid status file, it's missing a brace at line 37", "status": { "status": "success", "code": 1, 
"snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In lobortis elementum sapien, non commodo odio semper ac." }, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": null, "code": "0", "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, 
"version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" ]Azure-WALinuxAgent-2b21de5/tests/data/ext/sample-status-invalid-status-no-status-status-key.json000066400000000000000000000077441462617747000331230ustar00rootroot00000000000000[ { "status": { "code": 1, "snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In lobortis elementum sapien, non commodo odio semper ac." }, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": null, "code": "0", "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": 
\"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/ext/sample-status-very-large-multiple-substatuses.json000066400000000000000000016264531462617747000324100ustar00rootroot00000000000000[ { "status": { "status": "success", "code": 1, "snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. 
Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. 
Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. 
Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, 
\"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. 
Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." 
}, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. 
Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" }, { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. 
Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. 
Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. 
Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/ext/sample-status-very-large.json000066400000000000000000004276061462617747000261720ustar00rootroot00000000000000[ { "status": { "status": "success", "code": 1, "snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": { "lang": "eng", "message": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non lacinia urna, sit amet venenatis orci. Praesent maximus erat et augue tincidunt, quis fringilla urna mollis. Fusce id lacus velit. Praesent interdum, nulla eget sagittis lobortis, elit arcu porta nisl, eu volutpat ante est ut libero. Nulla volutpat nisl arcu, sed vehicula nulla fringilla id. Mauris sollicitudin viverra nunc, id sodales dui tempor eu. Morbi sit amet placerat felis. Pellentesque nunc leo, sollicitudin eu ex ac, varius lobortis tortor. Etiam purus ipsum, venenatis nec sagittis non, commodo non ipsum. Ut hendrerit a erat ut vehicula. Nam ullamcorper finibus metus, non iaculis metus molestie id. Vivamus blandit commodo metus. Fusce pellentesque, nunc sed lobortis laoreet, neque leo pulvinar felis, ac fermentum ligula ante a lacus. Aliquam interdum luctus elit, nec ultrices quam iaculis a. Aliquam tempor, arcu vel consequat molestie, ligula nisi gravida lacus, sed blandit magna felis nec dolor. 
Pellentesque dignissim ac diam a posuere. Etiam vulputate nisi vel dui imperdiet, et cursus dolor pellentesque." }, "code": 0, "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"Message\", \"Value\": \"Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. 
Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. 
Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. 
Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. 
Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. 
Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. 
Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. 
Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. 
Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. 
Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. 
Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. 
Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. 
Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. 
Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. 
Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. 
Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. 
Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Quisque eu nisl consectetur, accumsan lacus et, laoreet purus. Proin eget arcu ligula. Aliquam erat volutpat. Ut laoreet velit eget malesuada maximus. Duis dictum lectus ligula, nec congue ante lobortis sed. Cras sed sapien lobortis, maximus erat lacinia, lacinia mauris. Proin sodales vulputate libero. Nullam nisi nunc, malesuada eget aliquet eu, porttitor vitae neque. Nunc interdum malesuada eleifend. Nam in enim placerat, ornare nunc quis, sollicitudin magna. Cras rutrum vel ipsum dignissim commodo. Nulla facilisi. Integer quam nibh, varius in luctus eu, volutpat accumsan ligula. Donec id blandit urna, nec tempus erat. Nullam at arcu vel nunc consectetur lobortis id ultrices purus. Suspendisse ac diam auctor, luctus risus nec, molestie urna. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. 
Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor. Integer tincidunt sem nisl. Praesent dictum sit amet augue ut convallis. Maecenas lobortis lorem massa, in lobortis sapien auctor a. Duis lobortis euismod ligula non accumsan. Morbi risus turpis, interdum eu varius quis, consectetur tristique risus. Integer fermentum arcu sit amet dui faucibus pulvinar. Integer egestas quam at mi tempor, ac varius est iaculis. Morbi at nisi pretium, semper lacus vel, suscipit nisl. Mauris vitae ipsum tellus. Aenean blandit dapibus tortor, ac varius odio feugiat in. Etiam mi risus, auctor sit amet vehicula ut, maximus ut risus. Suspendisse nec interdum magna. Proin hendrerit turpis arcu, vitae dictum magna blandit egestas. Morbi vitae purus nunc. Duis vehicula, massa sed feugiat tempus, quam libero vehicula sapien, a volutpat massa nulla non est. Maecenas fermentum lectus eu lectus molestie, sed sagittis neque ornare. Proin id arcu non magna pretium semper vel sed nulla. Integer porttitor, ipsum vitae iaculis eleifend, eros ligula ornare lorem, at pulvinar urna risus vel diam. Curabitur tempus elementum tristique. Etiam non erat in nunc efficitur finibus. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Mauris id pellentesque felis. Pellentesque vel magna nec mi rhoncus laoreet a eget nibh. Proin vitae nulla non ante interdum lobortis. Nam eu ex eget neque ultrices cursus ut sed enim. Nulla facilisi. Pellentesque maximus, est ac facilisis rutrum, justo erat hendrerit est, vitae tempor tortor justo fringilla lectus. Fusce est augue, posuere porttitor sapien sed, pellentesque viverra neque. Etiam euismod suscipit egestas. Vivamus cursus dui ac massa feugiat viverra. Suspendisse luctus, ex vitae auctor volutpat, magna elit porttitor mauris, in commodo urna mi ut dolor.\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/ext/sample-status.json000066400000000000000000000100051462617747000240740ustar00rootroot00000000000000[ { "status": { "status": "success", "code": 1, "snapshotInfo": null, "name": "Microsoft.Azure.Extension.VMExtension", "commandStartTimeUTCTicks": "636953997844977993", "taskId": "e5e5602b-48a6-4c35-9f96-752043777af1", "formattedMessage": { "lang": "en-US", "message": "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In lobortis elementum sapien, non commodo odio semper ac." 
}, "uniqueMachineId": "e5e5602b-48a6-4c35-9f96-752043777af1", "vmHealthInfo": null, "storageDetails": { "totalUsedSizeInBytes": 10000000000, "partitionCount": 3, "isSizeComputationFailed": false, "isStoragespacePresent": false }, "telemetryData": null, "substatus": [ { "status": "success", "formattedMessage": null, "code": "0", "name": "[{\"status\": {\"status\": \"success\", \"code\": \"1\", \"snapshotInfo\": [{\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/p3w4cdkggwwl/abcd?snapshot=2019-06-06T23:53:14.9090608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/l0z0cjhf0fbr/abcd?snapshot=2019-06-06T23:53:14.9083776Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/lxqfz15mlw0s/abcd?snapshot=2019-06-06T23:53:14.9137572Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/m04dplcjltlt/abcd?snapshot=2019-06-06T23:53:14.9087358Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-rvr1sst1m0s0.blob.core.windows.net/nkx4dljgcppt/abcd?snapshot=2019-06-06T23:53:14.9089608Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}, {\"snapshotUri\": \"https://md-sqfzxrbqwwkg.blob.core.windows.net/sl2lt5k20wwx/abcd?snapshot=2019-06-06T23:53:14.8338051Z\", \"errorMessage\": \"\", \"isSuccessful\": \"true\"}], \"name\": \"Microsoft.Azure.RecoveryServices.VMSnapshotLinux\", \"commandStartTimeUTCTicks\": \"636953997844977993\", \"taskId\": \"767baff0-3f1e-4363-a974-5d801189250d\", \"formattedMessage\": {\"lang\": \"en-US\", \"message\": \" statusBlobUploadError=true, snapshotCreator=backupHostService, hostStatusCodeDoSnapshot=200, \"}, \"uniqueMachineId\": \"cf2545fd-fef4-250b-d167-f2edb86da1ac\", \"vmHealthInfo\": {\"vmHealthStatusCode\": 1, \"vmHealthState\": 0}, \"storageDetails\": {\"totalUsedSizeInBytes\": 10795593728, \"partitionCount\": 3, \"isSizeComputationFailed\": false, \"isStoragespacePresent\": false}, \"telemetryData\": [{\"Key\": \"kernelVersion\", \"Value\": \"4.4.0-145-generic\"}, {\"Key\": \"networkFSTypePresentInMount\", \"Value\": \"True\"}, {\"Key\": \"extensionVersion\", \"Value\": \"1.0.9150.0\"}, {\"Key\": \"guestAgentVersion\", \"Value\": \"2.2.32.2\"}, {\"Key\": \"FreezeTime\", \"Value\": \"0:00:00.258313\"}, {\"Key\": \"ramDisksSize\", \"Value\": \"626944\"}, {\"Key\": \"platformArchitecture\", \"Value\": \"64bit\"}, {\"Key\": \"pythonVersion\", \"Value\": \"2.7.12\"}, {\"Key\": \"snapshotCreator\", \"Value\": \"backupHostService\"}, {\"Key\": \"tempDisksSize\", \"Value\": \"60988\"}, {\"Key\": \"statusBlobUploadError\", \"Value\": \"true\"}, {\"Key\": \"ThawTime\", \"Value\": \"0:00:00.143658\"}, {\"Key\": \"extErrorCode\", \"Value\": \"success\"}, {\"Key\": \"osVersion\", \"Value\": \"Ubuntu-16.04\"}, {\"Key\": \"hostStatusCodeDoSnapshot\", \"Value\": \"200\"}, {\"Key\": \"snapshotTimeTaken\", \"Value\": \"0:00:00.325422\"}], \"substatus\": [], \"operation\": \"Enable\"}, \"version\": \"1.0\", \"timestampUTC\": \"\\/Date(1559865179380)\\/\"}]" } ], "operation": "Enable" }, "version": "1.0", "timestampUTC": "2019-06-06T23:52:59Z" } ]Azure-WALinuxAgent-2b21de5/tests/data/ext/sample_ext-1.3.0.zip000066400000000000000000000040371462617747000237310ustar00rootroot00000000000000PK |cSRH]exit.shUT K0`bux #!/usr/bin/env sh exit $*PKlQ.h`HandlerManifest.jsonUT F_bux }M 0}N֢D=)G2$J]}4)tNhUb=>ȏ&BII]Ux$BOW"[PK 
[remainder of the sample_ext-1.3.0.zip binary omitted; its central directory lists exit.sh, HandlerManifest.json, python.sh and sample.py — the same files extracted as plain entries below]
Azure-WALinuxAgent-2b21de5/tests/data/ext/sample_ext-1.3.0/000077500000000000000000000000001462617747000232015ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/ext/sample_ext-1.3.0/HandlerManifest.json000066400000000000000000000006231462617747000271410ustar00rootroot00000000000000[{
    "name": "ExampleHandlerLinux",
    "version": 1.0,
    "handlerManifest": {
        "installCommand": "sample.py -install",
        "uninstallCommand": "sample.py -uninstall",
        "updateCommand": "sample.py -update",
        "enableCommand": "sample.py -enable",
        "disableCommand": "sample.py -disable",
        "rebootAfterInstall": false,
        "reportHeartbeat": false
    }
}]
Azure-WALinuxAgent-2b21de5/tests/data/ext/sample_ext-1.3.0/exit.sh000077500000000000000000000000321462617747000245040ustar00rootroot00000000000000#!/usr/bin/env sh
exit $*
Azure-WALinuxAgent-2b21de5/tests/data/ext/sample_ext-1.3.0/python.sh000077500000000000000000000003551462617747000250640ustar00rootroot00000000000000#!/usr/bin/env bash
#
# Executes its arguments using the 'python' command, if it can be found, else using 'python3'.
#
python=$(command -v python 2> /dev/null)
if [ -z "$python" ]; then
    python=$(command -v python3)
fi
${python} "$@"
Azure-WALinuxAgent-2b21de5/tests/data/ext/sample_ext-1.3.0/sample.py000077500000000000000000000064451462617747000250470ustar00rootroot00000000000000#!./python.sh
import json
import os
import re
import sys


def get_seq(requested_ext_name=None):
    if 'ConfigSequenceNumber' in os.environ:
        # Always use the environment variable if available
        return int(os.environ['ConfigSequenceNumber'])

    # Otherwise derive the sequence number from the most recently modified *.settings file
    latest_seq = -1
    largest_modified_time = 0
    config_dir = os.path.join(os.getcwd(), "config")
    if os.path.isdir(config_dir):
        for item in os.listdir(config_dir):
            item_path = os.path.join(config_dir, item)
            if os.path.isfile(item_path):
                match = re.search("((?P<ext_name>\\w+)\\.)*(?P<seq_no>\\d+)\\.settings", item_path)
                if match is not None:
                    ext_name = match.group('ext_name')
                    if requested_ext_name is not None and ext_name != requested_ext_name:
                        continue
                    curr_seq_no = int(match.group("seq_no"))
                    curr_modified_time = os.path.getmtime(item_path)
                    if curr_modified_time > largest_modified_time:
                        latest_seq = curr_seq_no
                        largest_modified_time = curr_modified_time

    return latest_seq


def get_extension_state_prefix():
    requested_ext_name = None if 'ConfigExtensionName' not in os.environ else os.environ['ConfigExtensionName']

    seq = get_seq(requested_ext_name)
    if seq >= 0:
        if requested_ext_name is not None:
            seq = "{0}.{1}".format(requested_ext_name, seq)
        return seq

    return None


def read_settings_file(seq_prefix):
    settings_file = os.path.join(os.getcwd(), "config", "{0}.settings".format(seq_prefix))
    if not os.path.exists(settings_file):
        print("No settings found for {0}".format(settings_file))
        return None

    with open(settings_file, "rb") as file_:
        return json.loads(file_.read().decode("utf-8"))


def report_status(seq_prefix, status="success", message=None):
    status_path = os.path.join(os.getcwd(), "status")
    if not os.path.exists(status_path):
        os.makedirs(status_path)

    status_file = os.path.join(status_path, "{0}.status".format(seq_prefix))
    with open(status_file, "w+") as status_:
        status_to_report = {
            "status": {
                "status": status
            }
        }
        if message is not None:
            status_to_report['status']["formattedMessage"] = {
                "lang": "en-US",
                "message": message
            }
        status_.write(json.dumps([status_to_report]))


if __name__ == "__main__":
    prefix = get_extension_state_prefix()
    if prefix is None:
        print("No sequence number found!")
        sys.exit(-1)

    try:
        settings = read_settings_file(prefix)
    except Exception as error:
        msg = "Error when trying to fetch settings {0}.settings: {1}".format(prefix, error)
        print(msg)
        report_status(prefix, status="error", message=msg)
    else:
        status_msg = None
        if settings is not None:
            print(settings)
            try:
                status_msg = settings['runtimeSettings'][0]['handlerSettings']['publicSettings']['message']
            except Exception:
                # Settings might not contain the message. Ignore error if not found
                pass
        report_status(prefix, message=status_msg)
Azure-WALinuxAgent-2b21de5/tests/data/ga/000077500000000000000000000000001462617747000201725ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/ga/WALinuxAgent-0.0.0.0.zip000066400000000000000000023355171462617747000240300ustar00rootroot00000000000000[binary zip data omitted: WALinuxAgent-0.0.0.0.zip, which packages bin/WALinuxAgent-9.9.9.9-py2.7.egg]
V|Íl"1bi4X9iKZ'Pif.l-:~8uwKn{AS+oYrMr]+%X5?Rfl_qKTuq0M4\Vv3{7Yunɠ9_T?Fgn? 4FZqmRYXJz["x̒n>-A F}@BZfk8Y2:ߗM6'r['J`rZޫ(-ݡtrIf1?+TQ<<8_dP9q,pזtsNag8A El(KU`n.K1d"p GR0 T%"5?$\ Q[JSښ u/_%o&,T33Z{'Eul~L7Y!p2_!+>HzRX<2C*MDfxk.=(TBEk*(:)7bqfRAS5-Qf E:7'#_1FaJLvXZsԍ"Ǧx:=+4`=N PD@f$&uڔAX G֨bħAwgvxFH4nVd<"CTChmo!_6UsBLn/ ڭi4o(Y7mA3ڝ{!'+̺H<`]>v9K;0 n(0hH/oɚ)hJ߶n%d4t7Х҇'?+:hSצ]q.#s3U{Ӧ{j'c̙BNj[EW* |ECl%WjtZE6\g[|h%{[]ƊR}[w3Bڇ vz+zZ섪^9Y6< jva.(q3 ̏@{np;t٩ rlm lerHDT]$ȌaxkM4vGYnk떅\ ;NSlW1 ږ/٠{ܲ9`_imפ`O[gH`NN-T\sd.uxW$1˕a:8 raG<2|b!R~dqH k]!SDzp*mFq[.bTM'LW[MQ[P ͍= ^,`]2 YVA 69G~ߋR1#vDj`&śRu=&<^|K/"?J`P>cpw3]{v 2{lK@uk:sPYAQ;[ 7@K7iց Q4#~3wԝ/#dwn^HU,a,r>v!7Ur!7 P/FsF8g}:w[ggEoVjgƾs& T 5B6,<01ӱ ;l* w9(.w"=ACr}Jb-ZO:G l]mȻaAiU jݍ7F;՚;JV:r 55Jj%oo AR<l;7c! 6%)yV*5 s#Aӣ:xe&\'$CNwb26A0}嵊UaP!r>W\1*`8S8:™z0>/Y@PC eSȨ4߫ȇ9ꨔ".Sy y|1O:Y O[aUsi"ݼ:K|õR΃TIAJkȺnx  2OxYYb1lY L )N][:NF;5Yz usӥ[ѺO:r]O]߼j6+at00{"}XA9˜T6Ij }ip2%h5E5ԅ_Q='[vcVrN-[5 `v(UVnvTAT'5c CI& tUS"hLtT|ө ߺVQs:Kz:G{1Gʠt"cf d| #j6@nY5TvnUl`D#tޏ7v ')vԬ7S>.i+"V>/x,{Wܢ?4 8I+wƈ(D> Ggd!mU^Q"A&ooI.bOs~ sQvhtM[R\SvpQOu1e Is Hv<*q"kɍ|_Ibge-8'GKODbVH)ɍSL$'+1N$d?&0N)JQ-<$q 1ldjF \nIn\jeƓc稖-16 )=_҈UPuV?Rs{G6mU@G o'ϒ,x8.!4s' r Yv9FU ݯPCFhӂjYx妵FY(@{N:"* ~-z*Ѡl#rRV GQZsVy)T>.Yȃ^i m_QaBhsnKPK6YLntxXd-gB]ޏ? O$uS  0_]ߦʫܽ_!!Ci*YٻLoړ7H>5IQN{21e:y"]S{W d9BŨbS;Enh/.Y1A;O{8r"aw,Znfp QlKj= `.\>iMf`L'ѡ?Q9l^u/Y$~ f fʝ,Bqw`Zr ~,U*O$i<]!,c"p̴ҞF6 yaz%N3PrC恛YM/Z=33NN%}`^S9wټ1i.ql/W!vKxm|Kqn,=dڶ0 _g"K\k/p03= 6r,)>;,au# t+XL$gERR/az-,;|p/*֓a#7ϽS~Q<{Ϣeȣ'NRug3  K0,eNض8Xԟol}EgzFCb$=%/O㳿C>|_Z[u%zsh  #ncodhL4_`OjAۍp*.Kj)eEPFԑ:JypxJ8sp?Aئ+լmBs );&\Zň'N\R&S`h RAP0ܻ0Y 0 LPtzHÈ )B@+3HW-$ N ! :A@t~*vӧM]MLwormC#HfՂoTAƛsRhK\"a.ٓ(7I&a.ۙIpmTD/Tq}vwCq ʭaϾvcg4#ɾ,5Ħ+Fmy(/T#FhPEΒ [_F ڰ\HW2𳡂Ҫ1fhhzdZHʼ% ݑ!MT֗ϋV+NSZ%t:.)]VVr3R+"WvE΁ȻdNHrţ+7}.;e ]5!i_l R\ *G037&'˯5\ p39O$6y-v?V(fZ*oY R NR RO"8*Pzu1J+suIM)xB~\/+¤ \IUgt[?Tj}WY5HLMGPP:;:8;蛙o3{b~g08 nQ,*;UTWGc PN?#:xRc662zqHv {^uH Qi#XeHns\ChZ0Ji )H2?a:7jv M #A3EHCx?Ay6JOlQlZ)Vc;SkK..ܺZ2ɍ\N3;gAm:S@~+wJC C%@S#GPuT*!ILK2 dFy UG De.JH<``=Z+wCѐcx^jq Ce3P 2B4N9=Uhܓm,/De"J1Bz>"M(}D jwhq?D.F(Lo uq32^~R=%=|N7!6-Yqt1P+|0ʖP Ug]дOWߟLu!3v5ZVҫ0J 9mUTi/Z9\S:K9dElPj 'HBe/%-Og=<Ú@Yv\BqqP0ԊOqPwDEQ<˧1͆óoj,ݱfx*{xNMy~:~cMU\u|yxœzJ򽋖ZKW -(O nɛaFb#I+c!y m~K{n]n5e;&QT. ɶNlZݷI{"+Hw.(eCU}t7{ !^tW᫭2\0XxyQ:H:~jql`hbIGѲX)oM#8-"dy1YL=RpBA{QAJ~~oERu):)2W]z!9tCf85lS%)40Pްez8wIzHnX*u$މV[yRk4J8Pӂ w7g_4vJ+OA.c[Y'>¨IZK3ZퟤE&LOcj|boP.8G"=ґwϦ 0cSg4D&Z!-% 2ȇ\PܠgUg4gm/5=w 哌`1 {s>K[v7B6D,9C* t&H=( :ՙ[PʭFd݋! i[G_r-l$].fhX?m;LuV_Wi> ] hà0& {VaXzl.8ƥ,k<ȜHTLn>MiSQ D_-dޜr3]1oob= 6+| V6h+y%{f0_`lElLf\[ qp=0]`17 i!S"G{Ś ))邆I }sA|Q($tq$l|Cr1G*_T.Ŝ~-BoBTlӘHjL?~='~`٢'3pG|Cae3i=X L&Cpn -tE;~W,mT&ZS%w#T1T6xj|4I1Ŝhȇ6j/xkj賏 vRy@GJqd8 Z4`.8 *!p_VH`vpLVU?SxPR~9+(R:3m-;mޯfW쮉G-`@npybE]5,L % ɞTmٳtCIe]"[f5.ɍ|\S{5=R%eIXj~\X(M/&+ˏN"q|ZaK:2{iY >H<~UoJԈxAA}hb5 Fd2"li@4H)^F? d3}' 8 #wuvzƒoy&AL:9Z$֩7pG֙O3:ce(tpPg]!ZV?wp;h|:aldoѲMC/ȸeioE@~m_i6Z2-n9Ic몺sKٛw)[6?Iz(1 Ǒ†ׂO|hP$R&ظ {E>'hP՛\8!$F=; qD>8,o\{+F P)ƯhqS֖v8an,ŬX#/DMk7$0YL<96P[@V®4+ujovɀFKA,i>̈́\Õˋӏ_ Y)4Sޜ_^]^ySgH`>s ~BM,6e4wzyZs2!㑷~3hD]`[i⪶E;L p"<&G3x( ;aR*S$Uf>ƙa_dLM!8LMd9  C= Bn}n=HuDE zz_wVjO++g+qJ`v#лuMu! Na9.o;wۣ\dMϥWnl'+|ף*0?-[7O(lB*KC,` 8m(rn/4] I#?Ns{^⢜pF@W-iG K^\U^8F㫁9#u;8HP'1!oiL HD{7X싕>f8Wjo+JR!wDA/y`hj?" 
qj_ 1r+eV]6Ա&ZnoXYp9VM;\UsU^ 8#N%`Hl[hP׃cOH/wUܥW9PIMG#)a>H*N' R釟ftL\S**JHxm8fcmLl`Dٴ L(#4`$e{Ial-j7P#<|NIiI^B jyD4(f)h4#oSޞ ;NeүRHα"5q"V%n[N](Ggk͖}H&tzCКؐQx|ӌE}zrQV$s(w3NvH|C%r:,Ə^W뗑B4\;9d(s)^yM& u\4pPdRT4DvIPOi**pI!tp^k˱ڧJș{q1YKv +`ߋ* eÒ\V:wYwO 'א.xjOKĆXE3yne~~)k~>}]yUZ[k.,Za,, Va oS.eMp{jscnGc?G"FDž*$eD xuH>6F,-\5cV#\Gua!Ľ%%mtux PI(Fy0~>dR:p{:y$.e޷zknz f&*眈o qz1VUXxfJ%_$F,oi2E䛋Q.)$B^ll_ج,Q 獁SHF%cTWnR2 Nb "9E#"*ʹbbr*-= _.e!d,1@mch/>)g@ OȢL,D:JsB87G"YP!*,z47:]*$xg >*7r` j sWs<^~ ѸM#K iCoV3y[,*CMZ96Y1o)0X&c%rYI/%|Ck4 ǟf'Jz^k٦ukUGf wUʀ2`"Af*'%5aKo |n Ah?% a? {|cAg}R*k ;}>PC)o.ߋdrҤ_-O *bޞޝU 4l媊U=t P,a;6ꖯ_^6${cXtщz8@*#}\4Gᔮ/-ni+0ê7^ZKmL+3]DT)&ϊQ{}r6Zh1S P >]'V׎*hϤo8Wh3qPA a<%팣ڻ FoJMEicKUB*@:OZh)Ų{q*V!]oJ VPDՅj M.c7$b<]2mWR\EBΣ<4h-~|IVSƻK7ۯٌ(Hz߭4.~%)_,Zqne\m5)y{[/nd;ζ36UC/cNY#X !0vUiŔ [5} S|J?-&!ԫnF}zu\#N^ΑZwm};T)*4wmOܻ1XÕM +]7bN<-`UuгQuI1'v].W LQH@4gX I8wS &lC7U/Ʃ[ A jNGM&tK%_[7!K,655\-Ms_)LA6gUv=l8\Ɠe|ޫ\7;C!LVвݠӶ}h{1@q&s2H& zM~+\7qҐ Hoҩarnz1g58ZZ0A˔-zU(Z T-3-UGpV|PhѨ%#)镠8S)h &A"R|YG:Kfdb˒W;ZmnJ[d=)«x:ݿ wϺVf9+gL,3(rK@s Kj17Y< ʡTW,p:rG'3O!l(N52MQ(Tw0,IRBй+e;E-pV5tɯG+NeheNɂvM<(:R^^<#a\'ҏU!zȼ#ddL%.`bF[+zll( EmA{DR"C mm [nש `C50͕vw i`QFp9o,x\ન3N>{Ia(9b%I:l_C=ݴ'wP@Mx-e~s$w'XWoÔpF*k &U6}maOIrT rKm1ee$[)1hrH@+ơ/8w̑R^ LMKفln_5Dizۿ"v 6S="l BsvS-@E%F^3l`:UV(Xx[8#9I[[NKbQwDM>ȋʞedrAntvuQKélg̞#.Xs>, ;j'1m3$XfCD}RzF _x{W8wl7}֠u懲2gթ#J!lNTZA~w"1<K䞀NбZ, gU:"0%Ǖ/Xrb>dvs ֌2h3.9HVy(kHcZ̤J9sBblToç3go?=lWLJ14KYVy%.Hmp \>D47c>R2|8EGPh1fǭg_9 jN8@s muF< "9_ߐ_ZB8#R~+? 'DJ UdgLPu98/d ~>:/8doc{o(HQn"T" IŔy'8z}\=w RWDEhT{C%g綽k/P$^i$ffks o  #< mVl@o&T_Z:otv.F;Cpd\rqb>u&fH^ Ȼ@ ߲. K$`%jqсlXT"OBEKRJM$V3N+a=Y)Ԓ5sbEjm^e%[J";9ROFÚ'DLjy_z1Nq6y9HIp3[ͮ]Oِ!1);^'h% u.;:pAwCWAH.Bp(âMf. )ceW5"$g~K&9`ƙ?7d%v 6`ΑyL,eNkoj4[U>>A"B5QUSl?YŞ^G/+QWc}qxKCU2N!+ic\iAX# 9UY>P^%$-@!TI0a̰z0ͷZR 9rъZxh\8Ja^N8\kr$Fo< 7 {jWNF-I><WS|K{5y&vw8r$L)g._D0k'G>\;9b3yXsl#q i?.G'З#wq!Z?6@e{Ft0"XFZMn omoszNC;[|1#裞=rʙC(<\G@D EGf ֜C k7F2īG(:*ܗE6ٟ.*&5_NCJ?LQ Թo'Y!:}'zD4>^^1IRt\VZ X7eX ):^Z}P]zϨxy"<^wd6gh|>biom2d㺣*<[Q[ReU9q^̒e(| YvӐu.h䆈(˅; $1v^L!G{暉2j=W(4ŸxZZȅFcRX"YXm4dW9FX mG{t D Z !0⣍*|4L:@R>m#;!F._V?e->VˇQtJpYz9=~Ho¿~;Dhlqzz97Dk"m?x%oő,y%H5{a%~^ɖG?rFn vV^@ACK:m[vğ:@L)]:)+uy\nct$p2L(([ fg}9HZ$ ד ]<91֌Frd?lTQ:ΠQz0j/eT7Ǭ1a|]ҵיʟ{@Z edö0ic"0%{! BҀ0RIs|15rs?ru{Һ8?.H}ojnN'9A`ϰӎ.eŗ._K'#kSKM;+/Y3MNto7f6nnNŻŪi 8b8uluS˛]G~z Ld/˒f60 M 2"Ԣ\oJtfgd0_H1S7 hG=yp=YC TO"%0} %8lh1aQu-i5LsúIk WKVkJ gȞh6]RyR=-Jb _6`bHx*U@xmJ@3[è&h{ #Y#`Rh[y3u*`Gw}봴0ib=CLfrbQ6P?5|` If):?K@Tlo`Z6҃Mnp"t, GmRn+AqtR0sqG}TВpHj"YsI {&mSUS  ߹]L/o{裒Z@ $eSO<7+ I!Ϧd5MՠW7X 'eI;#zJ{O۝ VcRPo uxd=.Lm/אKBˉgRzSnAi=4RgT-H)'N#:"s3.[Jez<1ghj̅W"%\,33x]xe^Yk#-9wHC[ڀ,g74]-N{wApP*!g!ͥ D'`*`xD7N_]\fV{~ʓz&X?p(~K/ [9#Í`tJX.9['N$P>ASNʪq6]H@}-~r/̟ ӚċH/@b& *O+{V4;YH9c/ Ud4DeEVT@+Ws\/гfV :]mD6X䒇nd$CnsUώ Z嫌 -}'G+l(fP0Fwac-2Sv%,4XLmpRذ&T^q kt2ؘfv V9'wп?>?͸ vPPf7n,Mw~.N;sc^O"t35L/*$VZ T'eZϰ"\ mQf+S.DnKMo3tW獴Q~yv xM;w'0PWqݣKjī n ?mGFnU[<>;%!wgk5L+eN!+`² [X?Ϳ17k ;d^cU/li/n%t-Z,qgojĴ3uMq}?4J'.f(;swM]Z)M (R"\URjX̧ K0MCV [`1fzo%qptqIJxcXla2v}­Twn6hN5J_ͭ5q}z ]wF/j"G5i4%!*z @o¨#uFN 5 ݬ͑E}#D.fzH*Z~łhp:Ei+LyY1ٔkT!m:- hva4p.0C˵^0)Kz_nd ]ӈ2(QIڰf 2sJfH". 
YWQa[[ftDW?̱XmLAi0:O 7)<8r-@*6/}V᾵ej+#)Q{])4Uggw~]~%]d~$b]LL[~62hY`ga JEa`W"8F5ߐӺLu3>/HDXB#mzBoo4,׳aYEj2*ofq\@ %q"C%&%c,R2 8y-ׁө;N'˿@+8˝H ǎ{vbʽ::db154Y͡<[ ϐXN=!O=1lU@P+,@1T_#F U|1eVdʡb\ As9 s--dP# U|A4sF-ϻbU(D Ѱ c9_` X_zr9_-8d  **7AX]}0읮*_hl:phNR"ba>(ػz-(>2.TE]{Q_D'Z6sǐrz,$bo=rkH{xNE-wa߲g͔moRѷ[5#ёX~{*BZk<ikv9sIty[`kL6!ΐً4mH鉬@qF3ܐ.d/XT9U0rjڹ 5/U_"*ZML2!\5$$dc 2Wܹ//a?!'[CF=+ŧ&0J+o\FKA!L l,ikyntDXP$dp<+;[Օm3MBɀԑda?Z[Kk6q "kDD}r\jj&j8Ƶnk9@GETn~o4?u {Co2T泒>o$ kWT=sow>XeF~P"VKsR鶘pi*qJyzO-8Aᘧt{f :[*{0V[LgepW̳7Z U`%SUO ~z Ӌ8TIZKJe}Oajׇml6Y^|w_cnhjk| @[kTJ$U*1$H n{-I'`OǓNŌv`BtIXK!FO$0<#~?a|Ch']'s&=-EDcpf|}{e$=<%)-,u NH) ` nByHiVQu|z*./2soMq7<"W~⎃r)K0N@ӿg cw!|(W0'Sre4ZLgY=I&'yPiq2ZTA ,5/.\6$ؙVGv+-) ,kyo !=bl9{>y黶`ivR_Y1ǺdXud3*^-'-p}n 0Ɓ.k(ݢy١%9Ҧ~]]xuR1> fJ%c}u^4qM_ J,joJEg*tǻ`*oErN. Ht<:w- + ⰳ+>1#iv8"_x '<ȩxyTWK"?כ)q™ @FU+ K{qvF:O޺Yn(l|hS\d;JI[ zswzIĖ*H.|f&hrpcG9=~=g桧F>agxhJ~i-縙ET.qüZ|6`ϴ0ʘ0OV$Vc.UָLʪCuFVj!Cܽ_QVjQw9 Xna |}hqP9,^J:$ `sjy~\Z2EG'A!CKxhſۤ|V2fsBCԳCR Qh7jv+%]>^k,rnk IgM+(Y$R6pk!zELؘ҂W=8/-\n.(TC-'Vnchhx}WSH $;f"`[I#h_}Oٌ۳nE|v;?|摺+0ԾӇ̧Fšfs]&OrE6 z c.51JռhoG iOXHoHYvr2JU3k뫹=qNaT`x o@I,}/`gNR[ޭqq#a2|1 O4!{+F-#ʯ'N؛6jd)d gu@ūn^ki[g"j&礧yYW6Ņ<R$t8 xQOAP x۶m۶m۶m۶m۶ssj滋=SܤRսIp(ROhoq#q@=^lB2)$t<*m*Շ|XBvJߑ(}%GԔo*n, r,ށNq;?e/U>/D > 0hu9#ls9³TM / *.깼~6 8TN^X{MbߜĚIZM1B)ytX\?RӸ|t1xA_ Prؔ>ɔ0@M}H7 Җ}C7`7z- m>5=FbmL O= o8p0`2p,a1l t~ Qqvoejd qtJ굵+>۔3j5006b0D%):_Y8U}1;#M$}x'ɟ1J$alA܉ c09un#ޗ1ϵyVvn}basU􏝗 8$8!1a.,[ b)"^ 0Xw!\n2G}*m=+ A|d,?t*LXiOأg ⴾa ڏ)w^ o0^\LhdcvKsA&7Ej@_w>a}Y)?}|C{>`]o7}bZ3l<typ.0vչ(g22 ]دH5vwva|8s1#;7 v\N^֩7ntsPl~2=@avX}͊ozInw=t)!<+<,xF]FGӴ$#vR1Q/y6ޥ !\}o#Pݜ,.6ژyb/AO:k: 2zL F^@>ȿiV`ZNZ=X4ytƪ:?Fd24o :--)5ct%(A,ygu} n^_IK۱NKPUB*$u=ڃzw3O GQbg`ڤ)5;|FAžZSF+>+'[p'oY pT'gW^qNURj2|[e;'+3i&UZ.[N-kB_DęL`BRc 6X@ݿ4H(7ބG j+L97y1;i6PU a,d[P5A2\qi*-i/Ÿeز,b Q7qm(d(9GARP $',2ϼ~ tF=EyEW s-\o%tFMSP{4*p\{lj/\r"~&Frug;QU^WzPL78NƸ\M#8k(jc$gr6ޜ%DY)7xʎӒQҷU Pf%0 O)Fn5_Mw1;=/$xWx#w"t:~ :J̺cEF4e|tAu5M78ok#ۗ-#YSn4\//$e*#'>!S,l}D]k:D.z\uqYpڢy 1z  J9jӋۛu\Z iAs`Jf+ywWQVM=0.l~gK»fTxԳG?ŢyI7hUy/dtdF\DU,A7,:v2oM@2Nf^sBF :]$3R%7m[9j~HMtwlm E>e 8؋vCQI'v}r*8A….D W Mlzml՜Em&$No^Mد_YΙ ì.!QFn ]T _@k6k*|ޏ.bXd6 0?Ip+8W-F $ sƮM&$H[Jƪs hR"ni} Hɞh2 2$nn{*mV{jFU|<|dD<'p'5o 2z@'>Ld`rH*jFq%^"MBEoq/S1NaL%ܹ]QmDgT($5ݝ?$ Nϑj{Rܷsx4%#Sf8ȿS V&wK+RNAg4ٳQ`8x>w('qS$g &}fwn~䴺UryѴ) [j1v LVx.Ys"tTfųR'm"KW2URuQժEݛBfM_WUJJ5XUV%0GчІܨ#snWGcN{~3[NCT~{c&Q6ҥt.Z+̒cT X)3='˯ȳ3࿎Qe gYx-HFf2ga6J卑*A1q\iK-'Oӫ] cxR26Z)מB^uWG\j&VT',[^P O`2Lj*FZm*Ĥ6Wx]s3*1dVepȂa-q,FPgY sj)O_Q?nYϭӽㅀ)ˉUnᬳGAInb\ ]tp@Lݞ N[.ԻgAefCWt?S'd0?P,#2Iٵn|,~ bNO'(9S/[%U/?87R@b;סl,mq@+V(f&3Y*CVwG9|;$@O$}{>3h  }u-xm`)j&U~~O1~J{{ΐ\*X')"& vL۹|HJ\m_)<`'Db$U9khCp1yFRۍ#ND' <(|(Ի+ xy2?QҔGqP?ΏHهS\߹[~4q()00b$^6>l:D+@HsZsEX*h5dP8 FyUt\~8ADUpLrrQJy) Fegk3"<3|`Jmc|Y ,-+{\@R֑B }>\+ZN%Uxs SCLwو1L;t˨l$K4A2uS6f.WSXE踣&CWy SyċI]/;O-כ&i˫1x&tEp̃I,1&ʪʤJSB?tJIq塠?%/8A?S(OtފKjfm!|yzSL iEUE9}]#fyueHD֊\՜ ,P'C23(2iǐގW;<=>7's!pT_v3cftE5?dKXA`+$Kg) 13,/E+:둘&E\y =YPƸ7pK bK¡QQJmI"bǵmkĠN[)>mIĊ_ jRT*?R  hgX GB TN-XZM ZO Z"ez|ʜɅۤ H}|}~HهPwoUqwTwEW#kJ< )L9h+ӄZeU'tƿ n m} ܜZuQPўğs:\. 
f17\x,@o7/jIA \nBCnCJe+.nWP)\X߮n 1'Ie/6?ݪp{ BڷzδY cZ<%2r9x<;4 Ja=.~=^Ls1Lg'%@g9Yν]ܥ ;nN @ɨ[{#e4iXD♉d&Zd&\B:!%bpCٍEE&p8@_ڷX71mPܟ,%) ~cSB2 D[Zu-{ۉl)kNox2&fS‰uuW7Z\h n!tEk1HK76#C#ؑ9Nˡڿ'{vDBܫf{wDZ|\s7EBMGՑuTtZJ\H}8Zr5 ^*AZI,,>\>se]>;K4-;H#M=4N 1E.Q0CY.ms(ƵR O;qm,%"ֶ55wt&$G?rTkFUUvq9 Z%>cؒHDZ|<(SZ4A{猘v"?e0l.c3Lރ'{V9Y%oBy[_niDMs?рo!-|4Qb )J20bY$!ls0vN˕J e=Qj8Ųì{WKTYz[0m[gtE@#86LL¢D/WQR(Lvh[ڹGel!su"F˸θ^6=b8AN֚?OfIbl|3#τ!f3f_İ+H98EGf؆O1?:u@ĐW bd~:|rO>8Sͪp'vY(/1cr#<%5}L~N*P,2$Y>,Ҫ Ox <# rXQwWuߍҴ-~]əG.d;cvOUEߛ;I0W=ⵁW3pϧcud1NTGZx7&˃ 9fRwdIU~B_9f?y0;"r7}@P\,8'Rxqm-j'' >_'#U*7I-3IC[y#@q EK稗9 7OgAb9L\zĽ6xpGJx,"*#LAqDY;1%{3>+ gzo7nm$XgCp-EJ;9&[_ E 3uu,za`>;;VkJn;r_0YKY5Y@*RXHsb;77=RJGmPlQvA 3i\qhA漖-0!e|:wlw%nGisJrZdoaxq}Hzԏ0O6JeJC; I@G{iXWإɥ>̪6va;qJ̉V:%QB<Q院 :Ўg5x hF%u<V@h ` lФ ~^4)m*$ \/"*~_He+ DzBJ?wFND xXpGE4S븹O&ߞKr6Ǵ2Cnv-TcS \]lb09[en?*F02D=v?ꎲɥ< OU%[pIgv5t5K" Ppi,<; "IcH߯}5\Ѹ^9Zrtzc/F<+0g:w\Y.r׆VH+/KT3z]iNxЇieMOYE ϯ˕-+nGzONB3^cea'3= }3eCMe#1T' NRl9WE\fpYdt/f gS`2" 20@OץCNFG?߰̑v9 5J ir5)eW,@KG|ws^`u# FAt#֧S͊4Iii#AnH4r͝LӬ,u B' JmDB~Y{PC \w^ tb T)gn<gi,7J.BpQTڥNb>(f~>lIʹ=.E76@Ag! (BQ7-MhU^p*J~*AدĆ4c\^ W˥A@*^iԲgjM6sXC2`(2")LlH~rX:(2s"ipoE_FK>3b;{!3y$c$hfZ>|durZm-/?#B_H-j%;Y1FTGu2дU#]I"k 56c-XMTW?1W.`8 o"$@أPFM*Z4Y|V ͐[+=P\.@F%}U9a|v #Vw A5=~Qj*s U-C2mdҤVV[T&ˆndOOcϪG/J7,!%)*!Ao6@Gj.Ll>Z ϼGq3*+=D:ڪ٠lJܜ`[rW4Awix1mn_{;cf.2j.nFBoOȭ87 kGE `S1R`d2=IfcPk!TlʀD$`L|Cc̀g׫GL`Y2lEr'vJ|HFIH HGj­֯LLּr.蔼_P9eRV#!ˢꟗN![A|ȉ c~p]B|zi_:PwRXshUeRn9yS8 I%w&u5@B+Q#ɱhqcY 4)ţ2Yjo7q`%K!/Y81 RKbF4CJ5vle@eY3c-%=[Cz$nȦ9GvmX!N#lLݸQD=4"Q6Ji~g=:Ĝv6B=?kV+s0AGsSUw5'J>k-<|-ଫp}v#SNRP땤uګIJAOjAA\L(HR9 S9RٸPVfu9_JZR{h&,~BRf(AI#!4rC˘"'2ؐOY[R3^?(zzjq(yӹ8V-QR%Q%|wcIdI2o\&AeCNz!Qї$$OIڕ+qCo 'XWӺmqMMOg%Cj0#Wyɦ4ܺy6 O} N_aL|0Go^/4Ht!%<mrnp>ūoGc7" G̸Nz0gbc͕P/c>ܟt*GAqTօ#@Rt4&)t,qrC*ә2C̦ZؐKw|b+ڑ!g ԑdh _:hb م5Q,Y)N¿Pt3%Efb+V!1( ,ˏGsR ܤwQJ8#w8a Y>;K ^#L=ܨ̊ڞ Cul_G-AQH7zjw"ѝSW[. :AV)lɕo c] *5T1䒫qIp|q&a_kJZKm)8U9Ԉ5t7_52Ii yr`4 lxr\ ["V/wPdb:7I:?!D’Ij(f1JcP N~'캆{ͼ%r=4 yTK7׷ykRjwy||a2>N$ "B6 oב?NS9,'ͿׂqT{1n^$9DǩΌiʾwuef|c{9O'O!q'r"'s/O cװ"y.ɹg*p<<5oK+YUP)cL/_ )F>D1M “Л}I.p*A܍CF|, P`J P Y|#7̶0早w  ΖdZb֬-B8֌S)q;&%9{moi; HHṕ$6b\`K"*{'˻!=KYi3ͮ4 hkK<2+̇*Gb8B'3O}3 0(=He|qc?<`U]rUcdBs&*5Ok9F߁:DƬ!$ta5O&HmB{j CbcBJW}Jr~ZaCX*hiM Aِ 6g = cX+qSˆpIi:pngv$@&T8H]Dz2 Qa?bd`YadobˠpL~bu#(p˳)Ic`wMs3u(k8c"m~8Ub}]>paϑ 9(r~ UQ˯\?eΊN>~40HO}TV!|GtAT ?b!(WK" ))P W_%#|AX',5'5~#C@ӊBT5&\(> |jrwX ªW4h8ʖԑOu cňRp[b5!rrUdnDhFa-&3d#x>aX^h**q,"( V *+(h虺ku%ðU氿NkKB3#^ls/i-Gz,9Hs803ɾD \vZ/qn/R$D '/͝*)~TҲ8ސ&* 6"wLs03+,?%nQ' USpu 6JHp9:WTe_ +.}*oi`6BcԲ h,cj k9{P΢:ɰjԕ>"z]Htb-`tה OwC%ay9{V<J'LmK^UZ?UK-H0P!|dr3= JٗK {q}}%^Jt 23,DH.d%*Wߕ7|_M9^~zjjZv?wPP4/'U<{׳-4MWߪ.άا52Ba?t<rnd؍.ap.nnm?@a =%OKa)CR_:dT x_. U(x Q%st&ǹC1s),h(fSD-( wx+P|h>>thybs5`Y%53㞥{NC]'.ƪ3Ybbwϒa4}Rɒp.LJ /ퟣ<1CG35xm P55zX2 HDali`J$w3"rdҍƥ"T7`G2P :F4eSxE"e HۗO`#D2ރ[3?ڦ%Lg9 3 |G0^=7#|v Y'@["ǞDl!;«W3By{/yo$/\)%)ߎ5%|n~c|uT3A[![I1Cpi8=LJa{ߌcfV%9/'z׸dgּy홷%'#tF쬥9= [ ^ ƒ-~\EY$xv.u^8,\?bƦM9'C~/ Q@6GЯnH@6AUt2DPx*$5/oƸ5dV. 
5 rdi䈩R\l{G Y\ykb86Qj FM\>~q%ma"TrWt: wb p ټ%Hܵ+a~6 &{o-yCީ^en#k 3T=x'R|pBy`Q¹$,ϰ\߼Yܸ|GYNHR|3pI' ƺ/fLK駿FlZV N3\o]MN(Sv@|Wᗊg< iByԯ==kߎ-lO1Ry6$,Ydoz}^O%S/*ajRb륧0XKuZPsu!exhs_Lv"mG*};-WL#/ -mןBԏ jKohõuў%PMzm;T`dqr661qvhQӱbEykw5M|6@FOjnQ@6Q `]<~DNJ{;?ܲ舥M,PFu.mW,I%( %\ gHȋw"J_v6Ғ-Фm7ɕ$zM8.9q'8K=,2HoAes~( h… JzcӶ^xUULϬV't f18Xr+ ]dBZbZqrǫ6>bM|,Yu5P90,U|=glk|V乗'ua^4 0͌qEu7W0 f_M$Fkt@H"Msf8ӬVƸe/NeÒ>$Dh{m}=ff&ÈlxehSy8!/i_߿ȩpr*оJ9W D1B&#|ߵes}Z۲hgӵe/jƋuUge\8K8"VQZǰdi7 .Kz*gI|G/[-שT0w,'aZAIid+C/cy k4JI)4T\1ѽyˣV~ T W^u#@z0A+:ghs &=V޺~xp4N>jL=nKh!}R%وo-Al[Kǁt;5E$*{g5 r%ע;L& σ\ږgd o(Ut Gٚc9d{%$ēi~6pXl}ZkF3^KoVR.zp{ ȥY\O,RO%c~bY/edl(zh#9 {[-xw)g$ $!4.z_*Tgғ@ E 6{ӱ?Γ.n'`<Hǹ$+ɬGJw|&d`0œ{ Υ5I,.gζ.ӹ &%im" Y( ohbja^_ycDi$$0tmP5ܼy]DjDUheL14(L gJ$k֥U5q;rݰTڨEuqK.є%{V)<*t t2rR&&9=\ EyBNDvʗ=r2~b~b>FFg׍oS)/(@E,  %x-zK Uy(GwX_0w{}֫,壿 9rKvs$mސMb-,։2gFeJ+%AO90dʍ,(`W HA90a˒ #R#MT#Q/A+lh(aiR@O44q@0]soҞ)췭5B_᯻ҙиuZ;.@||?ϙ>719f5 Sr*r=X.ԁm2FpዌX3Q$99BR$A sF1bZ ptuAet7u+IDc̉,7>؟?00Z\$BB#GG*LPSeXg]!Q߉x1mgx8"u )b P3"9~6nB+;Hb6!ɇ=wҤOqNN}/Po0+%kIO~Ӣo}c.7!a|5\Wmw<䪡X/;nѶ.Gc৓S:!-r9I+G4V9i6Le8f Q:[i#iasw VM6ۨ gT{"6 w'@3:\r,}ejJ:Z9W8Ϗ1=\Z Bqv1 H@߆I=bDt>JS`pPX9ec^7@M]Rsg|rT/u`ȁ}|- 5q%X-҈Ci-]0H3"{ybAU!! c~vx8p1-%nڰUd!%ջ}_ \u{XEY&cάС2KULbBh8*@xoz3lIh3'Hq8}J:ζt")dGh,*q}W~ !8/kb$ԑf=E eb:5qƠ}_es067Z>e-儞`峐{ò^OO7 X)"ګhϋvu\LlʴNf~Mϓ#r5jANdTOθ4\+L.Ӵ2dT|"mm\l^x2S4{}Ry?212j\!x ;TBPֻ'+}*{n8ƥ]q ׌r~|6jxiAr%x+4ȕSOif?AP PQ#?²#U>#gFFSWS BCFZ5 eMw V)TZpS:Kq6_`olVayHi=|z:90 LQGu*Qװ e(ȵm͊A0~i|V"޲ho2+x#G¦G;iq+ 1©@ƂNQ@<s@5 }V 9nFlx:jCm.Ѧœϯ"E)NGUGolQDMtPM:QI-|e󕭜VPh_H#WHD! ȠKz\ o?4e2%XROz>N 6,Yr`H0;U'Ӧkv$ω`RNzŮ%4B_.R-}I3#47ϐ#kB֧(OpEMWq&?Sƣґ %LIoZsNaKaz=NhalsJ7Eۜ +U _?]^UrǪC!?j!M)速 e`]9vҀ0ENk#sV8Q Sj$>Ma/|㲫xKx"*Ub%f9\S^S†JMDNrenGgTQަrug4L2y& ^ҚOEpJDP#-ZJcsik١CaeM '$)5vH5oS#]C=ftf[ynElRk'B*+y;˾j6M}!H;OhWUQ Lp| +3c [Ec[D sX8v;:.ZKg\1e_$ e@_Y=TZ$h8}I@&ӌh٣:^man'f7К%PD * 9K96a%92 [KZƪO,7w3 C-suMOί,ԧÇFBBk޵>tM§%=Δaqt5y} pSy=oaf5ѰPvֈAr#*A.ϤOSo1xS4NbF5R2  y q= 8VkcS!;\Fj`8T؎{ פ)ŕ7T2j+h.l( by0kuE^q}`.u*N/j>c5$\M8m{>!wYօT/J5 /akl*GI/\ YeN7ۃkcǤc(ciD14tqDТ m O/#(B^xQj38t޻eoӚ~N]gaeKGquܹBc;6"L&ed<)g91b%c) ClX]vȧh<5F:X0ږtiCȧP HF=2b"Ϳ<4S䉉aֵ=7 ֍f;B ֈwI։2@8 Aq yxqFVvv^TϹ~AMD%.wJAP #C`N;J*_%w jI,ool"VÓ}X2T+%h W΋9,(ŹY^Ź)5/iӻYY{mgygugeAg'g~prXA$Y#I0ȜSb 4QNYa\\R$-[zJP܅`om2 %@x&>F}!R4H!N-{d\p1d<F~J呠`=p pb?`?. irb,BQ0(G԰q.5@dDF-@h__Hed?ve-@-fο]g-v_(^f_f8>^g_e߫(v-if>.,Ay7Q<<+guR6iǝ^QW2h>\C*uΞBy'SQ$~af*٣b?{¸(8xn(N|%1jiPq*gKv) !avI̎(p0gg蛀wUJ3KaT<htw\.PNgA$׵q`! y=N~c;!dӰd%Rw u:Moz)A=>ЃGN|4W1hN/" 9+*zKΞ="W%?@?u$5Q?pIee")TSJgA0h-x MjJwDDJ:]"RhsGCҕyhT1Qdz_[>JUzi6c1&eKdt` /:]\`dιDҽ!8 :I+Pz!0 c?YP\bG}:c|D[YuU&Bp u$e04P+Oq{ۂB3lKܯ3;O; *`uL󌝊sZTbA>?G 7E hu$gbsdW[]%fgyE]G1gժ;I_BP h=}ky!Գ* ! 
cN"FҷHӢσoSWTh$_*o/Ӭ{h5>K^8bі5Q#:vk=U'9Sۺ5r-$]_\IXqẂ֕߀aah܋:4}ԒpP1] 4悞/|WЗahKRk';A_ц % 'ϽRhpe "rl|MH ɔlW%<'w F9W;ejp8>=GS,53< yt&M4P 2P=^xJåa0OqDzX gˢ[.=mT!ml";׫mIr)No6{/K_ 9QɧL=Myk"Cp KI1qۉL탐 i3ό*@5M Sr)(6lX>-?߼Ƌ ,R@| iq6&bbCs oMɀW cA!܁θ~·<О/KGV@y| }t; =w&wS㐓4x˫@nmFh9?VfEcO \5/4Gǚi[ ~s(Y5 % ln>gΜ\xܼ_zkHЄcA (ՁL#|~ZkKu ʽ1d-K=[2ӏn}M\`k M7/;lUK,G ux/&[X ,cLK:X&kz-o`] S`oGL vbIž窤ڰ!Y* %=Z ݵtMTV fn^34`9R%d^"%+;06?c" 8,9$0Q6!*⴪Bk,l: l0TRj8Z퉘h:xUbS D{%Ta;B]r ȨqHB۶;5f.9ˆmH}kqo|HJqaΨ!fx.V@[Ħ`(]W<][j.$qu-RNl#Rl BQUнYXMP THZ压S(r.~W-hFK16{TgƵY'drȳUT#PDXNzI:|pɂ"Ec[YZ(ŞcԹ}Q)y>'3UΦk z𭏄PGXnTqmVӠñZ73ufp:e5خF-%K_V"v諜㵛A8;}<2OӺMss[ߋQ!IP]?LXam8|yGh)ٞijEdh )VWEH\'* QFS#VAM68VIQz IvQ.,Eywu:e̯`JK"nrӚ1.oz/dHĴ%2Xtŭo()UjMbNK o(sLƴP4.rݼJ2nӚ(ӡ(b\{iHQ'$#O6@$soppFc-w'Xc_TɛO{/()J;Wk%5rhH9]][f\Ux F푣S{SX)x|OO攍K0Q*Z0snENWu UqPR_B\8C!zDήVsOUQAњhB&z"{I+b O.(WbYg)Mǎq&եWw6U7lEleZFG`۶>맇KJEK <(e"aqwtS^h_JؽPk]^lu9kQ?w5ڈ iR=4$ٔ,KЕ 2a34LK/}Dxov~R D ĢrWoR*o3ʼn7y(r󥵲 'h XﰊGcKN i;0Q嘰8'q㤌_n^&ȩl17-ŕ6)xg-J`4g ׌Gc"b܋Vӱՙp[*gl:=. G8n_f~}9P\ Z e4'-s\It!YVW "w@={aX&/-hZ FmM nǢiC.{@EI1Cl˪zR-A8Cj;&d5FQ(K(j.7t`tMK[T"i&ģ2pӮ~7>6~ԏ{sƹv+ v<BW l i=rTEEg1q^+T({W* .;sjy͓Y'h?z?ͰQ݇EǴz^UDXոCϨv ohA<c5q l_70!oC:ҔrֿN̏gEAbYVSڂyg7F&x_n?fIӣކ&}~}{o򜆾#12e@R;OG$?!c ]Œ$+ldDt{Y۰Yߟ e/}Wa@LؠX#%Am1o-Te]1x["Ty 9Ԏ>!vi kV$ͤ60^A@!l˶|.(b6Y7^YC0UwjpYӊq ^ua:Pq5@%O@̳a|UpeԬt}p]UU(dK>vUF(;8#DyĮuy%3 ѹ!y=8Ak1jQqwqs98I JU U :6)RH>lF{3U$[}?.t*E-óRGG71s'Dɇ3g/4'K"!FQz.p('l&wF(6ӶJQiа_Nɘ!7zq|PͤNwzjNXB / | gs 1%@ΕÌ~.;6=a/RzM{?3"|+`КȎw_tny?BfrC،P[?xįV717wk '4 $S9qxQp=;$Iq~hGc"e 1&&B`TVlDāp49*Eqg3*D&PT`g\Dl<МaZ|2 +~nxD( T\7xd jU  bзwM:A.!6Uia'ZL_2m˅!(OabnF9:9[`RZ[,M2L`XQ**'(Ij_WT K(ˤfH_* %ZCo9HK_{|\ɭ\~w|j}̦!%23i=TTV%&KYאikm] z!uG Mwt GV# #dУD8 fK?.u##Ejԛ#Qo3GzEtg oLV`2ɬL$%  M;hľ,hZnjkF%jopkex8[\Ԓdi.TŲ\+g[_LFVOcs#zr8m6}'U*KEpO, &e2ֆ %hlع&HPH$Ӿ~o6V]6bE O+!pU   yōQ8Fc*ѼR!z5 dVoLn4[Y߂g?Q< F ˞;>,7N+ruqP Dd͘.dcEזxCSi~1sn1yj)͆FsDS\3$oFp?|lp(s_!kˎcQ`a]sݼ]ɱ]gَQtG]UM=̼B6N_6C)D?'F&x7Q:@J~:ljط A}p?!+گN&`\!)^G|uȢgWDz:Grfwmos)DWg~ ĺ>)|ODqR;8/8Us~ 4c:ջ Ah70(5C-kuz&wdAL.bF {ܕC)YO)?@1] ^5`L\2O%PSx)pyʔ7\%8 ~1H3dL~tC#| 42<"T< ,;"&iu 0)@2G'Π6T%m;;:#zzGW>L禹Vmj}Y2Dcl֯Pt+l ?BVL)#${\mY֊Lȯ*V1NZZe78z'MW넒t UK12P$t`:C4!?4j彃Z8Iӯ:EwiCk6>~u7 Y*G|zd(??XR݋^ kGJmԮԤa%ƈՐPJ[$#>d/% ӥ" ?Dp䶼Jl|ex&ʺo9k1z#s$dzJ@h~<\hOMEa׾]7Co5E tb$(3:N`;U3-v}tӖSq)Ĕ:E 'CRŘX.g|JC[Ua/S@аݝ 1#FgUv3tZ(7Ҡ]qv$6G qyX{@ujK5bZ%+ x7d#m%uUEsnqגsm Ya ф\ ~) 房v(;5ei7@pD"WmDC_`iO81U$2ZMRer詤۸+AC@x f ;#!["aR(pg.z=q }nְMǔ*K!S#Agw0}JGSgnmɯxa\t/ xPzAJm0Tz4|Ivi;2HP.N(>-K=Ȋ-cyh4u˴b1ߨ0-)sta-H;vHURL(-V琹s(Ќ'?] {t5X2i~-{4 j&Z FkiOˠ% Ha}fe} `R-$`r=J8w=o EfkT*Ξ6m#-[L▬c+ l ۶m۶m۶l۶m۶m۶=$ݛ̙NVJG S >>dTBe{{C_IsbIWGf7Da;41\ok 98Ք-za:駯bb3HCNOԪ@T]hXLDy!E/2mBL)JxLsMŠ&WɤLs_a}|GgC|V|Dx)DQh|! +u=$?n4)V޷mf4 ʯk}F -}g;lOcޗOODAV$p$@eSmOYf>3IUY،e3p8F%y&uT1_i LMsޤmЭp4"4|ti!]/} PPJq[xVF3Ϳȫ1]2L,>A3a+< aS6C߯\3~Ƿk\?c~J]=YK(>ߴ/Fo7{h/)-Wi`_YWXJShTܺU\Vx. My& sAR\'`=:yHww/:@@8.ymwdqL.bE -)(Wh-xK 8tWc#&ne14d'Ҟj;RxP퍂6| ,`1A,yA[]WQjok >4` .1 nwW'|X>$d 8#!?~9Kq&]a\GƶA NgjacFhiAP^CԱ@P#vr&O$}A龰>B5 &Ŭ8ӆ)恕`sMryѨoOgWlMfg|CC]it-G/2\C){4;I ՏNYJ%jokyu+KΊ=[Jw4SEjDBJT#onЅ&Fn!X6z6`6{5y=fDx.p-`d^tOY@օ(^/RUy< #!PO;Dgxj3tDڶ] eIpU)5ړHfRq ^@w=nmorc%YTDR?([dzCnHxֲHoӸ w .m~ʭ;PX(^}:ưi{zxjۼN޼;v24r3\k= l]-XyҚy'L-y><ϔ'Qۈh&y<đɫC=piZ&}n1Ww>6Lweȟi #sɨ(}\&_ddy)Aw#$V28mOvٗ ́>Bn z )=9fEOZmp)^ybNX̬ErJ+OJ)Fw 5[FgЛLLLt[9ֲHֺD܈:8p~Peen?|]}]C'8ZC45`lfM/tF^MPcԃ_܅jDlyWbLo-;v{/2RKH?A8R>r<.[FzڍCda6yܰ6AD:}4Vb,n/lg*"1BĪc|$3NSA^0g\ì|{RpP*7y%-M[ֶ4ihOȔ,3(\c3bJq8I"1"5{QnvsTv{+V+j*՗&eb<²J+a/ s(Dg)vc3!*G(_<9*$43kL2q2B7,OIr򇷯Lsgepf\#(|珛W}cN=% ֯%q5% NK23Oʰڏ|$7-q; Amއ FF? 
cAjz|d4rp1/GF~Nb#%l2q 9&0t&}"@E')y,*"$EjW$ƹ |u=mp"HDrmQۙ sN u1bBB66Ocb5G;re?g QA;c0A-UR@d~\ܫd4RuW{JeZ ~z\BizD۪xtu6v`9M"t0ŝiujz%b ;I{O{d=+cu\&(o~z5got%M!*:LP< f3L~WPNtvr4q6L!OyP/>}~J^k9ivE0THя-a2yEc =W">c}{W4[S'fSeSfg|PeQwO+[?w:j%c(*P:@*PsSrmq9Jesh2|p mGGQ?_<@n!1ńEJ2͊* 5/7F/z=gmJ$~{sÂ!#G0L:G]-w[Wgߵ ގ|X|Ʊ׷ԋ61saF>}x|Q8q^c`7Eωi؆5뷧ݏ[$>;4?,ɭr?jf}l~t:9gOG$xnkNqV~84sI ֭t Rd]v1?t!+hՔ 'Jw|xNR4VWO"Z߰/p(ky&B"ީM+}Q ) =6HJb~y0/uSHRqT|6]FdQʲqqUwW6Jsד9j_!jtųxT hWKI0چ%o­Wqu"٨8 !WU]T^7oM7S(7$Bīq_m]^"((*^oY/QҜ1MNu >֧ԇ6l&)WYż@UA5rJ3Ĥ[d=ar?]M{<8gCg^uL?J-b <#+q% _擷G!^lj[_} 6'DCiCJXU\hl| 1d8Msj՗2{+^g29st+y}=*iUgUw k^" ;edbbJY-Ā{WwE gBi0,ADjaZ%ԕa}vN?t0^mxp+/(iB̨ǠTW󰼏Uw],b.fF )wqD^e+\$s $@w(0Qx-RƦB2G2F% ŧNrb_M\ eE64P;?:Ҩ';ZPr T޴n,K%0CE$ s_A 2 Wxv,y]& jujɁI5n(lgj >o\\=mwŽQ47Mt-*Wqf&F#Xz@UMp[? 4TTV8v{9Uc|9rr6v?v]"5boO`GRX ZE|P'uo8=:IZuʥok\e/>T1bSCjr,=h)oNkjl"nVV_2CҕR O$}a\nMa}Rލ*Q1e`OEPO\\I;d*_[m gte,5AҸGkkFo!Qe'%-O z?F SP5vxKυw'U9< o邜Wݩ;Tv!9&F$a)EѤ*3aESQ$>M[ݳ!go8-MMIFb(raˍGf$H?B!"?83=҉&B٧JO!161侀N^/J@ |PQCN9 py3W<8lۋ#?l^\Np`D-T-@vF$} Gw$00N6.+}<=؜3pQ.3֣RcZUO^4'{xHX@ P!$K/lJ1?҃F!&A?_ROV<sCsz}c/ho_ƩR1% gcuHBC~aSdL˴zOtS%崶2a_;u9[f0b['ȨccP+̴0dLofcJY/ZlsȈ96=KDHRv(/ԑjÜ,3@?xg*2)| .ܯ)5 XQb,/2Y [&HGCy71倫*s~vq`r܃c{y Yo?3Gz8vQ񝘺xYvX2oIeyHDhdZn6~3ZϋC**Ʀ;;ZKi&6't[Apz2~w,y DjbāOZcl++r HtZ]kmýjv;3:S2F{s{]xwzx2X) 9~j2ŁVQ8Oi~qd4V6H7qaaDg<`X8 I:ᔙJEeh$U-|EA!>dp#'f78,%^\\`nIB s b6&8v`"Ț-8 u}(FjqȲs} *O?ۺq:2i'?1ȂwمU=@]dN+Rzd-"QG)@ĠЕl"bI]C` 1L)Ff%CM}T2BjTKIxn2:Uї)iP\J);r$I$HJ^Is jr(fsJKP,#O{yeK4][Zn&($g̔ndxZ@1$ĂٓNסwžGsiZϥԓOF_=6'I]xlCf簜} ƺC 5BZ Z/UM-jaQ~aZ.E.IC9Ie‡`Lݛx;nW'0"{ 0,U5EѦƬh  c'NJ褮AɠYL#O;3p!rC 㾰s5Xem"~M ydP`oܮed4**d^;XXXjr]?~qH&MER%HXs#Ia٦pYlY %iļs]E?KG{\{qnd89#K%Y8L (4Bc-<̵~ԨfТf`K حO۶\.b C%`D 9p`hN0ྐD{ՑX(GWљm/'2+8TE:sdXzlԗ趬"zdU Y3G_(ƳFx"5O؈`Yi"s)[GQ'ɨѪL^<~Ph҃iR0׬ >< c *B87@͆1Kd. Q;e!&1DbE7.jkT)1 6}"ZNpi9 uvws3{l 4Sxf SG Z+;@UE_&9T.!|d1(89,Ix^{AF@P >$B^? S6/$ozy?Hz-M՞Qĵt1p`$^`ךD({scau08f28p :A 7΂L^@P3gF Gy[9/3*N:41x ZqĐ-nsO^0qiu0^AD?!Co&"ojj锲>"} +_u%`̍Gs} [燼GnꌍPu(_ C?ot/'OʩD$!+.e&~ 4==;8~h>?DۂvdJqKm@)$Nbnx-JFˬ(b6cN*abjm;B!%Ơ]ٜ)s$dyw#G,<QUSqHQ_\a *+TD}}^@K #X@)1mM6#!̓  njs)J aꡧ'Df7uk57I-&!T9hjƉqܱ軧h *\民az ע!rs-uIY0y 2"hpOwi,ז;,19= C$*unLB5h2UGo}/VC|SMSdq!*X!\{AW%y<|(wzst/]lg=fr l}U=JOVczN65w"٦nQEcӤMZWN!t.xtxrKcZP_+6dt"i< pQS 0vTm ^! 3H@ƍZ s{UåީW&Q45zax764b5UT2*oC9Րn[ hJorH*c2 !yr)KXC%k e\n _~ML7mcyTܨg)\^KuU CӦ^ժġ`(G}k H+uBejAiqVq'Y0z2I^g'"_tsfaɎB]*fs#@~A! Zuo쨺e!<>M;i'V. 
"?"{hkBlok v3z=-\`ļ3Ș_tw"uHևGɁ$-# CprHyURfTmdY|t.w(r8!(*{TWCw]MNA.͖߼ !z̔K\ۇ2ܭXK"[gZO:Ѷem4SVjÍ:iq*ٽ6Ab14w@Tv:Ϳ)UIRqre鮽\[}/B`!@c~ :0}x9-2?N@arWmk,sZ-E&m.~}+?0\KAov 9:t\я^Js3=UW<}KPmVh&Ȳx#i=W3vaÛ*Bo)gbA[YU!94(S~P #DajrH$>KzZ *g JѰe' O̵NKfr:8- J6ZRl+Rq7[p2!Xԉ\._1UM~ȜJq+@K-?VhYh$Ljv2^2(*xy_ ZV|ʟwa1tqLW=]0zA9$Ěw՜ ǨS Ґ644aP}TtRΫV-RDΎ~]xXv)r1oqx_R/缧ufbn0*+ 85)/F^7v'@2 'K/e~'o=6ZLI{ B'"0A,E,̛bhcټ D0Y(j;-ߢV kbebXa=Xd!Ø[L@ $e`ElV6کqq :?i~1X[kZmf_hp{Z 67[+w+rMf:1&&%$r'Of{};=y&NM6&uLH}ad^b` E`D #`zdQtCg:Vz ]C{4uWEI~ caɂjSFYJ.Xcv!/+J/g'K'Z=XTZCIHl]%Kk˸7 õT6k_f˝ tklkBZ ffIc gfS@6N\ϸab$Ƣaoˆ4H$58[I2S(gPF6cD+Tѩ!tg,ñEImє>=i_kuH_"P`ӛ{\kAmp SumC=Ӟg\(wreid%viN"TΉI:-^ReX >fi|\Kq:zhkn]_>lDDq ]S [}Pb]"c8^fc[XsR)I &vV\ʵ^î 7|*>S6)zH#|Wjk9ā$O=6tњ,ɹ,Ru/N֜X-c9Z1s $ݟm}Ě'}@#An/?{Fo6tulDDO-yOxߣ_zf=QGtݒ27pchڠЖ8*#J崈>vh(hzliθdht^̆Xz$8ENі2=y*Ra4t<\bWht\&A, Ywf^kk}_HN*3i|bFvۍXi[gCGbDs*"ЉXW*Y-]2k\Inke})~9nnX=jrpf+Ra}n{J1PE6G;uK*R QԱ][ͻC=lk k 3=(?MOOkGi5q֧W?w*yeзƬczqRk]((Gl`j]mEw=xQzO`JKfʄq;7G?Ů{_2%{h6q?n'cJSSRQWpej7*U Mjy'd^-ұ8wg\8;аZs*KT#˽_͎-:45oXY`'QTQ6\Fp(ﺧLT!Xe հz9^ߢ)-Fk俿'gכ9T6Xbጛ+K՛m Z/~|;*s4-_YYǐh+c# x/^j?S!\iO\?\::"fye?5Nvn2Tp3!ukHq*VzϮ9)TPȱJF+Фt M 0Q`" #՗UWjYJ]ęԑ.~ML+H;&=;(],' _Gt j[4RnP|Fm'feCjypJO!Yc7'!ʐנ :ٴ2z}U:>[qG:vR-mUolYL<Ƽ73{;+wSJ\M֡kh)鎸 \L'wEmKx{%Z;H~8($cL>330܍ X`!Oλ@!*@X{ B0|2'W<%qxhNXHlB@-Kz6l@$ xU%V3;*YU$wL 6<)k x# X8:sXeTƠ#F~C#[sń?M 3!A&fkRTek cHDǼpY;Tt6"g?kgD"Ռޗ_Uc~DWXwB&[C|YTal|"w<^$r4Nۙ6jP`{zP>2m.7iVuC]0o*ŴyY--ay:0QWgI31!^O:um50zN~`ք(*g~m~^+cGĒggз)Ľ!$M #Rk2k!m+ngCX{yL9ahWfA` ȹqjG</=~%fLO1ػ'LwxR)90* 0),1. HUi*=e(W.;/_fB{Im%*xMPCׯv^Nˠ+A|S?hpx_1yp Rϟu_@Cr@S r# esv"01oܒ. #Y>:leMVc5#,cb4%Lfac`J0~y,yvr?feTV1aGY;UKM6޲5`v&`/CXRy+Ps?qcO˧'`mIrw&c 1YJ3J#(!-ר''C]'I+aYKdͰS .[Ŕc_-5sh غJ1[G&A|{ L3o4<1~[{4O_Uo6^c-K4?~}|+,Tک9c_閳߿]DR|ۢKC.rQ" -ϡ<[&dW)wH/Hz{۷XO0i Cys1u~Л$ANp^/lE˄p<;Mkշz{[BuE9Z^/A{ 4gȢ[A../ <߮_u'jؽ툠)y v̳/ }ps>!5JWn/LF{1.X3m7_qݘLWv/Iר.=V f͂]O3 𳝯+Cל0.σK+ 5p&́ hxn3 V*ux|w.wvPiP; I6I_HEOTg;j͛6gU3Z5bg::]{.z;WJw-10\,9S|-|/YZnY~[PFX_=1-,ucvDtlnVŠɖSvZ/J82h/0i9+j@A(Ǭbat~.`;}٣עOIu#<*Ʉhi6/[)b> c%M,دnoDʈu-:UT sM"Mݪ. GK(5Wӳ1aW^jbީ,buaH580KG3r%ӊevyٮ0KXPa̿^Uݲ^Vj6DqgYpZ="S#`&;d<ץa"`WBՠlek-AH܁g %)-Ho(@ohJSQtU!e-FeۋJہOrh[-ͼq⛳=v7YxS T{b I_\뎸9gWjB>9B0(8S^>9d?&aΧ 5hY)ؠ1tdqxP lmG E9ReU}j)plkޜn ߮&p/F`4 ]3A8@LӇJrx=< Qr56]/9/3qk2rGdY0 +zOzq?8;@1\ w |>Y.l:NK`aRӧ5OUF4E = ݶ]vд?-}k :NleS=Uc<;e)p瀛a jZ٪@>Z-?ػ=A 9Q@jQt]2=uV.%%c W,i1+ud:jlĸElN[Cawe@]ts>õ6 x2O,XO8-0XF?gx!&nsQZ%/_ݕ^qG\'GJ)ٴYT=R-& 4?4UB4’lp<i`[O^:`,6EPEcx(+23{Խ摅S\!cXİ`0{5IZ]^˞L63Cm6I&Y +X *d"ч<;vzF_sP5[ d[KeC >A)?yސ82+Bq2a,.㷐X(-ɪ^h c=41XY`=@Y':+zX8lH(˫ϏڂFvϸ`޺bZ ϐMTBԵSTMQX3>8M~% Ֆaٟ0^/jQEn$!v7ؙyURHYWg;~EqUb~'25Y+r- DNt P댵h]7 =ݒ)eb(fiʹBoGw Ř$̈ vKHo&3 п3TW쫸BYZ>7(xچR_Ib/5?htk tΌl2\w qs6Ա~FeLX"`#^ YM]C#Qj5Gdض.y=!"f!UG{%좖-m71.c 1\B,,~K֍k'_l6x+=nA Ze{?8j^)TlXLۼDaFQ0Èn4ضzqr~m3_<-3VϿp1ɓPah93'!<6Af]17ݦtc82l4]˪ Zdf? a8~΀P*n#s$5_v7aե:X ;JW  d,͕5\)MxfWa2Dzai^Idamb =BCh7I0XKh^1U@.혶\&0t8}ol)pȁêbhmM)&q[ؑW,"o~xc fOѹhT;m)f%\2;c,:`|O$3F7D~2%+S3R\t}J4tVc[ yė# 47x~=#cUuN9q@nUC _ ecM樒fhlPHutq\O!k!\t@3x w8QBցu7# vRAbǔ[|-V[g]Dd4ѮXg wj馬}[द; 褚a|P^V :訵{*a衶.Ap*ڠxY2tb_W#(ӹަM)i=[;Rseњ F+07c"ACrMat>w< TSAbGY=HdA\>[&J&yJ[ѥؔpFXpBYFI  `he{\6>M&c%"3H"bjJș*XͶ@jmR15fG6?v xTm[sW$C"[r6Z(;"YuJ\aHv |neC7WH{Xn2l؟ڱ6Q[mOv*xWq;.q"]ع-b*R\=7'ܲks9JV:&Pn^^x}~OdDH?M]R{n Kv %. cx{bc6hnI,%3!f_Aw9tgzԸXt:# a(gεі v9eMH[ͲF|I_ Te0o@24{+A![*Ml-a_EXo KiF۱OZ"l&LRSdrxpb^ÒK*f}15͓8#Qy+ PU /*X5ZC\FX{]X 2EWsڍU*>ĹDx2Nst™G2 %u @"}MCKygؒr4*M~:Y)D46Ļ8gy? SW?1sqB1ىZ JeH-9W6BP,, "Awۅj#q܅SAv׬tɍKi,q$h(2/'w{Sa*Z(!SՂLJv >C[ 8(Jn 8'e|FNf A=`_e[o+[n;i 4QJI< =wiУvGaժΈ:oA<1ջa~FG:.2DL#&QPUǺ3۶ C\H|O21. En<<87^a? 
qGl-srf^LDs!$.aL wcXZK@;t ZE 4JIݥƟAECo َw1bs:) 6T{Fxz%f#zmVsj$Eɮ[My⠝$$h9ݿ6h'eL|K x)V_WL|bu VڒlfhHiЁnQ+{($ Kt I4~ 5ȷI_1 #0N]8+rO\!"@z38&s#}MOi'n2ֲC߷VtiaR,jJ+XKvͭX']1 ek{s[w\DN20BQ '6vԾ#8% Wy\zqǸoMTb>uxc/vjPp-+fQ"B3J(HI5_P.Y!Ae3Ky6qRprog5tTjl89DREمn.ԕT{[m`oʫ6sXޑޅsמ)l#u_޵Y#l4Sc<F@wW-<ϟ<@ө[(Qk#2C,Y?P痝7nuZ7ֆ#kO{ޞBQ)DB_/ Յ`tyܹ(,->+:Gg5k(ɦϴ3a7z)ǔ/[yv jq 0TCNOK1; kC!vz"5~AGXUiԎvoSlH39# m[dOSB8\pwo/mm[qB?t3YMJ%~d{O@َh{^"!$ԊK]孯0&@p]TCd0~CVUulGY x 7 .2SbεK>u%pC$Ҷhe9'IA~ap>D́{^W9x3@1_t #jYu./-;.@x~O v#A#@߀?]Ǚwxw鎰4,2AGg XLwdZP=i%28xrW-1m;hʔ.U/=s He'O%FP/@J⣍Mg=aW0QwLϯhu)oF*_Ǥ xPs5,\̚lc,5!~Q2j #ţ?C]%1oKٌ5Y 椑mcʉTg#Z/H}s^~OKItM)K1ѸoiL t\jN+x}P6 Kr›|rO :8j  67شQ4k-[罩 0ʟ^WP}:f.սь gzH<5W"F}c;Iƽ{>%[SCHo&0 ihSr=| ?'V3ec$^` [b2`ݻ$8 9&Ƀtf M.!qư8;p]М{k_:.?͜IO?^Ay3򼝨OQp' ?L'Xɐ82hw9]!&6RGFD Q6~:GD?(>Ҳفug>|Hq=vaNFQ; 8xU0QLp^A/ӢqV+&eI( G>85'^w(: h+֗]ϭ9?2/r2>cx9u@ &`?C%O8AOZT|o<=|Cjr3/QPI|㒊^7q8\T8D$UHKN*| WYq~:ǺK(6g.Z'4;"*-5h>϶J&;a;&Qi0LtF΁QA,# ȴۖ~({%? Z$Dv vmzob"Q4z-H^l!Eck:]9FTvVbe1r+EseASqpQ vyR4Gxx땑FޡCktcv_5ʂ0]Ahhh>cR.cH5cFX[ѧWvխt6)y^h7hufv)*Tc)F_ /,<ҭ<-)?︀ea&! &Ks>`pȇUa1%Ӄof8Q+`\X1ƘQ|S~5L:`#I|M5n$L,<*ί|”5 uG@]BꌎXm1nHi&<ф8fF~`gEj`/99Όr h3BO1S&t^ 9WMu|4b~%C_G t+pzc-.vوZh bu?=*B]o̩ \REۯc. k_vKs5 mtV{ VoSASunՈƾiMˢ;^]Y՞DvjTkL3뺌 X\^^ _o =u澩zƯtG*jlu`3 zk|Hje*gKwg~Glm`4IO0)RQ֗+E,T  //0#Z8⨷5)&܎nPJ$졃Rѥ3r&5ʠt8p::N=?͛+Z7Yk?_/W.Տ_qs>c{-Ljpp$P!S SR!Y q l%մlSc. żƥ !ql%-CK[Ը֓/( ǢSb%.͢Sb2iyKsXJsf/ |5{,.{LԸDI}6 7Fsg/TyY.ո׫ټq3q"'Ӗa&ywz!LEUfWP"vGb9 l.((ozS#\iO_#) \|N߻r<Z;d T+C&vYL qBUI[*[+m+8mLE_@Qn~ilbHw'"zkLp eoG"7k}63Ei\H%jJ\ G?A2ws./9!O 'BP71TFnl;WFO WR_ "b t9@l* hfNzʋSJENEN2f4ؒ鏽<.YVdFOG+XcxJPI'VKo?a{  A)DCWyQuvX2g)d&RWoї- f:rzSmB)pS}pS^h:U qAY]|-{yy"-_j_Ѣ{tkP~T7]G‡.ͬ']\qBon:ݬ.nyr ޳YlwL+ey̖xZRN$-[gHx+g}WR35<;7oCoM[/|ի7+U^ðw ^F=$+Эv-/o0vЇȝ\ .* Vd'D^pQ/vNwQujVotO=\.9Iɲ3XXe_o 1{#Z 8ZN[6l{oح^>(df9ufҨhX_a#tMX:T >YsxJ6m[N@PW2CШ]P]#W֩-Cd>Ci0uwUJ~٦╗4Zhd* '9ӟ=l<>_ADg#@^`D`U\'ۼkJ2-%Y.IfI0)s^շ ՒާEtsݼCc9D[N7 [51ZpE5+.::2񲰗2&^&(zp_B>L|[zmڮ,`UEɦ4jMKepW؜E.BD+DH.( p1. ~.n{G潛WxM1\Vzt6vOkG^&t )eu*{մyenȷ"~8H$=1͕$%Гk!0OWDl6M7{Q.s) :L{q&x_u@t iFzԌH[z,ԇΧ b Ã01KDk{}lMLc^NՐ&קI8fWX||vhE}uHIME2m-k4eڔzZԗ N"3ɛ/LfgM/_s#>U3ׂ#wz͑G՞vtE)/Ssa(.]!<;3yxhӬx?aTbs}K*ٓꢰ Ry^s $fzxaZ!XlDܞ%YZ1VJE F*hNc7 5zCϢ FmxO-rҔgd$j+I~]օsx0+'#lTz Qo \oWc"DsJZ" %'dO)LSՆ)N;]XAZ*􋭝HJF V/mM׷i^WGnלyOmn@W_?D ,]m۶m6c۶m۶m۶w ZZ=("wĎv{km&AG>jLkGO^>g,d`} !cAi Ut#^-I6w$x$_9c4Λ!;9EoV:~%(y57mД2.$7NR<9N!2?+9ONe@)^B_;^^ !UA>ׁogs>gX}ʞLH9D5e )F_X@c+>!KI76>&!F.gA3\ī5w~S&-f!,Gc >9Nx| h^Tg__vdwgny|[o/ٸFM[!)g!6TKp^HIQyFo P\"=73"]0n?{ancS_DW} wH\` Yt= z=<ՀW?fHV<!K8Fͪ SFP!@^-laZ`B^F`ydO!;r,8j^yLCxrN"%8m=$[l}.9\ /۱hIH<0wLu`^1EɷCsA,yRvDI4ѫ@` /M_[ȧU|?v㦟qr 7'OZAd1@L L n5Ð ܒ #CA L3$6̊k5 GRzKUOPt#EBH] TCLLɉyaҡg'Nubʃ/+B @Kv mʭSeѥwfoޫD$_+C]ъ6И3YW\dȰ*W> ޴Y~B+b`R|}y0A.LN|Đs*Y1=)1h$UǴQ,W&c,B3e9qRÿܙ7x, uOFhSh k6eT醵]U)G ^9D,0)[Pmy6F?NT{ FYFzETTźl4R &Q{> EiUoqcҰqpxNFM:$Cpif2&dTmr|aQІ4FQB-\k2e  Lyl qOb3di[οL~ +߂HqjLp؇QJ^cA̻!ف"EwŔqd_k : ߫n>1rs7bVbQ:@|"J~, j jj ;bEvd;gte<| YFl,Ҩ۸n_W\LVC.Ű=4>_6*bv)_>O{;!e$ц OxĥIy`2 n YuE%,Ȏqf9[RbX.'3&!x}ʳF[F#JDuY2ߙ+Û  "w1ƥ2xP]H; L Ue$͸ 7ey 'G馤;&]w`4qƉY;2պIj l EJ]!pg}؅Vc. c{,n'Ƴğ0C0n~ M7jC\9 ?2L^fFB9d^EEDzP^0eN.SdwzPiYAS{o@Tb?WBh9L܊Bg{><ބ9X`eA+RELv$4f@{ 4S<Ё?KB1AD!: ,)HZDΓ)>Y$&;xrZʿ*F%N3#Λ.MFZB=gQ$݆C>1ZTm_ FSt {G?ً# f9&R-6;Mx;8i"ÖE,r1kSYbUGPSiݱiy|i*`XŏF҈y3l$s%Qj(!Ul{k;uCD[ j6B B-“! rb`x^_=um/Vzp}Cbm}/[?*b/vLmXnGH2QF+uErah˗ #T}ʮT.q u ?񥜑}~2.[&GlF%EِNQ) y̎rHZٵRAxꆠlT +ZZ0>Nf&v6=%z ipU plBDK*-\Kt\,zzF ڕ` (Ny?68ch^lFTQ=,%(^(@WVlJ0` N Uel]b\yug(VzᵤEC|̬ NLkb(_t}8{`Ց1SG+DCDgtw!߿~W6~UGev'P~#:wF;ǘ|=R ,t J#o'MA;%]R>ZG0YՇo>S%2Ug7 n PͰUKsdz?;%>]eQ툑 1+D7?:yCy? 
si H/_]5]s~>=G 1NOYwC.MvgT#WٸcrFY X\=}ݏpcKYx"Htfq9ZL$Pͣ~C{-!lSUUA 9A VK4:/æ' < qD4ߤ%oi s g?!z+!Դ0w nAwWE@r$J+>'~="5qn8j%%wPpVӀ_\r╯7f?)8q? J<'ht~::z}xk:ڠq6p!F'aO8^Sj2K ^i:} l>^S!VEb$p]Ih exxIȳ>T[Sn,b +Ax2Ra P@Y`i(A X>%(+1L#7kk5/$2f!P3xl3"ZЦh,ṷh@g'^pu 2J-va{K̪ Ka?OFвy G,bxɬK!v.lP'|t^4ПIa/Y- /AӮӜ}L?$i7{N-ﶒ(tqNY`x0f0` ?Ջ86&\앤b ]1O忊.=9lAPtazpE>?mfw[4mO'=L>F}a;+D} K* J$9$$ *kgv =RCc]v5r!U~0]X@B/ŎrN2B3Pi?|cҸ a 7+!Ԯf_#Ck? Th=s~)(XL'-/Nb?hJ{ջ4L}lEo -!]8&LyeA?%8kC*bt!Jb9xbpV(f.㔠n͈]-`ZDR/ߍr;AO2O,b9%{b\1Dd>'Ty3 ԛj=QSfCnd;}EolQbh| Sm %/ YN^?g'ec~#+\TEaaN 9|+v?]7L\[m;չQ7t.W 2yjc!8&7at)UQC\(Z!@&&T-TR- W7ccQI¢NKTmG߯j$VRc[oR9st 9]mOv~ͧ1"E1QI`8&`w}Cwe4AMo2oR&} !Q@ %NRAɸ8P+]][Mv2DxJk ouĿ⏢T,No+D1@÷[7+ 8 jKN ۾ Q`aCtu'⏾&=XDpw0!`z.6!:S|`)Ōþuht}8-BW8mXe(ph+A.el8 vfMC^});׆`學i $yEaJw63T³)i+AN}+Qc3IO3˪cq@oR?<Ʊ<}Yg>i^r8-߶oVA[+/.Gb,k7v sG,lT,`+6,TKzh/27 c( R'kt5Y_H [P{bj^vh01((N./>Sfnmimie؅AHY۞ۼzI15xY|U߀b2LY_^`hy4,&2ϥ$ B\~Z#Y*]pMuNE +GP %yKCڦ+唲Cª);DqHV;Mö_ F#M?Ҥl^:EppQ'5CX*( @F#QҞ_;EhAI14*/ u1Z q1߫==!l#vԞY20yL`@fWH:m},x!7 }ʐ_T( \9ԃq:_&'2u.LS9+$n"{X z*#-*QU+@W9!Nߔs EdzTB{LvmS422# Z~ uaI9g|6 xJ0I[ +r8VD22}ab/ CQְX3i~>7_aH$U1eR_R"U?j 'Mpԓky6 }E=`+|5lzk~"JĚT6qxAL4Ǚ r6T9xBl3$BA(B h8<dz ZX_NTPs=K ksʖ[.n WQ3W鴵,enځ%yϻ*֘A[ 0 @hi[hj4trO%Egdg6s[_cpg܉tmT^Ϯ$@2JP#CfB ܏HܷVma:)}qfa~.A~q*CbZbR+5-Tt[vf%P>-/\BD)T&T.d##Cj4a4%"N\h*%ن.tlOYh ~#]J;we:dQK/B |]Jj];|:LG^-^e?0EsqK2 "9m= mtID#Q״@wf4o<=? B+5yCƁ8'F." 6J Ҥ@=]`:@s(t Pi=fω]ȅ0(pm?y r4!RN Fl4 OYYI-sD!R*Gh羽I#q-8x{ժzA7ۺm[3:m 1D,F0)Spe")돕MBxतF>s7g{ p}(52 P4,dXv8!ǶQIuϔ_ ;b4;!|MW.u7o-\ .Q]e( 9؋#Q@0E>Oc,S_6 J"wVR8Ȫw!SeArKHV#nnK nvO?nƠ/0)}i ՘^F5z4~yЖ7֒`pg> -Նem:nj[p8H ͤm)70^/\Zl,,բit1W1|(,L '3I#H_Ü)xp396M'6qXs)=$ܕa,_=!8:&( cjƑ{Q" ! /{*`\Ӗ@0ntXYt'By ADm'ҵ[n ?|,e6 MHdzՠVKOa܊UB`+R19,ͬDm1Rn7ჶOd;\wrMXI<>$C42v Qֽ {h^QY҄>]̌ṳrHې!щBS屢,\]E6SXk"unl^l 7(|Q/^)a*` _5uRjdUNrP^q eНj6SA,;2]?6)o\M8w?VtXrb<~Ww nn=^ph~VDW3s;G'y27q̸Q_- [3RBʌ)X2h,Ъѯhz dq;5aK^PC{|XuiZT4w͝B4$H 1T 5j.0|~-Ă!6&rR$]qԘ R`~Sba~@NLKPC`N)Z[IV^ hW*bO.:<<iZοciL1ځ&'3-nW~\%q7Ӧ5B{P_ ۙ$ 6V8e˺gOp=㲲 crݡB7϶w@7GMeaAiK\NW 1LݩJ (pk1Z77?>=?{r*4I9HD$F]I:ӧA8.u/ u"q ޤSNa2uew (.W8oG_6 a@}vy5s_k)8]Ah%yCǞ#O^9rLo(.aHX=ܟ=NV%yy5(>F @DQmBk"n}p?6J+4Рi 1dJ~[ƴ:ˬu\v-SI7֊tDéH&Tz>etzs}0^}n?߭\/'[(_l8c|}ҳ.}C?ڰ]E젍XnN SDC DJzZEZx`NdK"&&اlf1(xۅc .@H˔H*I-D^XZ@o~}Lc:dٷmALIA6f,=rS~ˀ8Tywqؒ32Gfwk!FwojHbŃLG\,V?\gkMla^LWt[)), ܗW 3;:hQ!d/B泌EVCm6;Ʌ_XEGEL<6YS_|pKojE>5͚;c< c8?߀CEI#`c3Xe}*!FѾW!dp.6aP&q ױPy%8GւR\8>qG OFA 'DL"Ա4D@Kk<3oY94N&&H]6zA UbB«˪P<*`tYPb%'ЖD@O&r{ %%c  ޟ}0}T>]SHż˘cŅ#e$I̤^XI[%hxԦl3n]Qi}nBls8aw.ɜCcψG攓qaO8cP8:W'aZJ[knsN>cP2GrjS7egɷZoCR|_͇O˂o@V&xj\x4zS-|g NLq k+xxlAz*h֏AE暫H6Yknsal\:Ccpc}%nWWR$){tt_QG[t˫NҐ&Wn#R7F8ko;%JSku8]>G sMb8sb;!& 1gSkӺH<'8tauN%֘Ӣ jB93 ތN ѠWq55U5y8)%#|{;MDܫR/JihHMja%;Iv2IbsCI~ a$wJc"1zu(L&5?1)ǦNOO ht?{F0zM _ =bD)ާD$ (UB6 eKg޹z7J"PzWi9HfLaE[\jj7~ 7Y{zHS\?9|=M~3QHFjQa>ͣ1xS> f' ni[Rh@ ǢZ:9/ڌ!JߓAXL3opV2w<&!Uۄk8`R$ASiqjaR]Q;QF'#SΡɻvd(lZ!,p!I b@q24fepLoХDx0K?Q;|@?r6Df#F|bVpXvE;^![&dCw#%t_lPE82NAn9Vd1?qMOahT%@Y Xmr9w?( |&|6fΥIK3i Qh{Dw"L@CW(rLi9fFLid8w$= 0%M loX WRuPV s)l*(ˑdzE tGʛɲ̦O}Ky1~hi92e$F+R:)ajCg;:j~y% ">*w bU?azg;gp6#U"bYl:eiYŐx=iC{#_/= 6.³.] 
::Dxn2us|zAaҌ d>ӄ W99|5\xeGdG?^3Eh X֕DlImOyx4VJr$C/> 'J;;L־aP;kY;VB[5Sqʦz3xBmEݻh~U ^mS}r\gZ4ߑhroEh:HX>YҺaP6 V1+arͯ&MV%֊N?{sQX5, ]k._gF'mEveչ*]r{zPQZaoX`+8:>zͤt!g-j7ԩ-\]Ω[6*VP~ήa/gMu"Zu zASO !ĀqۖWpmXV8b3׶9.2vmq5Vc+C]FS;0us^.v˚]8=&)v,/W}ȹġgvlPԴaPn%耈 h]a͸svG$e׍_H?"kSk:4r}z|!/}ǺC4}c<'%J:eε Sd\"6 X|Zy]ٸd1޼V *5Hu V#4 ̀ OUm<;3Y d0vOhf3YQmcT5ۼ?qсn8-Y(^[O*7!WO L^9aO_@ڼ/tNBW iG''^MR?;5:[*-|7 $Ooe\ҺCK4d X@w>-h ?Iޣ*';3f@B6UX qLCj+LM#]$ަ!U" Y}gNdT[yA[h.ܔJ=iNe7CYz1eR.\&o\2 'g56kSr|95q yo߶la7aVz+dꙦ+F%S,hCz4v[ 5d<.= *E^<ư _6ͼ}*|ācgQf+*;] ,7HӡY_syG9yBeB%5!Udqc*vd v渐;pb7Ȗ?x|ujZ;zh ܡ3nϧ aC̺?j{jC.`@L{ܨ,G( f&Z}yuGf H8?knʵ7Zv[;~g\kG_y+tZ,2_3%V^>bMQʠ򁐲yveO⟖Ʃd<9]{fH`iF$Ks W& vn/`+ik͎_Ԫy$Q91|`o\\i%$pHi<$ɠעcwHQ 4H^J 1&iG=CS`Aa[ㆴAji d 8{rzUyRTQg]ȕzIÏj>.-EzbÈVߖy*RWU|CYL Qg,tXDV ʯ$d+6u8>8{?.Ǡ<`|PK6HvHCȸ00!h C ^$>w&>}=/{SB"=sĎhH\"sxoo#Tq2|/\+cza S i^p"%ziU I(+F' JL QX;D/(EQG9ц a ùm" ASPcrKE&ZP;n8${7[DaNk8|*)nI"h.۽50Fz^I*@ϨdJ-oh˭w& g<h$H7J,IxÝ r0SO*_;2;<ކB[.} —!;dX cq RBo'Ֆ-M~fGmm0Yؑ>GbIzNÈc@ݲlXWf ~s+ .sjO@C"ΐ 1I`:.d֑iTB$>8aƼH*.C1[CN- 9j7%0yL$eJ )fB7hd>Vk)>R jiU܊y@ `(8PMpҚe[a>4 DV^ Ll<#}1ħ)a<5OVy?U$Ӽ;LH ?X$?dAM/'wPC]Z)㧛x3sG _1؆-?\(DŽZ2-}k_M_wdžؑRvghONqC8B#e= Fuݗ#qdʂu`i| ! Ja;yVA7ʜo)0aTMp EYa%gy!}Z6G^qq೾x /&w [7@)+A$,N#i8 $U5R7XL B$qllUTi6Q j^ S̫GKwL*LRWSf="(1W]dz!)W"/Ns1LTZK[M=x{][;S~jr5uLޡ\uLMJǛVx d^O+~uWL[%:ԬkyaP k FˈE7짒p_@UL@G171<~ {]5''8p&_bћ,ӳ4:q88ōz`n_1`B3=ܗA.UNV}Y֘HL2. fE-p snTV䑜TC PRn<Ȯ罧0K@x4q$Ԭ$ƅ3 =JR=7EC]?.Ǵloma;-ћL;jJ9d] B}5R/v:b@ ~d? [Z(!&H8ʗE ;k@7/;d1tWL;8“:.:0e:9t[e h:%Qt 7>}?Z=pDdj c𩦉U[;;Y"`ˋ-~hZIwQ~ȶ4٦ZQ»ovfqws7RMSK ,Xq~K[]b1("faM0554˞d#8a{>o^ [UXDEQ>jRдZ>f۽tz\CtoJzwɌ5r [7-pJ) ;Y+ASn#ݠO6JyLB5ʖJܼHM^[>Ƣ0Ni0(YQ Wd€kr)4am,O::uK~`\Sv~kLK^q+'mei^1$x(PC2PFL υPa苣i0^po|,r8^?z1Q\\O(-OZxNb-fӦeah Tvʗ;Q [gFali,5Faѕ3:d1[ne{թ\ֻCW2sٲasرJwT^'F-Gy Gx:ú)AE"%{ yIh/D^I1{%');>a"IWH{?X“ej/NjvJ&zyP9mNkmZgcI. P`1`tC:s8.Fk"4K5ԇ3" :PCxzxHcs2r4HW`ӿkRΊb\`i8jYXuVM,%+0XC|̲k迆W1 "$ okfssd^lЫL$ȹSLOWfyl<1fЂ)d?iО }Jd}u΅/y" .m=}jTn+fb<۷'Dj(*i+2ae7镓,`sKNJ=YͦFʃ ~:'=4c\Ƈu¤62V^GY07x aH" K=)C6GM|;U$=gh j#U;"vV<- VY+D~"` WB,>]XQezzĕjz왧 Emr:/{,1&W(>dj ưQ-jFoJ2,~-jG rm>) !Ųߴ) I6+Y,YVa:?AUF|,? avI+ve5/phOdo{oQpbvU:϶xyXj mY"X' KHN+w _-I0>_%0̅7 d O`֏l?q`,uUj7-i:`4;? ӈO <,ZmEv:$*mURr5U%*j-}{8g׋&'XM 0^YMӟid.W]^̋֯{z," -p9֙6wU4mů]RodlGxye=[Q0i螎?Td(Ojn4"-‡dqobwR\ʀJ6y]M]|*ǩF~t_]8*WCz%]]UU8F&'t 6mY`RHE_Y= PIjĨA( {1e"ܰ5$rG= 3wGwrҽm,K͒g C-X'wP-Ҙruc0-G{Pw)Zyd{HL ֲaV !nK3HR~Ȅldt~n?} W r.^҈7ʟV 7JSOP:-)ђycPv#/.Ȝ[C[{^:Dpc0e$*}UM*YOm(uv! ͖:+ 0E!'$L=<Em4. T '`+M"׆zړ\L AWZQd#ގ;@L]=Ipd.߰톪b4u{oMΖaQcIEߓMܦa'udߨ)9rS)#G/ +q$,lm9C m"kDIyDC(~]9qy>o\_Cڜ6Jb>W_&d~ܩ]3**"|bGLӰٛ::Y:9Y:Z[;XNlؐ~kmKU sXnvJf+ecNȕ&id Lq} 2G"iмlKAzQ]Lfr|jW8Vv&[`S#>`2j9T1d6&SJ^k]T tŐA%X>,VoN-o8$C`2u굆UTu`0H7v}rrnڍ3yk'n;*Rӧ]:@#!QJr\2h0!pSO*.TZALf(iL6[=f֥3HT{ƀTatҷŊBqGs ȽO>DА=iWFBKַQ^}po֞wr'ɊsҺR";~}(U.pi~_lg%/&97#K]Gd*PYW}p4 &;s- $c=6b ]32AdxƱBL`>zD2] rX`y0'-0KX':r< L(ZmZE7*Y%!1O?Ѵkdh>NW[:)C^+v=z~;ڿ&qCwz Sn~Z&ou@ZGw 7i>Gyt.&;;#L_f5ڳN.ԣ56zrx߃X%ҐɑHt}p_nUOEh8z-gX+N/@AoJ`?BgcdV,,J%Zlš<,% BEsf"ɍio a%x"[oe`ڿăa T `궙8\CGt &H\Q@p1V$' *0^egLkQ&@ڠ_`ތ1SUaM٬MEU*? 4>p3w7W%Pdd=j6iF * ѩSm?@AƳwxs1*! q,z_e3w}{KMKݢ͸(^\2&(8AHKH̱a~ 2U~__A.PDE7D2\\ |n2 ݉9)aG*h pz7SذQEװՓ R :PV@qLGfvK ^C:)@&Gh6U%L *rS=y'ю#/)  JiyP&7oB[ەz,Yí@#2r96;wV\u#9Vv,鰑3ۺ{l9REH )`%qէ4P^#=[sb<c21?掼ˢnA'_7&Ldi3n5;SfXn7\`WMݭIj%Yģ g_ Ǘ#tn]ffiFaV8oY[&41KJ1Q]Sg4[̑ quve^7@ Kq]Tb\PWR.d{vv3h/e+w1Z5&Çn"bMJNN=.:%pՊ{b W! 
O# Kr,Xh$l @gEk`gQ88AncȯZHxQ"S)S~~*P&cuGOqMZ(Ⓙ|Qm!U|9xr7< : `v)u!os@B8[k|vU8#cǾvm%f|o}&->|֭>TnXizOЉn8DnT||8[_A&ٴ Aܓ;sMUKG#m Ԩh>)aƭ&V2=AkSq g{V`vLj/kSѷP)/Ri/sY_unY5}W!U|z 4XyӿUӲnAՠA:D6۴LͪƓюLX6?xXH&mD1}^*8/'L*/RICG cCXi\6hV4H%xp4[,[4Ӭf}p-.*+;\% AqM.1)+Bd{Og&`Nj(I|i eܪhMyKY:4לN}wo~L^$m5gʗ!liTPXBYZM/uhdk7Nh]銞o7v假^X,tyw/tܖ.km53P[ZPvzy(;BQLϞXt689y{CxMsQRߑEw9 pC"BMNH,]W YU-Pm͸HαG~Xߎv|(J' 8N[IYd"䏁3,X l0*Rrb0 ׮!pcHm=av&qvj; n(A⃍alcᾂ<@7# OJZvS4W\ѿ^ ^Lڏ'υi\yCؘ7dڰ9h:'@`18˭Fn#aGS30@۩_\v jlM|m* J Cz۔t[RVZW_L_:< vuH m9otjVCoZSeq>mMۮ"4Lzx7=cޛPHN(yMUD+CkQ#6YkjA;H <#H[܊jY"~`K‚.YCPQ}(Vsʧ><|vp<*X >U,(}C d`Fxe/c8Tn5!yj2K PȬb?zk aaַB0Σt߂SVWW#N* v͜{" :Q en@ Z_)tXDO,hlj H L6q Sh_ $,l UGo,Us F EE OKW(OZ95|&ܹ)o*-oĈ jt15#oKbDdEĄ&;CR8#y5X_ : ;0R4A20-; "Ыpեꯍ )#B\n,qMgũ{^BXQ\޽VCs)D@R"A8.\LHtOtli#^BhpA]#@ـD6- 8l`blեқHcuMk g^߄a8b |Ս 2GK<OWqwOBʁzfsޠHv}v""wC69יG;@z_z<d AŁFow,7f)/葑u/)f :LO{$Vcyo0M`4Di|51>ښ8a(TKGmxP}nn`\l|V,xJ;I,:FSyAbĝbA@2@)jgY f1j ;Tj&_SH}ؿiOz wcZ fw+ },BslY>qI/2n{{XV.|roohQzf0_C~ܛȪ)GʣőxQTD둸B'3&gAɝ1mHDb,x׻"ȼ OӤ=JE}lfq=%DS'ʑcn7=nJq `.IS`7]8f3mpj>w4d0퍳!{8jVu[9 A!DP甛&Sl[("rݤĄ%@[-h=w+];Ŕ Hm2H|2Ք~V h(۞\ׄe>]i!FCLC<֊P"F`D8%4Z2R ŝpt$ 7|{z~x=Q%zez5;7͏Wh- (`}x Zș{K˩mW,7рB ؃ObD*A cwG  9 J4.%c3k6K# PMIVeׯSBr 0iJ] &Jiob3vByȞӷ&1TFѸA#dc$ (賹UTޝz+,s N`d@uɧ9uH0qh$>uԭsF8 k zıMj[L5D)g0L YnloHB K*4Ȟ }( DY}ߓd3|aߠh e PY139&VŪ՞jnS.#glG#?qj1ȱy|szgdl3t!^Z~[_GH'$|#+ɖ%ƅ7-%9O'r G>{?t4Wdd^˛ ->ϜWC7RO~}%&DV^ y'NsPa~p?5Dk$`_˝u|@]]ؘTCPoMFBB{:QQ y)f.2R,V)0J  'x<L mĒZJO A܎@RRT Z MPryQud5w ]bd/>i,lH6L !܅Ʌ\ǚ|b@#]A>Ga=͞4ͼxJ!r?yn`0 d?=zY`0.~ʽAA x,ɯZBʷb]MVx8@0 ۑAO8%=>Sb؟)4}(:)rˊVv-U9(Pr9dyb.r OYR;Ņ54o ̘}mg;wdHsI Rp/DFh[pMfOj*eƐ'71+mꊡ퓊jæŮg߯)#WԕInH;H5PG!xpS U^:4ֽN8=~\dDǎz1zh @MWj .WWLQpbtaW}.N O|ChFj\I5>܆TZSTf[Q:ÇdT`VfsAH21q+W%.\ׅJ/tB]+Þ|$HaŎzlfo ֮b0pa.<U\!2,iFN/ j!-m0{x%YlYqEE ? D G|nfW А BUނ̪t0tg / @'WHC`|fE;8S) [#4)1˰٫8K#\ X6h L[Ӝ"$JncPȎ9w'’}":tjnՠer{J8z~!` 0>"#0&dCrPLhi6;?jiF:739 rNy[?$ cQ5V`iᾒO x~Tjף؂| 5phեh[~y?V:Pc*~VRrJhEi:ILi&Պvؔ)/z򶨤 Q`Vo泏-hk#cXjkcsTN~l%j5)aer2 4$d. FUWH;8D sj*< ={0KMRݬ7h4QT f8cA,R]#/Ŝ[q\p=A@\@D7*x/Ҽw}l5쟷(5򱴺m E8o^A%VvCkͱ&$YcSOF3)Å"џ1ɢc:{1˯c;psE8}#E%Qf:S?ZKJNd"*Fd5!~>фw4AWTǪC~+(M0ӝ%1op;#_1 =LP /Yց||#1/{q;n&OɑȰfWc/K:'U-V^]ևZ_fwMMCJZ{7ukU9/;^w{q)K !W=æ֊3$xV(wfF`S(Lr ˠ}إj a<t1)9;쬘kˠg{|]ٽ̺, 4gku+A3DpC6`9KC.0)Z3Ld YU"E5P&5z_c_!k _Mbmhgd-Co@bFpm@x=ٽ^g}Kr$UFAEěQ7IKc1{vd%YެAUZjic(!ZdН^\IO B6q*@(x~_ ܤk j# dv6qF̈H "6e3XzCu}p`4n"+60*jIlk Ku*&5|b38厳Qp[˫gA+?KIfIuk+7+`PO|cjk-.N@Q̑L^_lQ!ȷtJb^ &,bvy[sRΖ47:߅ ܜ6<o 3 "E4X5}Bpf8RžAycUю9^?;k?`Qy\/Lҹk\LfvPc|9Vdy uHm}R`TV ;v k8D_ֵiFQk?.T HO+,V[y :sr #0d ~C5$EMHύHXc2d7&/^Xɍ诛976S4˽[qݲyD5CNO՟$I-YXY>CUv- U Wl0^jJ6YW' C[l UȰD+<x5JpR+\//eXT`ۮ-nloTh]Qdُng'CdlXW'ӓy Ea)cܡLOjr@y/b]a,-"})ueɃ!#p@(]YkzYC=Mbrj=ԍN'u[ ]BBB6F{[`yP3TLwyL\P~7'2u/gE޿|ؽm"k$nx*Ѕ@$=\'%. 
s[ؐ}$j4S3IRt5{z x,$zF٭ٌѺ~vdȄδV5^"Z3@#hܴ iz\M B+wcڧ3~fsKmT>g罪30$9aj'HˑoR4[beP+r-XK (YlttRช@ 8/NhHB&2Nls >r>ԶϲtV|VkeXffyYۿ^n7Ϙ6d'*51ǙZ|D;uR_eV(B'EWi%ɇVwl5onΝEPDFŔm*k񜤳2MyBC b}`pvr]Ĺm# q+ KH'҅GbW̨WkH+į\MW]G(F҈^ zO{dh_W./>fX\T\=ȕzo N#wݻR AX,WAMSuV6\s?<`'\O<] ( xl4eܥ(XLh=I 舻p3iLnI#Z4Kz?s2MCϸH PY,fC  C s.{u6=MW)|{VU{uƻ|ڗM77E*٪^yRb|C_BآFjNJ|Ni"RSej&[LCSXa+WC%iGMNQ/B,W!JL]ZrF!yKUUiÒʚ9f6/B:c[Ա,#;o50j$Qc}*g뗰t+ǝDzN/ʹ!\3J,#ĂFrI4stAP`3ˢfÂ| -ѽvłMG'v6+=A 'C5w]GEupw}/&Ҝ$"eqU勴EMQb&T+JդEMu̺jRʠKKf˒ĊԌq'Y=m̹WFMYmOsvm41KysF6yix-qSiVjvAKf^nC"Wv A=0Py zE.fQ5VExshh7uתoϑ?,-9m@vcd0,Cȫ5 /W,z+խ f[X=L?ZeU(k5^ȰmWUC @)zN϶x+QW#NH|[AQ-_ArQ?RE+-Ew.r-c;qx*WtZ{Tɲw5c*P6!w-eO"OvOɀ|v4 E%e\7U5 %p|5SClgdue !F pYi۶aj9l]4_>M))" #׍X w:NB7t#tm[/'1'Zc>W8(H*wY:#h8ADsWm@),LX=b+,]ٮ҃kNP"_]ykʄ 1Zjtˣ@q4ڙ)S;~$ t;{M]_#PҘ"5ʚ{0tk?V2N_ !AxMN9JTܤ=qcM?/?#K,oK}.%HQ|Gz_?Dߒ&_"#$[mk`l}w$K'\:/|iXd.Jvt%U"xbTLe+6B(@=AKHVN5Zar=WPWY; bY{rcHKH VRS'z\q`u-BuO,AxHoZA+|MQ r_lw]qRP(k7HM*S*yC}Y:B yhƝBskq^}tZTq^X6 0 &ͯYe=#?A2!ҡS2D.^ϓ{4w]k(+{btn@&6fhm$=dgkt Ҋ͉QVPT#<3ӸC<<ްno. WLB®Ӟ3,zTD *g=y$SXښeS}G4[KhUﴶЙ6isM-PϚаnfS,KzeU8`0{F ( ާUH)`yp^yöBPwQw)'fK=`ЍS1e^6ۅDQ`fm$P]]0s,{L)I).@`"X^Sد=qDXAf=2tᔳ3RQ}q?]ҶZa-@0˓v?5RnGc`J:F2.A}k NZ]"3QCi.~"O 4HmPñ}#Zn/%wD[Όaƨ[cKbŕbX"g/2nI˹iui^Z \?8ׅ.r5k*y %5 ŕr܁:$۪ ݖ7Y^Ɗkߋ##wC`_:ufd<ecOCL YP*;c.8}xVc|Wg0k+ۿq@& swR )SL7}g{r=8rrO[z}ͭ{{z(+>zg7O&Dw'i($Q#AI}B1W5CM=XӱR0@ HPZM_k iQ0aZ@d"vc.Q# ;sT.SP@FF5C y''{[ ДѣYb83| lS 3bQ3%$bjp4괝y#W:)'"rsZ[n5q=1Fvn=tz5UJ‡ƳI,Q島7E]NH=XOSgC.cHO1Rj<ũ$,HNv32Ti7qVDn a<48+)*}n<՚' `7EcY2iQEЪAE78|:G~yF*c 0j[$+l~hv낢!z4-gn+(j.YRo'6άruG:YPG `K$w*gY!K~sXz+%fz$7YS+#'1MZkaQwސ}\*ݡXB-jj&2#Ӌ0GsJp\R"iK[Zni2<[Fw͙^OQ$ilԯMok=G@ z6tՄ>iՌ O(`8Е2Hm܂$BpNUv#кӈgS$Qyt9){ uU*!s' cVlWbɨ IiK#%ףk}Ҵ4/PT, .S=U?~m;.iyA OP\JI8JXd+0]S?GS4R/gFaMZ±z"[rM5Vi?B CEmJ4OQC}17>0\SGޘTNR̴G8DZN"_?4bɳE?A 4j@_eRn::V5,o-N?ޕ> .C5B{iϫ!w?;?rUwKH/LɈw[޻b~boyYY{jYuuj^jGu'Eʸ90-דR*!|_Y1C/2/'+a+/3/+uJJ:},~fAΚa1܅Х旂)rŵ~$*-7a_ ;_5?<ױ6:IbzI#&([c'nNZ?E t*e%w-koр6hlG0P4.XRxK//F<:G{N.y 1rR(ZlD*biZ;.^+|u<@u )=%9AVXw.?Y26 l7d*EK:S=/d s7^<&kړfH% G|(ChY (2 eiWaU—~qad(8 3"DrݱvMͨ>DL38X/JE&l N 0:EV۔B|n\*l-k (`z= nLz@*Z$C> բ@iEWceSyl[j&)2 .bڙOp^20OpOX>=\%G .@c7Ғ& ܝڌl%.i[7òGAcSAH1Tufb<Vv{`)Uz@0RQK+ AZ]a=c>C}%1oSAge&?RclAIDrȫGrl!- 8Y .$V'Y (ޅϪ>AIЗ2T\`g|[]!0Y!cq6I/YU_{My8u]?, 柈xiB)`"f3\-;(Yj3]CG(;ʆ NYf@r\yڨleu# =mPY (*\6.C UHk@.e*Nx+lLǽGxIyÇ>6*GvIR9X Xy(3F.S1-vN΃ڂ h"* A%A;Ud/9|uenkC&`cs$덠`%TMdBsD9獴)dfMN/[k!2cx`TeBwObgh+ss M &AӉh"׻Mm&Y웥'@GEGH~^n)Jd='VKe~RU"`N:"87|=jofC_a UʍA 2IbjfwTٕq񛭐M{?/Z=ĈUv1p%.U6w D66<2ä;Cԩ1 _ldћX?>p>Fz/, |'C@m5zOg.B)~_[Ź선vl{ONnk 16ޕ3rE s6o%L{|ab7g^u娞5Rj2^}xOcydg>Cݻc;(*?Ñw׀W8 k_N0_԰Fhysэ-ʩ5OeRTJ6PZKqge3{PN]IUBi!Z3ӗTh .Z:[,*!I`\cfY _ZW+ x?11DWJ;jؕN% h2*i>ߣ%wL[vp P[ 6Szs8zL0ሯ.k^[|J~Tie (By)gTN4.8JX9?xJ=r̠<^be(]g"n0e{*~DR xGE+ |œ>;S*3n7w߮t'/ceR!PPO݌U)@74KN ~M?n/,&>_.jP$g1q@\P QPT~G^jV~Ep8i]0KAƢy93sE |Qʽ5Ix!^wǚdcD(r\,?,C1ײ6*FaNUĸwQI$ _Ɇ:V[p4 'X|JKXC6GMM#=\s\QNECȒ!p"zZe;^UonHqj' = lfۧrf<ذekuzI'~CfbS\p>7JhP\]K5KH]Z$nL:cMDJ.44U'ĜA;5qtO"  &] 8ɝз0+Z1QZG柴7+~;xq+X?[i2F[)~P^ =҂߫=s'c2Pe{r$c눣0,z;J"jA8;Tʄ8/ّ0'(J~&[-J'{WWm_/\a9Z~xiVJtwE(\~_Q,}is>%ܧ0tME !e* OĪvYl`~Pl&r"&[>&/PjdLX_62NtH1z*p@!g$,~%O [ L_O'& OC;?r7?Z;-@EFSoI-)(j7Y␄0?\ Gr쭹|OW,)۵yz`,у?v{^W&_nK!FO:im5;ϡ, _d[Yczp P5~8<9.υ(1+A^zUݷKKQN5L~?b9$.VgZN-sb_fza,ɭTo#G~˅=]]V4un+[yftv#|#}yȂwv_0(h Ι z6Ù(GDAIȅ'c'(*x)m:Җ U}*rE@ӟHED֭,i ;L,w1l 3$G|Bk+gH5[_C`emF#:G{]jnX ]cC|B 17mЇk׋'+ucQx[IO$ }ˏjw)?7tttD\B钹3a;Y?/R퓭);/eg1$׷6+h+vXkS8ۧm~S֪l?DjwwqLZdRxS4R'ޕv8e0ڭ }mG"7Br룱{H'"nZ5L|X Cטz9 }z9H奃-%xxnž&¾iG#Mɿz`F>DX9w,+\O=eӦOk3e6k/4!fAj<7z^;k)mknk@J;% j06=d/h,%#faE*s!i!sH~m|2pmOZ7 FOn͚uF!f l2@3k CɁE!tmYuV|odMQoǶI',0m$?-.ttj &zX 
ʯd\GJyݺE'ÉGjA!pvnLtȇTCӈed=M뼢1-ր >y8m"7h$nl'Jz2)t. 'j"rRH6cp\]Hguy>ެL}}!5@"O}6@ WWfb - L<_!脺#i&KQ#H7PBد@ Yae32>b5B0:ƊM.x@FuM.\!LT?{sx]oha u$r21HXr1-H=5P<°e͞ޞݎqe<#0$zBqua.9VUoKwk+o.5t )ŕ:TъlyH{jRT ԕ% 4ՇKabV*pe(7$N3&^0quųS}aޢQ\d_ E$6-6::z-iZj7e@rUaA <._2B _DaS߻ p2;#_S4 J9Al=%gy[gJ+QW:4/iܤyDH>_q'#ah&.I >fXҠ4|0m_jygHx-lB]65> 6zSꮖ\wWc3UjZ*~u7=^xJJɃ{fojuf+feA8װYOI+NKiOuʬB9+!4@N1rE kDpIpǿj.JNkjRS φ{{d+v!lN\ng+'ۜB.BMh?&3T#6m@N׌(bY=9{l ǵ_iłT>X'fWr]H&=GNr oGK@}Ev O WV -W31*ajDMQ8g)P[П&3JSS|g.]R){$D_h7[>ۼc,NgcP2D<&q@',B[p9!8Cegٽ[bb:|f _ |3] |mx+7)V'R0+wN] w?' [3kg;[{GgsGB5\xb3ļP.mYY!1cDJq5_ϐnb(TS'sÙuMN]I;cD 7Pۅms2\ `8Z )SDBgqIR@>"Ŋ5$  vc??IOf52Z8]^HYBr"63{]4BF۾8 &3v1O??-760 e,.X7S gOzq{TW{+od߅{uE;l*nW0I,'׫J$b-M3 @G5*pxe:MK|(Qvmn/?l[*\Ȗ<]HsE&8eo8A WwdDFۀ23 (yJMHB77`ޱL۱oHu{J!n2"+Dby)de~R#{`|$Ȃ)rɠA( b:ğ]U@hCBtcwյ<<(K(-# ńS Vs: U7kEݩ(0~LW'/C.^_O{q~}VŐL{1!)K "r밣#tܣc/*lݑ6+{lEy%s|`wS #T8ynv>vl;o|[--L8,"I՗ 1҇ghfU+&4{v(<3u~ڏʘ/'H$8؛; ց[i=skwF Y֪/Lr?C/.Wo1+FN&g)R-)Ec`V;Y?&b+{VoU|<4I/ʧ4u+ҕ:iU4v0$` <^>{<󪕴&$d"4`5Yl8P =g(jJ2QRN$+ "aw{k4( p5W0wɞq%O nh opOf-9*ܭvViYzuk^jj֠'ЯQSNh#G 1n\'3=(ut661qv0qo?a_tG|Os]*|®%$i&,5[OHGЭ'w]&3{zhM?-`4y=#[.fOǑ=w A I{tlm zg}\K!vi~NeW. rCFy,~ISdCJ2/!Ub}1-#WQQ?ygօϟDON`Ɩ1+3upQЎM^Y[D5F~o[9cM(cV64RP`:v~QDbKy|Y]~Y^2-^>"BL&$&X9Zq))R3?:Q[6 f)q(JXwk<#Lv("3yW=%I P{(<4&GEL&kX'jHTmHdeҽ0!-5xcy e8(&׽O:oJfޞDI^uXnʗ2qYiOv47>Mpra &]r4P} iS $e\5WAk8q{ô|#wΒ:gYe"@b#p*F]$1A偬C4]}XӴ@BɻNMJ=V!p /!duNB!l+ 4^crKf+6Ps䤼H,զWzgxZ3)#ʸk[ybIf6py(KW8\~q-Q4gnh3vOм;f:2}8{v֤9T0Y j'IOOo 3}]/6M'x/=O _X`Q"| ԥ4UZ4AR2Gm)Q`7ijR.4底iwA#.?`*v[ >[qigY$o?#\؟û}}w@ZԬt6b$I|5 $$)dCGaa )cFS/8Ir^ &Gwt.jo9\\..rf~޳FDl cEZEr6@is6H%bE|#rx?'D;ivoqdw!32[Rv:1-W':n8d=GcoA"*ܛ'M-\J.XRWզc9s(mvx7 ,<ߜf:V2P-\Iycy,e2=+Io,[x]a!n4^y*5 0VJhqP Ǭ.l1۷^|} FÝ2D2G^[֘58LcsHlIv ]?#5FayrAmuqY e u/V/]LT#zti}i 9;.E|fA1`Hc∪BCS+'YT`b@g -Cbt4U,K[4o o5u:lڶG1. SeJGaeQ +ѥԨeN#Y DhND Ef@@H@kvݺMGȢ^8_ L ^1'AE u;Y_! Q;379{"L?"NS_id4k%R뱃;i$HNd*ܗo<XOټ9pK3+Ap8Stٴ7lan~%*/,,Xpߠ~-QqG9DjClXb_.DT *@;?? "q JU;74ugB㡔w\{VBC̅SA!ÿD?йdFN(;J 6whEh'p^lǒG]'"kb&ݠ=fKHc484Y6T[D3-E:2Ya8e, u#Xa߄ήFa!X.g!JFGG߉Byx(jm0U1-Nr$ExB_j£2} L4&wŒV"Y!mo7å=۸֛])K]#kEtlW\ᏃHARXG}g˵S5#n;SzTOJ틉*8X yċyiSkā܎ ;ztΈ-յ% 2C>"Y7J2/e>p_FJk(./.dL9xvݠW`{kD~V1?ˬDt:uOP?ӿ:?ޟr}3T%oM2˪khZCOZ}׀ ' >*8?==?N.ˊAOn`8 #1p_d7ljB[?f&BOv^"oZ$kcAu+p`ZJ"I'`FSʔDBz4j"`AxGL9'H֮[~qgfg{;Q'!BjiI(l=QG)gaWB6"JjdOD6`ooϝ l[@ _e- jb* 1H› 204axc>16G$|Ŭgޱvu:7wqIZBpcb?%P\)4_#ϴܣGI_Lue>&-8yc'Pbd/|H҄=,m~qg$Se;V uRxpff K﨑%UMH?c)kv)>OAcݘ֙ծ\s0'Wt̖G1CLIvrڃkKُ뢮D>'79lAhѰ*7}Gnɕ7H?JⰧA=bNeC7C`".^NN͍wvb>[ iq@ޗ .XXھS˺(LJe5w&%ыJ..ўaY[e~.x`Wz/OqIJyH\}vSm8y4Yc,{@2.p+,eL1y~8"^/\\X#ng ‰[Qo!3 ;&lvv B;C CKӫL7lMT7>FA0s2zjn88@:#qShrG!ͥ%<j-2q삙iXANxZ3A*bԁhl2–B)9F*H.bkEph!kNNEQ@S-j@XN}xLX3; -3{ꭀLw4C#};5 tA(] TW3p3.޽FJ =8 DAOހ9 /']'k[_DoX&4񑴵^CK`3r8biAZ?yI|bxD3lhZmf2`8k7(9&E"HP-KuSnp'A#BC .f9Hc\wo8Zl>Vʏ$ 2I}VB0++o|JFE<ғqc~W-h%hܱ +ue }R<ԡ9z='+'nE:ŵGՇ?~fX2yixO8dxD@v=;"/LITզ6J"#%c,];iz&~ԠdeEb^w. Qw|s, [VЍ"j_Zũvj,: e nP=5̵J91 #(w˸;Tf6珦9¡ H 9#TjF'H~BW}B2?0gwSV LX)bgn,P"C=O *e,iaԣ5f}ѡ*|O {O[w;xe2+UM,o^80u3ȳ~iVHɓ`%F^zarYoC `0-1.鰘\LZKKO!I{"K}[OƜ)x\u^|O|aHef oJJnxkvv2jݤr[AI%3i8OMHj1Q{U{jL\+Mg")wOAT$%軪'ټn7e-L"ځ!w凜kvQl6b\Fe֮馛z.c1b?NG'mw~N+g蘮뭷?0@~]߀&:Ȍ]Yίgv'~]v!hY`Cjo[zӁ)? Y_[z:UEyYK?YtS@% 1F-wgnd?owr8w8FӁ|1*Ecꬥ1U70 9$Ud&#DI7 &Xc̣3UX|)œJ&Tcs 3xCXr>ɗXEp=U0Ii:44i<9k|x}MTbb*#9V}$ ÝkmW6 Q hSl J3^6tZA9 59cZܢV񊴌Vf1zdM>W)(hDW*³?”3bS4y5Xvpegx5)lW$]בJ2܄<I;_pӱFV?_22iԵ:s:ϘTY9FjKf5(jS"ݢ[k# ًͮ -y mhC-+ "odr~-tFT3$g3ܩ|euj a <^qfoᵽr60}rN͛06}Fe.|{s.Rrwˮ~y'm~$3k=D؁wj̦jmBǣ=ҥ R;$rWʃTGnu']K\/yF`MǬ{7??MPT&0(-?&=:z/Q!y*: _?{GsO+#жV]hN1[ZOBf:{/cXi.]gJ2oJO nu7h<[gM/ Tj=֝w#EZ~P)al_hCps/u pwb;==sEJLzc30mx*nzJJ:㖙#o2? 
9~r`ץIZ)Gah}Sb2 x[O3 wbal3XyHկ%4]6ML D52 )=ܢpd{żtUnw[E˞s О SdЩ[{A )wzMVqfVQy Glm4Jh\] F袗Xx/PxFҧW3"^qxf ܬ#6cAajЕ'5ۤۓ6R@zm?Na@y"߹U˵mM m:MebCEG l)<w w.)  >'[ ӂOE L{Ix-ٲ\-YA{(p*z2n j*Se>HFB<|勭AHVjR_Jڂ.mE L`P,%  ľp$h0f~ $J{o]8GXd+9.A+ \QNoynD( h u?\01ȼOx#E*FhvJ&VjEaK (Q{"3Pe m?ymYZ޻mX:>ku$M;T'KL\|3gskoQb1Os[sfhĦn&MSk Yf ]NGH%(,m-&IRZa~5kLTfr|  @ m\5Y)Z!4&_(%WD*W_2i3<EMߞ1 g߂UӪ/?EY9*_;xRt@=?`MA+L#{`g>_?WEW1s{ӹj8sq"}3.G 1fkQ۲Z>.抷RJb|O-,[iGݖgpUb>r[/Z&Xlۊ-2Er|P?v_cd _'g)* ]s_Q'N/@]$KnnmWT z<$%3#n?J#ǕDiuB~|?ulXQQ4VP2)Vw+8>0oG*i#Om𠖋4BgcWZyD|$ VI'W`P1HGpt NAsGiJkvY(QmץLMU\&i5s'iP݋o2B'Nme_'G"  N B -y dC  ΎA#(ďj 瘀#%[>>v:nvQx@:!(}} :]1He殐yp3PfyP#R.0 ׷v!eh_9,\m;>tant3??\9d (=ɾ(5*3,=4YKV<=>Xn(IW!c\q)bEkzXI7B#Lp!-a)#P3+!}ULOטsKJ; m_pDoN7?_61^l8peI&5̂}J0d :"[2-G;4:|a?ak 8W/IX4a4mE eKqixG F9@Gf^5k)Vc99t# iapF0:hZ2SdM**ҔITC˂J[HiJJF藙޳eC~{Y$@&*;'g kz== [ g==:{D,~_TE*:WSa\5la+ }Y5ryޟ-яJPյ}@jk\lB3 [J1zYqVkھ)cȸq{䚃t=ѧKv8Mf |RV^B#؎ta{1JʷN*qnߖn H6G&ۤV|G}l@ ?avɤ"ER1}*ќl/2B-ZїFvbeztFxfNHfqe6oe}qKB3ma3t=Rpt++jKfnR5qIg zՈQ M0F'S9To8!LARL=~v~sXnCU,5u-nE5žO5M 9)dcCC !W?eiPoo5Ȁ%$$N.i,D`SeB탳sk[3ц DN$&.4ThJw M AZ=,˵5 U@^S=6d1m.ZT'Ö!jn4)mXB@ R@٣}˸ pqpp? Q=b_5l4(- K (@1 ~w\#.3GYcb&\:?Z\/Ւʾ,2Y=%7ߚZE5*E0)C8)EzT4fͱ94?.\ %vG3b|?k[#Ѝ|$D #&UV )^5Gnٖ5Hn,l,w=wC,18 6Dcճ[!ao$:U(La\ ^nB6Aœ-/ NJm&oApbAQ>Xr8A3oq쵁H>'?#S) i 4FDR7_Mks8=;K|*17َ|s^mrb"(7c_ iU"x)}m<%"6}Vl:C-JZOm=E Ѝ ϪXe0=J?LHz͈ Ue`,>-[6OIR/}!{L.#Դ"1\"BA#w`u\7ed!{W:2a9F(YUh^m}LuQN > t`i\@/@?  `c#O@;g[U_s30N.e^D#-/ $u?@dtg&ehZ@`~{! I )g"}`u0|1H.ph!!N*T =X*uߝD2m] { j55\hR(&}3V6}~9|9;%FY_ H% 0E[p59R,V)ML1s[ӔG2XG{ b?Fj`k*U~[ t } ZD ¬Vw!TXM^ 6X̡'$~҇k!,SԆck%/#I.&s80{*! 8q!JUh 'ZMc~Mt؅=|dZ ;U@9!Z(`3$!8h!JeGdkQsE=zzЎ>2"Ԫ8Ojmxз~?)qTi׳#Bg(\\6vGLGd=^3?2ʝ ֧yҢ;H$<߾eѡv6:>_sc9k*&9Aﳂ9>wsQ4 Лls ~6TFԶn6v!#UTQ)*_ml'J0A価vjN j0-Xf/le~ʿ8BKt0:փDEi&Aw+N69 כ{WnچQ|E2FH ac0 5Q8u23WۯwаV_qUиEqmzW@[b忶AVnʁmq8G! 3^QxCߑ>PBi@ 'ҽ+'6H`m"Ҝbj0/j I OT b>n=ٞVKGd˻ ǻE< _#q]k]}mbG{=u\_d(`mFPYآ17?@J_(6H8 E=W5EV"kQk^e.3gȧw+# Q1D2 %5} C}FL<+#b2xurͿf(,XiƬr3ɌSL<=d0{>K  \ՐܨղdHע;;Dy&%6# -X-D!a1D]%x3{.(_ƣ3ED}z ~MwNr:n/T5&vNgȖE(mۻ oX5?~ kX MXW@woui(qQe H~ޱ1^Q+F b0GSh!+4@4I_YtLK;S(BZ^uv- I۵z̢[# }rvB#\+xJӑ4{% JkUIGc5 47Ia)QȖxW 7~PӔK(7mF0*RQь)!GrGUUZX?{ FHTAh}=CRC }I8fK=v-DIV>\6!UNL\wWNƍ^N㛙^άNU祡+WWo` kH8d< 1T7pR "F -$1TxG@9NTb>l}PtmI$9ЮMdUL-41[Ĩ8T76m}6= ^mpMn5]d ;yb 5CBa}.+GDM@&<5YC_+' yN )QF=Sf3W WU 6H`Z, v~hF`w8 B@ڸ &8eZ`v(LY,C5cvaw$mUY8e}o ;#N>h(jAgPS':Z^sȣ/N;+>@ڛ'Svjfvn @y5m A' jI${5tvr.Q/&M UQ?x2A!KrƃEpGT^}E?t1{4g8"G)(K@N>Fh)[V:SQHkHEGrg%(쩤R7õj!@wBWf)vW=SEc~>Sn#/=0[z5#XȐLEo31%Q())y? 1?L- Ԡ( Р$Sh:aG1|H e]Bu'rOs'.6j+a^h vu"m:@Tj8U>pҪ.us fMk Jl9M:N9mh5( wm#hO)*̣@3-JvqT3MBvU gFapNz^NAbedb눃j\H+xzjMU9' >YO9]BހG11#^ٝMxMHk͕1*"҄nt|!?L1/DCX 0ymŽᐴ!korJgQO|izS:"9ƄqLJh^E3k!! 
(@jG*psa o,DF{ Z\__YigrËy&%V:tS8 " _D[D BŔP3b.e:aYh(áP<)XKm۶m۶m۶ڶm۶oMUR[d<2ӓ{wV!яR3&-Vp`J`h5ЉZ*LI}HW [/:u*Z $;Pr 6:JZ#hVt xfz bvJ:^R In<<"[7hy8ɡK0>b6zn9Ua6"P:oFghe^Qi[>܃f@Ck5L#Q5)a5V'}k |4 SkjX5ԩlp.t]WV23lJ+!O8s rAHQ!.!ݒC!y@0^CA'< [t*dɴQFT&)>z~9=g` 5Bi)rz/)\~С"GYj Ǟ`EhIf* @#^4I QTYFY H D+*3UG !<,*T7EZd f2JNyYdTtǀ#/n L>EY!.13DZ$\aT9kK6;MLH 劗P>B{7 Yn&b VSN9Vw&{ xFNiF׸O,#tؔG|fvl+02kٺ ",-!D~0Aa )˾w$l;8=]8$0,ۏ^&W$y:Byh ӭTYvRzW*7Za{}as +)C ]yQ33`K!5QKCtK fd0'Eݴeb Gk-}D!#Xd򺓃'~96_9ՁI%&uSMAN>7,Y *Ǹ+( ͈`'y Pd+P[EyMUi^,gX4rIj͎ ެPO&L~8Wܳn0ݜӹ U3d7$%rO+-7W^ 5?W?Gmpx~@y=)qLÞFSM_[n$g>=fzg&{d`-}; &(N楺iP [=i[V]ophʖ,'%_6f~zZ wreՙ`nMyp2#e%%`RPqqyrEo߼fQ B4zS'~ w${ʳ@旁{iL_",obm6yU0"Ȥɤd,Nl{=pNe0%,sA#t$O@MVkrSh$Y(,nf& 4* C6-Kh5ML`;8l5-2 ]D |[ gnxf7PZ)_^ vRbH +ELT -#yl G1[$8jry YC圀>58Pt %BNi3y?uMvmP@ JjNK(@=(Hc{EOr%[&7RF\2oibeX[=Yڎy= H28YIwZf5RwHyܶSxƼSkV+G'^pz !ϝB.h7e)ӂ\Y\#نby %5]ASE9z]Hj6btZYqOl6\{\nZgs=/8*.wuTںqI.Iج2%8;N-a=!0Fg7ϚeW jVlߪ`4s.g6*Tz}̐1oGe=q(S&)x]}ĉSw$}gWXsr##-`m!A2Bwdcv˥{HB+Âi) /VObkVaNZNGG6Yy{(;|/mtyv-g_=wzxvvW2z9mE}^gq6(T>z:/ڬ.TR~@M`[](@v2Jǘ`V% #bOq ͌3v_Ͱa7T%)-YvYt>ˮ`jiJί }ߠ]fhKܠw濉K%lTb`yf1)8L?^;8=r'36T^3=ȸ奕X12Wr&ľ:5?>"f#dE uZjc 肭i a2j)]Q*(ވ}z=K4v4{f5+޷3:yoeX~!޹Oh%`T|wogOvv8k D.<ymNldj{ݣmh z'sV 4@|z >)X=v߸3#7hN76cCa??' ro&A񉶬󶘗LHReO+!gWQBlijiUz:X`fANvh awIN_It\IRNtB3BJh /mçz!_~?b$joq^VW?!/^˻ה|W?1{_arw59(<'#^!٨І~zrFγW΃}O%3ѷ^JrvGp?bz@SCUhFa"JB4F-ajB._؀4ŹS$F9 cO=LG+MDSϴtF:mLOfW3)dohAp{~m m@mwQz_)ff"ef"^N"^B;ݓCmMąlf"PV O^^s>0QQDtH{s=RkmQCVМ"}q+l6@@9Rڂ1eX轵a~^=#H#UdZx58ĿyuӮ,Ӥ=0Lj 2)K8ju wyM$:pqČzއE@MT#k9! :ԣn!5fn0q~*n`Ñhчnx2ԦjOtX9ȅ3AZ.CzD<\ٙ3vôCLpќR;m%lO] #* Tմm(1S uX.Y!X`Ӆ",bm:(n m%.ж+׶T@jOuhI-͒Rix7uQLX&uu q%*#0n*I!2,fM_t[47x/zHx $Gb½e)hۋM%Sy[-7<[C[v2:Y].goo[uDĮ@GZl}ChbƦו 9J'&RV W€ S$ZocՉҋQm Dj3[ ȏiθܷ{BL>-.zV+!Ⱦpt1Ǖ3tArn*W6W8d(@cv@1 N$j_S <9~7n\(WC0ٯ; 3x9$ڀRQzڑ$ -b(ZVBg1TC(qO9zZь #'$tXπՌgpPi/^aSk(E鹘 TENGG`ߵgQUtuiY.VaLe gL>SN ? s 5 nwE`?ߛ9D|~ϔ4a?yI'/iA%OoYekVf:1. sRlWO.1V/ 5k;Gd9܎\;qFmel[= &M.V6eV*PEP+GfZtv5n:6VYngn2䑤hv">sj sQ [k*^xPWRc1߶=d.KC}X0`: ۂKPIMmubG5XaK%|x<3OÍDO:@j4$l5TTSJS[qqwo'@b!gВ@qq34V)m.+x,[4q0?=i_6X\uBj+)bB)LeiΒ }N|x1D8!fcӎ^FN$ҺAM 5(gt _IE=Ŷ҄hyC4('T ?=W͚?IQ$Z*>?Tm$З$^GI,/K]%0/v92{ÇlfŒp^*ɽ&]"]\tmDLLķX";1>sm$%W^ kLQ f\bC'#z)R mey`oG 􎭩$zmyvXcĉE8vGq:LX"lXx/~[j]Qs3! 
i  Q)+O6x eCd[~=n +Ml7,k1d(T`p8w- ko>ԧ6-ZN23].|64r?<BpGM2l{N{N~~Z].SmxTHB<ݚK:ͻe؇t;:ۻ> ǂp9U>n_C%udJ&ˌqZ{FK5^$C+#P48PYl4K]@&flk# رiblRq4]N`P?`N.5D:LDUe]E6p0ة%{k;[d7;8o4g[<,S>eLO'l6qA NOɸK.pL?~,TF(OzP6#~ta?erh 9#\0=18ejLpAS)k_; A9;N[vk\Sλ`9zC"6,#XڤGK~qWu3IֶVmi?6GU S ZC)"aD-)ٶ :nMCwVIQp,}Űvwe %VK+cKPõ546^*ۼ bNކQ"6 r/B"16 9 ( ?uz:si3٨ >ll(Z1|(sޑd7(|Y!CFMtj%{v eO}J\(ɞ1qL3L6Y TTE@G=zֈ.lZxeE]8| $ :==i!ķtJa*bP(>v}_Di -)aE7zf]utSIcPC-fI| kPospڤZ;hϬs1[@%np;'l)V]Q(}||N Ea귄vr!Tp8욠Og(7 /~ u]/,@׿rMI` %^L #4]]ܿP4o!9UG߫$xe2tAsyhG3~܁2+ˬ=Ҩ\@׈R=j"AEt& Bm8'HB2\'~@^`t`w\5pEi*3uN ߂}*:bwy*U-E؟sXĻD6Ս|b5,8?T%UP!쫢)Fs-1۷2`O[\a Ĵʠm8I6R]y0z;&xJ.^E"2b$U3t/AW6F gtȭLzͺ#Oqoc[=BjmTDE@mENerz\۩X"D=6&9)[SHR*T,YZo5|-(OQ WrJFuyGPB++?܁Z lѲ/QtCGNK4k 6I)50Ñc^ 嗷Pӿ+ yȢ@aAΥ<9]rHCx~+ҝs?kHoN+Եmw 5EDPSU/ҵ7jU52pLN~;HC,dfռ_t)x$vfg^:UbpIӜ%` /t8N K̰ {Rè]7Uٺ!>$ȉ3 ضCK !aa5c<|n:^HsM2U%}~369-bkvfJ,: mX ?<5Gܹ1:2 /&⸙7{(^CW=RC9b|[IđFl5ݶs= >\4N={;ɻW;fu p~a񉼳1ʛ2W->x}|azT)cjx2&^IaN4"Jgf>$CT<* ]u]V{1JTk)}5́Ӗ35K3ٖli< I.Tgp@Z[IYg7yCiRSv`o^/ ר&=qynz-f.6Y lօ||Z x9Y+W 73^\ bS ٰ؈t|gPb@Zyu>I;+D&h( VV9ȴ'O@Y@$wcet w{tFESwmK׭!mE-d[u/*' .GI2@$Gn_%k: h!|.L^ M AZ_%V=1k\.ҹ7wPy%ޣTZݹsPX6,KB&}.K^u&' s}2i8q'J w.nKSE㊵S@| 8\ :/zʻ;^;/pWA0;D: z;^ЉA7_nf_,/O~o =cV,Ԟwt/{(_:OV~])Ó(خ{gÕyJM5@aOVJJ; 1+s@B z֞ϨOg1&1He9e0GKI|t8aqkdRdlh3;DcЅ ۘ:P/ﭾ2e=aw6DKxRj៯x%|".6]E(΄O廋پ5{0'csSeƁ}9KB j,{.wT2Gw V`=>!%tF)eP9D{?(SyN$HGB7'[L Ys s\LD׍f qeM ֯=eI{~.}0QfR VoaX%ʹTYR]rP8=B{Fȏ"@~g1a^^oO{{z=w;\AgAh>z1[LfF,5Y#Q{qzMC"s-4$鑳n9zO ֯NV&uәv'lѬs g%ѠM ܬT|ԱPӑJ (t'dǃDL蓺%Z.Ɂ (ș=(0HQEJU`Xb-8zO6 LMiOr@eU5v)Y?v_]Ђ9O&ڥ*un+蹃rj!l-1˜XᨍgbfEvĒرh+Qv r~*]]俼+ 0([@ԛ Jk2 e3K 8 t8_񶹉.Zԛ֔r)i<*T0ՐU}ZFnc'f KZح/j{-aՕؔ&jmj[j#5s3KH1] inrnfv ԳUHce5mC*Ex[6<\w 䭒u}9҆6md4ʭɻH ,##Z~b0\;vfwv~~hcok2YyANN_ڬW9_ #䓢9;_)[#Rȩ" =dѻ-hs%ܰjzjoFjg3P?bIo%m6^Ib3:3,7n $N,e *5RZ۳{u%@.IL 8DG $61).Ru?xX9{N dL3xGa%PID/s&?09!Vf 0` /HZ)F*L'-Q *x`d/*G*~_tْ5}4OyǬTߍXnKKdqLzuwƏawnu}lE&3%ްu㚞ɘXJ+Gq=ЙS.22m=FcQًDOd 7zK6nD}̰W׾,$prw.|%?gB 1gdmu-$lyH,SojLKyU4K )~/ f;  %\.[L'Rmx ^)p8h THx1@]xt#6\IH VP)"\B\ ZZ_CV\|0~+`8z/$OFB]2br9 Z LCH"^u1#"wA feҟQKZ}q01\'+ʺ4?#9cd(~rtxm/M~|9KiOfu% oÂwjN}Hئ&oaAeibbf)F#iwge_R蹖- #(" "( TWFr:+>E (Ά=qf1ӮGaCclNMͯ+GM*khC`*+m\ͻQҖCZ-&&xUrPKIx%),;F vC9,E{ZS!P ;Jcd$QnA'3 7M-uɿljOm;㜥w|:x_`7ݪ=$(Twﶜ?2 =W ΤƜ9 1ߪw㹜 a=^ߌ;;PS?@;S9V/WkEݡS ݭRM̖yӔ[m { μ瓇yV Tˢ_QRY [[b&I/WȀ!Lhl` B /WxU1 DH6ʞpO qJu$#AV 9>{#ħ_mH`3^7Zճ!C4'?WJ`-l*Ƌ4JZW"ѼxOҦKvJ!hGk%]=IБp 9Kɣ3w~GO朶;0bɡ!k+KH܏^5sF Q  )n4)Up6*BON9bْbfdz?m $JاKL b"/d<* S@44J6J7c+2W4HԬ(QpRrl;ˍ&t2m3\g m|C!Q  .O!L_H}TLx(L\ v[@*[kqmP#+͇NX :YaN ]rn >=5dzURǘhzV0f[oQXk۱>fc#)o-Жq7bnҍKИ&VU=JޔH x7_38Z;eׁ+DqTYXNrFBH|KWN#e rfe4WwYP,^s-FѨԀnWUM~O eܵ1GmiW|Hۻ@_v]#"K-UB!ؕaʠÎқy mn;Y6?su͟ 6lDE4߮|+kgPA7\o2QvAM5!uʑ6e Mg.c1 d.ىD/Ը҈l*+P?pG 8I54,km^ظN_IXsz6 yzwDc\5 βh 1i0`2l3cr]_]Tgل+ 3S,sS2,bx$ Ei"^39iG&Uu ;V7]u0XRI&i0H A2pL AEI ‚aQP7oR7-)5'@ Fۘ7gUkB 9q˸ ,v XUa9/"#^3)Q683\o>o%c̏R#5GizMjGpt)F GzzAwxͻYEB%r B+|fK=d/ OŪ1xZ˗̌ #B-Bzhb5r@1! -.$Ih]Y׵#3Di R2CM>ۚ&OV }IZN,zVJ?}E.h8H_HU(;75N&@zjHpQJe}lf7n\b ,X/Α~폇I34s%hII4O\:]L/qh>ϓ'z9Pgn;CJsZtEsbޤ(`EޏWNQkS`u( Sj>ALL_IjPBQzϓPΕWA!d  qr>'<7]Mƴ&6+_L|Sљߖ7)XpD|eTbT_=|@6ʴL'`us1!eH~k'Ft]CaHnQ/;μzSwܡqƽUtAUPJլxW!0AfaC2E8R ?~GuŌ9h”ҩ0LHD^e $22~ӡVK;:t r{blI|:tvj& (/Q!(Nl K8Gu ΋ UDHWe>oDjN^7Д&op#DgzCc7x"d7xs +)TrN3.3ˎ ؤ5(z-[i䤵a"S ZJI,W/"d&l­%G{`CLL, +3Emn M]E][. ##$a\0D3m]]R!ΔĪs>/x1Ǻ, n7$Df2)+$Y5]qL R'FW&х",hwn~V)>hwGN.4 ꢱSFi,L>?R|.K4 Jm9 v_ni \t_Z{+ĐAqRGRDjXλU\uN"4q5Q'K/ * `7,3ӎ,Z| S_7uZ-d3SM&O]rÉPnj5 zAnk~UdžWo܂ydhH]㮕L#N^]Ɓtr[P#[[4W.Z {f=}9 $. 
^ੁ+#|  ك{^!4z%HM;(řD.:Cە(=X#r1$sJ:mB`\\8gGL)'9*_ د[u/tMJ-^6!8q^L~G<E7Ť!HL*4<4:Nj$d>=}Ro2!k;)C~[$kʸ+l'} r ml˿18 gxkWX / s?(Ũ<?sN>H(Hc +?$~K d+rl%jh=苁ضY4V<-Y>1EM/Ht[Mx7?'wc_dgjvG]9['p3j@5V̹~-/nk@5/0YA#nYM Lhm@qL:ȿ 1$n`{qpa@~@=aІU--k o^qsZ O?q\΁4A9 qF3̆Od(@vpDÄW8AnI>MKb˫om8Րzí=DXlBes*^)sgл1_}SpNO4*dH IBT0[i!^qƌ1elܥ91^F 3zVHAEq=z#&}TK,5f6hQ*Vl?oeLYs#EC&TPE J\%JDPĺQ"qCY2Na HZL!4T;Yu!BuExf:%+%v-Ex̜ܶĹ?H@%9B_P)qr c gIx>r07Ǻ6>'' bU0IƙоM[E&<gޒ74}8zH^ 9F۽_6JT;.8bA,SF]e~!-my:;eG,%,P6>8Ht]A넂a](όMWPY6L=^"(te 0t3୙9F|^?oʤ;iOPSr:xXQ@# ßCxremRLs"FЕCxоd;d(&V'a#9~⻥tqdn{ -;^93^ oHwD+IC.׉M DqE(43>|X4+4<:mJ\ܑjjMxZM'<ӱC +o\b&7Ʒקzn6,HI5hz?܏Rg r&8 ʳOffXcu(Ffvy'v}T(jZԇ6{P: ]m\7[ P*z"ҍc '7]7G<(6$J';~` $K^Vk 0f`ٸSg6W,1FEP~0:2>gc4K_@# hh걯냞f)4" oǟS)#NxWMc@T"ˁp)*8) c5Ĺӄǵ}@sǎʨ:Ih^A]\sHny)f}X|`x $ JH[ 5γru׻OOFΰ_VV>f6޳tu]'f&} %$tzY!H`vq[D N$"ivQɵ0~kf(2zWaI:(6% O8+_8b+˜?dDMq:t>l9ݕ~'91M-ܪ:9\ġA$Kn|OyrEMl s pB䊿 PLRH䘼VOaʘYUe}WP;z,)F;`RqDQ0a4w劽s$p}wC@s <_(?H}b諊%1s_A+q_zRNtW*G'1hr|]>=zۺasg_Gto\5KBwxc= Z4bҪ}/ ^D~kkNJ_ ΁HE;YC9K-wE$$kpbYs`7qBX`(<0V4Ey:?A>_b 7z83r?-C&[yߺЯR|sznm ~8kШh [-س0N5sU:lcacf7uF2ϟ*iUsfEd= 0Nj 4):n^k[,ycD"n#TH2jXFEdj:t8)ZSEߴS8;)`Er1BZ/zYoi_ܤ|fx96oJ'l sʠ6:eH % 3gh=a  Lw~ e`6Mar9`p@K1"MҊҩbGH_ˇ`Ҍ=ĥoW8b Ls M@ O @K.q;@@  LhLѓBek˯&#AE`J~wqN>l.(OO/0|xČ|)'rAAІ&䜯~}eɺ>a:.v~t ~z&J ug֐̝[Qeo[$l@,6j-6xZ s-䲽zBQ =I۾s q@ʸN.rydi|(*)5n: aF8XW}ڴVijl b80$N#0xg,Om[$a2.R"wAODΓyxo܎+Bn·e0ΓBdoJn+<Lee[h+qgYulDA/`n0 Nut}yӤdԎR#z ad$.hl鮣m埬.䝪,mz7udD+x q +NM0g9h3$'Kгg*@O(R J6h) .,odc et㼍2v3@Ty7RH#SG/G$BqT"YO uMOfl=hިD֍MuFhr ?˙t p b*s!g[{bhFvB7td{=Kڳp>wf)]# Vn#iSц㐺tƝ׺]Қd0 JטQNM]VҁynA/ <$<=QEy*-艵1Q=wfUQ嶥J&0w*&f@f]u{ &F2kdKo\C7?`9P1}ae7zrcO[( +7/0M.AXZ2V:2IuH׹b?X@ޓeB/Xds9XfTi"-Aەz䠢j.Ӈculk E6?Pe0k/SB;M\k0&Qfڤq?(ˇ&2Rb]B;.Jb9,!z !*{<"uG!ѰàЋu1eP{p)]cY&s9MI2*2hf˵GkCR,݊*4 Na4t cn5|xM8´VwpC Ձ*ӣ%w3[!;t!N`T*%.ChOcËk(Dԭ(E=!!q ߬S!_ʆxPR!xb^PLIAsLcU#c c W0kyWDRہFM4Oʀ F_@#M X*z#NFPc>D|Ɏ2BP-x!-Y줎"D0D eD)gܢ"󵂼J$囗Vou>?+ė SЩ8Yq<`#K!JÐq-`wEM<ޘ)TqSXZaJC0Yoűu}шJ|`9`X0 nSk`֢)d1_͵=n["Cna['A֤c9Ze[uiDb/갚 {Ed3 ;$L>38qxO`*&t/y ;TSʋHVHR1SvlkR#]3:@H(%ܠ)(|hlLARPUT@(K9U1SQ|þ/7"6w:VT F.u|jhϧWJyM*.OHh$Tq1*}}!Y(`572(nkD6KQ |3sP6L: ,rr)"[16nG7J8zZ_N̽ۅZ&'o7>pwj%';*e+4aV _ d!E7 񬀘sA}yFI-q%[GObh>>?A]1bIyX-m d-DB1NjK[LAB%P=!A,Koѿ321ŔSΚ:*aRmBuIA`[Tqq<(~EBžxķJ1h5EĥIa:r.er8 tI1dw)cvE7&5"oǩC ۴^2Dݐdn">tzb:|[gƠ50j>Xnڽ~[[ş՛{i_{:n>WA DO3Q nX ~VhA:\9^CSiXC{t"B;YкfȱclP3̑$ќ:,ȫ/L^bcEAM^m"!AI]ش2ǒ}p`tEhF%[)m֮q_w?ϰ%Wix *IȶżnJDzTXA5sOξ`B)P|6 [ѥO9\Ӭ1¤Y)-t yϷT2(4-{ PBe9W:?@p俩uD(qfG:"+<0 >`S;R˶sojيUabbe.)/SzU;'.K>'}b_bt9S/RQV)ek⏩a.ĂpQ!GepbapTdO=B}H[Ƭ+A2t|p4atsޜ+)x8PlPƊI[j.+6+7{H_"V{%PiD~j>M1C2g/8< /H eʴ[}nD-ȚƹJG:x<&@PS9_0:Aqa5\byEp4r푅C"TxOf"[Onl/pVnSY1O4g/-ţx؟}n;7 \4nho>Bް?vW)C0MFRjre1BrZal60" QG\Pe 2,FR.v)b%wm;.u%=&moCtCa(%S&<[6{OiK7=GjNC&YDl/zx`ӕ?m dI}d!A nr#l q}\6qE'   -ӒgXHKűl 8(R-JRV; 2#}ZUQaђQ-?6mtSҕv>n[;9 gfd3˒XsF5 &jfRLsol6 b~HPqLp04OW"oudBU':5tJ*$O4m*S =`Šxq b@=Dٮ_?~>0+zRORA$)2345Q4\ʲ|$Ob< p %Ѷ/Ea?YE [|Nꕒ8bj%R?H{SuMd:S*CQي<aЫ,v`~a )g/ۓ9unYb$C_yT<)e!KW--5G Oc<k\Bj(J+z p) |1-1A]ƶξ[z(&Q z@7бPvv4ͼls|!_Y5T']>cGZƮLajQxwk8)@5-Y? T m޸->t3$9h k';??H]&\QghMPx8*lcf9{hr._4* 7;"kVF^RRˌbgA z\EI9MakQZ04&??J;ѳ-=wQ&^>'Y&MNQ'O{wuύ_Y9>"g{xyx:T7;4c`Q͹nJ܃vOjoBa~NN`Kvfi%ގXk=܅6V.{b\)|w^5Q;6m6ݽZquD#8o&Ͳ!:+iBQ&qg XohDbMbU,hAmTU@DpptQBeB :,~ ).N(h]@%sW 3OTUE. 
"ؑjj{+ J֝0 m@m88à QMs2qHmжw}e?t!b_aBWbtzt0ʸݒ uQo{Ɍ{4wƌ ,!S+IoJ霢aTw0S@)g'rܰ|uCmв?SynE39֨ԝco{VѢ `LnYҢsEWGUd+LCQf%͑ SC!JۑŜi a&'7Q^>'MwFY,Zo8ϵ\~z6=N7EƜ)x_!l:xT7M#a궦 Oɳ=8mvyT)EU8p>U~J})ZQ J|[|*]Klh_e Hy"tx-ʟHIu Mł*Sc$m(B%I^>itr`%4@H,bYaigt~qQypŏ"dIB|Imu4 yGu%-~K>C683KD0;dΒɌOk5L^7Edcw ?{E j])ݻP=*/>Iz\}TIdn Ojv(shNġ2M2LG<9}Ff վ̥HJrqpX9+\r2'Fp>>1d,|iPǤiZR a|YNqRڠ)~L/lXWyG? #,7C{RO6(G؁n9Pd!!TvחDB`#EK[)jQ5.%n='X 0)0bӞD-J_wb|o ꜖Rk`D0z9)iP7D# 1pj/-c+I\+TI):;9wv}8?zNL||GyvCʖ0N118 1_k5Qj.Gh]sDUS>p2KcC"Dsw/IayKQidQ {]z:#?YJo0% QPvO $K" W)ug 44M`RiSFen DՂysWJkTHTo$+n5BPږ 6a$L:8F`4?AJ!:J24 5o wiN OU ;]>aXSʖ`!&y'%H3k?jnUjns]Jdew`K;,yE[cؘ.Sx.ْܓCW&h¨yhEACjz?f#UfnvץzoT^U`+ۄ/_zn+`)+G>'m&=}J9H-Ђg7nRulJ'eN{sWnHKmcDo+zӖo"X(Tn0 N/wn""Fp`ގ;*!`I.@G(cڤWKq-&4aZ9!q͙ )2qtZ~&Y9Z*,jL$:R8fx몐a5TgxXMd`?Q{P*:zk `hJPԪ`<^G(}^n_VWˋ/8Fޯq{\I9 sѽ>['ۭ8|_l:sZ9>괴N6v|\:wPxYf?L.=il 8I(בSc&_<'@Qx&hr<39P޲7*-nEwY5ft?OBPxgW.V졓qTYèe3w6g5!]=7jCƖŷv&8,BGw%*H{/-eݡy&gjN-Ze2{\®Ux;hG!*Cj)4Ll򨕶KI1Pی#患`HG[u|oo:I(tGoaBE"9:iUI>>Fɓ[鶋ւ0?2%n߬߅Ibzi'l?Ԫv޶?z=Qc!QT=[cf䠫ѱKQ5 ϖ TDO|K+D z*}:^iL}Lec&}ە\T&[aE(id; V^_z_BI1v+,D;^esE#p\ک I<3NykYhx4ԗ#CdDIEzGv?\4-88L8je #9zo<sب(9e}Lh5 VL{vo}oǨiFW+-~I~ZW,/X@?r"|,2a>goV. Dd*[c斪Ui^gRlrF>J(EG0yə.0U[`#O\Itz9S0/m|4 3Y#9XfBûp_A*ӀViI7Iqp4i*@+5I@`'\p:ZuTRﳛ"0*ňH*8JB%:7ߒi5{W_`` Dȓ;;iSc^'-,#W"W}c4s+=v4' *5S ; s_[dcӔ_*p\I=!m (VH#U'Hة9 '3o?᫨K2=d D؆p5 iU\JhV7o;5] gNBq K(GlfF" n9R~vM(+Wӯ.:R9#Zizx~KS0Sʼnn=m bDQsQRDxrdba4hPUzUe(:BяH oHWXQ }޸\W~ ѧ,3i=ljPE<2QV+vt7qKD3Qx.AcsFvPL\ũy- /u5j˚X7˧,ywHȰ'uI.ȋ#>cQ`68[͠Œ5"XD/ On}3ޮIgpFJSCVnsJ!x2J F\~nWDJuЩS:z;ڹ[Gqo<5dKJ6V׎E|hVZpw-0bbl'D~:H j:9֒Uӻb~,AatEܵ,'Xġ-q7_vu jU(PISU~7)mvd> HJ+("GFD +%@S~ׁ*M%Qfӳc(l?.T:ZD}Tjļ.r%D":'ǔ=3Yesti^î_ XgfUP2Ѐ|sAY " T+-LR&rS!y[k+٧6ܶ\JI.CƊ$'Vb?ukS=K|w$ Dz`Nx?:snaD{@'vhz)NQK8Jo)0i-d k|y~xka7J;ؘmȶ!xI͑8]*ww2[A&"ZJ `6_.yd5 r2<(/˒{4.K7k_ITom sի6X՞vݺP%o"2xڄ /[uu(}79Tw݄'Bm|+/ahbиh[;ȞAY ]bM-ʊ2e]RېK;χ/ᖔβgʣ*CuÎ&^[ءtK=s &)44\ց"cKMW)%{SYcڎRK!v ,Ij0` vXb+C2h[ aS]r-Z*E7A(0P@K&#լ?g/PO )-s=߲lUqu]fpKѕ\mzTD!kS?;Aw#vd1hraU!d=\ 0$~`s _;pUlyNE V܆I3:+Zt61EcDyR  `X#?t[ߚ!m=ۈ^Q<9d\r-SվŋCz8酰4]R9u=M9ˣlt,C4 VGЊn:z`eԨ{HXQ(Is b:xV}u~^:wetU IPpuUƋ:\Gv9"} V|Ȉak},7(5AdIÒc^ƥŝ"?x#4m;Jy 2SQIR^Qߠ3=YMn]yAso\V^':!wpL~kȪ[Mjqu,#N(j0X|0 `[E~`Z7>ZgMzB cSWq*-N =/UxuR0 $F`%~]r~)K7fNwN6 ?5W{?+G.eͨnA@E](]þ L [{Z ),TF$:=. uwxhBw.Yc޼Y͒+*ZXA3nWW7?O vo4[ ^0?9(xU sY ';xFST 6ĸ  w_1G\[[.$ph@Oc1 \۳)Orq7"+Xfq,p=Wۅ1@UݬOLuZٿێJ5~#sY 6AgJɈP4p>Z\cS6P/)\Ԩ#`D6=Sf0u$h+.y;CGN$c7@cRۨo-+O> o-܏Įy/B:ij0WС@q2͢ﭵ-<0w;kb/é׎%pFQl 4T{7jq?$p۴PѷoQn9,V HGknOU:g@?d] ȬRfpozS%r"=2`EhbwHe-"묋. '_Aw䐋] zpmMb O-2[w (c)1 쟫S*'T(3 " [u7\N8Ub{O"y?n &k5%42 r_sB^%}C#AρƗ>1xR?z1Xs!v K"(W 5[^x޶ Aqoy&s=V8$NMK}BC(2&@rN"8y{PǺTsEYUw`' J;z؅%o.H9_}3o;5yVBLڿ6"z@l<킣V@"p?scIr&EΜq %,T 8ѥcݢd2zڒVjLي2F2B4DrfCŸ#ؐ:#]]`6֣DgwmSerWKطoyg|9S99Ǽw=Suk׷5Ǽw=okykƧok͉?ؼ6hk}ǯɩ \w)\w :>cLq=+IZ-Z>关3Fi~5i:\w4H3H41]w&|z Ś|[w-&Xczf¤d$pz2> =?=u@6m}6}6ˀ}6s}T!L!]!>{?"|#ždQg_~F=;|i>>/,xl>%ܽe2Yc6S`̆*2 Lyn?42{PtyWMhfeUuLVJ^-|F,2i?tvX+CbD1/xev+ YƎNtt!/:5@X1Uevc2;9u,C (W9A'^'շڮ9,1uibقqA skjXC(j^ Y$إon=:?Ng_"N̤1JpfKMDA2uY'"wek#, tk׶m۶m۶m۶m۶m[J+YsN-EF8~k?bMĹiZ ~PnM EղD2'-6H!Fr{Z7b9bm*@\hrm2٭)-_ -}v(9/ sΦr~2}08Bl)[NtR]qzj=OT=M}ML 670J|l~:d%BqV=,&H ,/A@Q&>:QÜCxCȜE(YZz2WOIP*Z{DX-% 1)9yN 9LjSʎ`gF\G@=#=b[ɂp{/-PoJ7տc;*mૠrW7ǽ?ˋG@ Yg|Jt299}{=N?a93{^9%ɉۍv~9'yC#9}F ^" |dIPZXjīN WsWx,AZ 6V!L0х, Hn3|8rur7:3"sZ?NG ƟQIes_t^~1)R5~t%2BC~vxv_aB 2< ˆOS7ח$s9V >7q_}6GLwp[B&U3-" {lXrx^8sqz0-s=KI~􈯣UEWW-p5-{4 Њwd9 ;t#1<4,{ ӂ54 wF 9yn-Mz_Ky,Rx6/LV;-"sᓀ#il }' C1#do] ba/~I ->䋏2Z',2PX[i 6aFvoeU Z%8LEU)c'W b(^.SMHL4wSԆ0c-/Ѵ:A8t6_{n;q޺s~oZ1%LHP>)9J+2^4q}. 
"IS .rǰ}7۶ v@U铲r)裴 R ژLZOKW[|%Ikn2])W?ןeLpJɳzΘ!2tR;/M1z0u:U>vFgwq a1Gӕ=G?fQ\0ezI؞y9hzf<3I|Jྉtp`&D%zm;I9ߘ೧~?:ܩʏJ4c^άsTk$k|IPngXQ ɸ7hQmZj4gך?%b@×8 CW&r6V=TԿHrB{ڄ VXɴf _i y潋sf%ɰݸKn.4= oA!6}6甪UſHo547Cs;=Vz`[DqkZN }62|#߁U#$8㋕ 5!i[5Dz\HR~OR#TC6{5y8h|4Z翘 pg2~hyZ>Rj i+{lZb3v=.CLkw=$+5uv{ ZP(S}~f7J$p 雛4Y[b(Fug-J"tz{`' 1kj|gW؈_!= 4pHC޺z'&Tbq@Mϕ͓ǂS2|piԥ`:2< HM#=+ѩݢRYg`AJ'(FЃG}ĵI;TI.LJvKγ7ʹs_Z8\/Ua)2+vK%[E)5f[%ϵ:>>n)'\075=Ў-)rld 6( hZfR~'Bn' Lfla&!}X I2Mİ3Ocl~~S~&q&bES6ʒ G:xWMJeƚY*8G eO騚 29+gY]kc>Cu]*kߝ]]O̴ރ8q8@Uq>s"ܱW M<=}ft1FOq%Dƕ^ԩPw5˧ &QfqÖe'vg$/;)εԮ@#u|e+&Q-1h۸`y))ؾ]eqnK-l@J\+w~'x!50ڰPUӬDr#v *uְ֨1GK2nQ kZ&;g@kIհ(pm.2B}@+jbشT< u;(ZJ`|xS5`r#d!xmPb&"bp8Q$g%ڟ<64m4 5#.=Xr/ɰʉkM.Q-#<,-je*7^OS*@rW|KYܬY8q6I L=**. y|.% ŘA(׬ )2F. -? d#UE[ԧvoHa 1F}TFVrx>~r v|}2QuVJ:a>bل>b54>yPq0eZ5u"u9BNCU:EuF6hj)7N4Hn(JT8\vl\2!E:'ھ>i tS`_v=њk0kM'$'lj\}:\^@3D){yj3U\ IDݮ `BT*Rx)RI(^Ss`U(a,39TJazy.lӯhB[ymWQ9^a̫5`/->'TD0<)zMRf>[ VA 3ð~Z|+߅>kDjd6U1(EIV78@yA#/Z:XPn ܑ4PYȷP%kF;wz((0B6OЮ)y~>$ &BJ=(he&.eM3.[`~7%p1NAUR0Լ&\%Bŏat` pzVR[6%d/M0*5ajV}whDT2z)0#r VR^>[ĉ5vZ YYrI-n`Qqd I{5k;<_ևdN+$ 7v\ 5x`p;M3e_7@CTopy uzs3.S{`SJpJ"de>p<ݮ55/BI7XwKs(<0ech_ߠMFv..6XO(/!էB:ӵ[.kJqOXKصI 1 ʋ~J7r4 '; j5Aa&!)GcL K{M ْJF9$_~er-aQG.}u6 i vu*:{IK0>ÜL?dPym{ ^}d* :uw pG b-\</(=K#SCZ [3`LZL1kcJG.X ,2#֞F GY8ʛ26FN!Y\lTĜn ľ/ F2)!qӞ:q]a3O(oV6냒?s_H}xIƦ梥S jY/! Ռ"q5BKSKG/GOJZYmEp'Y9;Hw\Jmޔ#ktTaTeh-Z&g B>l z0ґ,%qf^F-au'tmaR8\;jMkԵ©T&(=#rWȍ :eّڎwg,j{$* `U,gp?K>+[8okm3۹iG>5#3Ϡ's%15Ф4/W3 DwmפX~0'-JC٨5'.Tf;oQMKM@$d5vMC̉c|ϺD85՛B\x!K KJ,C^~ bZj:gwxYͩhv)@X5'<7A͗yz0„ye;^Pg`9EqLXEԛH~ {.3?~_#7YOhγp i-tD~BtsCk*bWOӍdhɨUv V}B.+C9~)(]Qa<[}MΙp %F܃ҳEŭ pw }#Zi3X׀kGbPkw7idkU#Uʬgx~ Ihaߖ x ‹ b=􁑸O\Ԁ6JYPb p@) %Nq OGR]a`cA2U,Dtj btw} @gd}B]/*8_@9]Z)WbQ? }eO('idg:(|k y9ac[STj u2z] :~fE&]z@N,N$'E5w:wڏ;X0U>808f̨%2Is~x{L3,L^w(R70lFTGnrFPTt2Y|GiUfH&\RQU"$noM5O/ruU@*|3i\w?!]gy3ge RkBI=䌒d(E4\3I* J%®XI`ȒZT79"X3W{T?}17B1qh o37Кu*9y 4|ܜ4EaPz t2NPtϨeH׽DzQiILSFjCAwM>>ٻ0܋rm\?y*lD$io\`&c9]@7H~4gOn_-Hm0mMdC`aTcխMsF'>ێrW,*Xʲ`-EmM X%-J:l?"#[Ryc n?hC9^RC<5=H@&W9n^ǁ2ߚ \Qt^U/+ݏ揟ڗ/}Y^,{ w@v|gi XC/ă(=RnHR7J6gM{(E4b՗}p+P TjRItf@{)1W`O~9ͭ` $fx/]PHRQEn|H͚40{E` p=[Ԧ3FiEk. 4rB!!qocϯ[)AN"rcO-PgnCCkG%H-ɔo N^]ζtq\=qn {&MqGIqs #cQc@ۦmpxpl<|^!|$aF-"~~7ey K[Kӏb3*\GyY1Ƴ#7!5:wN'( ְA?\v*[Sl|Wԃʼn?vSpd؆Q@ V{\J| e)o(߉8es(~_'.ȅ Z>ⷉ=poY=prj}K-!@OT>|_x`^8pkq~[K57_u$7Cgxo y ec&x;Wb3eӖT XcJJh%? D]k$_P UU+6Wi;d*H:ن elhPC8ۄR)ӕ#_5(BE0ꃉ)g*ޘFFyUQ=z7#},NU19>q) +~aջ C޷j 5/_'ꉵa3'nj崆{ >c^PHU8Ȳ2hNw,جiaсdn5`yKu3@Kv#l3b`GZvITW*[ҍװ}Z=?sO 8In Q4+X [H*u:4M~(mȓnX5oRȞRwUhZI:p]jzt$#|(v:eB 1F@3QM L"VN1*L9-H ~H Viܛx4AV 4K"M2Ȑ A` zR9lb4d9hA{0]}p"8G$@6S !txso>uoms[W&&s?%1e{WΞ_\ g {7ԓ6mw>G2B͚ާ羏81qrhQk"]^`s&#]r䒙e%&LV+)c\ O Zl=ށ`q VP -#z$E_o/n!\ k:Eb|e7K$QHYGK^CPXa*{| Pٟl4Ɨ6&2rT'URؕ"2bCҶ9(ng/*R@Z<?Å"R."3tɒN3KהM\7fȘn2/"^ob-#8p-Dƈ* UvgIc (֦K͐?/kKopEuo^s obQ}G^jvʅ>ULo+ծ\_V([9Q2A#Y`ˆ +GNJ KywMGʬ!Z05{+_@u_ڬT]Eb*wA >7JKzҶ-|#ҩsٹ}yPѣaRa&&0S{Mp{9>~`c+sBR{"@YU'(6gb sIolF×M~H"Xdt ςoGu8L}O7!tUCG;0r_C&ph9*z(Ķ(wI6@&U,!!3sɍ-Gm_ekQGѿ85١6} ԌQ N(Li5us疈eQ+e*$7߹=P=S8ǓS85?(À:U1K7tmf>Aa:cSςRoOJąWV4|28 wz &r;Qٞ\n0k::c .M֕Zu ̹W\b;D>FB [{|[on6*Jz;2?uYmZ`כՀûy HٓfAi/ẻ\I-VB9^4yoP"B!%\K4g?ZrXZ 6qSܕ.>ڃ?^/6]%5[?Oe[D'A2R6}VV_$`܊FZqqOmS$C8;V_S-J%FA`0/K9CVlҡt˂M(Hzb4d*qQH\N=J 6ɌBNO|u9gmJ2π4p6ڬfF,Sh#BHH_Բ$ag x4yJH_To_ G vVfzNa!vwK ӏ2[ٓRe V!z|/Adn[i+(?7hxlgu^̲[g Z\#ͦV Y l/tR/(:* _'w|Ω8#%ġ^:W^/1 }BoC|H'FlFG(}+R].)HaܦbJׁ2KH(%pV7.;Nd*Uz#)gSgpbYyxI|d%"REw[NcmAyW+yIS"pXwY-m4::clж[R=fA&FJxvf )3*fBKjr&NWqpnnH尠) VF>|, BvIgkNHuoZCC26{F~tAC\eZUx T?g/KCu+ S \T{oJ xbj%FOswy*:@\rUN?/LN¨k{jRc6|$[V ^! 
J"=]~^ԫTYݿKcmޏ ?)Ȧ=$;?/M]Aޯę[*Z;xִe:ܘڑ@gnY u$5$N qP]R3IϯBLg]M{ Ԣ.4fD?"k⎐3Ϊ:g 0]~A+3 ' dCU1_wtW6k;B_'XO5W{p2hEP GOysֆ%y@~)H SԅF:vniƟk|kҡD[LEB!j(ܡߺI5l0i0;IݼI6fk0);0 SVo`CUlŞ}^lnfcI)ɱ _#^b}PW*pP&Ďa-\QQFп7ܭ_v/l DܚjU1Ao#jSDCmkV6?JͿ`xll 5<b7رg=gma<4*"+fԼ6oɳr1Vs]M(26 7_m+@Z1kevg]ՍFqKJǓhVU4glnФKyEf2{<̓ g ayz 5m74|nϦC_ˮp5U xr%x5jMr owwQÈ;Ļx61V|/z'Ӫ槡;;ލFXXjKykZ?;iթӤ\ a|yմa]v$ e$)prGB}!n"o WWeٗk"J]pFs^ PzQ|/P]#W+f]S3i71@1/lK!񹌞^kW@j͊b0 fiUc^*fYɈdTzf~"AJ9Ts̕WG͍VEz))GAnq^*rlݭ)υ0~!8`q@kD3[hlp7xmZ* fZ E3" b{DE!-eTip9jD8dRlɮǔYn1?<۫--yZit]?^ t97I΢兘ض[: f'+Uc¨q_?R+rQiԺ&pT=;q k b}zu7 dWt8WAh\kYa I* 9yGmfX4-I}`K,KGu6sWm%d¡5hovf3G~tJ5XBr)K=bZk@>Q.zS63ũIGXCFYf=wOM^4+ςgcB`ܝ" 2Zq5S"#Wz!li0Jﹻhsj@Hx{%ӰeKjv(X𠈫XTKVaU=й;D' ڰKK| +H+f3ݐBEBئtͰ'e&h$'3XoҠ,G\T<8.rƅG̛lK+tf18]lZ%yұ| 2ʠf+l}.W4DHaDp3`T dHi%0 2'a!7JZ$aߒ%Qx r[=-ԭ m NY@ujZop I._\kP$JA.42穃%"_>ЗykI=5$`aro]( ^(UDjWc8eI}%@9 , #;)JKx+Myo6d%kϑaGj,p-ղTT%%TcE"V|*2($.4GWAMb|Kd7P` 3:9L=Nߌ'9o@3:D%? 猥\&4 )<3ɇ 3'[/ 鍧/ܤ)l\ޟF>i]7\3 *փoemc5K2b-T'IY|gPr|;$M)|UlpM]TXE{z[=F\N 8uF.u, R#+Wb]ˬŴE1'!#Ӟ<i ^ 'ͪBKX5TAnn;6*v%=]AY&*Y8Q%, RqO])Tj~ҡI{wtv_oMrGg2lJzBYQʺTNC`(0[!բnwhp*$o{*4Ԭ-ːa1nmw"2 iwbJ0urf7Tyy!bYE&a # Y]3KH/ Pa>4[,~Zd1δ[6" q0 >U0oψ"l qZ2;) ||9#R5>~,{/~sșW^`αZ=EFTa-71"2jLI)!­ڠ4)q%tye%w<bC0_zܯۧMouv%ղ?͗ѠW"k_2ڬ=V!: / ]!$ghި,PoUֵ=r:Q^DUYYc&HAG9XrCt T@ҜSCϏ9qqת`ok B%1ʬEpٸ}x]C c8pC(VaeG keңQ ]HAcA'S<\$̳X.%2i'A z&ŋvׇ5lZDXCk8{;fYG(MK iaw[%N< hbPQ!yx4"e35&0%%c$(k! 2h#꧷FaSJV(˄V-CF){9E#͵c[52)W_#rvK',=2]&܎FDR/ތ'Y04t-:M<-?b@% atƘo EkE;K9C`&3_جzz$ZC9m𙼡SPUF_ӾSk_?zOaԬ=A&-pO rOܬ 1W85% Ό9tf XĬ:Z̬~XY}A3sOMjج,~hY}a3sFO{Oo ؋}3uFOq޿yOtF,`hX.M9φMhY?S탢/2뢿KOh⽅lo:4xX^X?:a?nR&#ɯkK4悳ΈM\\'cĜM|'Xq``ij`lAkQncG;[n l lB<\g] W \o"{\Y=>OIE>?&_}NhHě$/WL~o.9vv53JZMw-p`DlR](PG*}l9!j ,!iދ5|4Śn^ln2jjN@ݭ ԥKi~tdΖ =jχ M_66*Y@@dP/X,-ԪO0EX\W~̎^dDáiSI'j*3Pha2j>D.嶋߸.Ъ G2ELS*F<QNXrh3i*a~d,3ΧhX*:[,Ȋ!FHQIJ.W#ԲK* )hE6`yݎÆyV'GCrDn}ꥊ -P>z8Mp<hQ`t;%T O_ޱz͈*&%M)+*QسBdmQzi .8i<ؓ>©_V7l- B (6ȺV7 j"9!E+VԤ T#0 AX]Xݷb=$$35!)jxc)GL:'ϴ} :\PG 57=N[ibtbˎg+d/sz5&nidmcza4^0[6JL 5Nos,%xZJ3P"f93a NelٺrF3Ռl̸H(}Vγ >UcHI\hK]zP Ui@v-RQɈ_y{K(+WY5prv %`/Nb|Z|렼rRx@ `` 5Jpx Sd@YG)KgSM@ްPМIj@ 11`LaA5f܁a9Ϧ=u(iGڡ1ځWb:7Y;cgg"cʪOoc|֝T\1+$ѧ( ޭJAY:\ qY6Խ۵-%FZ="y8l}B]r_wxq?/)l{펉-nG vu^M-;zg`D/l'U-wou;L}? ]y]\5vլE^EM}շ A}OlC]X["C26'ok[Rӟ"+lzA##il^2Q9UMx/_ I,VB߂ S.xbdAX<0;j5}Pya'\e!tK:%da/|Қ_{^Cܐ ?,u8nSI5w֛Gx }0745Oy΀=J/|XUdXudЭÞE` #io1*P tVlNd`]Ƙɰ>vOh<U01j7ta2u'(r;W([3ѶL(%1ν 5M3IeaXއ<_9kQ[m+,X뺭,b`ja1)qP?l7 u]9!8R:\P="qx~ZIھ;U{މ6нVMEC5:jjCuƺ f@eeeFZF5ƺ0div=܈ي8]V*!%GHT0E~_ŊG+_ִiݪ$ڢ~YYY"Qjj1㿙{L^fAX2U*'h,"nic -`ƈ&{RHquN+.\r{kso{KZ5sZnc 1E)ѶT/j2]IH͘ZwKi"ad~[M4f{8m]Q7s~4QT"S&t&~1,v}ttgV&Y5mwit9CmX$Z7w-[)VH'|W줓4i7y(IyMJwWq#hbbdn-C\ L|v*2yqha1  "><ګ'@g˺*!BLK']L{jRПzIr7L GGS4)iTv4ؐOɻA6vCp(壅#( xM"I"sС18a0U3qS$?##yXR6á "B##(@wNf#$̲IA]ݚՖ]X7'%`"5ϛJ΂,EXs?v314$:uP*٤6׉xLX.<9{)%.B._ak*\eRjۛ2v;,uLLGX//_?i".,Q>+HYM<=RJ*W]$ϥkhPkp+߃]dd!t-%Ss RW8wu =Q'KiQ(0[i+aX ?eiI91g-[~f ܥTU{`7B J\T dž *@Enl BNGc 'Թ_al[iP #VigƎmNȕ'h@%҈ )X6Ұ|4F6ȧ c?LxKb~K;4Mr"pƺܑ?e:?+R_ ~klv'\F 1jGv<-Uo C hOmԦ! `3-Op욒02 G%e‚0,-:r-$KyW]r+JՁGA:>Dtt#f0NJǍ }IC[ C  6>SP OoƲN}1F,ƅfa»b㛜+&jR~w)F݊L0/zˏƄ}Tsva UCu(;45Nb,FV촞M/"g$V1YPsXBpdvٻf߮Бj\)g؈ ~[ V;xrÓE]U_$E?|ivWVi@{Kׁpk^p{x:vs{7叞@ vXLu3ͣl<`z+I4"- -0kA♤8Ʒ嶟L1Ѻ<gf !wADYF}iN)V/V%C+1\77B?>Baҳ/J#O2YskgK;b't䵬J<6{^(Yf}K?ƲMs7b1wSW ꨢDPE֤]=_(%Wݥ/l /xdtWfzp0Ew.?xZ מH&-_.}{nCeJpA{5?yT9$^TwT3EձWCDŰVR^9(0zQ/9HKc&~u9洙|5<`,r]gKL( Fu}y S}#i9> ƈY)`'c*rtE2rKLg#YJ~MR B;EG攄. 
@O |j`eY3&YO;=WMӴ kuW5E@5[uNyV68ZԸ$J$]8DOGh)h ]#ذ=ڄAկin^U{Sx3Pw/$KexOf+1Z6qLAݵLL0Sld ' # SJ|rWУqe_^%lIl(r ă3suK%%\Y=T)+l;wRx0i%}erL 7U%2L}\ȁ3qL s8w!G<,jqԵ 2x:$a~bDFJܵYΣ}Z{h[)bzRD2G'STkm[̍VEO(sa6GS\-& }d4 ^ j%E2uF'dv_Lao[ɪ񁱻"j"*ˢuDލHL™g{ݿ>1eca,h$@?jOvWN$ r 4s؅Wſ2 eM@w?8^*ݪUCr/$Y׬cVIt719A Q:" cKHE,+jPAP 1$tJ'5ą ?(Yo+S.Pu:بaǚH$;QËBv\C+a<m򮳡`1#TaT[3 {s!Y?A% {o#Y$U tT)M)|eLMTqjҕZGsbgDw,h?Ďo6wo)/88Nߴ< "~^XE.Kg YIЂ&\4k^g7t6rkS  fj-Aֶ7FOB\.נi*}ı|i̓IQ~d}F(:ݛȝf_ WTKU: 5V}y`0Ak:a~IІPÕތʒfCE8ћ A6dE3gRSBqe|]H?(3*2!X2,Jl?om65(PwT^0g>0m\|)њ? r`rbl YAtgY+~ՎbCqٯ8܋w'VU}gGmsggCwb _Y[ߒχ;D` v,7eK",le+n'b\jmݙfaz5o)K\u,l.@KB$}[z̘/9b(omOWD$o77Sk54N' VJ.k*@h6W D,5j6Dx/y,#xKͩJd8x3*)ϣ֙[Dݱd zh" 5}vR>[YBQ94ppWb,d%:ߩT҈2Ε]<}8b쳩^JlyGG9vJZ١-85 mXC0sjP0B0{JYD&.^r0; ƓEG{mc;} ͛&?00+; K{񯨝;zNAP}GqgXL?Rf%beE0yН(CJ(FHmɂ!#/)=iPRq.JVl:0UIaAeO$:Lm^P d1pZۦ N&L* і?HT>@t,+:8N ,@ dsAtk~fd?8BXKcOvْK#
c]_!1$dqɐ+-upQ."WjQR#8!`(@B~ )b@O hzq,S䐭gD5,d?/v*4?Jq\kIQjG6cdcϕjC'^?*Vrvȣ %EZ:3F%*#IWY.a=MN9n3[ӅB~AoƜ!BW_R{ӎ=0!((HMmxTXyɓ92#h>񹯟i t:k AR|0C"uѳax:P8|t=l΁]{KG0>RLQ*ֿRY.s·Vy1(DMƑ60(k6j%h% C1g< FeƝC4޶x^Ja81}šŢv:}МRLJTOd~pn@{C{}IqvTX&;S(PD ӡC4|Q% >#Q#e%U[ĬX3rl N1Jbs1(iL1!5aI-:DGt n}t^3q%5dMӭ-iI wGS+x.;=XY"93ٰX=Tk)JH Low"џU pQ*ZF%S:gLEARF9b9LcR~Eh3MsޟM5XSWU" 3[X&̀9޺禿VQk]Lf@QpXvتpk㾑DD;]\F0L^EZYAx*L)A#!aN[R-?~MEA?Ȫ*{{HNLj:Mn㝜 @JtS,a $;F' X*[Xy{W4-DqI~]| pĜ(oa?P)|]^9#ԀHz Wt``0Jx^ܡtgغ],^F_ @vU-_JXゥbGLACHIfW 2'dua3bd&#2I`V!,dL \, C[%Sa."E8=r"R?0D~ú0S-Cyj;֕Nq::or;a'RV&|j:M:.if% /Aj9)3my-eMv܋v{ @dV%KD1ci"RneB= pFkCV ln_OiHEӓig8_'y&3j[sy܌rp + ē )pfnnFNVYvLNYZol5BӯEGV%uzMO2sp'.oPNO#LEuOGfdaFțd"*|0bt?t,RNJ<=1qq:IJ@"bc۶m۶m۶m۶m߱m[^11sg3wӋ^wwefgVeʺ8Wue. BJ蔓iujhi^:9K}=UJ^pq8;vֆ~\ߠz~Q%.BcCvTiGWuy5?<^nckIOq0LDQ;l &i#m!2xmXz#vW-DdC=$lfI)Ic 9$u*w;X(A?k.YBE-q)I$M$_K=F*k1,碭(P"F-&=NjzPb%\+SMm&E/PzMuDU~}*hj5֓o]I([tjZls$ z†MkPK#LG^p$T T oI[2R= U>5XI' k Q|bզ.\/ib8!GcCQڲCMk?m` nN7/%w)<1li$ /葛8\^RDohVVYV"7u6 !rZ;j+AqҦ&~+sI3Ouz+zj%/n/߬io*vAk9Na&1Mi:qJcx$AvlÄ8~p[s.'.Վ׃omű2y҄#}ME9f @mw9=nȘurlBHlb{(}5S\\k65 ]>G<(sTdz,4$5Ts(OBٴw\1s?O#M ,#e{ƒkէ81Y{1_RR)j'2YO*- XD{)ODCF6֧YLAZ~B2)p ETx[v8hSĒJe*ssa:|Xlcp:1' OyF7JQntʊr݅0xevP1I5MuX+p8[qP=IX?Qu:MrC|?4K1S;TDڭ3 C#cu*OzG@xip]b^=>&wް+~,Ĭ5B*,"v=8vB+6˒ֹ0>rbs~~Q:ODf/ ~n AaF`xԴJ8qr8ŐJa$d{I$J}qIS>|L\\$XԢ=t,#]$)ojEWgfr2t.=aN>򚼕;5‘f-饀jj@3(-jn+|쫙d!nBc7w? 7MX!S{vC"DnRfrfQLйsA)2&7\ '8#oZ PNMfYQN=xx8-ߍh4Ng ;.7ڙԺg_}qg`\T5+U(qcxzc=oR9/4zQb0~V3qx( É36IыN 3.~ ݂ KcZU =+L ùPwtIwho1InCS<[m[ 6ۘ85Iz1jS#;'.r|v[;DA˄v1lú[7 j|Ȏk1̦P~g1&MLih뒃nYW8g]k]y%WH3uǗ 7MdXi6iZ@R{Q@F ܚS˧mk'J'0[6^*&7msUp J;l^;B:RW7)eq)pֆ/xòq!Y`P7|Az@A7[ pB =CK@B .pwGoxe`I.\ُ3L{PTk@}xpC ji!7fJľ, NZ8>MYq'$#:BмBXݭ.\M*ԲamEj"ъ#l(GC3(+> 6T & CeQCq:z0) VLc3jPax -J}TJ!s}ߵi*B'Y {"=AH̰Jב0F"m1ˈ`y ҬSqs]扂lr۶%X+іS2Z!qBDfCZb`NǤPrwmk2"8恞6Qjsg W]~aqP$Sw !\Ӑ2{;WcAWخeMD5#t~gݓmSiqk HI.NçЊ~I=| 2yҸŨ%rO!Nr4Mdo, D^k֑[HҰEI h۸Ev\ɄK'PnJ˜vL- ;{Nj'amSCDuD1rk;0۲ I~ԶnR]DyK s XpeCb2Vf+3ɉ[q px}ZJ{I)3nٻ)Hb舨f_7G-q(uQK;me%;-%IAf%.|/%[etdQ' :t"W/1UpUIXqVHh R.BxdλlhkoEPRTϨf];"5OZ-k&`-Wbp[2b/] 6yS~i:+Om镣AStgtZ( _ |6vMY7.}L ɐ%7X`K6kqr.1{1j iXt5I4 8zMvN=6Zÿ2?٥ex8`aQե7xc$~l:).7 fp q=hpo6 |a#yu OO?cU2eq|&]6n3[7dҏ('%=p?,mBMuoW6f-L헦X ~M`2` po,9WÞޠ^'1Ǒm{X ە$2I,ڢ3Ƕֽ|x>=>M.9Z}$CO'+ S!u^0n`͜Ie$vfYQz!1<-#@vډc%QܑEM.PiTˮL* M|o*Tv3k褋|mְ94@_$S5y͉57&*NlT38Jś.{Ld\ ,z¸:4mA" 1f1^iaYE۩}I@L"h0kɃҔ_s Wo_}''Vk^#jt3zpSܧlj' D;!sIBmyVj6 μ4*+aE"n.b@;W-Y[)X]>+hx^,0)2xE-D,Pr?'Vz&݋Keъ?JlsA/<+i‰t'ވ J SVÅqEșaLs'mO$ #Ug Sg}O%/*VYqO3dI]=ɘ)ZimW!b~]2j Ö;@V^X^T9.I0.W=RԼ1* C1rC'|ؑow2bsFXݛV9%2tClzZeA;Uݴt/Aa Լ4ܜ .&Y*u 8/Ծ߆ \8eq2O%S yE|s۟_l72SP֕4[]_]AP-7Sb.LIi"][g1<ցL.n ~f75MCS=ͣ0:Z]OG+w* tֲL;u|-"i3 ,Y.G[ީv2gsOnxh형=q8"s ķ5y1;v~]`˳ͬI7=2u7S5+ mo},byt V)ʒp[3TKo՜^ٻ<ŻQ*u<ّ1#9JSx3Kޱg"S^$QZ 09'~Ҁ"@KYBZ(Go{*)+uoLCe[jyĪ! <ڙKwub\Չ~TS愺)VOM c_-.7\6{fUl{7t͈f}MLԤToagėVʮn`Yrm_p87opמ'~nHm|. 
xU+y$VwFF|bV&`nlQZlh$ULD!=yg /]veG9CA+ oṰL[GLA#) yMWbu _ϫ0?"1Aox3pEyxRik&o! 7@x lxsp~u%Ƈ=xNh-HT/8~}ʴJiuOe⪑ :N.1\bW5KG昬jw3bBAɰĩtќUjC-;(_pvOgA =\3~/ބJl hX?8Z^S! 2"++$~&a2}WcnQhY"=AZkѲp@Ȉrcb|H6e`Ju"KGMd\"b^mX7x?{5 p .fSC|32 O,abdqOՙKep.w7phe-ѱ2ޮl \ǤzxRIkr,M1yj5r؃%g\yבȑ4DV@P*Ҡ4)'ݫN@Wh;k΁iO Qm;h:n:ǽM|ሟ&es5~*V"a+-%䙼wc$9C*iQZc%:Ϗ(]dm\ifrUZ\;`-nh\# '>K1y#2AFG^.ܴ+|aNB NYܾjl)roOLX֎Xl nAlxN҉r)!p5;{n` iEmݵSq u'|[ X.Q¢y{wë:+[X<` :&y0fgMS(JjS609v^3ulLYL$S=كPC@c틉9@K+S[h0rz2pUٚBMs|]K4}_1v Ʋ]_ @(GzJ)]^ + p:vAzjvf(KRzQ,(ZTCy}lVYAQGA XJ[΁m$Y mIL !u0;ܪ◂p )h "qGe+2ʤҍ~kW\Iyvx陿[$~s?SE̝c00ZIKzV?08ľ4C]#v%(kxi]k L nSScP6nbvDr|,IXdM}m<~t52w0\Њ}x@Ld5PyLp\dCmR9>Ƽ7>Iă~뻾"w2 o22,*<aBx{x;84}% {d 1A*U©N [ \ІN:d9h8ۑ@;>}nLk&}o:ej}ah'v#8AQ׿t raC&+Qw|q Ր ۪- ChL C B}<K ~N]wKl.l 7E[ΙTk>3)ZqGg4vs*qʑsu|ꈆ[ֈ}IKn{I\>*9YͨO\u!41IVMfn;fc6>1,[RyYL0;cz%^a]ZBeW(ݑgt;=0'X-s fN5Ph#C^-kfgʆֽzl(͐Cp zt˾Y3!B% ld5 ݩS#ᗍP5>&B@rp\=!9V-5ۚӲsJ4vE=v- )%XjSk91HP_0`QjnxMKTK* $KKX&@Vj7{`m\fఖI`YBCn]z*Ra=Q29_ނ%!݋=z Z?k.0eCj9Q)`5AkG=΋&#oPMMހqliIjhfzSLtNG HU[4@64ly-:eqy(a\ Әj+Jo)f{e')u>b7K"O{n$wbP{NU8Eo sSh2!bJx*@g◔M::gVH>%8ԘFKAx7~H8lpJuK%IohsK%`~胀9|}4i$|% vaHhh,o_ $}0`MjQ(ӣ,1>KN0'6-쀄1&O I'F5l}Yz:EʹS#Y3 naqm`cDG W-߬tk6A '~=#ƥ7\N84躙*9%g1Hv8cU٣= 5NbT2PgwN=+ߡ7 ȉ šNR,ib4J 9mE(ET!$gL 8&&,θ<tŹ!E"JpDž3Ƃ*t%i+kmҩk8퉐LypAbe\6 <{+x 0uP][y_+Q W{ 3pv^*^_PT)w48r84hyj+ XGydlthfþ ީؖE7u<:wںQE&!4=)V8KlfAڡ64FuW\jvwL_p^RvEυJ+%{y@YSf)C$Ey‘VmINRqA-gP* (mAٜ !^Ù>S~Ju4Q>"{sa&SC\SzdIݜӁz}Au=7 ]J (Bi`@. >i5`.U9F_XW:pƗ@㏎oAJXQLʝQR=% rOgl.&E&Js`ފGD cLxqZ R蒭~vlD{1WZ pR ؀lXuw5RBn}aoOJM`p%1j_ruI]g' N'Ăަ^7k˛Z,4F0.L\a5splHCT(SX$O^FxZ?:0,[ c'=+䓯l_X"6q͔ZkY㦘]׊tB)RCj2Ňio~ʺ\*$LYA/)ITD0b]E;ܶD5%5$sɚr "> M]DQ ]ep+&U༵EXv&0RzL`W0V&uiwG6t׺"4i[c (nKBLY1Np*-h:2&;wh*h1ag#  ydwnE~N/ZūaX*dZ?APBgT!pj_Tx! iMWQ$Kjwmp LZsj9'@蓐:}~Q$fJv{ <ŹKQ0ҁ_ qBm/}L{l]ۇLYr W9=V9]>-"]4]q (75 *W `, va2)b*iT`~mD`[g;_'6 6Qt[Kk?.}S|cm, 6! NoL'1R<5Q4wWP ^ HI1/pߨmo~HI }r7d_DvgM_o0$ʪ)=w k6yqn'oƞYWk>>bjMޒ OSֳA7~H8?"̑**$nVi(R?aqe$Q+tTyAh UptYiXҢu7ѩ]7‡afAҤ{T-pg?.Hk l}jAgD] e$wɱqm8ESO}2B.+l snDy !0g֭PRFkQPY]XZMG p8Ux3Vާ;*Tx#D*8Q|hH2{M#ړJ$Ip٬eA))ơ4cŜ9+Gv1Ď~!%s%BF"f$oM.x{[H)]Q4 iXdS/)p{TZB]ӹҧu&l w*~]t ɸ9F-q>/Go>_2.mhHKBlD@<;Hԓfo&kQ(*H'%ӱz~1W%8"+64(O>[ .!AEi~? .^sGF{J2spFB9H5 {Oi/ŜmQ ݸxw>T}dt$A}t{%2n}E>Xyo|gLg OdW_8ߕUD4=ސIEt_ro#=C` &6L+._P[Jt]g"Y<4d`&z ;ZEbJGGp;`Gu 输Ȅ8c9ngYO2c&5+ݜ)PoBRm(1ixmbi&SH2G*%d =P,HI$rr S0ÎI%ّ 4lVS w[/eCZ66ضnXU+W/̒.V ?1AZM4&92{z+:K:s3!PڶB^ZKSB>|m/'w<N<=5mE]ui=Zpٚ-v&M|\&==m|ˣe~Z2%@#Fi'#㒼$FC,JAXW+2IYfMOTäkݰrzIQTQT4G঑z52Ha·uA6 %IpkUyt*K950@Ԧ߹4 q$R3!Xs녒S;_sKSܓFɞݽ×;,^og%2Beu *Y7|Ï-9p'vc|sfoU'A[Zv;Eَ. WСL?zҎQ|5x?z7a~V^91JȈ 1H?_r-~'o;'|n^huvuDU6(.zt PQ afGDU`˰XGL[y7rQDg(Mu6. ,8#ydp<+5_QN܎Qu2{ə22dløtA9/?vqs&( Q|> 8#6H!!Qх ;ٖ{?| 2O X"9a׈ ̘UW9wHkBc2FۿQ ̞~@[hC5&a3ASĽ2֮+(pZ=Ŗ".8ՈdH*Gj~+?a7_+,tJ\I|'Jm0NaE)Y#AЭ>9tK!A=ZLH˲cţ,8nׁ=T|Qp3\ #dHLXVo1&+? M|Y#3%v9Sg; SdZb?}IJ@o6KW4EEפ?PY)"ׅ_z3yEV;xҖ=L.E*H**9%^/vJ '=@ȼ*A%PuE2E)U?x&sCE>b^& [% a@ 2N)\4XXF<77kbhk@~j x:v0>GOJQZ`yh{sB:O7S̐Tܵ!]b]ɷ$J#̇灼^QP3! $~70:k1:nnc1$i ̬74HUIHSتaRfT`+"<4N#A8fqyԕn;62M ;a> :$^ n. 
n|vg1Z: 6?kȦs #/;^mVo?ʼ.1o.bhD<֓?SA:LafQOZ{?Q EÃbGcp!EN~]w=E62֚_Gn\wiV~@=5096Sg+!Tf )n) ;?j_Zo:C+6R(rTq^Ԋ{C"3,fʷ i"t/)fgd\ T]@=B.X]lu:>r(R ImXRuo h``SnF9y"Й(a,qIKvzyu;zzjOMQ[٧8YPt%íڍ!4zKrO5@Za7kAm ߈^%\DXguIt0y}17 ŚfXZu;ȈX :G:A#T-~[]@.sLr:+y~~ZS9:8Ή͚1{lAjC.R(_tP,M(A l C͋M{W2EaX+\#ǜgvS?g=;LƠO #sFmukʸ(lDud7qŒYcQJ\>L[u20[Cmf+^L_|RO۲J7zqf.6x}mgl&T5F5lEߺXn 5VٰUnK/ [jк67+4ƾ>/Wm$Gӏ'SB~ٱF ,q\o yNs0R3f roݱXgQt5f&}ϛ79p1)0F#3Nb(~LM%)/R7  k(56!D#xX\Ce-Bۆ>=~+ρ5Aev",oeT r(MePsc> tTwhmJxEg-DE8DqOUw<7, @6 xHk2I#)u "[&)b0 .32gbVBes.'gn$3\{'@M9^j`Ø K=n5c=̉]݆Վf"3mpWڡ, f=] =(/,PwAFǡ[!zсXBp|kh'4$4\+\y}62cf6LkۦľWZ[vSv ]YlG=hVA ae2C09!*gqׯy&5^TڐU; Tw6xr*J3;vER>2YXQ+ S\;W Yr- j賲90I ,2s*1uK-^RsPloaCnPd,م;ߐJ~0' fnו0웼h~F(fGyq0> f 1[H)>fSl-!π"wr2*<%ę'̿X~; %4W)?Xc@]x'z7hd i/`t'} .F6@! { #XNq~5&9F-`7V~땬n&I?!^VH9Apd@vU$#oJxs)$w /KְB#шlc3Qȡ mѷ9.{ gWߢm6d0jf܋_8VƽH(bޅK5v1JN5WvKrÒ[Clph㙾S` ,W{k_ ZixYxޅu'WѢSdQ~ \|UnZr/q|b{wH~V3/VO4ӏ)̀'poW@_\;v5[}#03D5gSf1P΀xx]ȷ< NsGc~pF~=Ah(ɥ8]X kj{ecʂԋq C:x8C-Nӿ5@vd6HY?2nXw n۶m۶m۶ܶm۶m۶;MڴII&Hvp%;suL# Yˌ(B 4I$Wj*X"?'&I[8r,S -ntJ(˜βI=!??=j;xB.'EFfsP°+CkYCn"( k7I j=8p '* r9S@~aрsN իr` vm:Lխx |ļ>7Qw6,+([K1+}!Q/je p SaaeZsK6vFLvcwjE /uW`u ##E]\xDr &yqG|a^(Gizz CqQe/ֻ)7I@&i7/l qu/H~zL-KOd9Mj'NVi6KO{\V3R1;8͑5zvq*c@4#6r;hmHQ9EqFW Ubh4&JD"04vBMECCѺ> l fp%+2|dVqN^|zv\/GH;Ks#s D: "Ec:VtDްntSLH-D~iROl&uS.aSNOMSQj)[>`&;ɺ6)]sZ><|.]2vcW|Ln-e]B =]Y#6c]boI1Ayj#RỊa܆MӨ(OwhfT0Al8 )[8ADXo0|>.sQt7Zfgot3Xҟ<_JT"UCc.Q W1CK$5DSHׅ66*m OUqT>hj0kR!5ϸ:i|yC_:*"VW\'my ze0w๘[RW~7lg03; |٬v_?8]X(,<J(1k4>4 K2.sYE@Zj4JBEfTUuFsʶMkQl3B ~8 Q2Nw⺸uE_V-yQyFʷmW0pZ`b_̕&r_a"ˈC_ +H`&8t o RdFy Tk j"Z%KG~*D.&Y[ cvZg +Ԝ븲|^)j`S\lҢھ9X.7Uqf5h:ss*̬nEyZZ!dUSZʉmI(|'xW)@Ėԅi `HohѨ`E rIzXi ÝdgRHZOKJR\j&_F/jC 1@ , LlLhþTH1T*YsQ[-x&w#8 $&ύ_`YvVϔqD~WgW^]a,mQ?ɘV:v]Ճ [S_%jNRƑQv,_D I]+_+;=BW]3FQO@HLSYy1@E%T~ؑUo:&lXaOGݻ=¦'k?\12~Q~0Ӫg_cz'&zxpn4fcϖ->\`GN~K//GG֤~z/wC<340v9sNmbwĿы~żzeJɉ a큹vopq%Sx't@B Mv}wRGw75 Ք}y!nLl!Ncx( xk+b?c_T?'[ nz+38AF=O"hٚݲ ̦MII/OAG`@={Fqdz~ H'.}h܈\OQ_D[z^J@:>\ċdCEco Y m ?y 5%I9% #M"mӯ8f\ԙ÷3FG02]3fau$ /`zppAp>y3`qS㵐D2L" \k3T x('CrOСRGj`]*1pDVQkt4 62B,<Đ|kz''pc5 x= \לYǘzVDI+-a믗+ԞN HEHo2ހMN?E,]n$qQ ?jCwX\#߀=]JZ߄ξyиT͍'< =lLܣ= $ZsJ˥i&yJ^^.-~n78@lW%K]_oX(xlyN!<ң(MB4 KާKaƑ !%-B/ʈL $RĹ|;)f_cw)q_|{fg-;ᯩid׾:0NlC@<|ɪU뉬A¶1NI1r % @cbSu$Hu).WmC0 dW!v4 ӻ*iT}N&;Ck|}y!r昺p퍁**%'>/ 2Q[v .E^ zbZ&fa::uVFt x\Zijmd*w lر`=(O4i8Wkχ(B#b @gʓ}DZh&kN2+ږ,VHmvnk`[f+]>ʡ DlO󑃅b5!WuWHi?U 5(" o1WFha~;p㒀bb: ^X֭4 0jo([k꜉>ɬH+Qlo>Ta]:uJKq@Pf{jC~*gP"Ld L.@c,FMI }O۟Bv0@!CԀ_8ufF]ф vHt\X&綸|isXuq U7dC(="aKPl= l57dE ]\AbkrZͼT(C37ZՃ : |$*TQGdA}zӊ/0|bQxD\xoC/W[6M{ʌBgYA5bG_ ;Xcd31B)M]T ɷghR-l8*Q^Al!nq[WᴥV]4l#YZd.-ʤ,@cXe'7>`~AMkEhJxS; #! 
tyQ+ҵ~OPdY2GYrHQ"Վ )~[TB%K%Iᮺkz1_ILApZZpV~孾ڽm#JPKc}c>Pa!,hSUv- R/#Y+ `lMI%- ^Xjm#XنN(Y:sfsq_'4oF9TA>UU B:oQgL6~-}_+_ج z3,܃A jA7uRg|ʌ}kMf,kQ}A4*-a,5_)w)NNlZN¶uu0@mV !fJv>9j,U' 5:P 8gk1xuIJ@-I#qFgQ>ˇJShKlm*x0H.}&K(xX?*}j7 E̥ۨG63u6suy=3uT#x|J~I׹}kGyjs!UW0LP7z\!8"l~Ϧ!sb~ b7DI+!z3ا`=#O8Li $c=fsB),pW}o@o1 i_Hh]hD\a"t@Θq (p?dYmu~Ԉ(mO]}wݰKarZǐ$X^|}Ě#Dѩ ^V%BnN_J?y+D+(=e )2o#D4rE{'rR1o|.:~V漢RaWN49C5]:[nc&Q2ÛÒ {oexkH-D@nW!Z?DžE_ 8`a^~* 6 b;$q_5#zNi>?];N-+#`p *%#!;8k!uZ2ޏ9;QUoԣ{9*DdpLtߵV,IJ3ou.ե˵4o]RP>=B7 4F{x9G[^a+ϪDrV$H@+)Ip@qQ0Mr mOvbcD!Ygu)a 3)lvHa$7Z+w\ŁHC<:a٬%H!=K 0TלY^,ͬQ-U_KT>$lmե6}Zȭ0hZXl{O nuO4*jd vZc+2NCVWK9{T̙t b22T=P9C[ԥȏ,5ɖdVKqhle.(gƢ+̝37f>a `]0ۧM> îh6Oo)?lsy2;!`Oŋd͚>OYst7mI~ňSkԉ"L)efHS "V\p4~x"%#H|̣pQ2+ +2{:  놜6pIWQoJרJ$V54îӬ4]bcޓMcd޻oc\4;1T^yWhV֢jLWh}h|ۮ{Tj$' c2e =4@~V{m5T23#ix977bHI (`џMX* ,B Z'AW=["5pG`nDv:MA5iIeiYk1C1 ᨾZgknTf]X=A qv'kU_vYxG{ #3 &olZ]?Cdž)97" yيcdNASGCWŜ'n 5ӑh,| 6Ւע~s~Ln muh€FK2zĐw-q (D5(օ(`HZkE JWVl)) єtOСX po|Vџj}F&nA'g*DL7[^5@u埗->qGlL &#`O1wxn2 -ymOHCkd|`&|w' %K%$43 wfd˨6;MؖC^TE[1=)PU*^Xwt :5A5¸YoPɄ-dRDX1Iv:(ea !wkl1hkP9d#tQSZY7I8ۈ^]a}x osSy5`<-^k.;57KfL7yv0c̝P?47i؝lrNG/8+ j>O'׺tJ:>.tAvHҰ%դ_ wɁ!s7;$\iEF(H6D:+Y|eO9Ŕ@ zOI jh0o_ˏ EADs+%H(ڀ\-.C VP[͞a?HF{( Dz13)TƄ"7Q8olld:qǫh757w΅@wRd( s7Qk~zU+AcɺOmM)ksr =D&#Wvaq=oTԁ;<"tD9K0zgjK N,duF"p z($C czmgB޶tR](mP2nb\Ek:h[qi]ӲV7BMɏdYαj3L!^ MH HT;oo /|(p0JQp?ϙ v3\LYs $KN'pO[z&^dWC x2Kk)7&V /ڋqAؖJ_T1鑄3Л`R.rް? 0HUfcVnH<2:2c.ݕPe2F ZgwbwW*HWuъ/*~)X}lh|  u!݋2c˛ .4H]HytO|-;oaGQCd\I5uHFx- Aaz =q;}H偋\c&]9ECᕧ'8\8*knnj`dM;3uJY1RbS? "[!)VΚ.R70@}'86ZAk3HDJSPQ!57"٭h\zka+8p?:&c  jֹb}H6ɹbT TԊ$kxPdЇ8Y2u٥럋]A/N~,]C m%68ƱYhQm'˘BAiwHB1ee2Q.2ALb%ǵx $92#צ'?O&~Ha>M7Z"pEu~%;RS_qǴ9 Qס1M6 I.p!r6KwRDz|^[%M7Zwg-뛮{2owI1L@XA c*8,âaf|a %vHQBER3 X >]4f5КB{sUJ;i0 ˔{.rtEut1 yI4sRdsoCPǥ=~>g ޺|(%m(?K[Ve'/-׌%ľمZ* f7 4Xd"K /qK8)ȈAWV)ۑKW=7Q `Mv*A;e*zكCbGzQ^>'(rAoE쭊EK˂+N9e>@>MOArO.y\ѼdPBZ߾I^qwjfN|)`z);Ga9oe.]SĞ)ڗ&Go'{ІJ800EYY31yUS=;\]Py_PX, }ԅ3.d=*pYn玼쿥oLRyљNOpv`$ P'|1 |[h > V]CtГ+23@j(j>3?Vԭ0>.NdP]T(FȓwM;kdx #dAw?'RI% aU{,G5ь|U <ҫ f2f _GXZ V-˶gjie@wњՔYo7mAhET@\Ѧ9|3T*TjK|:T*[ /&] ݑVŷ,U.m\U޵P 3XyH\.-A.~Xx{{TvPz0zE۟V(+Wt5=șp6~"!7hA6J lE5guXTC ʺ9Pg\gX bRqsvM+2u>J eX6k.]NjR o\tFTm/>?RQ7zHO_怼 jYb:$󺃍֥'905;'|*r#:!$C&˗Uy @ ! H]5G\TF`v{RAN.ktS. ( '9qzÌ6sUkQ`ry 7sB &MnaD7xİnӒ!Qv7X8ctQ"Pu0F'zPpSM7@iׁ H;<ӵ]2!ᵹe860.`lDKK fJ#VYZcH(<V/1xbi63,  𤾴x= o×yzǵj[pJDn"6W_̒3f/֜dەaZb|*iDȁHO } +H2Eu/z׸t;)?fA]Ji롂 \gbgLZ-Aj7JdYz vdՃ̥wBMb? 
璓X Þ{첉(cִI5G3|a.|kŃgA;tflx3r8U hz6ƃ!z4/r'IQ A9w1|`!6r0 :b(k&`:.к.-3HˎENPxErLux5=82ųa>=橔,>)W.Pv(QW?-5Do K5k Ihr n3O.%i1&T:NI'Z OÖi[<$C2HPa>iU]JKZ49S"ԾY8o|k3R0.gZfkrbkyr#pKƁ\AM }]ږ=XC8X@ZQl u ƻ}#X>%rGQ$xϾ9*80c-.DX[6H NEբZ^#Z, AŽ<<^R!k%RŲр~C3`;¿``H#`/)E" w9ADIe7tC3nH噣ʰTEƱ3#wvMcLΡϨOLaof{$XL;Vu nt{ ʝ~}>~n|H]~\\}RLNA#Lf@bˑ!!-pY2bN.0}v: aDՍDM{yE?εRE[&>ޱd`j&?BLFcfmVE^RoTY̡psKɿ-?-㖈]~fL3~iLaka>C%$):S@I@&|a?fy.~Y|uǑ*NOTOcI^l!Dw?7 }d@#Ϥ$h l3hw)xMdg⛇" ,_žܕr_.U7g} >*#Nq$չ,+ZbO ګŵ,S-hr,ݵ˩MܕkĬM~q=k8ugoUZLIk5D|,^Bw;֟@[0=T@Ar bPiV />Ӕ5L+X"f/礝=Tzrj[pF';:/&5%5Tnw5ݨ]'h.,̥rS?3k $"=e9ڇM[(Ε91݅Y")U\'A}i#b'Ϣ87HvH4L"LPg+Ȕw`)ji)ly],V=U:IlY|{oQyĺV,eurAPGwof vt!RX~$f&_=݈Ȝw^}|O5W)}8xƖ?܈Xe #?LKʐ֦;Xl0.L\={0KM捙EZER(&Vp=$^#RAQmT:F9$I5~E1 .=JwRyfd>P]c-NR|Z;`p쏋*8G2^"v)HB~d cxAɎ](EchUz?&͂!.ܬ=tW: !k-$R<c ]{K}ypDz?F47IPɲ/H-eVR>HZhVZ-XH7^om˕yHu9܇z]jlK})qAhuS'HLz0lQXϷ/d!<>2nUxflaunnǐg|pRՍY> 0B0bGF~][t0aRy:{V:ק"+x$υ9 QhHw&곪ĕ7f_ˊ8Ia>HHp2W&5IfT瘨Xc[Aߙ0砍㙬1Jx[3klCAW#;M"oF}dv3ɝ vp"2'#B/`D`U;U do b/8?j_y^0M’#JWi?xtl ]D`\gs-zp~qguh٫2Izn)YO (lQԄPzLѤ\]}f&Drgb;MbY.& ٘ ֩x_)HbxK(:eŌ a2ؐpMPӘȳ{FobZ3N5& ΢(7;K | Ug#S󫕆0I*뉷 u6s@ JN]/ă.DH*M./WjsVG-4Pt++g7M ;XD{r(Iџ{Z*vbtǃk^&Gޕ뇋0=1Xxt`1mF+5V@:NDCdnv 5]d}em(V}rj1\x~"{Ηg~C&wsWp^o`S_-o)}FVĮ k ^|ғTMDNZ*^y?Bȳ&X9>p_r=+]w\؟n E1ӍCQ ~y ;vI/*-?R7 1h6Wbz%AGbѱ'$rPgLhrFnT)yFm=|+.u^.,Ef42y-{f用mބjFcɸ9?\$![jW\c}ߕkl[?ug9Qlh<[jzBOybY'+A4dTi;Qp~j'XgR+߉q6Z {l0hJaqgٝŸ.7GۈZ?%~;4-z&;Hҙ#edP#Ƙd0jVE{m^F]u(+TGRFK` T0N,'gt+ygMIArF)EHMo;R#QF5a!Ee-s4QàYg&$GOGr75nP䀸YdRSz0={=le2;U5Hƪ<^\l+ʐ<|tRX u`7 ABd|[݇5;|qyʻzr<6pv[<?HSPkRB5Ϳ39kN63 ߌ38c;g2\%8}Ucd[Uσ+H:cihI|k6Dd10zgPpWp0]R>]Z7q5^H1K-_›B4o6;wƪC)WҁxD$Q{ܐ| <.<˧!NhEjNz,`\AW]{> 5*rS+a\%w^V\d?3Pc8T3v.ܰiZR^nm!Csd-vȳq/78O3KQ^`l]V > ]oN]`~c-_9 oB2,-<4k3ŒB& lW|B"qx}?٤l7$|q JPu׸ >[]װ >kS\! >sS\٫2.cq4BW"rgqJ\FQ4"3U̵4i|F &W4߆qy{uf&4 +pgv E%̵O|Ouj. |Mϸ6{ULe۬>c#M%9״m[|:ry3p6]C5lPfx_$LTɹ{(4-󑮉ea]k#$ݣJiQKצJi9L'd?6XcfN?XS]zd?3|wY3ݣ;aԧhw,&Ǯ3x(3 suXO<'㞊m&y"d? Oe? .z@.>VKy&éXNNq7d?(ݯ~WIF[zd?~j_4Ok_{[&2)ϵKj˿?Hjʎ"KBWSӮbtfJVҨf{w)F۴k4+2kv瓻^w95+GgV[`qQjѬ$&3|M?Tz55,^F%Yo٢O|:m[ne6gqʰ`T-G;YLMСe= >RQѲL&ץduc&cavP -.PW? 8FNZRyhZ^"xbmĪ۫]6ߧ\-4Xp5S,9ii;OF~Q-RMzY$oj\a刕j׌jnv+7VucөLI3fY(yw;THZ8ttvaq}4jɌ;JڻMYt9\8\~^QvAgy2ƒˋIhen@-giCl1>.~'˱8g dca0I5RHe.*=7J fȞfJf7z}@ &&'ZK/^$g1 r#i|?ڬKWyeYdȆwLj`^}o r$ KцEA۔9*nlljGg&0[AԆ]iԠE_Vd>{tҔd^*y<#N%}r>*fکnqGU]{]rVwQ"^T$@|M<z V]B.l׫f"&umh|8 ɑ`xiN} fGf5l(-g7f6IiN'+#{fyږqXt41e^UF-^F6'7^{eur<5ҨÊap,R9݂=VudԘ:t*`b9X}i|t0TજF׍ُ./-;.c&*PnwOc6J:U+A cDԪ@& ĦXڣa󔡠Ug-Psdڡ:|)ke'5CmݯZm{~lf_Vsنgr l5}W~k&%Ԍʕtz: /e|ZkTfBLIs`Mn}CG_Fbf(3(kC(ݭM(ɿJ+!8sleog@gT(-Kb[+ kK0ejQ ]]5BYy =8RD_aPQ^GdfJ'N}y2Z|cT7*@͖6{ci!ER%z#]_Sj2&&ϯ^qĺLcc gbT'"Z,dל]\yv,%Y5n]gY Ӏ-}L{~A@/6$5o~YK% &~fSUwijk3m&QKQC־bľ}4$%jKZ&˚> ; 3"D@5 *dњ}go!CCQ_.uk+bDh4,5@F11qa#Iz761C e jy@Ŝ' 3fx:&1o#=0QwNjHьZϛQmu1V}50D`\ho-ABk`}8}T5 Xd{{l`?Ӳ-F9E* kD._J۲[=HƱw3Nh_)28JQZHB?YI؁$AV>~@J+^IwrsE æ~S<ETCǺٕ@p#:iߣяCYnoYTDJ@b@"&Un8V-)׳Hs殌") J: GMeͺH† Y\Esx;Ņ:^5s_*j4FpT!E?]ZTnÇ2DJ"O=CYo#$Cd7qMBM9*Z;Rfy-w/5 9=˓mqz\#[72ank2\MK@A eag7K~zv>*Lc^?v)wsN|&FKJ7_L:5xE;{Jpp|com1z2ƥ:VS mw2VMoSmk*|u˨n;T5,;EP ?z Br+ -eGƣӴѯ[S?z1e ex;=.k0{}yvϓ ӱWAjYT*NYo5nQxuǯ搋+]6ձ-Q5fm25q;D ټ԰O5,۫e9޳ޘZFSsښf)oL%)~W%4Tc DkShj*5,[2L}Q`uNwٔS '?xA-la8=B9/`C#6X 42,+CrO2x`Wᒶj.v*;Լ,T+{ER-Wݗ.o}bgu/G]VYS#ŇtN?S*bj&e"9){P vTA @7NhaˆPqfys^vMrң )wA6{3LBA ZwGぶ=:m"f@z%GJUJQ'jual $ňQ:K/-7a UkuR(ˆO_Z @oY8/"UOiDV"G*x+X 3x(7\Tm-*M9/Oav^TjV Ẉ6aq[q`)A\ܬ·|/r, >RiGZ/=v ٭~ *J$$$BKGq c4H04fYU3C!}䥌+ߠ9)ŕ&S ۛY7 mHE`RUs>!o69j%Ҋ3<ޕX,ƦP$q.] 
T6ypX Zυ"j̑)U*eP'ǐ(JG)?DG03u+gmւ|Xb26x*jhhRҤޫ$M$y᠄䮣ڤxh#b+izHb`#",|,{UbqqN֠s`'`\ @i-vg^G i5iФɳ'~k" _+K=iY-ѥ+>8AodVt7?nMIQ$i]upӊO͐ 2R/gJ8)zhlOAc[,qZPM+Xm-B8ò1Ҩ W{QK֙͟73G/_LUЯ`r콹nkW}d+0Wiy}%.lR ;n#Z`iv)|l!!5m.n ]-z]ix:!V8PeS%cldz jB8,>}|TM_hlOPF&g|le켄QfPeGKT4d@RyK'pC}vZ!u" g Nx>a1cZ(N|tXԬ@n̂zO'6!Aֱm^wkp ovyFO,o{Ov\ίVM:Wr;~dxT"\&8Vja1Ǖ&>$sGzo7"}3;xVpqf@: x8DASYTHUo0dE\tC—EX*l(oխN xL]mP񪶪tGyoĚg6} f<:D`1R7o P{nJT{JױB1PS&8u - 8ۄ_Ad _Pgή;TC8 O   dT Q-IS=y" ZMfмj-z `),hla%KoΗ懰b)el{ H-yXd绞eաbfpkS蠠_`t4C J\ ?yy>جj~~޳]5={.}+![A%-|psH+8p/ PMWe9"DmKˊ;02 PCr2_#qǭzDd+%9勿|ۮW} KAM5w|';{e|Xn Wj<2W~ B]J%S !>->B/]օYcܹ>Γ.Cw/(:m +J8Z$?IQ9QƸH9Q|s|9A}?WG0r yvr7<׿dRJ,yNS75RZ#1du&rE{C^-00ok{wɍLI&hU(>]FOqumrR(^t'G-W˼ڥÇ@צּM@Ý@-IVqoCS| \Y? pgtr/˱%Fmw~!J,)Y) lwj9ڇL+!m'ڙ_G8Qsŭaۘ~_G5ydlGlۿo'̳SIYת"Gq9z]/vXg>o'2ى-pR57VSߵ KhzoCH-˚ԝJpU:E`wX *0pz&DU/k.B*b7զYJટ8mQk˙Iu&y2x(9 P祭ԙ vJ=u/W0Y*T=_(p^o ׿G_ -N\!pOiOOٜS[7Ȭ ]@GyQ,L$LAzs^:ŜMOpރGRmP)#i"pͳAzCQYڭV1Йz.sPy,$Cմ+$Ų^ٓ[KN& CoӰMiΣe-c_)lDjxu-RO8 O-}ުteSޚu< Y :\.VR9uc~1^SOg(\i9ܥW$!igq=ܩ (-KjiԘPj:C3q -SD* : D4%uIX@0tI/h ]m$<5*IIY0%kb86'TLW¡'G5!$r~Zzr&Xfv>^(0 v(6V%wJ|˟DKIgԓa 3` -t"d_hwnq`˷Sׇ4NnHrHugQz^V$FJHDb$οjfϽdL3p*j#-z(4v |dŬ=D;/p2e {_nTr nSկyQ=<*ww]@w'n_LeTdhǧH+J3>zǻH8%cG]P( c ޔ {oxɈAXQOvUod8mTzyȄ{:'|=faG14򨤰Q>;r"m<^M|Z"aP.BL,VC}-&*Tw-[csYɡ H6Z5k|^2f;ҚIԊoYToT-4~:D 234Kal#4p>.PEu/xyLN|qAE9dri[(A"ĠyӍh&I*E$|^P =VSh$ꫝDӾ+GD?)`Ro;s$ǵ,MBZNqAгݺ5984_=ҙ^I,>b:GC+ 5,\eI)Sj"/&V/Ȟ+?Tnpo~nhaJN3jʭŨcmk/piZvN7AIP X~䰇(-hF7W ~b=Mq[bpmAXlx ԡҬUR|KeB:U#W; 6_3y7A!ʲ ɻ kRTc!SIzFijs1/an' G~M^ΠKށۻ*?OيWtba*%(?ÁBxVmHKNm7#ݬr%3 0!]4hQr}+g2'[\ۙU䦻~Kbi͌Z1V.a cIU[$Ҹr\>s;W}|wv:Mzq<"?t7 @p5iu~-z`a[]d"?i29Zߍzt CԀ-+!XS k _c$@_]oժ>^YIvG1vZRhӱdԸѺ0f{S=V9ҶTjPS sLsl3 8\$}gdl˲: Q;f3ݘ4t}Tb 2V}u0q/.7 "bd0{HfjLN[*EK'2*@=rwj^0 -,(i ШYź  8P_lp@A0y/(.PJ!͏ArQb<2N^F%,xUdx vTH!&I rG]_}5ZdAtJ"mj+^'c16'-~7C`! +~J l 9w\(TXE?C%1\X;H=%f!Ң(EI_r]8gh4/hvG26/yL34q %*=}@=G+>Bͥ;f\K@nj\:L? ;n%aR;vH[Tr/xp} &$C ^뫐*vpuwG5MVs]<~b{-euXΪf\`ž8+խhcW$9Ur]Mqʻ-n]њF~JezP'-]3oż xt:eS^UK.onA\֣>qC$ BV 0-^F^MeuD@Ԫ}uX'Z{on:>7ή+ s*u7vok.;YMF5S9> ti`s[FQ Z/QS V͌`Y,.3 I ~bB1gЊqR>Đr s|ga\/:9RyېqB FOӚ':$a! ,RzYT<^Xq\?p:wBc`JaUѸL4`Ug)^n%{QR<@W6)"Qh8ϖ"WĢM-/uci OPkWӯ-`Q `'kpP[iYwCctb}@2Ʀr?:^K=IBͼO!^#B2NӶ{6T .>l 06s9ww$22 ˼){>~@ԶIG2s/f;3ށDgz.G|xiwϏ!L|oׇ;IM[NQiFmDpѺtD1]ؠ1Cq w&zwG?_ ̠m7Glx(,x";(<!( xb9M>!<דܮUt/>cuM7Ћ#6+ߚm߸O3R!|ƣdȉ & eb8;i .cpVd0Q٧ JqlQߴS'FVo˲W̝Ʌ>2S>B2\H}WA6W*֙?/[&k ,][k?cQd'D+t.,g_1KC|uց[-UmLI.m)~D8\):a %Be(>q~^^=|xixW dTfq ^O̰ Sэh֞Єa 8k6N\b:A: _rk\Lנ_Ȣ eV2_C!6O\Aj> l_݉6بw1o2 5]jpLl0\Dn طp CJwef(hit3TFL+BpIN03u]Y+ Ue8uLdPt[<7>lj0D-Jnr ?*u PnCx'dK)GPًyeAev1s* =ZcҦZ- q"TfivV26?)ioY~6!NLmf3kDΞmiK5[I<0H] sԶ,-PVq^UNoHPʖƩĦ=7<φݧQ?f2r `a2=ZK" IF,+f*L6Nq"'-/zpBGMߚn3dQD:BQJ|Uaw( F`),sBb> V,)ߓaI9s7Sr8N2"ɂNvUd0An<*j>Y2gmla\Gs{ai6n;MbO3{/WN1`8M+E8'0Y|>A(sYYl)Go%l*w6(a~S[]"ɑf592G߲c?M q9Tژ]/WN/=  ~-67gqO-n%u cjn>F\T3ϔ4LO83Wg+G33sK[Ƣy ,?fA*3 9c74{bq`lЉK6*) !=5ef6gpJL> D7ZP3GiZY5zнjv44~lTQR&4؅0Okѩ6{X`17&'/?հ*n4r&HxH *YEM8B UvN9 9VdsdJ/K' qTDe$ m۶m۶m۶mֶmn&9U* n}${ߞmr,-PYEM2V-{%L~׆XƟ`&@aq"Fa?W,+ݠhht? d;Uo5%$pϓExd06}14;ݾgY^;f57H+ˎup(MLGĂ.ʞY?ZtH]7 >T\{\(.5).N g;*`J=I>M"kXisԯyWmyp`_ǟrjg9f?1b@T]j-ӮR_bi0$s41)J!;q|:#ˆݢs44ʎфk1|m;n4^Ig.oPW%CMy!ՕU8ݶk߲r,zIѱX>nWYRHLY?VZb~P B~ _ y {+`6Ry&Yd>40w(?]$Pv ǯ NPi`R;Ø3&ߥcHwȿ ۷GT`l s Ynj @&4>" '$ T*JڨgjjNK-+y;~ ʐ>HOS;4;8DIc\'̚AƩh~Sp'ξ8gғB>zsW=ܖ.C5;Zt,Ox<XA5 8 kB$hUyUӟ&*0?`sŏm 村x\=ZQ~bP4N` mV@-0eSl6גndǺc[_4ڸӽzgCEBhiMfR!}! 
btU2O:lS{gB[,]HBDKQH;G\V@C'ɕDpH)Ct7 eNNoIDǡy;7+'S SEvdMD3XM|?ؑ2|N4{c&f;lc Yх5<nܷW2Y4ɞHw< P#ȷi5Bv5W"4,͆2vpԲ7W4WCEbw^A;Vxi ?(7I0Nkm栭؝z!gjDRwv\' 8 @.|qvy4Tp]8 }kgӎ8>/;6"/CR([jBjJFp *a$@W-"Sj޶7>kdhUȢ}o*mb4t,Ɂ\Ÿ{Mͳ%c-Џ{{QuMWLMRxHv~UD(n"*ȈXtr2%oWdFymiq-N7+JYDomVA{>"{FJdmNꑸ,8RJՑՉ_]x*zKr*6΃O[3BjZWwT޿w61?8~t h[^ ¾9g $|s$B PK +Ʃ/hM5W .B =2 rP VAD#0%FMqE0w$Ϡi-:d@ĞqN#8]췭(~)V‘|}>I3yS[O/Z0ɨuC২} *tŰ:dQX[c!Նƀ'S4M4 Vl)*ohrXgxf9H=oCIw7}-{؛ؤ%>\C:hrݹk%_an_v̑*CkY5Z(hؕ6]~Msvٸ/ߴcy DUīAvS}^r&]VRp5gvږnjuo]7h#&S}%P4,6<ԣYkfQ{鷯7Y~&eE8:J=T.Tb R2$A}ߺ֖q, /A&ݨᢺ2oM학sXsH.`LYDbt'TUV Mv6Tq1v2!*EHzB!Sɇy-51hGӵKji EJdƨkXI#H̦m۶CG@,(Z%P-)iM:,6:d,~ei]o-w}L{3s3YR4P}*R*@P/3lpԣOket!p%* z:Zr_Q( (@^8ݱ4tf@2tCuxBJ*?RhA)JcDhX/9&upvZ\&t6{u3!H,D?!Fޑ膟d'} FLl~( 1toJނ{y/o8DH7k<,8Bn;5H4a؈b]{ "Gk#jihs~^ipl-'k4ʭkO`py9]%cE-q.$}YPھ̆?|z:zgwxœfW, ]nW)k9_YAK 1Jj~XSQ;ئzY\Lй_mYmjLlW||!lo*ex!pdiwTͬnܳ{OPdb⍫$@rz=௾SO0"|d&\ pY'Eq& F;F6/*qտۊ\3Qr{62:\68:0*JN)UIՙj{~kR))pVI}BC'<9s9a$)):_0Uz{3ޓބaLA8 !HôcqKk:CvD0"O)fAUVu35"(L+|[cY2 8lTCv)ZMgFhz-jׄsIo@@i~Pbk'f}kk5x4z4l{8 /[8 kW/;J Y)./~~FVV7";)Z酴Z6PVE?xF 6+ "`iV Zu''ϫy#?gp'[&L/ b)svPm`(@_+qqInA|Pz;V P=0iZP; hY*y!pܜnv͑?)NvSF(ŝkL=gՈ{h%~'qɊ3X̯]ob_<7ei혫6bǂ%Nͳurr[l6tS}Vuu J!xj2ϨtiMv<+ycVZ/8ԍ/ fU +8HdYkY У*= PKb@F" F76?|vݛ(P=l*HylT3k߬X>MPC0٠%$( 6΃^ |ʃlNu;1+H 3g{M) ji_\v݋SC*YufZI>u|U]t uH}ŗ1ɽLJ\Cg,7Eҷ6)^ڡVN@MB(Ѽaf~ӰEeK-1v[@*K%"Ö`,QNUK򸍎d3$6:=7*kH'󄔗v\J;TJՊB֥mKi5<8I vn! ՑGU5}^!oq^TُTl4$3bf ?UOD(;W񫹧!`jѥhrݝ?Q66IkWVuߠ7 k' }Mw1kS X?nd 0I>\[4|ZxnQ|h'pʔGۿ ##ߋPvF^}oЋx {t/r]sA4vKGvM­$mg |< 9X[Sy u lhRdW<*o%E $qkW\€=je;^MTꑒr0:^0Eo2]2RkݪXvĥ|ɚT-QvW^_okd9&ٿ,d; J(Z JUoکa G G}]|vթ[՞w jVbD3~/D۶$5wx_0ä|b3'-!َbBDUT+Mp'`eT;"#^=> XvԳ)26t.H-%DमަbcƔ_eA6ilh`-@0m 2ÀQg8ٞMnAbΜ..|!w_Gρ!CtTkyNwIZ,K0Ůݕ]^EEe:{$!KiZ0m;0Pý^&UdssAs (  ovvQ].Gkp=k8' uNxm@Pz TOxu0 bܫzj F)hLtB R\\[+YȪL_OnkP?qT]^ ?ʇ*]Ϻ- wz_=Amxq{fNdϦ٧Li%*S݌Vo?ڛ/\~ߊ_>n=ד, E/ 1G@jhhXfh U5:6b4 cT0\#C- P٫-I0Lvѭ\)YS)N}ђ]}nM\9W qY&P`ʰPw}IRdBa>&{đ Y4ȬH̍Qf~~s6}aB I߶zvV}jU @IgК "yf@zp)d>HTS+_4PlUoQl^M#c'5v7eo#@Ned_]|=Ke4{̇P]O" S08ݘe _E*<kʭTWSEjs@B_4X='2.VUYxnUIm3̓gaG&DZ:Eu3j㥀n^wGnɰN)/sW8t@R0e|tC{]7Hm~{6㬆n-^=@ Wwi$oKQ#s |^<YrXFjWBqGc+Rc_fZ4\Dy\5 6 WSZ}DӠ/FHO½ /=)ŴMb_8~ѰBLQǍaubwCE%s|!# ƮJ14`w׫n9[8H_(j$Zӝ5r,?W*[,~#vҭaD=+M{GJ~R ͻ﮹"&a 2Oy[9\D v[MmJ#h24)W0r9>!47H&ذ2qMSxyoaRIy/RHZֹ5gKlC*~죵@"}eF7C-}w57˽ '1ηee{Sf@|\) sY#?]=m%Jm@e>>z/T mBV\|%jU`t!X,sRT̒U{nidp>/!_Yp7\b"A72N׻Uziǖ\63͏AlXؑEnVlϫc ӧq&iq SAҽAOj^jwTq NhAuS{Blgˆq-/$z4:l+ag։68~ɍ#~DjZw|c0٤˝Jꏗi~|?1~m%<%Oy.|UɟΕݳ1u~Hcfz_3g_:w=O=ž(]P__H<`cd?arQy=+]kZ_~=lI@H% 'Ь}=߻ UTw_(V2!ץg] 74x0g{X6iݡqd}mWqo;[n&f}2bM&P5DiME,- pp(]Lݱ6#|kp]lY ,e!s|.࿡kb}٩i$Ė{S\ae򁨣2B_7rovgzbsxpFlvZ-UO7ogތc9;˙o#GnGǯ_{y{ _sҌ9ׯ 62{ 8,/q \3F~0f̈́ u3y97aw- 䇹W!k.}oT>d` #}{ 8yژ1f=g-M/Ro.WXNJ\,#- h?9%濸l_A~j@:Pdyxm8ǗijzI܎`殹U@%W2Um;,V#*nZ(%_W:;HF숃Wq]ӄ&ET5Y/M7RY\>t6>ޓte?Oi dE)ctoSGz龾{fYvO}}.Sym[\jJ8Gzn^,>bźc2RxJ=n$3q*mn>r>4CПXu2(tTD?ۦI):#aSwIdtwmH7|N?#0oraggF']d>_9kmswN5'VRq[µvF\;Pk} 4)z}e: Q ;;榜vMiHAxz;~쀹/mg^d{wor LU`jTe~boGg}ØwD݌XC+.6+ܸ˜=lPb@+qo_Ѝf}ָ_ԵJ(!t\-bS]7^o_$ Qf;>j(,]A2^ B̬t$ȃ;vU+L.yc=i)=vbʗn"u'rdcw~vGnas\5CDlmOɕzDeRFz5qÐdL;fVgԲ'uv|s{@&z\JVR;aAc%YJ%9ߛLԳ u z.wBz%3 0$o(4;1 $BL~;.tI%y&M~{wJV)En6W`IHG'}nu IA9+Ͽ⬝ڥ:ْk]CRe'%|qLH`)ҦZU8}*TLw~wH|z7<z %H l/XV%;c Vᐘ5\JL+ӎ([~$ԩdZU,#9Iqb_?Ew{Y:n߄&7 |WA˕CYJkUOte~> "02*vHL fmǤ5?}re `a:*OgbL}1 f`;aeh0A b0{M)COOԙ# Y0Nrr%)1'4%X> @(D_xPy]dN҈TBa*]-jOiZE#آ@ |X gD}QsTVӒ<ů2DǸA8AxWǥl(5ເQTU3*̠?p܅NfWh"rU.K|nGy1%Փ`;}r'4 J<26& J%:XUγ FNFʟuqr7ZOAQax҅J#CV񟏯֒x#9lU6phΓk.%d'>s|;h U:ټ2Xp.TtF5!Xg"GH*@U 6 ]rXjz+T}J@\}߳@A-}ndCdɽuöC+a~O,EPg`] Mlr?U)i*}Wo#'+}3@: DJ۪_w4Cۗp}DN!?f(`4h\CO|ǑQX>|<$J~M>c!hMa@ 
9:zT^Tt?|ՃpSڶyhbA`;һghy+C;$YĥaXO!k&*Ix9~wLs7bopu$Zo07|\ٰB5>?K_eև3jT"jꀘ'fE'Wҥ坌xk゜'4XBh;W>.m(t|NT[ʀ/?MS gGG91?][ԯS@8[H2HM0UPEGofwE|A<Ưp)ɵ7x1[\ 1a}K7 ke=]Q =]a{( (}/dkEݕyL1煃>1Uz̘t-DjP̿$Wш `4Eí#Tsb)l^ R°jU!a|mV4g'ʶgAiLHK*"@R6t -Soַb*WMA/-1oÚ]5@3{ma^ _eeщن}RhH@䦡3R#+[nLԴP&UnW 7N/U=R"n%b<@EsY[pL/|V}KC+jtEZ1A|jo0NV70i Xf"(da VKF6N?vR*6zvg!V5$kĸGKGROصx"qLqXhmfʂ&T9i½Tkr2]`- b2h0u_a D[~wjJsCm3QUΛ?3?FaoGa(FpimI'@ Q"t:sAjJgRHyN m!V岸J/fH_IB^iӼOZƲBAļLi z%J/\B?Ttzޭ h2]R]z^ kCyJ?|.1paS07~e/`QadAEr@S7]ќþdr$ >clYvisںo߿×<{gќI4ҋ=GzMdSR5$2B?x܏,/"vGOpnbG$!/&_, OڶYeG>#iZL$ڸ.RfAC#a_ydasYj-#rd)fN\BNrS{Bwq 2$ǜ8cvKS[] ѿ㴦v[ C7(6/Z9a4Ch56WzBE'q]lp+Ps˹`34ialrd..gg^))#8SBWW ĒQ *E5 h4!>*>t\`r&ϒE=+THAF/e\H]dl16o)\R)0_WϟcQH#WK>#^ëQԾ6ptcIpqFj:#P+2A$@ڛ0}#HIP8\@dJ(8* 09iCVI)ZJ >N8Z*ҩM#R%0m}])+)Fqe={ v=}w]= }-;nٝԕgB7Yꈐ[o\5Wܟ0vsQOiBʫ)4b ܲ](SGJ'NBpܷs伨6좩Nx"8b~{A1PSG@wIiw{΢Ρ!HQra~]pJZXMwvC5aUS5qutWvPKM2m*)&&+oOV&p5E#&ַ?*X,56`r[F+y1.7A`d-bOs}i"E5oP>)(W.'mAcZ2v*ga6\HHRk.Ys%mBGT'nD@x>f'׿tD{bTJcz԰ܷM^`/Q=C2dC@ 3 vdV!7=~d{MV⠉) vMvJv2B!!.5 |!⌿5 )ԋ8+2k8)L"Hr⑅v*Vr:e,TPI.r]Sf; .w)sQ9)pȃ$8{M=sI';t|Y_wxMICKNPC9T 51PcqP%_7TXuNҹayWkTu,˫^=,HbixէYSK9D܎5Nr'tn-fR}ߘf KˍPIW#<Gj[ޫ5l#v O#6w{E'TV_F)TGN.!^Ww41jWiکtHR?&:H1qqfJ]CBH6JBi] 9xJu+(JһPb04/ xַ䑃`2WaZ44aNβQ)+ga)8*wPNm i|j̿H6I֑]|KUK|`GAZm_*e)oDP6k~>VI䬤Vj  &0C:O f~a|Pa)kSrN ľ mWU>Si ' XbeEv^K$*wBa*Mxj֎oEEnYIUc(#JHܽsD1*w=`=7ˏң칤ͱ =6ؤ2\b4Uocj,]Ce8jzVAQ.-XV⩖:ٓ AAI} ?7.r_&>Lof맥_a0t'&!AoLE t_N` p3l ,# X>hsaȻo;"}ìj!bSOxU?Ҋf49z/9t|ޅ=VxȤ'"o\r8vkcyD NQ솝._/|7#u+cD,;3lh1u|͌N=CZг| f{mIq6|Z܎DW"8$ިcm9O4dޤ\(g3r-^⍸U٘nrv2 j+r"C2g]1 u< sYUѫyY̰ ^pc"l7R<'3:[}}6^˚J u q߇]]bjF@)1tk?gV96F/bj`7IPO2v̚D~"ƢRJ('F|z?q1:okUNGf(̟Ie\D|[,kށ}ÈDtx(I5}q׉9Z%%{K]w:>Tl0꺇QުtZN)u՝6Y#sR>Qcޟ睊 0K\:*TAF|s.jJ9xu?s/ jy3(2 r0u74ַoebioGņV1 _ḫͥ$udgCR rѵpuh~X+OUp>C+Ie<0+1^8-LC9/5ijLN]["ݧ+/c';w>%g$^O~`Âyt Ͽ_ Q2v?K@YӘ%˹G0@IH\oY_% m])H(EdT|W&t3IKwcl,& G%_SJDj_Au <Ɖk@v\ 4%Ý+X<1PNpwxVcqe[ں%T9#* nMXo vT2͂&\M`16у+o6$.*:_#<bdfj2RGk>.Γ/vM%?%heS7"D8OmH?X (RkKq^ۂ\ʼn/۱tV$O7PP&RD }0 VtvT{Y%&~եQi}rշ!OX~v,8Ǯ3o[DQh %Z(N(GUkWBvSD+n 6'hd%@W;06K(z :-X׍cKDGQ^>Fb]4 ny<_;( v_tIOVXP&W F{- 5t P^ipyn:P2\Usd:|If kU]&gZؒ#<}dE|_a=L׮ZrʗEi5>e><$9![8F6RκA^I#4 uFa*,ŀl.(: '9w("΀FdAJXGh? 6L #Geͨ0LZ9p<+OM ծ'98˟ӪMM#K&3AXvN/)1,CtQzH9e =|]oU x!oTc{:jؔvcV k,zBTH`&Y=kswc͖,<j OZm4r񣰋28r5Q{j%KffݒBsh_l) ޴'lQR]~bLY%`'-7 <3eT>WAbbl#.)BU9Yz#ݙQ1aWy,-vpPȧ)l0tD/bS9koc:;R"G#dPI}}-+E3W9P[˟߇ `s5|)|fch'A?-/LFtTK9ɱ ƎkC`Aˇz6mo  1|/*ci_V<6=\IdR.׌ cCiS@,<+gn&ےb'[;ZM^yts䗤v :_$V'$_zu%gMS#_D"}*ԵZv@9ljWIn*Z؆$vx;(8T +Բk5K6n ڞKpaLLm]L MZ T3yQee;G'S*H[ RQ"]`]l)0I+v ᣒElj]z-x~;kj%a>@?_< >`'{]BP> hO0.:0oH}!ռb:Biʲ铒s\ $+Dc)Q2" y\cݓ𯳌|31 +߼Z̞]6 N7f756Κ/Vu8~cw`4`Xߝ;߆z{7^`ƆNZ7;?y(Q aPԧ 晰_7V׹Σ<%7o/5Mi4w?b޾\_ IG;s޵ku'mO5%iylGgV؋r2g-QR>x:jB]{0!,BzrTr8,Pv%xX{DΏ &{yEk?%nA"qH$Իη=s#3I')or,L* [\LhG}rCҩ3 [JZOm,u BpH{l#q`DUir2;cKxh82')c>MOu)e\ϧ/oJ RW{ۯ iK 2|s;j?b> _IkHF^7$2%Ny,Q +|9Il$FdܑCTP5z :D{9 *u\KIPŤF-9b咬k3\ RRVE9QT َ/vi1%vE-HMp6#ʓXˆb+=" %jEBo<^@jᥩp/ jat2\9TR^'򨐖oћ+aJ^FEqb"9X1`tv{/N<>B&@O5a S`B.\CC|\#z vZjL&Y/޷(}i+[K %eU0r?EQ 99ٔE#"7 2! 
sL D ɣ6[R΁fe^ЊkN ‘CLr[Z0 F<1S&ʹʬf &%)+QWWTdpO=Q#ݾyIm%-9\{C2Wvcm>^?u_=o.2L/ަXu/ :>Ni{*[ &^ 2CfFTw:E.^spΖ"Cg3m/IR/x}~.,U^{O2[ Zj덞+s4/9X/L#ɺAzD%W$NMV)mHTep%밳_5~~cnԂ7 ' {XrmhL9s N|YԱ3Dwek)+%YҤRl1@K5Yr`˓C @1?q3$쥧])wO36*FhC^=hڱE'?ҷrm]c}}pA\"̾@:cto*'骪cϻg~P>*D("UM%vvZ:h$)ea|JTPBqJ4Jb;VoH˹%%;VoPҵyv qy4 x( A f)*Rk'/ZTLia 񨳍 :H;ЍDظH?BSI*䯂Pb8џ p LLڥ;Y(_}(fu]A.%FDزUܐ%kdKsܐyo= <+Kf&x#EWpOu lN+W(|m&ԏ`?]&W!& >D]GJ[)-cuBY0F /% Ͳl(F9\I?[~r?֮ZW2ێJdT]"T:z*|hsPB-K~$;P5w ҽ?b{`\#Hظ,(tOV¢-# {yT2a\Yue Wm>5R`[7"c{5;a=l*o;Y%Lag21Z-u7 y j<Ӭ2L=8љC~I*#D R0a\bݎwaʏ?IA=?@8*}a^`htӠ]Gh|O!}-IќtسsZwA0C!cuOfuvk,u\eb:6U|YT$(#:Ԛ ,-y|7, hv0&9b! |'vl0;AHLFx )SŰ= "-"Hi:#m`q\~+ ?nžR)qfM$RJ%pqڶ 7]#bEHZIqO*F_9w]}lW+M:ϙ*xxD_R_ĬGh!ND'r2wH>csp:uF}[Lo3| MUx. zG_dlxg6١'rv9v**ysv-lؖxÍ+VBsQ$HZ#1k<gÞfݰnlU:Pb=rGo1srq~$^]tZ~$ oS/?rKz2-Sw{ن' ذo4:D2Xñg/[5[qzuS5lakZX8WY>/kf)\yZd 5=WQq֎9il%q;Όnb̬a1rNf?T^'SB8nK^7BbVlD@$XSX9@ӫb:n,;;r1kH' %&:RLΊc^qr(}_?+I9~NfO:L2 G,:+sƖ;(u?N= WtMA -;Sŋy;65÷k0C91G-; W49j$6BBÎA#RYoGnK>nKAO9)_Z*6L^ŷ$w8IG^a JH-zE7Lcߎ\! 7^gv{߶O9M\p5`e*:dHq~3JvP B' 8r ] e"/K#`"ʉMzy~¶-&K!ɸisu:k\;c.7"' $o$!{e{/8Z2tNV)L4r5;ЕNӞ~Qe ]&xJ.pwD ܽNl r;X۶GdV/9G VG@tz;a:VHhMq[$"Jnx9{f,gn-DV)gfv Ruݖ@{e_1T*){Q~/!mǦ;:ڙ軘ژښ8y꛺w4xEsoZ*memtN#!y MJvuHr>% _Y(H[aԨ E4MiSN/?/zW#ofKJ (XKjf >L9rz3b-K [XYD`z0ɤƋ&A 0I9N9zp蘼FUo߾"͉+,}M*MK]KsjpK%"dH0;k?qBfԜ! 䃣l^t7Y> =7C]f튳9|'RN훕uz|TxaHG!9 3Se?:u0b,Еe7r~g4.P+e&'ǫ\Rհ d6+sNEMɣ&16Ik fͩS@*5@1\>bKaeŎC!S/2ϙIFrce0^;`mGc#e*v*(Ml,~f"DzQA#aitmI#~1wո6}]'VF5'i~4_z>9ʑ{ J$^^gѦ<:/^m~Oxe_V{#ѭڧW!qh 1O6a𾱇YXYG4=ɮyUtZ+̗}7&o=nSX -*Ȯ:Sf ^j L7A@7L dFF,5R-E5ÖZŔ@èhr ni\[\v djM$b Hѧٳ26pc#P{,1I`C()FŤ/ !Bq'׾) C!,QNŦp`Av J<-g$#w LeH5c2F8ۆ?衬{HDj$&2q3 7,kp'T.0?s⨡Z0Ch_39϶jKЅٗRH8/陼nu0ep=|GQ;x?gzOU̗X' U!*p "B?Ȝ \V)D_nvBT,$D:=d( SZA 8>% 8kO 2';1ށH;(. ¡\7D,uk*rgrx!|ܪXD*F@]y-S|27\v) cRՔ6R= N^{$6 ǖ0⎮6"IIϒB}(pW^$ 9}F͜<(nhag+GD5^p˹jcdJpd.F}ķtGG_KuHbwsi ɝ J$1s}EPr2t0t,UBz3ފs9ꨉ %G˅`ewϕ0s#+i=mfYˇܲX,|p` )"Д6H(4ǽ3J^)R%Zぺ}srwhh}&E楳' [÷ T%Zy56^L1ѥ9qf `p*\umcAK&1ǚݤ6w߽5#}]SikU8yRקkF656 Q]kdyjFYlB)`hjY~R*Fh5re75qyL_pF1ll%[n B~j UlbM,"e ?hNXԲ.aIbP^V|ь%Yi/HOߖ/w4Op2v=7; owX̬3ڀn;2\}bX[wxt4bs` OpD)yo*uy$JHpȇ!@aayVHE{JwBa,!+L`*υ\KGtнQOܖ;C-B 8&@5Y $2}jDz4d"2-|lyAE*ߊ:.wFb sgn1p0h'g -K;}n9qGl7zbbM}Ec6e*)ZV֚#zyA!O!=aZ‘)0e6AgGHyTzAT5 38aȂPw;~wb$xs=d`L-M z+EA=!Q*m$ſЬA !`ꩮ<d-;o%;L.ӄ\͕J5;u@3ZtΐJ'ldp[pҐ+{q,>O;W*n \=oqy:G(.bm#D_pHgJj4+I'r%{3C2`tI KǴ )}5]@( e:8lK/uL+؍ ~u縬Z 5i*JA^'vʡx/lzl3hꯩ@?^9r|Cuoi/Mp N囆B17 ęs ֕q&4482Z 4ZyZC6ԣT]˾u欽޸>#~l^v®؟.aGƒ ^33AR1@r!kA0N*'*n=i +Go!gzy?SW3 aTy8;:G;[I?Q\Qi]gf7cu4_

%V7$=1$l>}_ 1 t._d۳yFIwM=\<@N~e~8fv߈%Dy^TOiOqWV5a5s =#VwFotDv]yU6DZ%馨MvnnT.'B,!$jd_9CmIۑu3-IOS?>&~[ߐ9:8Oz%ƳY^/׏V~tQfzmU A}ٲm8Ȏ/$ƻ% &t~Iocn|ӳgQ5P.=J6zvȎ!)<=E1 Gr!,z(X7ȂQxMqDl?#ؠ AΞ 5nD:mnZseriaxE@W` u#84Iæ0?Lm&0!/G+^+ m0Q0p ,|yf[ ك60k &~Hn9Mww1 }&6.9~|?3k}TC`zͯk_YP<s)X&lm۶m۶m۶m۶l۶====DOLAEdEddjedݑx9N,lGcW{RX0'*%-in44 6ٓ i2Xr` pEQlقDI b))<_H 23ԄyX̌\yEc˰b{f3`?A]݊)arfNwq"I{̠2R.%{0~` n U\]2 dl+;Fx_8 _.%ƫh:<S 7w ^ah%|fA2웅|+jX^#"C33Eq%p&rVE|/k0EASf-83/gPi>E614Cڨd7;.NB‚DZ-Tί|/ 0*X@*+c?NfBC+"4K2뿕X-eKV-3oZ-@c](Sqƚ2K3}3T.JVBQ[-@tֿ2P@[HGRFqYHW-iImk%rJ{ie ̒aうh+y]6YV{|l$ E2NGpߕ], /oBB޶l6uRk2җG}o6lEWQ'w(ͼj7p$|r-Q6(k{qd2vGN5{pytd tHѫm4:H&LӉp|v,m1qEv?ًtUA\HxW<y4m &@7" zNx?B`\Zd& xD 0 ^H('{'_jX[o""Ax!vKC2Vї@x \bȷ`J?w,bxLA+1v%p:v%y=쭘I˿ uzciu[qXvbJNbG̅0]f=*fnju$G ϺN1?M$?JyO:K6Wǂ̴DAWBB\~C(OޮVklh=dC ~1 tg7dv 3Ek]n;VH]oҍ߻p2?09a|(r8jH{ۤл[ ihm.haW\܌&iܪrXO[Y9(&][j^ȴjOG师wk;@vMbc`)& ];@fm3G`e'3M.Ν8(0M"%{:8et Y2HZ"1E}% dqR*~#DݷUciF Z|/|giV p| ).%1R$2Đ`$n~WH2e>܎݂l?5}2zU1t ̿.n]SZBڈqV =׻(jKN#SKZry[q<|]tS3+SG.B=ټ7ka*z gXnI@pڟXeh 7vukoS_ȶuk".r~3Sax?BP7S,1k|+y@l!컋ggsx$kH5+`{, #̸kZ$r*b!|%_0AjD.{t_A]tW @+;VfL"Bp|[#t}2st p>>̅;~#a3' 46i,GN 83zdy`-mg6 E|DB`5KXM8S!whFONg݂"3Wx G2US8{*(r%5g4& -JkP| i Xrp;"k˂I;ohL<p%튱-LWl?=yo8旒f+̂JY{^9TwI IXd) :g_:z:%sj^6+e%ϢJ:ds6>7VjɨwĕD  C2,ߺ*hIW `*̞I{>UAf#]K,wfZlp tAAL=glo·:$xw72wwi 䯒)'.n2$q*]'l' rhebl=r kqAi[zE}b~)Dlq%7ٰAKo)'ΚgDF*T`ve<Щ8mQo$ߺleE,\̮ od9̌oݺrvG:/78b GpmYfW@ajɤ8plFQM & -H4%!=9rB)8r)\IXjnyCzYd5T~Ak#@c6[% D Fq-=. DfΆv^'Rijwf"ǽSqt|qT0?f u9?=ǠOK*US}R0^mR8@E5܂ȷq9W{7>A3U}f!cDh),,ubjk k%cύ!{>cvTo~a6E9>ykY'3U͂eBnS psw&R5QaPaDO0G/{ v~9q>#OJSNxA/[Nw|OP]?.|1ӊٶe{QʙطyC,%S%1DaNl[%A9*fHk?ۓA2^ y]>BCWAצ :B5$Uǒx4d.߰~ OQ,}, b^%X͉{Yn(Sė5N*f#uLCQx[PA (}&]g=@u{bK:3)(_cGf yI亂-K?$xb=$Üd8ˈ98=1 a^t1>"44Pd~z1O]mOjګe|5od'OW{[yy !+0plD)3{Deοzmz\AXX$VzQ}ؙ]65-8et8ߊ6w~lcGKwkp_kgv hۺ#G9-m#W+CYLH@+¾d̠0hʭVnݘS>[=\-a^Z=|c<< 9[u`nJ.ʘyTE.K^S'ƷQ#/Y  =EɆM0nܑѧyq(usH'<ȍ$zY4طs֥e& Q9{4hw> cIorpCH3ީ )CVM_8.F1n?rN41CR~@w)o0EKoxdo8j^[T1i S'mbS| Iۻ65I8R c͛ٶ/sHEEɽnVBV~62)Jõ'͛Wr, e qtjӡh4S_+Pt T';2N]n@\_.[R(QQy)tp-}:.5'0JuZ2p0t \4 ۥ8, ͗q8|\.2_o53ܗ|t ?݇mD-d5#KV3P,~x8Nka%*V {oS962ERYLϷT&:'u0@EQc+NNv/'ymt1K4\oD;:b]g TFzbť9\ Ef$HDn(* G|1,^C~ta0 O82C /ej"k<Lypg<˫Ď(uhv.d!R6P{WBed 5XD{8|{qTHޙEUʕh%XzqAgXZiTfk{z{y'i'ێY J'BiԬ_,B#PL|mi7dEUtSK>~B&]~QI+"KqrPΊ^;W~[l\?L aAs߮Փz\  ~4 _n_ 9m|t# iv O/p>μk#9J2#ғ,H%[TQxa5lSO& -4O{PǸ}M1rܴ{UvRߐb+g'AkafɁu{Mo}-d; 1c``zz1~dȋ'*l=n |oxCX{ׄiy},P=4!5d{NC-9 9'-,Җ*8yUV_OMEeQE"6;b\iaDHnRJ%a:ھ&n> mkF#ͯfsD6$q8$Gq l64567&O?}w+$ $.+V}~LpIK(Tv2Ots>R8}6q+}9o>윷yO;OgNO;OfR 㚩YnUZs ptTTLӧaRwt0ULXgGS2Te*/U+UoU_NTtDV⪾9VtN7!S UKr>U71S+ZU69S0UyU~xkWT}VT?O|«ʭ櫿TW+6rު/^K諿^Uz6z7ErkT2ήr<,VuZ6U.zT? SmLO{TI-}HSn]1S[U|d/S^I};P5W;9;){K{W=SCJB<{:'Sfbtr٫FǤr(3'3k$gAx8oB5`8A4#,H!)c31F8A0c !dpnr @H @Twݳ~(V#ⲓ~yGGx|I`LB0("Dtq؉Y|frJ `a PC2Iœ$/i%)TTNP680, ik:|3tpnԚ pnF"1N<66e3)=lc\Xw 4Z7+uk/}NiHVH@2?OKL\97^xNp(!ґ =*jوͥ'@ Kn]σONHg")*usեbrXv=5KXCZ(A/jK!&4ُLap[2cW.K3Zup^t㪦wPnTkڽέ6WC\Y}a;OpSq5a梍˩b/St.ݩɀVȹGT<4)IH/x2:?)^m_.se-f%$fwYr[WQ=,`{חRLe+Z[:("䠀H. p7Gff1:HMoV JPsVwy岈AC +<9◂G5 4E(%PV`5a:ڝINUVPZ&캄Җr :T41M0mA)+PjZRB$t*ou=wx.wTb.NENQ BD4.._&W핡g{n䵎1-8錇]fh՟*pjK.mKàq!>2+!k֖;j|Pp MtKӴ#.x'׋.pg'AnJG!$&+m1 (s`r\)aA`XܝH;k Abψ@vE杒- >0s-SUC}p|2BINygnDMFewC t']9K/]Z-:L{\8Fyz<]z"I&:uj EB~5N:t4K8wlRθ!"9-,F襵rCafKU"qֽzT%,%k?w!hk_\%?@}\WF~?h.xsI혿gno%ݤ)ari/K%!9O)gQig%֟|]д.!V^[Y% ?5DK(A)rc1 aKXxBCqu|&j7{M뜑fevV>Ը0Kp6bM@?&|oQ_os4hpp;Iz! 
RfU a+d\63Fo{hMZe!76s4 >  $8wgSXdXh >Gu%?=j3ҭpςbG;.wש,D[\*T=B iͼmac?!D]W3\7B۲ hvj$V+%,f#C-OWkc 1$uyI@ 9ٝ0L%tkTWqM4Z4t vl~0"l`AO@"9P`@!eK#yFim[`$1ooK[%qk6*]Ͱ3(5ׇyduS3ЄtlI>R~1X[M.Y8eEM>ҜVPe21" Ix۶[ !.s* ض3v9ʒd]d:B÷B#Jx[N(f,cύx&C"Ȭfa0b²#3ɖIHY뤷 dj}jԬLeoE g|X^ [)Qs;Ȯ|Ʈ=3"nF0nw @_~Z{iS8˖|Odn`TW(۴CѦWcIy׾ OgZm9R&Pzb_)}xL]qB֜J* j#?pv?xi"_`~IC9{{.S ~GJL5ғVUifX5OavN08qV>91r,3@G}hyB,1(E8|S *ƋZ|xd_+'H覽;[d8K?7IRB|=Ǎ[)Ell ;皠hP|Dc'Sb~Ue[d]fU3a(62bh%# I Y HiJƙREI |EPY4Ahs:oJX8t~~KW%3,X]aDY + uQbN)lDD<6e~je<^տVY a1nV׫k[pn)V Z8q-UYz)f )L)nF爺V/[~jq I_`d$ Ξ.RDl /"UV_VXNM++-x}Zˈ|0bxZWJ> {-̅DZH:~/ȲHX#U !5sUIy6M[FV@.EW{ʟt"zST?P7BhɶFPKرVܩֈ&P҈ AF`K"8Z i\_OĠm@o%ga|ZXo_0'bTn(Z'!5eWa}t:TkaGEGB9,p|F+%|c !4{l7*86 E"ǺaCO-.SȻlx3FFRexĽy8f)ׅ;ɸ9nv㋹6PrbG]X>wkP:8ψ,'WTmKB<Gx$o |I /\Fg^& ꄇP%Ҳ&ٱ]2 R3velc/dvl&ڐ4cT9iMbdfv@y"6o6x[Nbn2>IGdu`IǹKޚcGfw)RcbiSr~ٽaw}v7K$#8 E>!R wnl;Zk-&pX~gcQe|Zjp{o" b~RrdN+Kߡ($F=!p#PɒJpBph~OFf`n>wE6fOp5sb'CGhJ}1ˢGBE#74lZ*)M_ ˔#Nꉙ16Gfݬ!A+xAQv)#lG^?ÙǒDC<8ӅOx5D@!݌ ŧqbbAq_-& u0Re]!uOhXӇubU(Y&L;ntO)?sKqq#UՍQ\%m .>^I| }F% 3SoEV=ôD÷rJkDž@~#om$I(-^ ;RHaĜ<G^#uK3wOS2"֤!f0̃Y@Xxƣ]n{pܵ,>wC^LxPN/D ݭ2}nݰ~1v{3"ȑBR,Lń^V:7ks -ƫ]vKVKVVD%)DBkVhR_ rs]m͖<,\ĺR {U%p5isWLQP"7lV ?Q߫bO+0 Yd[eСʋZhmw5`$,zfǾ//A(Sx#pa%^uHy٨"h}F{Ch*jФZ-]?h@dc; 쿠5fj!N,!v<#RL2hl7eӺE˥7,#=؎{7D1/3 -B" m_Ejie $ǎzXf|q_5o(1\iك}kPy<1gEkXǝVl|# vlҫiζ=̖=•7Y +)b;UB7cΝtcQٱ EQ.lmWN/m1wߋ!rع-FYِ$s~w8&Y&hMPo!' }vfFYw[p\3sc5/wQt) h윆n]>rEV}u.өWz g+/?y͎lV=ySsjȆsӾR&KkUʻ¥CAbi;UoD"ljR'T@UYv-yAyP};a|1<):Z&xQF< P`ڝx GdKBJG=0< >&;l1Kb$`G!`Svʵ=]&zoD^0u1㵋V#+q3 ~ WmĬe%JX DؖuW=ƕ~zЫf%>*Vˎ{Xwp j*큆 (=ܦ_րkτg 9WO ̭ʁN2^*\ӦAhT۔#!ZS!wǖ{yq6\c6\Ubg"0 P+V, -Gc25kh3Id%I#*tVҟ'Cϡ[ -Hf$|W} #ȇȑwMYrW/[AIwmU;G,a[7qA vyR+W!m<8 FA/j𦍝뜩 >6H; 3͝>:GK僵V q );`. 꾭s5, zpM0McMʝ Fd'ݢQ^^_Eb{݃ԟcAyV.A[˕NjP>1q18)F.gsx9û%L6 Ⱥ_p(S)P;%uhrLx/ ^ǫMMjUr0h-nUU5{}l\Ík _MfgI8l,H,nkv5zK?,"лFִ=aF Icw_ Hb*+GO G 7a(޵.,\ޒn `+Sij7M\C:yDWn{-N L4 c.jAs \ Fn5gd }6hAk DOXxEwt]5\x[j÷d'yT̋-3ܜ:ڮaƧϑʍN85߷x/ EmBu!N ź78q7~d!n0mo!"}ȖD5߃=Heghoڏkcݘ[)!_o/puMXc)Wj[ڿYKd|ך8fPvST_rՑD~ܘ^s,ӧG\ૣvN7L p3)>Ri mhb}=z:5`x8ar,]/CFTtIjl`̛p 8x汼=P㷋B ͠8_ֹJنD#a,zÚGM1A ;{ڂsnsX+Zz@gF)v;_/K ]Vj)?oGW4_)u͞n>s{}~; G-A;8wH>ckƇk5F:f>ݻY6zGFFx|&I+!Vg$V/a1;d 0^"f_VʞJNWx8w <߱! -6͊p;:;on?p(eDri78!e>u,$D7q{:6Z/ &}83z4(rDvaoz ֣ynDmtzaiOl{ U&]QȮ&SN=ݮ\5?6lx1 nl;9Ͳ}ĻvoRw :m*""s9 -^&0wC۶u!(UR4ljdSdQ;wq[1yxj|Ȳ^t&v=~pQ-l!{IGh}dͮFcmsɐEzn~a EK}ho 25zlp U[lY.mk"VݙNz+R-łcN,ݟ7yHV&Ey:wCh@A9N9T!Kq̼ 2 (uQ@NnhzoշsCPqI?0x֓ c @BLZS{ A-oɷo\|}3wi7Fpo\U-=VA)OcmC OL}K|ZZ4n"x0`<iHf,lVwݹ=DP`ܣQ$ߊ"~vӚ{:ɸ~2OK&E>"fM !|u_JcGa~"6tl߾4O0~c~{(cpr?0J 3eW @#Yd>P^{i.,?þw>2jψiiwiYNJ8]w&H+b?E pK G[(nY~rՁ{cJ×?8FA,iT*{*%rzwdԑȧ8aY3\>w_ͶG>2~&W%d~-h~6s.l2]z϶..{98@7 ^Cz}}K;K}}:O$Elн'긤tYYJ|,K \Bqp:R#6̓;!o`u n쀫X?vՄyfPJri'X¶tκb־O?/j sHPomZPʹT:pSѫ! 
VTEpTk=;bAWpf٧Ry<\^5(W)*7f]G]zFUjC&8xwU\_1cQ$#ҍWHn- l^Wɴ[gg?Ȥc,fct <kOLv%r+!ߤe$ğwåg¢ޝJqBew7<13cm+hŹ_Y^{a]7il;3mzX=#p 02J#@HLrX$LU(di&$e'ggjٓggjHZ/OLL§W_\G_Bg"8?[̘҆?韛g]B~Qπ,FT Ml{3uy}՞O繃S'۞{LD2ǞjJiskk X[<%NiTF?cƒ1{*5 _XUN RQ0JX͓Ch@Ahq!˟vi %F@1/I\mZ_:נrzI5N+G(S1"wAvKһrmB$oj]r>\!aF)i7nO/ܲOmt̠eodNYtPKE.d '4kLHZ j=pisUoN?/:IoXkqM>F* v(*> o OG;n4 E5H?<ԟO~RV# [L&4B1„-s "8::j۠##@jUՑ_ZdE`$#<ywj)dH (H&Z Q2#U(kolIztUZ [$|?"<>*rg$5*۠o990mb<(}o#FnLDk[$]To^xg$C#W ?$N G2|}G u+M@|!6?7)Y;%8[s͏r+ }rr>mLs1MUOCR.A1zXzu83{}cGR}y4N,fdFh ^sx5VX% Z$vxL+p2*ZdLᝪԍ$RqYj-ӵ)N^,H k628Wn>Tm-0)ݺ@FN@ӀTʻvmGͰ}U$+[fۊSahf 3\$M c78c/d U&y2JppyI/T j 0ևI7ۑ7Bt.fBx T`,Z2RoF=ǵA:KX]h5hpco~_TA2 WnKRj~+I4vg"~ =eYe7O+,I{>yɯ Wj%vP5ًzKa"dyo\),=]6 HrD-QLH6V&g[~ׅܝŐ/n2?¨ s{sHG.eʍ[dp씰 P Nv8D& +4i6%q=[^@K8H`Kq*M72f:W|ߜ"{DJ ) ΅6A|p3}93^) >C1:Zz]q)yCb{=y'hcw8gz%RIDHߞrh%άS2m]<  C 7"`e*6h * ~;TF cb[{Ϳ6~56ٖQTvwJ'6ZKMR&.nt$4aCqVNU:oʘU05ѶMi`ILBgjŊ)g*&!TN$GC(ꡊ#Mo2.8ԶAJ:ITS??ZoY ȕB9sc`415t_W*o2`׿rUfe|&,Ů WI%'HlY^qX9>}abx]GHНfP`э'Óg2iav$SО*Lv@T7 ;`(ٿ'b`o):@Cg XU3 ;_Ky*W=}W,//Zo֠bS*ljq%MXb|k: 2z1` j9O]`vgdMv[s=@td%+YJO^=t{tƜU"E6I9~<|cY/#/vNlyQ[/hixXꬬ\܇'2DT؃_$-ԓ%R|C(Xc+I7 W]烇*n6fpZ鷿 < 'fK(ByEO)A3 \gZu6>'wx(O&qq޾k ``r(p/uU?,(^g mUbK|̈́%0RM.G,:kkf/C5m gAk;(G 8ٸ5IBظ6 gJ U`D^DBP6u=Ţn/̌_ TFwb&ϝL7;?? ̀̀M;0$[%1)HjS>t@|SCg pJ{M}{wHZmJ!CS"Y"$F A.`S9JK<:+IzL)C]Vi`QK^X$i/! g^SeYImq AnzIYsx#wh\ajhfz͉/'bgg aZf`aGP]bPhuq`w.N5[2Y&: G%>0wOjU`f0%hH 3Ƚj=9MI+) s7% kU.٩KcwZުŻe9hH(R!@>a:4Hx;  32}P @\E[ZAdYN lL| ]M|~]ɷ<+&H[bKhӨ⧩{%IxsA?4f@r> ̷&hr 9 O-I@6ׅP#;{`w`mN:&U_Up1׬1:BHMESWT֪S"4ZYD') *2TZ=(N)tiL}(MJ1})_*M㽞l#]( ]j,(6v9A/ j5_NOE=TQU1ع|c*q:k!򤍧O^h`gqA(_B?&{l7;~Xd+\T-m+ܪU("lú7sz+.I^n'XK$+K.x/f)mHgY=_?ˁ>B"͂iJt#Jx9I{I X.*Uv=KBA`V,` Lu,d)) eY sB2Gh ḬGd\.߳.%HwN~ٹy";^<R|ĎP]H"6 } t"&0KCfV4PXNUQR!ta"b`Z0\ _3~lvFlo'Egt`m/a sU`yKgTaΉad@M'HYLрťDI&L*[Ȁ^ 5.aR[ܭV&cߒƑC"Tf@25k4,<]>d?Qz55SB 'eN Ï˔jbOy}*sm}F|)Ay:T(Uʨ| zw7lW?:wt>v~ -a'''G j 3tw~!F{) 0N;?"+J)(X\6iw@zbb7 ,Y=Q( :Qa8\P$;!E A@ w6)Pr5)IN/xg\΁C5P`ZE-#'YW d=Q q* ~Э~H{W46o/;N~rۋe=w/;Ù9qbG*O[v:yi XY>Mu=_nNz^k0IRVں \Wd:%O|{K loY[2q h#R8Yi@cr0 IH,+VJ90|?I7-fY[sقT 4 EaFcmMxÎw=Nk@=P9xUe7GVc;3xe\JQM}X!AӺ'r`3y[`AlWDtɯqw(QLƬ_ ưv<14ObVz$O k!zl!:4dda" iX1O3jB%/ʡԮN+\5þ3iarLL(Qvi36-msz5>΅Hi;Gu{4u)>y0ɏo33,ِ fD/m0}3̅X!^ G9[KXÓ<'qW$pr#?E+ ηx@\Y0H1c {ƹ8ҷNY +sg%1dcRN<0< #|Շ{XZ=u(~PjdԮ*`=f+Q{Wl=8jвrRUKDΚU+1""af-Ɇ_"P>4#O@u?s >*Wqo^3{5rUwҚ"bڨoKpyqִ=8fI$/I\d@R[) me,܋DIAާ*9@ 5=Ÿ>{4i)'BYG܏7Ϳ> _(_4ƶ6Ml2=lQe٤W٨J%X[YK(Sf0W+tM.SFq^Fdqm քN#JZ^S= vM!OzSTBBSpݶ%m۶m۶m۶m۶/mۙ_Uqoժػ:aufl(,l@ .1^pk_oaڱ)`LكD4x}L~ :os|fҪ"-<8. 
q%ǡp 3*`RW,,8"2*&fv1`t,}Z$,,3 %CYEiА6KUSAV9tiz(?&Ƥ1s R7/Ǖ:d2hMRYgJwq<u`2tؿݟDgy~0] Eux,e?Tjs=rȪ|7}=$88`)gg!F@ɲ ib-NQ7*DF.GmR^n9.LY 7ȢjgFr mZ ⬤?}(E` zEK)ᛟ )fQ%s bX;(%!zL %պ*|/!bQ,$tj_x 7%p-`B^ff?J%Pl ;fy SBx ֲ%xuˎU⑍#x뭽(O9w ƀ8\JV~ fNP8ttJgsFBB7&n&$5nmd)[{>+^y/HrY)uU73얨D _;#-m"SƵΑÞAvEGrY]Ho8'es!>߫2 ܀6`hPw.ItNzeP-rjḠq-<1RC݈y)tN+3؄N2aϺP>>Gc-yJ5F3Yz GΠTh9T9L | p 0^Y_&Dd}bl"kCnRx̑NrMx4W-E&^ O~[S,dmAc).>ya^D:̥jJO^56RuGavGw$ք?TqF>z66YrrVGB9EyHQxT H-\Pdyfz߼ :X 2W# Ssp"$fi?dqP<ٖcFtRh0!; YR%0\ku=-ne\ed&ֱINֲK}dst<Ǜ*o_jm+%jwh]P+pY\CRV $e; UOW,)͇my%8>۸ϜJw(9d Z 7:l`G6!W:c:41Db鷻B|:2I dkብٳ76PVݐRTAj]ʺ8URX qeE'\G-!CS*i]ʸΥ(d݂y߳g`hiyuJq)&819^LchMorRSzAދKi vhe55WMm - 0`sxЯ X&'+tG p'šo r5\C~>gGe4F2A93mg-Q+A_2&SG.i/J4 QuG=+>ȷKo"A{T/ِ,{qDrY^06!+ɖ]W/ ږ #a؎5㢂U~Wkx̝ng1KKj["@[ !$֧s}HWY3IIamYSMu ֔?UDu@7HziicD$,U\)][{g ça7ʾ,ѡ2U-,?[`,%uze6|l Yw.GtoYB9^] .pwQ9ZHuH) +FM4BHP4ͧO$2R7ڭ=}c76fba+v[zlV#Z~6oZ43 B=  ș$%m居p'2 .]݇c֩O2'l!$ǘ)5 D4OX J,%$D7,oQrs3A͹;4!զN'uۭ'71v4ˤ[s}D2_pU܏;8QZiBGJ$~|>|T2GRx܁kV'ӾBDZ#$6tIئ䇢pZ .g^F0%PwYp]ar0.s6S$^Ȣ[ 1%!Y)\d)v܇-Y54yJ7ӢM ^?PV۹I}o@,Z܉7P4-M:UG4U4 #)Ligzkwl9K3g"Wz*8^wwfp9pt]"_g$*B26~}PB&9O]~cB~~jRWRb/ŀP0N DtDH's|c `:&Y||<&zթ̚J7ߥ g6n@H+G9DCv v2=.zsҿ2cKBjاscփ͹%vntRY^3~n2yf~m_هM[|Z(JO&tq)-o ߭gJi?635^[]X0f.N*,m^GHQIDE!RB Kz#J:ͭ+U fW5EIZp@hB$jWBS߸ pZ`& ˙6r#(!VteGc^@Gr  eJ&:M%f!*{> nb#nں^5PT d}rdI6CZBm/TL6Zm3Y6%/ƸaJ2$P~bbaia 6^qAr[qy\BKejV#"=ń1ifo[jg꬀bꉧ5lzFR '6 J)6Xn*JmD<Acz퍻(/_whZޝȋ$I[['r,ciefZlAR Ƀ3iG4ϖ!ln-]MW $tmzB%&{>Dņ|%7ə! n<)#FGM&adb ^'W#W;FM S}z2_}\Pvv^ΨlF KB6#5sn]tcFvrߺˡ6Z)æWQ(X;`{MCم z-w7ؚ"Q6T>|$Q`!*VT.u VK7DAmtZRCZ&gMD ZJh?G\9![ZubZTy24%H hÓDמC>j^f؆ NݩS-` :Bsw崏II٬)dW2&TXB뙬&lIqK8 (4%w;rP[.v9\/Vv{j()9Go)ŏb ֳr(3;򙸬Yw0"hP֕-N5frB,Ct+e5Rx]GZC^HG; XrZ4RPѡˋhr eyw+Mt7c[x{XH>w:LL"C+qVHحvȠyτl)Ƣi$&E <<ۈ6ʇG;MvDe^Pʏ䒟Pt(5ax.ݠњP)'z^~"mZ,p9s?(!\p'(W$_KϘM~c_/ws{3|k^VZIMH}ϛ:ӤscA(&~"0F7baɤ! Ygmuܚzcn=1u5oZD$rQ:Ԙ^?3F62`PqHP[-Cb: լpH o A oݚ B~XDݎay?[)aėBll<4BYT{54iVЖg#eORfPdlͭenwrKp}t _4w3s &GڵG=&61aM#`?N+t![Q7eǟ̧^ss -u'mQ%!_L2RkOPTTYNCI*dX;]g:\o؝w-m`=Vw_hECM g: v㎤d$OytdʁG#2o|Wx2ߺpN?o藱 P anO84%. l `ͽA?۾8_ٲK:RO 9m*.9Ih 2~M+T%$?g˓.u&<($>露^pŞ Z"jHW(b㤙DF "P7<ҌpGZH8^}輲 u"1r7tSyBUN6.-t8i\ܠ.yxy' %k /pAyƓD(9F> 9f* z|+FQؐ"8]8;~-nރk{7cϮa@_b1?x^+h^:;:Inp[Sk-seEY[j~yQKRmH E \<>+)Z\[@`㇯K 92 xhS21yT+ꚷ2ĐEnIdCjuM`_`ND 0©kkb ۹OlI)ha.d* VBuܣ*ӛ-{oxt=,[jg &~&B/ey>;)Z:+O16a|Bp/&`g[Py>˸?y3`ZTjsbr[v, :v,vŌ; Iˈ[jH}^w܉$o H aU/<+U,cgOSJݏkVV:e ,ɤ#5\ AE D]`Atn䲋H:t/kE7UCՈkם^Y")YKpinڀv [*CSѧL~3Zj07 xkFP^ÖGec׷;+8,-I]K!{'+vm: lR&PPwS[7[WO{fA[&adEU+~6gdFd`?gtD/%Epo]%v١_C"\R4V?|%Ogf>! 
|jf֔ޥ_W""fj,NiŒ3״=#Y'8icũ;&1pֳOR.\T`_^\j tj3i"&N:S:Ѭ?al}O& {4&7ѝVZ`2N/'Hh^hg7\i2G.HU#zx\2*9[IwY`P{º:-Fjn^cH^1uM@"v @1XݵonC Cy%OUO.Φ _l O8M&PFSm!vnݱMPm{1HQYOLy?$9NG_zDqE_?-7%>@B)cx:2ZbU}GWd)x5> ڞV6ct\27]I[]k=o;zs-ʲ٩e9KXj՗17mljrl1ik?(">_NЫhki!"PU9Xg~@^4|<ƭtXȚ"ڜ .Ԫj.F%=n׾,\;r 'hGo4nɐvC ID1zM|K{HpyjI 0hg^Zb^͹:ݵ6x/꓿/[ AcCPyD$xŊdW`( V[XLE\ߎ{+w ;NqD9+}\g m`g,m21 ]B^,wMgOXP[t~v01¯<Ϟot2ukF#*~Եq/=2MCW 5If83ٍ$J/t+i9N7E7ۣ!?NKiRv]l̔1l"ِ@eЖΌ2%]PGYOed NUPrPorSz;3.yzn5TYg҃)TFiEj( "[O ܊xϫB3,):̹j*REC;C!f~)oopk;/[;toF $h{ҸA>?.ؤ4Jv&ހ;?ܔcQE b;5`_qiRaÕ8j_g]œōd2!}ZޙlƅToy| ٢V-n-@ʹxX YɄno4(Z J}G0G*cM\%83_}+r S 'AE<2KJಃ5,Or /9IgR1@a' W΢1.LsYK2)lBt%1KL]?_<*$_4 ׿qGXB6 -d5;AR>],P#H^ӕou0y)"!sȮa,N>R(`G uIQ[Zɐ?plRydK{c~'3S(b4]P>MG;^e; wD@ Kk!ebf7eWO1g I3oEWr^#o%]eGUŗD SǝOK`'=ȕ{s᪄j:Teư}r40J)-D*&eme 5҉+uo+k;4V1.nd`&,!(ϴMYoIGkoءKJ2f}D޵+R9h~)LsSj\RԱq@V^U۞fE@~r"#ρpm_(eLbӯ1R]kmh%Sk ޢ?Or 'f3 ~(vAEK);ĸϰ >J(crODrZT$qB94h*xFq !hq[xm~);e^nsM"Ɬ&c~&EKyq~#*|'c``K\^(yY{pw_cp]1F]6N)gm^֙(L,s 4@]*ؚIzĊyUc90eI$ɉdT7,G||rec0[c9Y(QMW`RXx nف ȬRm uU1ȘJ2ErT)Yk-gKHac溬`ʅ E Ȋ 7fTѷ3[1őj2µV{,ۛ_noЇst;춯Ogfu%~F\0o P{ Qw\QMcN~oj${1@ 1amh=b7y΂. bp](7haUOl S_3%KvהO[$3ZUM>aCqxšE, 9PAY9`@ث\9g06{})A|"\9JE{ŧbNǀ0!G1>K2Ŧ|c]"AbeLNI֤t&<g]x ^L dg܏2ew}ge9fq@.=`lŘ IEȜ|b㜰E>&.L,jAP|xARѽzsN":6OoHHFZ J"f sŵHk ]\BhZ.vD@* O!*dsm5 )@C6*_Q`Ҍd/.TBIAܻ/Y΅e[u %7?GWE>J|7)Bʭ` :׌;-3Vm`N/: R.o/:,.D6/?v>-uwp1EAkPK:ҥ0k]oXwf-/@%b{RT)# \;f`rɆYSJr=zŸ=_= 栺$ zD˻Nz7r5Mo{c,9|z sĐj1@q=?]!vоbM5^_f7A.Eܷ/j$t1Ԕ/_w=a1L |etOBFEGZu >݇+Yc@Ր phU|LSO : fʛC4!k7i)e&Ct_;/bydIέ*!JP_|ꞩ,TiDEVU<ȇ/q?< -*` 8&SυjID]чO'TAA܅X?OvsڬZtHjRelBn"AߑF"=F5ʃDl9YC|IfNyt$EFeg}AP$0YB'i؉m*!I NYPB\WCNL1܁~K]i]AV*=D*jERh*G2爍#(pXbM E3v "TB`Dy `H}?.ZQH.W8z=4dĻo9U`!ˈjBaP }(t:7Ёxj}9n2³ҟK[+4c,Thrc?i v,M|c* pe`nɆ +\"o%&Dd'l%-G4'b%2pRpW9|QP(F5ʨOϊ!TJB  ~%5Fp,6hKzlL2HV M/OyEOC.0Z*(SCIE!'l$r'!L>H7͗Ƀ!$O@5-t!˺P6<(16†^&XV n qb;@6uSpmHS͏R]&*&40`U,Rdgq}xpj~=Gg^/NG]$r"z^yO1{~;)P=)G4jrc]Mii3Џ;GoІ2LHV`ѷM| Ī|ގ!e**#9^CV0EtvBvothf{;3H'G=FLwdQML&GM+cӋeo׵A[KA7Bu[kԧ_y~c׀%1"( hњqs7_T$ #*Du̘&KKa3 n<1AaǰP0REZ L٬)i $QP! ' vbB4FM^56/*rw-eaSYeava[Lę(c'cuDmtk]YF)'"SYOaީ蓵?wPiY ߹#7ki "6XhPg"팂ܩvN58 ,u%.M‘Q*έpAvR|ܛ0Ƥnw[Y n YAJK5ًbސ? X"I;S3bMV K}JhV W){V:9sHM7왷j=YO= e⾽}zq^v90ġOi=`<\Ej1nBZZ) O" ŸbZ%X<=[bZgwvo܃]2DD u `ަnvh:B%WE_dAأcÈ+HSh Ie%PM:+HD{$O &Va::_]=y>i~'m9nӕrNWqڷ4!n ͸nZW}[ w5#gye0E1PdYnTG g5Puk("+qb gp6CuhYG,ky0̔<ط4jlt T_:Bx8 VOC>C?z`=L->FnFc窛0Jyyđ+ӠޣsifFߥr28M(~h?tյ}H7|N^HrxsbX$jd}v4L0 sqbvWgM!·;'՘%W kʎof@ztlH6{ `c-% :5՜>\ S s'|F~;2>znL Ք>ϟNxd tr81z-?x_A7X^)8x)$OӒ_)c Jcx}+2$U=5SmdYH!J+ ftzi⒥câEZ&̙xᾚ;Q/#xi ا;2bYj^JxTNBuI{.3ڨ.f}&YHIQc<0x~h|WqWvmy֮EL[նVxx~xiWa/WZ8Qj5&Ěwߢ`vI\]1·U`7P22곲'0-daLdLIPRt"IgC2H{ckADi4ָZOTZkS-EDQ eP:M#$>7<5.‡U:xY ZU: iL~_1"b!+.ro4$p5nwPjkӬiژͿW}3h Nب6jx4$΄f 0 z܈ڻ׀!&c#-9\&w_RO,E),O?,ag0:vX8@CLU#SgM@$ QsZ%B{T{q^s} H30Jܡtl͉3e8A GyWQ@^\xPUעbn=u.Bt āK=1§KLjƛ|-A֔6FGX8k/ĥfB<9[ߤlnՅ7.ѯ40[~N=O# *p);YU>dAŒ~3M-r!9M0U7{ D=eTס aLa)?hHylFx|oK1VsYT'+cRdR~OMb 91dR˄pQ5;I<Ď^HR#**n^i)w E[TA*&I5RgF\qπvVD \xidǀyÖ)VX0J'EY\}'| LT\j7ح!@QAjF:Ied2LwS kg+"8V#Kަ%V֗]fZ& ]4LWV H<5D稚z `:PPD3[Jz$R*<+?aSfl(]yLPX B؅' 9Y~A9s!gO |pfzQr&0,Sqx=NځEe>G? e:-tv <7fw$tX@ׅoKCl@C$S:$LVY?բ`y)L`0oy*!嘢KaƸC40}7_[[ U?U^YW~zTEnlf gEl#=9e=IJVofaO<]xsû2A~y Ls&Lit.N@D+1%ϒ5XVP>4y$, ޻eVO6Z$&U'viQ "-ߥq9 k? 
ELvSr|x8mk=hzu'n~mQ&b1Aqw6ø0Df.4`9|Kߩ;dL5,戃TT])\kqpr#xD%̉LL᪶69kJv9uFg;L.h7<ɐ֮6شeԭkGdTvT ̀2чh>Vk'LKtxA+,T2ͮ EJ3e%8Ky3#d)[j۵cUC׼Vc1Ӫm]VbuM6ӺT,B>Wm:XєDŽEW$g[ea1y~`gUߘ+X[q4|#TSRKQ/4UT؅v,1Jh]l ⩘sK< NwEE+ ][o2INq7{;87;!mI%ְk=UwD),x:{rK!)XTu0HJyw2%B:޼E|V.j({BR S'X61xz[L-Hu-!AAo8pa3m\]^Y!2]*;XX)sJsA6-IEybRHT_iI2f(F&UB-@ t0XwoX?:S|r?8bM9}w;CP9(ELOY&dbXaH BD塓$FSq'aOP7IKY`nI![2RO]?KGa5BTn/E쥳5{r@`]s 4ܭ^w0 @+J?_ X]6WZ YY ZzR`M{V{:x \H6m-ge`g8v8kSr0%螣pe4^nd`\QKoF^^U/h2 ct5')ҽyAXHXO =,p(|4asu|[9ng`C BHdY= 6Hޕ&x8\x&}߻L'ek(FyzLwTD<[1ņF ѩI :˘UT&jj֢g_AlO E\F $6ލ>\n%N%SdUk[c…9mwb+f’W8WSzΉ#*rTA Xo >+ϱI@TDRSk :RZLZ*Llv[ "?QnNqPǃNHG(RDWNt VCt_vvkmwKN4q(@7b!$a.RLCċ %WN1ts52R-17]{Y'Bq%ލEjDnMU{V 9`~`!U6of[@<6 a!_H.ACūZ21ZlBMx(/sLh:SB=ԦLFxr^wsf7ZvغMtv fviK8s4agRV{*^'p$r GJXsB^ *z;ZJQ݇fwy ikHUب(Ve $z'wcdN\?;-\}ƨ1`!'+q!# F'05$dpheb &GFWC=f}XĞ`46h~t:\q򛤏וEQ+p~m;&c1[P^#i69yS,wkb0ߑǴ7j4?bc5T\^%M[kR [3-YeK6DVxcgDz X4hO?F6Tc(\\>ѩNIԽr)%-Ɍ BKQ"hM̸\!1>a^PIfj (Le T҃ifS&c˼M2Ɨw*+ϟƴ];-H5# l!ϷKJx '*R' 62Iȏ \LrAPɓAT(GH$@bZ&~tT.Btv׍!w7__7O0Ǫgæߐ>6|+dA@>|ȒeGF1GA'~3{Sࢭ2Ǿߏ]Mqqq= 87O%|p 1s0LERb\%Э# Ů!fwrpԀq^E9Qda9YHOҬK6 }$zd~FzIy ~W[c$r*;He͑{i[9,cHR.1'-`$'_^ׇ"B˅.6d`5$bٕ[er<[^H掑1lZM@d͡)]k w^|GtDrGKL_-/0_=Nj]FAF3C]qVH/`x%G{+XWSi0 O fZqڊTg;'h_H닯"n=)Ksʏ hziNdʡ=KaV? x|IJV00"0|R5 -3[)azE=@TT8VZ_u\gzH)Ƈ!0@22{ˍ 4hQ٠b6„8,qD9z߾UkIqe5 (bւUECu l4h΢[qيKNdŁ/+D%b@+pcs5>~tOLOw!54{)D2yPHF Gcjœ`;DgKSLō Fa!\VU:^p/e[1hAtDKH$sY .~$A9Ᵽx30#HH)p=L@s>U@구bIX- v0] qHfD} Tv$^n_Pd@Y#|_y[0T8J3K -<3yA5|YaaUMrxk/`?D0VTV^4p tB&~^ᎱX;, gX+pGgkDM!ۂfu)VbNu݌NOiC-ZrKTإMfc4]zN|85U{R4?QэKR;q 4wuJYK /Ra$$ #E{b@{ȪOx&.Ԁ*ÌKXAdVS=?]ى?ʼ64jHҬ!CvKQځs]:\s88gCe*MwGK|p;U8Q߿<crނWlҴ tmJJfqC+oy#}Sk 2ɦj6Q97wJѱ6J(;/ ctյ~w+ {wSB\s' h~{{/ٗ}&_zi} 9ܣO2-~Z^6Qcwç>͊z2χ!y,6wx0;Rn pd͛՛d;> SN[OY>+x{C oNԘ5]28hwk% y#"b"hΰW&R:\*=n,Je=; /_lP$uGe$BrT8 ˈ~I-K"^_6y! kS9 i%wV zW53B@! {5=rT( 0\Qj]^9`:9/v[McZ ŦcgvHeQĊBѸ=U\`;5 $q/fow.nlW;ͯx]u|Yg5]&f*ROnSs*gq܅?ML~'qq>p[\&)*(763M@Y~.ыT"嚟D~GO(|1DHT\8wq 9"*>e!Y1vˢA 8cl1V'D9y 2 }&BXaKITGE? h$vCCSL~1Iܻ!IDbhOHԁ5rՀtπ DvI}~ l F,\!ʅP\mzJa7a8mG6B`_E\bhP:mGGu7( f-9_|H:@ve<t1 j?P>dWmAoyQ}2 RP8Mk\5ACzAQ" \vJQ==}t"C 5J̑z~7}7pDFs)}rz*qu!q&mՁk4z "GHD{,֠}|P_휼B:9fyFLG˝§dwTe0q?$a4y^U} G^k?&!-[f-Caj6]:+YSƬ&ʜZb9Iәa6Z>XH@VXZzj+e4NR"[]fjiԕiblM"yS}deفy'ar1Q?g@Z*r`+( CcJ 1aMi'R7lcl6gј`j{m4Tۅ6LZ]sO2())2 I™:ğn&n( jjh]6imO!⟬d/\$z؍jҮ|(V<ΒoE?d*-ك.򚺃| e~oXvI/CB9f:.ȦL˙ҹK4~RpP7A4 w N14>XRQ+h' 쒣\j:,;, gB s6dfҚ$ԲLy3C.VUc^fԂ4,FӼQU!Fb5Sq#;Ћ]&i.PӰ薎XUzaVjGi ۱ VmKG×;5X)1mr^1JVڄ<ǸVڛYfJAوiyDD|CH>uG54Jܱ NB$P_bq:Ҝ%Q}E\=T.!Le :תg0ù w G M͟*Cp~vR`~$r'E͔:e65w)vYg`ڜMT47iyĂl4 6yҨ_$N0WB$P8H4B:3MK_t[*50DYfYEH> @JR;af9IB zxR]S$rFF<'u %VmK4y(-y[IF[3 Xj0Ukq .'o'vo:w`]84d R矶p#:s77ǧboF,9'XvyTNsU.< b(tvoFT߉-"XO  VO}OcpFz.0; ׭.3޿!{TF¿ʚ:^`u7rP$ dx'~HCAs:Kyqae1R LnG_-:p7̭ ֆ 3](0*ee馿W60Yu|4*To] Y-urmUν^Qp6b"I^=)2|>йAlw!n{W`S[6y}dl9MB/tf.j`Z{j*=>8@X0O aVR$rwT Ma5M92;3A_П%ghyTHbђ,wJR)ĻUB၁I2erca&QkJ7sVJyH? 
-%Hܗ x""j<4čKT+ԺŜD6bp7B|w" X-''VL (IDYkaO@ȘQ6XNBT)b!E d*2 -Xj̩H {zư-|yQ'@ַ|*03*Μ)x&P]J9﶐,@Я8.6JI'*VUXֵ}Xݻ s%P$-NV@B ]GZiKg3HW''T }NKZ*<NS /b ɭ3UI34=fM٤OXtch CVB1*Ѫ.n1@d0uqɨs(i'7xv/կL5 OƕF WϢҖ:cD-Fu{`'ړ!G _@?Țz4DHG +w~gi̓9o4߾ZdIff[DGIT؇þӎ'/*nOM=EϛS*iu o m-7_ހڤ~=i_L>Lȃ߻ )-dH@Tn?d%V*nk 1~޷n-cƇ>c'c)oadp{ :vqU{j!` "*kv[vȊ0ՀyvUO 2酶`qI/]?BŅ[aUi`=b{Z.5c<=[vӡ}61T (B>gҖ-pNrGNR%.Pʻ TM`Yީ:$j/Z1VHU TGY4,PRN䒦*6S654Y-D"M)"E}H=8WI% 1Ť0q/ؿ^BKCVϪnXq@)\ )Qzi~X]O%W^V qhi]"Ћvkr!"rWduuRs&UR/y6C1(׷~ǖ59r^.z|k*xJQ!$ :y(jhmwEWvf"#,F6nC.ٮpBt7xhh+GBv+o4B\Y2Zޏ麤WkLpv:I.WښcR.@>E]MH~F޷%}#&$z$Q$WR[NVҵN|syn^]2 72hoyy"w u]F-L7oߛ{J4Dzhi d?jj*?М}3b߽r}" p$KeUHz^Po̒Di=ieGAP:NAY}/&/TSӳ5#j'1ʼn`edj/S)l= 6U\ǚF"͘64A's\/`g4|ެ!һ nY<6?`Ӎs'x{g_W.xo ٛڱv$xЎD&SI_D(k4Lm'9R b%C4uy"˞m[,͌T`w{ ebݽI~c 6pG#HmFLė(qĞ$5Ԙ.Uf~쟀vmSmˍ}Ǚ!8.5Sqx}տ=x#n1(@ 8@'I/e뾿b}yI/(2aµ z$ߌ2q=RWkose# p-װ:>$BPDJNȣ+0R6XľgV3! M(14B;Sɓ+@TGnz(x{H١>oHzjb.jST .wiaM `;);&2 Ya}m*z?|tibM`eƕy4:%tΠd"$%>Y @^F[܀QiWp*LFp.F7 & G$ܛ}>B HSM01%p0iЍ !CMi yFa[+nYU0-2ͲVP, ѢDL#n.wE=k0sawEZ#*Y9PUpD w4j؋z~Q3rE!Ud75'(]3r;_4hD"f'[_6sJgwE[#NA?.FI `^+"v\#o<\(QN)kxeOI <Aff5pa`]-\ R2 cguYP5ƫ @ou9zv\%[g\2)=Qˆ3yҦv ` V6P}U2L_a߅} y@%Wv-4'?>-OʠF94D%N~k+2V҃[J*U4VJYz5qġ GT&9n7F^ad0xw0_m;I: hܰ##+hL!ㅥ@k>@Sb9fuoJMF\;xgVUW9&*024#u(h~d@D$IT0PNNpW +VDUr0^zD8CoH3X'6l lKO8`p;5tH 51)BmU GFQQ36nhr%Z{~wE}@0L(cd#O-OP[[S%xKVA{e[tzmGM\YER8gYn6ѫT]fPbnʲɧp d3˜ ߜO5%:-c_'0'>lkQtVmtG2b@f d1V:TRYEO!h8ʖ-*m%E?>qo7 GZA-V;k_Jg2>fMknٲy~K[oDf~sp$WpFOg/˗ϊ\莀ߜ@G蠙&^PHDm,M1ɿ2֫`.|v('Zqآr,>)ۡxK޻Q>lIεNJ-wrOE-=4N_kIz,'J("ݭW0[U_XVw߯:S%qv?'YUv]pSαs`NnvVu=/InZSx168a;Hlą @k5U ."H*@h"TbTEJoH(= oaFv&P{}NY/!w`05 3c^;i(&{T(Do߷3|Fu:sssL,,gb^܏&H WlooH̗\wC/(&sT>Tc}a)~|!g /GjKBn%޾pSSw!o9\ѿA2pۭVʈ#3) k_0?aK 2x$#b($ygf%z.1܅1&2wr7xxYͫ Ju'4")Ru\dLt"Jb/M=q$dyFJ 0~v4ke^霶k6>UJe}_Ps9.^kpHhWQ魊-Wc囫%7 ߰*9h^GmG3g5Oqdž:U O=&d0awFBFQ$k1Qi&õ|0mYq|A+)QGի}cd'# D603x'K9t~}|ylY^+ft^KӒ-O峤7(jQ+R5Q?V|Q&sxQgpݨFꐔ5m^WԿip$o o<q/ΖsӅr>([}?<9Z߼~9k7:sӤfXqBjxÝ9>%C]4 B$$20J*XK߬jC;ݺAig|MטʨM At,OGe=i||\R Msd^]#/6IX3jɘ޾͜I^ARY`(۷L58ߨ.ާ^(`ˏ'rz(cK.O9DƁcG2/+X6}sZP=x}NUG9Zc﷧k#H?+7BdO,z?-kJh7sێ[غӟH831N6:> />Ô46<.HSqe)|Ju/)WR+Nm^oxh Υ(.٩ku|å[eH?S%J޷Rb1M.^ 6ץo4 l4҉a36f}R+,0˪ P_~l1zvLY׵Eӽ]z#脝P_m1+ H 37E4tfB;0 /Hy˟fT;q!O( ߵ>"ٔΓ,ִbVaW٤fLKUNN"o}V{& zIj׋e%<c܄nW.͉Pڐb899e{JWtA:!ϥgEG+ŤV3wT;-R;9<Ͼ"n.ۦ`8tм"w./m:f# KE&72RFs毛oai')%w)ҹȅZԖNunoeѧ]_l:׍M% 1gj9ͺz&/on6݇ȸ|]ca]=0}Llh˵պgCuad=Z`.Ȃ˛[鸚&oLwy|5jaNɟH[f8@cQFWӹnp>yIYr`]˦ҷ8s5t ک.uGDܷtιz؊F&9IHv"t&e?C<8S',/ } ;] exepPж:cah۟}RIR'n-|/epV.sR1e{g<)20^|C^<[!%#{ZN,Њ;pTF%do=;^^zō2~AqM\UFD \ⲡƤ-}5z~Nib7R]ڥ9ߒ]t8Z2cts9]MȎBt+(0n'WyaY1{PDc/3*ӌ'ўe?l|9YCs\˲6:!oZTrޙ/*FmZԖ14d>.3w*ܛJL'߳3"?>qTJWr 9mv#v>VsMTDNO e7ǯ[ TasCsPD;ׂDҎ=n`ΘB"st0Jp춿`eq&2 a ۦ7(%-$Q2~fWqg' RH53]#OY4Ct'ÄǓMUΐiR~T2U[Co$;\ƒzO]sk$&йCKI6zz5I9ѱqeyk-8ܴ@Alwcöͻ+ZF,|۟,F )YOx6]hOuAR̹79/kb@v*ӑ73];,F>2ٸ~LΘ/65'*D@Df9Y-7CӴl_>"igGs_fe+pQm*at.,6wCv)u/]QK>L',drY=MzDq@]M) 7j|fŨЧc/ڡrF"L%Ӫ?$~R,ѿmA41꣣Osn?Art5r{T3-zly_W~~UϿHsVBTU!7J659~بȈcKnyg(O/_'FNv4s-K˳.i"k^i `,\u5+:T,ϵ6}tLc>+iD =CXBI.Ʈ.bZIש-p$үz:aK Ɛk,zz$*UL )~UZLxg٥?D|<G-ŇFxLc5/pAp2|bqge;.,'dqm+~5ꞝєk0$^\}iiju#cȻr6$ #ƚM$5phC]yU3lhi_H}A~һkRœ>i՚Sl4B5E x1/k}|> .dt (^>i, T{ccxӸ֞j>K#܌69oLr>6[Ȩǒ~fe|j?bԇ~ T(nO0?7}cοs*'Wv9 U3 "hYw )n%* oVs}gԕR['zWR8>I_VUT>#qiyxy9)'9a3֬, oH$ {Ь;WIqI[oRЯ37 (%zp}^hE59;sVEROr_v(vlfMx+GHVRHaZ>r :23Ҁ{ÝVV?O(Vh}V*nĮ<6$R@BQ'Sj{Kxœ}.?iPyAI&\'acd&RCPz>L֡KӢnX0%2G,ԄYdϾ.ިM\F/9|<XFsRD|K[ZDY%J1uQT&R] ݶ!ŖWo'ƽj̛A? 
f|֋ܗA~E23[73 p: P/8ـ Ÿ OiG?pOQ4]ϐ8Xoxl?`U:(bJ!W.NjRM t@wNaze2@ ~} !o/6u.̚6 W58@:B=}{W|k0t;r!Q@Se?t,Q*px뉢^DDZ]@f}أ==( ?3 ~܅G(Xl<ȀT'{?_c_+ چUm!~IC{g%c; Łz'JZI~ -@@,>O ؏B Z.HD'  Q{hGm!@Dw'%:=n5!`0KV_p) cO%vg=8wVCI0 L?5?G69G1œ(ĠN{zA Σ+ebYP ^ R?E+959]?EB ` &`a%kEw~%xBC(4+eW}jPwe=<IRW\Ȼ?g_`LQ 86aEtٲ#$5+8G!9<1pDB/IZGB%`HؾD}*w<@טhܴ]R_ /\N }[@D"#b9ٵ8YN4S/tuB"0hu,V)p1(:(˚e%Q ?ДA%eY|>M4egSEwe_rK ]epMÚD.VF 䋏-[v_' 1.p@أ{llc@ ae)VeUh<`)x\}@y q 5-x(H('7 ^@9bI,1,Wߗ{"`+ uC:-@"wZGtξUa5XN5Jh(@(Q0kYH}ɺkpXm^5vUsa!8Vnлh^ s^'d&O-Z_\\(Jz >C>fdq^XX0N*mTm-BHN #aDb䮽P$@Y0S  ߫OVM3Jo)y@NC-IRO^8sX9flU0'!{;Qr`(Բ[@b;pQap^ pY*O ʨji'!E07?y1B4N1dbZ xk,V΂Ś> B,XA9pBb=/Ģ`0G^: V5:*^>8Kl,ͬad_ºB{u}&y`qY *ĒZ3@'iDUݒ}tp?-5x;TtW ?׃ EUpmP! %.I8^e[%.z8bW'تeb {yH2`F"=a{KϜ تbՒk!ڕkL`xʬgkfq%Մ#0 2`paQ0g7^Zn# E . u/Ffؾ\ ]Dbt U^/ N?s+Vk)@$88Lj)@^er6.=Z-ܯ@rԑsxw*h/ zw"0ʃ;cY:^a{5n+N:&>ۅ>h,', vA74 wKd- : %SmT,m~x$vՈ{R d6jK4V2k+z9DL~ wwkcQ*SbMX MHuu3n; #fˇ0;=akqisC:aPu&ܣg?p٫+r9L*RBcNo tV`@s?Ćt;7\,V-@l#=7x I j"]P,n%8 zn/|Morȝaԁ0nj}oZ ?0ʆ~X[QYǴ1FlƁu]_| U]b@! @ @KT''fx%,PLQvu ,-@+:]sppa~ǯ@9}>L ̮3΀~“ɠ8NGl) ؓb:- IJYƅE}D}\|f1ӱkX ')|$%"Q1JM8) 7{3ƒ,G π 򯢴^U'MWLN[zjCw `, @ Zȁv_`֓UeR@ `K{&l˱u@3ClS*^:@ūzSup<~q;˚.q@ff﬛ ^7*CtzԆ%lq ha{|P` oOfa9;sn Sw^Kyg8¿yX԰QakGc {աo| i$fY}lp䗻D@X\ڌ]O^Cl,3ںw>,|}[$k~xS3#X(߲k#~,\`ojKJ*^T`.G%yh/05&+. *[d*Q&E@C-UJ\̺˺w?Z|@D!Uh#ThmhzF=̝&/{99gCv8;r`/va *~J[+Inp8r9Z‹(@#c(PRuFBߓ{< ŒTAJaөxo;j8V+uQtUW偊׊Ÿ Vc(2G^ܧ7ŀ#Eړg-z5B/\nN]U1cࣗA Hޚ@.e3-ђ80-ms%LĦJ^YԷunko2d ?/W]OI M^~*0d5c|>LH~g'#qKf?,ӭ›K M2t9!̩aCcRRaFX%%F0 3O 2; 6?{_8Y,RY$tY4V&!4oq @TkB%k& ?~*s5Ȅ- 艽@yXdVar֚ѩ@dZv*!6%CR0vf,V3ω{ F9ܱy HfO48ō?'w ğv*pDuԊ ƞKл`igdawhZi 2υKlgt PKdRebK+  "bin/WALinuxAgent-9.9.9.9-py2.7.eggUT$@`ux PKdR7tm HandlerManifest.jsonUT$@`ux PKdRp  manifest.xmlUT$@`ux PK% Azure-WALinuxAgent-2b21de5/tests/data/ga/WALinuxAgent-9.9.9.10-no_manifest.zip000066400000000000000000023351671462617747000264500ustar00rootroot00000000000000PK EUbin/UT o:>c;>cux PKcRebK+  "bin/WALinuxAgent-9.9.9.9-py2.7.eggUT $@`o:>cux c1.۶m۶m۶mm۶=\ܺ]gQuy;i`{9?Z@* vtIw_QItnNU"ꢲ9"u ][;?FR*pL{>OipqW;~'1ʁre(:oq!URx0stҖNpU .yAp*ͭBs%F^ ]""PqsO{oXGh{uPO.\wtbܫX^HX :!0_;n4ǹzse8TJѓ'~r"ãan4eKհkAhKgu,&*xA|clF )ߞSw,]g_β!_"c7stwnl+z]HdiNnY'̵[Xb#*({]x?nVL г@eG@#000010000130010J31iA"UKE+93=KQ#/*=I CPI4;"3=D#df%&h ͊+h$g+e%[,:A# qY-͋/WT\VRNL^APbWq0w{lnIV=/*%Mf{%1K^9 UyH}vh2w3c63+hcL"u U!;( m$~ li#HSÁg(5ۺ҂Հ Cq 2-6WeuZaIЅo E6} uO?.3dTK1\5Xy "#2#Ta).&a+Ȼ.@*)N�˿,9J¢t..[lcw4 vAJ&diD|_w I-qSF3>x__Sk}Xy~@-IXx.#QSsxL0;df*S,:>`pFatAs% =="BpkSoUdb=ź%U R7Mb8ߠ g"ME %R P&b83TjeV_6]t!Y;e+O #@IxP*^nAֆJjL~ 8D N[v76#ㅷўu^r@$p)q(GgZe6R23㘫ts(:6xOy/9eñUL'b"IaL({ny*௠T1v:uD%f#Z1j2M[cg"\i#LLLLL=m,X/3qvwз1u3fsgR5TR3Ե23kwS,h LCrg8ϝ/A \LahnjBoigqoDFZ"9QDpQMY&֖$VR}|ҙK(B59meg˿O2batkQ/Yt#*USi%%>6tQhE2Y7ćyk3o5-Ϭ 5xjd+5Z] с|z1*2t εvuo"/O/ڨFl3A9_jhRDJunY<]YD_kL1 9? 
c]_!1$dqɐ+-upQ."WjQR#8!`(@B~ )b@O hzq,S䐭gD5,d?/v*4?Jq\kIQjG6cdcϕjC'^?*Vrvȣ %EZ:3F%*#IWY.a=MN9n3[ӅB~AoƜ!BW_R{ӎ=0!((HMmxTXyɓ92#h>񹯟i t:k AR|0C"uѳax:P8|t=l΁]{KG0>RLQ*ֿRY.s·Vy1(DMƑ60(k6j%h% C1g< FeƝC4޶x^Ja81}šŢv:}МRLJTOd~pn@{C{}IqvTX&;S(PD ӡC4|Q% >#Q#e%U[ĬX3rl N1Jbs1(iL1!5aI-:DGt n}t^3q%5dMӭ-iI wGS+x.;=XY"93ٰX=Tk)JH Low"џU pQ*ZF%S:gLEARF9b9LcR~Eh3MsޟM5XSWU" 3[X&̀9޺禿VQk]Lf@QpXvتpk㾑DD;]\F0L^EZYAx*L)A#!aN[R-?~MEA?Ȫ*{{HNLj:Mn㝜 @JtS,a $;F' X*[Xy{W4-DqI~]| pĜ(oa?P)|]^9#ԀHz Wt``0Jx^ܡtgغ],^F_ @vU-_JXゥbGLACHIfW 2'dua3bd&#2I`V!,dL \, C[%Sa."E8=r"R?0D~ú0S-Cyj;֕Nq::or;a'RV&|j:M:.if% /Aj9)3my-eMv܋v{ @dV%KD1ci"RneB= pFkCV ln_OiHEӓig8_'y&3j[sy܌rp + ē )pfnnFNVYvLNYZol5BӯEGV%uzMO2sp'.oPNO#LEuOGfdaFțd"*|0bt?t,RNJ<=1qq:IJ@"bc۶m۶m۶m۶m߱m[^11sg3wӋ^wwefgVeʺ8Wue. BJ蔓iujhi^:9K}=UJ^pq8;vֆ~\ߠz~Q%.BcCvTiGWuy5?<^nckIOq0LDQ;l &i#m!2xmXz#vW-DdC=$lfI)Ic 9$u*w;X(A?k.YBE-q)I$M$_K=F*k1,碭(P"F-&=NjzPb%\+SMm&E/PzMuDU~}*hj5֓o]I([tjZls$ z†MkPK#LG^p$T T oI[2R= U>5XI' k Q|bզ.\/ib8!GcCQڲCMk?m` nN7/%w)<1li$ /葛8\^RDohVVYV"7u6 !rZ;j+AqҦ&~+sI3Ouz+zj%/n/߬io*vAk9Na&1Mi:qJcx$AvlÄ8~p[s.'.Վ׃omű2y҄#}ME9f @mw9=nȘurlBHlb{(}5S\\k65 ]>G<(sTdz,4$5Ts(OBٴw\1s?O#M ,#e{ƒkէ81Y{1_RR)j'2YO*- XD{)ODCF6֧YLAZ~B2)p ETx[v8hSĒJe*ssa:|Xlcp:1' OyF7JQntʊr݅0xevP1I5MuX+p8[qP=IX?Qu:MrC|?4K1S;TDڭ3 C#cu*OzG@xip]b^=>&wް+~,Ĭ5B*,"v=8vB+6˒ֹ0>rbs~~Q:ODf/ ~n AaF`xԴJ8qr8ŐJa$d{I$J}qIS>|L\\$XԢ=t,#]$)ojEWgfr2t.=aN>򚼕;5‘f-饀jj@3(-jn+|쫙d!nBc7w? 7MX!S{vC"DnRfrfQLйsA)2&7\ '8#oZ PNMfYQN=xx8-ߍh4Ng ;.7ڙԺg_}qg`\T5+U(qcxzc=oR9/4zQb0~V3qx( É36IыN 3.~ ݂ KcZU =+L ùPwtIwho1InCS<[m[ 6ۘ85Iz1jS#;'.r|v[;DA˄v1lú[7 j|Ȏk1̦P~g1&MLih뒃nYW8g]k]y%WH3uǗ 7MdXi6iZ@R{Q@F ܚS˧mk'J'0[6^*&7msUp J;l^;B:RW7)eq)pֆ/xòq!Y`P7|Az@A7[ pB =CK@B .pwGoxe`I.\ُ3L{PTk@}xpC ji!7fJľ, NZ8>MYq'$#:BмBXݭ.\M*ԲamEj"ъ#l(GC3(+> 6T & CeQCq:z0) VLc3jPax -J}TJ!s}ߵi*B'Y {"=AH̰Jב0F"m1ˈ`y ҬSqs]扂lr۶%X+іS2Z!qBDfCZb`NǤPrwmk2"8恞6Qjsg W]~aqP$Sw !\Ӑ2{;WcAWخeMD5#t~gݓmSiqk HI.NçЊ~I=| 2yҸŨ%rO!Nr4Mdo, D^k֑[HҰEI h۸Ev\ɄK'PnJ˜vL- ;{Nj'amSCDuD1rk;0۲ I~ԶnR]DyK s XpeCb2Vf+3ɉ[q px}ZJ{I)3nٻ)Hb舨f_7G-q(uQK;me%;-%IAf%.|/%[etdQ' :t"W/1UpUIXqVHh R.BxdλlhkoEPRTϨf];"5OZ-k&`-Wbp[2b/] 6yS~i:+Om镣AStgtZ( _ |6vMY7.}L ɐ%7X`K6kqr.1{1j iXt5I4 8zMvN=6Zÿ2?٥ex8`aQե7xc$~l:).7 fp q=hpo6 |a#yu OO?cU2eq|&]6n3[7dҏ('%=p?,mBMuoW6f-L헦X ~M`2` po,9WÞޠ^'1Ǒm{X ە$2I,ڢ3Ƕֽ|x>=>M.9Z}$CO'+ S!u^0n`͜Ie$vfYQz!1<-#@vډc%QܑEM.PiTˮL* M|o*Tv3k褋|mְ94@_$S5y͉57&*NlT38Jś.{Ld\ ,z¸:4mA" 1f1^iaYE۩}I@L"h0kɃҔ_s Wo_}''Vk^#jt3zpSܧlj' D;!sIBmyVj6 μ4*+aE"n.b@;W-Y[)X]>+hx^,0)2xE-D,Pr?'Vz&݋Keъ?JlsA/<+i‰t'ވ J SVÅqEșaLs'mO$ #Ug Sg}O%/*VYqO3dI]=ɘ)ZimW!b~]2j Ö;@V^X^T9.I0.W=RԼ1* C1rC'|ؑow2bsFXݛV9%2tClzZeA;Uݴt/Aa Լ4ܜ .&Y*u 8/Ծ߆ \8eq2O%S yE|s۟_l72SP֕4[]_]AP-7Sb.LIi"][g1<ցL.n ~f75MCS=ͣ0:Z]OG+w* tֲL;u|-"i3 ,Y.G[ީv2gsOnxh형=q8"s ķ5y1;v~]`˳ͬI7=2u7S5+ mo},byt V)ʒp[3TKo՜^ٻ<ŻQ*u<ّ1#9JSx3Kޱg"S^$QZ 09'~Ҁ"@KYBZ(Go{*)+uoLCe[jyĪ! <ڙKwub\Չ~TS愺)VOM c_-.7\6{fUl{7t͈f}MLԤToagėVʮn`Yrm_p87opמ'~nHm|. 
koatމy-;X"&/V9[>3Zi+nre;mgLl&[gxC,}835B}ӚX<or$,4r#}cΩgۍYga0+MEd>6)9R_ubX/F+yż×v_V{(;/X{͝_ÅxOG d9C>s1ToDН~ayB?ngCق|a7i ZJ2r,U1 0 cFF 8h6f"|cVeC&8ߏ4¹k+ OɟUI1q[ X9ɘC !vqH=%憖t[mUq*Ikg;9'^u Y@1{_<AHܚd|ww8=gM}0xbsia2'< LΝ2=%#Bײ84\dLeuEW0{=Ъ jr%8]i:%peq³N~->p Y uR꘥Lf0_:n_agTL:lO{ĆJBg[LAvvp4= Pn'SbޮHzi*E)~ݩReљDžvI0r;"U9hʀ0D~tx4bYX!gc-v{P)3Hä:O0 QH K(ԥ*SE92?r LEivQ"H 2ZtD qgCL15yf3EIb''ObawQ6MFwc-Pܲ\̨#FT03ےN<$jA0'r%|^(o/VPٹ23\-D2\ű O?uxBQ[Z{egJUb%%Q>v:(b}$|iģ_+ uվvO 0OSm=׫Vo(*V e4) vGUEDA1uԙq3M*ӚݷuQ60Զuj;fЇknt/R]YD[DF,t ^*EoJ.Ǫ-g~V2(!΄2Ԭ!UH?BBeq4hϭ0LN[d8 &DLf[.NUv"9;XN&Pqz]*33o[E…Kam`\4Po>Uo|$b]z=v *W`Ez3 dK9,ZzGheDX\-6ZL9¥.731Dxj>G .Jpg]}*)p/0FScIv&`kAU* 5 ~]Q0}gМ v~Q-UGQV߼e~)<ڢ=hHx} nOd6w_El}-I8Gi)?65=jbpg8bמXr!Dg_^kW\4;g,r@6yp7='vIs{)av(ir)("ע wǃJqXONN^ &pa{+vR{j{鶹3|ާi㾵b2Zxwz2' eTWxLCy"[=/ CXr 'vsY //St3ՂIsIKB6 ( ‚:[ʴ.'>JZwZӆw"ç˸lt`}\OaB[df:Teȵr=cvjg\OelC8\!E@dA^"%{;v3bvm>7yw(%[#,B B]dv)䕇ړ.MIH %B\.("&j2&b+RU먈2b)IDa R&j"@\ĖTA\VoGO].r =d%r H=oU.ާB4$)6\r'WQ +2BuΊGmP/5j7|Ԩ4U7IjRvex&_RV (/wn& O 7]90̓jJn<{"ãPm4oPs8xL YFe', IC{9ڬSQ%d/[<&wn՝^P4݁9E+(c44F/m(qNǙ.ma^'L ZiC߸24O}7Ȗ`G6#T>0|.7Y8ڄFu33qdj,fJdI Ze1äe4X#k \uGdrO֋3,{ZFʪTțjuPJ4}83]>~o{@܉ iMܘk\nFxq '>DN4/3!壓s.sw#G?UפlQC,eGn?L tm%$#pPcZMԧ'_1FgE b{^_<񱝚rG[ &S[2u#>.ML3R1}A77dU(9%yZAh.=qEkCNy*^鑙$?Yl]01E`~(B0IaQ МPZq,$m\R]Ҿ7 V (/&nJϕ5nt:"]h79nC.B}aðEUq7tረLR v;w}T\J"e"(x`Xx2;y w&U(ucRϜErJDU@4 '؝|l#ɡ3HW{Eq>pKHfEWŦ3D8=X/%rm1̭!S&3hSLqa8O=JKIs-%@@oV\HHXȻy;)\>XЕfGwR`tE"ϱ%'WoWOӦm$d0%{Sڥ7&ʂ!r>ѻCjN'A7\epnR > XB-hKb{YkQ,Z2l-s֎tzw)/PtOK=iʙWR-Ϊ00+㤲@y䢼LQB?=^;Xb&5S5M+DI'ܛ ?6BM[?(Tyyn$̰0GYƅ7$6P(bY_Wud"vrfJ.2TG:;ۮG猕:iTklqHIZ^g¿we3 z3j]5 ]bfƱfъYOܩ@7cLu:oO8Miu |8 l,l]QTb.v. Du+ @א4#+,L;Ƶ_<QGU13@,WO}Yn0QFx㞽U/v0K-|wi(*MqDU+$ ;}^Ӡ:/Uk^46I}3Y1N6wyͶeLBBX(f/]ȅ{]'\{MwPgN9ΥKbRnR$z\!_nKx!-~b9nם? _RVTN=k0.{Ô*>Esy^Ɂ'Vݙ,y9LM@ xLzY]3iEMLmYC00QxccoB0c > !*IRjBQe$JGJ;YͽX<-]"׻l~_-D [AqeN˜!C.dj`wPҩ298Es55\pn^c]ds:ą:OfV̴wGUǪ\LȪU3wmyqxE.}9{kRwƊ<^fy*SeTe{TOucy#1DW]F̊]˿Ն:kZu"Sp'rD1kQ x8Zeh&ZC6 -R>AtsB-vNF#'Z  F=8[Kg~/+:ŌSsѿu63B뫱!#0KZmKOQ! !A'[t&eаM>*=妡~ˬ!##I{($r2(rf+Ey$Ij~[)Ɣ- ~kθDQbqtDv׫'l<F \H#L(RD[T"\i=_tD Gcc`>IT߳;s}P7i*ݾ6X*,7b_a014éɭM 7Rt|Ne,qƃ2_8sy#ZX6Orokz4H%qS͉i1i4)l V涸7*VZxs㏒R%HIj .-f*(*B:`i5](/ 5#kjXOscnʖR#\ QR@#i3wf~36I gێ(}D:(gr+A qyGo3|YSga ZA'- .A!*j:!w!s:A`k (u[*3;@<`EKӖ?d\J"+yIoڂ֭ͽ.vכ-5jkwӲ$zv l] SX|.DP~X2W SYI!ppFHW `,\͢ٹSDdց.;$|W,xsrݬ:BTiL]NJ(#xPb◊g3){͂u$ReK Li)OBgq [,͵YjFlrB<|ͽ>oY7&EEqX vI7947Bo2 { WXI# ٣קoVO5\L$-< U1ISp/,{b8MDQpޗLzUOu /ò3d1tdf[wLßp q%ĻP2$ihk$鰳>Cp2<\B9$M>|U~h/.QGZLp""י*w|u(1{DxNtL"/:ap8"mwfMT:=UUӶ#LqmRFAKH(*YZWK i[wt P#R[`<<%o#*Nq}4Y [ʌ69W4;:50 FL_tYoT1bzS;zL^ #f=P`'sa3x8Zv:^ \&Uy1TW9u_t\G̫M\.&OIMrŔчI ỏn2(t'?paĔZ-1Y$E Q'*^(/>?)y ևIb--u!(-=_fPI.Wv Xij6r.a&i^[cCfO g@9ơGpC[XWxtDaބ~S(ll)^Jw4A;DB6߭,@xNW1Yv½SUݒJ>q$O%lBq\̓ b .28  ȖZWyN]d4!U a2Jzȭ>Җqv74`66{$;el+a6V3?ϸԦ=/[\$et剴Ww ~NS*5cSᄀQNf[YD6#=}zE}+v^k+&TY#H㿍QhO/s`⼅m2xj ffhEN/4m|aP=ϓ1&H(?'Q'䷋[W? >rL mH&)eof1"g@ە05ޠvĥ0Doyo >@=ׯ@ս{ߩ6{Z#J퉙n:Qğe) VM.Ϫk'o]_nuR(؝,A`3øp/(yPA'6i.e|iW^ O ,)n^;ϕ8f`*F/`|G,X(Ү۩2'fpk[Wm,-sh_C}ʹR.V Q+Ccp%& %ף|Ǎo(Qƿ3Eёs{mkx]8t-μ^o6HDmS.'2!Z>JgzȤvx&vS_<rGfՖR6/|v/ *jM)M:Zgݣ"܃2=S2:97O} /RQ B/ /ڥ-a.+r*N֢V.jv>`,уwU[jjlZuUW V3'ەi^pbr)d^e*5PjgegZ6oJ:`H'-e+|S DEak;~C{]x \epW1!Pq\i Keg=O{" QAOoV m#64}* 1\p#P%p9Ds.@X+I9s#4L[;O!:>FN띍mTKWpLHS(Tmw !JToFBD4O@$.=R\KM"kĹG'Tȶ4;ժPkZ!zpVp߸j2_?&5@XM9m=ӵr %CZCPFCp^DL؁eM U,l@Ɩ Ay[hSݙmc~3wq׋_Wo^sG;7/o5_On,@XY xbPN\`$]LgI{iZ>JE!%%&` *ju/9YM?5D).'lKN/V݈Z?*n$M;~ 1cobͤk@46*$& b7B^|nRD6SZVBЃ(@j|m4$ a4NLV3)4kԻ PBc$A>G;k' ( !ae P . 
~L%F$j89'1]L7+ti &]%(l2v xdM,VIOқG\G0dqFq3Bմ-p`.w%h F4\؟K{{Ha]| ~,/"`ng'N𴑅N~7̑EicKF fbsn}N L}9LF3 Bsf]-7Oh&Ύ5X)aHK͚ws"K) _&{/.ѕdC5fI_M{Hmh $|^a3ߵ=|†RIGBNꐥgp,lNFv Uԅ7MTRB&V)kTqe/E>JMbDFq_ziwȜ58}7h6Ng+8`-{z^F~VYV$P- ߽3ϗZϣ=(![aB4~UMxVC e: .g)U"`Pk8s % yXr|#Eٴ:% I|s(1N4@7U#U2`Uck⬋ojg#:V%ʊ!5KV[c|zX TX~j"0py+w܈ pbh|,.iw \>i9KLeHy6!ki->y*ݏ(g9Wiwq! L$Ɋۿ|Xj TL#R ha!Rh@K2-NuSeBȎ^+܎udndՌXOQzmq<-'/' nu3pe NQӶw_Գ:Ozws.%*Z&w毛9/;\ZZ f[<: ͎ld]HZa|h%!V;ez*oHUm~R!lQd4Xȟ?Yrj 8Oߚ}2ǮĊ[ۇ'*,^UCpw_=X3}570MLXІYZKn\G}6xK-鏠Ko P/H69 8{~l` S;w AR(hF񨦶.(R թ](whJILyPWK2H @#tha8s!̟dRkN3M Zwto+^n֟;ߛƯГ1z I)'6MSJYhlu>- URW6F G,A2{adX5>ӫ?EkؽU*߆wE}1Kn'ؤHxO\iJ=|xn?bƌ':٭LMHFeQEF:sp +Ku@>t ٩fX&~LY<{tew``h#߲VnTʤ+:E3EkH^=Bk;ֺ'-T<:AXsr0:pg _$^ɪ*,c-9#rn8 )M\U͎eln@ ea;xM*`Fobs4i0vr%ao:'8מl/i[([ٵ 'j)м5b^a&!~!MENln@+9ᩀ4/]F%,5`R3)!.%jt˓s؈/Mr#Dv/㮅uA1$%j)C t¡BxÁ\y7 {y T3o6"P|0;Dw\q%SIE*T?r9&!zȅ-@(c<^@IBARr vqy"ͣL/bSH ,JxD 3Wj&u_}Lr'R¥ϓ9@^F |ޚl 1hk 5V?p?;aKc&S{ ~h@̧/ܠ2R(MP([O)v_dՙǗ1ᲧB nO9>3`& rݞp?Ğ jmhU|%DQt`3DDZ*=Mv#E8 $`Z?dmq-%PNDL+xk3:)Jtڛ4?cupֻ!?)UfRt^q΀S%G艹SWeH/%+\h%[~Z1}oR2FFyVJHhQ[ZRJ=ա] ȵ"8AsNx( DV {%k ÛY`@ HdK^ h(N,+\exRuGpkH 2ߡܚ^x"l{n@b[5iRV jVOK"(%l)z4!]OjN[[$\}@[ɳ\kBS$󄄘Q24MDUϟ kQXk9L%4z^c" !yM_7h`r'W' 0 t̄"r8R PV87b*|pbHXZP:zHp\0e܁k5Q'{G b}@Frb`w(XW@) *sK%ˤ ɠW!JKwDZ2Y~H3N"?A! f+o4PRb j b wz\=H *}"mU|GZ96dOۋylpNA4 `BuOARUy2Nd˜}&m..X-kmVMIGgs#SpLn2B 1?ҤLIKw6O(-BH|zm( mfN(2zX" wϱn{sl2%]uF@rBM|,aΠQ'5FynQ˴E& ʫ9sD/,4У'.u_:[y;DĮO^aSb#07a&/]p\i%\u̓u5^PV]Oo+ɂ OK#p}{E2<'MH n1%4 4D.l? +FQ1eZ^ Ҷ,DFz=ĹX!ߌ2Ν8G$!e$< g@lsibACd$tңP8%3ct1}:#bic{q: 8"Em-]{hxwϢc89W^1Ji$dV1^=EL qy7L2xg7Z<.YqPoozfgW+^^vjת]wtװ歖Ve=.S_s6a`r% ]b,>n?xx6}[XT7f]iW}܌&J=YfW"f;XB/UxA 4^<)J |;9m?P.@((ňMY4\(!|HCԻ,Qkw#HOrb_ip@ r1ETJ A dYS|{\gDgd_i20bMl1_9C/I u3##} Hc @1e"I$()$A<~CdR&q2#0_B@9 `m$X^3idϿQC1Y~|L@=TFhY ]Zx}T=Z~~\0 G?'?~*'O𗑏 m+3ۨfd :dɃ@E"%߀Qbf'B#Th[_S+]&gop$ꪵ>Lek+Ґ/lJt/cwxe0將 :'jPt,tHn%2en/v~}|]<$M@B t}y=ߟɟ{M= 染e "_":ip lv$a`0Fƒ0 X=^tJj~Rr  CY @ޢ<^6UzN/4{ *$+jS|/Zu _1}P!J'e"D۱b&m uiAń:1@AvH4xgzTejTChBSygɟDZxIpgrr'i& 2JUxGv , T&5]7F 0%4T4{G?~nZk MGbDC,6ijd4nm/puTEZ֚85jg杻ݖyoua'Bܛn۝9q0Of!Pki빊Tp-%H*7z۷Y7v$6SJAA#idFFeVm5j97ti@L\s颦 "IZ AU$A;0v)׃(+{% m7qckU:") BDWVK`labRt/O,(g*)d0P j5GG:PmjśJ^@HX"&VldY,Lk[kEO[jea|1ˁ')W/kn `ItM4j" l ĐI!3c2 womᬵډnXT{ e0:H*Vy|Q9rqM/l=AwX{}yW:0w$11GG_#%Z<.~;>|&gbk5"MxΙ3b'5*jG0`Z=Z2 ?ճd`zzG|HaR4%*1;n30vw#]%+3nDk53wtǽf>–Q;׻%P&ip 7Fد-K5'@Xm? وb[K̉}UMݥtr:-l+LM"r*1ㄩV{_hkBOdsm:ZժZsOb5C6^9C(u]$9xңMvqjl,iVp}.1%2/h}ŝ*ͻ&_j=oKHg1ڢo[-6_MH DDvӫ|B;صTr r6 `>h[Pոmc69پۑ\ 1 /%+?ρ8V9iyΫaĢSY M x<7"ZG!Ϟ&0&-,H98{Hh #d1y/fVrE.I6Y2,jqu5z+W׬ksPC+lv@/9z%;%(M8.(l\ZXZ:,A}K`SBlAY}Ysꢨged5G( b*]UdS'b >-V(BZU(Z]E1W(BMLfx;270?Sf-_[2{ *ms2U:VtIJȂXIc;&>M m8=ۢO-Po(FE.s;i>-v+5%@pf'n(mFyWྙxQ.y(u$Be0߇~jgsHN7gBpxL/^QuGSE9*6 +3Ew]mځiQk cqcL)͜ޖJ Uh S[ΚS1eO?l<~h>䈏2JHCi.FN&"F$ iM A4!hZ- QcAV\m ^.E{3aHJxVp5_Ρ#~.Rg" 8,H6ޣ zg3gx;fh맨):QRϵmX]zdjR< SKk}ty>ކ~ջ0_*y]َ{{ΦH>"sy@sS^c^&zg;s*9(\7O<+ᠿbg#rA{Q'͢=AxGQ\3~94w8ɭw2Q_;p_k^?x*+Hm@sC?X>\$t<`>4 _sq򏧞𨟆ΧcdғL ;}- cG#q4Đ.y)@:ѐEu(|e˜sߔ+8 5S)7.$gnϊ ˚]+3>~NG7[P\n .Z~7y\>w מ:eo? =͒B\}-l|B:NLq5 r1[ve`7Dv5AQ,Uy+l[l+K(AdNZ2CPOt+! 
WGAޥ|(!D#+"bVdQMLlN 0$S~ayCD.h;\v\CލG?_E<#t)} HB?SNo9 h?,]3ҋ0F[oh&r;MyMBCQ{)A];!bJQ3 PY KnM>l >Tgy>K^0_"Mp$uY&ZQN5\pX 1w]1?6CfjD E y8ʽ*bb\- ]Gx nfWJCH};/"6` }뗀[" I`\<32D3 :r%31Ӟ(L`*h 8)~_P:׍j>yF qHsPi4^&VM";[Nd86ϥYEjp3kd EQ w|Us~Ϫt#Slagdq8 LĸP(HZ/܈);Pdk#" *%=2oE~gv̙5e>R{ ѕ똬gl6/xhl4DQ D6ioGȉ0'>`Fps-b@:]!Dc4tvqlD`aY|0x(\Az|8.6qSfIaI|˲N]پamc.v*:zd[lEU7&BƜ&N(M,mGffоCr/2 :r\plD}QܦgGkܦɿٗnܓ>g~MAJpJe1mRl(,(K Cx:;Bd'w$) !~&nmB㉵4L`t8e`yDnl0?c d.=h衆[+O?< J841>n R0ⴟtLy޴t!J8QHPv[h"n;^CxȔ\6''y69k0 UnF},i?I_8G&>HM~ m8;ﱤ~rST$c5" z0h6v5M:؁g,jd{ C ]BEH.AQ Uno(X]8_/Ų(r&rPb2nb 66lHCŝ.QO ,iEsrYbEXwP F+F?B;C/ZesGJ~NkH˕bgkr8 .l_F Mw:fC l:F$B*VwgM2-ţ"_O5v)0,/jM֕s+@ﴳEe]Ժ_U9W;uwcOΝ瀒ے-u\$j,E0gpE'{t! _e*p:Pq`ܾSmH\̷pӰ;|,Y#DZSJڴI.y:JcOz7b0t%k bo>!UtFz)Bm1ƣa0z!T*W v/3WH&2rq߾jmc̜+S2^<9›Dwbpaejej*QFw_f`VTp^1.g+5{L*5}i&-)D`Pl$ Ap%fT'-+Pt/{u@^HAdk8QOi#FiFv"L]uܙfTy%t'Oo S6TOޏah*]^ -s/q8 Vv:JXr#8YߤIdiÓ''7;&^|&U)e)[/UoϋIE$]JeP)>\{> ^`4W;O2jC'~434(̆]V܍̴>Bm7Bb uSjo~6=;w==pM0;xx]nL w!0L h\$a̳ ZmOIJgc 'h3MiZDr$iy*7'zd)n~/AmѣH֟!fD0IPj_H4cq)'/ ]<q2Ӂئ$!gt>b]͇%]!ۻ}iȸ)/|)$AF6 ~!36e+ lUGEl#iɖIuyDf}u K{1IK=q+e;V#C34Aҋ 29F1Ĉ![Sl߳gAlŪwJ84Rve\x4f qSKt%Cp0̺)s9)9=2 ܷX}uf1 q (1}vHPz_SLEУMb/Q 5)+ߚ;y3n7ZM~ 11mN0ONQqMN4s[9ǪQhdܹp^D?(~$ nF8!%5+'NDgc g3ýa.y0Hs;p:2}tP}W^.t@RoBdqiG!_EmU͘ẓOtP9N%a B@jE (R0An#\+ɰx81?F좞AR^tl^k݅Z3 =LN;0Y6E< g"%c߮giNc2+C+ŏ :(Я1K)O mq)Q{ g-V\+Cm3b}C![jz 1=HtԐ d=k r: HkB$o8:VMO7l<ށeF7u ̉O_6& eg٧_>9p9Eqj=)ρ< 12H6,AW_[OZ]GR2drBU7JI+ ?Ec謋 fM _}hP2W% ح-VGB+ QXuk9Zm X^yWkgݼ >ճkϸ ZͶh Z݉6DӀnTH)(, 3յB؆mVʴVQ,˽ނw(FlF=ѢEyȶI,J~DIo) \pTͱ(i 9gdF>L*hQ$qn85]!=Һ [P{dnW[|&^/y,&AhP1̡IAl5ޯ(4.8Z.aǟ9~hS0* 010Jrl̉x9;EKM ̋R} 2?Q VC(m \5ow[n?(~(~t_QB+C`?3{`ΥBT)ᣳ3SY?jy(.‡3Qvvg~u؄n0 U@- CWH:߃._J7uMŝvWc 8n>(cg^;e>]oϹ5g`T ),q8pQ1get'&vBr ^MgicH@oWN.kҰ?@gTҰ!դq 2swk$0QLFTsw1J)l^G< 鏖.6Cյ͑vpHұfpz=C(]*]"u$RХcHr̓AN͐KI;iJ"rFp'MKS4&uIѾ4$鑔#$ G4hb0-d! 
̤วǤČI Om&' xPPN"ofec ''^2<82VYoܞ F’ͭ L+0T+`i<0C^ adP4垫j9##@ԧ';`䋵GggGWo?zC v?zxA ߷?v?BOz'L<;n<56ʋ̜X_u|6'sQZO #fߵ ~">,ICqH#$,k5$0J5+gM" ͚W$b#1B8O|JWoTR4(S}l3 Y -8'{ HNCƜ ]\c0Z|up{Ă Kop`L 8BUbxa_@UԜ#ҋlלZ-ԬNo;A,Jn&SEFJJAsmw8_]-g\qkR],on/?u_$d7u8yDK=o2 <O%BA#Ӱḳoem]} ceË8%kD\qGjƝ`IFulkgͶ/Tzfͪ5Xft*~QƋ#`š^[!kKj: ^dbWuՄZ| KD/0hmI/3/89T0JĘyQR"~]%GTVfյzc%lhFWڣu0E.݌/A2l$mΩR(Vג&?aW]}h0/!4[AAl9P!\MKn,},8 g:i4*[<#rؽIL pU }RO؜-E7K̘@&nwQAL5ĩmQWj0)jn83xs(ρ[}GpzځƝ8=Ty3Hnp2]\z,"_<J Fw>S1\P!5K,9Im.}ޞm* 6\!*w,@ 3Ae+ Ьöm۶m۶m۶m۶o>g5WrrISV(8m6iۉ׳[BT U,' |#r| g%ӌWikGGtIPRMn6K,zLnXEw5W<ѩ-[D9@u{\ZZZ+*B=Z6B!n ʞGރ/R?s0c.| B` 5HOC^p IaIUb-)^r;7fWK,*yUOw'Bb` r&ugθzj($ g_+} WD,#Nc"R:@iZQeU'24T gUtUQ(l8mxR (1 vw!PHdGA BLqޮM7iꔂ'LG|EZC œoGz?ճ{ 1ЧZbkTm*cM]kŠqb-M]rh[YHYT٨ y'I" C$|{7MDsjSEzE/ڝ]}F~voE˷ksW?w$wd~ЪPԎXV5u)C$mE,߬m yTDaYggMC5hj-HD*.ݣKЮ]mLp =/]Ikӡ y=B%e-9Gc껲nIO DrX̐  #}OvW(zO-+SZq`虨d o|z CzE/i1m$Һ3 *w27a w\d6;W{IRi58r$*rTTbk@h%] jdn4^qYlP̎}㰭ۻiȑVdJrY$2yt5BOF/t`WL5XwLS;hnV/ls|QCmT 1 2ЊЀJ >i ,]jQYrI|!$ R}&GIl@ro5i ZFV~ڤR&E&BIW"G@_uAz08_:N+v'?LK嵈:0q2f %`AJn2k83kTbCmi =FTԆ *?W؟{LN)8NlRW&~%ŅwX\]z`FIT?q+ Y(+*q``E qUQ=מ^+zmif?}pLE|9'j; er֏ͮom, ]_ǽp©9%Nc"-eǩ6Sd74."Dp9-][iv̄ S;.BX.n2 0-.%S-GWtېlxGr5>\{^)0lP=j3$b>߱XN+V/xi]Z"r c67.w]Mh&8U"8!y+Ʃ͘ɛrRAV8Y0ISɘN ZOP{&8K(= 2%)ov26-XИiﻙ=l:J[ X!&2=?΄F]01vV)W3kWk]j #E^t^=:6mFw`!?avE#z7 tfHI/=ʴWƋIN|]O+ +PwO\NL\e5[>ⱦ/DV% jɹ>`?ߋ qL3>aKse==◉DHCc`+(6˞˅n ia#}\{ Uc |~G^#~c%IĐp-*R ԌȶS*)QfR$nI.QNG@t(;\P9YK1=4G3_L a [ܸY|a"-RWӯf&/}JdwWTp,:l28ߊnyLH q;JGFڵ'4W&\sG hǐwF(`*ۜ ubt7;8Z_=XZԬf8s7`w VNVZ {;]b?$BT{93}DQ0ft+/jn>QD .\j4nˏ'ethTˍ/Bdd1V:1[cdMh!U>}<~4xAIS!?W b1,~?7%}=J y#̓rlʯg(oѝlw\+QTs78_{Xc%ړI/V[3u]wf:NCF{W+CcP&Yp9OvY%eR0(gNB4E…"`}uOE5,ʍOl[{~lOF˽FcG &d0oOۗ4lG8g=7.>a[נz=Sbyٿֶ@t󣭵0޸)G)QEGA]9C9X_=*9jN_ږ ;DeeU۞ZQ(eBd[ 40WM{7$}%YTiccyߺ" zryjW?wnVXʰF`1LM, em6[}G =[N0sͤ1eTQD͉"1^?_wV9OaHw|Ç5J6v/ZZY KCRpόlXsrȇo\4(roB(hFM X$)@u>24>DF7Q,`,RerHGє1#H)`jijcx iBą K啶&lP "0s a~0C<=^.ѐ16<\z^_r=uE$݈ *ɺ4!]ɜ zÃ,")t%M;S=ݳ㭹Q9 >ID-pLmK#z&6O+SR@^ށClܚ3'/:xK*peֶ+4!!ƪ܌A yR^rw5XkN5B0ӣXAi8$]ZjAR>Q_jq!ZjHMmvlI4ySu"hj1ˀbi"fYH,()a۹pwʱS7XQY[FJtYiRȮtvrG LHk 8M/ƭy∋N8ARJ[©s! CXEKei5&;Y%M]့Ido:'=Whf֬W :_X8NzeP0CgdDgݻ :(OU/R(iKӫ@IAz<*ELkLWOJjY's>H]{$* z{DVĕ\(RvIKʬ/>¦.X(6ߥ/UH L#_6ψg}NJZގ yEZ={ܱ1 9p@lZpQ J=NY`ym][nkÉy 5wTxq$x|= ]kַfgkyN}"wf܏]ʬis![w-5󰦿s eg~sZbؔagR$SťXҔNIטq5ś]hm,n\9wHWܸ+n6'3.[;߷ٻ }G ѽ^0Tu?+P=%s1j4 ݗlwQk (P =lU3X܇afl[\QCI92^/Ew@'Hw X 26VGW/O/1Lݴ칵.KJyM\Gn:qAhi>?*Űضy5p9bk"Ie &{-.x"4hR67XR&m)=G.w-7CJ} ?ve|ӝ<&Oݚyz..ʌXU/<S+Εx$b۟÷"kxV~{ _)z.̬ IڪqƲjmSt >wcn7o7ם'LӉwM㭗MjM=4736{յl'V{եn=c{t̵r얾a]R9-ih(FtLɚFԮAtOEwRL{fwI,UGٞ r=[\VAmMcߌiwnc=gUaCvL7 ]\ :KgB(v/Ƴ̎Nb)I%vsZ,N, Nia޸2/Zуm#Q8_R[,[pG1F%1Ն%`$[@`SN`pDezeLN{Tm 7(,1)7ۛ+NT93'|왨$ o/4'  0(*68#qQ64!c[n.O &yAlWඝ;91dE)BLLv@@!N B#Vݹʙ?,\py&!Oc3& "W/Nžb ?zo$G nldۣp<I6wƮST`;Hj76/n\kwYF'rI0M3"p-Ю\Jw5 aKLR=}NU:=rvq˄hG/ɘi/չ 8Y<(띊1S9!%J'a(;1Q5XG7F󎎰l)P"Jh8_KqV$::&!!XbPXJ p ZA=Sr łƬ/ep˓ܐlSavYXLj}$F[a ifߴj //$߬Lto DAq@M6Gϭ 8J6Ҹ&SIʜ CX Ph9QeF)e0xxxT ?K}߰|˄'_=6P!b! q 2#SHȥ?xycD%R`p^yOm\nvZg)0r,3;D%/@4H~NahXa83\%Zp '#qrH'慮BdE'GӸ-W\ GMWx.2!WɚE ?:dq(WĮZ%WI"8lP!Tw,bdpJgb9VSȮ5\":9dGdmYh@UF8}_6U&m!uE$Հ%]XLBnٷ6Ṯ\:<:i8-@/Ǘ93eP{V*c ]5cGv;֠gNqiP pzwE_3'|'{Ve[:f'&& /]ӯhךg^xQ@%/%kWs=ːv-N8@…w9N!.M'!B IV",i gg3uB:=v|\k[eY(ijzQ?W\+\SAr#`1 冀Sbszbjǔ;F. 
b!Bz{Å:J3g0eQWQodfSAwV:`!ȰQ#E]OHajehrm, O"̲O}> کAqV$^9_6=%OLZzKݮan 3mxp?k Fe_`V)J?H\.$ט <{5,6lU*MUa$xX2./s()@%XR?ga`7m=|P۸ ;ȜoVNs2zmJ7hhYl F>69_IDJ ɾ|7|\%߅x:Wwꍞ=u3.ͧ?G`&8y cV;s?Wu񾡯m]a]yϸi>ej|ɼ/sd\CDO&?񬦶=$jEgHX.G!Z9"@c5)vF $EcYwq&00}(}.~{lF\ }[v?Գg<rm]sV<-LoRI9u|H54xs^y-{HO e>Fd` 9UFi1d\6T2KW['+~y֤C&G.Pq &\'(pJ,!<[\CYGs+؄56%%"x9A r NQ\Be+d*U^ؤj8) ClE\ެnLxOl>T._ω \1Y6[2Vv^~@|nn][$<f7pCgr@'{~"E֭Fcbs`F~P:M_Ω=IU@UUcY~1&6VA 衼| a֝R!S\= qO/փfQЦ0"/(F fp^fν;Y+Ǟqz<<[ZL{BНκrlg/1Y;rGmԄֈ_n~H{%? c{g@Ns LVPoQsϔ߃c5:hxzH ?TYtŝQ[ _\Y,qXrz~LUU8qt>G+"DіCBIJi=)'I4*i=2ùNK3z90";ʏY,1%5~^{Hvkrn1ȰCNȉ/O@uFϾtMU&{h?yS5)BNw*& "a{|2qۣ^)k)͞һm)y/V9JFb;Q(o$a<#>8,i1Arv}RIaS!<-$VCuF^pk"^`_LC~6FP$b3boܩ59Fz>Iˣ̛2$sX1Sޕs~4jMY^DQXA6LWѪgz6($mSE, *[F<.$}秡rJާqP Qe5ڬ`է,ԩ]5b_O4PMڇ|z3#!D?msdyMqߨ<~U֫}k=)Zh7*c,E _L ]}Vq~7c+hqƮFgLvxs 65^k{1&riw0W2R'nz[(WOn~U\d-_qkz 5/_Pןklnȇwwx6X,`:.e8MKu75X788ʱT2&&Wwgnنjy2ytz*GwYDܪVr~G#i>ڏOlR'@dܪ--&d$ ZX"+QAP*H9%>|Iߗfo$$$ǝqcxNU'@3jcպ]0niIfvs1Od\/j=knѶinz\/4$k^Fv<2kR {u9h[ lS o N?[ nA[zT3D@!)~JB %AQl.^\ Ntނ%ʣr"L2I"/ly]ˮh j}׋i[@l@ Oͷ<$RThiI"%J[#( ?0(Ijk^O`Qj9kg3=oyX3\-aОIENJ}}4f_ KǬRyGaOb#bQkpK(im-3ʄFHG*J{g}jI9z['eGW* %kCPok+H%z|<;Ѡ3yي Qe}4':(:o;J]X0Q-l_3yѬX6\+H]Y&Q AV^g˰iK{BWr؏U^%4\ ~zabi 3#RF >d\Ҩ`o0d̞]<1Da4|$ bPb_Kz}lx]nC?r(jȽXA Z8!f4X]e`|r9*^D9*^kOvB߅YOaEX}D 3eUYxU`,vX3t,N c%kxke( Jn{<ۂ{M[M|Sw\?Ib9Q9Z]6LI$,_HHOW^epg;ֲŗaR *>o,l):lc_͚B6fMW#W;{戞6,fRXWGD2(#DȦie^_1v~./Bу^71[ Ϣ `=MPqLIކ ;M""! 1_*{TÁ{%HӴOVQxL 4lgQEePeY#hi}He{őoP e#fQ-.[~ڦc2J|Ms_± n1QE1e$Dt?$1 UdRR.)@ O!8>0ۥzoGg{zyO4!L݁^-< 0B|&oq̀qfRp\eL8'2F47g0f*2Y&nR-kjO^jve00a$y ج~$ؓ9:UG~-"m,ҿjHYzNv}Fd!LL"IdLv 7ܷ,E( s.(2F;EΣA(&ok8_'pĜK_}EIIJ sKK%x& N{Ddt=:qܳtN{RR ̢f)(*54eftYKN]@I5᙮u=z<Ɠ> IyhSsД5 i[}21~J$Yg&ƶǓ'*Fs3FGD(Rdh$)QkT JsuK2t KLL}5n7Yr^^=4\ ]tt6ya;?Z:BOٍgvqQZq((or$Ae+b#T=nw3ofv7ؐBB #FS}O^`3Z߼n77>r~~?kHKݟݱt>YE#vAPGC4qTFB#Akb6rôQZ`Fƅxw2$+]Ր/erQd'4֔/s̑/uro}JS unkx/Z{Jm>`[5i'+բ<*c\  N ؅m z-LCUtMj(6kj,D:o 2`R dFelƘNmRWaZ|2hY۸u_{s!^dA#[E<$n.4+~}V ;SU#(*STy*A(>*wJY/y 15~ "O5LRHEj5ժC=$p818HX -&8BHr4۱jd\֑a WΫ7Űl.͌?;D pwJQۺٍ&2v ^~{TC-VZseRLRt5Ф-Iٷ_[ctʾG,|r7st̶ +:=GQpCChqИK99'QjZWu>x9_q3#t7pCx)k" tZ.d5ک\DyO|^cG~tHq1%)jn922C{#T -q̲(>Ш'u(|~48J}il!;p6ڬmZy`ȦQqPQA6 mN Y"RAl Q n[\ _[H㡀X0 OLu!ț9"#6YCEHvb7xӇ؞i<ڙ^̞#]W c]v #vz--9vЍ~AZ*]M&}d&/8QxnR6%~b2qs% (. iy%ye[?kδ!ũ7x~I0Homv5<-1b5$ :zLz GWG0c[5 j}4lx 4`;Bqx_ :>?nٛt:m[_DT.Xc|>?6"ڈ 1AA(L/3T-|X71'H50:zZjo,PCYgab5osW#xtFJ RX.NxTXAyYq#{"IGc8_dF D~M$w%~ ҏ@r[GcRndBF:g#Bˬ!|؅NG$m iEĄ i1L~~8[c0qs'W&He f+]+e$aeX̚vSbx E [{ M;$/ձ` K&B? rS-B׺ W\Ynn _^ݥj԰Y/v0>qcrYWd[x߲s|$ x_l<혚*㺚lP`[,0yĴr<>_vqX8`Y$cHHge~*Yx~AZf-- k\Q?7f98ٻLD>}-ݜ(^KXR-ikK@MH+>L%t!‡ىYݜv۳'SXʁ}[{`4Qm)4tĢG 嬂 üzUL5ȷP Ofud=42Ԋڛ `p@<{mtoۺ{v|>&o'9>y+!K6[LdNpqn9ҳ||p~%Oׁ*-i4;[fR=xLF/Omԋ6O7 6}kRuDO4WaFaՓۃ k?q4_̽GKϐ?XImO EOwHA;r7La%$RTxn#Q)ެ t4s@1ͬ _=Ȥ09 % Q[Y:%D3CRw2erd%2ģaQl:Of]wA9xC6IRIrI(Bv. 
G.Ōθ(X%`?4%EDKmZq!>oQ#.t&oK8QهŃgPfSs{ӔAvcCVIf9&(޹"")3 n8f(2E= #ՍU(:½CPξhwn??.<Çm(gtTa{4#x 1 1.,]*BYNbK0GfE&z '^HbO9M''7m&6o62{tI143R1yK=^@Qeً13qG'rCLd Gk\"SK?ReS?}%V9sf S) R-+McI"e" ʄ og/P*C :a@> Cp$j\r_ ArC'Ypʹ6 RQ,Ƀ7qEK͂2ԵKE]xzt{MSJq@F-y K•=SEtCQТ?,c^pJ[1:rHd_$ jte)WjY!B)p,aμ$dsH,UF"/v^2([IracciVHEm3WHP`\!EQ[u% ApxH?ܓ:9D[AzX|l߲,vSsVG}jU3DvW^SCm^&{\,.>.[9˷JA(pE)(#S.Y:ka gdϋ g*zHwcri>֥G==V5rX?~nj;_Wwu"TQG݄7TGEyw >  ;\e[\IdKH(1x~6.$@nvϕfw¸m20KȞz~G6ٺc_̭ne&M@'t+bi8ʢ`bM*w!Ku>+W'JvԊQT3+)7&Gsή7` ]o S%t襇}kP+ G>( iu^'rͦYV֢rCყs40t֮ݸnvj5: D&W.rr4'dUY#?ޞGhr 8x9Ԟi0w\fR1k$cU5`VS 2֫e3_Yo:wy9r (Y$Ko@\r]?n NݖfzVs-}1D СɼGQ脨L}&̲9Kd:n~"=c<-B<q> xuKa1iml67nJJ)BN>t;2"#ت_Ԇ7' FԿ Zbܚښڱ^IWL9F<ņ{l"xO8.eʉODA^@xb:*W%"5"w>ߞ59APn$5:®o9o:z`yw7fpfuA P}\<|h>SuLq-"PI[ŗD",SP/ ܠW5`R?h`ġ*n+èK-Mwk{+hqMO/teE+RLCː5JHD1ِ ޲Ȋ^UU2ln.}:=z)Wxt%䞎͂AO=P&- UZ~P%㪹x@nQ񮊪}Y į =x,ERo tg4F #L,YvG;w[3d+l`KmQ_ڣ k-+{m:E5wT1 J_|ⅠQ MjTFnX^KQ}<b3I}dԇZzrvxMa@tT8 ֱoDNMM _W]ZNOi=kdUI3ز(jDn)E"iiX=PWjf5hۤv␊Y"\/EE=Ykjfi7fpk v%Pa"hJImDӘyxlzFϕ%jV0V)ifALGt'n@ؘ.i"[ C,ed nu؃>܊4_::uٲP Lً^#WE*ƩJ3-$<Flj6#3R=/KD-5s' Rˬ7nB|b҈wX}[wV$pq%u(s~H#*=mY$,qJ'F^2ITO2c Z|Ca'ma8=||Xp:HGڰje}.xܣh'i"ӘX/DC@'k@b%8s;:"͋7761ȵT_7[hM}#]oJƠn0H@ |5-b\_yk_y]M\~\_m{]7Uqg*3 <Y׃E( 4(h8=Xb!\!(ÌjICb=,`O}Y8}ݓ{#6Evwf\-Qei!w.ik>FP +G=.,иWmp-'Ŏ#gNJ2ڗ >mE1rWr-\E{F[H]mطR'mj{<%5jQkOq[މ [;`5Ĺ_O#u@<&ߥ^a}m܍ 7 :RzdkZP$:TGR-}AN;VicRGJT-c|_ğgoo|wS2kxtP00$ AX`bfR4Ԇ'G֙C?%$~g'YIW,_^\yD.sPFG CEN%U)%] .$*ͼF^IGL@ L>nq@h$!0"0Ԩ u>jMe5QIJl َB< x p96xO1  qe>VeygK/Itp_^;2Xam #7bSrQl6N2LVcTܘqA=&u&*rZJ*6G ✾kIR5JEQ谦bYN2Uhfua۴5sNBtt $ 뮛LxCfq`oI{Kx`((vok 6f 5h觍Oɖ1F,[LlAz*MH=3R8:8 kڍ*,>dAayCj3NvIo5Տ6x*;$RACRZJ F\DV3j ,u)WW3c؏T '[FSm2UG?tjycT&:F-ABl1{rc8G vY3e\tX]5Fؔ~M!fc~Liw}ͷ5E)BKmSW~_6c k]Z-L(oI1ObӨ ?UmVr[}cKj32`&VN=kyS7ZE]_b2w"Tֺ-z,tvJ1\W-p״l韛W vؖ'ry[BXVMΟH>'",/''4Yn'sFfYER'Ӎ{Yx&Pn'$V4Eݶl,Gc!_>p# g%˿BYD~y τo7Rҍ쬽Kw[ m1 /)͘1`[F\C<&@ӵ_-4PqU QgCsIJvz"vc+?I^lR=sH,Ra&mR9/ד*uZB8ijBcc-.h|m#:P"YTbB]'E]'qsi':)kneϱ+ JpBMι'Atm!3>D0t嫺#ݾ#VU}(#0~NpO(%dR.lcUSL}ஜTCd;dԬkaN;ɩK= x/&5hz\5D=RI+KM>E>Z23Oa02t*™[R"E>2iGd>1b".3|LBa;jh*mBGф>ĪG҄>4-r;klyE4E\M[UI]CTi]Sd\Sd\SȄBO1 ebnKVeb/o;{6}{p?ֽ!bA%&>2a_^.e7ٔr0כMx1כNzǙm(a_s-tFb(1t1XoCPԮ_J&s!wV27_"YW\(됟TL/{{ 1UtjTC<ՇUڣ]U] ʩ[u_Ȫtki5ܐ۹*s%ʺgXF8/"^n.4ʴ(ɯ(]ʑrtՔ]>yMju!{= @0  OF5 4?㦟#p*ӓsƓa#S\wM\U9U*:#CBt{",HM掷B 'bkfd|y;gأQ2nvVBQFx7Nf7j289%3V"SG?GhF,O< B@oM.蘣IkDK*ӐJ[TZ,CaU<0 ٰN HA"=ؠnT%<@մknPW 5^lߧSzUau%NH6Pj9Ws 9F׏Al^v >V EnXSW{RF9]ƺ/bw_h;;[*1nuF{@uZ-ޱU Jd:f;Z9dE"S*h܌nCjXRui=1 Ӊ.nw;+_4}d"{Wb46yһ'}+VK@M, Ɇq&J+Kj=%/jDP^ ; Zf~ޚqjɟ9" bŅi)-Y ~<^T՘k2BզDl"xJcs:c:JTo *ҏr 'P~.`[~;rJu=!IDJ8Vjbc'(㲧tKOވv-#n$"K%o/0o{FA#M9ެ;jPC^NQ1W;Z$ ձŬNeNbxC ƒ:_ԗSu_(h #Vg-úpf7g)I/qon9d(qkƭ -\6d"CX-ԅyesBX̓^h|媟pOw }(F`jBp V{o/OмzOM"cد@E}E=0tD ۰4?NX)Z,?Vu>0L,a)?}wz ǚ ]TOo]fKTtS*˕l yڏAXM09+sgk^"Q:p2f2XWI;5 z3mqJ]o ̏G뼵lVAuQ1G1~0MJw/F5Z`h=vR[l?NwX]+~ f[H6@{࿎L$ &k^#F|tHW|A<͕h_=s4K}7)XvuzϖV{*s _M#$`Ryhێ(y!~1=lGd_#c4&+c?ښo1O iԐ1x,ظqV?5v(rSd_!D/z}ega]򑽈7q ܐM*rѽJs:g(Ysӟ\ SE0r6{UԣǎrDώ,RhQ$6QSr| elfc*F `hD2Cp#3ޙߏrAZ~UjcSMoeU2̑*R}PkAeWn_6/=Oߗh}q(Fohw}/|TgUMdzD]2ɫ/!arn` ]Ȗ{4,}+lXvר=g.o4BnZ7DYqS|V(@GEjL / uan#0i9-`:z0! -6sD ӫؕC 6lAyhK*o-{D-DxߢKu*KxUM/}} aЀx;{yQLFzB1xZQokUt Y$]HIЉl돘|.ҍQHtMj ;ǂw[?fM)N|Xv4Rpw[(qMM_\/AOkѩyĠ:$ZiFrT=,؆\mwX[$R^=U-ERMǫ~ y#% dsXJ9OLGbӧܪ.(D-e+݊}RaHT3[sx2KٔDӧ^0=o"67e1"H !PBSwLl1ҞijA&GM6=xƁļ猆L/$a.CN-2-G~ŃHz`q226ivE>ƅ<~H_1)&TvB&%XMH1%\o6G',zl)a!j"b 53dbOa)WJ \E\< |"@0\Xe<ln=U% O !6+4~rL=ooUH TI$$`%*O04j'CL )MO hK+p*K㕷n9 %K$ΐpĎYD. 
8M'ޞM*-&ssSu N#[p_[mPԂ[mxUsCLGsY@2g4 sl |H"e UA\ق]|\]BVfzi IJK4ECs!eReu{܄D`4D1[ɕ S7PV v5lT9 6}uRnFΔ>XhժKp;ݰMjU a bM〬SrU$՞Tsg7Lk+<}tʯIP$5L2{=}Gr 6LG9$'k\*yYA-i$K8f3Q[O橃.2zqbvi}:o3_?|c?!qw=#z%3 p [ˑH2pgotD^k ~J+%8K8IxDL:6ڠuQ;>JRu҆8,TOCck@ bLů8ErmLɇxرyzdDOJߚ_-33Yӑag9d9T{^ {%-vycھpNVVgP}\׫ 9$`Fٚt~VKWI͙ZCiaCŹ9T^&Dїao[t.=>Br򥇾|Lm4ۼs4zg^<֨9ۼ;+AsQ$N` 2O]g^zW%-6K'!tQ`:I-^imvWCvY-v5i]n#Fu}jԇGI0,Q9Bz, r醭 Y3 {|Ǻ3s{Z~_&wg-gn/bPY"ɫ&dbe~~ǿٝ>61d66^fVNN_K7Gh[.sK5`Tbl ص QI8%ѵL~6piLj؁ZJ MYc_[O| kl5v0kKe4??<aF a-Dd3zsbnr+Kac)1t[^ [q՜OiFAiJ4c#ޣ=N;pA9 ֟8n ݃>vSkF>rz1ߚx6%A*@<wfPȔ|%cԻJyd DazU]6W(..2e"byۻ_Xn.+"]Fcetn[ ak+d@ $js&m$@TB{zz<q4d07ʞrY{Ȕg2Nߜ{(e=T,M֞@$oL ZZЂr" Lo̘Nť]g)"AVޙ^_Z(AM8- kz*jU~iefuE@@#a21rh){ϫ_эx3{&n&}wqoil8'>YJ󅅤lӍqejԪVx9lƒѰr@ e°ɉ @_8iӚjGQ>[%[AXLen4]ET1gz̉4_im/ {QGK'Aj`E2d&N`b.o絔KX/{nЦ'&ֶWL,~,b"ѷ#0xQ/BqwA=R- lqP֎aeM,&y7Y.Sn9Wt&Q*!K>`"l?ʹyyx4ÁJPGC$h{ GFklM Ozs}``sFoudRq .{cTPl&Q戊22$s M@tI$a"vjHL5iL/* ԆCeyRM/RM]OrpNdㆬ^3-{g'O^(ZC`ٸa`)< XĂ#B@Xr~Y屲fX a/}FT Irm hDsb|z$6ZO\>:*(VQ(yJjڙBp$eshGe6*.6Ok*|)adVtK~+UZ9_aumVV6c+UDUԧ[@w˨k/CnxRx(9q&MϏӀ\i[lGP8u~t v^jʑq8\I^e̓U f#ȷ⣥oY҇_yY; lM׶mwm۶m۶m۶m۶l:^eEeEfe,1jmMlI>hxpYUejj??؇t0JR.[b߲MT2$H_ y|]5^.s㟨I+hBglR >|Hk0ldJcIίS7\j)㒟=ëS} z( #U  lrO[^[G)5smY+jCZ]Sɥ1ksuMdQDsSh%pw3NRR`X*=m>^L9iїYnZx8AW ~F"g>g(=`{Sw-CFe5ynZSԐ x8274:hs:MA['5ݿ+&+P%6yc[YOގC Ҁ-/i1E}r^(fV s,3/=; 7AwQzإ&8@ bdnb`ldjaRF+W/Оo]T2~m>TDYxX6GL^A0ahg3,pї$; m<#I|j t U10Qo)Ѻa?Ruܐd{Mz֡o{oEΟzӎHA"`GNQ0oPZnhb3UZ̶AN@J@% ")}̀6be7m#mvD*<&DLʕۥ y_Ct6ΎxUu? t5+^EߪB;Kwi52NLǟ̅KXA!lPHqQwVmʰ"uD8!K2DEP YRB 9 (XzTL7>6+ogYHn2vRXN@sM2i?L:DE;fOe 'D]!2׺Bw&5TJyhil;C̆7Rx)\E'Y0d}- 96-,.%p-t>_?J ;/s^r2CN8~%g0!$nG0w8摌f Vp!'j N,83_ɶ+}&p#6ohsUTz@[U^ӍGeeHdu3^Q ˓-ENOGW"hZQʇQ)Bgn&-}T#tlfFj؃ Ru^..ZF_L0ɘ(UC$-B^p!0J(tј_x2X.0IIנ iU>:`%5wư3%R8d' 689W,ڬ\ UW>oMfT,"Nxj`erV` Rл.u7XL;284 fJ-=MO~_},V̷ #%rYGpf_-zM p7]ڽ/Y){9gHzio9jva=\<> Wg όf9HAqe#.nYyfyqTouq5ySʻ4/|~ ;Dsv gI״ P̨p1:,ٲD&ޤؔuݨ`2@+Q&bHq>S~j: Ys,b>Ֆ",Ne9&$n)ld lϲig:{qώ>Pb\AE=]dS60|B+Қ{*Q,Z@j τȡyfM ,K?uU)-n (CEI}WZޞk.Be+b CMB정%ʃDJWG ;PV]gMԿ߼tr1=Zr3nzڷI(볔9ͧh%S3# fXGFXV# xV]JngsϟWx7=3А2MFAjQrWH'8OK2q:b qP~$HR* ?Qhlz\udg &+hUxfɫã8Y"(* Ryx+|L0kiYW DB0x'5{y(lP#A^doH{g{[nͣi3ܢ@q0::'bL^ ʘӰhy=m ca6|7|/%KADs* =fEͭ} 9[UOAmj0A~\9j!M o! 
%*1!YN3~ήG^͖*GH>%Ɲo:ެy[mReUbpڀ]t?9.?oh;﷡DvyL8<DHj8m#P18'`n0$:zpU Hi/>NL)=y:%f w+b͡6Q}mB^5\0R8H`fU=89:"aD-)"~mI\SI=,lp&Fdt'!Ǝ[ܧ4r~wM+7sRnLR?w_[n.>ócpGw{obqah7ij3;#p;}4_@5̓i0Q;@0'HKꁼ<ұz[ <)#&R],x߆/u>`P]˭xWq3Wqi,ӉHtAd%fUҠlSeܩm.A|"k'B%)h y|u:4|Keg M/F^&&|`T>vM' #jN±ӎ0ĸD[ AU>x;B6I 7*TJa^RDDT~Z^Giwa7^xECD5RÎ5yN ՎtXeAI7Y-^vuwjOX{TJA4W- Jds$e/ϵjFиxrB<pqv8N)*\3LH bPZ6 f͕*4)mE Bp=Ѩ앓TMsUbCj+KFnb =C`bHݏ%#ʉЕ2Z$[ũc40;(U[ |&uZ*#(ԑTNv,[@prlg#I2!1^fi6#f"϶?5A3jYR2wHX/w}fG|mRl6IF{kj :bqta錳Uj |2 +'+QiµhUs)[`{L{%: n.a6kd3zULK`SU7)0F7mAoL.zEkdŢD I6K:ŪlF$-uad H%/Nd yT'W,%Јt80Q:&(Tw “˛ 6*"i$5#,T)#@ l2*CO,PԹA1:U| tTo]Miy(c1)XuY(`3ݝmYeyIء'_PӸ=|27]TXݿ#G>:#Qh'mQoLj5.C(_֧b K7+Pw;0b2ADE[\r].gwt|P2}ڙ7iړ-cܿ_ }3lɊj] ^ z x2\P3Tu*hjي1NqS1򳝠(5(TKJj ßCX ׿3At47!Ma`xҁNŠE6bsS_\Gn :S [QS>n*+V .ZMAJ{|"w ?̐Ő=U^5.x(Ms \%qjBCc5vRݧe1 5] }6Sl9ƻteSHrʦd _ +#%n99߁P^D{X*t3R0۝ }q4흨'řvBwfRy&;ǡ-LƙdS[ԊfX-/M\&(Ct@j Odޟu>G9zVa9Һ'<$UY*4W1gtf//B۲2<:f tS8~+]$^(^oL l°ZAy䓹j3p|"bR)XC"K>PI *; 9Pܫ'hݶc^*ur}$:oAQ[ /ci9mWMǡFA:T7jP&| {NY(r)d;qJKog׻a!z_c)%?z4ޓba\WHC)}z6 )4 ,P9}X[eH<0LЧ䴣n= H #F}fes!z܃dǡZf:^m* (y }EVx㿦s2`FK|?y*JUGoѠO;A%WXA'H?Ytv`_,N`|'6bBDD,*ceD*CKb';(@I׉=X󸼽""~^ϣhBJ% #$E|Lڧ)ᘢM0k[3ȋ.ﵜFFeDp}~0\ZIF󼬟^x?JYOa)mE ,ږWP"ar{9hݴe|<+Qo娫1!f\u`"3}_b};V+^F7%8tfn&HmkLRZsrsR pR3bx~/kq֮bJ?h܂Yڳ8K ,%t˪%_āЎI|p| [b:Ÿx!e|H׆HhRV%4!nIx՛ Kj<&?CIݤN`ٔJro^ÔPh?kd/Inԣ5e#Yw y|O")U'j<깆'E;_Ldg)dOA "PH Iqlqo )']Uen."~aEegޝ2] \/^PR7aelV奷S6H: .$'"2rn>ڊ> 7`by\eyy\ez5m2`*2bj0eK# zmR# u!q%s#ѠB BUOfnBOog 6`Q`nry/-O`Q..;D\;u`.2=]kduizUt@ N:^kkNZ^W'G=D|Jf5?б]^M,./ 7-q~)};[HroPKxF_lZ:T|Jţw߂n}߂0:- cĕyRdy^ZP< uO_N^u>: @Ae]*4dk]I/þ7֞ W?\?ʹ!^/Z1H3au`YLQ+f By'߱ĠI" Oy[-:MhS3_`b+.yTH>=[wO!%Kgφ%Pk?V]<~Q2̴arCQ#̽lmzNʴGA#.:.VR֡*Iu$[h;Y: l91Dx8ҡϧ~X!7ZdMͩ[1X/[1=(Sp^1KVs/B5$k ҝr64ú.zL*ϑiJ1l1WM-E:[~96̕$'՞6Eq" hac%9溄}e ZܐM- &Ne~r$`RxH#FU| Qwd}I^ppnYn2p$@w!U`+a@#D*CLK GO*,I#}}Jxh էqd2\`&qh|>pC},/<4SRޑ&>?:d$uu,Avq^ٸqgKBIjV^,э椝^*z+_9N`$_}-<U>Tz~A g3@ǧzCnxն)d-qCzhK&p5'/⌮ݵ 0V@  QLklϰB:b2 z_D'a)5?(~"Lkl-GѓX 6$b\1ȫ: SJ|L'S2a._:-gH?Vf=!kIh!ljB&i!D4XL>W3.l D4U=y(C1޵540@2](s DN2%'xmG+_.=s*=[UN65|YQܹVi !ˈaQ-3V :JkYp^E\pI .T/b 'Jg@( ߘZS|,@_J#q'?u50N, DF 2B+UI,ٌ0CaWG{72"pdZX|q#Sn,L: SףSnCƟ#5j~W6Q!UU(^̻ 3f=Q~즶BZ!!XTBHλ=yn y>ТҘ 9NU3#p.Hk 9ʭ}ȋ&+wrVRcExS~ ޜ\t@ķ2-1!>ǔ͏PFf8E0tG61аcL["G"Xe X AIbCb̼tU&c6a7ZG#~F&i!"/>mĩcS)ɒ#-G5{M՚~\X\aƪۮ ]%/ ['PX˯hjyn'Zu9;NNU0[Z~k2X :<<'u.h'vKD/XSO~mHVx|ւtP"? ώ9 q0 ܦ)iB,chuE- 0$b#?$LX] %˶+= BcG^XU'~9 h:kF;Z,پa2׺*>$燽Aκe{KM|ܙ| ? _m ^ gAnutֈwbE5 ,,g\6Xx:UMn.24|? ]g9Wnss,?DqK:IORcCg9n J1MJcFg'qKN# =eW-ygmp4׉$@2ԯS&fI&!RKfeWyh6?{2Q?.!_gM^3JrpIň`XsHg(̩[|,|h=~>ͯs'jT{5R=\D&Z.0Hڶƺ#{nu[u4N2ӑ/_Q(o9)stG[㖹) RN/>cinuID.(x;0ě j):\*&sC>sP~:ĬnaUb\.%8nkZs-Ł3v6j̮&C;1HHk < GTi"+P<֦:rb&FEʠE??IAj9՜ԙ D9 <,8;2=d-o<8u {(l|}~̸mA<'|fF]- ٩IЊJbg]7mux62L؏'RˬdY;XK"u#>$ 1+'#OTIХZi$8 Z ӣKM~KjAmxѽoXAe`|Z;)+qĿ>I--l !4l ӺZXIqyBH%1z/򮖌܋ۋ#n"8N W "jW`=r9ndbD΢]wYh.\D{B ib0q6Ws(Ŧ.@DƏoQ.(a8MDXCgs_qǑRSO+F\onS%Ɉ2,a 4~PnΚ$MҬd,Vu[p_ W-fu"bk$ ˪v5<ֻaOVB"F>ܬwC\qD7xx KѼ o-գ#{QCwm .Ѡݵ>-I~$Y#Ώ3O yg/TAg6؀:lXt%X%lt-74.hpLkPMMf#_Ӵ<;$ k-Yc lc2h#6 '#8ذ f^t^n1vfs urs;hkB!+x}a' }OpQHp,^k=:&2UXyȢ`}8(UOy^h@ c |YJgF4u"_vmm Y$O]cX߿H` N#a<34!d̝"Աd81;MR@)UӪ@VHdjW`<>Pڧ.CE>$] `B1Zr\| >+oxK 9#)#?MiًdKO?@fě6'6p`ɗ&Yx7ue(OPdyHWMjS^= W/ǿ&~vٻ&&º&ܒnЏ>Sn6pkI^IŴLx2M<^? 
]yay2B3;,9O9d%wYG?-x(,T$w/6 OT|r`jXKU˽<dX)XYgsNn/왪jswFFAwqw(6}|/R9Kk.#w+].W:ywT>VӡI4K[T4{^W^_t_VwWtntZ?)JFy1\lGPo/ʉGjA8J#F,Z1_VPZ [/ATHs{ձXǵ+2DzOOvm>}߁e򁐞 zbsڸ;to3݂<ޝ֋sb*c,흡 G &S-TjE =-DmH|MJ"jf&ch m*b?N)DZq^d X{Hx*\TSɷ*}Tse ʤon]$2#wۼM;n?&~]\^x~@X*ž+$e 9Yl]'_ШU U)M!%UY4uapW]nٚlp=q!~~8aw$qjMKADd]W(RҊ.*)nWBui9.@6:zM<{ D+:= 2F~)a1&,-n75Ԛnvhg"a{ {;7c$7wa`ВMՕAeoA38--ƀvsxB0ũ3#˳(rc݊H *Lsq“\KA "hƼGGD$qik1#5U,VљgҖU561C53@LTZ_hڌL+ ^~ʹ UMa#47Z-qn,i*Ne$q0=lh:kx|3P Ogr඲EjI\ieVoۄ;VvdFZצ(2bYp͛(:}~>N'pI+S_N'2Idw{C3GʇF06kiA۰h.HܳeU(In40,17 5F'f-iUHn *~f>nryڧ 4HPZl\_xy3^Җkdø}!@rh^̛zb4b>/)!]{@S4Ө\\d5x]OXV{1)MmwG*HV_W+Xk?+` _>߉M\w](2Onve;v:Y씴CoBc2ioUuQ 7 < §H%Nj8|weXiȿpgO %e ?Eq\ I.=ll,tISxo*(.Q]!ԡv@eYPk^ƃl*g 2SAXMHrw>wVD*/7E>1-׉eIؕ>z"B%u=g#Fq ^iS9ʼ{R #lc\iַfS=LZ 3HMɛTBurfY_Gw<^̕`~`RsЇEЉ7b(z ЋuTڪ|vOy~!e1 +[3'x:vg^[`Y#5ǫwEލʸ EEn"k2B")/*{^ T;}6%R Iu4L||bSSek7fk;!4b/y*cB{' CO{WOqXU)ȒmnRP&eQ!-pQEudxOn x,:z7!r>h{gZ?;w6eqkB&,hqniF>u9RF? h[:o$cuR%ؿ2TwSkw=Vf|ğRehE0DosBLtG]:) K:N~ 1d ?]O 9!OK8srO` &Ǜ#5)ʜXRoJϢg9xhkfn#y {gb;Ky7 L*-(AaCD\km|ǎ'/UZkڽ8H|o3|1ؕq<-]2o=:PR!v͟ǃ[:U߀sȸW\&VL|1WYT=MNYCݨ41Vz.1`~CN%+M>WR|С fJʩ#ܹu.xɆYpXFdIӬjc=7B)SU '#>У5TꦤbmqhI #m;$fLΥ5ljVD}$!wyeN)bdNV5ђrVs'#YHQ4:u^`;z C7^*zfYmeމΌ|LiJQ6A0N~6ei!Rvu"1ITx4w}ֺx!#-*wA,rѐMm(wlXeQ 삹#%MAm0SmރBe$p)rs <0#Ndh'ي2lF)礍|ZE'/2=,rھnaXܹ,8T|GnQXߣ2 Fcѐ}SlS~|^RM1ezv.MmX66RAX_~o`kQ@%9%ZIצtdظ׊d?аܲ"r^zm3Z-Rh{cPJ9& Lf"6.߄ء=Οf~EF z 6oҡX^2y]wyBS׳iwZKQy']B)YKRA=+Xʅ AL}Y VMfffۚ.o`1iG?ez˨i oFLw" ?AEhTO'_ȿy+: g8p;0;J5(IgeHG~#pp`|w[Mg"Q%CcǔQ6U}c@]<8X?I|mbR^xj;RJܴdZb6#r[uۚku ^{MLqQŻvy'^ |+Zímx+&;$1ZBe6YpB>/MND.]3ߗ iFBaKq( HG\]&~aob\?=3' f$ xK#N~{ h)❐ԏl=Eq8L;OH?!!>BѰ;y{@,!\Hv !64䯖K$,/t!Fx;zr8[\F/2>g"sbY~PinWNa[@zwT j/F\-P\ƺo#8H.[#֧nn܃vvH#[لXads1 K7 ף jWʴ"cvߙ³YS)YWm4eJ.'y$6C$ɚѦulW,n|chgn@ܦ[J`дҹc#sXx.ؼ VA5.7cC.D9q݆eX .5;*TM̡p bV^](sX~([fd M-+-b/131u5=fDe ڤ[pѷO5Z{>g>9g}RڤݤQNndڝDb0.j t(uJ"ump *woiSv %U8M4ǐ0P"r/fuրͰoPb}O};B*H!ȡN|lZf LI6phgG-x`Y->10q&mL+AL?&X|(FhH\K(?0/LZY\H UlW,R^ud# .;r'ch&wDݏEC LJɋ8:CH*DzqUvgUP%ڋVl6 # ֏qGpTF͋nח¢0 iSq Dz=0B{S aLP@XCc13d "FRډR,T^WRi>9?+"TL_!X;>/aIٚ4MNtչ/HX-cpړՕشZqn(B[VGd"d4*-mKCźU5!Cx7 ;괛ϺLs.[IkIf"%n]cD51Hx d e) hRӢ|=z[!z3W] Ts 1~o[ոFmLΘGTtU[%#L4^`t3‹" *nG/GP7y$nF X0SY7DЈG͛SRܚdeaŗG0YiD2׿!j@vJ/bQK . mHr(c>Z1NaPjw8;K!۟TCMP/E#҂ YPQ?Vkփy*=VҹIO (DS,i0{!IWδ NeoWU\пnrDyzM|v^V[jT%=sTF>DTs˽bxcDj6A8F(WLQЙkj\9r oiSU8RZREEX P_> `nH vݴse韖D{X2ss 8l! t4p]1c9n= gX}ywM@z8 <@fsd\0,1U4h WviO YÄL3('b#HVm^dH6Q _ @v_X Rh탴҃)D`.겜n IR%)Dͧ/`* uMP'ѯƠK"F߷)?SUuM\<7($%:ë ͱLZLJ\\_$Te# + aAETDdpk(Sg0Aն½}0h$+gH?G]l*.DUq(LU*dG Ix|2$j%@:Sv!QqX=чV7l 1[>%l;4Po!UG^ PZ;mv "x,dr;S}y;@$G4(c1Ø+U久dwQD~ɗӘ!!y mRaLn_I?cbFFֆ ;[-M7kN+˨W}n}t#ja's ,f9hI\F%J? 
L5X1$,)< pSO"1ȟ`B3ڹMROJJ߯&S9NݥPEA Ǜ20NTtgۈUݰ4默n.aʅXO]g晬|"q$XAIҙY8%lnfQxokͯ E6aM ~6(LQ$:\)1BkU qc$LҺGuF j)^Ǒv/ᯞ>k7BR@ 6ڱe{]L{8Q֍h\7 pI 6Ml]鲥II;DN#<؈}4?‚x=jCR7P1FS NI|){w-ムԫK5wh`y됱bCcz1p05-{t1y=tb[JvEf0ʏ"Gct0cö+Ẅp ehڄYgܳ/36P|Z לk n'7$3,hb0V3AH,Eu\w|Gw/vIǠ yK73}-ӞF^;W ~C7#q`ő f> iq"3%WDXU_D}fjH91Dq>D:~t `(yXjoH=%Ms۹hmWXJ)O8m6&zQVšޣFU n}Cڋ]f >[vs=(qx@AbGMaY#@uϷˢݤb?ݏ5p0Yefy Ydn\{Ya}KVCZټ2JWfT^.Dة,61p0]>WY[RJU>JQwXE`jH'ؽB|X*!zLWCȃ\Wp&2_10ߏMZ 78800NaLXj@5撴(KS4N4/o` Έ_3tȔQt ww0pDSz_)G4"QIH|*P[e7>[p6"cy$Y^(90E2.qz锻$I +0'JΜ6ž#?fVH*<iݽBM{.1% [р>d&sFj3hm9b|Bu<^hs ^1at있G([S>HRǪ]xc#<" ]4 i ua6ߦO.YFH#NBXsBHr,(|NOo}j7ž#lPOT Aʲ x]i`_Y=[^ߋɊ|e) zTyJ"yF!07IscPUW:p`mĀh07@f ypbGTA+*?fsrdbYN 4\:3~zf$dj֌ko1|UEJȎ<4,<6QfM^ aW@䧶u _O 5ь*=t]p*ĺء"i5^:(lLc7ˇY̩qRg(-O_kKny/[>O*zu/[}Y?Y:|o}3o)/Qp֖CéﲔڐZL=b4/N<@"a@:[51b epJy`*}^.0.8XK{(غA{淃a~᫜o_f~t?FTwo`noSFl'L!JPt5_(ZX5L1\Sz)ƾPӵz0uNRm}8=c8l&l[ƲA_P֌&ѤE0Nk&*_o21 &ԣGi0MM.Ycc vK6%ޛVx6T ͺ 6\r_"v7Ky|3oDoӋIK,( VA5ӯbPwSKsyvpsG@÷ %O֚iS8]T n:eU2pA A–KNbGJ3EW^X$ ^: \0M {9YN71Lix"qTYs%QDӄFp/еŨOp'ylٝ t_Nĉ`I')ݫ朡~;%d*ՠs}3_l(. 4)}W O-j0(rPc f&8/YAE땘*RցuJ>("ipuO?aUjz8K8U i i 7e !M5j>gâ8 %J_([#" 2f"iC>HJ"aZSiX]\DzS5l4*woD?R]̆o}.̜JѥȚtw Q j0ثjH״h+Ut;q ۛI{L'd8 qlЦŦ t0zdL۹غKԷlD|`!̹Al9  V7S&,KW\26j!Q 5EB"jğ1<,@j[U ;~$+Lkk <*?PQ FQH$gŠl) UOir B&sޗś CH7׳#$ z)(`e͌V<С\ܼ[<%>tuȟ_z VӢe$ #^84*( b9-AR鷫C·5 ]#xy|>};}0 8;bˑ߀&ݛt`&SƑ$S%@\D!~9'[*:3"Pf`]"E0b=K`x 6hl0u±ꬹ_WAqN;q 6KP3^t\T~o˱ 6Ld7ӤO \q CGH{<` (nc 88r~G zGc8,)eqapn9HjҒ-:̂ҼB/xbd[[#-uҩ>{}F9rIܬ}s68(Gķ%n ZB?@) 숷/C4Z6~ר;v/Ȟ5רY&~IQF2)ϪOw*xfsŅ!U%/YZQF@Τb XWE^txgHB 栱ѐkըC3nn:"{ec+Ņ,I csoBטk[y7\ rJ,5FAX!cdMun]A~EudAsoؤRabRLRt hZTʗ&LF̫lҜDaY P n2l ۊg{vuhra^W+#ڜ:Z=״g7ޣړ j/'iӨX_q5AU8{d0֫ ;_r[5]۝@Լ.Oɽ~9@Z# R޺;=5sB|U7]6U=⥹?:Mc*M.ǴJtjZ^ f 7z<>ϗpWbou3vg8G>OnIlW0adnW?_]80W~1B9}5rCgDqW6{⾆RwcီPb~0jJ@.F-jzNGv ܮl^]E(m."("d.Fm#X܎$1r̒OSesCq(zA;J- Uf^k~ʟfMm}|_;zwy51 ës o2} CzAW 5H .`1kփ qer@`x*`؂m0Z CꁇdnBW ua Gۇv=0*'7bxNG%-zG`j4hOV(Vqu;sa8JR1qF"Ҹ:CaN8pP7MB>s}9U_ɏ$Gn}i;nG'ncF_NZ 26gKf ;7ޕOa K2/[j!;Zep |[&Eɚ/.(WCвe!Eqr߼KI|;>޾ .]ȝ )~ir6\^cc91h.V):5A1 iA DrYë˹H7ޛA}рӈ V,73H 29DzfcZ` &E4氎st+MpV(KJV ڴ` Xoxi{LaS-]owbpE{'KFdZ֎Сs Ha%1? 6;r9W!(DjRJ._h~Jƃu#`&|حԙ~amgQ+Aw֛}ƦLє]^- /鐻]z_{<_#A[=G?Nd$rF1Pz/C検^$x99@iQ$ِQF֧>)N&/a^i5[['3@FȪqɩ=CJfJՓ25g8io$bB kriꁫsf%oyƁ٠DtԲpA7n}hI?eeDamzջV53jY&4_f@sα9v0n]8+?7F=A3ѡր_'RJ1f+TBkuNKxK@RZ<* `4N n+d㎆{!7inF(wӡ3f)W: pzj@ $OFBBooZj |vb>.{`c $FD+XÚK7L]u"bgy[-Y"s&q{wb6?X%xgBJȍ 9r2y6հ.TFbPPaN#D l;~s_Bq4% X-QWAR9 V4Î&Igo+NQj];~;RN-:r[Ɂ=Z"ExiY"]! IG4<5wdzDl9[SeRT_-SF_Ab%F-y C{@FH?KṊ1r$qcF|bK0}R|?@ J=j0 73:hyd4&=ŧL_?F"~Ɂ#(HkZ6]n6a_I_g3*w4qr6`Y9r@ A*WD%fKH*%J$9 Nz~h>[f X]}R CL9X#O4<?ݡOҊ593BL$t: džWuO0)HOQrB^K6H>;1Aq϶,k.|ǘ c!@){\Zxԅur^5`"Z6\ܥ5P)ov]@jAP^LJ&ZY[,,lkdD>{fn=xܻ=sɫZLKA\U"qNC]ks<ȹ|`|qPL״>A܍CР'5i2߫Z3f}k Zoǯ8I[bkҕ?((S!ݞ }xF hȇiSpW$ S69>uI(>d߹ ʠ dTR?+k/B@4 (,iUaLf,f&:-}0\/,kę!#s߈ AL4_7 h;NhqL/G5%RxWMQ> d&6%RY+دݳir"#7Wgǂ} %2}"D_%=֋t5 оA6huC SL㤌UTbk6r*- 4!MrcE!C ZfMQ^V~@Eݒ#9boJM{-̡B{1xswiF_*4CZ 5:`_0׶>7rE.Af]_F$ v#oނ7-Ec{U8<^B.*]r0yBZRסV׆<=sW.-jp%bDj#< ,Q]7n#V hMQdz9K (iXձ/Fy @},͕༠HtL/M_|qỺ.- qWLy'/Mk=B_Z*4SNB"͜<1璟KCQcn8󳿞cy+_aeN:q1&ϕ:GK6BXcQDMf;5WG"ݱk{wp˾5{ ؏،=:= ~ޞ#̹Zytq8c%%?D>tɚZwrk=^~:?,ˆ=Ds[}=*ᜁj۷.hv/ }B1(J/v:@s8ݠrU\44.IoM7K '6BME;dDXqxp! {t^ ]*Ҋ;Z蓶BDD]s#á,]Zn1lJ 7{MJ_t2DZҡ i8R*cS')'yDZl.YoR=T*Z)+Nq/MV5Q3|Ms;@P4/&=קkE d\;ljHX_>mf*uBUl/86AP>ԏ+OUoy:U;xӗ@^Nv\pߚH$YW/T=win[T;DŽ `gw{8=LaVKRUEa>uժ^D}(gW[bRㅘ㬱MPW~.,\U˦6_L% VPB{I%>KJ/l)X^jJ$d ̼2s@67sh[GŬ/Fzяv,"Q*tDvL! Qxÿܘ)"E,3Gcym]e"Rݾ1g<"X{ݞͳH5ĵkogv cj}8!#Ǡwl˟miyꇆLCLYz ŸoOۧ 4p; ۚm۶mo۶m۶m۶m۶=Lܘ;O祢˕2(^%l}A *1r`ht?->) >>0)<V`,K[pI=480wqy&ߌ9 9;4TW~wf; 314GCh 4gyiGEPoiiGGMf. 
3o`guq!ݾZ؋!#+퐁Ŏ&˭m儻P#wwvt|!!q}.*wQe|܁ԜCQk7' [ôvAS[E5pFG] }~fILZt%ief=;~PT٭m ;lV|Q<n8`g"m'2B/S\9 Zie_ƛ(W}9˂fvީk"yy0M%Z|친N}&>WXGG[o ,wˡA$2NqRN^ Z\ea}.Jp%y>TkCMMJ/WY)1 * ,OʙTdQX`{RZP VٱliSPU|{qX5J0}f8 R$>jӯKA6.Fn˧^\rugܾ)Qo" :ȟ$Ʊ7ϣ3wVCYL%zhKeE"ƒk(+֒HjN2Xv4 PwjI>E&j|] !`‹V?:\u ිMV*,ucĀr2;o*Q$gm s\Z*n=o 7U݅g w '7,ԩ*'-./T UUtƑwc`B~ j6{'Z;DXZd`DEUJT熭M{v^m <U1{z frzz;%#"n k6&"$Jvvc{%C벲#'5u]'FZǩN7pcddcr0e#bYbvY񲰆NgzWy_IMHDNY3Oc@9ºƤA[pcj8.Sh@#TRrK+H12Pg%>QU8̐ VAz+ \z?7&l/ '+70g&uOP`Q+ս%`u}$`u_' ٮqoDLM,vJҧԡ}4fyiu8v0q< [XKZqKaAb=fCD&Ƞ1{Xj 9_Iy~K3 nnKQ:4yr\2񋁾w!~8-ywEW<͉ai>/p/ )rTs'ϛUI!̿ͅEΜ@Is@Njz|S $_'~"55Wj|'뀢**5mx'=S)wk!ɞgVV{RHI4͉aStUjxc$s`YKT_ J m!JpU5R)6ѧ^S'92k%>Hpe 8˞ ɿ[bA+i87g >%;a_!? J9ӊO4jQ2 k/ ؍Iɂ6An3Vf.gv .:ne|QD2t+DbgOr=5dyt6@{`'XLXee"ѹ$pJɦQAVM̑TD|!+F;Y*g>H62/K'bZE= mT>.l/VX ŞByefgiSH/5ir8/gGS o (k{@ dNEt2!M?uw_m}HW᎟0j3N _}K»ByzstX%-Qps S#*XEI:&@H &C!4~®8t0yh QL#nWwpdى6$p64X3h}hք@p|A~RI ݫvmڴ=izPx{qdNqȒ*tx (eUdr)Dk?AlZa ӣxtQT)qϭ;Ür&>wJ܎ 8`ݎ͎:bʐ`ƒ C8'b̜&kd D9@xT+4ʱ &.x#;9?'gܗ*6Jg$m72jȔb"<g{u)twc_r#ڹ]e!#Ar xJ$# d֯@w#8 ?6Mo` SCh"u{;I=q6%A!G3쳙`pN}s<9+M  խٽI6ӀJͶ7g)T{Dp}Y{KED ?cr/AʲD&cSOx`㽌Re4=/Z6KK-zhOr?oowkFO_`I?0~f͕\Bhgy^H: G_*ȑ+M[*^.ʋ5?v }~8 zj5wtco4IŌRl={  \i?˶:Ç8tĔl ŚKf8۽8W${ {^h:RLӜae۶ŨjNᗆH\%F }d"{8A/\"w CԠhA{+,Lomo ]\Z)_O>- Ђt0rV-^<Jz&\biVjjgNQOtu–~'Fd\RdMz,\ wQ]. 2O0 z8 ̠/yӦlXv-"ÙQ X ?p9XK7KIZ6Y3gk\^\B??_\( {"gRE` ,vd|N`*&˺Mׅ,Ffs_c*$ES)~bۚtKƱ Ns(}E# HxQ3|-ߕ~ :Mlր׉ҋ;t+ (oMP05+ѳno@s1:Y!Ǭ, . 륊_wfJ(({ƫٻ Yo^o␲'RHsJr.bxNL-<ڴE ks6k\2DK +˩L|f Koa:2yubQ0>=*&7UǰXkS3m枥ubg@jkC?} 5'%mK@d p̕vr.w/Ւ+YReDXQ V1m2UfULx:A^훭i9 |6OJ8ePzįd>xa°2]c9>eYHҦVRUbz\l0T`pG"ctx2mQG̲_g5_ΐǏ\:n*aB I{K0LnN1cm OBuѥtTmv7^o]=qwT,!/vaϔ"qKnЂ=:qbpAEK%Бv%C*!V&Li탚1d-$͡ zgo%deKKWAꐆƑAY$XQEj\!:Zjʛ̱U3Ҍb4+ )pXٹbӘ860GtHu0RpS\G~̓r%A? ~IbL 6'[/̀5dng^ ɩJ@ֹzj1.hGhP!ד6њ7nu>R: Q"? +׹,RQ,,xAƲ.NO!_ɮXq^j@t-Fz+Ż!NŅ-`S"vky=t05;tjChmw"G%kp.)nA-gqE_SZZ >6#5[]6T <"z`'F/gKGkNq"ܰ4Zc⒖Zu=U1zDVXj9'EV&,Ts`;ZjZ bz˄]'|Q3JE#|`G“]$BWpJbs"bSwQfM8ބpTeϬ(F*@ K#RQ' p+wQSE p18ϰIB;[.b/z=b)LVGe zXkq֋ꀒ2NjZ]s5Uå4}kỪ{u㒼k| J]ua;"v|QU 2յϕg/ɇ3M(U"*L%d<H**bU2>[H\4 2)$I&HUQ7!_SL%z`CmD ۢqee<*8^u%ڝa̼nC=@gK"YA4'Thx$j#ܣR!jTxHa]ckGW`ݔ?֫n* UPN^Eۆ>,> ggRybmQfĩX$F?Jt-! ï7H;RE(E{yO}35Motb/ CLK2Ѹ1ʂFڐTXmS@ZB^$5aP~ъXA[M`k"0T=Z4L!'=KX/Hu6ۭ/Ghh$6daKmժ:Dp̞ n[hh/],ݖ|];5?,X+!&FW:i~]W49xúO)bhm5T_KcLwi> 6[~sڇ7*q0'&tRMsYaనku'K-vUv zhwijQ*#Xz`w\fU; 0S~UD猓1$"Gff:괻[دHC߭3_S˴ )Y[0LP``ZxG&:qgHB#@nI]fռC!<=KF63Tr3Ur\ #$Ai[zxQq9 " Exg f(Jɉ.tܼټuܸg}nk X9bd num'^$Z g!"c ql2ެY!T!"%}kBƅ2 ?pACfēwꆯXԸVMgܕ7taW5f?t!Qy<|W<g)" 'Iqf{x772'ËWԞ\I(Ufn9OUzYY>ʇM[n,* 3MHղtohuп$\ABu e|\,xU afX<6a "eܔ;11aɒY6R˜|%Í?X1jw dE)~pGg|Z"ǵQ$c{mw07}<9%|G`>p 넏{ޫ}/C;{8-8^xNZy"'L&nV 1;ЫY ^ja0GH0]}Btcx4[zsѻsqܹc/' 4Q[+ GL}Q^q&?V1Q@A _shqg\AԈfXwD]xC׭zc)ɛ:quHenqs$S+r6%se\7UA0 Ũ1o|ybК7f$sTֵQ5ќsyً ِB#'{kp f6[K"w6&0܃sS2x?MVMg:>m'zVi\^~,BuO+id<(Tg_A|w*$&A; yrWnpEy;9> ԉ2r όF?C&{xO.ۈN>#}#|z =V팂k -yR Z3LΎ*Wņy. 
>Hv˘ 5!Z!{3\Ν,el :+(6 >s繒¡ilKw 9vUBo Ey45r!ϳ'Tk&' aZ>#-)Qj5;]PNtB[z|>< Gr{7")6OEҮ@P wcf.5;DuGLf [.]lʴ0UE_d H 4ׇyK-OCE| "΍">XJ'-8_~ze->(Jë>8R8%16V&^CͿ5ut&P*^Ld/_f-.f[F^_Z"ruN,Sm dZp{CYOt<?FwnO^9@[_pf SH?0"ҽN=~XOa,(1Nع0ɜB #`޴Mל('AwzfDܑz1TŀقnH-Ol͙(6j\v'J9n.>Qn=sT16 [8m/X{I:ؠgK%ʗe} dxF-Cbʢ Aӎb߬g#iX/ѡZ-0_Xqd<`h^zһvo[t}'΅Q_+Ц+ \.d&IK GAPb~%GSȺr<23K \>?b>Ь~CݹIu{>GP_tqRPAPYN˔RVz; }P|Ze籙h†Ml27tǿo 9 Y8|< -sH,.O Wc>Wեk_ˈVT+)Wtr#ݽ_[`-.z0ȚIbҬҧ`ҷ#c#26uyV |o25y~`#F;L_JK狝~2/F/3%5r:i]bo+p4\oX#Tt\T*Z* mT:&ySdNYu44,į{ggnc3wcgGWܦul7u ) p"Q@ 4lde$.ZH3Ȭ6Au&J4 ~&9t mϘK;{[!!!vZ즟]vnnM];RܴG-"cw'A_#zov*%NƏ_8OL,QNWHp"1@bԟ=)>xHP} ˺J1~'Wȓ3Q#˭S`1gN0>̵iT"04{t$xB$Z;ܙ"WK( ࠮u=Y'J=Y-PES×=sᣈk%2" hp$ w "ktX Ff},UiV 5";xrH+?r:x0DĄ>& =={9ܫXNYTϞ]%KUI3GTIy!; sQpF}u+E$\2zO__YS2ȣhjCT4 ͱ%%;4xkЏU`>*AKH&g*UdAmH)7Hr>9q]EQtΠ=1 pUa 8$p;Pc GIgF@=A:{ ɭ5m.`@[-:1 %xl5βP>xsAPJ ٰzzb|.VggчkՉS@+[= Qf܄["]2{6R1֞p;GPeJ 0zٕ8*Som9=# 1le5R;G&}##h9Vuo0xi#2=$6;HSa#UC ܰ6$ϯ=OnZ^ N#dEM&ܢJx ڽ.տ~xppFfG92\U&G=U]쑵JCWa Q wO4xPx W3mPiCF/\ ;%e7䓡mZ {3bHmS9k]|ھL g){?%2wO202=o b]M+l۳sӖvڃ*ri},Tm{Ņg^> %C/3T:!d8T'x&C=˰ rsƄ,-ONG6L AEv@l,@Qu@[DLXkoM~֦&!ݑ9xG1PA<'^!"a7hmjMhU_74F>@RXx緹6 (0z5˭ߙ2x/|?oK>!_S;=gjs:T|[S3;־7ԣa3 q$J~̘ÔbUL:.9Psh,7~rމҁ;ދXFMbuUAp$@AZ_>7,f[N G%FG$b3/;CPVRr; ~8cS-5;F7[À7ytFW+jXz-[=?XA"/%kVq.MlբjtJoM O&'>=on"Zu*|8Ӆ-ORŲ6WY=2l̸Dk&.Zg&T8Cz]_@mUV|j>X)0NgD)Y! $V%{KE"Q,\st aR,})U4ii<9iaGDQF뾚 -Qj-cDCrWIَoJ#01 )`eH#jxSw  w~5o ;I5@Vt7b_-Pfgzi+CLD#m ֭R aSֹn2s|OVf)կe>gwyL ȫFʓw0 @"fוZIzDe8i;{(MY},R^cXy FK?αIlG#nI}lwv fi*B+:NJ;ҡ;G_û*DĪʱZq E(ykD pFL~X+΀AC=Ό ް4[ *s+٨aEpC< c _tOEĄ /^͍OlЂ)!gx:q]i<& il[ E&3 ],c9fSP]B;8e4Sup`d'nluٻjSD"9&y2uf.1~zy㴕#wM@IE.^@A'l=`vK4kR"߸QH Qp`V8ιJeJCفAҺ,(+hE9Ä2v0VYn{o F [ kԟcA9l-%ɷЭp/xOЄhȻ<EKKSba3? q@HQ&UI{6% UMPX *v4൐K}` ԩ~BB6n7ųgRqS̒3 ~#Y죻+@w Y5| IzϷc -N |sȟ٪!sc%,u^?iR}sN5l7{ie }+[O xYE 'WP*%*<tdH.0Q܀Od|ಎ\t T^1 YΕhXA$#cY2ʚZ|N$rD4 =n_;_[E#c)V[Kp#πnY#bܕd"ΝWю_א$.PSc: zJ](#O,.+"#!'3O[ @CF@VȐ/ıAʎ2E^:K b_1nC\ ѾnV6p ^xh2u4(< L C{7^HL*D!\yx ɾ%0oQS+Y3$醆E+uY:Hf3N}94#J>4בH Q ęiؽ#z{Dkl!hDQmcFУ ձ͔ ~ ;YlهvhFP52:s8vDp$d]ĚcUC t$H-^nsܻ7/E_ߝV Ck5{rˮvw7}˼͔`^ i> 3?XMЪ υl'{ R֖a1UJQg3 :J]i6_veDNxԆU#[1ifSE FIHtn4-̛giK0 & )s)T@fS2nx1Öi)CbR؟b|$DBձ~Zeg0mmBD{߄H7>_u֘!X^u㛣ߺOKJÖVK+ AᅌzgP0DOer+E O4k2V7{@~uk 'G;`FR%džnii.:V3(`--(kC[fׅWȓYuD8ڋȬxJE"9Xn{Yc4(?][1ųg١5jFiPڶ(Ppv1M}Bqa 5v_5Tc v :>Ch$㭔O]*,WYb; )Q+Q<u$0h,#78!/z 5;|{4yxŝ.@5ghueymێ`>vkՊa[0}RIJykD:G2h&t+[}a;ɽZ.ox o4ԍ,|ÒCugX6LNkї׶WeN AwIvtqlҌé2{G1a=->fABD)!K,n&ȶDS6/F l}a/wՕwPDkXIY, (-ɇ*^搕'НlYw՞ aS zޭ~㉞m&E]敷J[CyRJ꾔sgȌQtTeJ[T!7zÎLű #z,?/ߣܳ_)s3k>H:%A@ɵktq*9v)3' n]A"QݪH>P \VYﵹ[= ~Ƈp&KH히繹1k^%KÝ30ጢvrb4i ʔQ??4oVO1M$bnE\c+qd-Ugoĩg|l%XWMHY)n;<[EoEmM)byplROG&e)L 4R0oj>wG6T &G(9 \Ϊ8gW!@E>b,A.tdRBm=WpQ c4hi.Ӷ ;lSp$@ﲷЌfE11M˼@L}\¦Za;<)۹ |1ec~Y`~8Yj,>C]и::#jBߎhDz1zOd׊|+uXZUźWX/*|X:SζSҚJffNajsG{l+O1|O<)HPU[\0%cTCnmsNB08;oO--KtI S@L|\.V>.ǐg`,ߴ9NXXP9pʾ5) 961V]6=^5SΒ+/Ld/E#wzxή䜮rEJEaǗQPt+]6@' s8xq$=leeN=@VEhGoXod5.'L$]fnd q:vI&h(Z'[H4K)a }WL]NjE E댠.H'`{x1<6϶S..ŚQalYդVo(oӷ @JC33ƱOF4t}L1s'O&?»iG(tR}^j*Z݊X3*ǫ3^C] "!%PyJhvsiHыuΩNECYրC.$C`"@{4 .4>,Ғwf;EiFoʤ&auM.YRZOhU4IG\gD-5丁Go(D%xD/oi<*J֚[4ʇrqIvd'o! Z=9Eb-t1E[}3bPŨtO\tÃ7˜\[Vo0 'cFuH%3|%eX[gK9=jNƁwDZ[b ~Fk_Wk?qɓҁ^XCh-veK>UƵ\3aI`4QQPtHpY<֏8tB&nQN_Fog^ÂHknIH~ph ^Y;EJ=$KM5_ Y%v rvsfg]? 
xsBk>mљ^^5qZp Dh#yqLfbhmtȔ7} Flю޾?Qeˡ` 'nR2.at,زo-b[vր ^Rз!s).c >楅AD(Ͱ :F;tQ-$졗<ʿ\8IXHNt/w;/ǺJbΌ7oQ{e@tsT22+\oj?o*%¦MQ{tv i׸4p!f[zJ:XpY 9{ߓ S, #16C}=TpY#1)fS.nhΚ; 3#۞_pƉNεrXyC8c,V#T-H ~QOeB^DZ'sbII;70~1uhޣRfuŀl CП q0pMr>x1?L"l3^uO  ~|%AUkE8r)XDGE$Ubz yf:8Z$.ih%4VyvJbBYL~j|˾۵SH4=O0rI_~Yeڳ+SHQȘ#W]To6ow/@0OkUEE-W=㸹kqot  ,d ?RJ8n$d,(v Ռgk:߬TI5SS`rV"0&O<4-euwQ/;ݦ+8]K\Ԯjf5kGG*I*oEzd@["̌˫Y枑 ugm4:}9"7>4{_sڰN-+ɁH ܉_;&U`0z`XrmD 9dF9AH.bWpfO~t /J\hD _qz2G ,d[۶f2}%. UATKX:H=P=ש-Xnch>vw_AadQ^XS{SĘ. XV\ƋYI#Y)B@ziKDIq^E)h%k7ZVɯ"F_ ܉ɢ5O>Zm1Jxϖns}UѿO8Í3ɞ";]2X 霘  tOz# %Rn?;;U1gMf["/" w?r-n9e8k9 sfCxm9s_NIhY5!".3%kc$kqxr8|w6@o/vqi$1vQssd@ /U$yG D'4U4WcƓe"#wM|}z0> ys6v;v0[v_L_϶v)<{ Wjlwf;|QKLP?*K'r+ӝRrJ9*Wϖ%uk`+/AU(0'`DۍlIJU]7+KrUW\+>d]dW\>++Jx+{R^+?*+J|,T~U'WHU}2W>Goc+(S* U:ߢ*pJ{V~&ST|2p^\Wɗ W*V~>U>wW)(UR*xJԆU|W橖,)rZ+0W:ֶUmjI{ii誾)_wBO%EWYw|T%OS\dQKfbT%ODCYu>X[uN(3Z*?(3NP&쿍0M~ 2,rwE 2`܅G`eċE-K\}T^k]>vrUqR\>B;4y"_GE`@2O CoOy wcq`RSn#l" (}bE$$j).WL!ԍ&2u ^"II=◂6ԯ3Ѕ*[5ktY8ʵMV) X7]-~kIWVVffjrdل@=,3Vݤ M:}0hm0cYY+w ɥmzdZe5u#ʰdp}.:ZԻjؙ%izXNrf9 34>"M\&SBB#fBU\NWe@=h٤gqܲ/37uI~fvncB[|tzs; ,Y)zqzLpk0M|[R;`zG٥l RR!"_♑OpSZV'˺AvCMuBsYڪmYF<\[K@xAAพi鍋 .`)؄o_]);ؒn5~׺ۍɞs5%hc5Zh*xfzf,(}VpjV&g^eֲ\Uݍϋ.]L2)kMszhh? GVT9G[joU9+o1d*Oւ=4Ê,+hݬ;hTϴخh͌!7t6hMaG5a?T,<}-&d[Kr:^ϮL^ =<^@&r/7tpB $ b-҆)0_wmAD8^y\h9sToov ՞ ~ØhLx_Xq"A[`\r8[pMխfS6< V hΆ tԆ O7vĞ j韀pʶHR>OU_㓑ާ$7f2 o<2'z4`;{ȇ] l{-;)Ɍr\>;~^OM oѰT.21BJ6w?(", iC]dآǤ9 ]$ 0 H|/ho.dIZex\~ëfU9V#{Zh’ْ%suPd" W׎+ |눃 YJ.uKB]}E1FJ >;8`m=Ɣ5Svڣ|2{ =Nra!撤ISgiu/w /` oj\CG )uSks /Ksw T<wyd*M{ &AS\ ؛o Љ,zț/3A^(!5d(Ɠ|XTBEM n9W)oˆ1[AvxT9xgyjz|*A5 JUObj]"Pf/]F+~TxS9fK 7نl{zh_EzU+>@U[*ۘgt~4SI1`Փ X]ob M})t5dx@9Ȱ# cBK|<*5MC)*4& )3TI=돵s a f30/q7k.?"K P &[ctZ17А}ڥO4r~PKW=~/b""MLyP pSL6a"W0.-r(x$EWgxL\og4Hq2{>,)kSfJsv-ف\' ZnqWfKZyi;t5͚Ɉ#-TC(3p9L|쨘 0Z@oˇb\1_\Ґ,`Y`ܿ 5e5<C#9ОnM=W%FT!T3.Eᓋ;X5Y8XT31na ۭU&R:bkVFI':]áEwN`i!F\a\'Ra |6g͙2gB[(pGy)S =g2'4VajMqq7aWŻfM4d]=:˓?SFT -qe ySBx&ýuW]ei\qFHhiڮ!ٰ{F)F 6j1ygӡ{_Z.]=^2uR4"$%crSsR~\Z7*V3=TL ^ lRFTI*|am_|g[.XE?3L~,V;%>vle5;14bt6#f_$2|^kšc݃u}J؇/x .tQbjkoiK,J6\ ۻ Tj8s2v9uw{jͩ~8E}29<| 9T\{n8K|eF)a:ӊ&w%5NՋ'J%]i{ Qqº;,b|z4i.PJ d~)=)msQXԢZVlJx N[-5-.{nMX a]\.b/c6zDف)il' |&gEæO\@꒒-^RhK_C`h>Ony7XUr4A]AӯFQ3HT^,Q,Rz[yIvQ!d~Co=(L5 U:&Qm\T"$y'Ѹa h26m:4; p}azint>< OCh\zw?GcF[4(2WU6GFVky Ӈ ]LCH˜MQ8`jǭ~Z >ubR GnGtk+ D.Ã/I3=|ꅩ]rM 2"=}Mhl"ȑ@}>&q Mg$a;j8|)HX4WC)Eٱ $`HnC{PzM-+s]/S01O\9H\P`!S?`{XUȉlf1Kp=C|NM"Aydf20cӆ͑9ȕdU㞫D$3`_k (i@F.E'6O j,/^j5ÆCcm5diAg]/A?b5^7 \{]\.$>/p0gk$ʢ|'oiQJLFQ 4os_!62qL FwY)7rpwpX߃zop`>ZQ m\D#5az<7|`W/'PtRX nDl4`輔vIy~gEFtgӹ>Q4u9yCvD~n9lDK%!߶ILf hE~C~Ȩ>3eVaTzUm.{gIhAY6|3߯F>@:^tteGvX N[ !FLQUQ("m!eAqRv6p䍺J>V:w͋@X xBpfmi"2uRs`c'A0>fg\PNSFq4Ds9Pk CER5j .ɥ*ZKAa@اY`YOr*n0DsFV VDC7KOYۊĩb_1Wz_'ݚEEyK\k6n/t\'5UĚmeL$M>l~p>M򂛬c0t2{?ˏ' g۪MW[v5ΐ ˑъHn5nj23'cb<,4[k,[c3BLEmNKٛ&.QҲ!^!t²F  [/5p( i˴b-@DTаeEKF#oʂܸG嫽@{_m?q ]/3@uA:ys FcvG0̦0Ca #B4XU8^cDfW?A =s簇EvQKG+ݯ MɼЕzN.l)\,CRJi#9@D d`0[]V+F*1 oUF(p!+TuH eb~uXiRң N-@W=PoVщ\ܱ=ݧ'D04[Q?U~qXxu9nxjW̷gsK\koB޹7ߠr-;.R+Dg}|ˇ jZo?]yQqΠaT^8(Tг_L;xi1LZ$8mIh46Э2G<=:I&d24ǀ:[l|{L?yZWZ)i)֢2:w~K -zG?˻sň҂N!ڞGN`wu<ϗieٹw}z. dHn}),6sySc8L-;Q;NjǦ8Zlӂe}\ޤVAӎp )0F~ϧG m.&`b->h+F9lnP'MjU@3.'-PP)?C0v[QAĄI*nQl.+&: P]>!]4bG=FԝߡF`kc6[(7m', jh \y63Z+|) fШh~Q,sa3G}#~#F"18cxQ#11_Ht# F1D,:Wpsg\j.3%x\zre}v1"\. 
%xؤ`vv tB w [J$2|Stn2Sk4)Y`NS(*Ʊ8h6 ,)}3Si`<@R(̷&']J.s;17A?[q>|MJ:k,"qEMt|V4}1fg6}c|fahCxw.5 qK\^ ⪉/vE/ Qd$Xǹ$DC:?$$l O##6z,R.,=s󄈺Ւ@amxXX),!@= ϱl.f~ R*"aUl(oA?X./R5xxfYͳ7.na%B!ZD/kJ"vd^P&c$M5oc~C:bĢ0'I 0\bUD$p?>d)\dӲ9-eVoI]V֫,ilkڛ).@_:W7u`u9v}gire2Z̖%HЃs9&a%D B*vXLvoZ ViHxp.b`l( iHj|#cΈRsu*W1ǕOh_uQ190L&!*hB?m+=|Ϝ$*,;3Xf9 d>7b{̩Lwt A]CB6n48L7́f[Q\/4i^A53 vBP Q TvIN/ 2m譂Wt(;&/[vYYie(C%le/LTRkވb S"{4F@W0R&DL,'@LuyY6eRLgؓ ] ;7L1/ךP6vH+TފU%9Rud۠yHaF&O;ʒC/eO0D1D;f&kfk[uSwDrgի y09 +?+#Β|Qdyn-Cڊr+υh˕=IwhZ_M]7²Ƕ=zq4Ĵ},^ TsӇ~12iGYfbF吟mν2Y>iԚm0qۮJ^=>RjpjPaתupT%ۯ^يRF/Ck iTHCV\ϳd#" QI46b;-YtLtyXN6bB%"uW X޲eNp˩_kI5j0Pc>A-tzngʤeqvqT-zw Ǡ ?xϐ̰σ1Wɹ+^"Q"IJ}BV/pDj&5͙('RF:dZ!$0Mcj R ]/EJ`\[iZNh4缩\R`ؖ+66V5Ϳ&܃wy;`Sz3QdDEKHc0&4i_NX) Yɦ0h|KrKy@j!ƈsv$n=$tpn @0I:>\Z\2^ UDȷ,vzq2{=-E7-y-Iz\|v:=R;C7=}2;UVD݅o=Zڝ^鈪LC4MHwgR]*ȓ:O rv `~:Lm+ G(oYk}-/~Q|,ru!ݺqu,AJ&}%TZz8M!w9^SOrls-.j55uBL1dZ'ܑϽU43sm]A׼Lj #i|48 UZ~9A5>Z<~=hЂbr.!)eI؈bz =ڟM/[ 4"_,V1q_}c,WT W fgfqst}>R໤^_뗟:A֤n"Sqvh,Y+0-66e(b=ڝp<"w.{d'(fUW8Ȥ=^]ßZ6[F9p8^Ӊ& &MQnk bs:2`=DGu3zY vAɹ[뀗{:)zI񈥴 Ԏ x(PqB n؃#2k ?B@hH]jx۔bHa~v{4݈ aʡo̵C9*&fn+#ג ϥaMq9Ww3kR= =GۗCntuť\5RgJ q[]"Lw+T{[O޾N!-$ϩ\AÔ {k NIsӡ uO!, Z[i˛g,'@14FLة%H#h@{&&zjr VJ :+ûl:qSy{Hl$闊&ۚ!y\p0 H|W7N@;B5Ō[զCܶT <|^8vw5]y#Xٗ^R7lXtI3,4AM`bHDVO11-QoFoC3!bνͲH ȠX&042 /VD-,ݱG$+3AM45N^ ŧC[B"cgǵZNհ_"7޶j'DXcQs(GT< POx ]^o^ w(s0^JdymN{~Q98LHn/ co^/F}tf+H= LKxA,} wW3;A?|-  2B(4Q-`vM8o9B|YD̒  "؀&+/[-SS5.Zq¦gr' >BbshU4qR2/©H.UǤT:S#p3M 5D(2~Ɓ(5 "TnseqϝUڤZ4147cmWB +Z.V3vJmʝbKo(6@3Wmw8F4ȏ_YP]f?X /yGg߅8.H5 &.g_4~tJɭ yBj *+$D=N?Wd=٬9),`e$y0P`ǘM/p2\b>Ko7V,<'V2a c/eVaz{8xR \k'1 ?\3IǷ/ڲf]?u43IU_$I冕`d`&1]vd'ɇzL=oJgiIk*W!ȄTs:XH4Y'z5YTk]ZLgxp;|교n&1>9o |CKΤܸ`XMMݬL1]VNU1I|7KJ y&R+mXę+гBwP2~F]t>*(Q  l";q۩ RU͎N% } SLo#5he0}l =970 iJ%87 UJs 2[Hnb_haӗ| k?b69H/Nj1@XPѾ+i/-oW4% +5}#iy dfvO꼠a |;򳋵13ɓ~I{wq6 1uc8<)f7 |Wt/ڪ>SUH7XNp_[)x 6kԳ>S${ gkп*)O )ՇՊ׾j:NOo>ixΰ&JۍAr>zpZb*ϙQ$hw8|kxπz\>7P^y x#6bY@?Up Hry~xמ({8ǟ< dĕm` we6Iڅ]P5ʍ`HRyk} Tt޿ 6",6W9Ph;m@~dV3f[8_X'&\{AUP[]1ux5 㻭rY'H #y}p]:~yy mm0`KNȉ홸BTFKq%z"ͬj*Nfw8|QG^%+xLEH*wUZ޿Z |.9,ȏdup>2P}N&w HJaS]}3 /۟zh(S7FsjaH0rEkM,=4Ld=yU<9?kk`=OtVڗH.n #&Ý͎͜%@IbnԨNђ*xhP% O5ĽJZd>;>%u`^ Ncϱ|3B?gH?2( V)o}J;nG5|>2AOb){|B7me> ༦Rm&G2pu8]ݐН D'wc6o:>un;FM@=ȼJC晽 FljLğ~wp߆K= ,d'~OhJQ>etXTG}M?s3x}whFfp1k6~x}ma;q>$%M8 '6G!'xgDA86y5fzYA3^ sڻdEmomwe;ϊGgP8Te]k2c aޏw$!GLrHuDmTԺ> 'N*p0^:v[i|䉬6Ah\XR3|.t}~4b`Atm5BrROKfkӤZuG+~5_aQ=+̗Ҥ'$"g'a=t:@ b/)GQvdIc hf{KkԽHzlY\Jt\jׁu y1T9&&pHb +{M hP3ӪH!3Wi)s6{7F #ya=/C)Pʜ}+A~1,t=;{ՠR@{SY;#2|akkl:5{Ohu<BΊ}7 KX$hkʶ[CW >k49=e C0rZHRV~]!寘/{oR둭6o9FqdaD^e8&[Ƶp_ͽ):oI%t ٠iƕf_^^G_ UWN Ul.~ zGR)?SE}>XMw4]T8t9 B6局idm_Sle܉?͒٭zOd^TһnMp9%}/t26M|:ScTh"sI(orxG5ySC z~6.H?vZeP)2cj( q͵J^OC89,A^KrPGw"%`jcSM1tVvr7ucEP/%iGX?k٪fl:E1 C6ht}-%}A(ste~9fCA4,syw|}v*ye'/q3Hd4^Ѝ; h* +܊wDc@/4Rx8|Not6s[{'0xet0{#:V#1'c <fKxɣ:nOk;s<$oМO"XeZW83X(r1CXکs&mf-:MuDc]zĞ`E䰪8#]):[OLݘWa nԛX%bOvD\,3(^.;{Xɜ'qÛ$,˥/Mt>j)Jbj^a|"DS1ӊ[H ^=.LjeO%I0=6Q/?O[$9zbSrxLB}^lcz_I:r}*e^?OUIk^СjۣkʌR!؍g"+wWwjeafIa MbE`t<$ kG) > EhP e|EeB0lUBnC T&K;wN3y77b;` .KUXb[-yz,0%|cژ ^5/<1(zh~,uLC: q~q`"%1v? 
62Av51"Pq+DJ_yjZ!1V>)^:svKUU,e_-o{bʔ(I};J.RơTz,_-H_i" ܅JQG|WG4f*=bO.'ƀ8D7?LMN yow rF{Qu*Vwnwlؿ.fD{]#d; H>C/QBm9Ʀ{dOy8Vu_%98,V&fReJJ82izhiBzcre4(nH`ߢؿUX9?GَCu)qu^19!E miku\^ ^tߦ|Hٿ1La^61Y# 70ޯ9+"ץDtɿgV<g~6 w Vpa >1Fw9gۏ-8q}]\7s^6.ڪݽ)ѺnԄ䄠C?3kPbnz4h455?QH:0a`qNlCEmHFwFg:3K0\_3 S://] n]~c0yl%xaJb8$9fkFUknSYZ3"V!KF,?[QXߛw>?@/عqH95l:F/@)ީxdp  $&͹q&Fc~ h06z0JLAA$c _% +2wi :# B;UțhD1q܇G˭n8 :ҁmMkw=ղeFzoѥTS e"^(ޢI,CbӅڴSR>HdH'tpPb($&D^ Dr`m愶s$R$Et&3ĐZ잂6(ygxB(X^A|G*"%aQEB$#-š<&xL$ "- i PÄAٍ7+ǤiEO{&i[*20@d@]"]DZ !@ݷ' `$hf?rȴ*$ pŭ ֧PK$o(x=D@A_Ȍ^KWBSk='W.Dn.q ^X9jtypNpKNraK ,ݷ G=8eEUޟ#DH0LL[OvY_d{HIiLD׆@D0ubo H eGz%M˛.=X=j 4IԄ(ӵ, e fo0SMIs u"w+QYw; 'Z~.ϧvv:A^ 1Ԓc QY[Uwlt7YcPh3}|~?U;3AeȬ}? ĵMkf:muƍ`eu&KV,dMTOWhRpaozYOVَjo^CP7cFmEzI?)u!9:ZsYQ<3Ļl9"?TCEdB.gR]33gHcW G2S(7tC0vʷU HPB i''eBdCgOؑ.vbw{׶l ^-LN@oGn*\BE,؀3A1 RjT>2i" I}G~c?AJ9EWH`b)@*Iw[E\0?gK1CƽgK'/hHtm77;^6L'.%+?7tW? U'5K7VH!ήj 8>/qi?_N!yj7_c&W1۪ڿ'of?ʅ}:~:][nqe;Dw?dK;dc;R}{gq'Rcݫ 1!F7Pvr{pQ ;$?ܓo,;5>/:úᆥy٢p%gJyu j.,-͜Zڛ7U&7.xea H<xBA&i~o@{߳,A.mێi>;UǤ9a;-gJ*=$ԋ-֟ht_j(KeGCRc]݆)&BMކE1&s{Ou<\QONQO-xLQՒe\eg3QZ-P3~Ȫ?ou> zRۺ6VjR/i*<`Wtl|U%o", Y+ݤE6;2@-IͅI ]⬏HG<Ξ(1ϐn * _8q6]0xm Y f8#b';aDR@ KwWa :_ò+txx`#2;?^.#Dq˽gOix(rF8E(Pa+hnC^GYM\^F>SYy\RoFi7hfEʢ <MMohiZll& 01l,o8bvIu`$NsM0˝r=UuuYYJ'o U!/Ub)|4,qU" aUrEkY a@AIBhB '-„X+z>?9Y*iUoK%+f EҬ%WILwEÇ:CW m݂<Ln?ELBᤁJY*J+ %1`c/Tq}t]h?tS*@ɛ[z `1 r ݤI1ELH\u8CS;,& }>#sSp`K]G42.Gb7ec㄰d+AՔ;AȏzIWS0kl|7 6$/j\ʮUk9 `h`)1%Bf;96""veY< RnR>& P1xvq;p\vS܎  6cKxW‹-E]7w5GKqZN݀BJܣvnY?LT6,!j32ȇՌb7@'WPb+K}r ? DA(_7<.ҘX e(`߂VXuq=`H|=+Z#;hvjQ #@gЮ׬ds:W3OC%09pkXp*(fs1a%kЗ6T8Y1ȐCƌBW%,:{p:]/S͐?V.b΋>/).xW ckxN/2369OXhôKjW-2A%:%5mdIo!KSL ]YU0o))؉VRx]Q()$@6f70{ׄL?Ti٫c).x\L`$RAt <(̺[*[^7ud:jقxJ^w(َW\2i5Q-clPܴؕ#&p Dف[ɽ=@dhnB2ip>m!+eN0Z5$W"@)`0Q{&|9{qRQW H}K!/)Ӥr0wM꙱/=oCZFD7O/$#q頵%;9Ո89 4%xKNyt^sJ)瓕85#@[`O`:¤Bf~pUe+Z/T~? )9Xi]NbCfG6H:F$ 1[z`fzkr>U58jo1VvgAஏ,8oyMAXŞ6#-Ǭ]JXΧ,f|l D@˻gv0FO YrA&O^q vZ[ƿ7➮h:˹N#|x[ܑ=4J:WRw2FI`c{5oelZ?^t. )f_[xsE_t 36[QXp'|2YKiyosD] 3}k>@.F4D=`)mG}IՄ8n rDuVP`>`FWOmDZ&0b~:rW ߰Ѝ=?hGnVO_CfduSUݨ"?tSJ31oF")y'“ ;S;8]RF5Jǻuv5ZN,$&lU YA;Q9r-$_0Q"( FxEtw$4.6d/\ 1b.E0ףFxDް-S#hDF{,%2ʹyMspfA%ώmdgzh|oz,ړ>ʪcȗ93&}ejGؼ%=`pzk%`zmBtKQ෧CO쩱hԅ!gW.jGo+Rk22~2B45 fտzhAFv(ut=3qN!WIaIR}rf,c',6z'($ (aF"!{( |m" ʼnnZOi\zӪ4q˦ qu&$ i ".A}hHZƐ95&p<Uie:(J2Aú߄F r ANǡBv۽ gδA?G OwoD~og!8ۘc-Xo\!2 eq31B'DG݁%zԇ'B8Q\|pD_3R2ɺ&30|BP<Ȕ&0>1~Ր.WuJ*JiJ1l26 ٶ693g"Mܹ#쁵w%B CQ$( úA$"S,uspgci]6Iߗ섯Taomb7#CS}:'QdDC2L˸)eKŔ~P|C[~{(!YEk5߲z3\72<؃-Av<.0CY]hlԣP1^{#6rJp;oj{xĤKVtJ4"YyI#7z5zO})CZ4{zG)[`WDZ*NNBwB%Gcmi0DcE4`3㡱sɒiCԳ5tܿv{abײ*jiU T)x|z%5yFN*=fW/-R-Q+ZpH\<^̇2;ٸ#ږtG ?Ks݉W֬gbtws]᎕ps/#Ҝ]lsa&.Qk\ ZӯzT1S4Džtw= pӐSW뎃l"_O'P'$ ䷵q$%?]n~ptф33!NLFMA4}! ̠ d=O1-[ A4WC}Q#86xlOrGpN-6ׅ4 )&Go.0Kf`jXƟxkUQyarںS8n87"$g;KR]R쓮 u޴8L$j{8*uu;C;l? 
[binary payload omitted: zip package containing bin/, bin/WALinuxAgent-9.9.9.9-py2.7.egg and manifest.xml]
Azure-WALinuxAgent-2b21de5/tests/data/ga/fake_extension.zip000066400000000000000000000002511462617747000237160ustar00rootroot00000000000000[binary zip archive omitted: contains test.sh]
Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/000077500000000000000000000000001462617747000223075ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/ext_conf-agent_family_version.xml000066400000000000000000000236301462617747000310440ustar00rootroot00000000000000 Prod 9.9.9.10 true true https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml Test 9.9.9.10 true true https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml
https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": {"GCS_AUTO_CONFIG":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": {"enableGenevaUpload":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'f923e416-0340-485c-9243-8b84fb9930c6'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "*** REDACTED ***" } } ] } https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=PaiLic%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/ext_conf-empty_depends_on.xml000066400000000000000000000064561462617747000302030ustar00rootroot00000000000000 Prod https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml 2.5.0.2 Test 
https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml 2.5.0.2 CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status?sv=2018-03-28&sr=b&sk=system-1&sig=U4KaLxlyYfgQ%2fie8RCwgMBSXa3E4vlW0ozPYOEHikoc%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo '09cd27e9-fbd6-48ad-be86-55f3783e0a23'"} } } ] } https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=mOMtcUyao4oNPMtcVhQjzMK%2bmGSJS3Y1MIKOJPjqzus%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/ext_conf-invalid_blob_type.xml000066400000000000000000000121651462617747000303260ustar00rootroot00000000000000 Prod https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml 2.5.0.2 Test https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml 2.5.0.2 CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status?sv=2018-03-28&sr=b&sk=system-1&sig=U4KaLxlyYfgQ%2fie8RCwgMBSXa3E4vlW0ozPYOEHikoc%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo '09cd27e9-fbd6-48ad-be86-55f3783e0a23'"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '737fa9f1-e9bf-4c3e-ab1f-9e03cd0b5b40'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'f6a0a405-c028-4e68-bd77-5d491fbbd9cf'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '338e316a-01bf-4513-8ae1-b603b09ad155'"}} } } ] } https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=mOMtcUyao4oNPMtcVhQjzMK%2bmGSJS3Y1MIKOJPjqzus%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/ext_conf-no_status_upload_blob.xml000066400000000000000000000045351462617747000312240ustar00rootroot00000000000000 Prod https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml 
https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml CentralUSEUAP CRP https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'"} } } ] } https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=PaiLic%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/ext_conf-rsm_version_properties_false.xml000066400000000000000000000236341462617747000326400ustar00rootroot00000000000000 Prod 9.9.9.10 false false https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml Test 9.9.9.10 false false https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": {"GCS_AUTO_CONFIG":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": 
"MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": {"enableGenevaUpload":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'f923e416-0340-485c-9243-8b84fb9930c6'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "*** REDACTED ***" } } ] } https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=PaiLic%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/ext_conf.xml000066400000000000000000000214751462617747000246470ustar00rootroot00000000000000 Prod https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml Test https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml CentralUSEUAP CRP MultipleExtensionsPerHandler https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent==", "publicSettings": 
{"GCS_AUTO_CONFIG":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent==", "publicSettings": {"enableGenevaUpload":true} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"commandToExecute":"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"echo 'f923e416-0340-485c-9243-8b84fb9930c6'"}} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpddesZQewdDBgegkxNzA1BgoJkgergres/Microsoft.OSTCExtensions.VMAccessForLinux==" } } ] } https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=PaiLic%3d&se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/in_vm_artifacts_profile.json000066400000000000000000000000221462617747000300640ustar00rootroot00000000000000{ "onHold": true }Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-agent_family_version.json000066400000000000000000000172271462617747000317570ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726699999999999, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "version": "9.9.9.9", "isVersionFromRSM": true, "isVMEnabledForRSMUpgrades": true, "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "version": "9.9.9.9", "isVersionFromRSM": true, "isVMEnabledForRSMUpgrades": true, "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", 
"version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ 
"https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ] } ] }Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-difference_in_required_features.json000066400000000000000000000247711462617747000341330ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205217, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "A_NON_EXISTING_FEATURE_USED_TO_PRODUCE_AN_ERROR" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": 
"MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbz06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": 
"https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ], "dependsOn": [ { "dependsOnExtension": [ { "extension": "...", "handler": "..." }, { "extension": "...", "handler": "..." } ], "dependencyLevel": 2, "name": "MCExt1" }, { "dependsOnExtension": [ { "extension": "...", "handler": "..." } ], "dependencyLevel": 1, "name": "MCExt2" } ] }, { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.11", "location": "https://umsasc25p0kjg0c1dg4b.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverlocation": "https://umsamfwlmfshvxx2lsjm.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "*** REDACTED ***" } ] } ] } Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-empty_depends_on.json000066400000000000000000000061021462617747000310750ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "2e7f8b5d-f637-4721-b757-cb190d49b4e9", "correlationId": "1bef4c48-044e-4225-8f42-1d1eac1eb158", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637693267431616449, "extensionGoalStatesSource": "FastTrack", "StatusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status?sv=2018-03-28&sr=b&sk=system-1&sig=U4KaLxlyYfgQ%2fie8RCwgMBSXa3E4vlW0ozPYOEHikoc%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-qphvx25", "vmName": "edpxmal5j1", "location": "CentralUSEUAP", "vmId": "058b176d-445b-4e75-bd97-4911511b7d96", "vmSize": "Standard_D2s_v3", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "Name": "Prod", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", 
"https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "Name": "Test", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo '09cd27e9-fbd6-48ad-be86-55f3783e0a23'\"}" } ], "dependsOn": [] } ] }Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-fabric-no_thumbprints.json000066400000000000000000000215441462617747000320470ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205299, "extensionGoalStatesSource": "Fabric", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": 
"https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbz06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", 
"seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ], "dependsOn": [ { "dependsOnExtension": [ { "extension": "...", "handler": "..." }, { "extension": "...", "handler": "..." } ], "dependencyLevel": 2, "name": "MCExt1" }, { "dependsOnExtension": [ { "extension": "...", "handler": "..." } ], "dependencyLevel": 1, "name": "MCExt2" } ] }, { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.11", "location": "https://umsasc25p0kjg0c1dg4b.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverlocation": "https://umsamfwlmfshvxx2lsjm.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ ] } ] } Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-invalid_blob_type.json000066400000000000000000000114371462617747000312350ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "2e7f8b5d-f637-4721-b757-cb190d49b4e9", "correlationId": "1bef4c48-044e-4225-8f42-1d1eac1eb158", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637693267431616449, "extensionGoalStatesSource": "FastTrack", "StatusUploadBlob": { "statusBlobType": "INVALID_BLOB_TYPE", "value": "https://dcrcqabsr1.blob.core.windows.net/$system/edpxmal5j1.058b176d-445b-4e75-bd97-4911511b7d96.status?sv=2018-03-28&sr=b&sk=system-1&sig=U4KaLxlyYfgQ%2fie8RCwgMBSXa3E4vlW0ozPYOEHikoc%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-qphvx25", "vmName": "edpxmal5j1", "location": "CentralUSEUAP", "vmId": "058b176d-445b-4e75-bd97-4911511b7d96", "vmSize": "Standard_D2s_v3", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "Name": "Prod", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "Name": "Test", "Version": "2.5.0.2", "Uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, 
"isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo '09cd27e9-fbd6-48ad-be86-55f3783e0a23'\"}" } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '737fa9f1-e9bf-4c3e-ab1f-9e03cd0b5b40'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f6a0a405-c028-4e68-bd77-5d491fbbd9cf'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo '338e316a-01bf-4513-8ae1-b603b09ad155'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ] } ] }Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-missing_cert.json000066400000000000000000000061441462617747000302350ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205299, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.11", "location": "https://umsasc25p0kjg0c1dg4b.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverlocation": 
"https://umsamfwlmfshvxx2lsjm.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "59A10F50FFE2A0408D3F03FE336C8FD5716CF25C", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpddesZQewdDBgegkxNzA1BgoJkgergres/Microsoft.OSTCExtensions.VMAccessForLinux==" } ] } ] } Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-no_manifests.json000066400000000000000000000053371462617747000302370ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "89d50bf1-fa55-4257-8af3-3db0c9f81ab4", "correlationId": "c143f8f0-a66b-4881-8c06-1efd278b0b02", "inSvdSeqNo": 978, "extensionsLastModifiedTickCount": 637829610574739741, "extensionGoalStatesSource": "FastTrack", "statusUploadBlob": { "statusBlobType": "PageBlob", "value": "https://md-ssd-xpdjf15s.blob.core.windows.net/$system/u-sqlwatcher.f338f67e.status?sv=2018-03-28&sr=b&sk=system-1&sig=88Y3NM%2b1aU%3d&se=9999-01-01T00%3a00%3a00Z&sp=rw" }, "inVMMetadata": { "subscriptionId": "8d3c2715-f063-40b8-9402-49784992ae8d", "resourceGroupName": "SYSTEMCENTERCURRENTBRANCH", "vmName": "ubuntu-sqlwatcher", "location": "centralus", "vmId": "f338f67e-5d06-4f13-892a-ff1b047ba5bf", "vmSize": "Standard_D2s_v3", "osType": "Linux", "vmImage": { "publisher": "Canonical", "offer": "UbuntuServer", "sku": "18.04-LTS", "version": "18.04.202005220" } }, "gaFamilies": [ { "name": "Prod" } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.WorkloadInsights.Test.Workload.LinuxConfigAgent", "version": "3.0", "state": "uninstall", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "isMultiConfig": false }, { "name": "Microsoft.Azure.Monitor.WorkloadInsights.Test.Workload.LinuxInstallerAgent", "version": "11.0", "state": "uninstall", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "isMultiConfig": false }, { "name": "Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension", "version": "0.2.127", "location": "https://umsakzkwhng2ft0jjptl.blob.core.windows.net/deeb2df6-c025-e6fb-b015-449ed6a676bc/deeb2df6-c025-e6fb-b015-449ed6a676bc_manifest.xml", "failoverLocation": "https://umsafmqfbv4hgrd1hqff.blob.core.windows.net/deeb2df6-c025-e6fb-b015-449ed6a676bc/deeb2df6-c025-e6fb-b015-449ed6a676bc_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 7, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"workloadConfig\": null}" } ] } ] }Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-no_status_upload_blob.json000066400000000000000000000051621462617747000321270ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706209999, "extensionGoalStatesSource": "FastTrack", "onHold": true, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", 
"resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] } ] } Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-out-of-sync.json000066400000000000000000000071721462617747000277340ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "AAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE", "correlationId": "EEEEEEEE-DDDD-CCCC-BBBB-AAAAAAAAAAAA", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657000000000, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": 
"https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] } ] } Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-parse_error.json000066400000000000000000000063101462617747000300650ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": THIS_IS_A_SYNTAX_ERROR, "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205217, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ 
"https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] } ] } vm_settings-requested_version_properties_false.json000066400000000000000000000172331462617747000346650ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726699999999999, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "version": "9.9.9.9", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "version": "9.9.9.9", "isVersionFromRSM": false, "isVMEnabledForRSMUpgrades": false, "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": 
"MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEFpB/HKM/7evRk+DBz754wUwDQYJKoZIhvcNAQEBBQAEggEADPJwniDeIUXzxNrZCloitFdscQ59Bz1dj9DLBREAiM8jmxM0LLicTJDUv272Qm/4ZQgdqpFYBFjGab/9MX+Ih2x47FkVY1woBkckMaC/QOFv84gbboeQCmJYZC/rZJdh8rCMS+CEPq3uH1PVrvtSdZ9uxnaJ+E4exTPPviIiLIPtqWafNlzdbBt8HZjYaVw+SSe+CGzD2pAQeNttq3Rt/6NjCzrjG8ufKwvRoqnrInMs4x6nnN5/xvobKIBSv4/726usfk8Ug+9Q6Benvfpmre2+1M5PnGTfq78cO3o6mI3cPoBUjp5M0iJjAMGeMt81tyHkimZrEZm6pLa4NQMOEjArBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECC5nVaiJaWt+gAhgeYvxUOYHXw==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": 
"{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ] } ] }Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings-unsupported_version.json000066400000000000000000000062671462617747000317120ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.116", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205217, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": "https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": "https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] } ] } Azure-WALinuxAgent-2b21de5/tests/data/hostgaplugin/vm_settings.json000066400000000000000000000232631462617747000255520ustar00rootroot00000000000000{ "hostGAPluginVersion": "1.0.8.133", "vmSettingsSchemaVersion": "0.0", "activityId": "a33f6f53-43d6-4625-b322-1a39651a00c9", "correlationId": "9a47a2a2-e740-4bfc-b11b-4f2f7cfe7d2e", "inSvdSeqNo": 1, "extensionsLastModifiedTickCount": 637726657706205299, "extensionGoalStatesSource": "FastTrack", "onHold": true, "statusUploadBlob": { "statusBlobType": "BlockBlob", "value": 
"https://dcrcl3a0xs.blob.core.windows.net/$system/edp0plkw2b.86f4ae0a-61f8-48ae-9199-40f402d56864.status?sv=2018-03-28&sr=b&sk=system-1&sig=KNWgC2%3d&se=9999-01-01T00%3a00%3a00Z&sp=w" }, "inVMMetadata": { "subscriptionId": "8e037ad4-618f-4466-8bc8-5099d41ac15b", "resourceGroupName": "rg-dc-86fjzhp", "vmName": "edp0plkw2b", "location": "CentralUSEUAP", "vmId": "86f4ae0a-61f8-48ae-9199-40f402d56864", "vmSize": "Standard_B2s", "osType": "Linux" }, "requiredFeatures": [ { "name": "MultipleExtensionsPerHandler" } ], "gaFamilies": [ { "name": "Prod", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Prod_uscentraleuap_manifest.xml" ] }, { "name": "Test", "uris": [ "https://zrdfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml", "https://ardfepirv2cdm03prdstr01a.blob.core.windows.net/7d89d439b79f4452950452399add2c90/Microsoft.OSTCLinuxAgent_Test_uscentraleuap_manifest.xml" ] } ], "extensionGoalStates": [ { "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent", "version": "1.9.1", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn09pr02a.blob.core.windows.net/a47f0806d764480a8d989d009c75007d/Microsoft.Azure.Monitor_AzureMonitorLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent==", "publicSettings": "{\"GCS_AUTO_CONFIG\":true}" } ] }, { "name": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent", "version": "2.15.112", "location": "https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "failoverlocation": "https://zrdfepirv2cbz06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml", "additionalLocations": ["https://zrdfepirv2cbn06prdstr01a.blob.core.windows.net/4ef06ad957494df49c807a5334f2b5d2/Microsoft.Azure.Security.Monitoring_AzureSecurityLinuxAgent_useast2euap_manifest.xml"], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent==", "publicSettings": "{\"enableGenevaUpload\":true}" } ] }, { "name": "Microsoft.Azure.Extensions.CustomScript", "version": "2.1.6", "location": 
"https://umsavwggj2v40kvqhc0w.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "failoverlocation": "https://umsafwzhkbm1rfrhl0ws.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml", "additionalLocations": [ "https://umsanh4b5rfz0q0p4pwm.blob.core.windows.net/5237dd14-0aad-f051-0fad-1e33e1b63091/5237dd14-0aad-f051-0fad-1e33e1b63091_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "publicSettings": "{\"commandToExecute\":\"echo 'cee174d4-4daa-4b07-9958-53b9649445c2'\"}" } ], "dependsOn": [ { "DependsOnExtension": [ { "handler": "Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent" } ], "dependencyLevel": 1 } ] }, { "name": "Microsoft.CPlat.Core.RunCommandHandlerLinux", "version": "1.2.0", "location": "https://umsavbvncrpzbnxmxzmr.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "failoverlocation": "https://umsajbjtqrb3zqjvgb2z.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml", "additionalLocations": [ "https://umsawqtlsshtn5v2nfgh.blob.core.windows.net/f4086d41-69f9-3103-78e0-8a2c7e789d0f/f4086d41-69f9-3103-78e0-8a2c7e789d0f_manifest.xml" ], "state": "enabled", "autoUpgrade": true, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": true, "settings": [ { "publicSettings": "{\"source\":{\"script\":\"echo '4abb1e88-f349-41f8-8442-247d9fdfcac5'\"}}", "seqNo": 0, "extensionName": "MCExt1", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'e865c9bc-a7b3-42c6-9a79-cfa98a1ee8b3'\"}}", "seqNo": 0, "extensionName": "MCExt2", "extensionState": "enabled" }, { "publicSettings": "{\"source\":{\"script\":\"echo 'f923e416-0340-485c-9243-8b84fb9930c6'\"}}", "seqNo": 0, "extensionName": "MCExt3", "extensionState": "enabled" } ], "dependsOn": [ { "dependsOnExtension": [ { "extension": "...", "handler": "..." }, { "extension": "...", "handler": "..." } ], "dependencyLevel": 2, "name": "MCExt1" }, { "dependsOnExtension": [ { "extension": "...", "handler": "..." 
} ], "dependencyLevel": 1, "name": "MCExt2" } ] }, { "name": "Microsoft.OSTCExtensions.VMAccessForLinux", "version": "1.5.11", "location": "https://umsasc25p0kjg0c1dg4b.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "failoverlocation": "https://umsamfwlmfshvxx2lsjm.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml", "additionalLocations": [ "https://umsah3cwjlctnmhsvzqv.blob.core.windows.net/2bbece4f-0283-d415-b034-cc0adc6997a1/2bbece4f-0283-d415-b034-cc0adc6997a1_manifest.xml" ], "state": "enabled", "autoUpgrade": false, "runAsStartupTask": false, "isJson": true, "useExactVersion": true, "settingsSeqNo": 0, "isMultiConfig": false, "settings": [ { "protectedSettingsCertThumbprint": "BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F", "protectedSettings": "MIIBsAYJKoZIhvcNAQcDoIIBoTCCAZ0CAQAxggFpddesZQewdDBgegkxNzA1BgoJkgergres/Microsoft.OSTCExtensions.VMAccessForLinux==" } ] } ] } Azure-WALinuxAgent-2b21de5/tests/data/imds/000077500000000000000000000000001462617747000205375ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/imds/unicode.json000066400000000000000000000020031462617747000230530ustar00rootroot00000000000000{ "compute": { "location": "wéstus", "name": "héalth", "offer": "UbuntuSérvér", "osType": "Linux", "placementGroupId": "", "platformFaultDomain": "0", "platformUpdateDomain": "0", "publisher": "Canonical", "resourceGroupName": "tésts", "sku": "16.04-LTS", "subscriptionId": "21b2dc34-bcé6-4é63-9449-d2a8d1c2339é", "tags": "", "version": "16.04.201805220", "vmId": "é7fdbfc4-2déb-4a4é-8615-éa6aaf50162é", "vmScaleSetName": "", "vmSize": "Standard_D2_V2", "zone": "" }, "network": { "interface": [ { "ipv4": { "ipAddress": [ { "privateIpAddress": "10.0.1.4", "publicIpAddress": "40.112.128.120" } ], "subnet": [ { "address": "10.0.1.0", "prefix": "24" } ] }, "ipv6": { "ipAddress": [] }, "macAddress": "000D3A3382E8" } ] } } Azure-WALinuxAgent-2b21de5/tests/data/imds/valid.json000066400000000000000000000017661462617747000225430ustar00rootroot00000000000000{ "compute": { "location": "westus", "name": "health", "offer": "UbuntuServer", "osType": "Linux", "placementGroupId": "", "platformFaultDomain": "0", "platformUpdateDomain": "0", "publisher": "Canonical", "resourceGroupName": "tests", "sku": "16.04-LTS", "subscriptionId": "21b2dc34-bce6-4e63-9449-d2a8d1c2339e", "tags": "", "version": "16.04.201805220", "vmId": "e7fdbfc4-2deb-4a4e-8615-ea6aaf50162e", "vmScaleSetName": "", "vmSize": "Standard_D2_V2", "zone": "" }, "network": { "interface": [ { "ipv4": { "ipAddress": [ { "privateIpAddress": "10.0.1.4", "publicIpAddress": "40.112.128.120" } ], "subnet": [ { "address": "10.0.1.0", "prefix": "24" } ] }, "ipv6": { "ipAddress": [] }, "macAddress": "000D3A3382E8" } ] } } Azure-WALinuxAgent-2b21de5/tests/data/init/000077500000000000000000000000001462617747000205465ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/init/azure-vmextensions.slice000066400000000000000000000001671462617747000254610ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Extensions DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes Azure-WALinuxAgent-2b21de5/tests/data/init/azure-walinuxagent-logcollector.slice000066400000000000000000000002711462617747000301070ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Agent Periodic Log Collector DefaultDependencies=no Before=slices.target [Slice] CPUAccounting=yes CPUQuota=5% 
MemoryAccounting=yes MemoryLimit=30MAzure-WALinuxAgent-2b21de5/tests/data/init/azure.slice000066400000000000000000000001471462617747000227170ustar00rootroot00000000000000[Unit] Description=Slice for Azure VM Agent and Extensions DefaultDependencies=no Before=slices.target Azure-WALinuxAgent-2b21de5/tests/data/init/walinuxagent.service000066400000000000000000000007721462617747000246440ustar00rootroot00000000000000# # NOTE: This is the service file used in current versions of the agent (>= 2.2.55) # [Unit] Description=Azure Linux Agent After=network-online.target cloud-init.service Wants=network-online.target sshd.service sshd-keygen.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always Slice=azure.slice CPUAccounting=yes MemoryAccounting=yes [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/tests/data/init/walinuxagent.service.previous000066400000000000000000000006771462617747000265170ustar00rootroot00000000000000# # NOTE: This is the service file used in older versions of the agent (<= 2.2.54) # [Unit] Description=Azure Linux Agent After=network-online.target cloud-init.service Wants=network-online.target sshd.service sshd-keygen.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/tests/data/init/walinuxagent.service_system-slice000066400000000000000000000010251462617747000273350ustar00rootroot00000000000000# # NOTE: # This file is hosted on the WALinuxAgent repository only for reference purposes. # Please refer to a recent image to find out the up-to-date systemd unit file. 
# [Unit] Description=Azure Linux Agent After=network-online.target cloud-init.service Wants=network-online.target sshd.service sshd-keygen.service ConditionFileIsExecutable=/usr/sbin/waagent ConditionPathExists=/etc/waagent.conf [Service] Type=simple ExecStart=/usr/bin/python3 -u /usr/sbin/waagent -daemon Restart=always [Install] WantedBy=multi-user.target Azure-WALinuxAgent-2b21de5/tests/data/metadata/000077500000000000000000000000001462617747000213635ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/metadata/certificates.json000066400000000000000000000002051462617747000247200ustar00rootroot00000000000000{ "certificates":[{ "name":"foo", "thumbprint":"bar", "certificateDataUri":"certificates_data" }] } Azure-WALinuxAgent-2b21de5/tests/data/metadata/certificates_data.json000066400000000000000000000112531462617747000257160ustar00rootroot00000000000000{"certificateData":"MIINswYJKoZIhvcNAQcDoIINpDCCDaACAQIxggEwMIIBLAIBAoAUvyL+x6GkZXog QNfsXRZAdD9lc7IwDQYJKoZIhvcNAQEBBQAEggEArhMPepD/RqwdPcHEVqvrdZid 72vXrOCuacRBhwlCGrNlg8oI+vbqmT6CSv6thDpet31ALUzsI4uQHq1EVfV1+pXy NlYD1CKhBCoJxs2fSPU4rc8fv0qs5JAjnbtW7lhnrqFrXYcyBYjpURKfa9qMYBmj NdijN+1T4E5qjxPr7zK5Dalp7Cgp9P2diH4Nax2nixotfek3MrEFBaiiegDd+7tE ux685GWYPqB5Fn4OsDkkYOdb0OE2qzLRrnlCIiBCt8VubWH3kMEmSCxBwSJupmQ8 sxCWk+sBPQ9gJSt2sIqfx/61F8Lpu6WzP+ZOnMLTUn2wLU/d1FN85HXmnQALzTCC DGUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQIbEcBfddWPv+AggxAAOAt/kCXiffe GeJG0P2K9Q18XZS6Rz7Xcz+Kp2PVgqHKRpPjjmB2ufsRO0pM4z/qkHTOdpfacB4h gz912D9U04hC8mt0fqGNTvRNAFVFLsmo7KXc/a8vfZNrGWEnYn7y1WfP52pqA/Ei SNFf0NVtMyqg5Gx+hZ/NpWAE5vcmRRdoYyWeg13lhlW96QUxf/W7vY/D5KpAGACI ok79/XI4eJkbq3Dps0oO/difNcvdkE74EU/GPuL68yR0CdzzafbLxzV+B43TBRgP jH1hCdRqaspjAaZL5LGfp1QUM8HZIKHuTze/+4dWzS1XR3/ix9q/2QFI7YCuXpuE un3AFYXE4QX/6kcPklZwh9FqjSie3I5HtC1vczqYVjqT4oHrs8ktkZ7oAzeXaXTF k6+JQNNa/IyJw24I1MR77q7HlHSSfhXX5cFjVCd/+SiA4HJQjJgeIuXZ+dXmSPdL 9xLbDbtppifFyNaXdlSzcsvepKy0WLF49RmbL7Bnd46ce/gdQ6Midwi2MTnUtapu tHmu/iJtaUpwXXC0B93PHfAk7Y3SgeY4tl/gKzn9/x5SPAcHiNRtOsNBU8ZThzos Wh41xMLZavmX8Yfm/XWtl4eU6xfhcRAbJQx7E1ymGEt7xGqyPV7hjqhoB9i3oR5N itxHgf1+jw/cr7hob+Trd1hFqZO6ePMyWpqUg97G2ThJvWx6cv+KRtTlVA6/r/UH gRGBArJKBlLpXO6dAHFztT3Y6DFThrus4RItcfA8rltfQcRm8d0nPb4lCa5kRbCx iudq3djWtTIe64sfk8jsc6ahWYSovM+NmhbpxEUbZVWLVEcHAYOeMbKgXSu5sxNO JZNeFdzZqDRRY9fGjYNS7DdNOmrMmWKH+KXuMCItpNZsZS/3W7QxAo3ugYLdUylU Zg8H/BjUGZCGn1rEBAuQX78m0SZ1xHlgHSwJIOmxOJUDHLPHtThfbELY9ec14yi5 so1aQwhhfhPvF+xuXBrVeTAfhFNYkf2uxcEp7+tgFAc5W0QfT9SBn5vSvIxv+dT4 7B2Pg1l/zjdsM74g58lmRJeDoz4psAq+Uk7n3ImBhIku9qX632Q1hanjC8D4xM4W sI/W0ADCuAbY7LmwMpAMdrGg//SJUnBftlom7C9VA3EVf8Eo+OZH9hze+gIgUq+E iEUL5M4vOHK2ttsYrSkAt8MZzjQiTlDr1yzcg8fDIrqEAi5arjTPz0n2s0NFptNW lRD+Xz6pCXrnRgR8YSWpxvq3EWSJbZkSEk/eOmah22sFnnBZpDqn9+UArAznXrRi nYK9w38aMGPKM39ymG8kcbY7jmDZlRgGs2ab0Fdj1jl3CRo5IUatkOJwCEMd/tkB eXLQ8hspJhpFnVNReX0oithVZir+j36epk9Yn8d1l+YlKmuynjunKl9fhmoq5Q6i DFzdYpqBV+x9nVhnmPfGyrOkXvGL0X6vmXAEif/4JoOW4IZpyXjgn+VoCJUoae5J Djl45Bcc2Phrn4HW4Gg/+pIwTFqqZZ2jFrznNdgeIxTGjBrVsyJUeO3BHI0mVLaq jtjhTshYCI7mXOis9W3ic0RwE8rgdDXOYKHhLVw9c4094P/43utSVXE7UzbEhhLE Ngb4H5UGrQmPTNbq40tMUMUCej3zIKuVOvamzeE0IwLhkjNrvKhCG1EUhX4uoJKu DQ++3KVIVeYSv3+78Jfw9F3usAXxX1ICU74/La5DUNjU7DVodLDvCAy5y1jxP3Ic If6m7aBYVjFSQAcD8PZPeIEl9W4ZnbwyBfSDd11P2a8JcZ7N99GiiH3yS1QgJnAO g9XAgjT4Gcn7k4lHPHLULgijfiDSvt94Ga4/hse0F0akeZslVN/bygyib7x7Lzmq JkepRianrvKHbatuxvcajt/d+dxCnr32Q1qCEc5fcgDsjvviRL2tKR0qhuYjn1zR Vk/fRtYOmlaGBVzUXcjLRAg3gC9+Gy8KvXIDrnHxD+9Ob+DUP9fgbKqMeOzKcCK8 NSfSQ+tQjBYD5Ku4zAPUQJoRGgx43vXzcl2Z2i3E2otpoH82Kx8S9WlVEUlTtBjQ 
QIGM5aR0QUNt8z34t2KWRA8SpP54VzBmEPdwLnzna+PkrGKsKiHVn4K+HfjDp1uW xyO8VjrolAOYosTPXMpNp2u/FoFxaAPTa/TvmKc0kQ3ED9/sGLS2twDnEccvHP+9 zzrnzzN3T2CWuXveDpuyuAty3EoAid1nuC86WakSaAZoa8H2QoRgsrkkBCq+K/yl 4FO9wuP+ksZoVq3mEDQ9qv6H4JJEWurfkws3OqrA5gENcLmSUkZie4oqAxeOD4Hh Zx4ckG5egQYr0PnOd2r7ZbIizv3MKT4RBrfOzrE6cvm9bJEzNWXdDyIxZ/kuoLA6 zX7gGLdGhg7dqzKqnGtopLAsyM1b/utRtWxOTGO9K9lRxyX82oCVT9Yw0DwwA+cH Gutg1w7JHrIAYEtY0ezHgxhqMGuuTyJMX9Vr0D+9DdMeBK7hVOeSnxkaQ0f9HvF6 0XI/2OTIoBSCBpUXjpgsYt7m7n2rFJGJmtqgLAosCAkacHnHLwX0EnzBw3sdDU6Q jFXUWIDd5xUsNkFDCbspLMFs22hjNI6f/GREwd23Q4ujF8pUIcxcfbs2myjbK45s tsn/jrkxmKRgwCIeN/H7CM+4GXSkEGLWbiGCxWzWt9wW1F4M7NW9nho3D1Pi2LBL 1ByTmjfo/9u9haWrp53enDLJJbcaslfe+zvo3J70Nnzu3m3oJ3dmUxgJIstG10g3 lhpUm1ynvx04IFkYJ3kr/QHG/xGS+yh/pMZlwcUSpjEgYFmjFHU4A1Ng4LGI4lnw 5wisay4J884xmDgGfK0sdVQyW5rExIg63yYXp2GskRdDdwvWlFUzPzGgCNXQU96A ljZfjs2u4IiVCC3uVsNbGqCeSdAl9HC5xKuPNbw5yTxPkeRL1ouSdkBy7rvdFaFf dMPw6sBRNW8ZFInlgOncR3+xT/rZxru87LCq+3hRN3kw3hvFldrW2QzZSksO759b pJEP+4fxuG96Wq25fRmzHzE0bdJ+2qF3fp/hy4oRi+eVPa0vHdtkymE4OUFWftb6 +P++JVOzZ4ZxYA8zyUoJb0YCaxL+Jp/QqiUiH8WZVmYZmswqR48sUUKr7TIvpNbY 6jEH6F7KiZCoWfKH12tUC69iRYx3UT/4Bmsgi3S4yUxfieYRMIwihtpP4i0O+OjB /DPbb13qj8ZSfXJ+jmF2SRFfFG+2T7NJqm09JvT9UcslVd+vpUySNe9UAlpcvNGZ 2+j180ZU7YAgpwdVwdvqiJxkeVtAsIeqAvIXMFm1PDe7FJB0BiSVZdihB6cjnKBI dv7Lc1tI2sQe7QSfk+gtionLrEnto+aXF5uVM5LMKi3gLElz7oXEIhn54OeEciB1 cEmyX3Kb4HMRDMHyJxqJXwxm88RgC6RekoPvstu+AfX/NgSpRj5beaj9XkweJT3H rKWhkjq4Ghsn1LoodxluMMHd61m47JyoqIP9PBKoW+Na0VUKIVHw9e9YeW0nY1Zi 5qFA/pHPAt9AbEilRay6NEm8P7TTlNo216amc8byPXanoNrqBYZQHhZ93A4yl6jy RdpYskMivT+Sh1nhZAioKqqTZ3HiFR8hFGspAt5gJc4WLYevmxSicGa6AMyhrkvG rvOSdjY6JY/NkxtcgeycBX5MLF7uDbhUeqittvmlcrVN6+V+2HIbCCrvtow9pcX9 EkaaNttj5M0RzjQxogCG+S5TkhCy04YvKIkaGJFi8xO3icdlxgOrKD8lhtbf4UpR cDuytl70JD95mSUWL53UYjeRf9OsLRJMHQOpS02japkMwCb/ngMCQuUXA8hGkBZL Xw7RwwPuM1Lx8edMXn5C0E8UK5e0QmI/dVIl2aglXk2oBMBJbnyrbfUPm462SG6u ke4gQKFmVy2rKICqSkh2DMr0NzeYEUjZ6KbmQcV7sKiFxQ0/ROk8eqkYYxGWUWJv ylPF1OTLH0AIbGlFPLQO4lMPh05yznZTac4tmowADSHY9RCxad1BjBeine2pj48D u36OnnuQIsedxt5YC+h1bs+mIvwMVsnMLidse38M/RayCDitEBvL0KeG3vWYzaAL h0FCZGOW0ilVk8tTF5+XWtsQEp1PpclvkcBMkU3DtBUnlmPSKNfJT0iRr2T0sVW1 h+249Wj0Bw=="}Azure-WALinuxAgent-2b21de5/tests/data/metadata/ext_handler_pkgs.json000066400000000000000000000002701462617747000255760ustar00rootroot00000000000000{ "versions": [{ "version":"1.3.0.0", "uris":[{ "uri":"http://localhost/foo1" },{ "uri":"http://localhost/foo2" }] }] } Azure-WALinuxAgent-2b21de5/tests/data/metadata/ext_handlers.json000066400000000000000000000006611462617747000247410ustar00rootroot00000000000000[{ "name":"foo", "properties":{ "version":"1.3.0.0", "upgradePolicy": "manual", "state": "enabled", "extensions":[{ "name":"baz", "sequenceNumber":0, "publicSettings":{ "commandToExecute": "echo 123", "uris":[] } }] }, "versionUris":[{ "uri":"http://ext_handler_pkgs/versionUri" }] }] Azure-WALinuxAgent-2b21de5/tests/data/metadata/ext_handlers_no_ext.json000066400000000000000000000000031462617747000263030ustar00rootroot00000000000000[] Azure-WALinuxAgent-2b21de5/tests/data/metadata/identity.json000066400000000000000000000000641462617747000241070ustar00rootroot00000000000000{ "vmName":"foo", "subscriptionId":"bar" } Azure-WALinuxAgent-2b21de5/tests/data/metadata/trans_cert000066400000000000000000000021271462617747000234540ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDBzCCAe+gAwIBAgIJANujJuVt5eC8MA0GCSqGSIb3DQEBCwUAMBkxFzAVBgNV BAMMDkxpbnV4VHJhbnNwb3J0MCAXDTE0MTAyNDA3MjgwN1oYDzIxMDQwNzEyMDcy ODA3WjAZMRcwFQYDVQQDDA5MaW51eFRyYW5zcG9ydDCCASIwDQYJKoZIhvcNAQEB 
BQADggEPADCCAQoCggEBANPcJAkd6V5NeogSKjIeTXOWC5xzKTyuJPt4YZMVSosU 0lI6a0wHp+g2fP22zrVswW+QJz6AVWojIEqLQup3WyCXZTv8RUblHnIjkvX/+J/G aLmz0G5JzZIpELL2C8IfQLH2IiPlK9LOQH00W74WFcK3QqcJ6Kw8GcVaeSXT1r7X QcGMqEjcWJkpKLoMJv3LMufE+JMdbXDUGY+Ps7Zicu8KXvBPaKVsc6H2jrqBS8et jXbzLyrezTUDz45rmyRJzCO5Sk2pohuYg73wUykAUPVxd7L8WnSyqz1v4zrObqnw BAyor67JR/hjTBfjFOvd8qFGonfiv2Vnz9XsYFTZsXECAwEAAaNQME4wHQYDVR0O BBYEFL8i/sehpGV6IEDX7F0WQHQ/ZXOyMB8GA1UdIwQYMBaAFL8i/sehpGV6IEDX 7F0WQHQ/ZXOyMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAMPLrimT Gptu5pLRHPT8OFRN+skNSkepYaUaJuq6cSKxLumSYkD8++rohu+1+a7t1YNjjNSJ 8ohRAynRJ7aRqwBmyX2OPLRpOfyRZwR0rcFfAMORm/jOE6WBdqgYD2L2b+tZplGt /QqgQzebaekXh/032FK4c74Zg5r3R3tfNSUMG6nLauWzYHbQ5SCdkuQwV0ehGqh5 VF1AOdmz4CC2237BNznDFQhkeU0LrqqAoE/hv5ih7klJKZdS88rOYEnVJsFFJb0g qaycXjOm5Khgl4hKrd+DBD/qj4IVVzsmdpFli72k6WLBHGOXusUGo/3isci2iAIt DsfY6XGSEIhZnA4= -----END CERTIFICATE----- Azure-WALinuxAgent-2b21de5/tests/data/metadata/trans_prv000066400000000000000000000032501462617747000233240ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDT3CQJHeleTXqI EioyHk1zlguccyk8riT7eGGTFUqLFNJSOmtMB6foNnz9ts61bMFvkCc+gFVqIyBK i0Lqd1sgl2U7/EVG5R5yI5L1//ifxmi5s9BuSc2SKRCy9gvCH0Cx9iIj5SvSzkB9 NFu+FhXCt0KnCeisPBnFWnkl09a+10HBjKhI3FiZKSi6DCb9yzLnxPiTHW1w1BmP j7O2YnLvCl7wT2ilbHOh9o66gUvHrY128y8q3s01A8+Oa5skScwjuUpNqaIbmIO9 8FMpAFD1cXey/Fp0sqs9b+M6zm6p8AQMqK+uyUf4Y0wX4xTr3fKhRqJ34r9lZ8/V 7GBU2bFxAgMBAAECggEBAM4hsfog3VAAyIieS+npq+gbhH6bWfMNaTQ3g5CNNbMu 9hhFeOJHzKnWYjSlamgBQhAfTN+2E+Up+iAtcVUZ/lMumrQLlwgMo1vgmvu5Kxmh /YE5oEG+k0JzrCjD1trwd4zvc3ZDYyk/vmVTzTOc311N248UyArUiyqHBbq1a4rP tJhCLn2c4S7flXGF0MDVGZyV9V7J8N8leq/dRGMB027Li21T+B4mPHXa6b8tpRPL 4vc8sHoUJDa2/+mFDJ2XbZfmlgd3MmIPlRn1VWoW7mxgT/AObsPl7LuQx7+t80Wx hIMjuKUHRACQSLwHxJ3SQRFWp4xbztnXSRXYuHTscLUCgYEA//Uu0qIm/FgC45yG nXtoax4+7UXhxrsWDEkbtL6RQ0TSTiwaaI6RSQcjrKDVSo/xo4ZySTYcRgp5GKlI CrWyNM+UnIzTNbZOtvSIAfjxYxMsq1vwpTlOB5/g+cMukeGg39yUlrjVNoFpv4i6 9t4yYuEaF4Vww0FDd2nNKhhW648CgYEA0+UYH6TKu03zDXqFpwf4DP2VoSo8OgfQ eN93lpFNyjrfzvxDZkGF+7M/ebyYuI6hFplVMu6BpgpFP7UVJpW0Hn/sXkTq7F1Q rTJTtkTp2+uxQVP/PzSOqK0Twi5ifkfoEOkPkNNtTiXzwCW6Qmmcvln2u893pyR5 gqo5BHR7Ev8CgYAb7bXpN9ZHLJdMHLU3k9Kl9YvqOfjTxXA3cPa79xtEmsrTys4q 4HuL22KSII6Fb0VvkWkBAg19uwDRpw78VC0YxBm0J02Yi8b1AaOhi3dTVzFFlWeh r6oK/PAAcMKxGkyCgMAZ3hstsltGkfXMoBwhW+yL6nyOYZ2p9vpzAGrjkwKBgQDF 0huzbyXVt/AxpTEhv07U0enfjI6tnp4COp5q8zyskEph8yD5VjK/yZh5DpmFs6Kw dnYUFpbzbKM51tToMNr3nnYNjEnGYVfwWgvNHok1x9S0KLcjSu3ki7DmmGdbfcYq A2uEyd5CFyx5Nr+tQOwUyeiPbiFG6caHNmQExLoiAQKBgFPy9H8///xsadYmZ18k r77R2CvU7ArxlLfp9dr19aGYKvHvnpsY6EuChkWfy8Xjqn3ogzgrHz/rn3mlGUpK vbtwtsknAHtTbotXJwfaBZv2RGgGRr3DzNo6ll2Aez0lNblZFXq132h7+y5iLvar 4euORaD/fuM4UPlR5mN+bypU -----END PRIVATE KEY----- Azure-WALinuxAgent-2b21de5/tests/data/metadata/vmagent_manifest1.json000066400000000000000000000006541462617747000256730ustar00rootroot00000000000000{ "versions": [ { "version": "2.2.8", "uris": [ { "uri": "https: //notused.com/ga/WALinuxAgent-2.2.8.zip" } ] }, { "version": "2.2.9", "uris": [ { "uri": "https: //notused.com/ga/WALinuxAgent-2.2.9.zip" } ] } ] }Azure-WALinuxAgent-2b21de5/tests/data/metadata/vmagent_manifest2.json000066400000000000000000000006541462617747000256740ustar00rootroot00000000000000{ "versions": [ { "version": "2.2.8", "uris": [ { "uri": "https: //notused.com/ga/WALinuxAgent-2.2.8.zip" } ] }, { "version": "2.2.9", "uris": [ { "uri": "https: //notused.com/ga/WALinuxAgent-2.2.9.zip" } ] } ] 
}Azure-WALinuxAgent-2b21de5/tests/data/metadata/vmagent_manifests.json000066400000000000000000000002601462617747000257660ustar00rootroot00000000000000{ "versionsManifestUris" : [ { "uri" : "https://notused.com/vmagent_manifest1.json" }, { "uri" : "https://notused.com/vmagent_manifest2.json" } ] } Azure-WALinuxAgent-2b21de5/tests/data/metadata/vmagent_manifests_invalid1.json000066400000000000000000000003121462617747000275530ustar00rootroot00000000000000{ "notTheRightKey": [ { "uri": "https://notused.com/vmagent_manifest1.json" }, { "uri": "https://notused.com/vmagent_manifest2.json" } ] }Azure-WALinuxAgent-2b21de5/tests/data/metadata/vmagent_manifests_invalid2.json000066400000000000000000000003121462617747000275540ustar00rootroot00000000000000{ "notTheRightKey": [ { "foo": "https://notused.com/vmagent_manifest1.json" }, { "bar": "https://notused.com/vmagent_manifest2.json" } ] }Azure-WALinuxAgent-2b21de5/tests/data/ovf-env-2.xml000066400000000000000000000037141462617747000220510ustar00rootroot00000000000000 1.0 LinuxProvisioningConfiguration HostName UserName UserPassword false EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/authorized_keys ssh-rsa AAAANOTAREALKEY== foo@bar.local EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/id_rsa CustomData 1.0 kms.core.windows.net true true true false Azure-WALinuxAgent-2b21de5/tests/data/ovf-env-3.xml000066400000000000000000000037101462617747000220460ustar00rootroot00000000000000 1.0 LinuxProvisioningConfiguration HostName UserName UserPassword false EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/authorized_keys ssh-rsa AAAANOTAREALKEY== foo@bar.local EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/id_rsa CustomData 1.0 kms.core.windows.net true true false Azure-WALinuxAgent-2b21de5/tests/data/ovf-env-4.xml000066400000000000000000000037201462617747000220500ustar00rootroot00000000000000 1.0 LinuxProvisioningConfiguration HostName UserName UserPassword false EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/authorized_keys ssh-rsa AAAANOTAREALKEY== foo@bar.local EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/id_rsa CustomData 1.0 kms.core.windows.net bad data true true false Azure-WALinuxAgent-2b21de5/tests/data/ovf-env.xml000066400000000000000000000037151462617747000217130ustar00rootroot00000000000000 1.0 LinuxProvisioningConfiguration HostName UserName UserPassword false EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/authorized_keys ssh-rsa AAAANOTAREALKEY== foo@bar.local EB0C0AB4B2D5FC35F2F0658D19F44C8283E2DD62 $HOME/UserName/.ssh/id_rsa CustomData 1.0 kms.core.windows.net false true true false Azure-WALinuxAgent-2b21de5/tests/data/safe_deploy.json000066400000000000000000000010161462617747000227660ustar00rootroot00000000000000{ "blacklisted" : [ "^1.2.3$", "^1.3(?:\\.\\d+)*$" ], "families" : { "ubuntu-x64": { "versions": [ "^Ubuntu,(1[4-9]|2[0-9])\\.\\d+,.*$" ], "require_64bit": true, "partition": 85 }, "fedora-x64": { "versions": [ "^Oracle[^,]*,([7-9]|[1-9][0-9])\\.\\d+,.*$", "^Red\\sHat[^,]*,([7-9]|[1-9][0-9])\\.\\d+,.*$" ], "partition": 20 } } }Azure-WALinuxAgent-2b21de5/tests/data/test_waagent.conf000066400000000000000000000071421462617747000231430ustar00rootroot00000000000000# # Microsoft Azure Linux Agent Configuration # # Key / value handling test entries =Value0 FauxKey1= Value1 FauxKey2=Value2 Value2 FauxKey3=delalloc,rw,noatime,nobarrier,users,mode=777 # Enable extension handling Extensions.Enabled=y # Specify provisioning agent. 
Provisioning.Agent=auto # Password authentication for root account will be unavailable. Provisioning.DeleteRootPassword=y # Generate fresh host key pair. Provisioning.RegenerateSshHostKeyPair=y # Supported values are "rsa", "dsa", "ecdsa", "ed25519", and "auto". # The "auto" option is supported on OpenSSH 5.9 (2011) and later. Provisioning.SshHostKeyPairType=rsa # An EOL comment that should be ignored # Monitor host name changes and publish changes via DHCP requests. Provisioning.MonitorHostName=y # Decode CustomData from Base64. Provisioning.DecodeCustomData=n#Another EOL comment that should be ignored # Execute CustomData after provisioning. Provisioning.ExecuteCustomData=n # Algorithm used by crypt when generating password hash. #Provisioning.PasswordCryptId=6 # Length of random salt used when generating password hash. #Provisioning.PasswordCryptSaltLength=10 # Allow reset password of sys user Provisioning.AllowResetSysUser=n # Format if unformatted. If 'n', resource disk will not be mounted. ResourceDisk.Format=y # File system on the resource disk # Typically ext3 or ext4. FreeBSD images should use 'ufs2' here. ResourceDisk.Filesystem=ext4 # Mount point for the resource disk ResourceDisk.MountPoint=/mnt/resource # Create and use swapfile on resource disk. ResourceDisk.EnableSwap=n # Use encrypted swap ResourceDisk.EnableSwapEncryption=n # Size of the swapfile. ResourceDisk.SwapSizeMB=0 # Comma-separated list of mount options. See man(8) for valid options. ResourceDisk.MountOptions=None # Enable verbose logging (y|n) Logs.Verbose=n # Enable periodic log collection, default is y Logs.Collect=y # How frequently to collect logs, default is each hour Logs.CollectPeriod=3600 # Is FIPS enabled OS.EnableFIPS=y#Another EOL comment that should be ignored # Root device timeout in seconds. OS.RootDeviceScsiTimeout=300 # If "None", the system default version is used. OS.OpensslPath=None # Set the SSH ClientAliveInterval OS.SshClientAliveInterval=42#Yet another EOL comment with a '#' that should be ignored # Set the path to SSH keys and configuration files OS.SshDir=/notareal/path # If set, agent will use proxy server to access internet #HttpProxy.Host=None #HttpProxy.Port=None # Detect Scvmm environment, default is n # DetectScvmmEnv=n # # Lib.Dir=/var/lib/waagent # # DVD.MountPoint=/mnt/cdrom/secure # # Pid.File=/var/run/waagent.pid # # Extension.LogDir=/var/log/azure # # OS.HomeDir=/home # Enable RDMA management and set up, should only be used in HPC images # OS.EnableRDMA=n # OS.UpdateRdmaDriver=n # OS.CheckRdmaDriver=n # Enable or disable goal state processing auto-update, default is enabled. # Deprecated now but keep it for backward compatibility # AutoUpdate.Enabled=y # Enable or disable goal state processing auto-update, default is enabled # AutoUpdate.UpdateToLatestVersion=y # Determine the update family, this should not be changed # AutoUpdate.GAFamily=Prod # Determine if the overprovisioning feature is enabled. If yes, hold extension # handling until inVMArtifactsProfile.OnHold is false. # Default is enabled # EnableOverProvisioning=y # Allow fallback to HTTP if HTTPS is unavailable # Note: Allowing HTTP (vs. 
HTTPS) may cause security risks # OS.AllowHTTP=n # Add firewall rules to protect access to Azure host node services # Note: # - The default is false to protect the state of existing VMs OS.EnableFirewall=n Azure-WALinuxAgent-2b21de5/tests/data/wire/000077500000000000000000000000001462617747000205515ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/wire/certs-2.xml000066400000000000000000000123351462617747000225560ustar00rootroot00000000000000 2012-11-30 5 Pkcs7BlobWithPfxContents MIIOgwYJKoZIhvcNAQcDoIIOdDCCDnACAQIxggEwMIIBLAIBAoAUiF8ZYMs9mMa8 QOEMxDaIhGza+0IwDQYJKoZIhvcNAQEBBQAEggEAQW7GyeRVEhHSU1/dzV0IndH0 rDQk+27MvlsWTcpNcgGFtfRYxu5bzmp0+DoimX3pRBlSFOpMJ34jpg4xs78EsSWH FRhCf3EGuEUBHo6yR8FhXDTuS7kZ0UmquiCI2/r8j8gbaGBNeP8IRizcAYrPMA5S E8l1uCrw7DHuLscbVni/7UglGaTfFS3BqS5jYbiRt2Qh3p+JPUfm51IG3WCIw/WS 2QHebmHxvMFmAp8AiBWSQJizQBEJ1lIfhhBMN4A7NadMWAe6T2DRclvdrQhJX32k amOiogbW4HJsL6Hphn7Frrw3CENOdWMAvgQBvZ3EjAXgsJuhBA1VIrwofzlDljCC DTUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQIxcvw9qx4y0qAgg0QrINXpC23BWT2 Fb9N8YS3Be9eO3fF8KNdM6qGf0kKR16l/PWyP2L+pZxCcCPk83d070qPdnJK9qpJ 6S1hI80Y0oQnY9VBFrdfkc8fGZHXqm5jNS9G32v/AxYpJJC/qrAQnWuOdLtOZaGL 94GEh3XRagvz1wifv8SRI8B1MzxrpCimeMxHkL3zvJFg9FjLGdrak868feqhr6Nb pqH9zL7bMq8YP788qTRELUnL72aDzGAM7HEj7V4yu2uD3i3Ryz3bqWaj9IF38Sa0 6rACBkiNfZBPgExoMUm2GNVyx8hTis2XKRgz4NLh29bBkKrArK9sYDncE9ocwrrX AQ99yn03Xv6TH8bRp0cSj4jzBXc5RFsUQG/LxzJVMjvnkDbwNE41DtFiYz5QVcv1 cMpTH16YfzSL34a479eNq/4+JAs/zcb2wjBskJipMUU4hNx5fhthvfKwDOQbLTqN HcP23iPQIhjdUXf6gpu5RGu4JZ0dAMHMHFKvNL6TNejwx/H6KAPp6rCRsYi6QhAb 42SXdZmhAyQsFpGD9U5ieJApqeCHfj9Xhld61GqLJA9+WLVhDPADjqHoAVvrOkKH OtPegId/lWnCB7p551klAjiEA2/DKxFBIAEhqZpiLl+juZfMXovkdmGxMP4gvNNF gbS2k5A0IJ8q51gZcH1F56smdAmi5kvhPnFdy/9gqeI/F11F1SkbPVLImP0mmrFi zQD5JGfEu1psUYvhpOdaYDkmAK5qU5xHSljqZFz5hXNt4ebvSlurHAhunJb2ln3g AJUHwtZnVBrtYMB0w6fdwYqMxXi4vLeqUiHtIQtbOq32zlSryNPQqG9H0iP9l/G1 t7oUfr9woI/B0kduaY9jd5Qtkqs1DoyfNMSaPNohUK/CWOTD51qOadzSvK0hJ+At 033PFfv9ilaX6GmzHdEVEanrn9a+BoBCnGnuysHk/8gdswj9OzeCemyIFJD7iObN rNex3SCf3ucnAejJOA0awaLx88O1XTteUjcFn26EUji6DRK+8JJiN2lXSyQokNeY ox6Z4hFQDmw/Q0k/iJqe9/Dq4zA0l3Krkpra0DZoWh5kzYUA0g5+Yg6GmRNRa8YG tuuD6qK1SBEzmCYff6ivjgsXV5+vFBSjEpx2dPEaKdYxtHMOjkttuTi1mr+19dVf hSltbzfISbV9HafX76dhwZJ0QwsUx+aOW6OrnK8zoQc5AFOXpe9BrrOuEX01qrM0 KX5tS8Zx5HqDLievjir194oi3r+nAiG14kYlGmOTHshu7keGCgJmzJ0iVG/i+TnV ZSLyd8OqV1F6MET1ijgR3OPL3kt81Zy9lATWk/DgKbGBkkKAnXO2HUw9U34JFyEy vEc81qeHci8sT5QKSFHiP3r8EcK8rT5k9CHpnbFmg7VWSMVD0/wRB/C4BiIw357a xyJ/q1NNvOZVAyYzIzf9TjwREtyeHEo5kS6hyWSn7fbFf3sNGO2I30veWOvE6kFA HMtF3NplOrTYcM7fAK5zJCBK20oU645TxI8GsICMog7IFidFMdRn4MaXpwAjEZO4 44m2M+4XyeRCAZhp1Fu4mDiHGqgd44mKtwvLACVF4ygWZnACDpI17X88wMnwL4uU vgehLZdAE89gvukSCsET1inVBnn/hVenCRbbZ++IGv2XoYvRfeezfOoNUcJXyawQ JFqN0CRB5pliuCesTO2urn4HSwGGoeBd507pGWZmOAjbNjGswlJJXF0NFnNW/zWw UFYy+BI9axuhWTSnCXbNbngdNQKHznKe1Lwit6AI3U9jS33pM3W+pwUAQegVdtpG XT01YgiMCBX+b8B/xcWTww0JbeUwKXudzKsPhQmaA0lubAo04JACMfON8jSZCeRV TyIzgacxGU6YbEKH4PhYTGl9srcWIT9iGSYD53V7Kyvjumd0Y3Qc3JLnuWZT6Oe3 uJ4xz9jJtoaTDvPJQNK3igscjZnWZSP8XMJo1/f7vbvD57pPt1Hqdirp1EBQNshk iX9CUh4fuGFFeHf6MtGxPofbXmvA2GYcFsOez4/2eOTEmo6H3P4Hrya97XHS0dmD zFSAjzAlacTrn1uuxtxFTikdOwvdmQJJEfyYWCB1lqWOZi97+7nzqyXMLvMgmwug ZF/xHFMhFTR8Wn7puuwf36JpPQiM4oQ/Lp66zkS4UlKrVsmSXIXudLMg8SQ5WqK8 DjevEZwsHHaMtfDsnCAhAdRc2jCpyHKKnmhCDdkcdJJEymWKILUJI5PJ3XtiMHnR Sa35OOICS0lTq4VwhUdkGwGjRoY1GsriPHd6LOt1aom14yJros1h7ta604hSCn4k zj9p7wY9gfgkXWXNfmarrZ9NNwlHxzgSva+jbJcLmE4GMX5OFHHGlRj/9S1xC2Wf MY9orzlooGM74NtmRi4qNkFj3dQCde8XRR4wh2IvPUCsr4j+XaoCoc3R5Rn/yNJK 
zIkccJ2K14u9X/A0BLXHn5Gnd0tBYcVOqP6dQlW9UWdJC/Xooh7+CVU5cZIxuF/s Vvg+Xwiv3XqekJRu3cMllJDp5rwe5EWZSmnoAiGKjouKAIszlevaRiD/wT6Zra3c Wn/1U/sGop6zRscHR7pgI99NSogzpVGThUs+ez7otDBIdDbLpMjktahgWoi1Vqhc fNZXjA6ob4zTWY/16Ys0YWxHO+MtyWTMP1dnsqePDfYXGUHe8yGxylbcjfrsVYta 4H6eYR86eU3eXB+MpS/iA4jBq4QYWR9QUkd6FDfmRGgWlMXhisPv6Pfnj384NzEV Emeg7tW8wzWR64EON9iGeGYYa2BBl2FVaayMEoUhthhFcDM1r3/Mox5xF0qnlys4 goWkMzqbzA2t97bC0KDGzkcHT4wMeiJBLDZ7S2J2nDAEhcTLY0P2zvOB4879pEWx Bd15AyG1DvNssA5ooaDzKi/Li6NgDuMJ8W7+tmsBwDvwuf2N3koqBeXfKhR4rTqu Wg1k9fX3+8DzDf0EjtDZJdfWZAynONi1PhZGbNbaMKsQ+6TflkCACInRdOADR5GM rL7JtrgF1a9n0HD9vk2WGZqKI71tfS8zODkOZDD8aAusD2DOSmVZl48HX/t4i4Wc 3dgi/gkCMrfK3wOujb8tL4zjnlVkM7kzKk0MgHuA1w81zFjeMFvigHes4IWhQVcz ek3l4bGifI2kzU7bGIi5e/019ppJzGsVcrOE/3z4GS0DJVk6fy7MEMIFx0LhJPlL T+9HMH85sSYb97PTiMWpfBvNw3FSC7QQT9FC3L8d/XtMY3NvZoc7Fz7cSGaj7NXG 1OgVnAzMunPa3QaduoxMF9346s+4a+FrpRxL/3bb4skojjmmLqP4dsbD1uz0fP9y xSifnTnrtjumYWMVi+pEb5kR0sTHl0XS7qKRi3SEfv28uh72KdvcufonIA5rnEb5 +yqAZiqW2OxVsRoVLVODPswP4VIDiun2kCnfkQygPzxlZUeDZur0mmZ3vwC81C1Q dZcjlukZcqUaxybUloUilqfNeby+2Uig0krLh2+AM4EqR63LeZ/tk+zCitHeRBW0 wl3Bd7ShBFg6kN5tCJlHf/G6suIJVr+A9BXfwekO9+//CutKakCwmJTUiNWbQbtN q3aNCnomyD3WjvUbitVO0CWYjZrmMLIsPtzyLQydpT7tjXpHgvwm5GYWdUGnNs4y NbA262sUl7Ku/GDw1CnFYXbxl+qxbucLtCdSIFR2xUq3rEO1MXlD/txdTxn6ANax hi9oBg8tHzuGYJFiCDCvbVVTHgWUSnm/EqfclpJzGmxt8g7vbaohW7NMmMQrLBFP G6qBypgvotx1iJWaHVLNNiXvyqQwTtelNPAUweRoNawBp/5KTwwy/tHeF0gsVQ7y mFX4umub9YT34Lpe7qUPKNxXzFcUgAf1SA6vyZ20UI7p42S2OT2PrahJ+uO6LQVD +REhtN0oyS3G6HzAmKkBgw7LcV3XmAr39iSR7mdmoHSJuI9bjveAPhniK+N6uuln xf17Qnw5NWfr9MXcLli7zqwMglU/1bNirkwVqf/ogi/zQ3JYCo6tFGf/rnGQAORJ hvOq2SEYXnizPPIH7VrpE16+jUXwgpiQ8TDyeLPmpZVuhXTXiCaJO5lIwmLQqkmg JqNiT9V44sksNFTGNKgZo5O9rEqfqX4dLjfv6pGJL+MFXD9if4f1JQiXJfhcRcDh Ff9B6HukgbJ1H96eLUUNj8sL1+WPOqawkS4wg7tVaERE8CW7mqk15dCysn9shSut I+7JU7+dZsxpj0ownrxuPAFuT8ZlcBPrFzPUwTlW1G0CbuEco8ijfy5IfbyGCn5s K/0bOfAuNVGoOpLZ1dMki2bGdBwQOQlkLKhAxYcCVQ0/urr1Ab+VXU9kBsIU8ssN GogKngYpuUV0PHmpzmobielOHLjNqA2v9vQSV3Ed48wRy5OCwLX1+vYmYlggMDGt wfl+7QbXYf+k5WnELf3IqYvh8ZWexa0= Azure-WALinuxAgent-2b21de5/tests/data/wire/certs.xml000066400000000000000000000123351462617747000224170ustar00rootroot00000000000000 2012-11-30 3 Pkcs7BlobWithPfxContents MIIOgwYJKoZIhvcNAQcDoIIOdDCCDnACAQIxggEwMIIBLAIBAoAUZcG9X+5aK8VZ FY8eJV9j+RImq58wDQYJKoZIhvcNAQEBBQAEggEAn/hOytP/StyRuXHcqFq6x+Za 7gHfO8prXWdZW4e28NLt/x5ZOBHDDZ6buwwdXEZME0+RoiJvLqP2RNhZkEO8bkna pS76xLZE4NXyfxkeEs1vJYis0WJdt/56uCzBuud2SBLuMWoAWgF5alokN0uFpVgm CKCos+xv6Pisolc6geM8xQTYe6sLf5Z23LWftWfJqzuo/29glCCre7R80OLeZe5w pN6XztbYz06nhVByC35To8Lm0akWAAKU7sfqM1Nty4P0rwUJPKXo42uN1GKYbDbF x8piCAd+rs+q4Alu3qK/YaTPpMb2ECRMH6CYB8Klf/CbuWykkfS8zrsnpXT1kzCC DTUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQInjJWFaJcZz2Agg0QX6NlJUH17o20 90gfjWV01mPmzLKx71JT+hyzKr5vHywDSRI/mdb3RqA59ZrIKeyWr0HXEOuABlul nxjc/Rfk1tiLQwh0iqlOjlMtRsxS6yDA0WNwK2Y9gaXcdgDm5Vioai18l4Pd0qzK fsof5a/jEJyunW1CZK19QciwfQ2pS8QbRYgeLRZRft2I+kv6cWXlGS6YrMqKQC8t QMxnXR4AuzVllPLbbIgtM3l9oS+6jl7jKyKogeroJ9FNLjoMBJLldRLGPRhkCmdJ Z1m+s/BAVUH08qgj2kmHzucdULLjlRcmma9m/h91TcQCXHAavf7S+U9QwIyGRh83 t4Y7EqbQ93mOgjajFzILSL7AT/irgJpDu6CJqMu3EMNDA0mjxn5Cdvj40sufL/g3 UyBwqosmIwAPzNDmhPtTKvHaHfGY/k8WhoIYfAA5Lhq1z22/RODZOY0Ch2XyxQM4 s35eppe6IhnwyMv6HfrCrqE/o/16OrvvbaFQTeTlMvU0P7MIR4pVW6tRq4NEa5Wx JcvGutuMuzH1VMcqcKdc7wYyOqDOGU43kcV8PiALTIcqrhD8NDrKks1jSkyqQw2h sJQckNaQIcCXkUQecQa2UGe0l4HDSJ5gAETSenjyLBKFHf3hxiWBpw446/bODgdk 0+oyreZqMpRz3vn9LC+Yt7SuVbTzRdx7nlKIiNvo8+btOuVm44evchLFHAq3Ni8D c+tP/wss3K4Xdp+t5SvEY/nLIu11Lw44HDVMYTuNz3Ya9psL70ZLaLZM8NromnEl 
CUMRNTPoOC7/KDRh2E9d6c1V4CC43wAsRhksGJnSYoiSVAhaVgLqVFQTsqNHxmcg 3Y9AEBVzm3fZg6+DxAYu+amb+r8lk0Pp+N1t6rVbKXhkbAAxg0UDO3pY8Xcz0Y3g Qdd5rnHh1rJrehku7zTHvQaXEddUWmCoUGIXJ+bt4VOhErL6s5/j8GSG0xmfxgSE jnGj4Jwd0Vv19uZjsBDQ54R88GcA9YX8r48gr9JAwplrQ50m9KX6GwQhDRYKN/Dh zOt9DCUkqMqdi5T4v2qNTfkL7iXBMhsSkeYUQ/tFLyv4QQyli5uTUZ5FNXohOVAx TNyV9+gcV5WiBR0Aje6rwPW3oTkrPnVfZCdBwt/mZjPNMO5Se7D/lWE33yYu7bJ+ gaxRNynhEOB7RaOePzDjn7LExahFmTFV0sgQxwQ2BYsfI22cdkAf6qOxdK/kqiQm lgzRpDjyPIFhaCCHnXyJdSqcHmDrCjcg2P6AVCDJGdFOBvupeJ7Kg7WV5EY7G6AU ng16tyumJSMWSzSks9M0Ikop6xhq3cV+Q0OArJoreQ6eonezXjM9Y865xjF80nJL V4lcRxdXfoKpXJwzc++pgkY9t55J0+cEyBvIXfKud1/HHOhewhoy5ATyi9LLM91n iW1DaQXlvHZgE7GFMSCVLxy6ZopBbm9tF0NQDFi8zUtGulD3Gkoc/Bp+DWb2vsX4 S8W9vByNvIz/SWOGNbEs2irTRXccMAL7JHJ+74bwZZi5DRrqyQWHCn/3Ls2YPI6z lnfl15EE4G7g3+nrvP2lZFBXjsdG/U3HYi+tAyHkRN3oXvgnt9N76PoY8dlsNf6c RuNqgk31uO1sX/8du3Jxz87MlzWiG3kbAHMvbcoCgy/dW4JQcM3Sqg5PmF8i9wD1 ZuqZ7zHpWILIWd13TM3UDolQZzl+GXEX62dPPL1vBtxHhDgQicdaWFXa6DX3dVwt DToWaAqrAPIrgxvNk5FHNCTEVTQkmCIL5JoinZSk7BAl8b085CPM6F7OjB5CR4Ts V+6UaTUZqk+z+raL+HJNW2ds1r7+t8Po5CydMBS4M/pE7b/laUnbRu7rO8cqKucn n+eYimib/0YuqZj9u2RXso4kzdOyIxGSGHkmSzYuoNRx80r+jHtcBBTqXk37t0FY X5O7QItCE+uwV1Sa12yg2dgJ6vKRPCEVyMoYUBwNbKEcw1pjG9Em7HwjOZK0UrO1 yKRz6kxffVKN9Naf7lOnXooVuedY/jcaZ2zCZtASlOe8iiQK5prM4sbMixMp9ovL tTxy9E9kgvaI/mkzarloKPQGsk0WzuH+i39M3DOXrMf5HwfE+A55u1gnrHsxQlxp z5acwN42+4ln6axs4aweMGAhyEtBW8TdsNomwuPk+tpqZXHI2pqS4/aVOk8R8VE7 IqtBx2QBMINT79PDPOn3K6v9HEt9fUHJ2TWJvKRKfsu5lECJPJSJA8OQ7zzw6zQt NXw8UhZRmNW0+eI5dykg+XsII7+njYa33EJ1Sy1Ni8ZT/izKfrKCwEm44KVAyUG5 qUjghPPMNQY3D0qOl54DRfGVOxbHztUooblW+DnlLlpOy/+/B+H9Dscxosdx2/Mo RftJOMlLqK7AYIYAlw1zvqZo0pf7rCcLSLt+6FrPtNZe6ULFUacZ3RqyTZovsZi5 Ucda3bLdOHX6tKL21bRfN7L0/BjF6BJETpG3p+rBYOyCwO6HvdenpMm6cT02nrfP QJtImjeW1ov6Pw02zNlIZAXFir78Z6AcMhV2iKEJxc1RMFBcXmylNXJmGlKYB3lJ jWo6qumLewTz5vzRu0vZCmOf+bKmuyVxckPbrzP+4OHKhpm95Kp6sUn2pvh0S8H3 w1pjfZ9+sIaVgMspfRPgoWTyZ0flFvAX6DHWYVejMebwfAqZaa+UAJJ6jWQbMNzo ZtOhzCjV+2ZBYHvSiY7dtfaLwQJeMWEKIw32kEYv/Ts33n7dD/pAzZu0WCyfoqsQ MEXhbZYSCQTJ8/gqvdlurWOJL091z6Uw810YVt+wMqsBo5lnMsS3GqkzgM2PVzuV taddovr5CrWfAjQaFG8wcETiKEQFWS9JctKo0F+gwLwkVyc4fBSkjVmIliw1jXGu Enf2mBei+n8EaRB2nNa/CBVGQM24WEeMNq+TqaMvnEonvMtCIEpuJAO/NzJ1pxw0 9S+LKq3lFoIQoON5glsjV82WseAbFXmynBmSbyUY/mZQpjuNSnwLfpz4630x5vuV VNglsZ8lW9XtSPh6GkMj+lLOCqJ5aZ4UEXDSYW7IaH4sPuQ4eAAUsKx/XlbmaOad hgK+3gHYi98fiGGQjt9OqKzQRxVFnHtoSwbMp/gjAWqjDCFdo7RkCqFjfB1DsSj0 TrjZU1lVMrmdEhtUNjqfRpWN82f55fxZdrHEPUQIrOywdbRiNbONwm4AfSE8ViPz +SltYpQfF6g+tfZMwsoPSevLjdcmb1k3n8/lsEL99wpMT3NbibaXCjeJCZbAYK05 rUw5bFTVAuv6i3Bax3rx5DqyQANS3S8TBVYrdXf9x7RpQ8oeb4oo+qn293bP4n5m nW/D/yvsAJYcm3lD7oW7D369nV/mwKPpNC4B9q6N1FiUndvdFSbyzfNfSF9LV0RU A/4Qm05HtE3PAUFYfwwP8MDg0HdltMn83VfqrEi/d76xlcxfoIh2RQQgqxCIS6KE AExIY/hPYDVxApznI39xNOp7IqdPEX3i7Cv7aHeFAwbhXYMNnkfFJJTkHRdcRiJ/ RE1QPlC7ijH+IF02PE/seYg4GWrkeW3jvi+IKQ9BPBoYIx0P+7wHXf4ZGtZMourd N4fdwzFCDMFkS7wQC/GOqZltzF/gz1fWEGXRTH3Lqx0iKyiiLs2trQhFOzNw3B7E WxCIUjRMAAJ6vvUdvoFlMw8WfBkzCVple4yrCqIw6fJEq8v0q8EQ7qKDTfyPnFBt CtQZuTozfdPDnVHGmGPQKUODH/6Vwl+9/l7HDvV8/D/HKDnP581ix1a3bdokNtSK 7rBfovpzYltYGpVxsC6MZByYEpvIh5nHQouLR4L3Je2wB3F9nBGjNhBvGDQlxcne AAgywpOpQfvfsnYRWt2vlQzwhHUgWhJmGMhGMmn4oKc5su87G7yzFEnq/yIUMOm/ X0Zof/Qm92KCJS7YkLzP1GDO9XPMe+ZHeHVNXhVNCRxGNbHCHB9+g9v090sLLmal jpgrDks19uHv0yYiMqBdpstzxClRWxgHwrZO6jtbr5jeJuLVUxV0uuX76oeomUj2 mAwoD5cB1U8W9Ew+cMjp5v6gg0LTk90HftjhrZmMA0Ll6TqFWjxge+jsswOY1SZi peuQGIHFcuQ7SEcyIbqju3bmeEGZwTz51yo8x2WqpCwB1a4UTngWJgDCySAI58fM eRL6r478CAZjk+fu9ZA85B7tFczl3lj0B4QHxkX370ZeCHy39qw8vMYIcPk3ytI0 
vmj5UCSeQDHHDcwo54wi83IFEWUFh18gP4ty5Tfvs6qv7qd455UQZTAO7lwpdBlp MJGlMqBHjDLGyY80p+O4vdlQBZ1uMH+48u91mokUP8p+tVVKh7bAw/HPG+SQsuNR DXF+gTm/hRuY7IYe3C7Myzc8bDTtFw6Es9BLAqzFFAMjzDVz7wY1rnZQq4mmLcKg AAMJaqItipKAroYIntXXJ3U8fsUt03M= Azure-WALinuxAgent-2b21de5/tests/data/wire/certs_format_not_pfx.xml000066400000000000000000000004741462617747000255250ustar00rootroot00000000000000 2012-11-30 12 CertificatesNonPfxPackage NotPFXData Azure-WALinuxAgent-2b21de5/tests/data/wire/certs_no_format_specified.xml000066400000000000000000000123071462617747000264750ustar00rootroot00000000000000 2012-11-30 12 MIIOgwYJKoZIhvcNAQcDoIIOdDCCDnACAQIxggEwMIIBLAIBAoAUZcG9X+5aK8VZ FY8eJV9j+RImq58wDQYJKoZIhvcNAQEBBQAEggEAn/hOytP/StyRuXHcqFq6x+Za 7gHfO8prXWdZW4e28NLt/x5ZOBHDDZ6buwwdXEZME0+RoiJvLqP2RNhZkEO8bkna pS76xLZE4NXyfxkeEs1vJYis0WJdt/56uCzBuud2SBLuMWoAWgF5alokN0uFpVgm CKCos+xv6Pisolc6geM8xQTYe6sLf5Z23LWftWfJqzuo/29glCCre7R80OLeZe5w pN6XztbYz06nhVByC35To8Lm0akWAAKU7sfqM1Nty4P0rwUJPKXo42uN1GKYbDbF x8piCAd+rs+q4Alu3qK/YaTPpMb2ECRMH6CYB8Klf/CbuWykkfS8zrsnpXT1kzCC DTUGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQInjJWFaJcZz2Agg0QX6NlJUH17o20 90gfjWV01mPmzLKx71JT+hyzKr5vHywDSRI/mdb3RqA59ZrIKeyWr0HXEOuABlul nxjc/Rfk1tiLQwh0iqlOjlMtRsxS6yDA0WNwK2Y9gaXcdgDm5Vioai18l4Pd0qzK fsof5a/jEJyunW1CZK19QciwfQ2pS8QbRYgeLRZRft2I+kv6cWXlGS6YrMqKQC8t QMxnXR4AuzVllPLbbIgtM3l9oS+6jl7jKyKogeroJ9FNLjoMBJLldRLGPRhkCmdJ Z1m+s/BAVUH08qgj2kmHzucdULLjlRcmma9m/h91TcQCXHAavf7S+U9QwIyGRh83 t4Y7EqbQ93mOgjajFzILSL7AT/irgJpDu6CJqMu3EMNDA0mjxn5Cdvj40sufL/g3 UyBwqosmIwAPzNDmhPtTKvHaHfGY/k8WhoIYfAA5Lhq1z22/RODZOY0Ch2XyxQM4 s35eppe6IhnwyMv6HfrCrqE/o/16OrvvbaFQTeTlMvU0P7MIR4pVW6tRq4NEa5Wx JcvGutuMuzH1VMcqcKdc7wYyOqDOGU43kcV8PiALTIcqrhD8NDrKks1jSkyqQw2h sJQckNaQIcCXkUQecQa2UGe0l4HDSJ5gAETSenjyLBKFHf3hxiWBpw446/bODgdk 0+oyreZqMpRz3vn9LC+Yt7SuVbTzRdx7nlKIiNvo8+btOuVm44evchLFHAq3Ni8D c+tP/wss3K4Xdp+t5SvEY/nLIu11Lw44HDVMYTuNz3Ya9psL70ZLaLZM8NromnEl CUMRNTPoOC7/KDRh2E9d6c1V4CC43wAsRhksGJnSYoiSVAhaVgLqVFQTsqNHxmcg 3Y9AEBVzm3fZg6+DxAYu+amb+r8lk0Pp+N1t6rVbKXhkbAAxg0UDO3pY8Xcz0Y3g Qdd5rnHh1rJrehku7zTHvQaXEddUWmCoUGIXJ+bt4VOhErL6s5/j8GSG0xmfxgSE jnGj4Jwd0Vv19uZjsBDQ54R88GcA9YX8r48gr9JAwplrQ50m9KX6GwQhDRYKN/Dh zOt9DCUkqMqdi5T4v2qNTfkL7iXBMhsSkeYUQ/tFLyv4QQyli5uTUZ5FNXohOVAx TNyV9+gcV5WiBR0Aje6rwPW3oTkrPnVfZCdBwt/mZjPNMO5Se7D/lWE33yYu7bJ+ gaxRNynhEOB7RaOePzDjn7LExahFmTFV0sgQxwQ2BYsfI22cdkAf6qOxdK/kqiQm lgzRpDjyPIFhaCCHnXyJdSqcHmDrCjcg2P6AVCDJGdFOBvupeJ7Kg7WV5EY7G6AU ng16tyumJSMWSzSks9M0Ikop6xhq3cV+Q0OArJoreQ6eonezXjM9Y865xjF80nJL V4lcRxdXfoKpXJwzc++pgkY9t55J0+cEyBvIXfKud1/HHOhewhoy5ATyi9LLM91n iW1DaQXlvHZgE7GFMSCVLxy6ZopBbm9tF0NQDFi8zUtGulD3Gkoc/Bp+DWb2vsX4 S8W9vByNvIz/SWOGNbEs2irTRXccMAL7JHJ+74bwZZi5DRrqyQWHCn/3Ls2YPI6z lnfl15EE4G7g3+nrvP2lZFBXjsdG/U3HYi+tAyHkRN3oXvgnt9N76PoY8dlsNf6c RuNqgk31uO1sX/8du3Jxz87MlzWiG3kbAHMvbcoCgy/dW4JQcM3Sqg5PmF8i9wD1 ZuqZ7zHpWILIWd13TM3UDolQZzl+GXEX62dPPL1vBtxHhDgQicdaWFXa6DX3dVwt DToWaAqrAPIrgxvNk5FHNCTEVTQkmCIL5JoinZSk7BAl8b085CPM6F7OjB5CR4Ts V+6UaTUZqk+z+raL+HJNW2ds1r7+t8Po5CydMBS4M/pE7b/laUnbRu7rO8cqKucn n+eYimib/0YuqZj9u2RXso4kzdOyIxGSGHkmSzYuoNRx80r+jHtcBBTqXk37t0FY X5O7QItCE+uwV1Sa12yg2dgJ6vKRPCEVyMoYUBwNbKEcw1pjG9Em7HwjOZK0UrO1 yKRz6kxffVKN9Naf7lOnXooVuedY/jcaZ2zCZtASlOe8iiQK5prM4sbMixMp9ovL tTxy9E9kgvaI/mkzarloKPQGsk0WzuH+i39M3DOXrMf5HwfE+A55u1gnrHsxQlxp z5acwN42+4ln6axs4aweMGAhyEtBW8TdsNomwuPk+tpqZXHI2pqS4/aVOk8R8VE7 IqtBx2QBMINT79PDPOn3K6v9HEt9fUHJ2TWJvKRKfsu5lECJPJSJA8OQ7zzw6zQt NXw8UhZRmNW0+eI5dykg+XsII7+njYa33EJ1Sy1Ni8ZT/izKfrKCwEm44KVAyUG5 qUjghPPMNQY3D0qOl54DRfGVOxbHztUooblW+DnlLlpOy/+/B+H9Dscxosdx2/Mo 
RftJOMlLqK7AYIYAlw1zvqZo0pf7rCcLSLt+6FrPtNZe6ULFUacZ3RqyTZovsZi5 Ucda3bLdOHX6tKL21bRfN7L0/BjF6BJETpG3p+rBYOyCwO6HvdenpMm6cT02nrfP QJtImjeW1ov6Pw02zNlIZAXFir78Z6AcMhV2iKEJxc1RMFBcXmylNXJmGlKYB3lJ jWo6qumLewTz5vzRu0vZCmOf+bKmuyVxckPbrzP+4OHKhpm95Kp6sUn2pvh0S8H3 w1pjfZ9+sIaVgMspfRPgoWTyZ0flFvAX6DHWYVejMebwfAqZaa+UAJJ6jWQbMNzo ZtOhzCjV+2ZBYHvSiY7dtfaLwQJeMWEKIw32kEYv/Ts33n7dD/pAzZu0WCyfoqsQ MEXhbZYSCQTJ8/gqvdlurWOJL091z6Uw810YVt+wMqsBo5lnMsS3GqkzgM2PVzuV taddovr5CrWfAjQaFG8wcETiKEQFWS9JctKo0F+gwLwkVyc4fBSkjVmIliw1jXGu Enf2mBei+n8EaRB2nNa/CBVGQM24WEeMNq+TqaMvnEonvMtCIEpuJAO/NzJ1pxw0 9S+LKq3lFoIQoON5glsjV82WseAbFXmynBmSbyUY/mZQpjuNSnwLfpz4630x5vuV VNglsZ8lW9XtSPh6GkMj+lLOCqJ5aZ4UEXDSYW7IaH4sPuQ4eAAUsKx/XlbmaOad hgK+3gHYi98fiGGQjt9OqKzQRxVFnHtoSwbMp/gjAWqjDCFdo7RkCqFjfB1DsSj0 TrjZU1lVMrmdEhtUNjqfRpWN82f55fxZdrHEPUQIrOywdbRiNbONwm4AfSE8ViPz +SltYpQfF6g+tfZMwsoPSevLjdcmb1k3n8/lsEL99wpMT3NbibaXCjeJCZbAYK05 rUw5bFTVAuv6i3Bax3rx5DqyQANS3S8TBVYrdXf9x7RpQ8oeb4oo+qn293bP4n5m nW/D/yvsAJYcm3lD7oW7D369nV/mwKPpNC4B9q6N1FiUndvdFSbyzfNfSF9LV0RU A/4Qm05HtE3PAUFYfwwP8MDg0HdltMn83VfqrEi/d76xlcxfoIh2RQQgqxCIS6KE AExIY/hPYDVxApznI39xNOp7IqdPEX3i7Cv7aHeFAwbhXYMNnkfFJJTkHRdcRiJ/ RE1QPlC7ijH+IF02PE/seYg4GWrkeW3jvi+IKQ9BPBoYIx0P+7wHXf4ZGtZMourd N4fdwzFCDMFkS7wQC/GOqZltzF/gz1fWEGXRTH3Lqx0iKyiiLs2trQhFOzNw3B7E WxCIUjRMAAJ6vvUdvoFlMw8WfBkzCVple4yrCqIw6fJEq8v0q8EQ7qKDTfyPnFBt CtQZuTozfdPDnVHGmGPQKUODH/6Vwl+9/l7HDvV8/D/HKDnP581ix1a3bdokNtSK 7rBfovpzYltYGpVxsC6MZByYEpvIh5nHQouLR4L3Je2wB3F9nBGjNhBvGDQlxcne AAgywpOpQfvfsnYRWt2vlQzwhHUgWhJmGMhGMmn4oKc5su87G7yzFEnq/yIUMOm/ X0Zof/Qm92KCJS7YkLzP1GDO9XPMe+ZHeHVNXhVNCRxGNbHCHB9+g9v090sLLmal jpgrDks19uHv0yYiMqBdpstzxClRWxgHwrZO6jtbr5jeJuLVUxV0uuX76oeomUj2 mAwoD5cB1U8W9Ew+cMjp5v6gg0LTk90HftjhrZmMA0Ll6TqFWjxge+jsswOY1SZi peuQGIHFcuQ7SEcyIbqju3bmeEGZwTz51yo8x2WqpCwB1a4UTngWJgDCySAI58fM eRL6r478CAZjk+fu9ZA85B7tFczl3lj0B4QHxkX370ZeCHy39qw8vMYIcPk3ytI0 vmj5UCSeQDHHDcwo54wi83IFEWUFh18gP4ty5Tfvs6qv7qd455UQZTAO7lwpdBlp MJGlMqBHjDLGyY80p+O4vdlQBZ1uMH+48u91mokUP8p+tVVKh7bAw/HPG+SQsuNR DXF+gTm/hRuY7IYe3C7Myzc8bDTtFw6Es9BLAqzFFAMjzDVz7wY1rnZQq4mmLcKg AAMJaqItipKAroYIntXXJ3U8fsUt03M= Azure-WALinuxAgent-2b21de5/tests/data/wire/ec-key.pem000066400000000000000000000003431462617747000224310ustar00rootroot00000000000000-----BEGIN EC PRIVATE KEY----- MHcCAQEEIEydYXZkSbZjdKaNEurW6x2W3dEOC5+yDxM/Wkq1m6lUoAoGCCqGSM49 AwEHoUQDQgAE8H1M+73QdzCyIDToTyU7OTMfi9cnIt8B4sz7e127ydNBVWjDwgGV bKXPNtuQSWNgkfGW8A3tf9S8VcKNFxXaZg== -----END EC PRIVATE KEY----- Azure-WALinuxAgent-2b21de5/tests/data/wire/ec-key.pub.pem000066400000000000000000000002621462617747000232160ustar00rootroot00000000000000-----BEGIN PUBLIC KEY----- MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE8H1M+73QdzCyIDToTyU7OTMfi9cn It8B4sz7e127ydNBVWjDwgGVbKXPNtuQSWNgkfGW8A3tf9S8VcKNFxXaZg== -----END PUBLIC KEY----- Azure-WALinuxAgent-2b21de5/tests/data/wire/encrypted.enc000066400000000000000000000010551462617747000232360ustar00rootroot00000000000000MIIBlwYJKoZIhvcNAQcDoIIBiDCCAYQCAQIxggEwMIIBLAIBAoAUW4P+tNXlmDXW H30raKBkpUhXYwUwDQYJKoZIhvcNAQEBBQAEggEAP0LpwacLdJyvNQVmSyXPGM0i mNJSHPQsAXLFFcmWmCAGiEsQWiHKV9mON/eyd6DjtgbTuhVNHPY/IDSDXfjgLxdX NK1XejuEaVTwdVtCJWl5l4luOeCMDueitoIgBqgkbFpteqV6s8RFwnv+a2HhM0lc TUwim6skx1bFs0csDD5DkM7R10EWxWHjdKox8R8tq/C2xpaVWRvJ52/DCVgeHOfh orV0GmBK0ue/mZVTxu8jz2BxQUBhHXNWjBuNuGNmUuZvD0VY1q2K6Fa3xzv32mfB xPKgt6ru/wG1Kn6P8yMdKS3bQiNZxE1D1o3epDujiygQahUby5cI/WXk7ryZ1DBL BgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECAxpp+ZE6rpAgChqxBVpU047fb4zinTV 5xaG7lN15YEME4q8CqcF/Ji3NbHPmdw1/gtf 
Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf-no_gs_metadata.xml000066400000000000000000000031421462617747000260430ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf.xml000066400000000000000000000034271462617747000231060ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_additional_locations.xml000066400000000000000000000035011462617747000273420ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml http://mock-goal-state/additionalLocation/3/manifest.xml http://mock-goal-state/additionalLocation/4/manifest.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_aks_extension.xml000066400000000000000000000106461462617747000260410ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling non-AKS"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling AKSNode"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling AKSBilling"} } } ] } https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_autoupgrade.xml000066400000000000000000000045071462617747000255060ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_autoupgrade_internalversion.xml000066400000000000000000000045071462617747000310100ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_dependencies_with_empty_settings.xml000066400000000000000000000052141462617747000320010ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_in_vm_artifacts_profile.xml000066400000000000000000000041321462617747000300500ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo https://mock-goal-state/test.blob.core.windows.net/$system/test-cs12.test-cs12.test-cs12.vmSettings?sv=2016-05-31&sr=b&sk=system-1&sig=saskey;se=9999-01-01T00%3a00%3a00Z&sp=r Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_in_vm_empty_artifacts_profile.xml000066400000000000000000000036371462617747000312770ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo 
Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_in_vm_metadata.xml000066400000000000000000000036551462617747000261410ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_internalversion.xml000066400000000000000000000045071462617747000264100ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_invalid_and_valid_handlers.xml000066400000000000000000000056061462617747000304760ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_invalid_vm_metadata.xml000066400000000000000000000035241462617747000271540ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_missing_family.xml000066400000000000000000000015711462617747000261760ustar00rootroot00000000000000 Prod eastus 
https://walaautoasmeastus.blob.core.windows.net/vhds/walaautos73small.walaautos73small.walaautos73small.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=u%2BCA2Cxb7ticiEBRIW8HWgNW7gl2NPuOGQl0u95ApQE%3D Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_multiple_extensions.xml000066400000000000000000000216571462617747000273050ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIIB4AYJKoZIhvcNAQcDoIIB0TCCAc0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEH3vWjYIrceWQigVQwoS8z0wDQYJKoZIhvcNAQEBBQAEggEANYey5W0qDqC6RHZlVnpLp2dWrMr1Rt5TCFkOjq1jU4y2y1FPtsTTKq9Z5pdGb/IHQo9VcT+OFglO3bChMbqc1vgmk4wkTQkgJVD3C8Rq4nv3uvQIux+g8zsa1MPKT5fTwG/dcrBp9xqySJLexUiuJljmNJgorGc0KtLwjnad4HTSKudDSo5DGskSDLxxLZYx0VVtQvgekOOwT/0C0pN4+JS/766jdUAnHR3oOuD5Dx7/c6EhFSoiYXMA0bUzH7VZeF8j/rkP1xscLQRrCScCNV2Ox424Y4RBbcbP/p69lDxGURcIKLKrIUhQdC8CfUMkQUEmFDLcOtxutCTFBZYMJzBbBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECCuc0a4Gl8PAgDgcHekee/CivSTCXntJiCrltUDob8cX4YtIS6lq3H08Ar+2tKkpg5e3bOkdAo3q2GfIrGDm4MtVWw==","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIIBwAYJKoZIhvcNAQcDoIIBsTCCAa0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEH3vWjYIrceWQigVQwoS8z0wDQYJKoZIhvcNAQEBBQAEggEABILhQPoMx3NEbd/sS0xAAE4rJXwzJSE0bWr4OaKpcGS4ePtaNW8XWm+psYR9CBlXuGCuDVlFEdPmO2Ai8NX8TvT7RVYYc6yVQKpNQqO6Q9g9O52XXX4tBSFSCfoTzd1kbGC1c2wbXDyeROGCjraWuGHd4C9s9gytpgAlYicZjOqV3deo30F4vXZ+ZhCNpMkOvSXcsNpzTzQ/mskwNubN8MPkg/jEAzTHRpiJl3tjGtTqm00GHMqFF8/31jnoLQeQnWSmY+FBpiTUhPzyjufIcoZ+ueGXZiJ77xyH2Rghh5wvQM8oTVy2dwFQGeqjHOVgdgRNi/HgfZhcdltaQ8kjYDA7BgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECHPM0ZKBn+aWgBiVPT7zlkJA8eGuH7bNMTQCtGoJezToa24=","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIIB4AYJKoZIhvcNAQcDoIIB0TCCAc0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEH3vWjYIrceWQigVQwoS8z0wDQYJKoZIhvcNAQEBBQAEggEAGSKUDRN64DIB7FS7yKXa07OXaFPhmdNnNDOAOD3/WVFb9fQ2bztV46waq7iRO+lpz7LSerRzIe6Kod9zCfK7ryukRomVHIfTIBwPjQ+Otn8ZD2aVcrxR0EI95x/SGyiESJRQnOMbpoVSWSu2KJUCPfycQ4ODbaazDc61k0JCmmRy12rQ4ttyWKhYwpwI2OYFHGr39N/YYq6H8skHj5ve1605i4P9XpfEyIwF5BbX59tDOAFFQtX7jzQcz//LtaHHjwLmysmD9OG5XyvfbBICwSYJfMX9Jh1aahLwcjL8Bd0vYyGL1ItMQF5KfDwog4+HLcRGx+S02Yngm3/YKS9DmzBbBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECFGLNfK0bO5OgDgH90bRzqfgKK6EEh52XJfHz9G/ZL1mqP/ueWqo95PtEFo1gvI7z25V/pT0tBGibXgRhQXLFmwVTA==","publicSettings":{"foo":"bar"}}}]} 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIIEzAYJKoZIhvcNAQcDoIIEvTCCBLkCAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEH3vWjYIrceWQigVQwoS8z0wDQYJKoZIhvcNAQEBBQAEggEAFqLDBFGeuglluYmZb0Zw+ZlMiMIws9/LgmurVSRUTU/nSleIc9vOLcukfMeCpMativzHe23iDFy6p3XDkViNcuzqbhlPq5LQsXXg+xaUrrg8Xy+q7KUQdxzPdNBdpgkUh6yE2EFbqVLQ/7x+TkkSsw35uPT0nEqSj3yYFGH7X/NJ49fKU+ZvFDp/N+o54UbE6ZdxlHFtz6NJFxx5w4z5adQ8DgnUyS0bJ2denolknODfSW2D2alm00SXlI88CAjeHgEDkoLCduwkrDkSFAODcAiEHHX8oYCnfanatpjm7ZgSutS9y7+XUnGWxDYoujHDI9bbV0WpyDcx/DIrlZ+WcTCCA0UGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQIrL18Lbp1qU6AggMgGklvozqr8HqYP+DwkvxdwHSpo+23QFxh70os+NJRtVgBv5NjPEziXo3FpXHMPvt0kp0IwXbwyy5vwnjCTA2sQOYgj77X6RmwF6+1gt2DIHDN1Q6jWzdcXZVHykSiF3gshbebRKO0hydfCaCyYL36HOZ8ugyCctOon5EflrnoOYDDHRbsr30DAxZCAwGOGZEeoU2+U+YdhuMvplnMryD1f6b8FQ7jXihe/zczAibX5/22NxhsVgALdsV5h6hwuTbspDt3V15/VU8ak7a4xxdBfXOX0HcQI86oqsFr7S7zIveoQHsW+wzlyMjwi6DRPFpz2wFkv5ivgFEvtCzDQP4aCqGI8VdqzR7aUDnuqiSCe/cbmv5mSmTYlDPTR03WS0IvgyeoNAzqCbYQe44AUBEZb/yT8Z3XxwW0GzcPMZQ0XjpcZiaKAueN9V8nJgNCEDPTJqpSjy+tEHmSgxn70+E57F0vzPvdQ3vOEeRj8zlBblHd4uVrhxdBMUuQ73JEQEha5rz0qcUy04Wmjld1rBuX6pdOqrArAYzTLJbIuLqDjlnYFsHLs9QBGvIEb9VFOlAm5JW8npBbIRHXqPfwZWs60+uNksTtsN3MxBxUWJPOByb4xRNx+nRpTOvfKKFlgq1ReK5bGSTCB7x0Ft3+T42LOQDrBPyxxtGzWs+aq05qFgI4n0h8X82wxJflK+kUdwvvG/ZY5MM+/le2zOrUeyzvxXsHoRetgg+DOk7v+v7VsuT1KuvTXvgzxoOFF3/T2pNPpE3h6bbP2BUqZ2yzPNziGFslywDLZ8W3OUZoQejGqobRePdgUoBi5q2um/sPnq81kOJ/qhIOVq581ZD4IQWLot8eK8vX0G/y7y71YelRR51cUfgR5WvZZf6LvYw+GpwOtSViugl9QxGCviSLgHTJSSEm0ijtbzKhwP4vEyydNDrz8+WYB8DNIV7K2Pc8JyxAM03FYX30CaaJ40pbEUuVQVEnkAD2E//29/ZzgNTf/LBMzMEP5j7wlL+QQpmPAtL/FlBrOJ4nDEqsOOhWzI1MN51xRZuv3e2RqzVPiSmrKtk=","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_no_extensions-block_blob.xml000066400000000000000000000010231462617747000301350ustar00rootroot00000000000000 http://foo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_no_extensions-no_status_blob.xml000066400000000000000000000007061462617747000310710ustar00rootroot00000000000000 Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_no_extensions-page_blob.xml000066400000000000000000000015221462617747000277630ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml http://sas_url Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_no_public.xml000066400000000000000000000126761462617747000251460ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr05.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr06.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml 
http://mock-goal-state/rdfepirv2hknprdstr07.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr08.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr09.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr10.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr11.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr12.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/zrdfepirv2hk2prdstr01.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr05.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr06.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr07.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr08.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr09.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr10.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr11.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr12.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/zrdfepirv2hk2prdstr01.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK"}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_no_settings.xml000066400000000000000000000122021462617747000255110ustar00rootroot00000000000000 Win8 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml 
http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr05.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr06.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr07.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr08.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr09.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr10.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr11.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr12.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml http://mock-goal-state/zrdfepirv2hk2prdstr01.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win8_asiaeast_manifest.xml Win7 http://mock-goal-state/rdfepirv2hknprdstr03.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr04.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr05.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr06.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr07.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr08.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr09.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr10.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr11.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/rdfepirv2hknprdstr12.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml http://mock-goal-state/zrdfepirv2hk2prdstr01.blob.core.windows.net/bfd5c281a7dc4e4b84381eb0b47e3aaf/Microsoft.WindowsAzure.GuestAgent_Win7_asiaeast_manifest.xml https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo 
Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_required_features.xml000066400000000000000000000041241462617747000266770ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml TestRequiredFeature1 TestRequiredFeature2 TestRequiredFeature3 3.0 {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_rsm_version.xml000066400000000000000000000041001462617747000255210ustar00rootroot00000000000000 Prod 9.9.9.10 True True http://mock-goal-state/manifest_of_ga.xml Test 9.9.9.10 True True http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_sequencing.xml000066400000000000000000000054451462617747000253310ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_settings_case_mismatch.xml000066400000000000000000000131461462617747000277050ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo 
Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_upgradeguid.xml000066400000000000000000000035141462617747000254630ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_version_missing_in_agent_family.xml000066400000000000000000000037741462617747000316160ustar00rootroot00000000000000 Prod True True http://mock-goal-state/manifest_of_ga.xml Test True True http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_version_missing_in_manifest.xml000066400000000000000000000046311462617747000307560ustar00rootroot00000000000000 Prod 5.2.1.0 True True http://mock-goal-state/manifest_of_ga.xml Test 5.2.1.0 True True http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_version_not_from_rsm.xml000066400000000000000000000041021462617747000274260ustar00rootroot00000000000000 Prod 9.9.9.10 False True http://mock-goal-state/manifest_of_ga.xml Test 9.9.9.10 False True http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ext_conf_vm_not_enabled_for_rsm_upgrades.xml000066400000000000000000000041041462617747000315540ustar00rootroot00000000000000 Prod 9.9.9.10 False False http://mock-goal-state/manifest_of_ga.xml Test 9.9.9.10 False False http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/ga_manifest.xml000066400000000000000000000032301462617747000235460ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.0.0 1.1.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.1.0 1.2.0 
http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.2.0 2.0.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.0.0 2.1.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.1.0 2.5.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.5.0 9.9.9.10 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__9.9.9.10 99999.0.0.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__99999.0.0.0 Azure-WALinuxAgent-2b21de5/tests/data/wire/ga_manifest_no_upgrade.xml000066400000000000000000000014351462617747000257560ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.0.0 1.1.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.1.0 1.2.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.2.0 Azure-WALinuxAgent-2b21de5/tests/data/wire/ga_manifest_no_uris.xml000066400000000000000000000026211462617747000253070ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.0.0 1.1.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.1.0 1.2.0 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__1.2.0 2.0.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.0.0 2.1.0http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__2.1.0 9.9.9.10 http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__99999.0.0.0 99999.0.0.0 Azure-WALinuxAgent-2b21de5/tests/data/wire/goal_state.xml000066400000000000000000000035511462617747000234210ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0 Started http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=extensionsConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=fullConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=certificates&incarnation=1 bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5.bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5._canary.1.xml Azure-WALinuxAgent-2b21de5/tests/data/wire/goal_state_no_certs.xml000066400000000000000000000032571462617747000253200ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0 Started http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=extensionsConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=fullConfig&incarnation=1 
bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5.bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5._canary.1.xml Azure-WALinuxAgent-2b21de5/tests/data/wire/goal_state_no_ext.xml000066400000000000000000000032241462617747000247720ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0 Started http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=fullConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=certificates&incarnation=1 bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5.bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5._canary.1.xml Azure-WALinuxAgent-2b21de5/tests/data/wire/goal_state_noop.xml000066400000000000000000000007261462617747000244550ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 Azure-WALinuxAgent-2b21de5/tests/data/wire/goal_state_remote_access.xml000066400000000000000000000041141462617747000263110ustar00rootroot00000000000000 2010-12-15 1 Started 16001 c6d5526c-5ac2-4200-b6e2-56f2b70c5ab2 http://168.63.129.16:80/remoteaccessinfouri/ b61f93d0-e1ed-40b2-b067-22c243233448.MachineRole_IN_0 Started b61f93d0-e1ed-40b2-b067-22c243233448.1.b61f93d0-e1ed-40b2-b067-22c243233448.2.MachineRole_IN_0.xml http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=hostingEnvironmentConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=sharedConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=extensionsConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=config&type=fullConfig&incarnation=1 http://168.63.129.16:80/machine/865a6683-91d8-450f-99ae/bc8b9d47%2Db5ed%2D4704%2D85d9%2Dfd74cc967ec2.%5Fcanary?comp=certificates&incarnation=1 bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5.bc8b9d47-b5ed-4704-85d9-fd74cc967ec2.5._canary.1.xml Azure-WALinuxAgent-2b21de5/tests/data/wire/hosting_env.xml000066400000000000000000000043251462617747000236220ustar00rootroot00000000000000 Azure-WALinuxAgent-2b21de5/tests/data/wire/in_vm_artifacts_profile.json000066400000000000000000000000221462617747000263260ustar00rootroot00000000000000{ "onHold": true }Azure-WALinuxAgent-2b21de5/tests/data/wire/invalid_config/000077500000000000000000000000001462617747000235245ustar00rootroot00000000000000ext_conf_multiple_depends_on_for_single_handler.xml000066400000000000000000000065151462617747000360200ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo ext_conf_multiple_runtime_settings_same_plugin.xml000066400000000000000000000037651462617747000357700ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo ext_conf_multiple_settings_for_same_handler.xml000066400000000000000000000041031462617747000351750ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo ext_conf_plugin_settings_version_mismatch.xml000066400000000000000000000041211462617747000347220ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo ext_conf_single_and_multi_config_settings_same_plugin.xml000066400000000000000000000040421462617747000372210ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/wire/invalid_config Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml {"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} 
{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"BD447EF71C3ADDF7C837E84D630F3FAC22CCD22F","protectedSettings":"MIICWgYJK","publicSettings":{"foo":"bar"}}}]} https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo Azure-WALinuxAgent-2b21de5/tests/data/wire/manifest.xml000066400000000000000000000061451462617747000231070ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.0.0 1.1.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.1.0 1.1.1 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.1.1 1.2.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.2.0 2.0.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.0.0 2.1.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.1.0 True 2.1.1http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.1.1 2.2.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.2.0 3.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.0 3.1http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.1 4.0.0.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.0 4.0.0.1http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.1 4.1.0.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__3.1 1.3.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.3.0 2.3.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.3.0 2.4.0http://mock-goal-state/host/OSTCExtensions.ExampleHandlerLinux__2.3.0 Azure-WALinuxAgent-2b21de5/tests/data/wire/manifest_deletion.xml000066400000000000000000000006001462617747000247600ustar00rootroot00000000000000 1.0.0 http://mock-goal-state/foo.bar/zar/OSTCExtensions.ExampleHandlerLinux__1.0.0 Azure-WALinuxAgent-2b21de5/tests/data/wire/multi-config/000077500000000000000000000000001462617747000231465ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/data/wire/multi-config/ext_conf_mc_disabled_extensions.xml000066400000000000000000000104301462617747000322600ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"},"parameters":[{"name":"extensionName","value":"firstRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Disabling secondExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling thirdExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling fourthExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling SingleConfig Extension"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r 
https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-2b21de5/tests/data/wire/multi-config/ext_conf_mc_update_extensions.xml000066400000000000000000000076551462617747000320120ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Disabling firstExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Disabling secondExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling thirdExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling SingleConfig extension"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-2b21de5/tests/data/wire/multi-config/ext_conf_multi_config_no_dependencies.xml000066400000000000000000000076471462617747000334540ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling firstExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling secondExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling thirdExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling SingleConfig extension"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-2b21de5/tests/data/wire/multi-config/ext_conf_with_disabled_multi_config.xml000066400000000000000000000205601462617747000331210ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler 
https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 66!"},"parameters":[{"name":"extensionName","value":"firstRunCommand2"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host Second: Hello World 66!"},"parameters":[{"name":"extensionName","value":"secondRunCommand2"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host Third: Hello World 3!"},"parameters":[{"name":"extensionName","value":"thirdRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"fileUris":["https://test.blob.core.windows.net/windowsagent/VerifyAgentRunning.ps1"],"commandToExecute":"echo Hello 666"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "EE22B5ECAC6A83B581A7CAC1772BEBD0E016649F", "protectedSettings": "MIIB0AYJKoZIhvcNAQcDoIIBwTCCAb0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEC9nmIRFUMGqQOFdBaBjrNswDQYJKoZIhvcNAQEBBQAEggEAUrwxmEAEZuDxhX1S8il8AWeaSFhMDag2DbFEw9v8VCpNM1nhikc21tVaxKfkcugczJ++x5cfM06sMj+8aBisNFUEvAKZzaUvqH91ty7DN8syaZrs0bzx34p8O42bOtGC/UoetqsAlpYrDJr+UYUgm8QFJY8UDCpBHdFZS4bO4FQdAmJ9AVwwTzYT4KI96GNDiUS8skJMycUyFT1IO08n5bSOJqo1b0qUDz28JOQdEAriR+7sCry7okyajhy76FcbD16zhwsZVNzHcQm1IqPU2htlRcWR13RqHz7FUFMFky7t+ZxKB5TAcleF04yo4xkSqLtPDWZuDRu5gEHqom0kQjBLBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECNOB8r5YkBJugCiXeeb4js/ehiWBX7qWgrPlgETtKjUq5N1Z0fxPCjPG7SQb+BqQqiX1", "publicSettings": {"UserName":"test1234"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-2b21de5/tests/data/wire/multi-config/ext_conf_with_multi_config.xml000066400000000000000000000211521462617747000312700ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"},"parameters":[{"name":"extensionName","value":"firstRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host First: Hello World 1234!"},"parameters":[{"name":"extensionName","value":"secondRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"source":{"script":"Write-Host 
Third: Hello World 3!"},"parameters":[{"name":"extensionName","value":"thirdRunCommand"}],"timeoutInSeconds":120} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"fileUris":["https://test.blob.core.windows.net/windowsagent/VerifyAgentRunning.ps1"],"commandToExecute":"echo Hello 1234"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {} } } ] } { "runtimeSettings": [ { "handlerSettings": { "protectedSettingsCertThumbprint": "EE22B5ECAC6A83B581A7CAC1772BEBD0E016649F", "protectedSettings": "MIIByAYJKoZIhvcNAQcDoIIBuTCCAbUCAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEC9nmIRFUMGqQOFdBaBjrNswDQYJKoZIhvcNAQEBBQAEggEAylzH/UuK3909SCbqecUyrd+V6EqTJ7Xe7hzMtYtfVTI3TBDnDlFLLzazawgXpsmOV96II9Bk4Kpo7rvwDuZWZulYWuBWw2q8/XPIpZ+hQg2TaV5A2l9N4gBU6JQ/6axHjCsuq7CUOpK9/Yq019I9HP2SqK8Ao4lMEKLR4AGMnoc+x8aKuFvhn+ClYF/75Pz+h1kgVGvj11LMM5u9M87M6Fie6UlbpFjZmBuZaPiyhfAxQxWqsJBsZ8AaUYzy21Wh3YEC54Pqx3n5kOzMH9Q+G5SkgJQ1SBEhf0gLdzeIKX2jEwOAhrL1tVtglUPxHxK5I9NACCvxou7xRtJgvbsYATBDBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECE8EEkK33SSegCCJOYYNsSXz9s87019rTptS0oBvlZgijlB+NQzvNpzAow==", "publicSettings": {"UserName":"test1234"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-2b21de5/tests/data/wire/multi-config/ext_conf_with_multi_config_dependencies.xml000066400000000000000000000127431462617747000340040ustar00rootroot00000000000000 Prod http://mock-goal-state/manifest_of_ga.xml Test http://mock-goal-state/manifest_of_ga.xml eastus CRP MultipleExtensionsPerHandler https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.status?sv=2018-03-28&sr=b&sk=system-1&sig=1%2b%2f4nL3kZJyUb7EKxSVGQ%2fHLpXBZxCU8Zo4diPFPv5o%3d&se=9999-01-01T00%3a00%3a00Z&sp=w { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling firstExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling secondExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling thirdExtension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling dependent SingleConfig extension"} } } ] } { "runtimeSettings": [ { "handlerSettings": { "publicSettings": {"message": "Enabling independent SingleConfig extension"} } } ] } https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmSettings?sv=2018-03-28&sr=b&sk=system-1&sig=8YHwmibhasT0r9MZgL09QmFwL7ZV%2bg%2b49QP5Zwe4ksY%3d&se=9999-01-01T00%3a00%3a00Z&sp=r https://test.blob.core.windows.net/$system/lrwinmcdn_0.0f3bfecf-f14f-4c7d-8275-9dee7310fe8c.vmHealth?sv=2018-03-28&sr=b&sk=system-1&sig=DQSxfPRZEoGBGIFl%2f4bFZ0LM9RNr9DbUEmmtkiQkWkE%3d&se=9999-01-01T00%3a00%3a00Z&sp=rwAzure-WALinuxAgent-2b21de5/tests/data/wire/remote_access_10_accounts.xml000066400000000000000000000065631462617747000263200ustar00rootroot00000000000000 1 1 testAccount1 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount2 
encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount3 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount4 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount5 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount6 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount7 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount8 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount9 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount10 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers Azure-WALinuxAgent-2b21de5/tests/data/wire/remote_access_duplicate_accounts.xml000066400000000000000000000014501462617747000300400ustar00rootroot00000000000000 1 1 testAccount encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers Azure-WALinuxAgent-2b21de5/tests/data/wire/remote_access_no_accounts.xml000066400000000000000000000002151462617747000265000ustar00rootroot00000000000000 1 1 Azure-WALinuxAgent-2b21de5/tests/data/wire/remote_access_single_account.xml000066400000000000000000000007401462617747000271650ustar00rootroot00000000000000 1 1 testAccount encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers Azure-WALinuxAgent-2b21de5/tests/data/wire/remote_access_two_accounts.xml000066400000000000000000000014521462617747000267010ustar00rootroot00000000000000 1 1 testAccount1 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers testAccount2 encryptedPasswordString 2019-01-01 Administrators RemoteDesktopUsers Azure-WALinuxAgent-2b21de5/tests/data/wire/rsa-key.pem000066400000000000000000000032501462617747000226270ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDe7cwx76yO+OjR hWHJrKt0L1ih9F/Bctyq7Ddi/v3CitVBvkQUve4k+xeT538mHyeoOuGI3QFs5mLh i535zbOFaHwfMMQI/CI4ZDtRrQh59XrJSsPytu0fXihsJ81IwNURuNDKwxYR0tKI KUuUN4YxsDSBeqvP5vjSKT05f90gniscuGvPJ6Zgyynmg56KQtSXKaetbyNzPW/4 QFmadyqsgdR7oZHEYj+1Tl6T9/tAPg/dgO55hT7WVdC8JxXeSiaDyRS1NRMFL0bC fcnLNsO4tni2WJsfuju9a4GTrWe3NQ3+vsQV5s59MtuOhoObuYNVcETYiEjBVVsf +shxRxL/AgMBAAECggEAfslt/eSbFoFYIHmkoQe0R5L57LpIj4QdHpTT91igyDkf ipGEtOtEewHXagYaWXsUmehLBwTy35W0HSTDxyQHetNu7GpWw+lqKPpQhmZL0Nkd aUg9Y1hISjPJ96E3bq5FQBwFm5wSfDaUCF68HmLpzm6xngY/mzF4yEYuDPq8r+RV SDhVtrovSImpwLbKmPdn634PqC6bPDgO5htkT/lL/TVkR3Sla3U/YYMu90m7DiAA 46DEblx0yt+zBB+mKR3TU4zIPSFiTWYs/Srsm6nUnNqjf5rvupvXFZt0/eDZat7/ L+/V5HPV0BxGIkCGt0Uv+qZYMGpC3eU+aEbByOr/wQKBgQDy+l4Rvgl0i+XzUPyw N6UrDDpxBVsZ/w48DrBEBMQqTbZxVDK77E2CeMK/JlYMFYFdIT/c9W0U7eWPqe35 kk9jVsPXc3xeoSiZvqK4CZeHEugE9OtJ4jJL1CfDXMcgPM+iSSj/QOJc5v7891QH 3gMOvmVk3Kk/I2MyBAEE6p6WHwKBgQDq4FvO77tsIZRkgmp3gPg4iImcTgwrgDxz aHqlSVc98o4jzWsUShbZTwRgfcZm+kD3eas+gkux8CevYhwjafWiukrnwu3xvUaO AKmgXU7ud/kS9bK/AT6ZpJsfoZzM/CQsConFbz0eXVb/tmipCBpyzi2yskLdk6SP pEZYISknIQKBgHwE9PzjXdoiChYekUu0q1aEoFPN4wkq2W4oJSoisKnTDrtbuaWX 4Jwm3WhJvgPe+i+55+n1T18uakzg9Hm9h03yHHYdGS8H3TxURKPhKXmlWc4l4O7O SNPRjxY1heHbiDOSWh2nVaMLuL0P1NFLLY5Z+lD4HF8AxgHib06+HoILAoGBALvg oa+jNhGlvrSzWYSkJmnaVfEwwS1e03whe9GRG/cSeb6Lx3agWSyUt1ST50tiLOuI aIGE6hW4m5X/7bAqRvFXASnoVDtFgxV91DHR0ZyRXSxcWxHMZg2yjN89gFa77hdI irHibEpIsZm0iH2FXNqusAE79J6XRlAcQKSoSenhAoGARAP9q1WaftXdK4X7L1Ut wnWJSVYMx6AsEo58SsJgNGqpbCl/vZMCwnSo6pdgO4xInu2tld3TKdPWZLoRCGCo 
PDYVM1GXj5SS8QPmq+h/6fxS65Gl0h0oHUcKXoPD+AxHn2MWWqWzxMdRuthUQATE MT+l5wgZPiEuiceY3Bp1hYk= -----END PRIVATE KEY----- Azure-WALinuxAgent-2b21de5/tests/data/wire/rsa-key.pub.pem000066400000000000000000000007031462617747000234140ustar00rootroot00000000000000-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3u3MMe+sjvjo0YVhyayr dC9YofRfwXLcquw3Yv79worVQb5EFL3uJPsXk+d/Jh8nqDrhiN0BbOZi4Yud+c2z hWh8HzDECPwiOGQ7Ua0IefV6yUrD8rbtH14obCfNSMDVEbjQysMWEdLSiClLlDeG MbA0gXqrz+b40ik9OX/dIJ4rHLhrzyemYMsp5oOeikLUlymnrW8jcz1v+EBZmncq rIHUe6GRxGI/tU5ek/f7QD4P3YDueYU+1lXQvCcV3komg8kUtTUTBS9Gwn3JyzbD uLZ4tlibH7o7vWuBk61ntzUN/r7EFebOfTLbjoaDm7mDVXBE2IhIwVVbH/rIcUcS /wIDAQAB -----END PUBLIC KEY----- Azure-WALinuxAgent-2b21de5/tests/data/wire/sample.pem000066400000000000000000000032471462617747000225430ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC3zCdThkBDYu83 M7ouc03caqyEwV6lioWbtYdnraoftbuCJrOhy+WipSCVAmhlu/tpaItuzwB9/VTw eSWfB/hB2sabVTKgU8gTQrI6ISy2ocLjqTIZuOETJuGlAIw6OXorhdUr8acZ8ohb ftZIbS9YKxbO7sQi+20sT2ugROJnO7IDGbb2vWhEhp2NAieJ8Nnq0SMv1+cZJZYk 6hiFVSl12g0egVFrRTJBvvTbPS7amLAQkauK/IxG28jZR61pMbHHX+xBg4Iayb2i qp8YnwK3qtf0stc0h9snnLnHSODva1Bo6qVBEcrkuXmtrHL2nUMsV/MgWG3HMgJJ 6Jf/wSFpAgMBAAECggEBALepsS6cvADajzK5ZPXf0NFOY6CxXnPLrWGAj5NCDftr 7bjMFbq7dngFzD46zrnClCOsDZEoF1TO3p8CYF6/Zwvfo5E7HMDrl8XvYwwFdJn3 oTlALMlZXsh1lQv+NSJFp1hwfylPbGzYV/weDeEIAkR3om4cWDCg0GJz5peb3iXK 5fimrZsnInhktloU2Ep20UepR8wbhS5WP7B2s32OULTlWiGdORUVrHJQbTN6O0NZ WzmAcsgfmW1KEBOR9sDFbAdldt8/WcLJVIfWOdFVbCbOaxrnRnZ8j8tsafziVncD QFRpNeyOHZR5S84oAPo2EIVeFCLLeo3Wit/O3IFmhhUCgYEA5jrs0VSowb/xU/Bw wm1cKnSqsub3p3GLPL4TdODYMHH56Wv8APiwcW9O1+oRZoM9M/8KXkDlfFOz10tY bMYvF8MzFKIzzi5TxaWqSWsNeXpoqtFqUed7KRh3ybncIqFAAauTwmAhAlEmGR/e AY7Oy4b2lnRU1ssIOd0VnSnAqTcCgYEAzF6746DhsInlFIQGsUZBOmUtwyu0k1kc gkWhJt5SyQHZtX1SMV2RI6CXFpUZcjv31jM30GmXdvkuj2dIHaDZB5V5BlctPJZq FH0RFxmFHXk+npLJnKKSX1H3/2PxTUsSBcFHEaPCgvIz3720bX7fqRIFtVdrcbQA cB9DARbjWl8CgYBKADyoWCbaB+EA0vLbe505RECtulF176gKgSnt0muKvsfOQFhC 06ya+WUFP4YSRjLA6MQjYYahvKG8nMoyRE1UvPhJNI2kQv3INKSUbqVpG3BTH3am Ftpebi/qliPsuZnCL60RuCZEAWNWhgisxYMwphPSblfqpl3hg290EbyMZwKBgQCs mypHQ166EozW+fcJDFQU9NVkrGoTtMR+Rj6oLEdxG037mb+sj+EAXSaeXQkj0QAt +g4eyL+zLRuk5E8lLu9+F0EjGMfNDyDC8ypW/yfNT9SSa1k6IJhNR1aUbZ2kcU3k bGwQuuWSYOttAbT8cZaHHgCSOyY03xkrmUunBOS6MwKBgBK4D0Uv7ZDf3Y38A07D MblDQj3wZeFu6IWi9nVT12U3WuEJqQqqxWnWmETa+TS/7lhd0GjTB+79+qOIhmls XSAmIS/rBUGlk5f9n+vBjQkpbqAvcXV7I/oQASpVga1xB9EuMvXc9y+x/QfmrYVM zqxRWJIMASPLiQr79V0zXGXP -----END PRIVATE KEY-----Azure-WALinuxAgent-2b21de5/tests/data/wire/shared_config.xml000066400000000000000000000046351462617747000240760ustar00rootroot00000000000000 Azure-WALinuxAgent-2b21de5/tests/data/wire/sshd_config000066400000000000000000000047771462617747000230010ustar00rootroot00000000000000# Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key HostKey /etc/ssh/ssh_host_ecdsa_key HostKey /etc/ssh/ssh_host_ed25519_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 1024 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin without-password 
StrictModes yes

RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile	%h/.ssh/authorized_keys

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes

# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes

# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

#MaxStartups 10:30:60
#Banner /etc/issue.net

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes Match group root Azure-WALinuxAgent-2b21de5/tests/data/wire/trans_cert000066400000000000000000000021471462617747000226440ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDEzCCAfugAwIBAgIUDcHXiRT74wOkLZYnyoZibT9+2G8wDQYJKoZIhvcNAQEL BQAwGTEXMBUGA1UEAwwOTGludXhUcmFuc3BvcnQwHhcNMjIwODEyMTgzMTM5WhcN MjQwODExMTgzMTM5WjAZMRcwFQYDVQQDDA5MaW51eFRyYW5zcG9ydDCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBAK/XWh+Djc2WYoJ/8FkZd8OV3V47fID5 WV8hSBz/i/hVUKHhCWTQfE4VcQBGYFyK8lMKIBV7t6Bq05TQGuB8148HSjIboDx3 Ndd0C/+lYcBE1izMrHKZYhcy7lSlEUk+y5iye0cA5k/dlJhfwoxWolw0E2dMOjlY qzkEGJdyS6+hFddo696HzD7OYhxh1r50aHPWqY8NnC51487loOtPs4LYA2bd3HSg ECpOtKzyJW+GP0H2vBa7MrXrZOnD1K2j2xb8nTnYnpNtlmnZPj7VYFsLOlsq547X nFiSptPWslbVogkUVkCZlAqkMcJ/OtH70ZVjLyjFd6j7J/Wy8MrA7pECAwEAAaNT MFEwHQYDVR0OBBYEFGXBvV/uWivFWRWPHiVfY/kSJqufMB8GA1UdIwQYMBaAFGXB vV/uWivFWRWPHiVfY/kSJqufMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL BQADggEBABjatix/q90u6X/Jar/UkKiL2zx36s4huPU9KG8F51g48cYRjrvpK4+H K6avCGArl7h1gczaGS7LTOHFUU25eg/BBcKcXEO3aryQph2A167ip89UM55LxlnC QVVV9HAnEw5qAoh0wlZ65fVN+SE8FdasYlbbbp7c4At/LZruSj+IIapZDwwJxcBk YlSOa34v1Uay09+Hgu95dYQjI9txJW1ViRVlDpKbieGTzROI6s3uk+3rhxxlH2Zi Z9UqNmPfH9UE1xgSk/wkMWW22h/x51qIRKAZ4EzmdHVXdT/BarIuHxtHH8hIPNSL FjetCMVZNBej2HXL9cY5UVFYCG6JG0Q= -----END CERTIFICATE----- Azure-WALinuxAgent-2b21de5/tests/data/wire/trans_prv000066400000000000000000000032501462617747000225120ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCv11ofg43NlmKC f/BZGXfDld1eO3yA+VlfIUgc/4v4VVCh4Qlk0HxOFXEARmBcivJTCiAVe7egatOU 0BrgfNePB0oyG6A8dzXXdAv/pWHARNYszKxymWIXMu5UpRFJPsuYsntHAOZP3ZSY X8KMVqJcNBNnTDo5WKs5BBiXckuvoRXXaOveh8w+zmIcYda+dGhz1qmPDZwudePO 5aDrT7OC2ANm3dx0oBAqTrSs8iVvhj9B9rwWuzK162Tpw9Sto9sW/J052J6TbZZp 2T4+1WBbCzpbKueO15xYkqbT1rJW1aIJFFZAmZQKpDHCfzrR+9GVYy8oxXeo+yf1 svDKwO6RAgMBAAECggEAEwBogsNKjY7Usll09Yvk/0OwmkA/YgiP+dG04z1SONGv Vu7kfvpwlFeI0IjKXPW+3e5YLTojS7h/iLM8VEnpWVFmWSfXFvGi5ddqfIO4nnhR 1KGBeRjOGsesLYVw6sNYaPXQkImuWa8OIbEnatbp0KDn/9+i4xOL3StuJN97Ak1u Giq4gwFbag4/QctBZ+5P0t77W+uzWcvEyNgK6rndfPWxqwmJSBFchY6O3s1l6NY8 vSmyYhYRgFXEgX0nDumGfEXsF1Cj9tzYT2DUZc2f6+UCtXCD49qnoKawLhCrl5Uh QGs82TR5FSn7zLW4MbFody6p8UDw6pYiWlPPR7fmgQKBgQDO3j5RCXf0276K56BA rFpOOmivq3fxElRVCSRRRVPKHDYKQiPKnNXoa/pSl8a6CfjJaJzkNj+wTEFdQRGm Ia123kR/1S21/zgGZNmbUGby+A4fKxBY101/JQweucRN7aw3XLKPXhOL1NPyKdWh dARvjZvEl1qR6s07Y6jZgpkGqQKBgQDZmqVWvUgACdxkCYEzDf3Fc2G/8oL4VxWJ HHr5zib+DDhTfKrgQyA9CZ97stZfrR7KYnsLJH8jnj/w/CNOI0G+41KroICRsnjT 5bm7/sT5uwLwu+FAQzITiehj7Te1lwsqtS8yOnXBTQ3hzaw9yhAsuhefx+WT2UCd Y8Od13nhqQKBgQCR2LR8s71D/81F52nfTuRYNOvrtmtYpkCYt1pIhiU94EflUZ4k UhCpzb7tjh5IuZEShtPePbUHWavX0HFd/G5s2OXYbnbM0oQwVdfpnXUHpgVmyhi7 WghENN1nqDcTbha17X/ifkQvmLxZBk+chcw+zcrdfowXRkCtt2Sq/V1gCQKBgH/w UK3C9AYxxgZ7IB9oZoAk6p/0cdSZPuwydotRDdPoU2WissTQMrAwbDhKWYg/PQ84 /6b5elbywB1r4UYbrJgTB5Qo9e6zxB6xvpYtoJpDveLUVAd4eoTKXHwECPEXMVWW 2XzqqjlQmIzeZBqgJwplD2a+HNjkrvzanzS6b8qhAoGBAIun0EEc/Zc0ZxzgDPen A9/7jV++QCrNsevxGH8yrhPP4UqTVSHGR9H+RAif7zTBTn0OwzSBz6hFbPmxum3m cKabsKVN3poz3TBvfyhgjYosMWvCHpNhif09lyd/s2FezPGyK1Nyf5cKNEWjFGKw +fCPJ/Ihp4iwacNU1Pu9m050 -----END PRIVATE KEY----- Azure-WALinuxAgent-2b21de5/tests/data/wire/trans_pub000066400000000000000000000007031462617747000224710ustar00rootroot00000000000000-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAr9daH4ONzZZign/wWRl3 w5XdXjt8gPlZXyFIHP+L+FVQoeEJZNB8ThVxAEZgXIryUwogFXu3oGrTlNAa4HzX jwdKMhugPHc113QL/6VhwETWLMyscpliFzLuVKURST7LmLJ7RwDmT92UmF/CjFai XDQTZ0w6OVirOQQYl3JLr6EV12jr3ofMPs5iHGHWvnRoc9apjw2cLnXjzuWg60+z 
gtgDZt3cdKAQKk60rPIlb4Y/Qfa8Frsytetk6cPUraPbFvydOdiek22Wadk+PtVg Wws6WyrnjtecWJKm09ayVtWiCRRWQJmUCqQxwn860fvRlWMvKMV3qPsn9bLwysDu kQIDAQAB -----END PUBLIC KEY----- Azure-WALinuxAgent-2b21de5/tests/data/wire/version_info.xml000066400000000000000000000003361462617747000237750ustar00rootroot00000000000000 2012-11-30 2010-12-15 2010-28-10 Azure-WALinuxAgent-2b21de5/tests/ga/000077500000000000000000000000001462617747000172615ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/ga/__init__.py000066400000000000000000000011651462617747000213750ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/ga/test_agent_update_handler.py000066400000000000000000001047601462617747000250370ustar00rootroot00000000000000import contextlib import json import os from azurelinuxagent.common import conf from azurelinuxagent.common.event import WALAEventOperation from azurelinuxagent.common.exception import AgentUpgradeExitException from azurelinuxagent.common.future import ustr, httpclient from azurelinuxagent.common.protocol.restapi import VMAgentUpdateStatuses from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.version import CURRENT_VERSION, AGENT_NAME from azurelinuxagent.ga.agent_update_handler import get_agent_update_handler from azurelinuxagent.ga.guestagent import GuestAgent from tests.ga.test_update import UpdateTestCase from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.wire_protocol_data import DATA_FILE from tests.lib.tools import clear_singleton_instances, load_bin_data, patch class TestAgentUpdate(UpdateTestCase): def setUp(self): UpdateTestCase.setUp(self) # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) @contextlib.contextmanager def _get_agent_update_handler(self, test_data=None, autoupdate_frequency=0.001, autoupdate_enabled=True, protocol_get_error=False, mock_get_header=None, mock_put_header=None): # Default to DATA_FILE of test_data parameter raises the pylint warning # W0102: Dangerous default value DATA_FILE (builtins.dict) as argument (dangerous-default-value) test_data = DATA_FILE if test_data is None else test_data with mock_wire_protocol(test_data) as protocol: def get_handler(url, **kwargs): if HttpRequestPredicates.is_agent_package_request(url): if not protocol_get_error: agent_pkg = load_bin_data(self._get_agent_file_name(), self._agent_zip_dir) return MockHttpResponse(status=httpclient.OK, body=agent_pkg) else: return MockHttpResponse(status=httpclient.SERVICE_UNAVAILABLE) return protocol.mock_wire_data.mock_http_get(url, **kwargs) def put_handler(url, *args, **_): if HttpRequestPredicates.is_host_plugin_status_request(url): # Skip reading the HostGA request data as its encoded return 
MockHttpResponse(status=500) protocol.aggregate_status = json.loads(args[0]) return MockHttpResponse(status=201) http_get_handler = mock_get_header if mock_get_header else get_handler http_put_handler = mock_put_header if mock_put_header else put_handler protocol.set_http_handlers(http_get_handler=http_get_handler, http_put_handler=http_put_handler) with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=autoupdate_enabled): with patch("azurelinuxagent.common.conf.get_autoupdate_frequency", return_value=autoupdate_frequency): with patch("azurelinuxagent.common.conf.get_autoupdate_gafamily", return_value="Prod"): with patch("azurelinuxagent.common.conf.get_enable_ga_versioning", return_value=True): with patch("azurelinuxagent.common.event.EventLogger.add_event") as mock_telemetry: agent_update_handler = get_agent_update_handler(protocol) agent_update_handler._protocol = protocol yield agent_update_handler, mock_telemetry def _assert_agent_directories_available(self, versions): for version in versions: self.assertTrue(os.path.exists(self.agent_dir(version)), "Agent directory {0} not found".format(version)) def _assert_agent_directories_exist_and_others_dont_exist(self, versions): self._assert_agent_directories_available(versions=versions) other_agents = [agent_dir for agent_dir in self.agent_dirs() if agent_dir not in [self.agent_dir(version) for version in versions]] self.assertFalse(any(other_agents), "All other agents should be purged from agent dir: {0}".format(other_agents)) def _assert_agent_rsm_version_in_goal_state(self, mock_telemetry, inc=1, version="9.9.9.10"): upgrade_event_msgs = [kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if 'New agent version:{0} requested by RSM in Goal state incarnation_{1}'.format(version, inc) in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade] self.assertEqual(1, len(upgrade_event_msgs), "Did not find the event indicating that the agent requested version found. Got: {0}".format( mock_telemetry.call_args_list)) def _assert_update_discovered_from_agent_manifest(self, mock_telemetry, inc=1, version="9.9.9.10"): upgrade_event_msgs = [kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if 'Self-update is ready to upgrade the new agent: {0} now before processing the goal state: incarnation_{1}'.format(version, inc) in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade] self.assertEqual(1, len(upgrade_event_msgs), "Did not find the event indicating that the new version found. Got: {0}".format( mock_telemetry.call_args_list)) def _assert_no_agent_package_telemetry_emitted(self, mock_telemetry, version="9.9.9.10"): upgrade_event_msgs = [kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if 'No matching package found in the agent manifest for version: {0}'.format(version) in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade] self.assertEqual(1, len(upgrade_event_msgs), "Did not find the event indicating that the agent package not found. 
Got: {0}".format( mock_telemetry.call_args_list)) def _assert_agent_exit_process_telemetry_emitted(self, message): self.assertIn("Current Agent {0} completed all update checks, exiting current process".format(CURRENT_VERSION), message) def test_it_should_not_update_when_autoupdate_disabled(self): self.prepare_agents(count=1) with self._get_agent_update_handler(autoupdate_enabled=False) as (agent_update_handler, mock_telemetry): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)]) self.assertEqual(0, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "requesting a new agent version" in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "should not check for rsm version") def test_it_should_update_to_largest_version_if_ga_versioning_disabled(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with patch.object(conf, "get_enable_ga_versioning", return_value=False): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_update_discovered_from_agent_manifest(mock_telemetry, version="99999.0.0.0") self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_update_to_largest_version_if_time_window_not_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml" with self._get_agent_update_handler(test_data=data_file, autoupdate_frequency=10) as (agent_update_handler, _): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") agent_update_handler._protocol.mock_wire_data.set_ga_manifest("wire/ga_manifest.xml") agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") def test_it_should_update_to_largest_version_if_time_window_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml" with patch("azurelinuxagent.common.conf.get_self_update_hotfix_frequency", return_value=0.001): with patch("azurelinuxagent.common.conf.get_self_update_regular_frequency", return_value=0.001): with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") agent_update_handler._protocol.mock_wire_data.set_ga_manifest("wire/ga_manifest.xml") agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) 
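# Illustrative sketch (not agent code; names here are hypothetical): the two
# frequencies patched above gate how often self-update may act. The check the
# test exercises reduces to a simple elapsed-window comparison:
#
#     import time
#
#     def update_window_elapsed(last_attempted_update_time, update_frequency):
#         # Self-update only proceeds once the configured window has passed.
#         return time.time() - last_attempted_update_time >= update_frequency
#
# With both windows patched to 0.001 seconds, the window has elapsed by the
# second run() call (after the goal state is refreshed), which is when the
# 99999.0.0.0 package is expected to be downloaded.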
self._assert_update_discovered_from_agent_manifest(mock_telemetry, inc=2, version="99999.0.0.0") self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_allow_update_if_largest_version_below_current_version(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ga_manifest"] = "wire/ga_manifest_no_upgrade.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)]) def test_it_should_update_to_largest_version_if_rsm_version_not_available(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_update_discovered_from_agent_manifest(mock_telemetry, version="99999.0.0.0") self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_download_manifest_again_if_last_attempted_download_time_not_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf.xml" with self._get_agent_update_handler(test_data=data_file, autoupdate_frequency=10, protocol_get_error=True) as (agent_update_handler, _): # making multiple agent update attempts agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) mock_wire_data = agent_update_handler._protocol.mock_wire_data self.assertEqual(1, mock_wire_data.call_counts['manifest_of_ga.xml'], "Agent manifest should not be downloaded again") def test_it_should_download_manifest_if_last_attempted_download_time_is_elapsed(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf.xml" with self._get_agent_update_handler(test_data=data_file, autoupdate_frequency=0.00001, protocol_get_error=True) as (agent_update_handler, _): # making multiple agent update attempts agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) mock_wire_data = agent_update_handler._protocol.mock_wire_data self.assertEqual(3, mock_wire_data.call_counts['manifest_of_ga.xml'], "Agent manifest should be downloaded in all attempts") def test_it_should_not_agent_update_if_rsm_version_is_same_as_current_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): 
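# The block below follows the goal-state refresh pattern used throughout this
# file: set the desired agent-family version on the mock wire data, bump the
# incarnation so the agent observes a new goal state, then refresh the client:
#
#     agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(...)
#     agent_update_handler._protocol.mock_wire_data.set_incarnation(2)
#     agent_update_handler._protocol.client.update_goal_state()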
agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family( str(CURRENT_VERSION)) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertEqual(0, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "requesting a new agent version" in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "rsm version should be same as current version") self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") def test_it_should_upgrade_agent_if_rsm_version_is_available_greater_than_current_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, version="9.9.9.10") self._assert_agent_directories_exist_and_others_dont_exist(versions=["9.9.9.10", str(CURRENT_VERSION)]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_downgrade_agent_if_rsm_version_is_available_less_than_current_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") downgraded_version = "2.5.0" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgraded_version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, inc=2, version=downgraded_version) self._assert_agent_directories_exist_and_others_dont_exist( versions=[downgraded_version, str(CURRENT_VERSION)]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_do_rsm_update_if_gs_not_updated_in_next_attempt(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" version = "5.2.0.1" with self._get_agent_update_handler(test_data=data_file, autoupdate_frequency=10) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, inc=2, version=version) self._assert_no_agent_package_telemetry_emitted(mock_telemetry, version=version) # Now we shouldn't check for download if update not allowed(GS not updated).This run should not add new 
logs agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), False) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, inc=2, version=version) self._assert_no_agent_package_telemetry_emitted(mock_telemetry, version=version) def test_it_should_not_downgrade_below_daemon_version(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") downgraded_version = "1.2.0" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgraded_version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir(downgraded_version)), "New agent directory should not be found") def test_it_should_update_to_largest_version_if_vm_not_enabled_for_rsm_upgrades(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf_vm_not_enabled_for_rsm_upgrades.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): with self.assertRaises(AgentUpgradeExitException) as context: agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_update_discovered_from_agent_manifest(mock_telemetry, version="99999.0.0.0") self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), "99999.0.0.0"]) self._assert_agent_exit_process_telemetry_emitted(ustr(context.exception.reason)) def test_it_should_not_update_to_version_if_version_not_from_rsm(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_version_not_from_rsm.xml" downgraded_version = "2.5.0" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgraded_version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist( versions=[str(CURRENT_VERSION)]) self.assertFalse(os.path.exists(self.agent_dir(downgraded_version)), "New agent directory should not be found") def test_handles_if_rsm_version_not_found_in_pkgs_to_download(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") version = "5.2.0.4" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family(version) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_rsm_version_in_goal_state(mock_telemetry, inc=2, version=version) self.assertFalse(os.path.exists(self.agent_dir(version)), "New agent 
directory should not be found") self._assert_no_agent_package_telemetry_emitted(mock_telemetry, version=version) def test_handles_missing_agent_family(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_missing_family.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "No manifest links found for agent family" in kwarg[ 'message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "Agent manifest should not be in GS") # making multiple agent update attempts and assert only one time logged agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), False) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), False) self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "No manifest links found for agent family" in kwarg[ 'message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "Agent manifest error should be logged once if it's same goal state") def test_it_should_report_update_status_with_success(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family( str(CURRENT_VERSION)) agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) vm_agent_update_status = agent_update_handler.get_vmagent_update_status() self.assertEqual(VMAgentUpdateStatuses.Success, vm_agent_update_status.status) self.assertEqual(0, vm_agent_update_status.code) self.assertEqual(str(CURRENT_VERSION), vm_agent_update_status.expected_version) def test_it_should_report_update_status_with_error_on_download_fail(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self._get_agent_update_handler(test_data=data_file, protocol_get_error=True) as (agent_update_handler, _): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) vm_agent_update_status = agent_update_handler.get_vmagent_update_status() self.assertEqual(VMAgentUpdateStatuses.Error, vm_agent_update_status.status) self.assertEqual(1, vm_agent_update_status.code) self.assertEqual("9.9.9.10", vm_agent_update_status.expected_version) self.assertIn("Failed to download agent package from all URIs", vm_agent_update_status.message) def test_it_should_report_update_status_with_missing_rsm_version_error(self): data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf_version_missing_in_agent_family.xml" with self._get_agent_update_handler(test_data=data_file, protocol_get_error=True) as (agent_update_handler, _): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) vm_agent_update_status = agent_update_handler.get_vmagent_update_status() self.assertEqual(VMAgentUpdateStatuses.Error, vm_agent_update_status.status) self.assertEqual(1, 
vm_agent_update_status.code) self.assertIn("missing version property. So, skipping agent update", vm_agent_update_status.message) def test_it_should_not_log_same_error_next_hours(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_missing_family.xml" # Set the test environment by adding 20 random agents to the agent directory self.prepare_agents() self.assertEqual(20, self.agent_count(), "Agent directories not set properly") with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "No manifest links found for agent family" in kwarg[ 'message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "Agent manifest should not be in GS") agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "No manifest links found for agent family" in kwarg[ 'message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "Agent manifest should not be in GS") def test_it_should_save_rsm_state_of_the_most_recent_goal_state(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): with self.assertRaises(AgentUpgradeExitException): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) state_file = os.path.join(conf.get_lib_dir(), "rsm_update.json") self.assertTrue(os.path.exists(state_file), "The rsm state file was not saved (can't find {0})".format(state_file)) # check if state gets updated if most recent goal state has different values agent_update_handler._protocol.mock_wire_data.set_extension_config_is_vm_enabled_for_rsm_upgrades("False") agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() with self.assertRaises(AgentUpgradeExitException): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self.assertFalse(os.path.exists(state_file), "The rsm file should be removed (file: {0})".format(state_file)) def test_it_should_not_update_to_latest_if_flag_is_disabled(self): self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf.xml" with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, _): with patch("azurelinuxagent.common.conf.get_auto_update_to_latest_version", return_value=False): agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)]) def test_it_should_continue_with_update_if_number_of_update_attempts_less_than_3(self): data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" latest_version = self.prepare_agents(count=2) self.expand_agents() latest_path = os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, latest_version)) agent = GuestAgent.from_installed_agent(latest_path) # marking agent as bad agent on first attempt agent.mark_failure(is_fatal=True) agent.inc_update_attempt_count() self.assertTrue(agent.is_blacklisted, "Agent should be blacklisted") self.assertEqual(1, 
agent.get_update_attempt_count(), "Agent update attempts should be 1") with self._get_agent_update_handler(test_data=data_file) as (agent_update_handler, mock_telemetry): # Rest 2 attempts it should continue with update even agent is marked as bad agent in first attempt for i in range(2): with self.assertRaises(AgentUpgradeExitException): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family( str(latest_version)) agent_update_handler._protocol.mock_wire_data.set_version_in_ga_manifest(str(latest_version)) agent_update_handler._protocol.mock_wire_data.set_incarnation(i+2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION), str(latest_version)]) agent = GuestAgent.from_installed_agent(latest_path) self.assertFalse(agent.is_blacklisted, "Agent should not be blacklisted") self.assertEqual(i+2, agent.get_update_attempt_count(), "Agent update attempts should be {0}".format(i+2)) # check if next update is not attempted agent.mark_failure(is_fatal=True) agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) agent = GuestAgent.from_installed_agent(latest_path) self.assertTrue(agent.is_blacklisted, "Agent should be blacklisted") self.assertEqual(3, agent.get_update_attempt_count(), "Agent update attempts should be 3") self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "Attempted enough update retries for version: {0} but still agent not recovered from bad state".format(latest_version) in kwarg[ 'message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "Update is not allowed after 3 attempts") def test_it_should_fail_the_update_if_agent_pkg_is_invalid(self): agent_uri = 'https://foo.blob.core.windows.net/bar/OSTCExtensions.WALinuxAgent__9.9.9.10' def http_get_handler(uri, *_, **__): if uri in (agent_uri, 'http://168.63.129.16:32526/extensionArtifact'): response = load_bin_data("ga/WALinuxAgent-9.9.9.10-no_manifest.zip") return MockHttpResponse(status=httpclient.OK, body=response) return None self.prepare_agents(count=1) data_file = DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self._get_agent_update_handler(test_data=data_file, mock_get_header=http_get_handler) as (agent_update_handler, mock_telemetry): agent_update_handler._protocol.mock_wire_data.set_version_in_agent_family("9.9.9.10") agent_update_handler._protocol.mock_wire_data.set_incarnation(2) agent_update_handler._protocol.client.update_goal_state() agent_update_handler.run(agent_update_handler._protocol.get_goal_state(), True) self._assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)]) self.assertEqual(1, len([kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if "Downloaded agent package: WALinuxAgent-9.9.9.10 is missing agent handler manifest file" in kwarg['message'] and kwarg[ 'op'] == WALAEventOperation.AgentUpgrade]), "Agent update should fail") Azure-WALinuxAgent-2b21de5/tests/ga/test_cgroupapi.py000066400000000000000000000270041462617747000226660ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.4+ and Openssl 1.0+ # from __future__ import print_function import os import re import subprocess import tempfile from azurelinuxagent.ga.cgroupapi import CGroupsApi, SystemdCgroupsApi from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.common.osutil import systemd from azurelinuxagent.common.utils import fileutil from tests.lib.mock_cgroup_environment import mock_cgroup_environment from tests.lib.tools import AgentTestCase, patch, mock_sleep from tests.lib.cgroups_tools import CGroupsTools class _MockedFileSystemTestCase(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.cgroups_file_system_root = os.path.join(self.tmp_dir, "cgroup") os.mkdir(self.cgroups_file_system_root) os.mkdir(os.path.join(self.cgroups_file_system_root, "cpu")) os.mkdir(os.path.join(self.cgroups_file_system_root, "memory")) self.mock_cgroups_file_system_root = patch("azurelinuxagent.ga.cgroupapi.CGROUPS_FILE_SYSTEM_ROOT", self.cgroups_file_system_root) self.mock_cgroups_file_system_root.start() def tearDown(self): self.mock_cgroups_file_system_root.stop() AgentTestCase.tearDown(self) class CGroupsApiTestCase(_MockedFileSystemTestCase): def test_cgroups_should_be_supported_only_on_ubuntu16_centos7dot4_redhat7dot4_and_later_versions(self): test_cases = [ (['ubuntu', '16.04', 'xenial'], True), (['ubuntu', '16.10', 'yakkety'], True), (['ubuntu', '18.04', 'bionic'], True), (['ubuntu', '18.10', 'cosmic'], True), (['ubuntu', '20.04', 'focal'], True), (['ubuntu', '20.10', 'groovy'], True), (['centos', '7.4', 'Source'], False), (['redhat', '7.4', 'Maipo'], False), (['centos', '7.5', 'Source'], False), (['centos', '7.3', 'Maipo'], False), (['redhat', '7.2', 'Maipo'], False), (['centos', '7.8', 'Source'], False), (['redhat', '7.8', 'Maipo'], False), (['redhat', '7.9.1908', 'Core'], False), (['centos', '8.1', 'Source'], True), (['redhat', '8.2', 'Maipo'], True), (['redhat', '8.2.2111', 'Core'], True), (['redhat', '9.1', 'Core'], False), (['centos', '9.1', 'Source'], False), (['bigip', '15.0.1', 'Final'], False), (['gaia', '273.562', 'R80.30'], False), (['debian', '9.1', ''], False), ] for (distro, supported) in test_cases: with patch("azurelinuxagent.ga.cgroupapi.get_distro", return_value=distro): self.assertEqual(CGroupsApi.cgroups_supported(), supported, "cgroups_supported() failed on {0}".format(distro)) class SystemdCgroupsApiTestCase(AgentTestCase): def test_get_systemd_version_should_return_a_version_number(self): with mock_cgroup_environment(self.tmp_dir): version_info = systemd.get_version() found = re.search(r"systemd \d+", version_info) is not None self.assertTrue(found, "Could not determine the systemd version: {0}".format(version_info)) def test_get_cpu_and_memory_mount_points_should_return_the_cgroup_mount_points(self): with mock_cgroup_environment(self.tmp_dir): cpu, memory = SystemdCgroupsApi().get_cgroup_mount_points() self.assertEqual(cpu, '/sys/fs/cgroup/cpu,cpuacct', "The mount point for the CPU controller is incorrect") self.assertEqual(memory, '/sys/fs/cgroup/memory', "The mount point for the memory controller is incorrect") def 
test_get_service_cgroup_paths_should_return_the_cgroup_mount_points(self): with mock_cgroup_environment(self.tmp_dir): cpu, memory = SystemdCgroupsApi().get_unit_cgroup_paths("extension.service") self.assertIn(cpu, '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service', "The mount point for the CPU controller is incorrect") self.assertIn(memory, '/sys/fs/cgroup/memory/system.slice/extension.service', "The mount point for the memory controller is incorrect") def test_get_cpu_and_memory_cgroup_relative_paths_for_process_should_return_the_cgroup_relative_paths(self): with mock_cgroup_environment(self.tmp_dir): cpu, memory = SystemdCgroupsApi.get_process_cgroup_relative_paths('self') self.assertEqual(cpu, "system.slice/walinuxagent.service", "The relative path for the CPU cgroup is incorrect") self.assertEqual(memory, "system.slice/walinuxagent.service", "The relative memory for the CPU cgroup is incorrect") def test_get_cgroup2_controllers_should_return_the_v2_cgroup_controllers(self): with mock_cgroup_environment(self.tmp_dir): mount_point, controllers = SystemdCgroupsApi.get_cgroup2_controllers() self.assertEqual(mount_point, "/sys/fs/cgroup/unified", "Invalid mount point for V2 cgroups") self.assertIn("cpu", controllers, "The CPU controller is not in the list of V2 controllers") self.assertIn("memory", controllers, "The memory controller is not in the list of V2 controllers") def test_get_unit_property_should_return_the_value_of_the_given_property(self): with mock_cgroup_environment(self.tmp_dir): cpu_accounting = systemd.get_unit_property("walinuxagent.service", "CPUAccounting") self.assertEqual(cpu_accounting, "no", "Property {0} of {1} is incorrect".format("CPUAccounting", "walinuxagent.service")) def assert_cgroups_created(self, extension_cgroups): self.assertEqual(len(extension_cgroups), 2, 'start_extension_command did not return the expected number of cgroups') cpu_found = memory_found = False for cgroup in extension_cgroups: match = re.match( r'^/sys/fs/cgroup/(cpu|memory)/system.slice/Microsoft.Compute.TestExtension_1\.2\.3\_([a-f0-9-]+)\.scope$', cgroup.path) self.assertTrue(match is not None, "Unexpected path for cgroup: {0}".format(cgroup.path)) if match.group(1) == 'cpu': cpu_found = True if match.group(1) == 'memory': memory_found = True self.assertTrue(cpu_found, 'start_extension_command did not return a cpu cgroup') self.assertTrue(memory_found, 'start_extension_command did not return a memory cgroup') @patch('time.sleep', side_effect=lambda _: mock_sleep()) def test_start_extension_command_should_return_the_command_output(self, _): original_popen = subprocess.Popen def mock_popen(command, *args, **kwargs): if command.startswith('systemd-run --property'): command = "echo TEST_OUTPUT" return original_popen(command, *args, **kwargs) with mock_cgroup_environment(self.tmp_dir): with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as output_file: with patch("subprocess.Popen", side_effect=mock_popen) as popen_patch: # pylint: disable=unused-variable command_output = SystemdCgroupsApi().start_extension_command( extension_name="Microsoft.Compute.TestExtension-1.2.3", command="A_TEST_COMMAND", cmd_name="test", shell=True, timeout=300, cwd=self.tmp_dir, env={}, stdout=output_file, stderr=output_file) self.assertIn("[stdout]\nTEST_OUTPUT\n", command_output, "The test output was not captured") @patch('time.sleep', side_effect=lambda _: mock_sleep()) def test_start_extension_command_should_execute_the_command_in_a_cgroup(self, _): with mock_cgroup_environment(self.tmp_dir): 
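# start_extension_command() launches the extension through systemd-run, which
# places the process in a transient scope; per assert_cgroups_created above,
# the resulting cgroup paths are expected to match:
#
#     /sys/fs/cgroup/cpu/system.slice/Microsoft.Compute.TestExtension_1.2.3_<uuid>.scope
#     /sys/fs/cgroup/memory/system.slice/Microsoft.Compute.TestExtension_1.2.3_<uuid>.scope
#
# and CGroupsTelemetry then tracks a cpu and a memory cgroup for the extension
# (see the assertions on CGroupsTelemetry._tracked that follow).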
SystemdCgroupsApi().start_extension_command( extension_name="Microsoft.Compute.TestExtension-1.2.3", command="test command", cmd_name="test", shell=False, timeout=300, cwd=self.tmp_dir, env={}, stdout=subprocess.PIPE, stderr=subprocess.PIPE) tracked = CGroupsTelemetry._tracked self.assertTrue( any(cg for cg in tracked.values() if cg.name == 'Microsoft.Compute.TestExtension-1.2.3' and 'cpu' in cg.path), "The extension's CPU is not being tracked") self.assertTrue( any(cg for cg in tracked.values() if cg.name == 'Microsoft.Compute.TestExtension-1.2.3' and 'memory' in cg.path), "The extension's Memory is not being tracked") @patch('time.sleep', side_effect=lambda _: mock_sleep()) def test_start_extension_command_should_use_systemd_to_execute_the_command(self, _): with mock_cgroup_environment(self.tmp_dir): with patch("subprocess.Popen", wraps=subprocess.Popen) as popen_patch: SystemdCgroupsApi().start_extension_command( extension_name="Microsoft.Compute.TestExtension-1.2.3", command="the-test-extension-command", cmd_name="test", timeout=300, shell=True, cwd=self.tmp_dir, env={}, stdout=subprocess.PIPE, stderr=subprocess.PIPE) extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if "the-test-extension-command" in args[0]] self.assertEqual(1, len(extension_calls), "The extension should have been invoked exactly once") self.assertIn("systemd-run", extension_calls[0], "The extension should have been invoked using systemd") class SystemdCgroupsApiMockedFileSystemTestCase(_MockedFileSystemTestCase): def test_cleanup_legacy_cgroups_should_remove_legacy_cgroups(self): # Set up a mock /var/run/waagent.pid file daemon_pid_file = os.path.join(self.tmp_dir, "waagent.pid") fileutil.write_file(daemon_pid_file, "42\n") # Set up old controller cgroups, but do not add the daemon's PID to them legacy_cpu_cgroup = CGroupsTools.create_legacy_agent_cgroup(self.cgroups_file_system_root, "cpu", '') legacy_memory_cgroup = CGroupsTools.create_legacy_agent_cgroup(self.cgroups_file_system_root, "memory", '') with patch("azurelinuxagent.ga.cgroupapi.get_agent_pid_file_path", return_value=daemon_pid_file): legacy_cgroups = SystemdCgroupsApi().cleanup_legacy_cgroups() self.assertEqual(legacy_cgroups, 2, "cleanup_legacy_cgroups() did not find all the expected cgroups") self.assertFalse(os.path.exists(legacy_cpu_cgroup), "cleanup_legacy_cgroups() did not remove the CPU legacy cgroup") self.assertFalse(os.path.exists(legacy_memory_cgroup), "cleanup_legacy_cgroups() did not remove the memory legacy cgroup") Azure-WALinuxAgent-2b21de5/tests/ga/test_cgroupconfigurator.py000066400000000000000000001716371462617747000246330ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.4+ and Openssl 1.0+ # from __future__ import print_function import contextlib import os import random import re import subprocess import tempfile import time import threading from nose.plugins.attrib import attr from azurelinuxagent.common import conf from azurelinuxagent.ga.cgroup import AGENT_NAME_TELEMETRY, MetricsCounter, MetricValue, MetricsCategory, CpuCgroup from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator, DisableCgroups from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.common.event import WALAEventOperation from azurelinuxagent.common.exception import CGroupsException, ExtensionError, ExtensionErrorCodes, \ AgentMemoryExceededException from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils import shellutil, fileutil from tests.lib.mock_environment import MockCommand from tests.lib.mock_cgroup_environment import mock_cgroup_environment, UnitFilePaths from tests.lib.tools import AgentTestCase, patch, mock_sleep, i_am_root, data_dir, is_python_version_26_or_34, skip_if_predicate_true from tests.lib.miscellaneous_tools import format_processes, wait_for class CGroupConfiguratorSystemdTestCase(AgentTestCase): @classmethod def tearDownClass(cls): CGroupConfigurator._instance = None AgentTestCase.tearDownClass() @contextlib.contextmanager def _get_cgroup_configurator(self, initialize=True, enable=True, mock_commands=None): CGroupConfigurator._instance = None configurator = CGroupConfigurator.get_instance() CGroupsTelemetry.reset() with mock_cgroup_environment(self.tmp_dir) as mock_environment: if mock_commands is not None: for command in mock_commands: mock_environment.add_command(command) configurator.mocks = mock_environment if initialize: if not enable: with patch.object(configurator, "enable"): configurator.initialize() else: configurator.initialize() yield configurator def test_initialize_should_enable_cgroups(self): with self._get_cgroup_configurator() as configurator: self.assertTrue(configurator.enabled(), "cgroups were not enabled") def test_initialize_should_start_tracking_the_agent_cgroups(self): with self._get_cgroup_configurator() as configurator: tracked = CGroupsTelemetry._tracked self.assertTrue(configurator.enabled(), "Cgroups should be enabled") self.assertTrue(any(cg for cg in tracked.values() if cg.name == AGENT_NAME_TELEMETRY and 'cpu' in cg.path), "The Agent's CPU is not being tracked. Tracked: {0}".format(tracked)) self.assertTrue(any(cg for cg in tracked.values() if cg.name == AGENT_NAME_TELEMETRY and 'memory' in cg.path), "The Agent's Memory is not being tracked. 
Tracked: {0}".format(tracked)) def test_initialize_should_start_tracking_other_controllers_when_one_is_not_present(self): command_mocks = [MockCommand(r"^mount -t cgroup$", '''cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd) cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) ''')] with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator: tracked = CGroupsTelemetry._tracked self.assertTrue(configurator.enabled(), "Cgroups should be enabled") self.assertFalse(any(cg for cg in tracked.values() if cg.name == 'walinuxagent.service' and 'memory' in cg.path), "The Agent's memory should not be tracked. Tracked: {0}".format(tracked)) def test_initialize_should_not_enable_cgroups_when_the_cpu_and_memory_controllers_are_not_present(self): command_mocks = [MockCommand(r"^mount -t cgroup$", '''cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd) cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) ''')] with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator: tracked = CGroupsTelemetry._tracked self.assertFalse(configurator.enabled(), "Cgroups should not be enabled") self.assertEqual(len(tracked), 0, "No cgroups should be tracked. 
    def test_initialize_should_not_enable_cgroups_when_the_agent_is_not_in_the_system_slice(self):
        command_mocks = [MockCommand(r"^mount -t cgroup$",
'''cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
''')]

        with self._get_cgroup_configurator(mock_commands=command_mocks) as configurator:
            tracked = CGroupsTelemetry._tracked
            agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota)

            self.assertFalse(configurator.enabled(), "Cgroups should not be enabled")
            self.assertEqual(len(tracked), 0, "No cgroups should be tracked. Tracked: {0}".format(tracked))
            self.assertFalse(os.path.exists(agent_drop_in_file_cpu_quota), "{0} should not have been created".format(agent_drop_in_file_cpu_quota))

    def test_initialize_should_not_create_unit_files(self):
        with self._get_cgroup_configurator() as configurator:
            # get the paths to the mocked files
            azure_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.azure)
            extensions_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.vmextensions)
            agent_drop_in_file_slice = configurator.mocks.get_mapped_path(UnitFilePaths.slice)
            agent_drop_in_file_cpu_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_accounting)
            agent_drop_in_file_memory_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.memory_accounting)

            # The mock creates the slice unit files; delete them
            os.remove(azure_slice_unit_file)
            os.remove(extensions_slice_unit_file)

            # The service file for the agent includes settings for the slice and cpu accounting, but not for cpu quota; initialize()
            # should not create drop in files for the first 2, but it should create one for the cpu quota
            self.assertFalse(os.path.exists(azure_slice_unit_file), "{0} should not have been created".format(azure_slice_unit_file))
            self.assertFalse(os.path.exists(extensions_slice_unit_file), "{0} should not have been created".format(extensions_slice_unit_file))
            self.assertFalse(os.path.exists(agent_drop_in_file_slice), "{0} should not have been created".format(agent_drop_in_file_slice))
            self.assertFalse(os.path.exists(agent_drop_in_file_cpu_accounting), "{0} should not have been created".format(agent_drop_in_file_cpu_accounting))
            self.assertFalse(os.path.exists(agent_drop_in_file_memory_accounting), "{0} should not have been created".format(agent_drop_in_file_memory_accounting))

    def test_initialize_should_create_unit_files_when_the_agent_service_file_is_not_updated(self):
        with self._get_cgroup_configurator(initialize=False) as configurator:
            # get the paths to the mocked files
            azure_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.azure)
            extensions_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.vmextensions)
            agent_drop_in_file_slice = configurator.mocks.get_mapped_path(UnitFilePaths.slice)
            agent_drop_in_file_cpu_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_accounting)
            agent_drop_in_file_memory_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.memory_accounting)

            # The mock creates the service and slice unit files; replace the former and delete the latter
            configurator.mocks.add_data_file(os.path.join(data_dir, 'init', "walinuxagent.service.previous"), UnitFilePaths.walinuxagent)
            os.remove(azure_slice_unit_file)
            os.remove(extensions_slice_unit_file)

            configurator.initialize()

            # The older service file for the agent did not include settings for the slice and cpu parameters; in that case, initialize() should
            # create drop in files to set those properties
            self.assertTrue(os.path.exists(azure_slice_unit_file), "{0} was not created".format(azure_slice_unit_file))
            self.assertTrue(os.path.exists(extensions_slice_unit_file), "{0} was not created".format(extensions_slice_unit_file))
            self.assertTrue(os.path.exists(agent_drop_in_file_slice), "{0} was not created".format(agent_drop_in_file_slice))
            self.assertTrue(os.path.exists(agent_drop_in_file_cpu_accounting), "{0} was not created".format(agent_drop_in_file_cpu_accounting))
            self.assertTrue(os.path.exists(agent_drop_in_file_memory_accounting), "{0} was not created".format(agent_drop_in_file_memory_accounting))

    def test_initialize_should_update_logcollector_memorylimit(self):
        with self._get_cgroup_configurator(initialize=False) as configurator:
            log_collector_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.logcollector)
            original_memory_limit = "MemoryLimit=30M"

            # The mock creates the slice unit file with memory limit
            configurator.mocks.add_data_file(os.path.join(data_dir, 'init', "azure-walinuxagent-logcollector.slice"),
                                             UnitFilePaths.logcollector)
            if not os.path.exists(log_collector_unit_file):
                raise Exception("{0} should have been created during test setup".format(log_collector_unit_file))
            if not fileutil.findre_in_file(log_collector_unit_file, original_memory_limit):
                raise Exception("MemoryLimit was not set correctly. Expected: {0}. Got:\n{1}".format(
                    original_memory_limit, fileutil.read_file(log_collector_unit_file)))

            configurator.initialize()

            # initialize() should update the unit file to remove the memory limit
            self.assertFalse(fileutil.findre_in_file(log_collector_unit_file, original_memory_limit),
                             "Log collector slice unit file was not updated correctly. Expected no memory limit. Got:\n{0}".format(
                                 fileutil.read_file(log_collector_unit_file)))

    def test_setup_extension_slice_should_create_unit_files(self):
        with self._get_cgroup_configurator() as configurator:
            # get the paths to the mocked files
            extension_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.extensionslice)

            configurator.setup_extension_slice(extension_name="Microsoft.CPlat.Extension", cpu_quota=5)

            expected_cpu_accounting = "CPUAccounting=yes"
            expected_cpu_quota_percentage = "5%"
            expected_memory_accounting = "MemoryAccounting=yes"

            self.assertTrue(os.path.exists(extension_slice_unit_file), "{0} was not created".format(extension_slice_unit_file))
            self.assertTrue(fileutil.findre_in_file(extension_slice_unit_file, expected_cpu_accounting),
                            "CPUAccounting was not set correctly. Expected: {0}. Got:\n{1}".format(
                                expected_cpu_accounting, fileutil.read_file(extension_slice_unit_file)))
            self.assertTrue(fileutil.findre_in_file(extension_slice_unit_file, expected_cpu_quota_percentage),
                            "CPUQuota was not set correctly. Expected: {0}. Got:\n{1}".format(
                                expected_cpu_quota_percentage, fileutil.read_file(extension_slice_unit_file)))
            self.assertTrue(fileutil.findre_in_file(extension_slice_unit_file, expected_memory_accounting),
                            "MemoryAccounting was not set correctly. Expected: {0}. Got:\n{1}".format(
                                expected_memory_accounting, fileutil.read_file(extension_slice_unit_file)))

    def test_remove_extension_slice_should_remove_unit_files(self):
        with self._get_cgroup_configurator() as configurator:
            with patch("os.path.exists") as mock_path:
                mock_path.return_value = True
                # get the paths to the mocked files
                extension_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.extensionslice)

                CGroupsTelemetry._tracked['/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/'
                                          'azure-vmextensions-Microsoft.CPlat.Extension.slice'] = \
                    CpuCgroup('Microsoft.CPlat.Extension',
                              '/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.CPlat.Extension.slice')

                configurator.remove_extension_slice(extension_name="Microsoft.CPlat.Extension")

                tracked = CGroupsTelemetry._tracked

                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'Microsoft.CPlat.Extension' and 'cpu' in cg.path),
                    "The extension's CPU is being tracked")
                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'Microsoft.CPlat.Extension' and 'memory' in cg.path),
                    "The extension's Memory is being tracked")
            self.assertFalse(os.path.exists(extension_slice_unit_file), "{0} should not be present".format(extension_slice_unit_file))

    def test_enable_should_raise_cgroups_exception_when_cgroups_are_not_supported(self):
        with self._get_cgroup_configurator(enable=False) as configurator:
            with patch.object(configurator, "supported", return_value=False):
                with self.assertRaises(CGroupsException) as context_manager:
                    configurator.enable()
                self.assertIn("Attempted to enable cgroups, but they are not supported on the current platform",
                              str(context_manager.exception))

    def test_enable_should_set_agent_cpu_quota_and_track_throttled_time(self):
        with self._get_cgroup_configurator(initialize=False) as configurator:
            agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota)
            if os.path.exists(agent_drop_in_file_cpu_quota):
                raise Exception("{0} should not have been created during test setup".format(agent_drop_in_file_cpu_quota))

            configurator.initialize()

            expected_quota = "CPUQuota={0}%".format(conf.get_agent_cpu_quota())
            self.assertTrue(os.path.exists(agent_drop_in_file_cpu_quota), "{0} was not created".format(agent_drop_in_file_cpu_quota))
            self.assertTrue(
                fileutil.findre_in_file(agent_drop_in_file_cpu_quota, expected_quota),
                "CPUQuota was not set correctly. Expected: {0}. Got:\n{1}".format(
                    expected_quota, fileutil.read_file(agent_drop_in_file_cpu_quota)))
            self.assertTrue(CGroupsTelemetry.get_track_throttled_time(), "Throttle time should be tracked")

    def test_enable_should_not_track_throttled_time_when_setting_the_cpu_quota_fails(self):
        with self._get_cgroup_configurator(initialize=False) as configurator:
            if CGroupsTelemetry.get_track_throttled_time():
                raise Exception("Test setup should not start tracking Throttle Time")

            configurator.mocks.add_file(UnitFilePaths.cpu_quota, Exception("A TEST EXCEPTION"))

            configurator.initialize()

            self.assertFalse(CGroupsTelemetry.get_track_throttled_time(), "Throttle time should not be tracked")

    def test_disable_should_reset_cpu_quota(self):
        with self._get_cgroup_configurator() as configurator:
            if len(CGroupsTelemetry._tracked) == 0:
                raise Exception("Test setup should have started tracking at least 1 cgroup (the agent's)")
            if not CGroupsTelemetry._track_throttled_time:
                raise Exception("Test setup should have started tracking Throttle Time")

            configurator.disable("UNIT TEST", DisableCgroups.AGENT)

            agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota)
            self.assertTrue(os.path.exists(agent_drop_in_file_cpu_quota), "{0} was not created".format(agent_drop_in_file_cpu_quota))
            self.assertTrue(
                fileutil.findre_in_file(agent_drop_in_file_cpu_quota, "^CPUQuota=$"),
                "CPUQuota was not set correctly. Expected an empty value. Got:\n{0}".format(fileutil.read_file(agent_drop_in_file_cpu_quota)))
            self.assertEqual(len(CGroupsTelemetry._tracked), 1,
                             "Memory cgroups should be tracked after disable. Tracking: {0}".format(CGroupsTelemetry._tracked))
            self.assertFalse(any(cg for cg in CGroupsTelemetry._tracked.values() if cg.name == 'walinuxagent.service' and 'cpu' in cg.path),
                             "The Agent's cpu should not be tracked. Tracked: {0}".format(CGroupsTelemetry._tracked))

    def test_disable_should_reset_cpu_quota_for_all_cgroups(self):
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            }
        ]
        extension_name = "Microsoft.CPlat.Extension"
        extension_services = {extension_name: service_list}
        with self._get_cgroup_configurator() as configurator:
            with patch.object(configurator, "get_extension_services_list", return_value=extension_services):
                # get the paths to the mocked files
                agent_drop_in_file_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.cpu_quota)
                extension_slice_unit_file = configurator.mocks.get_mapped_path(UnitFilePaths.extensionslice)
                extension_service_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_quota)

                configurator.setup_extension_slice(extension_name=extension_name, cpu_quota=5)
                configurator.set_extension_services_cpu_memory_quota(service_list)
                CGroupsTelemetry._tracked['/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service'] = \
                    CpuCgroup('extension.service', '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service')
                CGroupsTelemetry._tracked['/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/'
                                          'azure-vmextensions-Microsoft.CPlat.Extension.slice'] = \
                    CpuCgroup('Microsoft.CPlat.Extension',
                              '/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.CPlat.Extension.slice')

                configurator.disable("UNIT TEST", DisableCgroups.ALL)

                self.assertTrue(os.path.exists(agent_drop_in_file_cpu_quota), "{0} was not created".format(agent_drop_in_file_cpu_quota))
                self.assertTrue(
                    fileutil.findre_in_file(agent_drop_in_file_cpu_quota, "^CPUQuota=$"),
                    "CPUQuota was not set correctly. Expected an empty value. Got:\n{0}".format(
                        fileutil.read_file(agent_drop_in_file_cpu_quota)))
                self.assertTrue(os.path.exists(extension_slice_unit_file), "{0} was not created".format(extension_slice_unit_file))
                self.assertTrue(
                    fileutil.findre_in_file(extension_slice_unit_file, "^CPUQuota=$"),
                    "CPUQuota was not set correctly. Expected an empty value. Got:\n{0}".format(
                        fileutil.read_file(extension_slice_unit_file)))
                self.assertTrue(os.path.exists(extension_service_cpu_quota), "{0} was not created".format(extension_service_cpu_quota))
                self.assertTrue(
                    fileutil.findre_in_file(extension_service_cpu_quota, "^CPUQuota=$"),
                    "CPUQuota was not set correctly. Expected an empty value. Got:\n{0}".format(
                        fileutil.read_file(extension_service_cpu_quota)))
                self.assertEqual(len(CGroupsTelemetry._tracked), 0,
                                 "No cgroups should be tracked after disable. Tracking: {0}".format(
                                     CGroupsTelemetry._tracked))

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_not_use_systemd_when_cgroups_are_not_enabled(self, _):
        with self._get_cgroup_configurator() as configurator:
            configurator.disable("UNIT TEST", DisableCgroups.ALL)

            with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as patcher:
                configurator.start_extension_command(
                    extension_name="Microsoft.Compute.TestExtension-1.2.3",
                    command="date",
                    cmd_name="test",
                    timeout=300,
                    shell=False,
                    cwd=self.tmp_dir,
                    env={},
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE)

                command_calls = [args[0] for args, _ in patcher.call_args_list if len(args) > 0 and "date" in args[0]]
                self.assertEqual(len(command_calls), 1, "The test command should have been called exactly once [{0}]".format(command_calls))
                self.assertNotIn("systemd-run", command_calls[0], "The command should not have been invoked using systemd")
                self.assertEqual(command_calls[0], "date", "The command line should not have been modified")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_use_systemd_run_when_cgroups_are_enabled(self, _):
        with self._get_cgroup_configurator() as configurator:
            with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                configurator.start_extension_command(
                    extension_name="Microsoft.Compute.TestExtension-1.2.3",
                    command="the-test-extension-command",
                    cmd_name="test",
                    timeout=300,
                    shell=False,
                    cwd=self.tmp_dir,
                    env={},
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE)

                command_calls = [args[0] for (args, _) in popen_patch.call_args_list if "the-test-extension-command" in args[0]]

                self.assertEqual(len(command_calls), 1, "The test command should have been called exactly once [{0}]".format(command_calls))
                self.assertIn("systemd-run", command_calls[0], "The extension should have been invoked using systemd")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_start_tracking_the_extension_cgroups(self, _):
        # CPU usage is initialized when we begin tracking a CPU cgroup; since this test does not retrieve the
        # CPU usage, there is no need for initialization
        with self._get_cgroup_configurator() as configurator:
            configurator.start_extension_command(
                extension_name="Microsoft.Compute.TestExtension-1.2.3",
                command="test command",
                cmd_name="test",
                timeout=300,
                shell=False,
                cwd=self.tmp_dir,
                env={},
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE)

            tracked = CGroupsTelemetry._tracked

            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'Microsoft.Compute.TestExtension-1.2.3' and 'cpu' in cg.path),
                "The extension's CPU is not being tracked")
            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'Microsoft.Compute.TestExtension-1.2.3' and 'memory' in cg.path),
                "The extension's Memory is not being tracked")

    def test_start_extension_command_should_raise_an_exception_when_the_command_cannot_be_started(self):
        with self._get_cgroup_configurator() as configurator:
            original_popen = subprocess.Popen

            def mock_popen(command_arg, *args, **kwargs):
                if "test command" in command_arg:
                    raise Exception("A TEST EXCEPTION")
                return original_popen(command_arg, *args, **kwargs)

            with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", side_effect=mock_popen):
                with self.assertRaises(Exception) as context_manager:
                    configurator.start_extension_command(
                        extension_name="Microsoft.Compute.TestExtension-1.2.3",
                        command="test command",
                        cmd_name="test",
                        timeout=300,
                        shell=False,
                        cwd=self.tmp_dir,
                        env={},
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)

                self.assertIn("A TEST EXCEPTION", str(context_manager.exception))

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_disable_cgroups_and_invoke_the_command_directly_if_systemd_fails(self, _):
        with self._get_cgroup_configurator() as configurator:
            configurator.mocks.add_command(MockCommand("systemd-run", return_value=1, stdout='', stderr='Failed to start transient scope unit: syntax error'))

            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as output_file:
                with patch("azurelinuxagent.ga.cgroupconfigurator.add_event") as mock_add_event:
                    with patch("subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                        CGroupsTelemetry.reset()

                        command = "echo TEST_OUTPUT"
                        command_output = configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command=command,
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={},
                            stdout=output_file,
                            stderr=output_file)

                        self.assertFalse(configurator.enabled(), "Cgroups should have been disabled")

                        disabled_events = [kwargs for _, kwargs in mock_add_event.call_args_list if kwargs['op'] == WALAEventOperation.CGroupsDisabled]

                        self.assertTrue(len(disabled_events) == 1,
                                        "Exactly one CGroupsDisabled telemetry event should have been issued. Found: {0}".format(disabled_events))
                        self.assertIn("Failed to start Microsoft.Compute.TestExtension-1.2.3 using systemd-run",
                                      disabled_events[0]['message'],
                                      "The systemd-run failure was not included in the telemetry message")
                        self.assertEqual(False, disabled_events[0]['is_success'], "The telemetry event should indicate a failure")

                        extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if command in args[0]]

                        self.assertEqual(2, len(extension_calls), "The extension should have been invoked exactly twice")
                        self.assertIn("systemd-run", extension_calls[0], "The first call to the extension should have used systemd")
                        self.assertEqual(command, extension_calls[1], "The second call to the extension should not have used systemd")

                        self.assertEqual(len(CGroupsTelemetry._tracked), 0, "No cgroups should have been created")

                        self.assertIn("TEST_OUTPUT\n", command_output, "The test output was not captured")

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_disable_cgroups_and_invoke_the_command_directly_if_systemd_times_out(self, _):
        with self._get_cgroup_configurator() as configurator:
            # Systemd has its own internal timeout which is shorter than what we define for extension operation timeout.
            # When systemd times out, it will write a message to stderr and exit with exit code 1.
            # In that case, we will internally recognize the failure due to the non-zero exit code, not as a timeout.
            configurator.mocks.add_command(MockCommand("systemd-run", return_value=1, stdout='', stderr='Failed to start transient scope unit: Connection timed out'))

            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
                with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                    with patch("subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                        CGroupsTelemetry.reset()

                        configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command="echo 'success'",
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={},
                            stdout=stdout,
                            stderr=stderr)

                        self.assertFalse(configurator.enabled(), "Cgroups should have been disabled")

                        extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if "echo 'success'" in args[0]]
                        self.assertEqual(2, len(extension_calls), "The extension should have been called twice. Got: {0}".format(extension_calls))
                        self.assertIn("systemd-run", extension_calls[0], "The first call to the extension should have used systemd")
                        self.assertNotIn("systemd-run", extension_calls[1], "The second call to the extension should not have used systemd")

                        self.assertEqual(len(CGroupsTelemetry._tracked), 0, "No cgroups should have been created")

    @skip_if_predicate_true(is_python_version_26_or_34, "Disabled on Python 2.6 and 3.4 for now. Need to revisit to fix it")
    @attr('requires_sudo')
    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_not_use_fallback_option_if_extension_fails(self, *args):
        self.assertTrue(i_am_root(), "Test does not run when non-root")

        with self._get_cgroup_configurator() as configurator:
            pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict with the mock Popen below

        command = "ls folder_does_not_exist"

        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                    with self.assertRaises(ExtensionError) as context_manager:
                        configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command=command,
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={},
                            stdout=stdout,
                            stderr=stderr)

                    extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if command in args[0]]

                    self.assertEqual(1, len(extension_calls), "The extension should have been invoked exactly once")
                    self.assertIn("systemd-run", extension_calls[0], "The first call to the extension should have used systemd")

                    self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginUnknownFailure)
                    self.assertIn("Non-zero exit code", ustr(context_manager.exception))
                    # The scope name should appear in the process output since systemd-run was invoked and stderr
                    # wasn't truncated.
                    self.assertIn("Running scope as unit", ustr(context_manager.exception))
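    # Illustrative sketch (an assumption about the contract these tests exercise,
    # not the agent's exact command lines): when systemd-run cannot create the
    # transient scope, the configurator disables cgroups and re-runs the extension
    # command directly, i.e. roughly:
    #
    #     systemd-run --unit=<name> --scope --slice=<slice> <command>   # first attempt
    #     <command>                                                     # direct fallback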
    @skip_if_predicate_true(is_python_version_26_or_34, "Disabled on Python 2.6 and 3.4 for now. Need to revisit to fix it")
    @attr('requires_sudo')
    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    @patch("azurelinuxagent.ga.extensionprocessutil.TELEMETRY_MESSAGE_MAX_LEN", 5)
    def test_start_extension_command_should_not_use_fallback_option_if_extension_fails_with_long_output(self, *args):
        self.assertTrue(i_am_root(), "Test does not run when non-root")

        with self._get_cgroup_configurator() as configurator:
            pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict with the mock Popen below

        long_output = "a" * 20  # large enough to ensure both stdout and stderr are truncated
        long_stdout_stderr_command = "echo {0} && echo {0} >&2 && ls folder_does_not_exist".format(long_output)

        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", wraps=subprocess.Popen) as popen_patch:
                    with self.assertRaises(ExtensionError) as context_manager:
                        configurator.start_extension_command(
                            extension_name="Microsoft.Compute.TestExtension-1.2.3",
                            command=long_stdout_stderr_command,
                            cmd_name="test",
                            timeout=300,
                            shell=True,
                            cwd=self.tmp_dir,
                            env={},
                            stdout=stdout,
                            stderr=stderr)

                    extension_calls = [args[0] for (args, _) in popen_patch.call_args_list if long_stdout_stderr_command in args[0]]

                    self.assertEqual(1, len(extension_calls), "The extension should have been invoked exactly once")
                    self.assertIn("systemd-run", extension_calls[0], "The first call to the extension should have used systemd")

                    self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginUnknownFailure)
                    self.assertIn("Non-zero exit code", ustr(context_manager.exception))
                    # stdout and stderr should have been truncated, so the scope name doesn't appear in stderr
                    # even though systemd-run ran
                    self.assertNotIn("Running scope as unit", ustr(context_manager.exception))

    @attr('requires_sudo')
    def test_start_extension_command_should_not_use_fallback_option_if_extension_times_out(self, *args):  # pylint: disable=unused-argument
        self.assertTrue(i_am_root(), "Test does not run when non-root")

        with self._get_cgroup_configurator() as configurator:
            pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict with the mock Popen below

        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch("azurelinuxagent.ga.extensionprocessutil.wait_for_process_completion_or_timeout",
                           return_value=[True, None, 0]):
                    with patch("azurelinuxagent.ga.cgroupapi.SystemdCgroupsApi._is_systemd_failure",
                               return_value=False):
                        with self.assertRaises(ExtensionError) as context_manager:
                            configurator.start_extension_command(
                                extension_name="Microsoft.Compute.TestExtension-1.2.3",
                                command="date",
                                cmd_name="test",
                                timeout=300,
                                shell=True,
                                cwd=self.tmp_dir,
                                env={},
                                stdout=stdout,
                                stderr=stderr)

                        self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginHandlerScriptTimedout)
                        self.assertIn("Timeout", ustr(context_manager.exception))

    @patch('time.sleep', side_effect=lambda _: mock_sleep())
    def test_start_extension_command_should_capture_only_the_last_subprocess_output(self, _):
        with self._get_cgroup_configurator() as configurator:
            pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict with the mock Popen below

        original_popen = subprocess.Popen

        def mock_popen(command, *args, **kwargs):
            # Inject a syntax error to the call
            # Popen can accept both strings and lists; handle both here.
            if isinstance(command, str):
                systemd_command = command.replace('systemd-run', 'systemd-run syntax_error')
            elif isinstance(command, list) and command[0] == 'systemd-run':
                systemd_command = ['systemd-run', 'syntax_error'] + command[1:]
            else:
                systemd_command = command  # not a systemd-run invocation; pass it through unchanged

            return original_popen(systemd_command, *args, **kwargs)

        expected_output = "[stdout]\n{0}\n\n\n[stderr]\n"

        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", side_effect=mock_popen):
                    # We expect this call to fail because of the syntax error
                    process_output = configurator.start_extension_command(
                        extension_name="Microsoft.Compute.TestExtension-1.2.3",
                        command="echo 'very specific test message'",
                        cmd_name="test",
                        timeout=300,
                        shell=True,
                        cwd=self.tmp_dir,
                        env={},
                        stdout=stdout,
                        stderr=stderr)

                    self.assertEqual(expected_output.format("very specific test message"), process_output)

    def test_it_should_set_extension_services_cpu_memory_quota(self):
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            # get the paths to the mocked files
            extension_service_cpu_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_accounting)
            extension_service_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_quota)

            configurator.set_extension_services_cpu_memory_quota(service_list)

            expected_cpu_accounting = "CPUAccounting=yes"
            expected_cpu_quota_percentage = "CPUQuota=5%"

            # create drop in files to set those properties
            self.assertTrue(os.path.exists(extension_service_cpu_accounting), "{0} was not created".format(extension_service_cpu_accounting))
            self.assertTrue(
                fileutil.findre_in_file(extension_service_cpu_accounting, expected_cpu_accounting),
                "CPUAccounting was not enabled. Expected: {0}. Got:\n{1}".format(
                    expected_cpu_accounting, fileutil.read_file(extension_service_cpu_accounting)))

            self.assertTrue(os.path.exists(extension_service_cpu_quota), "{0} was not created".format(extension_service_cpu_quota))
            self.assertTrue(
                fileutil.findre_in_file(extension_service_cpu_quota, expected_cpu_quota_percentage),
                "CPUQuota was not set. Expected: {0}. Got:\n{1}".format(
                    expected_cpu_quota_percentage, fileutil.read_file(extension_service_cpu_quota)))
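    # Illustrative note (hypothetical content, inferred from the assertions above
    # rather than taken from the agent's sources): the drop-in files inspected by
    # the test above are plain systemd unit overrides; a CPU quota drop-in for
    # extension.service would contain something like:
    #
    #     [Service]
    #     CPUAccounting=yes
    #     CPUQuota=5%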
    def test_it_should_set_extension_services_when_quotas_not_defined(self):
        service_list = [
            {
                "name": "extension.service"
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            # get the paths to the mocked files
            extension_service_cpu_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_accounting)
            extension_service_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_quota)
            extension_service_memory_accounting = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_memory_accounting)
            extension_service_memory_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_memory_limit)

            configurator.set_extension_services_cpu_memory_quota(service_list)

            self.assertTrue(os.path.exists(extension_service_cpu_accounting), "{0} was not created".format(extension_service_cpu_accounting))
            self.assertFalse(os.path.exists(extension_service_cpu_quota), "{0} should not have been created during setup".format(extension_service_cpu_quota))
            self.assertTrue(os.path.exists(extension_service_memory_accounting), "{0} was not created".format(extension_service_memory_accounting))
            self.assertFalse(os.path.exists(extension_service_memory_quota), "{0} should not have been created during setup".format(extension_service_memory_quota))

    def test_it_should_start_tracking_extension_services_cgroups(self):
        service_list = [
            {
                "name": "extension.service"
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            configurator.start_tracking_extension_services_cgroups(service_list)

            tracked = CGroupsTelemetry._tracked

            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'cpu' in cg.path),
                "The extension service's CPU is not being tracked")
            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'memory' in cg.path),
                "The extension service's Memory is not being tracked")

    def test_it_should_stop_tracking_extension_services_cgroups(self):
        service_list = [
            {
                "name": "extension.service"
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            with patch("os.path.exists") as mock_path:
                mock_path.return_value = True
                CGroupsTelemetry.track_cgroup(CpuCgroup('extension.service', '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service'))

                configurator.stop_tracking_extension_services_cgroups(service_list)

                tracked = CGroupsTelemetry._tracked

                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'cpu' in cg.path),
                    "The extension service's CPU is being tracked")
                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'memory' in cg.path),
                    "The extension service's Memory is being tracked")

    def test_it_should_remove_extension_services_drop_in_files(self):
        service_list = [
            {
                "name": "extension.service",
                "cpuQuotaPercentage": 5
            }
        ]
        with self._get_cgroup_configurator() as configurator:
            extension_service_cpu_accounting = configurator.mocks.get_mapped_path(
                UnitFilePaths.extension_service_cpu_accounting)
            extension_service_cpu_quota = configurator.mocks.get_mapped_path(UnitFilePaths.extension_service_cpu_quota)
            extension_service_memory_accounting = configurator.mocks.get_mapped_path(
                UnitFilePaths.extension_service_memory_accounting)

            configurator.remove_extension_services_drop_in_files(service_list)

            self.assertFalse(os.path.exists(extension_service_cpu_accounting), "{0} should not have been created".format(extension_service_cpu_accounting))
            self.assertFalse(os.path.exists(extension_service_cpu_quota), "{0} should not have been created".format(extension_service_cpu_quota))
            self.assertFalse(os.path.exists(extension_service_memory_accounting), "{0} should not have been created".format(extension_service_memory_accounting))

    def test_it_should_start_tracking_unit_cgroups(self):
        with self._get_cgroup_configurator() as configurator:
            configurator.start_tracking_unit_cgroups("extension.service")

            tracked = CGroupsTelemetry._tracked

            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'cpu' in cg.path),
                "The extension service's CPU is not being tracked")
            self.assertTrue(
                any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'memory' in cg.path),
                "The extension service's Memory is not being tracked")

    def test_it_should_stop_tracking_unit_cgroups(self):
        def side_effect(path):
            if path == '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service':
                return True
            return False

        with self._get_cgroup_configurator() as configurator:
            with patch("os.path.exists") as mock_path:
                mock_path.side_effect = side_effect
                CGroupsTelemetry._tracked['/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service'] = \
                    CpuCgroup('extension.service', '/sys/fs/cgroup/cpu,cpuacct/system.slice/extension.service')

                configurator.stop_tracking_unit_cgroups("extension.service")

                tracked = CGroupsTelemetry._tracked

                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'cpu' in cg.path),
                    "The extension service's CPU is being tracked")
                self.assertFalse(
                    any(cg for cg in tracked.values() if cg.name == 'extension.service' and 'memory' in cg.path),
                    "The extension service's Memory is being tracked")

    def test_check_processes_in_agent_cgroup_should_raise_a_cgroups_exception_when_there_are_unexpected_processes_in_the_agent_cgroup(self):
        with self._get_cgroup_configurator() as configurator:
            pass  # release the mocks used to create the test CGroupConfigurator so that they do not conflict with the mock Popen below

        # The test script recursively creates a given number of descendant processes, then it blocks until the
        # 'stop_file' exists. It produces an output file containing the PID of each descendant process.
        test_script = os.path.join(self.tmp_dir, "create_processes.sh")
        stop_file = os.path.join(self.tmp_dir, "create_processes.stop")
        AgentTestCase.create_script(test_script, """
#!/usr/bin/env bash
set -euo pipefail

if [[ $# != 2 ]]; then
    echo "Usage: $0 <pids_file> <count>"
    exit 1
fi

echo $$ >> $1

if [[ $2 > 1 ]]; then
    $0 $1 $(($2 - 1))
else
    timeout 30s /usr/bin/env bash -c "while ! [[ -f {0} ]]; do sleep 0.25s; done"
fi

exit 0
""".format(stop_file))

        number_of_descendants = 3

        def wait_for_processes(processes_file):
            def _all_present():
                if os.path.exists(processes_file):
                    with open(processes_file, "r") as file_stream:
                        _all_present.processes = [int(process) for process in file_stream.read().split()]
                return len(_all_present.processes) >= number_of_descendants
            _all_present.processes = []

            if not wait_for(_all_present):
                raise Exception("Timeout waiting for processes. Expected {0}; got: {1}".format(
                    number_of_descendants, format_processes(_all_present.processes)))

            return _all_present.processes

        threads = []

        try:
            #
            # Start the processes that will be used by the test. We use two sets of processes: the first set simulates a command executed by the agent
            # (e.g. iptables) and its child processes, if any. The second set of processes simulates an extension.
            #
            agent_command_output = os.path.join(self.tmp_dir, "agent_command.pids")
            agent_command = threading.Thread(target=lambda: shellutil.run_command([test_script, agent_command_output, str(number_of_descendants)]))
            agent_command.start()
            threads.append(agent_command)
            agent_command_processes = wait_for_processes(agent_command_output)

            extension_output = os.path.join(self.tmp_dir, "extension.pids")

            def start_extension():
                original_sleep = time.sleep
                original_popen = subprocess.Popen

                # Extensions are started using systemd-run; mock Popen to remove the call to systemd-run; the test script creates a couple of
                # child processes, which would simulate the extension's processes.
                def mock_popen(command, *args, **kwargs):
                    match = re.match(r"^systemd-run --property=CPUAccounting=no --property=MemoryAccounting=no --unit=[^\s]+ --scope --slice=[^\s]+ (.+)", command)
                    is_systemd_run = match is not None
                    if is_systemd_run:
                        command = match.group(1)
                    process = original_popen(command, *args, **kwargs)
                    if is_systemd_run:
                        start_extension.systemd_run_pid = process.pid
                    return process

                with patch('time.sleep', side_effect=lambda _: original_sleep(0.1)):  # start_extension_command has a small delay; skip it
                    with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", side_effect=mock_popen):
                        with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stdout:
                            with tempfile.TemporaryFile(dir=self.tmp_dir, mode="w+b") as stderr:
                                configurator.start_extension_command(
                                    extension_name="TestExtension",
                                    command="{0} {1} {2}".format(test_script, extension_output, number_of_descendants),
                                    cmd_name="test",
                                    timeout=30,
                                    shell=True,
                                    cwd=self.tmp_dir,
                                    env={},
                                    stdout=stdout,
                                    stderr=stderr)
            start_extension.systemd_run_pid = None

            extension = threading.Thread(target=start_extension)
            extension.start()
            threads.append(extension)
            extension_processes = wait_for_processes(extension_output)

            #
            # check_processes_in_agent_cgroup uses shellutil and the cgroups api to get the commands that are currently running;
            # wait for all the processes to show up
            #
            if not wait_for(lambda: len(shellutil.get_running_commands()) > 0 and len(configurator._cgroups_api.get_systemd_run_commands()) > 0):
                raise Exception("Timeout while attempting to track the child commands")

            #
            # Verify that check_processes_in_agent_cgroup raises when there are unexpected processes in the agent's cgroup.
            #
            # For the agent's processes, we use the current process and its parent (in the actual agent these would be the daemon and the extension
            # handler), and the commands started by the agent.
            #
            # For other processes, we use process 1, a process that already completed, and an extension. Note that extensions are started using
            # systemd-run and the process for that command belongs to the agent's cgroup, but the processes for the extension should be in a
            # different cgroup
            #
            def get_completed_process():
                random.seed()
                completed = random.randint(1000, 10000)
                while os.path.exists("/proc/{0}".format(completed)):  # ensure we do not use an existing process
                    completed = random.randint(1000, 10000)
                return completed

            agent_processes = [os.getppid(), os.getpid()] + agent_command_processes + [start_extension.systemd_run_pid]
            other_processes = [1, get_completed_process()] + extension_processes

            with patch("azurelinuxagent.ga.cgroupconfigurator.CGroupsApi.get_processes_in_cgroup", return_value=agent_processes + other_processes):
                with self.assertRaises(CGroupsException) as context_manager:
                    configurator._check_processes_in_agent_cgroup()

                # The list of processes in the message is an array of strings: "['foo', ..., 'bar']"
                message = ustr(context_manager.exception)
                search = re.search(r'unexpected processes: \[(?P<processes>.+)\]', message)
                self.assertIsNotNone(search, "The event message is not in the expected format: {0}".format(message))
                reported = search.group('processes').split(',')

                self.assertEqual(
                    len(other_processes), len(reported),
                    "An incorrect number of processes was reported. Expected: {0} Got: {1}".format(format_processes(other_processes), reported))
                for pid in other_processes:
                    self.assertTrue(
                        any("[PID: {0}]".format(pid) in reported_process for reported_process in reported),
                        "Process {0} was not reported. Got: {1}".format(format_processes([pid]), reported))
        finally:
            # create the file that stops the test processes and wait for them to complete
            open(stop_file, "w").close()
            for thread in threads:
                thread.join(timeout=5)

    def test_check_agent_throttled_time_should_raise_a_cgroups_exception_when_the_threshold_is_exceeded(self):
        metrics = [MetricValue(MetricsCategory.CPU_CATEGORY, MetricsCounter.THROTTLED_TIME, AGENT_NAME_TELEMETRY, conf.get_agent_cpu_throttled_time_threshold() + 1)]

        with self.assertRaises(CGroupsException) as context_manager:
            CGroupConfigurator._Impl._check_agent_throttled_time(metrics)

        self.assertIn("The agent has been throttled", ustr(context_manager.exception), "An incorrect exception was raised")

    def test_check_cgroups_should_disable_cgroups_when_a_check_fails(self):
        with self._get_cgroup_configurator() as configurator:
            checks = ["_check_processes_in_agent_cgroup", "_check_agent_throttled_time"]
            for method_to_fail in checks:
                patchers = []
                try:
                    # mock 'method_to_fail' to raise an exception and the rest to do nothing
                    for method_to_mock in checks:
                        side_effect = CGroupsException(method_to_fail) if method_to_mock == method_to_fail else lambda *_: None
                        p = patch.object(configurator, method_to_mock, side_effect=side_effect)
                        patchers.append(p)
                        p.start()

                    with patch("azurelinuxagent.ga.cgroupconfigurator.add_event") as add_event:
                        configurator.enable()

                        tracked_metrics = [
                            MetricValue(MetricsCategory.CPU_CATEGORY, MetricsCounter.PROCESSOR_PERCENT_TIME, "test", 10)]
                        configurator.check_cgroups(tracked_metrics)
                        if method_to_fail == "_check_processes_in_agent_cgroup":
                            self.assertFalse(configurator.enabled(), "An error in {0} should have disabled cgroups".format(method_to_fail))
                        else:
                            self.assertFalse(configurator.agent_enabled(), "An error in {0} should have disabled cgroups".format(method_to_fail))

                        disable_events = [kwargs for _, kwargs in add_event.call_args_list if kwargs["op"] == WALAEventOperation.CGroupsDisabled]
                        self.assertTrue(
                            len(disable_events) == 1,
                            "Exactly 1 event should have been emitted when {0} fails. Got: {1}".format(method_to_fail, disable_events))
                        self.assertIn(
                            "[CGroupsException] {0}".format(method_to_fail),
                            disable_events[0]["message"],
                            "The error message is not correct when {0} failed".format(method_to_fail))
                finally:
                    for p in patchers:
                        p.stop()

    def test_check_agent_memory_usage_should_raise_a_cgroups_exception_when_the_limit_is_exceeded(self):
        metrics = [MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.TOTAL_MEM_USAGE, AGENT_NAME_TELEMETRY, conf.get_agent_memory_quota() + 1),
                   MetricValue(MetricsCategory.MEMORY_CATEGORY, MetricsCounter.SWAP_MEM_USAGE, AGENT_NAME_TELEMETRY, conf.get_agent_memory_quota() + 1)]

        with self.assertRaises(AgentMemoryExceededException) as context_manager:
            with self._get_cgroup_configurator() as configurator:
                with patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_tracked_metrics") as tracked_metrics:
                    tracked_metrics.return_value = metrics
                    configurator.check_agent_memory_usage()

        self.assertIn("The agent memory limit {0} bytes exceeded".format(conf.get_agent_memory_quota()),
                      ustr(context_manager.exception),
                      "An incorrect exception was raised")

Azure-WALinuxAgent-2b21de5/tests/ga/test_cgroups.py

# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

from __future__ import print_function

import errno
import os
import random
import shutil

from azurelinuxagent.ga.cgroup import CpuCgroup, MemoryCgroup, MetricsCounter, CounterNotFound
from azurelinuxagent.common.exception import CGroupsException
from azurelinuxagent.common.osutil import get_osutil
from azurelinuxagent.common.utils import fileutil
from tests.lib.tools import AgentTestCase, patch, data_dir


def consume_cpu_time():
    waste = 0
    for x in range(1, 200000):  # pylint: disable=unused-variable
        waste += random.random()
    return waste


class TestCGroup(AgentTestCase):
    def test_is_active(self):
        test_cgroup = CpuCgroup("test_extension", self.tmp_dir)
        self.assertEqual(False, test_cgroup.is_active())

        with open(os.path.join(self.tmp_dir, "tasks"), mode="wb") as tasks:
            tasks.write(str(1000).encode())

        self.assertEqual(True, test_cgroup.is_active())

    @patch("azurelinuxagent.common.logger.periodic_warn")
    def test_is_active_file_not_present(self, patch_periodic_warn):
        test_cgroup = CpuCgroup("test_extension", self.tmp_dir)
        self.assertEqual(False, test_cgroup.is_active())

        test_cgroup = MemoryCgroup("test_extension", os.path.join(self.tmp_dir, "this_cgroup_does_not_exist"))
        self.assertEqual(False, test_cgroup.is_active())

        self.assertEqual(0, patch_periodic_warn.call_count)

    @patch("azurelinuxagent.common.logger.periodic_warn")
    def test_is_active_incorrect_file(self, patch_periodic_warn):
        open(os.path.join(self.tmp_dir, "tasks"), mode="wb").close()
        test_cgroup = CpuCgroup("test_extension", os.path.join(self.tmp_dir, "tasks"))
        self.assertEqual(False, test_cgroup.is_active())
        self.assertEqual(1, patch_periodic_warn.call_count)


class TestCpuCgroup(AgentTestCase):
    @classmethod
    def setUpClass(cls):
        AgentTestCase.setUpClass()

        original_read_file = fileutil.read_file

        #
        # Tests that need to mock the contents of /proc/stat or */cpuacct/stat can set this map from
        # the file that needs to be mocked to the mock file (each test starts with an empty map). If
        # an Exception is given instead of a path, the exception is raised
        #
        cls.mock_read_file_map = {}

        def mock_read_file(filepath, **args):
            if filepath in cls.mock_read_file_map:
                mapped_value = cls.mock_read_file_map[filepath]
                if isinstance(mapped_value, Exception):
                    raise mapped_value
                filepath = mapped_value
            return original_read_file(filepath, **args)

        cls.mock_read_file = patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file)
        cls.mock_read_file.start()

    @classmethod
    def tearDownClass(cls):
        cls.mock_read_file.stop()
        AgentTestCase.tearDownClass()

    def setUp(self):
        AgentTestCase.setUp(self)
        TestCpuCgroup.mock_read_file_map.clear()

    def test_initialize_cpu_usage_should_set_current_cpu_usage(self):
        cgroup = CpuCgroup("test", "/sys/fs/cgroup/cpu/system.slice/test")

        TestCpuCgroup.mock_read_file_map = {
            "/proc/stat": os.path.join(data_dir, "cgroups", "proc_stat_t0"),
            os.path.join(cgroup.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "cpuacct.stat_t0")
        }

        cgroup.initialize_cpu_usage()

        self.assertEqual(cgroup._current_cgroup_cpu, 63763)
        self.assertEqual(cgroup._current_system_cpu, 5496872)

    def test_get_cpu_usage_should_return_the_cpu_usage_since_its_last_invocation(self):
        osutil = get_osutil()

        cgroup = CpuCgroup("test", "/sys/fs/cgroup/cpu/system.slice/test")

        TestCpuCgroup.mock_read_file_map = {
            "/proc/stat": os.path.join(data_dir, "cgroups", "proc_stat_t0"),
            os.path.join(cgroup.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "cpuacct.stat_t0")
        }

        cgroup.initialize_cpu_usage()

        TestCpuCgroup.mock_read_file_map = {
            "/proc/stat": os.path.join(data_dir, "cgroups", "proc_stat_t1"),
            os.path.join(cgroup.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "cpuacct.stat_t1")
        }

        cpu_usage = cgroup.get_cpu_usage()

        self.assertEqual(cpu_usage, round(100.0 * 0.000307697876885 * osutil.get_processor_cores(), 3))

        TestCpuCgroup.mock_read_file_map = {
            "/proc/stat": os.path.join(data_dir, "cgroups", "proc_stat_t2"),
            os.path.join(cgroup.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "cpuacct.stat_t2")
        }

        cpu_usage = cgroup.get_cpu_usage()

        self.assertEqual(cpu_usage, round(100.0 * 0.000445181085968 * osutil.get_processor_cores(), 3))

    def test_initialize_cpu_usage_should_set_the_cgroup_usage_to_0_when_the_cgroup_does_not_exist(self):
        cgroup = CpuCgroup("test", "/sys/fs/cgroup/cpu/system.slice/test")

        io_error_2 = IOError()
        io_error_2.errno = errno.ENOENT  # "No such directory"

        TestCpuCgroup.mock_read_file_map = {
            "/proc/stat": os.path.join(data_dir, "cgroups", "proc_stat_t0"),
            os.path.join(cgroup.path, "cpuacct.stat"): io_error_2
        }

        cgroup.initialize_cpu_usage()

        self.assertEqual(cgroup._current_cgroup_cpu, 0)
        self.assertEqual(cgroup._current_system_cpu, 5496872)  # check the system usage just for test sanity

    def test_initialize_cpu_usage_should_raise_an_exception_when_called_more_than_once(self):
        cgroup = CpuCgroup("test", "/sys/fs/cgroup/cpu/system.slice/test")

        TestCpuCgroup.mock_read_file_map = {
            "/proc/stat": os.path.join(data_dir, "cgroups", "proc_stat_t0"),
            os.path.join(cgroup.path, "cpuacct.stat"): os.path.join(data_dir, "cgroups", "cpuacct.stat_t0")
        }

        cgroup.initialize_cpu_usage()

        with self.assertRaises(CGroupsException):
            cgroup.initialize_cpu_usage()

    def test_get_cpu_usage_should_raise_an_exception_when_initialize_cpu_usage_has_not_been_invoked(self):
        cgroup = CpuCgroup("test", "/sys/fs/cgroup/cpu/system.slice/test")

        with self.assertRaises(CGroupsException):
            cpu_usage = cgroup.get_cpu_usage()  # pylint: disable=unused-variable
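    # Illustrative note (hypothetical example, kept as a comment; not part of the
    # original tests): the mock_read_file_map pattern used above redirects
    # fileutil.read_file() on a per-path basis, e.g.
    #
    #     TestCpuCgroup.mock_read_file_map = {
    #         "/proc/stat": "/path/to/mock/proc_stat_t0",  # hypothetical mock file
    #     }
    #
    # Paths not in the map are read from the real filesystem, and mapping a path
    # to an Exception instance makes the read raise instead of returning content.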
    def test_get_throttled_time_should_return_the_value_since_its_last_invocation(self):
        test_file = os.path.join(self.tmp_dir, "cpu.stat")
        shutil.copyfile(os.path.join(data_dir, "cgroups", "cpu.stat_t0"), test_file)  # throttled_time = 50
        cgroup = CpuCgroup("test", self.tmp_dir)
        cgroup.initialize_cpu_usage()
        shutil.copyfile(os.path.join(data_dir, "cgroups", "cpu.stat_t1"), test_file)  # throttled_time = 2075541442327

        throttled_time = cgroup.get_cpu_throttled_time()

        self.assertEqual(throttled_time, float(2075541442327 - 50) / 1E9, "The value of throttled_time is incorrect")

    def test_get_tracked_metrics_should_return_the_throttled_time(self):
        cgroup = CpuCgroup("test", os.path.join(data_dir, "cgroups"))
        cgroup.initialize_cpu_usage()

        def find_throttled_time(metrics):
            return [m for m in metrics if m.counter == MetricsCounter.THROTTLED_TIME]

        found = find_throttled_time(cgroup.get_tracked_metrics())
        self.assertTrue(len(found) == 0, "get_tracked_metrics should not fetch the throttled time by default. Found: {0}".format(found))

        found = find_throttled_time(cgroup.get_tracked_metrics(track_throttled_time=True))
        self.assertTrue(len(found) == 1, "get_tracked_metrics should have fetched the throttled time when requested. Found: {0}".format(found))


class TestMemoryCgroup(AgentTestCase):
    def test_get_metrics(self):
        test_mem_cg = MemoryCgroup("test_extension", os.path.join(data_dir, "cgroups", "memory_mount"))

        memory_usage = test_mem_cg.get_memory_usage()
        self.assertEqual(150000, memory_usage)

        max_memory_usage = test_mem_cg.get_max_memory_usage()
        self.assertEqual(1000000, max_memory_usage)

        swap_memory_usage = test_mem_cg.try_swap_memory_usage()
        self.assertEqual(20000, swap_memory_usage)

    def test_get_metrics_when_files_not_present(self):
        test_mem_cg = MemoryCgroup("test_extension", os.path.join(data_dir, "cgroups"))

        with self.assertRaises(IOError) as e:
            test_mem_cg.get_memory_usage()
        self.assertEqual(e.exception.errno, errno.ENOENT)

        with self.assertRaises(IOError) as e:
            test_mem_cg.get_max_memory_usage()
        self.assertEqual(e.exception.errno, errno.ENOENT)

        with self.assertRaises(IOError) as e:
            test_mem_cg.try_swap_memory_usage()
        self.assertEqual(e.exception.errno, errno.ENOENT)

    def test_get_memory_usage_counters_not_found(self):
        test_mem_cg = MemoryCgroup("test_extension", os.path.join(data_dir, "cgroups", "missing_memory_counters"))

        with self.assertRaises(CounterNotFound):
            test_mem_cg.get_memory_usage()

        swap_memory_usage = test_mem_cg.try_swap_memory_usage()
        self.assertEqual(0, swap_memory_usage)

Azure-WALinuxAgent-2b21de5/tests/ga/test_cgroupstelemetry.py

# Copyright 2019 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.4+ and Openssl 1.0+
#

import errno
import os
import random
import time

from azurelinuxagent.ga.cgroup import CpuCgroup, MemoryCgroup
from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry
from azurelinuxagent.common.utils import fileutil
from tests.lib.tools import AgentTestCase, data_dir, patch


def raise_ioerror(*_):
    e = IOError()
    from errno import EIO
    e.errno = EIO
    raise e


def median(lst):
    data = sorted(lst)
    l_len = len(data)
    if l_len < 1:
        return None
    if l_len % 2 == 0:
        return (data[int((l_len - 1) / 2)] + data[int((l_len + 1) / 2)]) / 2.0
    else:
        return data[int((l_len - 1) / 2)]


def generate_metric_list(lst):
    return [float(sum(lst)) / float(len(lst)),
            min(lst),
            max(lst),
            median(lst),
            len(lst)]


def consume_cpu_time():
    waste = 0
    for x in range(1, 200000):  # pylint: disable=unused-variable
        waste += random.random()
    return waste


def consume_memory():
    waste = []
    for x in range(1, 3):  # pylint: disable=unused-variable
        waste.append([random.random()] * 10000)
        time.sleep(0.1)
        waste *= 0
    return waste


class TestCGroupsTelemetry(AgentTestCase):
    NumSummarizationValues = 7

    @classmethod
    def setUpClass(cls):
        AgentTestCase.setUpClass()

        # CPU Cgroups compute usage based on /proc/stat and /sys/fs/cgroup/.../cpuacct.stat; use mock data for those
        # files
        original_read_file = fileutil.read_file

        def mock_read_file(filepath, **args):
            if filepath == "/proc/stat":
                filepath = os.path.join(data_dir, "cgroups", "proc_stat_t0")
            elif filepath.endswith("/cpuacct.stat"):
                filepath = os.path.join(data_dir, "cgroups", "cpuacct.stat_t0")
            return original_read_file(filepath, **args)

        cls._mock_read_cpu_cgroup_file = patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file)
        cls._mock_read_cpu_cgroup_file.start()

    @classmethod
    def tearDownClass(cls):
        cls._mock_read_cpu_cgroup_file.stop()
        AgentTestCase.tearDownClass()

    def setUp(self):
        AgentTestCase.setUp(self)
        CGroupsTelemetry.reset()

    def tearDown(self):
        AgentTestCase.tearDown(self)
        CGroupsTelemetry.reset()

    @staticmethod
    def _track_new_extension_cgroups(num_extensions):
        for i in range(num_extensions):
            dummy_cpu_cgroup = CpuCgroup("dummy_extension_{0}".format(i), "dummy_cpu_path_{0}".format(i))
            CGroupsTelemetry.track_cgroup(dummy_cpu_cgroup)

            dummy_memory_cgroup = MemoryCgroup("dummy_extension_{0}".format(i), "dummy_memory_path_{0}".format(i))
            CGroupsTelemetry.track_cgroup(dummy_memory_cgroup)

    def _assert_cgroups_are_tracked(self, num_extensions):
        for i in range(num_extensions):
            self.assertTrue(CGroupsTelemetry.is_tracked("dummy_cpu_path_{0}".format(i)))
            self.assertTrue(CGroupsTelemetry.is_tracked("dummy_memory_path_{0}".format(i)))

    def _assert_polled_metrics_equal(self, metrics, cpu_metric_value, memory_metric_value, max_memory_metric_value, swap_memory_value):
        for metric in metrics:
            self.assertIn(metric.category, ["CPU", "Memory"])
            if metric.category == "CPU":
                self.assertEqual(metric.counter, "% Processor Time")
                self.assertEqual(metric.value, cpu_metric_value)
            if metric.category == "Memory":
                self.assertIn(metric.counter, ["Total Memory Usage", "Max Memory Usage", "Swap Memory Usage"])
                if metric.counter == "Total Memory Usage":
                    self.assertEqual(metric.value, memory_metric_value)
                elif metric.counter == "Max Memory Usage":
                    self.assertEqual(metric.value, max_memory_metric_value)
                elif metric.counter == "Swap Memory Usage":
                    self.assertEqual(metric.value, swap_memory_value)
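    # Illustrative usage (hypothetical example, kept as a comment): the tests below
    # follow the same register-then-poll pattern built on the helpers above, e.g.:
    #
    #     self._track_new_extension_cgroups(2)            # registers 2 CPU + 2 memory cgroups
    #     metrics = CGroupsTelemetry.poll_all_tracked()   # returns a list of MetricValue objects
    #     self._assert_polled_metrics_equal(metrics, cpu, mem, max_mem, swap)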
    def test_telemetry_polling_with_active_cgroups(self, *args):  # pylint: disable=unused-argument
        num_extensions = 3
        self._track_new_extension_cgroups(num_extensions)

        with patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_max_memory_usage") as patch_get_memory_max_usage:
            with patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_memory_usage") as patch_get_memory_usage:
                with patch("azurelinuxagent.ga.cgroup.MemoryCgroup.try_swap_memory_usage") as patch_try_swap_memory_usage:
                    with patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage") as patch_get_cpu_usage:
                        with patch("azurelinuxagent.ga.cgroup.CGroup.is_active") as patch_is_active:
                            patch_is_active.return_value = True

                            current_cpu = 30
                            current_memory = 209715200
                            current_max_memory = 471859200
                            current_swap_memory = 20971520

                            # 1 CPU metric + 1 Current Memory + 1 Max memory + 1 swap memory
                            num_of_metrics_per_extn_expected = 4
                            patch_get_cpu_usage.return_value = current_cpu
                            patch_get_memory_usage.return_value = current_memory  # example 200 MB
                            patch_get_memory_max_usage.return_value = current_max_memory  # example 450 MB
                            patch_try_swap_memory_usage.return_value = current_swap_memory  # example 20MB
                            num_polls = 12

                            for data_count in range(1, num_polls + 1):  # pylint: disable=unused-variable
                                metrics = CGroupsTelemetry.poll_all_tracked()

                                self.assertEqual(len(metrics), num_extensions * num_of_metrics_per_extn_expected)
                                self._assert_polled_metrics_equal(metrics, current_cpu, current_memory, current_max_memory, current_swap_memory)

    @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_max_memory_usage", side_effect=raise_ioerror)
    @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_memory_usage", side_effect=raise_ioerror)
    @patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage", side_effect=raise_ioerror)
    @patch("azurelinuxagent.ga.cgroup.CGroup.is_active", return_value=False)
    def test_telemetry_polling_with_inactive_cgroups(self, *_):
        num_extensions = 5
        no_extensions_expected = 0  # pylint: disable=unused-variable

        self._track_new_extension_cgroups(num_extensions)
        self._assert_cgroups_are_tracked(num_extensions)

        metrics = CGroupsTelemetry.poll_all_tracked()

        for i in range(num_extensions):
            self.assertFalse(CGroupsTelemetry.is_tracked("dummy_cpu_path_{0}".format(i)))
            self.assertFalse(CGroupsTelemetry.is_tracked("dummy_memory_path_{0}".format(i)))

        self.assertEqual(len(metrics), 0)

    @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_max_memory_usage")
    @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_memory_usage")
    @patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage")
    @patch("azurelinuxagent.ga.cgroup.CGroup.is_active")
    def test_telemetry_polling_with_changing_cgroups_state(self, patch_is_active, patch_get_cpu_usage,  # pylint: disable=unused-argument
                                                           patch_get_mem, patch_get_max_mem, *args):
        num_extensions = 5
        self._track_new_extension_cgroups(num_extensions)

        patch_is_active.return_value = True

        no_extensions_expected = 0  # pylint: disable=unused-variable
        expected_data_count = 1  # pylint: disable=unused-variable

        current_cpu = 30
        current_memory = 209715200
        current_max_memory = 471859200

        patch_get_cpu_usage.return_value = current_cpu
        patch_get_mem.return_value = current_memory  # example 200 MB
        patch_get_max_mem.return_value = current_max_memory  # example 450 MB

        self._assert_cgroups_are_tracked(num_extensions)
        CGroupsTelemetry.poll_all_tracked()

        self._assert_cgroups_are_tracked(num_extensions)

        patch_is_active.return_value = False
        patch_get_cpu_usage.side_effect = raise_ioerror
        patch_get_mem.side_effect = raise_ioerror
        patch_get_max_mem.side_effect = raise_ioerror
CGroupsTelemetry.poll_all_tracked() for i in range(num_extensions): self.assertFalse(CGroupsTelemetry.is_tracked("dummy_cpu_path_{0}".format(i))) self.assertFalse(CGroupsTelemetry.is_tracked("dummy_memory_path_{0}".format(i))) # mocking get_proc_stat to make it run on Mac and other systems. This test does not need to read the values of the # /proc/stat file on the filesystem. @patch("azurelinuxagent.common.logger.periodic_warn") def test_telemetry_polling_to_not_generate_transient_logs_ioerror_file_not_found(self, patch_periodic_warn): num_extensions = 1 self._track_new_extension_cgroups(num_extensions) self.assertEqual(0, patch_periodic_warn.call_count) # Not expecting logs present for io_error with errno=errno.ENOENT io_error_2 = IOError() io_error_2.errno = errno.ENOENT with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=io_error_2): poll_count = 1 for data_count in range(poll_count, 10): # pylint: disable=unused-variable CGroupsTelemetry.poll_all_tracked() self.assertEqual(0, patch_periodic_warn.call_count) @patch("azurelinuxagent.common.logger.periodic_warn") def test_telemetry_polling_to_generate_transient_logs_ioerror_permission_denied(self, patch_periodic_warn): num_extensions = 1 num_controllers = 1 is_active_check_per_controller = 2 self._track_new_extension_cgroups(num_extensions) self.assertEqual(0, patch_periodic_warn.call_count) # Expecting logs to be present for different kind of errors io_error_3 = IOError() io_error_3.errno = errno.EPERM with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=io_error_3): poll_count = 1 expected_count_per_call = num_controllers + is_active_check_per_controller # get_max_memory_usage memory controller would generate a log statement, and each cgroup would invoke a # is active check raising an exception for data_count in range(poll_count, 10): # pylint: disable=unused-variable CGroupsTelemetry.poll_all_tracked() self.assertEqual(poll_count * expected_count_per_call, patch_periodic_warn.call_count) def test_telemetry_polling_to_generate_transient_logs_index_error(self): num_extensions = 1 self._track_new_extension_cgroups(num_extensions) # Generating a different kind of error (non-IOError) to check the logging. # Trying to invoke IndexError during the getParameter call with patch("azurelinuxagent.common.utils.fileutil.read_file", return_value=''): with patch("azurelinuxagent.common.logger.periodic_warn") as patch_periodic_warn: expected_call_count = 1 # 1 periodic warning for memory for data_count in range(1, 10): # pylint: disable=unused-variable CGroupsTelemetry.poll_all_tracked() self.assertEqual(expected_call_count, patch_periodic_warn.call_count) @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.try_swap_memory_usage") @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_max_memory_usage") @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_memory_usage") @patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage") @patch("azurelinuxagent.ga.cgroup.CGroup.is_active") def test_telemetry_calculations(self, patch_is_active, patch_get_cpu_usage, patch_get_memory_usage, patch_get_memory_max_usage, patch_try_memory_swap_usage, *args): # pylint: disable=unused-argument num_polls = 10 num_extensions = 1 cpu_percent_values = [random.randint(0, 100) for _ in range(num_polls)] # only verifying calculations and not validity of the values. 
memory_usage_values = [random.randint(0, 8 * 1024 ** 3) for _ in range(num_polls)] max_memory_usage_values = [random.randint(0, 8 * 1024 ** 3) for _ in range(num_polls)] swap_usage_values = [random.randint(0, 8 * 1024 ** 3) for _ in range(num_polls)] self._track_new_extension_cgroups(num_extensions) self.assertEqual(2 * num_extensions, len(CGroupsTelemetry._tracked)) for i in range(num_polls): patch_is_active.return_value = True patch_get_cpu_usage.return_value = cpu_percent_values[i] patch_get_memory_usage.return_value = memory_usage_values[i] patch_get_memory_max_usage.return_value = max_memory_usage_values[i] patch_try_memory_swap_usage.return_value = swap_usage_values[i] metrics = CGroupsTelemetry.poll_all_tracked() # 1 CPU metric + 1 Current Memory + 1 Max memory + 1 swap memory self.assertEqual(len(metrics), 4 * num_extensions) self._assert_polled_metrics_equal(metrics, cpu_percent_values[i], memory_usage_values[i], max_memory_usage_values[i], swap_usage_values[i]) def test_cgroup_tracking(self, *args): # pylint: disable=unused-argument num_extensions = 5 num_controllers = 2 self._track_new_extension_cgroups(num_extensions) self._assert_cgroups_are_tracked(num_extensions) self.assertEqual(num_extensions * num_controllers, len(CGroupsTelemetry._tracked)) def test_cgroup_is_tracked(self, *args): # pylint: disable=unused-argument num_extensions = 5 self._track_new_extension_cgroups(num_extensions) self._assert_cgroups_are_tracked(num_extensions) self.assertFalse(CGroupsTelemetry.is_tracked("not_present_cpu_dummy_path")) self.assertFalse(CGroupsTelemetry.is_tracked("not_present_memory_dummy_path")) @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_memory_usage", side_effect=raise_ioerror) def test_process_cgroup_metric_with_no_memory_cgroup_mounted(self, *args): # pylint: disable=unused-argument num_extensions = 5 self._track_new_extension_cgroups(num_extensions) with patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage") as patch_get_cpu_usage: with patch("azurelinuxagent.ga.cgroup.CGroup.is_active") as patch_is_active: patch_is_active.return_value = True current_cpu = 30 patch_get_cpu_usage.return_value = current_cpu poll_count = 1 for data_count in range(poll_count, 10): # pylint: disable=unused-variable metrics = CGroupsTelemetry.poll_all_tracked() self.assertEqual(len(metrics), num_extensions * 1) # Only CPU populated self._assert_polled_metrics_equal(metrics, current_cpu, 0, 0, 0) @patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage", side_effect=raise_ioerror) def test_process_cgroup_metric_with_no_cpu_cgroup_mounted(self, *args): # pylint: disable=unused-argument num_extensions = 5 self._track_new_extension_cgroups(num_extensions) with patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_max_memory_usage") as patch_get_memory_max_usage: with patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_memory_usage") as patch_get_memory_usage: with patch("azurelinuxagent.ga.cgroup.MemoryCgroup.try_swap_memory_usage") as patch_try_swap_memory_usage: with patch("azurelinuxagent.ga.cgroup.CGroup.is_active") as patch_is_active: patch_is_active.return_value = True current_memory = 209715200 current_max_memory = 471859200 current_swap_memory = 20971520 patch_get_memory_usage.return_value = current_memory # example 200 MB patch_get_memory_max_usage.return_value = current_max_memory # example 450 MB patch_try_swap_memory_usage.return_value = current_swap_memory # example 20MB num_polls = 10 for data_count in range(1, num_polls + 1): # pylint: disable=unused-variable metrics = 
CGroupsTelemetry.poll_all_tracked() # Memory is only populated, CPU is not. Thus 3 metrics for memory. self.assertEqual(len(metrics), num_extensions * 3) self._assert_polled_metrics_equal(metrics, 0, current_memory, current_max_memory, current_swap_memory) @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_memory_usage", side_effect=raise_ioerror) @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_max_memory_usage", side_effect=raise_ioerror) @patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage", side_effect=raise_ioerror) def test_extension_telemetry_not_sent_for_empty_perf_metrics(self, *args): # pylint: disable=unused-argument num_extensions = 5 self._track_new_extension_cgroups(num_extensions) with patch("azurelinuxagent.ga.cgroup.CGroup.is_active") as patch_is_active: patch_is_active.return_value = False poll_count = 1 for data_count in range(poll_count, 10): # pylint: disable=unused-variable metrics = CGroupsTelemetry.poll_all_tracked() self.assertEqual(0, len(metrics)) @patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage") @patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_throttled_time") @patch("azurelinuxagent.ga.cgroup.CGroup.is_active") def test_cgroup_telemetry_should_not_report_cpu_negative_value(self, patch_is_active, path_get_throttled_time, patch_get_cpu_usage): num_polls = 5 num_extensions = 1 # only verifying calculations and not validity of the values. cpu_percent_values = [random.randint(0, 100) for _ in range(num_polls-1)] cpu_percent_values.append(-1) cpu_throttled_values = [random.randint(0, 60 * 60) for _ in range(num_polls)] dummy_cpu_cgroup = CpuCgroup("dummy_extension_name", "dummy_cpu_path") CGroupsTelemetry.track_cgroup(dummy_cpu_cgroup) self.assertEqual(1, len(CGroupsTelemetry._tracked)) for i in range(num_polls): patch_is_active.return_value = True patch_get_cpu_usage.return_value = cpu_percent_values[i] path_get_throttled_time.return_value = cpu_throttled_values[i] CGroupsTelemetry._track_throttled_time = True metrics = CGroupsTelemetry.poll_all_tracked() # 1 CPU metric + 1 CPU throttled # ignore CPU metrics from telemetry if cpu cgroup reports negative value if i < num_polls-1: self.assertEqual(len(metrics), 2 * num_extensions) else: self.assertEqual(len(metrics), 0) for metric in metrics: self.assertGreaterEqual(metric.value, 0, "telemetry should not report negative value") Azure-WALinuxAgent-2b21de5/tests/ga/test_collect_logs.py000066400000000000000000000310371462617747000233470ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import os from azurelinuxagent.common import logger, conf from azurelinuxagent.ga.cgroup import CpuCgroup, MemoryCgroup, MetricValue from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator from azurelinuxagent.common.logger import Logger from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.collect_logs import get_collect_logs_handler, is_log_collection_allowed, \ get_log_collector_monitor_handler from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.wire_protocol_data import DATA_FILE from tests.lib.tools import Mock, MagicMock, patch, AgentTestCase, clear_singleton_instances, skip_if_predicate_true, \ is_python_version_26, data_dir @contextlib.contextmanager def _create_collect_logs_handler(iterations=1, cgroups_enabled=True, collect_logs_conf=True): """ Creates an instance of CollectLogsHandler that * Uses a mock_wire_protocol for network requests, * Runs its main loop only the number of times given in the 'iterations' parameter, and * Does not sleep at the end of each iteration The returned CollectLogsHandler is augmented with 2 methods: * get_mock_wire_protocol() - returns the mock protocol * run_and_wait() - invokes run() and wait() on the CollectLogsHandler """ with mock_wire_protocol(DATA_FILE) as protocol: protocol_util = MagicMock() protocol_util.get_protocol = Mock(return_value=protocol) with patch("azurelinuxagent.ga.collect_logs.get_protocol_util", return_value=protocol_util): with patch("azurelinuxagent.ga.collect_logs.CollectLogsHandler.stopped", side_effect=[False] * iterations + [True]): with patch("time.sleep"): # Grab the singleton to patch it cgroups_configurator_singleton = CGroupConfigurator.get_instance() with patch.object(cgroups_configurator_singleton, "enabled", return_value=cgroups_enabled): with patch("azurelinuxagent.ga.collect_logs.conf.get_collect_logs", return_value=collect_logs_conf): def run_and_wait(): collect_logs_handler.run() collect_logs_handler.join() collect_logs_handler = get_collect_logs_handler() collect_logs_handler.get_mock_wire_protocol = lambda: protocol collect_logs_handler.run_and_wait = run_and_wait yield collect_logs_handler @skip_if_predicate_true(is_python_version_26, "Disabled on Python 2.6") class TestCollectLogs(AgentTestCase, HttpRequestPredicates): def setUp(self): AgentTestCase.setUp(self) prefix = "UnitTest" logger.DEFAULT_LOGGER = Logger(prefix=prefix) self.archive_path = os.path.join(self.tmp_dir, "logs.zip") self.mock_archive_path = patch("azurelinuxagent.ga.collect_logs.COMPRESSED_ARCHIVE_PATH", self.archive_path) self.mock_archive_path.start() self.logger_path = os.path.join(self.tmp_dir, "waagent.log") self.mock_logger_path = patch.object(conf, "get_agent_log_file", return_value=self.logger_path) self.mock_logger_path.start() # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) def tearDown(self): if os.path.exists(self.archive_path): os.remove(self.archive_path) self.mock_archive_path.stop() if os.path.exists(self.logger_path): os.remove(self.logger_path) self.mock_logger_path.stop() AgentTestCase.tearDown(self) def _create_dummy_archive(self, size=1024): with open(self.archive_path, "wb") as f: f.truncate(size) def 
test_it_should_only_collect_logs_if_conditions_are_met(self): # In order to collect logs, three conditions have to be met: # 1) the flag must be set to true in the conf file # 2) cgroups must be managing services # 3) python version 2.7+ which is automatically true for these tests since they are disabled on py2.6 # cgroups not enabled, config flag false with _create_collect_logs_handler(cgroups_enabled=False, collect_logs_conf=False): self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled") # cgroups enabled, config flag false with _create_collect_logs_handler(cgroups_enabled=True, collect_logs_conf=False): self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled") # cgroups not enabled, config flag true with _create_collect_logs_handler(cgroups_enabled=False, collect_logs_conf=True): self.assertEqual(False, is_log_collection_allowed(), "Log collection should not have been enabled") # cgroups enabled, config flag true with _create_collect_logs_handler(cgroups_enabled=True, collect_logs_conf=True): self.assertEqual(True, is_log_collection_allowed(), "Log collection should have been enabled") def test_it_uploads_logs_when_collection_is_successful(self): archive_size = 42 def mock_run_command(*_, **__): return self._create_dummy_archive(size=archive_size) with _create_collect_logs_handler() as collect_logs_handler: with patch("azurelinuxagent.ga.collect_logs.shellutil.run_command", side_effect=mock_run_command): def http_put_handler(url, content, **__): if self.is_host_plugin_put_logs_request(url): http_put_handler.counter += 1 http_put_handler.archive = content return MockHttpResponse(status=200) return None http_put_handler.counter = 0 http_put_handler.archive = b"" protocol = collect_logs_handler.get_mock_wire_protocol() protocol.set_http_handlers(http_put_handler=http_put_handler) collect_logs_handler.run_and_wait() self.assertEqual(http_put_handler.counter, 1, "The PUT API to upload logs should have been called once") self.assertTrue(os.path.exists(self.archive_path), "The archive file should exist on disk") self.assertEqual(archive_size, len(http_put_handler.archive), "The archive file should have {0} bytes, not {1}".format(archive_size, len(http_put_handler.archive))) def test_it_does_not_upload_logs_when_collection_is_unsuccessful(self): with _create_collect_logs_handler() as collect_logs_handler: with patch("azurelinuxagent.ga.collect_logs.shellutil.run_command", side_effect=Exception("test exception")): def http_put_handler(url, _, **__): if self.is_host_plugin_put_logs_request(url): http_put_handler.counter += 1 return MockHttpResponse(status=200) return None http_put_handler.counter = 0 protocol = collect_logs_handler.get_mock_wire_protocol() protocol.set_http_handlers(http_put_handler=http_put_handler) collect_logs_handler.run_and_wait() self.assertFalse(os.path.exists(self.archive_path), "The archive file should not exist on disk") self.assertEqual(http_put_handler.counter, 0, "The PUT API to upload logs shouldn't have been called") @contextlib.contextmanager def _create_log_collector_monitor_handler(iterations=1): """ Creates an instance of LogCollectorMonitorHandler that * Runs its main loop only the number of times given in the 'iterations' parameter, and * Does not sleep at the end of each iteration The returned CollectLogsHandler is augmented with 2 methods: * run_and_wait() - invokes run() and wait() on the CollectLogsHandler """ with 
patch("azurelinuxagent.ga.collect_logs.LogCollectorMonitorHandler.stopped", side_effect=[False] * iterations + [True]): with patch("time.sleep"): original_read_file = fileutil.read_file def mock_read_file(filepath, **args): if filepath == "/proc/stat": filepath = os.path.join(data_dir, "cgroups", "proc_stat_t0") elif filepath.endswith("/cpuacct.stat"): filepath = os.path.join(data_dir, "cgroups", "cpuacct.stat_t0") return original_read_file(filepath, **args) with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file): def run_and_wait(): monitor_log_collector.run() monitor_log_collector.join() cgroups = [ CpuCgroup("test", "dummy_cpu_path"), MemoryCgroup("test", "dummy_memory_path") ] monitor_log_collector = get_log_collector_monitor_handler(cgroups) monitor_log_collector.run_and_wait = run_and_wait yield monitor_log_collector class TestLogCollectorMonitorHandler(AgentTestCase): @patch('azurelinuxagent.common.event.EventLogger.add_metric') @patch("azurelinuxagent.ga.collect_logs.LogCollectorMonitorHandler._poll_resource_usage") def test_send_extension_metrics_telemetry(self, patch_poll_resource_usage, patch_add_metric): with _create_log_collector_monitor_handler() as log_collector_monitor_handler: patch_poll_resource_usage.return_value = [MetricValue("Process", "% Processor Time", "service", 1), MetricValue("Process", "Throttled Time", "service", 1), MetricValue("Memory", "Total Memory Usage", "service", 1), MetricValue("Memory", "Max Memory Usage", "service", 1), MetricValue("Memory", "Swap Memory Usage", "service", 1) ] log_collector_monitor_handler.run_and_wait() self.assertEqual(1, patch_poll_resource_usage.call_count) self.assertEqual(5, patch_add_metric.call_count) # Five metrics being sent. @patch("os._exit", side_effect=Exception) @patch("azurelinuxagent.ga.collect_logs.LogCollectorMonitorHandler._poll_resource_usage") def test_verify_log_collector_memory_limit_exceeded(self, patch_poll_resource_usage, mock_exit): with _create_log_collector_monitor_handler() as log_collector_monitor_handler: with patch("azurelinuxagent.ga.cgroupconfigurator.LOGCOLLECTOR_MEMORY_LIMIT", 8): patch_poll_resource_usage.return_value = [MetricValue("Process", "% Processor Time", "service", 1), MetricValue("Process", "Throttled Time", "service", 1), MetricValue("Memory", "Total Memory Usage", "service", 9), MetricValue("Memory", "Max Memory Usage", "service", 7), MetricValue("Memory", "Swap Memory Usage", "service", 0) ] try: log_collector_monitor_handler.run_and_wait() except Exception: self.assertEqual(mock_exit.call_count, 1) Azure-WALinuxAgent-2b21de5/tests/ga/test_collect_telemetry_events.py000066400000000000000000000771621462617747000260120ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import glob import json import os import random import re import shutil import string import uuid from collections import defaultdict from mock import patch, MagicMock from azurelinuxagent.common import conf from azurelinuxagent.common.event import EVENTS_DIRECTORY from azurelinuxagent.common.exception import InvalidExtensionEventError, ServiceStoppedError from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.telemetryevent import GuestAgentGenericLogsSchema, \ CommonTelemetryEventSchema from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.collect_telemetry_events import ExtensionEventSchema, _ProcessExtensionEvents from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.tools import AgentTestCase, clear_singleton_instances, data_dir class TestExtensionTelemetryHandler(AgentTestCase, HttpRequestPredicates): _TEST_DATA_DIR = os.path.join(data_dir, "events", "extension_events") _WELL_FORMED_FILES = os.path.join(_TEST_DATA_DIR, "well_formed_files") _MALFORMED_FILES = os.path.join(_TEST_DATA_DIR, "malformed_files") _MIX_FILES = os.path.join(_TEST_DATA_DIR, "mix_files") # To make tests more versatile, include this key in a test event to mark that event as a bad event. # This event will then be skipped and will not be counted as a good event. This is purely for testing purposes, # we use the good_event_count to validate the no of events the agent actually sends to Wireserver. # Eg: { # "EventLevel": "INFO", # "Message": "Starting IaaS ScriptHandler Extension v1", # "Version": "1.2.3", # "TaskName": "Extension Info", # "EventPid": "5676", # "EventTid": "1", # "OperationId": "e1065def-7571-42c2-88a2-4f8b4c8f226d", # "TimeStamp": "2019-12-12T01:11:38.2298194Z", # "BadEvent": true # } BAD_EVENT_KEY = 'BadEvent' def setUp(self): AgentTestCase.setUp(self) clear_singleton_instances(ProtocolUtil) # Create the log directory if not exists fileutil.mkdir(conf.get_ext_log_dir()) def tearDown(self): AgentTestCase.tearDown(self) @staticmethod def _parse_file_and_count_good_events(test_events_file_path): if not os.path.exists(test_events_file_path): raise OSError("Test Events file {0} not found".format(test_events_file_path)) try: with open(test_events_file_path, "rb") as fd: event_data = fd.read().decode("utf-8") # Parse the string and get the list of events events = json.loads(event_data) if not isinstance(events, list): events = [events] except Exception as e: print("Error parsing json file: {0}".format(e)) return 0 bad_key = TestExtensionTelemetryHandler.BAD_EVENT_KEY return len([e for e in events if bad_key not in e or not e[bad_key]]) @staticmethod def _create_random_extension_events_dir_with_events(no_of_extensions, events_path, no_of_chars=10): if os.path.isdir(events_path): # If its a directory, get all files from that directory test_events_paths = glob.glob(os.path.join(events_path, "*")) else: test_events_paths = [events_path] extension_names = {} for i in range(no_of_extensions): # pylint: disable=unused-variable ext_name = "Microsoft.OSTCExtensions.{0}".format(''.join(random.sample(string.ascii_letters, no_of_chars))) no_of_good_events = 0 for test_events_file_path in test_events_paths: if not os.path.exists(test_events_file_path) or not os.path.isfile(test_events_file_path): continue no_of_good_events += TestExtensionTelemetryHandler._parse_file_and_count_good_events(test_events_file_path) events_dir = os.path.join(conf.get_ext_log_dir(), ext_name, 
EVENTS_DIRECTORY) fileutil.mkdir(events_dir) shutil.copy(test_events_file_path, events_dir) extension_names[ext_name] = no_of_good_events return extension_names @staticmethod def _get_no_of_events_from_body(body): return body.count("") @staticmethod def _replace_in_file(file_path, replace_from, replace_to): with open(file_path, 'r') as f: content = f.read() content = content.replace(replace_from, replace_to) with open(file_path, 'w') as f: f.write(content) @staticmethod def _get_param_from_events(event_list): for event in event_list: for param in event.parameters: yield param @staticmethod def _get_handlers_with_version(event_list): event_with_name_and_versions = defaultdict(list) for param in TestExtensionTelemetryHandler._get_param_from_events(event_list): if param.name == GuestAgentGenericLogsSchema.EventName: handler_name, version = param.value.split("-") event_with_name_and_versions[handler_name].append(version) return event_with_name_and_versions @staticmethod def _get_param_value_from_event_body_if_exists(event_list, param_name): param_values = [] for param in TestExtensionTelemetryHandler._get_param_from_events(event_list): if param.name == param_name: param_values.append(param.value) return param_values @contextlib.contextmanager def _create_extension_telemetry_processor(self, telemetry_handler=None): event_list = [] if not telemetry_handler: telemetry_handler = MagicMock(autospec=True) telemetry_handler.stopped = MagicMock(return_value=False) telemetry_handler.enqueue_event = MagicMock(wraps=event_list.append) extension_telemetry_processor = _ProcessExtensionEvents(telemetry_handler) extension_telemetry_processor.event_list = event_list yield extension_telemetry_processor def _assert_handler_data_in_event_list(self, telemetry_events, ext_names_with_count, expected_count=None): for ext_name, test_file_event_count in ext_names_with_count.items(): # If expected_count is not given, then the take the no of good events in the test file as the source of truth count = expected_count if expected_count is not None else test_file_event_count if count == 0: self.assertNotIn(ext_name, telemetry_events, "Found telemetry events for unwanted extension {0}".format(ext_name)) continue self.assertIn(ext_name, telemetry_events, "Extension name: {0} not found in the Telemetry Events".format(ext_name)) self.assertEqual(len(telemetry_events[ext_name]), count, "No of good events for ext {0} do not match".format(ext_name)) def _assert_param_in_events(self, event_list, param_key, param_value, min_count=1): count = 0 for param in TestExtensionTelemetryHandler._get_param_from_events(event_list): if param.name == param_key and param.value == param_value: count += 1 self.assertGreaterEqual(count, min_count, "'{0}: {1}' param only found {2} times in events. 
Min_count required: {3}".format( param_key, param_value, count, min_count)) @staticmethod def _is_string_in_event_body(event_body, expected_string): found = False for body in event_body: if expected_string in body: found = True break return found def test_it_should_not_capture_malformed_events(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: bad_name_ext_with_count = self._create_random_extension_events_dir_with_events(2, self._MALFORMED_FILES) bad_json_ext_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._MALFORMED_FILES, "bad_json_files", "1591816395.json")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, bad_name_ext_with_count, expected_count=0) self._assert_handler_data_in_event_list(telemetry_events, bad_json_ext_with_count, expected_count=0) def test_it_should_capture_and_send_correct_events(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: ext_names_with_count = self._create_random_extension_events_dir_with_events(2, self._WELL_FORMED_FILES) ext_names_with_count.update(self._create_random_extension_events_dir_with_events(3, os.path.join( self._MIX_FILES, "1591835859.json"))) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, ext_names_with_count) def test_it_should_disregard_bad_events_and_keep_good_ones_in_a_mixed_file(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, self._MIX_FILES) extensions_with_count.update(self._create_random_extension_events_dir_with_events(3, os.path.join( self._MALFORMED_FILES, "bad_name_file.json"))) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) def test_it_should_limit_max_no_of_events_to_send_per_run_per_extension_and_report_event(self): max_events = 5 with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with self._create_extension_telemetry_processor() as extension_telemetry_processor: with patch.object(extension_telemetry_processor, "_MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD", max_events): ext_names_with_count = self._create_random_extension_events_dir_with_events(5, self._WELL_FORMED_FILES) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, ext_names_with_count, expected_count=max_events) pattern = r'Reached max count for the extension:\s*(?P.+?);\s*.+' self._assert_event_reported(mock_event, ext_names_with_count, pattern) def test_it_should_only_process_the_newer_events(self): max_events = 5 no_of_extension = 2 test_guid = str(uuid.uuid4()) with self._create_extension_telemetry_processor() as extension_telemetry_processor: with patch.object(extension_telemetry_processor, "_MAX_NUMBER_OF_EVENTS_PER_EXTENSION_PER_PERIOD", max_events): ext_names_with_count = self._create_random_extension_events_dir_with_events(no_of_extension, self._WELL_FORMED_FILES) for ext_name in ext_names_with_count.keys(): 
self._replace_in_file( os.path.join(conf.get_ext_log_dir(), ext_name, EVENTS_DIRECTORY, "9999999999.json"), replace_from='"{0}": ""'.format(ExtensionEventSchema.OperationId), replace_to='"{0}": "{1}"'.format(ExtensionEventSchema.OperationId, test_guid)) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, ext_names_with_count, expected_count=max_events) self._assert_param_in_events(extension_telemetry_processor.event_list, param_key=GuestAgentGenericLogsSchema.Context1, param_value="This is the latest event", min_count=no_of_extension*max_events) self._assert_param_in_events(extension_telemetry_processor.event_list, param_key=GuestAgentGenericLogsSchema.Context3, param_value=test_guid, min_count=no_of_extension*max_events) def test_it_should_parse_extension_event_irrespective_of_case(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._TEST_DATA_DIR, "different_cases")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) def test_it_should_parse_special_chars_properly(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._TEST_DATA_DIR, "special_chars")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) def test_it_should_parse_int_type_for_eventpid_or_eventtid_properly(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._TEST_DATA_DIR, "int_type")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) def _setup_and_assert_tests_for_max_sizes(self, no_of_extensions=2, expected_count=None): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(no_of_extensions, os.path.join( self._TEST_DATA_DIR, "large_messages")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count, expected_count) return extensions_with_count, extension_telemetry_processor.event_list def _assert_invalid_extension_error_event_reported(self, mock_event, handler_name_with_count, error, expected_drop_count=None): self.assertTrue(mock_event.called, "Even a single event not logged") patt = r'Extension:\s+(?PMicrosoft.OSTCExtensions.+?);.+\s*Reason:\s+\[InvalidExtensionEventError\](?P.+?):.+Dropped Count:\s*(?P\d+)' for _, kwargs in mock_event.call_args_list: msg = kwargs['message'] match = re.search(patt, msg, re.MULTILINE) if match is not None: self.assertEqual(match.group("reason").strip(), error, "Incorrect error") self.assertIn(match.group("name"), 
handler_name_with_count, "Extension event not found") count = handler_name_with_count.pop(match.group("name")) count = expected_drop_count if expected_drop_count is not None else count self.assertEqual(int(count), int(match.group("count")), "Dropped count doesnt match") self.assertEqual(len(handler_name_with_count), 0, "All extension events not matched") def _assert_event_reported(self, mock_event, handler_name_with_count, pattern): self.assertTrue(mock_event.called, "Even a single event not logged") for _, kwargs in mock_event.call_args_list: msg = kwargs['message'] match = re.search(pattern, msg, re.MULTILINE) if match is not None: expected_handler_name = match.group("name") self.assertIn(expected_handler_name, handler_name_with_count, "Extension event not found") handler_name_with_count.pop(expected_handler_name) self.assertEqual(len(handler_name_with_count), 0, "All extension events not matched") def test_it_should_trim_message_if_more_than_limit(self): max_len = 100 no_of_extensions = 2 with patch("azurelinuxagent.ga.collect_telemetry_events._ProcessExtensionEvents._EXTENSION_EVENT_MAX_MSG_LEN", max_len): handler_name_with_count, event_list = self._setup_and_assert_tests_for_max_sizes() # pylint: disable=unused-variable context1_vals = self._get_param_value_from_event_body_if_exists(event_list, GuestAgentGenericLogsSchema.Context1) self.assertEqual(no_of_extensions, len(context1_vals), "There should be {0} Context1 values".format(no_of_extensions)) for val in context1_vals: self.assertLessEqual(len(val), max_len, "Message Length does not match") def test_it_should_skip_events_larger_than_max_size_and_report_event(self): max_size = 1000 no_of_extensions = 3 with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with patch("azurelinuxagent.ga.collect_telemetry_events._ProcessExtensionEvents._EXTENSION_EVENT_MAX_SIZE", max_size): handler_name_with_count, _ = self._setup_and_assert_tests_for_max_sizes(no_of_extensions, expected_count=0) self._assert_invalid_extension_error_event_reported(mock_event, handler_name_with_count, error=InvalidExtensionEventError.OversizeEventError) def test_it_should_skip_large_files_greater_than_max_file_size_and_report_event(self): max_file_size = 10000 no_of_extensions = 5 with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with patch("azurelinuxagent.ga.collect_telemetry_events._ProcessExtensionEvents._EXTENSION_EVENT_FILE_MAX_SIZE", max_file_size): handler_name_with_count, _ = self._setup_and_assert_tests_for_max_sizes(no_of_extensions, expected_count=0) pattern = r'Skipping file:\s*{0}/(?P.+?)/{1}.+'.format(conf.get_ext_log_dir(), EVENTS_DIRECTORY) self._assert_event_reported(mock_event, handler_name_with_count, pattern) def test_it_should_map_extension_event_json_correctly_to_telemetry_event(self): # EventName maps to HandlerName + '-' + Version from event file expected_mapping = { GuestAgentGenericLogsSchema.EventName: ExtensionEventSchema.Version, GuestAgentGenericLogsSchema.CapabilityUsed: ExtensionEventSchema.EventLevel, GuestAgentGenericLogsSchema.TaskName: ExtensionEventSchema.TaskName, GuestAgentGenericLogsSchema.Context1: ExtensionEventSchema.Message, GuestAgentGenericLogsSchema.Context2: ExtensionEventSchema.Timestamp, GuestAgentGenericLogsSchema.Context3: ExtensionEventSchema.OperationId, CommonTelemetryEventSchema.EventPid: ExtensionEventSchema.EventPid, CommonTelemetryEventSchema.EventTid: ExtensionEventSchema.EventTid } with self._create_extension_telemetry_processor() as 
extension_telemetry_processor: test_file = os.path.join(self._WELL_FORMED_FILES, "1592355539.json") handler_name = list(self._create_random_extension_events_dir_with_events(1, test_file))[0] extension_telemetry_processor.run() telemetry_event_map = defaultdict(list) for telemetry_event_key in expected_mapping: telemetry_event_map[telemetry_event_key] = self._get_param_value_from_event_body_if_exists( extension_telemetry_processor.event_list, telemetry_event_key) with open(test_file, 'r') as event_file: data = json.load(event_file) extension_event_map = defaultdict(list) for extension_event in data: for event_key in extension_event: extension_event_map[event_key].append(extension_event[event_key]) for telemetry_key in expected_mapping: extension_event_key = expected_mapping[telemetry_key] telemetry_data = telemetry_event_map[telemetry_key] # EventName = "HandlerName-Version" from Extensions extension_data = ["{0}-{1}".format(handler_name, v) for v in extension_event_map[ extension_event_key]] if telemetry_key == GuestAgentGenericLogsSchema.EventName else \ extension_event_map[extension_event_key] self.assertEqual(telemetry_data, extension_data, "The data for {0} and {1} doesn't map properly".format(telemetry_key, extension_event_key)) def test_it_should_always_cleanup_files_on_good_and_bad_cases(self): with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(2, os.path.join( self._TEST_DATA_DIR, "large_messages")) extensions_with_count.update(self._create_random_extension_events_dir_with_events(3, self._MALFORMED_FILES)) extensions_with_count.update(self._create_random_extension_events_dir_with_events(4, self._WELL_FORMED_FILES)) extensions_with_count.update(self._create_random_extension_events_dir_with_events(1, self._MIX_FILES)) # Create random files in the events directory for each extension just to ensure that we delete them later for handler_name in extensions_with_count.keys(): file_name = os.path.join(conf.get_ext_log_dir(), handler_name, EVENTS_DIRECTORY, ''.join(random.sample(string.ascii_letters, 10))) with open(file_name, 'a') as random_file: random_file.write('1*2*3' * 100) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) for handler_name in extensions_with_count.keys(): events_path = os.path.join(conf.get_ext_log_dir(), handler_name, EVENTS_DIRECTORY) self.assertTrue(os.path.exists(events_path), "{0} dir doesn't exist".format(events_path)) self.assertEqual(0, len(os.listdir(events_path)), "There should be no files inside the events dir") def test_it_should_skip_unwanted_parameters_in_event_file(self): extra_params = ["SomethingNewButNotCool", "SomethingVeryWeird"] with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count= self._create_random_extension_events_dir_with_events(3, os.path.join( self._TEST_DATA_DIR, "extra_parameters")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count) for param in extra_params: self.assertEqual(0, len( self._get_param_value_from_event_body_if_exists(extension_telemetry_processor.event_list, extra_params)), "Unwanted param {0} found".format(param)) def 
test_it_should_not_send_events_which_dont_have_all_required_keys_and_report_event(self): with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(3, os.path.join( self._TEST_DATA_DIR, "missing_parameters")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count, expected_count=0) self.assertTrue(mock_event.called, "Even a single event not logged") for _, kwargs in mock_event.call_args_list: # Example: #['Dropped events for Extension: Microsoft.OSTCExtensions.ZJQRNqbKtP; Details:', # 'Reason: [InvalidExtensionEventError] MissingKeyError: eventpid not found; Dropped Count: 2', # 'Reason: [InvalidExtensionEventError] MissingKeyError: Message not found; Dropped Count: 2', # 'Reason: [InvalidExtensionEventError] MissingKeyError: version not found; Dropped Count: 3'] msg = kwargs['message'].split("\n") ext_name = re.search(r'Dropped events for Extension:\s+(?PMicrosoft.OSTCExtensions.+?);.+', msg.pop(0)) if ext_name is not None: ext_name = ext_name.group('name') self.assertIn(ext_name, extensions_with_count, "Extension {0} not found".format(ext_name)) patt = r'\s*Reason:\s+\[InvalidExtensionEventError\](?P.+?):\s*(?P.+?)\s+not found;\s+Dropped Count:\s*(?P\d+)' expected_error_drop_count = { ExtensionEventSchema.EventPid.lower(): 2, ExtensionEventSchema.Message.lower(): 2, ExtensionEventSchema.Version.lower(): 3 } for m in msg: match = re.search(patt, m) self.assertIsNotNone(match, "No InvalidExtensionEventError errors reported") self.assertEqual(match.group("reason").strip(), InvalidExtensionEventError.MissingKeyError, "Error is not a {0}".format(InvalidExtensionEventError.MissingKeyError)) observerd_error = match.group("key").lower() self.assertIn(observerd_error, expected_error_drop_count, "Unexpected error reported") self.assertEqual(expected_error_drop_count.pop(observerd_error), int(match.group("count")), "Unequal no of dropped events") self.assertEqual(len(expected_error_drop_count), 0, "All errros not found yet") del extensions_with_count[ext_name] self.assertEqual(len(extensions_with_count), 0, "All extension events not matched") def test_it_should_not_send_event_where_message_is_empty_and_report_event(self): with patch("azurelinuxagent.ga.collect_telemetry_events.add_log_event") as mock_event: with self._create_extension_telemetry_processor() as extension_telemetry_processor: extensions_with_count = self._create_random_extension_events_dir_with_events(3, os.path.join( self._TEST_DATA_DIR, "empty_message")) extension_telemetry_processor.run() telemetry_events = self._get_handlers_with_version(extension_telemetry_processor.event_list) self._assert_handler_data_in_event_list(telemetry_events, extensions_with_count, expected_count=0) self._assert_invalid_extension_error_event_reported(mock_event, extensions_with_count, InvalidExtensionEventError.EmptyMessageError, expected_drop_count=1) def test_it_should_not_process_events_if_send_telemetry_events_handler_stopped(self): event_list = [] telemetry_handler = MagicMock(autospec=True) telemetry_handler.stopped = MagicMock(return_value=True) telemetry_handler.enqueue_event = MagicMock(wraps=event_list.append) with self._create_extension_telemetry_processor(telemetry_handler) as 
extension_telemetry_processor: self._create_random_extension_events_dir_with_events(3, self._WELL_FORMED_FILES) extension_telemetry_processor.run() self.assertEqual(0, len(event_list), "No events should have been enqueued") def test_it_should_not_delete_event_files_except_current_one_if_service_stopped_midway(self): event_list = [] telemetry_handler = MagicMock(autospec=True) telemetry_handler.stopped = MagicMock(return_value=False) telemetry_handler.enqueue_event = MagicMock(side_effect=ServiceStoppedError("Telemetry service stopped"), wraps=event_list.append) no_of_extensions = 3 # self._WELL_FORMED_FILES has 3 event files, i.e. total files for 3 extensions = 3 * 3 = 9 # But since we delete the file that we were processing last, expected count = 8 expected_event_file_count = 8 with self._create_extension_telemetry_processor(telemetry_handler) as extension_telemetry_processor: ext_names = self._create_random_extension_events_dir_with_events(no_of_extensions, self._WELL_FORMED_FILES) extension_telemetry_processor.run() self.assertEqual(0, len(event_list), "No events should have been enqueued") total_file_count = 0 for ext_name in ext_names: event_dir = os.path.join(conf.get_ext_log_dir(), ext_name, EVENTS_DIRECTORY) file_count = len(os.listdir(event_dir)) self.assertGreater(file_count, 0, "Some event files should still be there") total_file_count += file_count self.assertEqual(expected_event_file_count, total_file_count, "Expected File count doesn't match") Azure-WALinuxAgent-2b21de5/tests/ga/test_env.py000066400000000000000000000203601462617747000214630ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import datetime from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.osutil.default import DefaultOSUtil, shellutil from azurelinuxagent.ga.env import MonitorDhcpClientRestart from tests.lib.tools import AgentTestCase, patch class MonitorDhcpClientRestartTestCase(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) # save the original run_command so that mocks can reference it self.shellutil_run_command = shellutil.run_command # save an instance of the original DefaultOSUtil so that mocks can reference it self.default_osutil = DefaultOSUtil() # AgentTestCase.setUp mocks osutil.factory._get_osutil; we override that mock for this class with a new mock # that always returns the default implementation. 
self.mock_get_osutil = patch("azurelinuxagent.common.osutil.factory._get_osutil", return_value=DefaultOSUtil()) self.mock_get_osutil.start() def tearDown(self): self.mock_get_osutil.stop() AgentTestCase.tearDown(self) def test_get_dhcp_client_pid_should_return_a_sorted_list_of_pids(self): with patch("azurelinuxagent.common.utils.shellutil.run_command", return_value="11 9 5 22 4 6"): pids = MonitorDhcpClientRestart(get_osutil())._get_dhcp_client_pid() self.assertEqual(pids, [4, 5, 6, 9, 11, 22]) def test_get_dhcp_client_pid_should_return_an_empty_list_and_log_a_warning_when_dhcp_client_is_not_running(self): with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", side_effect=lambda _: self.shellutil_run_command(["pidof", "non-existing-process"])): with patch('azurelinuxagent.common.logger.Logger.warn') as mock_warn: pids = MonitorDhcpClientRestart(get_osutil())._get_dhcp_client_pid() self.assertEqual(pids, []) self.assertEqual(mock_warn.call_count, 1) args, kwargs = mock_warn.call_args # pylint: disable=unused-variable message = args[0] self.assertEqual("Dhcp client is not running.", message) def test_get_dhcp_client_pid_should_return_and_empty_list_and_log_an_error_when_an_invalid_command_is_used(self): with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", side_effect=lambda _: self.shellutil_run_command(["non-existing-command"])): with patch('azurelinuxagent.common.logger.Logger.error') as mock_error: pids = MonitorDhcpClientRestart(get_osutil())._get_dhcp_client_pid() self.assertEqual(pids, []) self.assertEqual(mock_error.call_count, 1) args, kwargs = mock_error.call_args # pylint: disable=unused-variable self.assertIn("Failed to get the PID of the DHCP client", args[0]) self.assertIn("No such file or directory", args[1]) def test_get_dhcp_client_pid_should_not_log_consecutive_errors(self): monitor_dhcp_client_restart = MonitorDhcpClientRestart(get_osutil()) with patch('azurelinuxagent.common.logger.Logger.warn') as mock_warn: def assert_warnings(count): self.assertEqual(mock_warn.call_count, count) for call_args in mock_warn.call_args_list: args, _ = call_args self.assertEqual("Dhcp client is not running.", args[0]) with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", side_effect=lambda _: self.shellutil_run_command(["pidof", "non-existing-process"])): # it should log the first error pids = monitor_dhcp_client_restart._get_dhcp_client_pid() self.assertEqual(pids, []) assert_warnings(1) # it should not log subsequent errors for _ in range(0, 3): pids = monitor_dhcp_client_restart._get_dhcp_client_pid() self.assertEqual(pids, []) self.assertEqual(mock_warn.call_count, 1) with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", return_value="123"): # now it should succeed pids = monitor_dhcp_client_restart._get_dhcp_client_pid() self.assertEqual(pids, [123]) assert_warnings(1) with patch("azurelinuxagent.common.osutil.default.shellutil.run_command", side_effect=lambda _: self.shellutil_run_command(["pidof", "non-existing-process"])): # it should log the new error pids = monitor_dhcp_client_restart._get_dhcp_client_pid() self.assertEqual(pids, []) assert_warnings(2) # it should not log subsequent errors for _ in range(0, 3): pids = monitor_dhcp_client_restart._get_dhcp_client_pid() self.assertEqual(pids, []) self.assertEqual(mock_warn.call_count, 2) def test_handle_dhclient_restart_should_reconfigure_network_routes_when_dhcp_client_restarts(self): with patch("azurelinuxagent.common.dhcp.DhcpHandler.conf_routes") as 
mock_conf_routes: monitor_dhcp_client_restart = MonitorDhcpClientRestart(get_osutil()) monitor_dhcp_client_restart._period = datetime.timedelta(seconds=0) # Run the operation one time to initialize the DHCP PIDs with patch.object(monitor_dhcp_client_restart, "_get_dhcp_client_pid", return_value=[123]): monitor_dhcp_client_restart.run() # # if the dhcp client has not been restarted then it should not reconfigure the network routes # def mock_check_pid_alive(pid): if pid == 123: return True raise Exception("Unexpected PID: {0}".format(pid)) with patch("azurelinuxagent.common.osutil.default.DefaultOSUtil.check_pid_alive", side_effect=mock_check_pid_alive): with patch.object(monitor_dhcp_client_restart, "_get_dhcp_client_pid", side_effect=Exception("get_dhcp_client_pid should not have been invoked")): monitor_dhcp_client_restart.run() self.assertEqual(mock_conf_routes.call_count, 1) # count did not change # # if the process was restarted then it should reconfigure the network routes # def mock_check_pid_alive(pid): # pylint: disable=function-redefined if pid == 123: return False raise Exception("Unexpected PID: {0}".format(pid)) with patch("azurelinuxagent.common.osutil.default.DefaultOSUtil.check_pid_alive", side_effect=mock_check_pid_alive): with patch.object(monitor_dhcp_client_restart, "_get_dhcp_client_pid", return_value=[456, 789]): monitor_dhcp_client_restart.run() self.assertEqual(mock_conf_routes.call_count, 2) # count increased # # if the new dhcp client has not been restarted then it should not reconfigure the network routes # def mock_check_pid_alive(pid): # pylint: disable=function-redefined if pid in [456, 789]: return True raise Exception("Unexpected PID: {0}".format(pid)) with patch("azurelinuxagent.common.osutil.default.DefaultOSUtil.check_pid_alive", side_effect=mock_check_pid_alive): with patch.object(monitor_dhcp_client_restart, "_get_dhcp_client_pid", side_effect=Exception("get_dhcp_client_pid should not have been invoked")): monitor_dhcp_client_restart.run() self.assertEqual(mock_conf_routes.call_count, 2) # count did not change Azure-WALinuxAgent-2b21de5/tests/ga/test_extension.py000066400000000000000000005535641462617747000227300ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import datetime import glob import json import os.path import random import re import shutil import subprocess import tempfile import time import unittest from azurelinuxagent.common import conf from azurelinuxagent.common.agent_supported_feature import get_agent_supported_features_list_for_crp from azurelinuxagent.ga.cgroupconfigurator import CGroupConfigurator from azurelinuxagent.common.datacontract import get_properties from azurelinuxagent.common.event import WALAEventOperation from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils.fileutil import read_file from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.version import AGENT_VERSION from azurelinuxagent.common.exception import ResourceGoneError, ExtensionDownloadError, ProtocolError, \ ExtensionErrorCodes, ExtensionError, GoalStateAggregateStatusCodes from azurelinuxagent.common.protocol.restapi import ExtensionSettings, Extension, ExtHandlerStatus, \ ExtensionStatus, ExtensionRequestedState from azurelinuxagent.common.protocol import wire from azurelinuxagent.common.protocol.wire import WireProtocol, InVMArtifactsProfile from azurelinuxagent.common.utils.restutil import KNOWN_WIRESERVER_IP from azurelinuxagent.ga.exthandlers import ExtHandlerInstance, migrate_handler_state, \ get_exthandlers_handler, ExtCommandEnvVariable, HandlerManifest, NOT_RUN, \ ExtensionStatusValue, HANDLER_COMPLETE_NAME_PATTERN, HandlerEnvironment, GoalStateStatus from tests.lib import wire_protocol_data from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.wire_protocol_data import DATA_FILE, DATA_FILE_EXT_ADDITIONAL_LOCATIONS from tests.lib.tools import AgentTestCase, data_dir, MagicMock, Mock, patch, mock_sleep from tests.lib.extension_emulator import Actions, ExtensionCommandNames, extension_emulator, \ enable_invocations, generate_put_handler # Mocking the original sleep to reduce test execution time SLEEP = time.sleep SUCCESS_CODE_FROM_STATUS_FILE = 1 def do_not_run_test(): return True def raise_system_exception(): raise Exception def raise_ioerror(*args): # pylint: disable=unused-argument e = IOError() from errno import EIO e.errno = EIO raise e class TestExtensionCleanup(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.01)) self.mock_sleep.start() def tearDown(self): self.mock_sleep.stop() AgentTestCase.tearDown(self) @staticmethod def _count_packages(): return len(glob.glob(os.path.join(conf.get_lib_dir(), "*.zip"))) @staticmethod def _count_extension_directories(): paths = [os.path.join(conf.get_lib_dir(), p) for p in os.listdir(conf.get_lib_dir())] return len([p for p in paths if os.path.isdir(p) and TestExtensionCleanup._is_extension_dir(p)]) @staticmethod def _is_extension_dir(path): return re.match(HANDLER_COMPLETE_NAME_PATTERN, os.path.basename(path)) is not None def _assert_ext_handler_status(self, aggregate_status, expected_status, version, expected_ext_handler_count=0, verify_ext_reported=True): self.assertIsNotNone(aggregate_status, "Aggregate status should not be None") handler_statuses = aggregate_status['aggregateStatus']['handlerAggregateStatus'] self.assertEqual(expected_ext_handler_count, len(handler_statuses), "All ExtensionHandlers: {0}".format(handler_statuses)) for ext_handler_status in handler_statuses: 
debug_info = "ExtensionHandler: {0}".format(ext_handler_status) self.assertEqual(expected_status, ext_handler_status['status'], debug_info) self.assertEqual(version, ext_handler_status['handlerVersion'], debug_info) if verify_ext_reported: self.assertIn("runtimeSettingsStatus", ext_handler_status, debug_info) return @contextlib.contextmanager def _setup_test_env(self, test_data): with mock_wire_protocol(test_data) as protocol: def mock_http_put(url, *args, **_): if HttpRequestPredicates.is_host_plugin_status_request(url): # Skip reading the HostGA request data as its encoded return MockHttpResponse(status=500) protocol.aggregate_status = json.loads(args[0]) return MockHttpResponse(status=201) protocol.aggregate_status = None protocol.set_http_handlers(http_put_handler=mock_http_put) no_of_extensions = protocol.mock_wire_data.get_no_of_plugins_in_extension_config() exthandlers_handler = get_exthandlers_handler(protocol) yield exthandlers_handler, protocol, no_of_extensions def test_cleanup_leaves_installed_extensions(self): with self._setup_test_env(wire_protocol_data.DATA_FILE_MULTIPLE_EXT) as (exthandlers_handler, protocol, no_of_exts): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_exts, TestExtensionCleanup._count_extension_directories(), "No of extension directories doesnt match the no of extensions in GS") self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=no_of_exts, version="1.0.0") def test_cleanup_removes_uninstalled_extensions(self): with self._setup_test_env(wire_protocol_data.DATA_FILE_MULTIPLE_EXT) as (exthandlers_handler, protocol, no_of_exts): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=no_of_exts, version="1.0.0") # Update incarnation and extension config protocol.mock_wire_data.set_incarnation(2) protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(0, TestExtensionCleanup._count_packages(), "All packages must be deleted") self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=0, version="1.0.0") self.assertEqual(0, TestExtensionCleanup._count_extension_directories(), "All extension directories should be removed") def test_cleanup_removes_orphaned_packages(self): data_file = wire_protocol_data.DATA_FILE_NO_EXT.copy() data_file["ext_conf"] = "wire/ext_conf_no_extensions-no_status_blob.xml" no_of_orphaned_packages = 5 with self._setup_test_env(data_file) as (exthandlers_handler, protocol, no_of_exts): self.assertEqual(no_of_exts, 0, "Test setup error - Extensions found in ExtConfig") # Create random extension directories for i in range(no_of_orphaned_packages): eh = Extension(name='Random.Extension.ShouldNot.Be.There') eh.version = FlexibleVersion("9.9.0") + i handler = ExtHandlerInstance(eh, "unused") os.mkdir(handler.get_base_dir()) self.assertEqual(no_of_orphaned_packages, TestExtensionCleanup._count_extension_directories(), "Test Setup error - Not enough extension directories") exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_exts, TestExtensionCleanup._count_extension_directories(), "There should be no extension directories in FS") self.assertIsNone(protocol.aggregate_status, "Since there's no ExtConfig, we 
shouldn't even report status as we pull status blob link from ExtConfig") def test_cleanup_leaves_failed_extensions(self): original_popen = subprocess.Popen def mock_fail_popen(*args, **kwargs): # pylint: disable=unused-argument return original_popen("fail_this_command", **kwargs) with self._setup_test_env(wire_protocol_data.DATA_FILE_EXT_SINGLE) as (exthandlers_handler, protocol, no_of_exts): with patch("azurelinuxagent.ga.cgroupapi.subprocess.Popen", mock_fail_popen): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_ext_handler_status(protocol.aggregate_status, "NotReady", expected_ext_handler_count=no_of_exts, version="1.0.0", verify_ext_reported=False) self.assertEqual(no_of_exts, TestExtensionCleanup._count_extension_directories(), "There should still be 1 extension directory in FS") # Update incarnation and extension config to uninstall the extension, this should delete the extension protocol.mock_wire_data.set_incarnation(2) protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(0, TestExtensionCleanup._count_packages(), "All packages must be deleted") self.assertEqual(0, TestExtensionCleanup._count_extension_directories(), "All extension directories should be removed") self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=0, version="1.0.0") def test_it_should_report_and_cleanup_only_if_gs_supported(self): def assert_gs_aggregate_status(seq_no, status, code): gs_status = protocol.aggregate_status['aggregateStatus']['vmArtifactsAggregateStatus']['goalStateAggregateStatus'] self.assertEqual(gs_status['inSvdSeqNo'], seq_no, "Seq number not matching") self.assertEqual(gs_status['code'], code, "The error code not matching") self.assertEqual(gs_status['status'], status, "The status not matching") def assert_extension_seq_no(expected_seq_no): for handler_status in protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']: self.assertEqual(expected_seq_no, handler_status['runtimeSettingsStatus']['sequenceNumber'], "Sequence number mismatch") with self._setup_test_env(wire_protocol_data.DATA_FILE_MULTIPLE_EXT) as (exthandlers_handler, protocol, orig_no_of_exts): # Run 1 - GS has no required features and contains 5 extensions exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(orig_no_of_exts, TestExtensionCleanup._count_extension_directories(), "No of extension directories doesnt match the no of extensions in GS") self._assert_ext_handler_status(protocol.aggregate_status, "Ready", expected_ext_handler_count=orig_no_of_exts, version="1.0.0") assert_gs_aggregate_status(seq_no='1', status=GoalStateStatus.Success, code=GoalStateAggregateStatusCodes.Success) assert_extension_seq_no(expected_seq_no=0) # Run 2 - Change the GS to one with Required features not supported by the agent # This ExtensionConfig has 1 extension - ExampleHandlerLinuxWithRequiredFeatures protocol.mock_wire_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_REQUIRED_FEATURES) protocol.mock_wire_data.set_incarnation(2) protocol.mock_wire_data.set_extensions_config_sequence_number(random.randint(10, 100)) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertGreater(orig_no_of_exts, 1, "No of extensions to check should be > 1") 
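            # The unsupported goal state must be rejected wholesale: the extension
            # directories on disk and the handler statuses reported below must still
            # reflect the previously executed goal state (incarnation 1).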
            self.assertEqual(orig_no_of_exts, TestExtensionCleanup._count_extension_directories(),
                             "Number of extension directories should not have changed")
            self._assert_ext_handler_status(protocol.aggregate_status, "Ready",
                                            expected_ext_handler_count=orig_no_of_exts, version="1.0.0")
            assert_gs_aggregate_status(seq_no='2', status=GoalStateStatus.Failed,
                                       code=GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures)
            # Since it's an unsupported GS, we should report the last state of extensions
            assert_extension_seq_no(0)
            # assert the extension in the new Config was not reported as that GS was not executed
            self.assertTrue(any('ExampleHandlerLinuxWithRequiredFeatures' not in ext_handler_status['handlerName'] for
                                ext_handler_status in
                                protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']),
                            "Unwanted handler found in status reporting")

            # Run 3 - Run a GS with no Required Features and ensure we execute all extensions properly
            # This ExtensionConfig has 1 extension - OSTCExtensions.ExampleHandlerLinux
            protocol.mock_wire_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
            protocol.mock_wire_data.set_incarnation(3)
            extension_seq_no = random.randint(10, 100)
            protocol.mock_wire_data.set_extensions_config_sequence_number(extension_seq_no)
            protocol.client.update_goal_state()

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, TestExtensionCleanup._count_extension_directories(),
                             "Only the directory of the extension in the new GS should remain")
            self._assert_ext_handler_status(protocol.aggregate_status, "Ready",
                                            expected_ext_handler_count=1, version="1.0.0")
            assert_gs_aggregate_status(seq_no='3', status=GoalStateStatus.Success,
                                       code=GoalStateAggregateStatusCodes.Success)
            assert_extension_seq_no(expected_seq_no=extension_seq_no)
            # Only OSTCExtensions.ExampleHandlerLinux extension should be reported
            self.assertEqual('OSTCExtensions.ExampleHandlerLinux',
                             protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus'][0]['handlerName'],
                             "Expected handler not found in status reporting")


class TestHandlerStateMigration(AgentTestCase):
    def setUp(self):
        AgentTestCase.setUp(self)

        handler_name = "Not.A.Real.Extension"
        handler_version = "1.2.3"

        self.ext_handler = Extension(handler_name)
        self.ext_handler.version = handler_version
        self.ext_handler_i = ExtHandlerInstance(self.ext_handler, "dummy protocol")

        self.handler_state = "Enabled"
        self.handler_status = ExtHandlerStatus(
            name=handler_name,
            version=handler_version,
            status="Ready",
            message="Uninteresting message")
        return

    def _prepare_handler_state(self):
        handler_state_path = os.path.join(
            self.tmp_dir,
            "handler_state",
            self.ext_handler_i.get_full_name())
        os.makedirs(handler_state_path)
        fileutil.write_file(
            os.path.join(handler_state_path, "state"),
            self.handler_state)
        fileutil.write_file(
            os.path.join(handler_state_path, "status"),
            json.dumps(get_properties(self.handler_status)))
        return

    def _prepare_handler_config(self):
        handler_config_path = os.path.join(
            self.tmp_dir,
            self.ext_handler_i.get_full_name(),
            "config")
        os.makedirs(handler_config_path)
        return

    def test_migration_migrates(self):
        self._prepare_handler_state()
        self._prepare_handler_config()

        migrate_handler_state()

        self.assertEqual(self.ext_handler_i.get_handler_state(), self.handler_state)
        self.assertEqual(
            self.ext_handler_i.get_handler_status().status,
            self.handler_status.status)
        return

    def test_migration_skips_if_empty(self):
        self._prepare_handler_config()

        migrate_handler_state()

        self.assertFalse(
            os.path.isfile(os.path.join(self.ext_handler_i.get_conf_dir(), "HandlerState")))
        self.assertFalse(
            os.path.isfile(os.path.join(self.ext_handler_i.get_conf_dir(), "HandlerStatus")))
        return

    def test_migration_cleans_up(self):
        self._prepare_handler_state()
        self._prepare_handler_config()

        migrate_handler_state()

        self.assertFalse(os.path.isdir(os.path.join(conf.get_lib_dir(), "handler_state")))
        return

    def test_migration_does_not_overwrite(self):
        self._prepare_handler_state()
        self._prepare_handler_config()

        state = "Installed"
        status = "NotReady"
        code = 1
        message = "A message"
        self.assertNotEqual(state, self.handler_state)
        self.assertNotEqual(status, self.handler_status.status)
        self.assertNotEqual(code, self.handler_status.code)
        self.assertNotEqual(message, self.handler_status.message)

        self.ext_handler_i.set_handler_state(state)
        self.ext_handler_i.set_handler_status(status=status, code=code, message=message)

        migrate_handler_state()

        self.assertEqual(self.ext_handler_i.get_handler_state(), state)
        handler_status = self.ext_handler_i.get_handler_status()
        self.assertEqual(handler_status.status, status)
        self.assertEqual(handler_status.code, code)
        self.assertEqual(handler_status.message, message)
        return

    def test_set_handler_status_ignores_none_content(self):
        """
        Validate that set_handler_status ignores cases where json.dumps returns a value of None.
        """
        self._prepare_handler_state()
        self._prepare_handler_config()

        status = "Ready"
        code = 0
        message = "A message"

        try:
            with patch('json.dumps', return_value=None):
                self.ext_handler_i.set_handler_status(status=status, code=code, message=message)
        except Exception as e:  # pylint: disable=unused-variable
            self.fail("set_handler_status threw an exception")

    @patch("shutil.move", side_effect=Exception)
    def test_migration_ignores_move_errors(self, shutil_mock):  # pylint: disable=unused-argument
        self._prepare_handler_state()
        self._prepare_handler_config()

        try:
            migrate_handler_state()
        except Exception as e:
            self.assertTrue(False, "Unexpected exception: {0}".format(str(e)))  # pylint: disable=redundant-unittest-assert
        return

    @patch("shutil.rmtree", side_effect=Exception)
    def test_migration_ignores_tree_remove_errors(self, shutil_mock):  # pylint: disable=unused-argument
        self._prepare_handler_state()
        self._prepare_handler_config()

        try:
            migrate_handler_state()
        except Exception as e:
            self.assertTrue(False, "Unexpected exception: {0}".format(str(e)))  # pylint: disable=redundant-unittest-assert
        return


class TestExtensionBase(AgentTestCase):
    def _assert_handler_status(self, report_vm_status, expected_status, expected_ext_count, version,
                               expected_handler_name="OSTCExtensions.ExampleHandlerLinux", expected_msg=None):
        self.assertTrue(report_vm_status.called)
        args, kw = report_vm_status.call_args  # pylint: disable=unused-variable
        vm_status = args[0]
        self.assertNotEqual(0, len(vm_status.vmAgent.extensionHandlers))
        handler_status = next(
            status for status in vm_status.vmAgent.extensionHandlers if status.name == expected_handler_name)
        self.assertEqual(expected_status, handler_status.status, get_properties(handler_status))
        self.assertEqual(expected_handler_name, handler_status.name)
        self.assertEqual(version, handler_status.version)
        self.assertEqual(expected_ext_count, len(
            [ext_handler for ext_handler in vm_status.vmAgent.extensionHandlers if
             ext_handler.name == expected_handler_name and ext_handler.extension_status is not None]))
        if expected_msg is not None:
            self.assertIn(expected_msg, handler_status.message)


# Deprecated.
# New tests should be added to the TestExtension class
@patch('time.sleep', side_effect=lambda _: mock_sleep(0.001))
@patch("azurelinuxagent.common.protocol.wire.CryptUtil")
@patch("azurelinuxagent.common.utils.restutil.http_get")
class TestExtension_Deprecated(TestExtensionBase):
    def setUp(self):
        AgentTestCase.setUp(self)

    def _assert_ext_pkg_file_status(self, expected_to_be_present=True, extension_version="1.0.0",
                                    extension_handler_name="OSTCExtensions.ExampleHandlerLinux"):
        zip_file_format = "{0}__{1}.zip"
        if expected_to_be_present:
            self.assertIn(zip_file_format.format(extension_handler_name, extension_version),
                          os.listdir(conf.get_lib_dir()))
        else:
            self.assertNotIn(zip_file_format.format(extension_handler_name, extension_version),
                             os.listdir(conf.get_lib_dir()))

    def _assert_no_handler_status(self, report_vm_status):
        self.assertTrue(report_vm_status.called)
        args, kw = report_vm_status.call_args  # pylint: disable=unused-variable
        vm_status = args[0]
        self.assertEqual(0, len(vm_status.vmAgent.extensionHandlers))
        return

    @staticmethod
    def _create_mock(test_data, mock_http_get, mock_crypt_util, *_):
        # Mock protocol to return test data
        mock_http_get.side_effect = test_data.mock_http_get
        mock_crypt_util.side_effect = test_data.mock_crypt_util

        protocol = WireProtocol(KNOWN_WIRESERVER_IP)
        protocol.detect()
        protocol.report_vm_status = MagicMock()

        handler = get_exthandlers_handler(protocol)
        return handler, protocol

    def _set_up_update_test_and_update_gs(self, patch_command, *args):
        """
        Sets up the update test: creates the protocol and ext_handler, asserts that the handler
        runs successfully the first time, then moves the goal state to version 1.0.1 and patches
        the given command so that it fails.
        :param patch_command: The command to patch for failure
        :param args: Any additional args passed to the function, needed for creating a mock for handler and protocol
        :return: test_data, exthandlers_handler, protocol
        """
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        # Ensure initial install and enable are successful
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertEqual(0, patch_command.call_count)
        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Next incarnation, update version
        test_data.set_incarnation(2)
        test_data.set_extensions_config_version("1.0.1")
        test_data.set_manifest_version('1.0.1')
        protocol.client.update_goal_state()

        # Ensure the patched command fails
        patch_command.return_value = "exit 1"

        return test_data, exthandlers_handler, protocol

    @staticmethod
    def _create_extension_handlers_handler(protocol):
        handler = get_exthandlers_handler(protocol)
        return handler

    def test_ext_handler(self, *args):
        # Test enable scenario.
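        # This test walks a single handler through its full lifecycle: enable, an
        # unchanged goal state, a new sequence number, a hotfix (1.1.1), an upgrade
        # (1.2.0), disable and finally uninstall (twice), asserting the reported
        # status after each goal state.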
test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Test goal state not changed exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") # Test goal state changed test_data.set_incarnation(2) test_data.set_extensions_config_sequence_number(1) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 1) # Test hotfix test_data.set_incarnation(3) test_data.set_extensions_config_version("1.1.1") test_data.set_extensions_config_sequence_number(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.1") self._assert_ext_status(protocol.report_vm_status, "success", 2) # Test upgrade test_data.set_incarnation(4) test_data.set_extensions_config_version("1.2.0") test_data.set_extensions_config_sequence_number(3) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.2.0") self._assert_ext_status(protocol.report_vm_status, "success", 3) # Test disable test_data.set_incarnation(5) test_data.set_extensions_config_state(ExtensionRequestedState.Disabled) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "NotReady", 1, "1.2.0") # Test uninstall test_data.set_incarnation(6) test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_no_handler_status(protocol.report_vm_status) # Test uninstall again! 
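        # (A repeated Uninstall goal state must remain a no-op: the handler is already
        # gone, so no handler status should be reported for this incarnation either.)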
        test_data.set_incarnation(7)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_no_handler_status(protocol.report_vm_status)

    def test_it_should_only_download_extension_manifest_once_per_goal_state(self, *args):
        def _assert_handler_status_and_manifest_download_count(protocol, test_data, manifest_count):
            self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
            self._assert_ext_status(protocol.report_vm_status, "success", 0)
            self.assertEqual(test_data.call_counts['manifest.xml'], manifest_count,
                             "We should have downloaded the extension manifest {0} times".format(manifest_count))

        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        _assert_handler_status_and_manifest_download_count(protocol, test_data, 1)

        # Update Incarnation
        test_data.set_incarnation(2)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        _assert_handler_status_and_manifest_download_count(protocol, test_data, 2)

    def test_it_should_fail_handler_on_bad_extension_config_and_report_error(self, mock_get, mock_crypt_util, *args):
        invalid_config_dir = os.path.join(data_dir, "wire", "invalid_config")
        self.assertGreater(len(os.listdir(invalid_config_dir)), 0,
                           "Test setup error - no invalid config files found in the test data directory")

        for bad_config_file_path in os.listdir(invalid_config_dir):
            bad_conf = DATA_FILE.copy()
            bad_conf["ext_conf"] = os.path.join(invalid_config_dir, bad_config_file_path)
            test_data = wire_protocol_data.WireProtocolData(bad_conf)

            exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)
            with patch('azurelinuxagent.ga.exthandlers.add_event') as patch_add_event:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0")

                invalid_config_errors = [kw for _, kw in patch_add_event.call_args_list if
                                         kw['op'] == WALAEventOperation.InvalidExtensionConfig]
                self.assertEqual(1, len(invalid_config_errors),
                                 "Error not logged and reported to Kusto for {0}".format(bad_config_file_path))

    def test_it_should_process_valid_extensions_if_present(self, mock_get, mock_crypt_util, *args):
        bad_conf = DATA_FILE.copy()
        bad_conf["ext_conf"] = os.path.join("wire", "ext_conf_invalid_and_valid_handlers.xml")
        test_data = wire_protocol_data.WireProtocolData(bad_conf)

        exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self.assertTrue(protocol.report_vm_status.called)
        args, _ = protocol.report_vm_status.call_args
        vm_status = args[0]
        expected_handlers = ["OSTCExtensions.InvalidExampleHandlerLinux", "OSTCExtensions.ValidExampleHandlerLinux"]
        self.assertEqual(2, len(vm_status.vmAgent.extensionHandlers))
        for handler in vm_status.vmAgent.extensionHandlers:
            expected_status = "NotReady" if "InvalidExampleHandlerLinux" in handler.name else "Ready"
            expected_ext_count = 0 if "InvalidExampleHandlerLinux" in handler.name else 1

            self.assertEqual(expected_status, handler.status, "Invalid status")
            self.assertIn(handler.name, expected_handlers, "Handler not found")
            self.assertEqual("1.0.0", handler.version, "Incorrect handler version")
            self.assertEqual(expected_ext_count, len(
                [ext for ext in vm_status.vmAgent.extensionHandlers if
                 ext.name == handler.name and ext.extension_status is not None]),
                             "Incorrect extensions enabled")
            expected_handlers.remove(handler.name)
        self.assertEqual(0, len(expected_handlers), "Not all handlers reported status")

    def test_it_should_ignore_case_when_parsing_plugin_settings(self, mock_get, mock_crypt_util, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_CASE_MISMATCH_EXT)
        exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        expected_ext_handlers = ["OSTCExtensions.ExampleHandlerLinux",
                                 "Microsoft.Powershell.ExampleExtension",
                                 "Microsoft.EnterpriseCloud.Monitoring.ExampleHandlerLinux",
                                 "Microsoft.CPlat.Core.ExampleExtensionLinux",
                                 "Microsoft.OSTCExtensions.Edp.ExampleExtensionLinuxInTest"]

        self.assertTrue(protocol.report_vm_status.called, "Handler status not reported")
        args, _ = protocol.report_vm_status.call_args
        vm_status = args[0]
        self.assertEqual(len(expected_ext_handlers), len(vm_status.vmAgent.extensionHandlers),
                         "Number of extension handlers doesn't match")
        for handler_status in vm_status.vmAgent.extensionHandlers:
            self.assertEqual("Ready", handler_status.status, "Handler is not Ready")
            self.assertIn(handler_status.name, expected_ext_handlers, "Handler not reported")
            self.assertEqual("1.0.0", handler_status.version, "Handler version does not match")
            self.assertEqual(1, len(
                [status for status in vm_status.vmAgent.extensionHandlers if status.name == handler_status.name]),
                             "No settings were found for this extension")
            expected_ext_handlers.remove(handler_status.name)
        self.assertEqual(0, len(expected_ext_handlers), "Not all handlers were reported")

    def test_ext_handler_no_settings(self, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_NO_SETTINGS)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        test_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux")
        with enable_invocations(test_ext) as invocation_record:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()
            self._assert_handler_status(protocol.report_vm_status, "Ready", 0, "1.0.0")
            invocation_record.compare(
                (test_ext, ExtensionCommandNames.INSTALL),
                (test_ext, ExtensionCommandNames.ENABLE)
            )

        # Uninstall the Plugin and make sure Disable is called
        test_data.set_incarnation(2)
        test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
        protocol.client.update_goal_state()

        with enable_invocations(test_ext) as invocation_record:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()
            self.assertTrue(protocol.report_vm_status.called)
            args, _ = protocol.report_vm_status.call_args
            self.assertEqual(0, len(args[0].vmAgent.extensionHandlers))
            invocation_record.compare(
                (test_ext, ExtensionCommandNames.DISABLE),
                (test_ext, ExtensionCommandNames.UNINSTALL)
            )

    def test_ext_handler_no_public_settings(self, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_NO_PUBLIC)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")

    def test_ext_handler_no_ext(self, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_NO_EXT)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        # Assert no extension handler status
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_no_handler_status(protocol.report_vm_status)

    def test_ext_handler_sequencing(self, *args):
        # Test enable scenario.
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        dep_ext_level_2 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux")
        dep_ext_level_1 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux")

        with enable_invocations(dep_ext_level_2, dep_ext_level_1) as invocation_record:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                        expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")
            self._assert_ext_status(protocol.report_vm_status, "success", 0,
                                    expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")

            # check handler list and dependency levels
            self.assertTrue(exthandlers_handler.ext_handlers is not None)
            self.assertEqual(len(exthandlers_handler.ext_handlers), 2)
            self.assertEqual(1, next(handler for handler in exthandlers_handler.ext_handlers if
                                     handler.name == dep_ext_level_1.name).settings[0].dependencyLevel)
            self.assertEqual(2, next(handler for handler in exthandlers_handler.ext_handlers if
                                     handler.name == dep_ext_level_2.name).settings[0].dependencyLevel)

            # Ensure the invocation order follows the dependency levels
            invocation_record.compare(
                (dep_ext_level_1, ExtensionCommandNames.INSTALL),
                (dep_ext_level_1, ExtensionCommandNames.ENABLE),
                (dep_ext_level_2, ExtensionCommandNames.INSTALL),
                (dep_ext_level_2, ExtensionCommandNames.ENABLE)
            )

        # Test goal state not changed
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                    expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")

        # Test goal state changed
        test_data.set_incarnation(2)
        test_data.set_extensions_config_sequence_number(1)
        # Swap the dependency ordering
        dep_ext_level_3 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux")
        dep_ext_level_4 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux")
        test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"2\"", "dependencyLevel=\"3\"")
        test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"1\"", "dependencyLevel=\"4\"")
        protocol.client.update_goal_state()

        with enable_invocations(dep_ext_level_3, dep_ext_level_4) as invocation_record:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
            self._assert_ext_status(protocol.report_vm_status, "success", 1)

            self.assertEqual(len(exthandlers_handler.ext_handlers), 2)
            self.assertEqual(3, next(handler for handler in exthandlers_handler.ext_handlers if
                                     handler.name == dep_ext_level_3.name).settings[0].dependencyLevel)
            self.assertEqual(4, next(handler for handler in exthandlers_handler.ext_handlers if
                                     handler.name == dep_ext_level_4.name).settings[0].dependencyLevel)

            # Ensure the invocation order follows the dependency levels
            invocation_record.compare(
                (dep_ext_level_3, ExtensionCommandNames.ENABLE),
                (dep_ext_level_4,
ExtensionCommandNames.ENABLE) ) # Test disable # In the case of disable, the last extension to be enabled should be # the first extension disabled. The first extension enabled should be # the last one disabled. test_data.set_incarnation(3) test_data.set_extensions_config_state(ExtensionRequestedState.Disabled) protocol.client.update_goal_state() with enable_invocations(dep_ext_level_3, dep_ext_level_4) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "NotReady", 1, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") self.assertEqual(3, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_3.name).settings[0].dependencyLevel) self.assertEqual(4, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_4.name).settings[0].dependencyLevel) # Ensure the invocation order follows the dependency levels invocation_record.compare( (dep_ext_level_4, ExtensionCommandNames.DISABLE), (dep_ext_level_3, ExtensionCommandNames.DISABLE) ) # Test uninstall # In the case of uninstall, the last extension to be installed should be # the first extension uninstalled. The first extension installed # should be the last one uninstalled. test_data.set_incarnation(4) test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) # Swap the dependency ordering AGAIN dep_ext_level_5 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux") dep_ext_level_6 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux") test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"3\"", "dependencyLevel=\"6\"") test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"4\"", "dependencyLevel=\"5\"") protocol.client.update_goal_state() with enable_invocations(dep_ext_level_5, dep_ext_level_6) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_no_handler_status(protocol.report_vm_status) self.assertEqual(len(exthandlers_handler.ext_handlers), 2) self.assertEqual(5, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_5.name).settings[0].dependencyLevel) self.assertEqual(6, next(handler for handler in exthandlers_handler.ext_handlers if handler.name == dep_ext_level_6.name).settings[0].dependencyLevel) # Ensure the invocation order follows the dependency levels invocation_record.compare( (dep_ext_level_6, ExtensionCommandNames.UNINSTALL), (dep_ext_level_5, ExtensionCommandNames.UNINSTALL) ) def test_it_should_process_sequencing_properly_even_if_no_settings_for_dependent_extension( self, mock_get, mock_crypt, *args): test_data_file = DATA_FILE.copy() test_data_file["ext_conf"] = "wire/ext_conf_dependencies_with_empty_settings.xml" test_data = wire_protocol_data.WireProtocolData(test_data_file) exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt, *args) ext_1 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux") ext_2 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux") with enable_invocations(ext_1, ext_2) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # Ensure no extension status was reported for OtherExampleHandlerLinux as no settings provided for it self._assert_handler_status(protocol.report_vm_status, "Ready", 0, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") # 
        # Ensure correct status reported back for the other extension with settings
        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                    expected_handler_name="OSTCExtensions.ExampleHandlerLinux")
        self._assert_ext_status(protocol.report_vm_status, "success", 0,
                                expected_handler_name="OSTCExtensions.ExampleHandlerLinux")

        # Ensure the invocation order follows the dependency levels
        invocation_record.compare(
            (ext_2, ExtensionCommandNames.INSTALL),
            (ext_2, ExtensionCommandNames.ENABLE),
            (ext_1, ExtensionCommandNames.INSTALL),
            (ext_1, ExtensionCommandNames.ENABLE)
        )

    def test_ext_handler_sequencing_should_fail_if_handler_failed(self, mock_get, mock_crypt, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING)
        exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt, *args)
        original_popen = subprocess.Popen

        def _assert_event_reported_only_on_incarnation_change(expected_count=1):
            handler_seq_reporting = [kwargs for _, kwargs in patch_add_event.call_args_list if
                                     kwargs['op'] == WALAEventOperation.ExtensionProcessing and
                                     "Skipping processing of extensions since execution of dependent extension" in
                                     kwargs['message']]
            self.assertEqual(len(handler_seq_reporting), expected_count,
                             "Error should be reported only on incarnation change")

        def mock_fail_extension_commands(args, **kwargs):
            if 'sample.py' in args:
                return original_popen("fail_this_command", **kwargs)
            return original_popen(args, **kwargs)

        with patch("subprocess.Popen", mock_fail_extension_commands):
            with patch('azurelinuxagent.ga.exthandlers.add_event') as patch_add_event:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0",
                                            expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")
                _assert_event_reported_only_on_incarnation_change(expected_count=1)

                test_data.set_incarnation(2)
                protocol.client.update_goal_state()
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                # We should report the error again on incarnation change
                _assert_event_reported_only_on_incarnation_change(expected_count=2)

        # Test it recovers on a new goal state if the Handler succeeds
        test_data.set_incarnation(3)
        test_data.set_extensions_config_sequence_number(1)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0",
                                    expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")
        self._assert_ext_status(protocol.report_vm_status, "success", 1,
                                expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux")

        # Update incarnation to confirm extension invocation order
        test_data.set_incarnation(4)
        protocol.client.update_goal_state()

        dep_ext_level_2 = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux")
        dep_ext_level_1 = extension_emulator(name="OSTCExtensions.OtherExampleHandlerLinux")

        with enable_invocations(dep_ext_level_2, dep_ext_level_1) as invocation_record:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # check handler list and dependency levels
            self.assertTrue(exthandlers_handler.ext_handlers is not None)
            self.assertEqual(len(exthandlers_handler.ext_handlers), 2)
            self.assertEqual(1, next(handler for handler in exthandlers_handler.ext_handlers if
                                     handler.name == dep_ext_level_1.name).settings[0].dependencyLevel)
            self.assertEqual(2, next(handler for handler in exthandlers_handler.ext_handlers if
                                     handler.name == dep_ext_level_2.name).settings[0].dependencyLevel)

            # Ensure the invocation order follows the dependency levels
            invocation_record.compare(
                (dep_ext_level_1, ExtensionCommandNames.ENABLE),
                (dep_ext_level_2, ExtensionCommandNames.ENABLE)
            )

    def test_ext_handler_sequencing_default_dependency_level(self, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=unused-variable,no-value-for-parameter
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self.assertEqual(exthandlers_handler.ext_handlers[0].settings[0].dependencyLevel, 0)

    def test_ext_handler_sequencing_invalid_dependency_level(self, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING)
        test_data.set_incarnation(2)
        test_data.set_extensions_config_sequence_number(1)
        test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"1\"", "dependencyLevel=\"a6\"")
        test_data.ext_conf = test_data.ext_conf.replace("dependencyLevel=\"2\"", "dependencyLevel=\"5b\"")
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=unused-variable,no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        # Invalid dependency levels should fall back to the default level (0) for both handlers
        self.assertEqual(exthandlers_handler.ext_handlers[0].settings[0].dependencyLevel, 0)
        self.assertEqual(exthandlers_handler.ext_handlers[1].settings[0].dependencyLevel, 0)

    def test_ext_handler_rollingupgrade(self, *args):
        # Test enable scenario.
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_ROLLINGUPGRADE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Test goal state changed
        test_data.set_incarnation(2)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Test minor version bump
        test_data.set_incarnation(3)
        test_data.set_extensions_config_version("1.1.0")
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Test hotfix version bump
        test_data.set_incarnation(4)
        test_data.set_extensions_config_version("1.1.1")
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.1")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Test disable
        test_data.set_incarnation(5)
        test_data.set_extensions_config_state(ExtensionRequestedState.Disabled)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "NotReady", 1, "1.1.1")

        # Test uninstall
        test_data.set_incarnation(6)
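        # The remainder of the rolling-upgrade scenario: uninstall, uninstall again,
        # re-install at the last version, a further version bump and finally a
        # rollback, each of which must be reflected in the reported handler version.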
        test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_no_handler_status(protocol.report_vm_status)

        # Test uninstall again!
        test_data.set_incarnation(7)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_no_handler_status(protocol.report_vm_status)

        # Test re-install
        test_data.set_incarnation(8)
        test_data.set_extensions_config_state(ExtensionRequestedState.Enabled)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.1")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Test version bump post-re-install
        test_data.set_incarnation(9)
        test_data.set_extensions_config_version("1.2.0")
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.2.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Test rollback
        test_data.set_incarnation(10)
        test_data.set_extensions_config_version("1.1.0")
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.1.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

    def test_it_should_create_extension_events_dir_and_set_handler_environment_only_if_extension_telemetry_enabled(self, *args):
        for enable_extensions in [False, True]:
            tmp_lib_dir = tempfile.mkdtemp(prefix="ExtensionEnabled{0}".format(enable_extensions))
            with patch("azurelinuxagent.common.conf.get_lib_dir", return_value=tmp_lib_dir):
                with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", enable_extensions):
                    # Create a new object for each run to force re-installation of extensions as we
                    # only create the handler_environment on installation
                    test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_MULTIPLE_EXT)
                    exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

                    exthandlers_handler.run()
                    exthandlers_handler.report_ext_handlers_status()

                    self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
                    self._assert_ext_status(protocol.report_vm_status, "success", 0)

                    for ext_handler in exthandlers_handler.ext_handlers:
                        ehi = ExtHandlerInstance(ext_handler, protocol)
                        self.assertEqual(enable_extensions, os.path.exists(ehi.get_extension_events_dir()),
                                         "Events directory incorrectly set")

                        handler_env_json = ehi.get_env_file()
                        with open(handler_env_json, 'r') as env_json:
                            env_data = json.load(env_json)
                            self.assertEqual(enable_extensions,
                                             HandlerEnvironment.eventsFolder in env_data[0][HandlerEnvironment.handlerEnvironment],
                                             "eventsFolder wrongly set in the HandlerEnvironment.json file")
                            if enable_extensions:
                                self.assertEqual(ehi.get_extension_events_dir(),
                                                 env_data[0][HandlerEnvironment.handlerEnvironment][HandlerEnvironment.eventsFolder],
                                                 "Events directories don't match")

            # Clean the File System for the next test run
            if os.path.exists(tmp_lib_dir):
                shutil.rmtree(tmp_lib_dir, ignore_errors=True)

    def test_it_should_not_delete_extension_events_directory_on_extension_uninstall(self, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
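        # The events directory holds extension telemetry (ETP) events that the agent
        # may not have drained yet, which is presumably why it must survive an
        # Uninstall goal state; that is what this test verifies.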
exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) ehi = ExtHandlerInstance(exthandlers_handler.ext_handlers[0], protocol) self.assertTrue(os.path.exists(ehi.get_extension_events_dir()), "Events directory should exist") # Uninstall extensions now test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) test_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertTrue(os.path.exists(ehi.get_extension_events_dir()), "Events directory should still exist") def test_it_should_uninstall_unregistered_extensions_properly(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") # Update version and set it to uninstall. That is how it would be propagated by CRP if a version 1.0.0 is # unregistered in PIR and a new version 1.0.1 is published. test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) test_data.set_extensions_config_version("1.0.1") # Since the installed version is not in PIR anymore, we need to also remove it from manifest file test_data.manifest = test_data.manifest.replace("1.0.0", "9.9.9") test_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() args, _ = protocol.report_vm_status.call_args vm_status = args[0] self.assertEqual(0, len(vm_status.vmAgent.extensionHandlers), "The extension should not be reported as it is uninstalled") @patch('azurelinuxagent.common.errorstate.ErrorState.is_triggered') @patch('azurelinuxagent.ga.exthandlers.add_event') def test_ext_handler_report_status_permanent(self, mock_add_event, mock_error_state, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter protocol.report_vm_status = Mock(side_effect=ProtocolError) mock_error_state.return_value = True exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() args, kw = mock_add_event.call_args self.assertEqual(False, kw['is_success']) self.assertTrue("Failed to report vm agent status" in kw['message']) self.assertEqual("ReportStatusExtended", kw['op']) @patch('azurelinuxagent.ga.exthandlers.add_event') def test_ext_handler_report_status_resource_gone(self, mock_add_event, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter protocol.report_vm_status = Mock(side_effect=ResourceGoneError) exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() args, kw = mock_add_event.call_args self.assertEqual(False, kw['is_success']) self.assertTrue("ResourceGoneError" in kw['message']) self.assertEqual("ReportStatus", kw['op']) 
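    # test_ext_handler_report_status_permanent above and the download-failure test
    # below share the same pattern: force a protocol failure, then drive
    # ErrorState.is_triggered() to decide whether the failure is escalated as
    # permanent. A minimal sketch of that pattern, using the same patch targets as
    # these tests:
    #
    #   with patch('azurelinuxagent.common.errorstate.ErrorState.is_triggered', return_value=True):
    #       exthandlers_handler.run()
    #       exthandlers_handler.report_ext_handlers_status()
    #       # expected: add_event(..., is_success=False, op='ReportStatusExtended')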
@patch('azurelinuxagent.common.errorstate.ErrorState.is_triggered') @patch('azurelinuxagent.ga.exthandlers.add_event') def test_ext_handler_download_failure_permanent_ProtocolError(self, mock_add_event, mock_error_state, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter protocol.get_goal_state().fetch_extension_manifest = Mock(side_effect=ProtocolError) mock_error_state.return_value = True exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() event_occurrences = [kw for _, kw in mock_add_event.call_args_list if "[ExtensionError] Failed to get ext handler pkgs" in kw['message']] self.assertEqual(1, len(event_occurrences)) self.assertFalse(event_occurrences[0]['is_success']) self.assertTrue("Failed to get ext handler pkgs" in event_occurrences[0]['message']) self.assertTrue("ProtocolError" in event_occurrences[0]['message']) @patch('azurelinuxagent.ga.exthandlers.fileutil') def test_ext_handler_io_error(self, mock_fileutil, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=unused-variable,no-value-for-parameter mock_fileutil.write_file.return_value = IOError("Mock IO Error") exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() def _assert_ext_status(self, vm_agent_status, expected_status, expected_seq_no, expected_handler_name="OSTCExtensions.ExampleHandlerLinux", expected_msg=None): self.assertTrue(vm_agent_status.called) args, _ = vm_agent_status.call_args vm_status = args[0] ext_status = next(handler_status.extension_status for handler_status in vm_status.vmAgent.extensionHandlers if handler_status.name == expected_handler_name) self.assertEqual(expected_status, ext_status.status) self.assertEqual(expected_seq_no, ext_status.sequenceNumber) if expected_msg is not None: self.assertIn(expected_msg, ext_status.message) def test_it_should_initialise_and_use_command_execution_log_for_extensions(self, mock_get, mock_crypt_util, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args) exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") command_execution_log = os.path.join(conf.get_ext_log_dir(), "OSTCExtensions.ExampleHandlerLinux", "CommandExecution.log") self.assertTrue(os.path.exists(command_execution_log), "CommandExecution.log file not found") self.assertGreater(os.path.getsize(command_execution_log), 0, "The file should not be empty") def test_ext_handler_no_reporting_status(self, *args): test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") # Remove status file and re-run collecting extension status status_file = os.path.join(self.tmp_dir, "OSTCExtensions.ExampleHandlerLinux-1.0.0", "status", "0.status") self.assertTrue(os.path.isfile(status_file)) os.remove(status_file) exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, 
"1.0.0") self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.transitioning, 0, expected_msg="This status is being reported by the Guest Agent since no status " "file was reported by extension OSTCExtensions.ExampleHandlerLinux") def test_wait_for_handler_completion_no_status(self, mock_http_get, mock_crypt_util, *args): """ Testing depends-on scenario when there is no status file reported by the extension. Expected to retry and eventually report failure for all dependent extensions. """ exthandlers_handler, protocol = self._create_mock( wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING), mock_http_get, mock_crypt_util, *args) original_popen = subprocess.Popen def mock_popen(cmd, *args, **kwargs): # For the purpose of this test, deleting the placeholder status file created by the agent if "sample.py" in cmd: status_path = os.path.join(kwargs['env'][ExtCommandEnvVariable.ExtensionPath], "status", "{0}.status".format(kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber])) mock_popen.deleted_status_file = status_path if os.path.exists(status_path): os.remove(status_path) return original_popen(["echo", "Yes"], *args, **kwargs) with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen): with patch('azurelinuxagent.ga.exthandlers._DEFAULT_EXT_TIMEOUT_MINUTES', 0.01): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # The Handler Status for the base extension should be ready as it was executed successfully by the agent self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") # The extension status reported by the Handler should be transitioning since no status file was found self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.transitioning, 0, expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux", expected_msg="This status is being reported by the Guest Agent since no status " "file was reported by extension OSTCExtensions.OtherExampleHandlerLinux") # The Handler Status for the dependent extension should be NotReady as it was not executed at all # And since it was not executed, it should not report any extension status either self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0", expected_msg="Dependent Extension OSTCExtensions.OtherExampleHandlerLinux did not reach a terminal state within the allowed timeout. 
Last status was {0}".format( ExtensionStatusValue.warning)) def test_it_should_not_create_placeholder_for_single_config_extensions(self, mock_http_get, mock_crypt_util, *args): original_popen = subprocess.Popen def mock_popen(cmd, *_, **kwargs): if 'env' in kwargs: if ExtensionCommandNames.ENABLE not in cmd: # To force the test extension to not create a status file on Install, changing command return original_popen(["echo", "not-enable"], *_, **kwargs) seq_no = kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber] ext_path = kwargs['env'][ExtCommandEnvVariable.ExtensionPath] status_file_name = "{0}.status".format(seq_no) status_file = os.path.join(ext_path, "status", status_file_name) self.assertFalse(os.path.exists(status_file), "Placeholder file should not be created for single config extensions") return original_popen(cmd, *_, **kwargs) aks_test_mock = DATA_FILE.copy() aks_test_mock["ext_conf"] = "wire/ext_conf_aks_extension.xml" exthandlers_handler, protocol = self._create_mock(wire_protocol_data.WireProtocolData(aks_test_mock), mock_http_get, mock_crypt_util, *args) with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="OSTCExtensions.ExampleHandlerLinux") self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="Microsoft.AKS.Compute.AKS.Linux.AKSNode") self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="Microsoft.AKS.Compute.AKS-Engine.Linux.Billing") # Extension without settings self._assert_handler_status(protocol.report_vm_status, "Ready", 0, "1.0.0", expected_handler_name="Microsoft.AKS.Compute.AKS.Linux.Billing") self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0, expected_handler_name="OSTCExtensions.ExampleHandlerLinux", expected_msg="Enabling non-AKS") self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0, expected_handler_name="Microsoft.AKS.Compute.AKS.Linux.AKSNode", expected_msg="Enabling AKSNode") self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0, expected_handler_name="Microsoft.AKS.Compute.AKS-Engine.Linux.Billing", expected_msg="Enabling AKSBilling") def test_it_should_include_part_of_status_in_ext_handler_message(self, mock_http_get, mock_crypt_util, *args): """ Testing scenario when the status file is invalid, The extension status reported by the Handler should contain a fragment of status file for debugging. 
""" exthandlers_handler, protocol = self._create_mock( wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE), mock_http_get, mock_crypt_util, *args) original_popen = subprocess.Popen def mock_popen(cmd, *args, **kwargs): # For the purpose of this test, replacing the status file with file that could not be parsed if "sample.py" in cmd: status_path = os.path.join(kwargs['env'][ExtCommandEnvVariable.ExtensionPath], "status", "{0}.status".format(kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber])) invalid_json_path = os.path.join(data_dir, "ext", "sample-status-invalid-json-format.json") if 'enable' in cmd: invalid_json = fileutil.read_file(invalid_json_path) fileutil.write_file(status_path,invalid_json) return original_popen(["echo", "Yes"], *args, **kwargs) with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # The Handler Status for the base extension should be ready as it was executed successfully by the agent self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="OSTCExtensions.ExampleHandlerLinux") # The extension status reported by the Handler should contain a fragment of status file for # debugging. The uniqueMachineId tag comes from status file self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.error, 0, expected_handler_name="OSTCExtensions.ExampleHandlerLinux", expected_msg="\"uniqueMachineId\": \"e5e5602b-48a6-4c35-9f96-752043777af1\"") def test_wait_for_handler_completion_success_status(self, mock_http_get, mock_crypt_util, *args): """ Testing depends-on scenario on a successful case. Expected to report the status for both extensions properly. """ exthandlers_handler, protocol = self._create_mock( wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING), mock_http_get, mock_crypt_util, *args) exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux", expected_msg='Plugin enabled') # The extension status reported by the Handler should be an error since no status file was found self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0, expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") # The Handler Status for the dependent extension should be NotReady as it was not executed at all # And since it was not executed, it should not report any extension status either self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0", expected_msg='Plugin enabled') self._assert_ext_status(protocol.report_vm_status, ExtensionStatusValue.success, 0) def test_wait_for_handler_completion_error_status(self, mock_http_get, mock_crypt_util, *args): """ Testing wait_for_handler_completion() when there is error status. Expected to return False. 
""" exthandlers_handler, protocol = self._create_mock( wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SEQUENCING), mock_http_get, mock_crypt_util, *args) original_popen = subprocess.Popen def mock_popen(cmd, *args, **kwargs): # For the purpose of this test, deleting the placeholder status file created by the agent if "sample.py" in cmd: return original_popen(["/fail/this/command"], *args, **kwargs) return original_popen(cmd, *args, **kwargs) with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # The Handler Status for the base extension should be NotReady as it failed self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0", expected_handler_name="OSTCExtensions.OtherExampleHandlerLinux") # The Handler Status for the dependent extension should be NotReady as it was not executed at all # And since it was not executed, it should not report any extension status either self._assert_handler_status(protocol.report_vm_status, "NotReady", 0, "1.0.0", expected_msg='Skipping processing of extensions since execution of dependent extension OSTCExtensions.OtherExampleHandlerLinux failed') def test_get_ext_handling_status(self, *args): """ Testing get_ext_handling_status() function with various cases and verifying against the expected values """ test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=unused-variable,no-value-for-parameter handler_name = "Handler" exthandler = Extension(name=handler_name) extension = ExtensionSettings(name=handler_name) exthandler.settings.append(extension) # In the following list of test cases, the first element corresponds to seq_no. # the second element is the status file name, the third element indicates if the status file exits or not. # The fourth element is the expected value from get_ext_handling_status() test_cases = [ [-5, None, False, None], [-1, None, False, None], [0, None, False, None], [0, "filename", False, "warning"], [0, "filename", True, ExtensionStatus(status="success")], [5, "filename", False, "warning"], [5, "filename", True, ExtensionStatus(status="success")] ] orig_state = os.path.exists for case in test_cases: ext_handler_i = ExtHandlerInstance(exthandler, protocol) ext_handler_i.get_status_file_path = MagicMock(return_value=(case[0], case[1])) os.path.exists = MagicMock(return_value=case[2]) if case[2]: # when the status file exists, it is expected return the value from collect_ext_status() ext_handler_i.collect_ext_status = MagicMock(return_value=case[3]) status = ext_handler_i.get_ext_handling_status(extension) if case[2]: self.assertEqual(status, case[3].status) else: self.assertEqual(status, case[3]) os.path.exists = orig_state def test_is_ext_handling_complete(self, *args): """ Testing is_ext_handling_complete() with various input and verifying against the expected output values. 
""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=unused-variable,no-value-for-parameter handler_name = "Handler" exthandler = Extension(name=handler_name) extension = ExtensionSettings(name=handler_name) exthandler.settings.append(extension) ext_handler_i = ExtHandlerInstance(exthandler, protocol) # Testing no status case ext_handler_i.get_ext_handling_status = MagicMock(return_value=None) completed, status = ext_handler_i.is_ext_handling_complete(extension) self.assertTrue(completed) self.assertEqual(status, None) # Here the key represents the possible input value to is_ext_handling_complete() # the value represents the output tuple from is_ext_handling_complete() expected_results = { "error": (True, "error"), "success": (True, "success"), "warning": (False, "warning"), "transitioning": (False, "transitioning") } for key in expected_results.keys(): ext_handler_i.get_ext_handling_status = MagicMock(return_value=key) completed, status = ext_handler_i.is_ext_handling_complete(extension) self.assertEqual(completed, expected_results[key][0]) self.assertEqual(status, expected_results[key][1]) def test_ext_handler_version_decide_autoupgrade_internalversion(self, *args): for internal in [False, True]: for autoupgrade in [False, True]: if internal: config_version = '1.3.0' decision_version = '1.3.0' if autoupgrade: datafile = wire_protocol_data.DATA_FILE_EXT_AUTOUPGRADE_INTERNALVERSION else: datafile = wire_protocol_data.DATA_FILE_EXT_INTERNALVERSION else: config_version = '1.0.0' decision_version = '1.0.0' if autoupgrade: datafile = wire_protocol_data.DATA_FILE_EXT_AUTOUPGRADE else: datafile = wire_protocol_data.DATA_FILE _, protocol = self._create_mock(wire_protocol_data.WireProtocolData(datafile), *args) # pylint: disable=no-value-for-parameter ext_handlers = protocol.get_goal_state().extensions_goal_state.extensions self.assertEqual(1, len(ext_handlers)) ext_handler = ext_handlers[0] self.assertEqual('OSTCExtensions.ExampleHandlerLinux', ext_handler.name) self.assertEqual(config_version, ext_handler.version, "config version.") ExtHandlerInstance(ext_handler, protocol).decide_version() self.assertEqual(decision_version, ext_handler.version, "decision version.") def test_ext_handler_version_decide_between_minor_versions(self, *args): """ Using v2.x~v4.x for unit testing Available versions via manifest XML (I stands for internal): 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.3.0(I), 2.4.0(I), 3.0, 3.1, 4.0.0.0, 4.0.0.1, 4.1.0.0 See tests/data/wire/manifest.xml for possible versions """ # (installed_version, config_version, exptected_version, autoupgrade_expected_version) cases = [ (None, '2.0', '2.0.0'), (None, '2.0.0', '2.0.0'), ('1.0', '1.0.0', '1.0.0'), (None, '2.1.0', '2.1.0'), (None, '2.1.1', '2.1.1'), (None, '2.2.0', '2.2.0'), (None, '2.3.0', '2.3.0'), (None, '2.4.0', '2.4.0'), (None, '3.0', '3.0'), (None, '3.1', '3.1'), (None, '4.0', '4.0.0.1'), (None, '4.1', '4.1.0.0'), ] _, protocol = self._create_mock(wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE), *args) # pylint: disable=no-value-for-parameter version_uri = 'http://mock-goal-state/Microsoft.OSTCExtensions_ExampleHandlerLinux_asiaeast_manifest.xml' for (installed_version, config_version, expected_version) in cases: ext_handler = Mock() ext_handler.properties = Mock() ext_handler.name = 'OSTCExtensions.ExampleHandlerLinux' ext_handler.manifest_uris = [version_uri] ext_handler.version = config_version 
            ext_handler_instance = ExtHandlerInstance(ext_handler, protocol)
            ext_handler_instance.get_installed_version = Mock(return_value=installed_version)

            ext_handler_instance.decide_version()
            self.assertEqual(expected_version, ext_handler.version)

    @patch('azurelinuxagent.common.conf.get_extensions_enabled', return_value=False)
    def test_extensions_disabled(self, _, *args):
        # test status is reported for no extensions
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_NO_EXT)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_no_handler_status(protocol.report_vm_status)

        # test status is reported, but extensions are not processed
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        report_vm_status = protocol.report_vm_status
        self.assertTrue(report_vm_status.called)
        args, kw = report_vm_status.call_args  # pylint: disable=unused-variable
        vm_status = args[0]
        self.assertEqual(1, len(vm_status.vmAgent.extensionHandlers))
        exthandler = vm_status.vmAgent.extensionHandlers[0]
        self.assertEqual(-1, exthandler.code)
        self.assertEqual('NotReady', exthandler.status)
        self.assertEqual("Extension will not be processed since extension processing is disabled. To enable extension processing, set Extensions.Enabled=y in '/etc/waagent.conf'", exthandler.message)
        ext_status = exthandler.extension_status
        self.assertEqual(-1, ext_status.code)
        self.assertEqual('error', ext_status.status)
        self.assertEqual("Extension will not be processed since extension processing is disabled. To enable extension processing, set Extensions.Enabled=y in '/etc/waagent.conf'", ext_status.message)

    def test_extensions_deleted(self, *args):
        # Ensure initial enable is successful
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_DELETION)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

        # Update incarnation, simulate new extension version and old one deleted
        test_data.set_incarnation(2)
        test_data.set_extensions_config_version("1.0.1")
        test_data.set_manifest_version('1.0.1')
        protocol.client.update_goal_state()

        # Ensure new extension can be enabled
        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.1")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

    @patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance.install', side_effect=ExtHandlerInstance.install, autospec=True)
    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_install_command')
    def test_install_failure(self, patch_get_install_command, patch_install, *args):
        """
        When extension install fails, the operation should not be retried.
        """
""" test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter # Ensure initial install is unsuccessful patch_get_install_command.return_value = "exit.sh 1" exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, patch_install.call_count) self.assertEqual(1, protocol.report_vm_status.call_count) self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.0") @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_install_command') def test_install_failure_check_exception_handling(self, patch_get_install_command, *args): """ When extension install fails, the operation should be reported to our telemetry service. """ test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter # Ensure install is unsuccessful patch_get_install_command.return_value = "exit.sh 1" exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, protocol.report_vm_status.call_count) self._assert_handler_status(protocol.report_vm_status, expected_status="NotReady", expected_ext_count=0, version="1.0.0") @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command') def test_enable_failure_check_exception_handling(self, patch_get_enable_command, *args): """ When extension enable fails, the operation should be reported. """ test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter # Ensure initial install is successful, but enable fails patch_get_enable_command.call_count = 0 patch_get_enable_command.return_value = "exit.sh 1" exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, patch_get_enable_command.call_count) self.assertEqual(1, protocol.report_vm_status.call_count) self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.0") @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command') def test_disable_failure_with_exception_handling(self, patch_get_disable_command, *args): """ When extension disable fails, the operation should be reported. 
""" # Ensure initial install and enable is successful, but disable fails test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter patch_get_disable_command.call_count = 0 patch_get_disable_command.return_value = "exit 1" exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(0, patch_get_disable_command.call_count) self.assertEqual(1, protocol.report_vm_status.call_count) self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Next incarnation, disable extension test_data.set_incarnation(2) test_data.set_extensions_config_state(ExtensionRequestedState.Disabled) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, patch_get_disable_command.call_count) self.assertEqual(2, protocol.report_vm_status.call_count) self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.0") @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_uninstall_command') def test_uninstall_failure(self, patch_get_uninstall_command, *args): """ When extension uninstall fails, the operation should not be retried. """ # Ensure initial install and enable is successful, but uninstall fails test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=no-value-for-parameter patch_get_uninstall_command.call_count = 0 patch_get_uninstall_command.return_value = "exit 1" exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(0, patch_get_uninstall_command.call_count) self.assertEqual(1, protocol.report_vm_status.call_count) self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0") self._assert_ext_status(protocol.report_vm_status, "success", 0) # Next incarnation, disable extension test_data.set_incarnation(2) test_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, patch_get_uninstall_command.call_count) self.assertEqual(2, protocol.report_vm_status.call_count) self.assertEqual("Ready", protocol.report_vm_status.call_args[0][0].vmAgent.status) self._assert_no_handler_status(protocol.report_vm_status) # Ensure there are no further retries exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, patch_get_uninstall_command.call_count) self.assertEqual(3, protocol.report_vm_status.call_count) self.assertEqual("Ready", protocol.report_vm_status.call_args[0][0].vmAgent.status) self._assert_no_handler_status(protocol.report_vm_status) @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_update_command') def test_extension_upgrade_failure_when_new_version_update_fails(self, patch_get_update_command, *args): """ When the update command of the new extension fails, it should result in the new extension failed and the old extension disabled. On the next goal state, the entire upgrade scenario should be retried (once), meaning the download, initialize and update are called on the new extension. 
        Note: we don't re-download the zip since it wasn't cleaned up in the previous goal state (we only clean up
        NotInstalled handlers), so we just re-use the existing zip of the new extension.
        """
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_update_command, *args)

        extension_name = exthandlers_handler.ext_handlers[0].name
        extension_calls = []
        original_popen = subprocess.Popen

        def mock_popen(*args, **kwargs):
            # Maintain an internal list of invoked commands of the test extension to assert on later
            if extension_name in args[0]:
                extension_calls.append(args[0])
            return original_popen(*args, **kwargs)

        with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen):
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            update_command_count = len([extension_call for extension_call in extension_calls
                                        if patch_get_update_command.return_value in extension_call])
            enable_command_count = len([extension_call for extension_call in extension_calls
                                        if "-enable" in extension_call])
            self.assertEqual(1, update_command_count)
            self.assertEqual(0, enable_command_count)

            # We report the failure of the new extension version
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.1")

            # If the incarnation number changes (there's a new goal state), ensure we go through the entire upgrade
            # process again.
            test_data.set_incarnation(3)
            protocol.client.update_goal_state()

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            update_command_count = len([extension_call for extension_call in extension_calls
                                        if patch_get_update_command.return_value in extension_call])
            enable_command_count = len([extension_call for extension_call in extension_calls
                                        if "-enable" in extension_call])
            self.assertEqual(2, update_command_count)
            self.assertEqual(0, enable_command_count)

            # We report the failure of the new extension version
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.1")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_failure_when_prev_version_disable_fails(self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command, *args)  # pylint: disable=unused-variable

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command') as patch_get_enable_command:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # When the previous version's disable fails, we expect the upgrade scenario to fail, so the enable
            # for the new version is not called and the new version handler's status is reported as not ready.
            self.assertEqual(1, patch_get_disable_command.call_count)
            self.assertEqual(0, patch_get_enable_command.call_count)
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.1")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_failure_when_prev_version_disable_fails_and_recovers_on_next_incarnation(self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command, *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command') as patch_get_enable_command:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # When the previous version's disable fails, we expect the upgrade scenario to fail, so the enable
            # for the new version is not called and the new version handler's status is reported as not ready.
            self.assertEqual(1, patch_get_disable_command.call_count)
            self.assertEqual(0, patch_get_enable_command.call_count)
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.1")

            # Force a new goal state incarnation, only then will we attempt the upgrade again
            test_data.set_incarnation(3)
            protocol.client.update_goal_state()

            # Ensure disable won't fail by making launch_command a no-op
            with patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance.launch_command') as patch_launch_command:  # pylint: disable=unused-variable
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                self.assertEqual(2, patch_get_disable_command.call_count)
                self.assertEqual(1, patch_get_enable_command.call_count)
                self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_failure_when_prev_version_disable_fails_incorrect_zip(self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command,  # pylint: disable=unused-variable
                                                                                          *args)

        # The download logic has retry logic that sleeps before each try - make sleep a no-op.
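        # (Assumed flow for this scenario: zipfile.ZipFile.extractall is forced to raise an
        # IOError below, so the new package never extracts and neither the old handler's
        # disable nor the new handler's enable is ever invoked.)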
with patch("time.sleep"): with patch("zipfile.ZipFile.extractall") as patch_zipfile_extractall: with patch( 'azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command') as patch_get_enable_command: patch_zipfile_extractall.side_effect = raise_ioerror # The zipfile was corrupt and the upgrade sequence failed exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() # We never called the disable of the old version due to the failure when unzipping the new version, # nor the enable of the new version self.assertEqual(0, patch_get_disable_command.call_count) self.assertEqual(0, patch_get_enable_command.call_count) # Ensure we are processing the same goal state only once loop_run = 5 for x in range(loop_run): # pylint: disable=unused-variable exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(0, patch_get_disable_command.call_count) self.assertEqual(0, patch_get_enable_command.call_count) @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command') def test_old_handler_reports_failure_on_disable_fail_on_update(self, patch_get_disable_command, *args): old_version, new_version = "1.0.0", "1.0.1" test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command, # pylint: disable=unused-variable *args) with patch.object(ExtHandlerInstance, "report_event", autospec=True) as patch_report_event: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, patch_get_disable_command.call_count) old_version_args, old_version_kwargs = patch_report_event.call_args new_version_args, new_version_kwargs = patch_report_event.call_args_list[0] self.assertEqual(new_version_args[0].ext_handler.version, new_version, "The first call to report event should be from the new version of the ext-handler " "to report download succeeded") self.assertEqual(new_version_kwargs['message'], "Download succeeded", "The message should be Download Succedded") self.assertEqual(old_version_args[0].ext_handler.version, old_version, "The last report event call should be from the old version ext-handler " "to report the event from the previous version") self.assertFalse(old_version_kwargs['is_success'], "The last call to report event should be for a failure") self.assertTrue('Error' in old_version_kwargs['message'], "No error reported") # This is ensuring that the error status is being written to the new version self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version=new_version) @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_update_command') def test_upgrade_failure_with_exception_handling(self, patch_get_update_command, *args): """ Extension upgrade failure should not be retried """ test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_update_command, # pylint: disable=unused-variable *args) exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, patch_get_update_command.call_count) self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.1") @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command') def test_extension_upgrade_should_pass_when_continue_on_update_failure_is_true_and_prev_version_disable_fails( self, patch_get_disable_command, *args): test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command, # pylint: disable=unused-variable 
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=True) \
                as mock_continue_on_update_failure:
            # These are just testing the mocks have been called and asserting the test conditions have been met
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_disable_command.call_count)
            self.assertEqual(2, mock_continue_on_update_failure.call_count,
                             "This should be called twice, for both disable and uninstall")

            # Ensure the handler status and ext_status is successful
            self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1")
            self._assert_ext_status(protocol.report_vm_status, "success", 0)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_uninstall_command')
    def test_extension_upgrade_should_pass_when_continue_on_update_failue_is_true_and_prev_version_uninstall_fails(
            self, patch_get_uninstall_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_uninstall_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=True) \
                as mock_continue_on_update_failure:
            # These are just testing the mocks have been called and asserting the test conditions have been met
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_uninstall_command.call_count)
            self.assertEqual(2, mock_continue_on_update_failure.call_count,
                             "This should be called twice, for both disable and uninstall")

            # Ensure the handler status and ext_status is successful
            self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1")
            self._assert_ext_status(protocol.report_vm_status, "success", 0)

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_should_fail_when_continue_on_update_failure_is_false_and_prev_version_disable_fails(
            self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=False) \
                as mock_continue_on_update_failure:
            # These are just testing the mocks have been called and asserting the test conditions have been met
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_disable_command.call_count)
            self.assertEqual(1, mock_continue_on_update_failure.call_count,
                             "The first call would raise an exception")

            # Assert test scenario
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.1")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_uninstall_command')
    def test_extension_upgrade_should_fail_when_continue_on_update_failure_is_false_and_prev_version_uninstall_fails(
            self, patch_get_uninstall_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_uninstall_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=False) \
                as mock_continue_on_update_failure:
            # These are just testing the mocks have been called and asserting the test conditions have been met
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            self.assertEqual(1, patch_get_uninstall_command.call_count)
            self.assertEqual(2, mock_continue_on_update_failure.call_count,
                             "The second call would raise an exception")

            # Assert test scenario
            self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=0, version="1.0.1")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_disable_command')
    def test_extension_upgrade_should_fail_when_continue_on_update_failure_is_true_and_old_disable_and_new_enable_fails(
            self, patch_get_disable_command, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(patch_get_disable_command,  # pylint: disable=unused-variable
                                                                                          *args)

        with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=True) \
                as mock_continue_on_update_failure:
            with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.get_enable_command', return_value="exit 1") \
                    as patch_get_enable:
                # These are just testing the mocks have been called and asserting the test conditions have been met
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                self.assertEqual(1, patch_get_disable_command.call_count)
                self.assertEqual(2, mock_continue_on_update_failure.call_count)
                self.assertEqual(1, patch_get_enable.call_count)

                # Assert test scenario
                self._assert_handler_status(protocol.report_vm_status, "NotReady", expected_ext_count=1, version="1.0.1")

    @patch('azurelinuxagent.ga.exthandlers.HandlerManifest.is_continue_on_update_failure', return_value=True)
    def test_uninstall_rc_env_var_should_report_not_run_for_non_update_calls_to_exthandler_run(
            self, patch_continue_on_update, *args):
        test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(Mock(), *args)

        with patch.object(CGroupConfigurator.get_instance(), "start_extension_command",
                          side_effect=[ExtensionError("Disable Failed"), "ok", ExtensionError("uninstall failed"),
                                       "ok", "ok", "New enable run ok"]) as patch_start_cmd:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            _, update_kwargs = patch_start_cmd.call_args_list[1]
            _, install_kwargs = patch_start_cmd.call_args_list[3]
            _, enable_kwargs = patch_start_cmd.call_args_list[4]

            # Ensure that the env variables were present in the first run when failures were thrown for update
            self.assertEqual(2, patch_continue_on_update.call_count)
            self.assertTrue(
                '-update' in update_kwargs['command'] and ExtCommandEnvVariable.DisableReturnCode in update_kwargs['env'],
                "The update command call should have Disable Failed in env variable")
            self.assertTrue(
                '-install' in install_kwargs['command'] and ExtCommandEnvVariable.DisableReturnCode not in install_kwargs['env'],
                "The Disable Failed env variable should be removed from install command")
            self.assertTrue(
                '-install' in install_kwargs['command'] and ExtCommandEnvVariable.UninstallReturnCode in install_kwargs['env'],
                "The install command call should have Uninstall Failed in env variable")
            self.assertTrue(
                '-enable' in enable_kwargs['command'] and ExtCommandEnvVariable.UninstallReturnCode in enable_kwargs['env'],
                "The enable command call should have Uninstall Failed in env variable")

            # Initiating another run which shouldn't have any failed env variables in it if no failures
            # Updating Incarnation
            test_data.set_incarnation(3)
            protocol.client.update_goal_state()

            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            _, new_enable_kwargs = patch_start_cmd.call_args

            # Ensure the new run didn't have the Disable Return Code env variable
            self.assertNotIn(ExtCommandEnvVariable.DisableReturnCode, new_enable_kwargs['env'])

            # Ensure the new run had the Uninstall Return Code env variable == NOT_RUN
            self.assertIn(ExtCommandEnvVariable.UninstallReturnCode, new_enable_kwargs['env'])
            self.assertTrue(new_enable_kwargs['env'][ExtCommandEnvVariable.UninstallReturnCode] == NOT_RUN)

        # Ensure the handler status and ext_status is successful
        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1")
        self._assert_ext_status(protocol.report_vm_status, "success", 0)

    def test_ext_path_and_version_env_variables_set_for_ever_operation(self, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        with patch.object(CGroupConfigurator.get_instance(), "start_extension_command") as patch_start_cmd:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            # Extension Path and Version should be set for all launch_command calls
            for args, kwargs in patch_start_cmd.call_args_list:
                self.assertIn(ExtCommandEnvVariable.ExtensionPath, kwargs['env'])
                self.assertIn('OSTCExtensions.ExampleHandlerLinux-1.0.0',
                              kwargs['env'][ExtCommandEnvVariable.ExtensionPath])
                self.assertIn(ExtCommandEnvVariable.ExtensionVersion, kwargs['env'])
                self.assertEqual("1.0.0", kwargs['env'][ExtCommandEnvVariable.ExtensionVersion])

        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")

    @patch("azurelinuxagent.ga.cgroupconfigurator.handle_process_completion", side_effect="Process Successful")
    def test_ext_sequence_no_should_be_set_for_every_command_call(self, _, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_MULTIPLE_EXT)
        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        with patch("subprocess.Popen") as patch_popen:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            for _, kwargs in patch_popen.call_args_list:
                self.assertIn(ExtCommandEnvVariable.ExtensionSeqNumber, kwargs['env'])
                self.assertEqual(kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber], "0")

        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")

        # Next incarnation and seq for extensions, update version
        test_data.goal_state = test_data.goal_state.replace("1<", "2<")
        test_data.ext_conf = test_data.ext_conf.replace('version="1.0.0"', 'version="1.0.1"')
        test_data.ext_conf = test_data.ext_conf.replace('seqNo="0"', 'seqNo="1"')
        test_data.manifest = test_data.manifest.replace('1.0.0', '1.0.1')

        exthandlers_handler, protocol = self._create_mock(test_data, *args)  # pylint: disable=no-value-for-parameter

        with patch("subprocess.Popen") as patch_popen:
            exthandlers_handler.run()
            exthandlers_handler.report_ext_handlers_status()

            for _, kwargs in patch_popen.call_args_list:
                self.assertIn(ExtCommandEnvVariable.ExtensionSeqNumber, kwargs['env'])
                self.assertEqual(kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber], "1")

        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.1")

    def test_ext_sequence_no_should_be_set_from_within_extension(self, *args):
        test_file_name = "testfile.sh"
        handler_json = {
            "installCommand": test_file_name,
            "uninstallCommand": test_file_name,
            "updateCommand": test_file_name,
            "enableCommand": test_file_name,
"disableCommand": test_file_name, "rebootAfterInstall": False, "reportHeartbeat": False, "continueOnUpdateFailure": False } manifest = HandlerManifest({'handlerManifest': handler_json}) # Script prints env variables passed to this process and prints all starting with ConfigSequenceNumber test_file = """ printenv | grep ConfigSequenceNumber """ base_dir = os.path.join(conf.get_lib_dir(), 'OSTCExtensions.ExampleHandlerLinux-1.0.0') if not os.path.exists(base_dir): os.mkdir(base_dir) self.create_script(os.path.join(base_dir, test_file_name), test_file) test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_EXT_SINGLE) exthandlers_handler, protocol = self._create_mock(test_data, *args) # pylint: disable=unused-variable,no-value-for-parameter expected_seq_no = 0 with patch.object(ExtHandlerInstance, "load_manifest", return_value=manifest): with patch.object(ExtHandlerInstance, 'report_event') as mock_report_event: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() for _, kwargs in mock_report_event.call_args_list: # The output is of the format - 'Command: testfile.sh -{Operation} \n[stdout]ConfigSequenceNumber=N\n[stderr]' if ("Command: " + test_file_name) not in kwargs['message']: continue self.assertIn("{0}={1}".format(ExtCommandEnvVariable.ExtensionSeqNumber, expected_seq_no), kwargs['message']) # Update goal state, extension version and seq no test_data.goal_state = test_data.goal_state.replace("1<", "2<") test_data.ext_conf = test_data.ext_conf.replace('version="1.0.0"', 'version="1.0.1"') test_data.ext_conf = test_data.ext_conf.replace('seqNo="0"', 'seqNo="1"') test_data.manifest = test_data.manifest.replace('1.0.0', '1.0.1') expected_seq_no = 1 base_dir = os.path.join(conf.get_lib_dir(), 'OSTCExtensions.ExampleHandlerLinux-1.0.1') if not os.path.exists(base_dir): os.mkdir(base_dir) self.create_script(os.path.join(base_dir, test_file_name), test_file) with patch.object(ExtHandlerInstance, 'report_event') as mock_report_event: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() for _, kwargs in mock_report_event.call_args_list: # The output is of the format - 'testfile.sh\n[stdout]ConfigSequenceNumber=N\n[stderr]' if test_file_name not in kwargs['message']: continue self.assertIn("{0}={1}".format(ExtCommandEnvVariable.ExtensionSeqNumber, expected_seq_no), kwargs['message']) def test_correct_exit_code_should_be_set_on_uninstall_cmd_failure(self, *args): test_file_name = "testfile.sh" test_error_file_name = "error.sh" handler_json = { "installCommand": test_file_name + " -install", "uninstallCommand": test_error_file_name, "updateCommand": test_file_name + " -update", "enableCommand": test_file_name + " -enable", "disableCommand": test_error_file_name, "rebootAfterInstall": False, "reportHeartbeat": False, "continueOnUpdateFailure": True } manifest = HandlerManifest({'handlerManifest': handler_json}) # Script prints env variables passed to this process and prints all starting with ConfigSequenceNumber test_file = """ printenv | grep AZURE_ """ exit_code = 151 test_error_content = """ exit %s """ % exit_code error_dir = os.path.join(conf.get_lib_dir(), 'OSTCExtensions.ExampleHandlerLinux-1.0.0') if not os.path.exists(error_dir): os.mkdir(error_dir) self.create_script(os.path.join(error_dir, test_error_file_name), test_error_content) test_data, exthandlers_handler, protocol = self._set_up_update_test_and_update_gs(Mock(), *args) # pylint: disable=unused-variable base_dir = os.path.join(conf.get_lib_dir(), 
                                'OSTCExtensions.ExampleHandlerLinux-1.0.1')
        if not os.path.exists(base_dir):
            os.mkdir(base_dir)
        self.create_script(os.path.join(base_dir, test_file_name), test_file)

        with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.load_manifest", return_value=manifest):
            with patch.object(ExtHandlerInstance, 'report_event') as mock_report_event:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()

                update_kwargs = next(kwargs for _, kwargs in mock_report_event.call_args_list
                                     if "Command: testfile.sh -update" in kwargs['message'])
                install_kwargs = next(kwargs for _, kwargs in mock_report_event.call_args_list
                                      if "Command: testfile.sh -install" in kwargs['message'])
                enable_kwargs = next(kwargs for _, kwargs in mock_report_event.call_args_list
                                     if "Command: testfile.sh -enable" in kwargs['message'])

        self.assertIn("%s=%s" % (ExtCommandEnvVariable.DisableReturnCode, exit_code), update_kwargs['message'])
        self.assertIn("%s=%s" % (ExtCommandEnvVariable.UninstallReturnCode, exit_code), install_kwargs['message'])
        self.assertIn("%s=%s" % (ExtCommandEnvVariable.UninstallReturnCode, exit_code), enable_kwargs['message'])

    def test_it_should_persist_goal_state_aggregate_status_until_new_incarnation(self, mock_get, mock_crypt_util, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)
        exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")

        args, _ = protocol.report_vm_status.call_args
        gs_aggregate_status = args[0].vmAgent.vm_artifacts_aggregate_status.goal_state_aggregate_status
        self.assertIsNotNone(gs_aggregate_status, "Goal State Aggregate status not reported")
        self.assertEqual(gs_aggregate_status.status, GoalStateStatus.Success, "Wrong status reported")
        self.assertEqual(gs_aggregate_status.in_svd_seq_no, "1", "Incorrect seq no")

        # Update incarnation and ensure the gs_aggregate_status is modified too
        test_data.set_incarnation(2)
        protocol.client.update_goal_state()

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()
        self._assert_handler_status(protocol.report_vm_status, "Ready", expected_ext_count=1, version="1.0.0")

        args, _ = protocol.report_vm_status.call_args
        new_gs_aggregate_status = args[0].vmAgent.vm_artifacts_aggregate_status.goal_state_aggregate_status
        self.assertIsNotNone(new_gs_aggregate_status, "New Goal State Aggregate status not reported")
        self.assertNotEqual(gs_aggregate_status, new_gs_aggregate_status, "The gs_aggregate_status should be different")
        self.assertEqual(new_gs_aggregate_status.status, GoalStateStatus.Success, "Wrong status reported")
        self.assertEqual(new_gs_aggregate_status.in_svd_seq_no, "2", "Incorrect seq no")

    def test_it_should_parse_required_features_properly(self, mock_get, mock_crypt_util, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_REQUIRED_FEATURES)
        _, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)

        required_features = protocol.get_goal_state().extensions_goal_state.required_features
        self.assertEqual(3, len(required_features), "Incorrect features parsed")
        for i, feature in enumerate(required_features):
            self.assertEqual(feature, "TestRequiredFeature{0}".format(i + 1), "Name mismatch")

    def test_it_should_fail_goal_state_if_required_features_not_supported(self, mock_get, mock_crypt_util, *args):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE_REQUIRED_FEATURES)
        exthandlers_handler, protocol = self._create_mock(test_data, mock_get, mock_crypt_util, *args)

        exthandlers_handler.run()
        exthandlers_handler.report_ext_handlers_status()

        args, _ = protocol.report_vm_status.call_args
        gs_aggregate_status = args[0].vmAgent.vm_artifacts_aggregate_status.goal_state_aggregate_status
        self.assertEqual(0, len(args[0].vmAgent.extensionHandlers), "No extensions should be reported")
        self.assertIsNotNone(gs_aggregate_status, "GS Aggregate status should be reported")
        self.assertEqual(gs_aggregate_status.status, GoalStateStatus.Failed, "GS should be failed")
        self.assertEqual(gs_aggregate_status.code, GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures,
                         "Incorrect error code set for GS failure")
        self.assertEqual(gs_aggregate_status.in_svd_seq_no, "1", "Sequence Number is wrong")


@patch("azurelinuxagent.common.protocol.wire.CryptUtil")
@patch("azurelinuxagent.common.utils.restutil.http_get")
class TestExtensionSequencing(AgentTestCase):

    def _create_mock(self, mock_http_get, MockCryptUtil):
        test_data = wire_protocol_data.WireProtocolData(wire_protocol_data.DATA_FILE)

        # Mock protocol to return test data
        mock_http_get.side_effect = test_data.mock_http_get
        MockCryptUtil.side_effect = test_data.mock_crypt_util

        protocol = WireProtocol(KNOWN_WIRESERVER_IP)
        protocol.detect()
        protocol.report_vm_status = MagicMock()

        handler = get_exthandlers_handler(protocol)
        return handler

    def _set_dependency_levels(self, dependency_levels, exthandlers_handler):
        """
        Creates extensions with the given dependencyLevel
        """
        handler_map = {}
        all_handlers = []
        for handler_name, level in dependency_levels:
            if handler_map.get(handler_name) is None:
                handler = Extension(name=handler_name)
                extension = ExtensionSettings(name=handler_name)
                handler.state = ExtensionRequestedState.Enabled
                handler.settings.append(extension)
                handler_map[handler_name] = handler
                all_handlers.append(handler)

            handler = handler_map[handler_name]
            for ext in handler.settings:
                ext.dependencyLevel = level

        exthandlers_handler.protocol.get_goal_state().extensions_goal_state._extensions *= 0
        exthandlers_handler.protocol.get_goal_state().extensions_goal_state.extensions.extend(all_handlers)

    def _validate_extension_sequence(self, expected_sequence, exthandlers_handler):
        installed_extensions = [a[0].ext_handler.name for a, _ in
                                exthandlers_handler.handle_ext_handler.call_args_list]
        self.assertListEqual(expected_sequence, installed_extensions,
                             "Expected and actual list of extensions are not equal")

    def _run_test(self, extensions_to_be_failed, expected_sequence, exthandlers_handler):
        """
        Mocks get_ext_handling_status() to mimic error status for a given extension.
        Calls ExtHandlersHandler.run()
        Verifies that ExtHandlersHandler.handle_ext_handler() was called with the appropriate extensions
        in the expected order.
        """
""" def get_ext_handling_status(ext): status = "error" if ext.name in extensions_to_be_failed else "success" return status exthandlers_handler.handle_ext_handler = MagicMock() with patch.object(ExtHandlerInstance, "get_ext_handling_status", side_effect=get_ext_handling_status): with patch.object(ExtHandlerInstance, "get_handler_status", ExtHandlerStatus): with patch('azurelinuxagent.ga.exthandlers._DEFAULT_EXT_TIMEOUT_MINUTES', 0.01): exthandlers_handler.run() self._validate_extension_sequence(expected_sequence, exthandlers_handler) def test_handle_ext_handlers(self, *args): """ Tests extension sequencing among multiple extensions with dependencies. This test introduces failure in all possible levels and extensions. Verifies that the sequencing is in the expected order and a failure in one extension skips the rest of the extensions in the sequence. """ exthandlers_handler = self._create_mock(*args) # pylint: disable=no-value-for-parameter self._set_dependency_levels([("A", 3), ("B", 2), ("C", 2), ("D", 1), ("E", 1), ("F", 1), ("G", 1)], exthandlers_handler) extensions_to_be_failed = [] expected_sequence = ["D", "E", "F", "G", "B", "C", "A"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["D"] expected_sequence = ["D"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["E"] expected_sequence = ["D", "E"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["F"] expected_sequence = ["D", "E", "F"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["G"] expected_sequence = ["D", "E", "F", "G"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["B"] expected_sequence = ["D", "E", "F", "G", "B"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["C"] expected_sequence = ["D", "E", "F", "G", "B", "C"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["A"] expected_sequence = ["D", "E", "F", "G", "B", "C", "A"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) def test_handle_ext_handlers_with_uninstallation(self, *args): """ Tests extension sequencing among multiple extensions with dependencies when some extension are to be uninstalled. Verifies that the sequencing is in the expected order and the uninstallation takes place prior to all the installation/enable. """ exthandlers_handler = self._create_mock(*args) # pylint: disable=no-value-for-parameter # "A", "D" and "F" are marked as to be uninstalled self._set_dependency_levels([("A", 0), ("B", 2), ("C", 2), ("D", 0), ("E", 1), ("F", 0), ("G", 1)], exthandlers_handler) extensions_to_be_failed = [] expected_sequence = ["A", "D", "F", "E", "G", "B", "C"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) def test_handle_ext_handlers_fallback(self, *args): """ This test makes sure that the extension sequencing is applied only when the user specifies dependency information in the extension. When there is no dependency specified, the agent is expected to assign dependencyLevel=0 to all extension. Also, it is expected to install all the extension no matter if there is any failure in any of the extensions. 
""" exthandlers_handler = self._create_mock(*args) # pylint: disable=no-value-for-parameter self._set_dependency_levels([("A", 1), ("B", 1), ("C", 1), ("D", 1), ("E", 1), ("F", 1), ("G", 1)], exthandlers_handler) # Expected sequence must contain all the extensions in the given order. # The following test cases verfy against this same expected sequence no matter if any extension failed expected_sequence = ["A", "B", "C", "D", "E", "F", "G"] # Make sure that failure in any extension does not prevent other extensions to be installed extensions_to_be_failed = [] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["A"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["B"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["C"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["D"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["E"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["F"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) extensions_to_be_failed = ["G"] self._run_test(extensions_to_be_failed, expected_sequence, exthandlers_handler) class TestInVMArtifactsProfile(AgentTestCase): def test_it_should_parse_boolean_values(self): profile_json = '{ "onHold": true }' profile = InVMArtifactsProfile(profile_json) self.assertTrue(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json)) profile_json = '{ "onHold": false }' profile = InVMArtifactsProfile(profile_json) self.assertFalse(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json)) def test_it_should_parse_boolean_values_encoded_as_strings(self): profile_json = '{ "onHold": "true" }' profile = InVMArtifactsProfile(profile_json) self.assertTrue(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json)) profile_json = '{ "onHold": "false" }' profile = InVMArtifactsProfile(profile_json) self.assertFalse(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json)) profile_json = '{ "onHold": "TRUE" }' profile = InVMArtifactsProfile(profile_json) self.assertTrue(profile.is_on_hold(), "Failed to parse '{0}'".format(profile_json)) class TestExtensionUpdateOnFailure(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.0001)) self.mock_sleep.start() def tearDown(self): self.mock_sleep.stop() AgentTestCase.tearDown(self) @staticmethod def _do_upgrade_scenario_and_get_order(first_ext, upgraded_ext): """ Given the provided ExtensionEmulator objects, installs the first and then attempts to update to the second. StatusBlobs and command invocations for each actor can be checked with {emulator}.status_blobs and {emulator}.actions[{command_name}] respectively. Note that this method assumes the first extension's install command should succeed. Don't use this method if your test is attempting to emulate a fresh install (i.e. not an upgrade) with a failing install. 
        with mock_wire_protocol(DATA_FILE, http_put_handler=generate_put_handler(first_ext, upgraded_ext)) as protocol:
            exthandlers_handler = get_exthandlers_handler(protocol)

            with enable_invocations(first_ext, upgraded_ext) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                invocation_record.compare(
                    (first_ext, ExtensionCommandNames.INSTALL),
                    # Note that if installCommand is supposed to fail, this will erroneously raise.
                    (first_ext, ExtensionCommandNames.ENABLE)
                )

            protocol.mock_wire_data.set_extensions_config_version(upgraded_ext.version)
            protocol.mock_wire_data.set_incarnation(2)
            protocol.client.update_goal_state()

            with enable_invocations(first_ext, upgraded_ext) as invocation_record:
                exthandlers_handler.run()
                exthandlers_handler.report_ext_handlers_status()
                return invocation_record

    def test_non_enabled_ext_should_not_be_disabled_at_ver_update(self):
        _, enable_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(enable_action=enable_action)
        second_ext = extension_emulator(version="1.1.0")

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

    def test_disable_failed_env_variable_should_be_set_for_update_cmd_when_continue_on_update_failure_is_true(self):
        exit_code, disable_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(disable_action=disable_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args
        self.assertEqual(kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], exit_code,
                         "DisableAction's return code should be in updateAction's env.")

    def test_uninstall_failed_env_variable_should_set_for_install_when_continue_on_update_failure_is_true(self):
        exit_code, uninstall_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(uninstall_action=uninstall_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, kwargs = second_ext.actions[ExtensionCommandNames.INSTALL].call_args
        self.assertEqual(kwargs["env"][ExtCommandEnvVariable.UninstallReturnCode], exit_code,
                         "UninstallAction's return code should be in installAction's env.")

    def test_extension_error_should_be_raised_when_continue_on_update_failure_is_false_on_disable_failure(self):
        exit_code, disable_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(disable_action=disable_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=False)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)
        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE)
        )

        self.assertEqual(len(first_ext.status_blobs), 1, "The first extension should not have submitted a second status.")
        self.assertEqual(len(second_ext.status_blobs), 1, "The second extension should have a single submitted status.")

        self.assertTrue(exit_code in second_ext.status_blobs[0]["formattedMessage"]["message"],
                        "DisableAction's error code should be propagated to the status blob.")

    def test_extension_error_should_be_raised_when_continue_on_update_failure_is_false_on_uninstall_failure(self):
        exit_code, uninstall_action = Actions.generate_unique_fail()
        first_ext = extension_emulator(uninstall_action=uninstall_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=False)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL)
        )

        self.assertEqual(len(first_ext.status_blobs), 1, "The first extension should not have submitted a second status.")
        self.assertEqual(len(second_ext.status_blobs), 1, "The second extension should have a single submitted status.")

        self.assertTrue(exit_code in second_ext.status_blobs[0]["formattedMessage"]["message"],
                        "UninstallAction's error code should be propagated to the status blob.")

    def test_extension_error_should_be_raised_when_continue_on_update_failure_is_true_on_disable_and_update_failure(self):
        exit_codes = {}
        exit_codes["disable"], disable_action = Actions.generate_unique_fail()
        exit_codes["update"], update_action = Actions.generate_unique_fail()

        first_ext = extension_emulator(disable_action=disable_action)
        second_ext = extension_emulator(version="1.1.0", update_action=update_action, continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE)
        )

        self.assertEqual(len(first_ext.status_blobs), 1, "The first extension should not have submitted a second status.")
        self.assertEqual(len(second_ext.status_blobs), 1, "The second extension should have a single submitted status.")

        self.assertTrue(exit_codes["update"] in second_ext.status_blobs[0]["formattedMessage"]["message"],
                        "UpdateAction's error code should be propagated to the status blob.")

    def test_extension_error_should_be_raised_when_continue_on_update_failure_is_true_on_uninstall_and_install_failure(self):
        exit_codes = {}
        exit_codes["install"], install_action = Actions.generate_unique_fail()
        exit_codes["uninstall"], uninstall_action = Actions.generate_unique_fail()

        first_ext = extension_emulator(uninstall_action=uninstall_action)
        second_ext = extension_emulator(version="1.1.0", install_action=install_action, continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL)
        )

        self.assertEqual(len(first_ext.status_blobs), 1, "The first extension should not have submitted a second status.")
        self.assertEqual(len(second_ext.status_blobs), 1, "The second extension should have a single submitted status.")
extension should have a single submitted status.")
        self.assertTrue(exit_codes["install"] in second_ext.status_blobs[0]["formattedMessage"]["message"],
            "InstallAction's error code should be propagated to the status blob.")

    def test_failed_env_variables_should_be_set_from_within_extension_commands(self):
        """
        This test validates, from the perspective of the extension commands themselves, whether
        the env variables are being set for those processes.
        """
        exit_codes = {}
        exit_codes["disable"], disable_action = Actions.generate_unique_fail()
        exit_codes["uninstall"], uninstall_action = Actions.generate_unique_fail()

        first_ext = extension_emulator(disable_action=disable_action, uninstall_action=uninstall_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True)

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.INSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, update_kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args
        _, install_kwargs = second_ext.actions[ExtensionCommandNames.INSTALL].call_args

        second_extension_dir = os.path.join(
            conf.get_lib_dir(),
            "{0}-{1}".format(second_ext.name, second_ext.version)
        )

        # Ensure we're checking variables for the update scenario
        self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], exit_codes["disable"],
            "DisableAction's return code should be present in updateAction's env.")
        self.assertTrue(ExtCommandEnvVariable.UninstallReturnCode not in update_kwargs["env"],
            "UninstallAction's return code should not be in updateAction's env.")
        self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.ExtensionPath], second_extension_dir,
            "The second extension's directory should be present in updateAction's env.")
        self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.ExtensionVersion], "1.1.0",
            "The second extension's version should be present in updateAction's env.")

        # Ensure we're checking variables for the install scenario
        self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.UninstallReturnCode], exit_codes["uninstall"],
            "UninstallAction's return code should be present in installAction's env.")
        self.assertTrue(ExtCommandEnvVariable.DisableReturnCode not in install_kwargs["env"],
            "DisableAction's return code should not be in installAction's env.")
        self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.ExtensionPath], second_extension_dir,
            "The second extension's directory should be present in installAction's env.")
        self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.ExtensionVersion], "1.1.0",
            "The second extension's version should be present in installAction's env.")

    def test_correct_exit_code_should_set_on_disable_cmd_failure(self):
        exit_code, disable_action = Actions.generate_unique_fail()

        first_ext = extension_emulator(disable_action=disable_action)
        second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True,
            update_mode="UpdateWithoutInstall")

        invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext)

        invocation_record.compare(
            (first_ext, ExtensionCommandNames.DISABLE),
            (second_ext, ExtensionCommandNames.UPDATE),
            (first_ext, ExtensionCommandNames.UNINSTALL),
            (second_ext, ExtensionCommandNames.ENABLE)
        )

        _, update_kwargs = 
second_ext.actions[ExtensionCommandNames.UPDATE].call_args self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], exit_code, "DisableAction's return code should be present in UpdateAction's env.") def test_timeout_code_should_set_on_cmd_timeout(self): # Return None to every poll, forcing a timeout after 900 seconds (actually very quick because sleep(*) is mocked) force_timeout = lambda *args, **kwargs: None first_ext = extension_emulator(disable_action=force_timeout, uninstall_action=force_timeout) second_ext = extension_emulator(version="1.1.0", continue_on_update_failure=True) with patch("os.killpg"): with patch("os.getpgid"): invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext) invocation_record.compare( (first_ext, ExtensionCommandNames.DISABLE), (second_ext, ExtensionCommandNames.UPDATE), (first_ext, ExtensionCommandNames.UNINSTALL), (second_ext, ExtensionCommandNames.INSTALL), (second_ext, ExtensionCommandNames.ENABLE) ) _, update_kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args _, install_kwargs = second_ext.actions[ExtensionCommandNames.INSTALL].call_args # Verify both commands are reported as timeouts. self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], str(ExtensionErrorCodes.PluginHandlerScriptTimedout), "DisableAction's return code should be marked as a timeout in UpdateAction's env.") self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.UninstallReturnCode], str(ExtensionErrorCodes.PluginHandlerScriptTimedout), "UninstallAction's return code should be marked as a timeout in installAction's env.") def test_success_code_should_set_in_env_variables_on_cmd_success(self): first_ext = extension_emulator() second_ext = extension_emulator(version="1.1.0") invocation_record = TestExtensionUpdateOnFailure._do_upgrade_scenario_and_get_order(first_ext, second_ext) invocation_record.compare( (first_ext, ExtensionCommandNames.DISABLE), (second_ext, ExtensionCommandNames.UPDATE), (first_ext, ExtensionCommandNames.UNINSTALL), (second_ext, ExtensionCommandNames.INSTALL), (second_ext, ExtensionCommandNames.ENABLE) ) _, update_kwargs = second_ext.actions[ExtensionCommandNames.UPDATE].call_args _, install_kwargs = second_ext.actions[ExtensionCommandNames.INSTALL].call_args self.assertEqual(update_kwargs["env"][ExtCommandEnvVariable.DisableReturnCode], "0", "DisableAction's return code in updateAction's env should be 0.") self.assertEqual(install_kwargs["env"][ExtCommandEnvVariable.UninstallReturnCode], "0", "UninstallAction's return code in installAction's env should be 0.") class TestCollectExtensionStatus(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.lib_dir = tempfile.mkdtemp() self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.001)) self.mock_sleep.start() def tearDown(self): self.mock_sleep.stop() AgentTestCase.tearDown(self) def _setup_extension_for_validating_collect_ext_status(self, mock_lib_dir, status_file=None): handler_name = "TestHandler" handler_version = "1.0.0" mock_lib_dir.return_value = self.lib_dir fileutil.mkdir(os.path.join(self.lib_dir, handler_name + "-" + handler_version, "config")) fileutil.mkdir(os.path.join(self.lib_dir, handler_name + "-" + handler_version, "status")) shutil.copy(tempfile.mkstemp(prefix="test-file")[1], os.path.join(self.lib_dir, handler_name + "-" + handler_version, "config", "0.settings")) if status_file is not None: shutil.copy(os.path.join(data_dir, "ext", status_file), 
os.path.join(self.lib_dir, handler_name + "-" + handler_version, "status", "0.status"))

        with mock_wire_protocol(DATA_FILE) as protocol:
            exthandler = Extension(name=handler_name)
            exthandler.version = handler_version
            extension = ExtensionSettings(name=handler_name, sequenceNumber=0)
            exthandler.settings.append(extension)

            return ExtHandlerInstance(exthandler, protocol), extension

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status(self, mock_lib_dir):
        """
        This test validates that collect_ext_status correctly picks up the status file
        (sample-status.json) and then parses it correctly.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir,
                                                                                           "sample-status.json")
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, SUCCESS_CODE_FROM_STATUS_FILE)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, "Enable")
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertEqual(ext_status.message, "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. In "
                                             "lobortis elementum sapien, non commodo odio semper ac.")
        self.assertEqual(ext_status.status, ExtensionStatusValue.success)
        self.assertEqual(len(ext_status.substatusList), 1)
        sub_status = ext_status.substatusList[0]
        self.assertEqual(sub_status.code, "0")
        self.assertEqual(sub_status.message, None)
        self.assertEqual(sub_status.status, ExtensionStatusValue.success)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_collect_ext_status_for_invalid_json(self, mock_lib_dir):
        """
        This test validates that collect_ext_status correctly picks up the status file
        (sample-status-invalid-json-format.json) and that, because the JSON cannot be parsed,
        the extension status message includes the first 2000 bytes of the status file and the
        line number at which parsing failed. The uniqueMachineId tag comes from the status file.
        """
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir,
                                                                                           "sample-status-invalid-json-format.json")
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginSettingsStatusInvalid)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, None)
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertRegex(ext_status.message, r".*The status reported by the extension TestHandler-1.0.0\(Sequence number 0\), "
                                             r"was in an incorrect format and the agent could not parse it correctly."
                                             r" Failed due to.*")
        self.assertIn("\"uniqueMachineId\": \"e5e5602b-48a6-4c35-9f96-752043777af1\"", ext_status.message)
        self.assertEqual(ext_status.status, ExtensionStatusValue.error)
        self.assertEqual(len(ext_status.substatusList), 0)

    @patch("azurelinuxagent.common.conf.get_lib_dir")
    def test_it_should_collect_ext_status_even_when_config_dir_deleted(self, mock_lib_dir):
        ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir,
                                                                                           "sample-status.json")
        shutil.rmtree(ext_handler_i.get_conf_dir(), ignore_errors=True)
        ext_status = ext_handler_i.collect_ext_status(extension)

        self.assertEqual(ext_status.code, SUCCESS_CODE_FROM_STATUS_FILE)
        self.assertEqual(ext_status.configurationAppliedTime, None)
        self.assertEqual(ext_status.operation, "Enable")
        self.assertEqual(ext_status.sequenceNumber, 0)
        self.assertEqual(ext_status.message, "Aenean semper nunc nisl, vitae sollicitudin felis consequat at. 
In " "lobortis elementum sapien, non commodo odio semper ac.") self.assertEqual(ext_status.status, ExtensionStatusValue.success) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_collect_ext_status_very_large_status_message(self, mock_lib_dir): """ Testing collect_ext_status() with a very large status file (>128K) to see if it correctly parses the status without generating a really large message. """ ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status-very-large.json") ext_status = ext_handler_i.collect_ext_status(extension) self.assertEqual(ext_status.code, SUCCESS_CODE_FROM_STATUS_FILE) self.assertEqual(ext_status.configurationAppliedTime, None) self.assertEqual(ext_status.operation, "Enable") self.assertEqual(ext_status.sequenceNumber, 0) # [TRUNCATED] comes from azurelinuxagent.ga.exthandlers._TRUNCATED_SUFFIX self.assertRegex(ext_status.message, r"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum non " r"lacinia urna, sit .*\[TRUNCATED\]") self.maxDiff = None self.assertEqual(ext_status.status, ExtensionStatusValue.success) self.assertEqual(len(ext_status.substatusList), 1) # NUM OF SUBSTATUS PARSED for sub_status in ext_status.substatusList: self.assertRegex(sub_status.name, r'\[\{"status"\: \{"status": "success", "code": "1", "snapshotInfo": ' r'\[\{"snapshotUri":.*') self.assertEqual(0, sub_status.code) self.assertRegex(sub_status.message, "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum " "non lacinia urna, sit amet venenatis orci.*") self.assertEqual(sub_status.status, ExtensionStatusValue.success) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_collect_ext_status_very_large_status_file_with_multiple_substatus_nodes(self, mock_lib_dir): """ Testing collect_ext_status() with a very large status file (>128K) to see if it correctly parses the status without generating a really large message. This checks if the multiple substatus messages are correctly parsed and truncated. """ ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status( mock_lib_dir, "sample-status-very-large-multiple-substatuses.json") # ~470K bytes. ext_status = ext_handler_i.collect_ext_status(extension) self.assertEqual(ext_status.code, SUCCESS_CODE_FROM_STATUS_FILE) self.assertEqual(ext_status.configurationAppliedTime, None) self.assertEqual(ext_status.operation, "Enable") self.assertEqual(ext_status.sequenceNumber, 0) self.assertRegex(ext_status.message, "Lorem ipsum dolor sit amet, consectetur adipiscing elit. " "Vestibulum non lacinia urna, sit .*") self.assertEqual(ext_status.status, ExtensionStatusValue.success) self.assertEqual(len(ext_status.substatusList), 12) # The original file has 41 substatus nodes. for sub_status in ext_status.substatusList: self.assertRegex(sub_status.name, r'\[\{"status"\: \{"status": "success", "code": "1", "snapshotInfo": ' r'\[\{"snapshotUri":.*') self.assertEqual(0, sub_status.code) self.assertRegex(sub_status.message, "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum " "non lacinia urna, sit amet venenatis orci.*") self.assertEqual(ExtensionStatusValue.success, sub_status.status) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_collect_ext_status_read_file_read_exceptions(self, mock_lib_dir): """ Testing collect_ext_status to validate the readfile exceptions. 
""" ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status.json") original_read_file = read_file def mock_read_file(file_, *args, **kwargs): expected_status_file_path = os.path.join(self.lib_dir, ext_handler_i.ext_handler.name + "-" + ext_handler_i.ext_handler.version, "status", "0.status") if file_ == expected_status_file_path: raise IOError("No such file or directory: {0}".format(expected_status_file_path)) else: original_read_file(file_, *args, **kwargs) with patch('azurelinuxagent.common.utils.fileutil.read_file', mock_read_file): ext_status = ext_handler_i.collect_ext_status(extension) self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginUnknownFailure) self.assertEqual(ext_status.configurationAppliedTime, None) self.assertEqual(ext_status.operation, None) self.assertEqual(ext_status.sequenceNumber, 0) self.assertRegex(ext_status.message, r".*We couldn't read any status for {0}-{1} extension, for the " r"sequence number {2}. It failed due to". format("TestHandler", "1.0.0", 0)) self.assertEqual(ext_status.status, ExtensionStatusValue.error) self.assertEqual(len(ext_status.substatusList), 0) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_collect_ext_status_json_exceptions(self, mock_lib_dir): """ Testing collect_ext_status() with a malformed json status file. """ ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status-invalid-format-emptykey-line7.json") ext_status = ext_handler_i.collect_ext_status(extension) self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginSettingsStatusInvalid) self.assertEqual(ext_status.configurationAppliedTime, None) self.assertEqual(ext_status.operation, None) self.assertEqual(ext_status.sequenceNumber, 0) self.assertRegex(ext_status.message, r".*The status reported by the extension {0}-{1}\(Sequence number {2}\), " "was in an incorrect format and the agent could not parse it correctly." " Failed due to.*". format("TestHandler", "1.0.0", 0)) self.assertEqual(ext_status.status, ExtensionStatusValue.error) self.assertEqual(len(ext_status.substatusList), 0) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_collect_ext_status_parse_ext_status_exceptions(self, mock_lib_dir): """ Testing collect_ext_status() with a malformed json status file. """ ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir, "sample-status-invalid-status-no-status-status-key.json") ext_status = ext_handler_i.collect_ext_status(extension) self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginSettingsStatusInvalid) self.assertEqual(ext_status.configurationAppliedTime, None) self.assertEqual(ext_status.operation, None) self.assertEqual(ext_status.sequenceNumber, 0) self.assertRegex(ext_status.message, "Could not get a valid status from the extension {0}-{1}. " "Encountered the following error".format("TestHandler", "1.0.0")) self.assertEqual(ext_status.status, ExtensionStatusValue.error) self.assertEqual(len(ext_status.substatusList), 0) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_it_should_report_transitioning_if_status_file_not_found(self, mock_lib_dir): """ Testing collect_ext_status() with a missing status file. 
""" ext_handler_i, extension = self._setup_extension_for_validating_collect_ext_status(mock_lib_dir) ext_status = ext_handler_i.collect_ext_status(extension) self.assertEqual(ext_status.code, ExtensionErrorCodes.PluginSuccess) self.assertEqual(ext_status.configurationAppliedTime, None) self.assertEqual(ext_status.operation, None) self.assertEqual(ext_status.sequenceNumber, 0) self.assertIn("This status is being reported by the Guest Agent since no status file was reported by extension {0}". format("TestHandler"), ext_status.message) self.assertEqual(ext_status.status, ExtensionStatusValue.transitioning) self.assertEqual(len(ext_status.substatusList), 0) class TestAdditionalLocationsExtensions(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) self.test_data = DATA_FILE_EXT_ADDITIONAL_LOCATIONS.copy() def tearDown(self): AgentTestCase.tearDown(self) @patch('time.sleep') def test_additional_locations_node_is_consumed(self, _): location_uri_pattern = r'https?://mock-goal-state/(?P{0})/(?P\d)/manifest.xml'\ .format(r'(location)|(failoverlocation)|(additionalLocation)') location_uri_regex = re.compile(location_uri_pattern) manifests_used = [ ('location', '1'), ('failoverlocation', '2'), ('additionalLocation', '3'), ('additionalLocation', '4') ] def manifest_location_handler(url, **kwargs): url_match = location_uri_regex.match(url) if not url_match: if "extensionArtifact" in url: wrapped_url = kwargs.get("headers", {}).get("x-ms-artifact-location") if wrapped_url and location_uri_regex.match(wrapped_url): return Exception("Ignoring host plugin requests for testing purposes.") return None location_type, manifest_num = url_match.group("location_type", "manifest_num") try: manifests_used.remove((location_type, manifest_num)) except ValueError: raise AssertionError("URI '{0}' used multiple times".format(url)) if manifests_used: # Still locations to try in the list; throw a fake # error to make sure all of the locations get called. 
return Exception("Failing manifest fetch from uri '{0}' for testing purposes.".format(url)) return None with mock_wire_protocol(self.test_data, http_get_handler=manifest_location_handler) as protocol: exthandlers_handler = get_exthandlers_handler(protocol) exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() def test_fetch_manifest_timeout_is_respected(self): location_uri_pattern = r'https?://mock-goal-state/(?P{0})/(?P\d)/manifest.xml'\ .format(r'(location)|(failoverlocation)|(additionalLocation)') location_uri_regex = re.compile(location_uri_pattern) def manifest_location_handler(url, **kwargs): url_match = location_uri_regex.match(url) if not url_match: if "extensionArtifact" in url: wrapped_url = kwargs.get("headers", {}).get("x-ms-artifact-location") if wrapped_url and location_uri_regex.match(wrapped_url): return Exception("Ignoring host plugin requests for testing purposes.") return None if manifest_location_handler.num_times_called == 0: time.sleep(.3) manifest_location_handler.num_times_called += 1 return Exception("Failing manifest fetch from uri '{0}' for testing purposes.".format(url)) return None manifest_location_handler.num_times_called = 0 with mock_wire_protocol(self.test_data, http_get_handler=manifest_location_handler) as protocol: ext_handlers = protocol.get_goal_state().extensions_goal_state.extensions download_timeout = wire._DOWNLOAD_TIMEOUT wire._DOWNLOAD_TIMEOUT = datetime.timedelta(minutes=0) try: with self.assertRaises(ExtensionDownloadError): protocol.client.fetch_manifest("extension", ext_handlers[0].manifest_uris, use_verify_header=False) finally: wire._DOWNLOAD_TIMEOUT = download_timeout # New test cases should be added here.This class uses mock_wire_protocol class TestExtension(TestExtensionBase, HttpRequestPredicates): @classmethod def setUpClass(cls): AgentTestCase.setUpClass() cls.mock_sleep = patch('time.sleep', side_effect=lambda _: mock_sleep(0.001)) cls.mock_sleep.start() @classmethod def tearDownClass(cls): cls.mock_sleep.stop() def setUp(self): AgentTestCase.setUp(self) def tearDown(self): AgentTestCase.tearDown(self) @patch('time.gmtime', MagicMock(return_value=time.gmtime(0))) @patch("azurelinuxagent.common.version.get_daemon_version", return_value=FlexibleVersion("0.0.0.0")) def test_ext_handler_reporting_status_file(self, _): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: def mock_http_put(url, *args, **_): if HttpRequestPredicates.is_host_plugin_status_request(url): # Skip reading the HostGA request data as its encoded return MockHttpResponse(status=500) protocol.aggregate_status = json.loads(args[0]) return MockHttpResponse(status=201) protocol.aggregate_status = None protocol.set_http_handlers(http_put_handler=mock_http_put) exthandlers_handler = get_exthandlers_handler(protocol) # creating supported features list that is sent to crp supported_features = [] for _, feature in get_agent_supported_features_list_for_crp().items(): supported_features.append( { "Key": feature.name, "Value": feature.version } ) expected_status = { "version": "1.1", "timestampUTC": "1970-01-01T00:00:00Z", "aggregateStatus": { "guestAgentStatus": { "version": AGENT_VERSION, "status": "Ready", "formattedMessage": { "lang": "en-US", "message": "Guest Agent is running" } }, "handlerAggregateStatus": [ { "handlerVersion": "1.0.0", "handlerName": "OSTCExtensions.ExampleHandlerLinux", "status": "Ready", "code": 0, "useExactVersion": True, "formattedMessage": { "lang": "en-US", "message": "Plugin enabled" }, "runtimeSettingsStatus": 
{ "settingsStatus": { "status": { "name": "OSTCExtensions.ExampleHandlerLinux", "configurationAppliedTime": None, "operation": None, "status": "success", "code": 0, "formattedMessage": { "lang": "en-US", "message": None } }, "version": 1.0, "timestampUTC": "1970-01-01T00:00:00Z" }, "sequenceNumber": 0 } } ], "vmArtifactsAggregateStatus": { "goalStateAggregateStatus": { "formattedMessage": { "lang": "en-US", "message": "GoalState executed successfully" }, "timestampUTC": "1970-01-01T00:00:00Z", "inSvdSeqNo": "1", "status": "Success", "code": 0 } } }, "guestOSInfo": None, "supportedFeatures": supported_features } exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() actual_status = json.loads(protocol.get_status_blob_data()) # Don't compare the guestOSInfo self.assertIsNotNone(actual_status.get("guestOSInfo"), "The status file is missing the guestOSInfo property") actual_status["guestOSInfo"] = None self.assertEqual(expected_status, actual_status) def test_it_should_process_extensions_only_if_allowed(self): def assert_extensions_called(exthandlers_handler, expected_call_count=0): extension_name = 'OSTCExtensions.ExampleHandlerLinux' extension_calls = [] original_popen = subprocess.Popen def mock_popen(*args, **kwargs): if extension_name in args[0]: extension_calls.append(args[0]) return original_popen(*args, **kwargs) with patch('subprocess.Popen', side_effect=mock_popen): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(expected_call_count, len(extension_calls), "Call counts dont match") with patch('time.sleep', side_effect=lambda _: mock_sleep(0.001)): def http_get_handler(url, *_, **kwargs): if self.is_in_vm_artifacts_profile_request(url) or self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs): return mock_in_vm_artifacts_profile_response return None mock_in_vm_artifacts_profile_response = MockHttpResponse(200, body='{ "onHold": false }'.encode('utf-8')) with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE, http_get_handler=http_get_handler) as protocol: protocol.report_vm_status = MagicMock() exthandlers_handler = get_exthandlers_handler(protocol) # Extension called once for Install and once for Enable assert_extensions_called(exthandlers_handler, expected_call_count=2) self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") # Update GoalState protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() with patch.object(conf, 'get_extensions_enabled', return_value=False): assert_extensions_called(exthandlers_handler, expected_call_count=0) # Disabled over-provisioning in configuration # In this case we should process GoalState as incarnation changed with patch.object(conf, 'get_extensions_enabled', return_value=True): with patch.object(conf, 'get_enable_overprovisioning', return_value=False): # 1 expected call count for Enable command assert_extensions_called(exthandlers_handler, expected_call_count=1) # Enabled on_hold property in artifact_blob mock_in_vm_artifacts_profile_response = MockHttpResponse(200, body='{ "onHold": true }'.encode('utf-8')) protocol.client.reset_goal_state() with patch.object(conf, 'get_extensions_enabled', return_value=True): with patch.object(conf, "get_enable_overprovisioning", return_value=True): assert_extensions_called(exthandlers_handler, expected_call_count=0) # Disabled on_hold property in artifact_blob mock_in_vm_artifacts_profile_response = MockHttpResponse(200, body='{ "onHold": false }'.encode('utf-8')) 
protocol.client.reset_goal_state() with patch.object(conf, 'get_extensions_enabled', return_value=True): with patch.object(conf, "get_enable_overprovisioning", return_value=True): # 1 expected call count for Enable command assert_extensions_called(exthandlers_handler, expected_call_count=1) def test_it_should_process_extensions_appropriately_on_artifact_hold(self): with patch('time.sleep', side_effect=lambda _: mock_sleep(0.001)): with patch("azurelinuxagent.common.conf.get_enable_overprovisioning", return_value=True): with mock_wire_protocol(wire_protocol_data.DATA_FILE_IN_VM_ARTIFACTS_PROFILE) as protocol: protocol.report_vm_status = MagicMock() exthandlers_handler = get_exthandlers_handler(protocol) # # The test data sets onHold to True; extensions should not be processed # exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() vm_agent_status = protocol.report_vm_status.call_args[0][0].vmAgent self.assertEqual(vm_agent_status.status, "Ready", "Agent should report ready") self.assertEqual(0, len(vm_agent_status.extensionHandlers), "No extensions should be reported as on_hold is True") self.assertIsNone(vm_agent_status.vm_artifacts_aggregate_status.goal_state_aggregate_status, "No GS Aggregate status should be reported") # # Now force onHold to False; extensions should be processed # def http_get_handler(url, *_, **kwargs): if self.is_in_vm_artifacts_profile_request(url) or self.is_host_plugin_in_vm_artifacts_profile_request(url, kwargs): return MockHttpResponse(200, body='{ "onHold": false }'.encode('utf-8')) return None protocol.set_http_handlers(http_get_handler=http_get_handler) protocol.client.reset_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self._assert_handler_status(protocol.report_vm_status, "Ready", 1, "1.0.0") self.assertEqual("1", protocol.report_vm_status.call_args[0][0].vmAgent.vm_artifacts_aggregate_status.goal_state_aggregate_status.in_svd_seq_no, "SVD sequence number mismatch") def test_it_should_redact_access_tokens_in_extension_output(self): original = r'''ONE https://foo.blob.core.windows.net/bar?sv=2000&ss=bfqt&srt=sco&sp=rw&se=2025&st=2022&spr=https&sig=SI%3D TWO:HTTPS://bar.blob.core.com/foo/bar/foo.txt?sv=2018&sr=b&sig=Yx%3D&st=2023%3A52Z&se=9999%3A59%3A59Z&sp=r TWO https://bar.com/foo?uid=2018&sr=b THREE''' expected = r'''ONE https://foo.blob.core.windows.net/bar? TWO:HTTPS://bar.blob.core.com/foo/bar/foo.txt? 
TWO https://bar.com/foo?uid=2018&sr=b THREE''' with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: exthandlers_handler = get_exthandlers_handler(protocol) original_popen = subprocess.Popen def mock_popen(cmd, *args, **kwargs): if cmd.endswith("sample.py -enable"): cmd = "echo '{0}'; >&2 echo '{0}'; exit 1".format(original) return original_popen(cmd, *args, **kwargs) with patch.object(subprocess, 'Popen', side_effect=mock_popen): exthandlers_handler.run() status = exthandlers_handler.report_ext_handlers_status() self.assertEqual(1, len(status.vmAgent.extensionHandlers), 'Expected exactly 1 extension status') message = status.vmAgent.extensionHandlers[0].message self.assertIn('[stdout]\n{0}'.format(expected), message, "The extension's stdout was not redacted correctly") self.assertIn('[stderr]\n{0}'.format(expected), message, "The extension's stderr was not redacted correctly") if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/ga/test_exthandlers.py000066400000000000000000001031351462617747000232160ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import json import os import subprocess import time import uuid from azurelinuxagent.common.agent_supported_feature import AgentSupportedFeature from azurelinuxagent.common.event import AGENT_EVENT_FILE_EXTENSION, WALAEventOperation from azurelinuxagent.common.exception import ExtensionError, ExtensionErrorCodes from azurelinuxagent.common.protocol.restapi import ExtensionStatus, ExtensionSettings, Extension from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.extensionprocessutil import TELEMETRY_MESSAGE_MAX_LEN, format_stdout_stderr, \ read_output from azurelinuxagent.ga.exthandlers import parse_ext_status, ExtHandlerInstance, ExtCommandEnvVariable, \ ExtensionStatusError, _DEFAULT_SEQ_NO, get_exthandlers_handler, ExtHandlerState from tests.lib.mock_wire_protocol import mock_wire_protocol, wire_protocol_data from tests.lib.tools import AgentTestCase, patch, mock_sleep, clear_singleton_instances class TestExtHandlers(AgentTestCase): def setUp(self): super(TestExtHandlers, self).setUp() # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) def test_parse_ext_status_should_raise_on_non_array(self): status = json.loads(''' {{ "status": {{ "status": "transitioning", "operation": "Enabling Handler", "code": 0, "name": "Microsoft.Azure.RecoveryServices.SiteRecovery.Linux" }}, "version": 1.0, "timestampUTC": "2020-01-14T15:04:43Z", "longText": "{0}" }}'''.format("*" * 5 * 1024)) with self.assertRaises(ExtensionStatusError) as context_manager: parse_ext_status(ExtensionStatus(seq_no=0), status) error_message = str(context_manager.exception) self.assertIn("The 
extension status must be an array", error_message) self.assertTrue(0 < len(error_message) - 64 < 4096, "The error message should not be much longer than 4K characters: [{0}]".format(error_message)) def test_parse_extension_status00(self): """ Parse a status report for a successful execution of an extension. """ s = '''[{ "status": { "status": "success", "formattedMessage": { "lang": "en-US", "message": "Command is finished." }, "operation": "Daemon", "code": "0", "name": "Microsoft.OSTCExtensions.CustomScriptForLinux" }, "version": "1.0", "timestampUTC": "2018-04-20T21:20:24Z" } ]''' ext_status = ExtensionStatus(seq_no=0) parse_ext_status(ext_status, json.loads(s)) self.assertEqual('0', ext_status.code) self.assertEqual(None, ext_status.configurationAppliedTime) self.assertEqual('Command is finished.', ext_status.message) self.assertEqual('Daemon', ext_status.operation) self.assertEqual('success', ext_status.status) self.assertEqual(0, ext_status.sequenceNumber) self.assertEqual(0, len(ext_status.substatusList)) def test_parse_extension_status01(self): """ Parse a status report for a failed execution of an extension. The extension returned a bad status/status of failed. The agent should handle this gracefully, and convert all unknown status/status values into an error. """ s = '''[{ "status": { "status": "failed", "formattedMessage": { "lang": "en-US", "message": "Enable failed: Failed with error: commandToExecute is empty or invalid ..." }, "operation": "Enable", "code": "0", "name": "Microsoft.OSTCExtensions.CustomScriptForLinux" }, "version": "1.0", "timestampUTC": "2018-04-20T20:50:22Z" }]''' ext_status = ExtensionStatus(seq_no=0) parse_ext_status(ext_status, json.loads(s)) self.assertEqual('0', ext_status.code) self.assertEqual(None, ext_status.configurationAppliedTime) self.assertEqual('Enable failed: Failed with error: commandToExecute is empty or invalid ...', ext_status.message) self.assertEqual('Enable', ext_status.operation) self.assertEqual('error', ext_status.status) self.assertEqual(0, ext_status.sequenceNumber) self.assertEqual(0, len(ext_status.substatusList)) def test_parse_ext_status_should_parse_missing_substatus_as_empty(self): status = '''[{ "status": { "status": "success", "formattedMessage": { "lang": "en-US", "message": "Command is finished." }, "operation": "Enable", "code": "0", "name": "Microsoft.OSTCExtensions.CustomScriptForLinux" }, "version": "1.0", "timestampUTC": "2018-04-20T21:20:24Z" } ]''' extension_status = ExtensionStatus(seq_no=0) parse_ext_status(extension_status, json.loads(status)) self.assertTrue(isinstance(extension_status.substatusList, list), 'substatus was not parsed correctly') self.assertEqual(0, len(extension_status.substatusList)) def test_parse_ext_status_should_parse_null_substatus_as_empty(self): status = '''[{ "status": { "status": "success", "formattedMessage": { "lang": "en-US", "message": "Command is finished." }, "operation": "Enable", "code": "0", "name": "Microsoft.OSTCExtensions.CustomScriptForLinux", "substatus": null }, "version": "1.0", "timestampUTC": "2018-04-20T21:20:24Z" } ]''' extension_status = ExtensionStatus(seq_no=0) parse_ext_status(extension_status, json.loads(status)) self.assertTrue(isinstance(extension_status.substatusList, list), 'substatus was not parsed correctly') self.assertEqual(0, len(extension_status.substatusList)) def test_parse_extension_status_with_empty_status(self): """ Parse a status report for a successful execution of an extension. 
""" # Validating empty status case s = '''[]''' ext_status = ExtensionStatus(seq_no=0) parse_ext_status(ext_status, json.loads(s)) self.assertEqual(None, ext_status.code) self.assertEqual(None, ext_status.configurationAppliedTime) self.assertEqual(None, ext_status.message) self.assertEqual(None, ext_status.operation) self.assertEqual(None, ext_status.status) self.assertEqual(0, ext_status.sequenceNumber) self.assertEqual(0, len(ext_status.substatusList)) # Validating None case ext_status = ExtensionStatus(seq_no=0) parse_ext_status(ext_status, None) self.assertEqual(None, ext_status.code) self.assertEqual(None, ext_status.configurationAppliedTime) self.assertEqual(None, ext_status.message) self.assertEqual(None, ext_status.operation) self.assertEqual(None, ext_status.status) self.assertEqual(0, ext_status.sequenceNumber) self.assertEqual(0, len(ext_status.substatusList)) @patch('azurelinuxagent.ga.exthandlers.ExtHandlerInstance._get_last_modified_seq_no_from_config_files') def assert_extension_sequence_number(self, patch_get_largest_seq=None, goal_state_sequence_number=None, disk_sequence_number=None, expected_sequence_number=None): ext = ExtensionSettings() ext.sequenceNumber = goal_state_sequence_number patch_get_largest_seq.return_value = disk_sequence_number ext_handler = Extension(name='foo') ext_handler.version = "1.2.3" instance = ExtHandlerInstance(ext_handler=ext_handler, protocol=None) seq, path = instance.get_status_file_path(ext) self.assertEqual(expected_sequence_number, seq) if seq > -1: self.assertTrue(path.endswith('/foo-1.2.3/status/{0}.status'.format(expected_sequence_number))) else: self.assertIsNone(path) def test_extension_sequence_number(self): self.assert_extension_sequence_number(goal_state_sequence_number="12", disk_sequence_number=366, expected_sequence_number=12) self.assert_extension_sequence_number(goal_state_sequence_number=" 12 ", disk_sequence_number=366, expected_sequence_number=12) self.assert_extension_sequence_number(goal_state_sequence_number=" foo", disk_sequence_number=3, expected_sequence_number=3) self.assert_extension_sequence_number(goal_state_sequence_number="-1", disk_sequence_number=3, expected_sequence_number=-1) def test_it_should_report_error_if_plugin_settings_version_mismatch(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_PLUGIN_SETTINGS_MISMATCH) as protocol: with patch("azurelinuxagent.common.protocol.extensions_goal_state_from_extensions_config.add_event") as mock_add_event: # Forcing update of GoalState to allow the ExtConfig to report an event protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() plugin_setting_mismatch_calls = [kw for _, kw in mock_add_event.call_args_list if kw['op'] == WALAEventOperation.PluginSettingsVersionMismatch] self.assertEqual(1, len(plugin_setting_mismatch_calls), "PluginSettingsMismatch event should be reported once") self.assertIn('Extension PluginSettings Version Mismatch! 
Expected PluginSettings version: 1.0.0 for Extension: OSTCExtensions.ExampleHandlerLinux' , plugin_setting_mismatch_calls[0]['message'], "Invalid error message with incomplete data detected for PluginSettingsVersionMismatch") self.assertTrue("1.0.2" in plugin_setting_mismatch_calls[0]['message'] and "1.0.1" in plugin_setting_mismatch_calls[0]['message'], "Error message should contain the incorrect versions") self.assertFalse(plugin_setting_mismatch_calls[0]['is_success'], "The event should be false") @patch("azurelinuxagent.common.conf.get_ext_log_dir") def test_command_extension_log_truncates_correctly(self, mock_log_dir): log_dir_path = os.path.join(self.tmp_dir, "log_directory") mock_log_dir.return_value = log_dir_path ext_handler = Extension(name='foo') ext_handler.version = "1.2.3" first_line = "This is the first line!" second_line = "This is the second line." old_logfile_contents = "{first_line}\n{second_line}\n".format(first_line=first_line, second_line=second_line) log_file_path = os.path.join(log_dir_path, "foo", "CommandExecution.log") fileutil.mkdir(os.path.join(log_dir_path, "foo"), mode=0o755) with open(log_file_path, "a") as log_file: log_file.write(old_logfile_contents) _ = ExtHandlerInstance(ext_handler=ext_handler, protocol=None, execution_log_max_size=(len(first_line)+len(second_line)//2)) with open(log_file_path) as truncated_log_file: self.assertEqual(truncated_log_file.read(), "{second_line}\n".format(second_line=second_line)) def test_set_logger_should_not_reset_the_mode_of_the_log_directory(self): ext_log_dir = os.path.join(self.tmp_dir, "log_directory") with patch("azurelinuxagent.common.conf.get_ext_log_dir", return_value=ext_log_dir): ext_handler = Extension(name='foo') ext_handler.version = "1.2.3" ext_handler_instance = ExtHandlerInstance(ext_handler=ext_handler, protocol=None) ext_handler_log_dir = os.path.join(ext_log_dir, ext_handler.name) # Double-check the initial mode get_mode = lambda f: os.stat(f).st_mode & 0o777 mode = get_mode(ext_handler_log_dir) if mode != 0o755: raise Exception("The initial mode of the log directory should be 0o755, got 0{0:o}".format(mode)) new_mode = 0o700 os.chmod(ext_handler_log_dir, new_mode) ext_handler_instance.set_logger() mode = get_mode(ext_handler_log_dir) self.assertEqual(new_mode, mode, "The mode of the log directory should not have changed") def test_it_should_report_the_message_in_the_hearbeat(self): def heartbeat_with_message(): return {'code': 0, 'formattedMessage': {'lang': 'en-US', 'message': 'This is a heartbeat message'}, 'status': 'ready'} with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: with patch("azurelinuxagent.common.protocol.wire.WireProtocol.report_vm_status", return_value=None): with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.collect_heartbeat", side_effect=heartbeat_with_message): with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_handler_state", return_value=ExtHandlerState.Enabled): with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.collect_ext_status", return_value=None): exthandlers_handler = get_exthandlers_handler(protocol) exthandlers_handler.run() vm_status = exthandlers_handler.report_ext_handlers_status() ext_handler = vm_status.vmAgent.extensionHandlers[0] self.assertEqual(ext_handler.message, heartbeat_with_message().get('formattedMessage').get('message'), "Extension handler messages don't match") self.assertEqual(ext_handler.status, heartbeat_with_message().get('status'), "Extension handler statuses don't match") class 
LaunchCommandTestCase(AgentTestCase): """ Test cases for launch_command """ def setUp(self): AgentTestCase.setUp(self) self.ext_handler = Extension(name='foo') self.ext_handler.version = "1.2.3" self.ext_handler_instance = ExtHandlerInstance(ext_handler=self.ext_handler, protocol=WireProtocol("1.2.3.4")) self.mock_get_base_dir = patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_base_dir", lambda *_: self.tmp_dir) self.mock_get_base_dir.start() self.log_dir = os.path.join(self.tmp_dir, "log") self.mock_get_log_dir = patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_log_dir", lambda *_: self.log_dir) self.mock_get_log_dir.start() self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.01)) self.mock_sleep.start() def tearDown(self): self.mock_get_log_dir.stop() self.mock_get_base_dir.stop() self.mock_sleep.stop() AgentTestCase.tearDown(self) @staticmethod def _output_regex(stdout, stderr): return r"\[stdout\]\s+{0}\s+\[stderr\]\s+{1}".format(stdout, stderr) @staticmethod def _find_process(command): for pid in [pid for pid in os.listdir('/proc') if pid.isdigit()]: try: with open(os.path.join('/proc', pid, 'cmdline'), 'r') as cmdline: for line in cmdline.readlines(): if command in line: return True except IOError: # proc has already terminated continue return False def test_it_should_capture_the_output_of_the_command(self): stdout = "stdout" * 5 stderr = "stderr" * 5 command = "produce_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write("{0}") sys.stderr.write("{1}") '''.format(stdout, stderr)) def list_directory(): base_dir = self.ext_handler_instance.get_base_dir() return [i for i in os.listdir(base_dir) if not i.endswith(AGENT_EVENT_FILE_EXTENSION)] # ignore telemetry files files_before = list_directory() output = self.ext_handler_instance.launch_command(command) files_after = list_directory() self.assertRegex(output, LaunchCommandTestCase._output_regex(stdout, stderr)) self.assertListEqual(files_before, files_after, "Not all temporary files were deleted. 
File list: {0}".format(files_after)) def test_it_should_raise_an_exception_when_the_command_times_out(self): extension_error_code = ExtensionErrorCodes.PluginHandlerScriptTimedout stdout = "stdout" * 7 stderr = "stderr" * 7 # the signal file is used by the test command to indicate it has produced output signal_file = os.path.join(self.tmp_dir, "signal_file.txt") # the test command produces some output then goes into an infinite loop command = "produce_output_then_hang.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys import time sys.stdout.write("{0}") sys.stdout.flush() sys.stderr.write("{1}") sys.stderr.flush() with open("{2}", "w") as file: while True: file.write(".") time.sleep(1) '''.format(stdout, stderr, signal_file)) # mock time.sleep to wait for the signal file (launch_command implements the time out using polling and sleep) def sleep(seconds): if not os.path.exists(signal_file): sleep.original_sleep(seconds) sleep.original_sleep = time.sleep timeout = 60 start_time = time.time() with patch("time.sleep", side_effect=sleep, autospec=True) as mock_sleep: # pylint: disable=redefined-outer-name with self.assertRaises(ExtensionError) as context_manager: self.ext_handler_instance.launch_command(command, timeout=timeout, extension_error_code=extension_error_code) # the command name and its output should be part of the message message = str(context_manager.exception) command_full_path = os.path.join(self.tmp_dir, command.lstrip(os.path.sep)) self.assertRegex(message, r"Timeout\(\d+\):\s+{0}\s+{1}".format(command_full_path, LaunchCommandTestCase._output_regex(stdout, stderr))) # the exception code should be as specified in the call to launch_command self.assertEqual(context_manager.exception.code, extension_error_code) # the timeout period should have elapsed self.assertGreaterEqual(mock_sleep.call_count, timeout) # The command should have been terminated. # The /proc file system may still include the process when we do this check so we try a few times after a short delay; note that we # are mocking sleep, so we need to use the original implementation. 
terminated = False i = 0 while not terminated and i < 4: if not LaunchCommandTestCase._find_process(command): terminated = True else: sleep.original_sleep(0.25) i += 1 self.assertTrue(terminated, "The command was not terminated") # as a check for the test itself, verify it completed in just a few seconds self.assertLessEqual(time.time() - start_time, 5) def test_it_should_raise_an_exception_when_the_command_fails(self): extension_error_code = 2345 stdout = "stdout" * 3 stderr = "stderr" * 3 exit_code = 99 command = "fail.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write("{0}") sys.stderr.write("{1}") exit({2}) '''.format(stdout, stderr, exit_code)) # the output is captured as part of the exception message with self.assertRaises(ExtensionError) as context_manager: self.ext_handler_instance.launch_command(command, extension_error_code=extension_error_code) message = str(context_manager.exception) self.assertRegex(message, r"Non-zero exit code: {0}.+{1}\s+{2}".format(exit_code, command, LaunchCommandTestCase._output_regex(stdout, stderr))) self.assertEqual(context_manager.exception.code, extension_error_code) def test_it_should_not_wait_for_child_process(self): stdout = "stdout" stderr = "stderr" command = "start_child_process.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import os import sys import time pid = os.fork() if pid == 0: time.sleep(60) else: sys.stdout.write("{0}") sys.stderr.write("{1}") '''.format(stdout, stderr)) start_time = time.time() output = self.ext_handler_instance.launch_command(command) self.assertLessEqual(time.time() - start_time, 5) # Also check that we capture the parent's output self.assertRegex(output, LaunchCommandTestCase._output_regex(stdout, stderr)) def test_it_should_capture_the_output_of_child_process(self): parent_stdout = "PARENT STDOUT" parent_stderr = "PARENT STDERR" child_stdout = "CHILD STDOUT" child_stderr = "CHILD STDERR" more_parent_stdout = "MORE PARENT STDOUT" more_parent_stderr = "MORE PARENT STDERR" # the child process uses the signal file to indicate it has produced output signal_file = os.path.join(self.tmp_dir, "signal_file.txt") command = "start_child_with_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import os import sys import time sys.stdout.write("{0}") sys.stderr.write("{1}") pid = os.fork() if pid == 0: sys.stdout.write("{2}") sys.stderr.write("{3}") open("{6}", "w").close() else: sys.stdout.write("{4}") sys.stderr.write("{5}") while not os.path.exists("{6}"): time.sleep(0.5) '''.format(parent_stdout, parent_stderr, child_stdout, child_stderr, more_parent_stdout, more_parent_stderr, signal_file)) output = self.ext_handler_instance.launch_command(command) self.assertIn(parent_stdout, output) self.assertIn(parent_stderr, output) self.assertIn(child_stdout, output) self.assertIn(child_stderr, output) self.assertIn(more_parent_stdout, output) self.assertIn(more_parent_stderr, output) def test_it_should_capture_the_output_of_child_process_that_fails_to_start(self): parent_stdout = "PARENT STDOUT" parent_stderr = "PARENT STDERR" child_stdout = "CHILD STDOUT" child_stderr = "CHILD STDERR" command = "start_child_that_fails.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import os import sys import time pid = os.fork() if pid == 0: sys.stdout.write("{0}") sys.stderr.write("{1}") exit(1) else: sys.stdout.write("{2}") 
sys.stderr.write("{3}") '''.format(child_stdout, child_stderr, parent_stdout, parent_stderr)) output = self.ext_handler_instance.launch_command(command) self.assertIn(parent_stdout, output) self.assertIn(parent_stderr, output) self.assertIn(child_stdout, output) self.assertIn(child_stderr, output) def test_it_should_execute_commands_with_no_output(self): # file used to verify the command completed successfully signal_file = os.path.join(self.tmp_dir, "signal_file.txt") command = "create_file.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' open("{0}", "w").close() '''.format(signal_file)) output = self.ext_handler_instance.launch_command(command) self.assertTrue(os.path.exists(signal_file)) self.assertRegex(output, LaunchCommandTestCase._output_regex('', '')) def test_it_should_not_capture_the_output_of_commands_that_do_their_own_redirection(self): # the test script redirects its output to this file command_output_file = os.path.join(self.tmp_dir, "command_output.txt") stdout = "STDOUT" stderr = "STDERR" # the test script mimics the redirection done by the Custom Script extension command = "produce_output" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' exec &> {0} echo {1} >&2 echo {2} '''.format(command_output_file, stdout, stderr)) output = self.ext_handler_instance.launch_command(command) self.assertRegex(output, LaunchCommandTestCase._output_regex('', '')) with open(command_output_file, "r") as command_output: output = command_output.read() self.assertEqual(output, "{0}\n{1}\n".format(stdout, stderr)) def test_it_should_truncate_the_command_output(self): stdout = "STDOUT" stderr = "STDERR" command = "produce_long_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write( "{0}" * {1}) sys.stderr.write( "{2}" * {3}) '''.format(stdout, int(TELEMETRY_MESSAGE_MAX_LEN / len(stdout)), stderr, int(TELEMETRY_MESSAGE_MAX_LEN / len(stderr)))) output = self.ext_handler_instance.launch_command(command) self.assertLessEqual(len(output), TELEMETRY_MESSAGE_MAX_LEN) self.assertIn(stdout, output) self.assertIn(stderr, output) def test_it_should_read_only_the_head_of_large_outputs(self): command = "produce_long_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write("O" * 5 * 1024 * 1024) sys.stderr.write("E" * 5 * 1024 * 1024) ''') # Mocking the call to file.read() is difficult, so instead we mock the call to format_stdout_stderr, which takes the # return value of the calls to file.read(). 
The intention of the test is to verify we never read (and load in memory) # more than a few KB of data from the files used to capture stdout/stderr with patch('azurelinuxagent.ga.extensionprocessutil.format_stdout_stderr', side_effect=format_stdout_stderr) as mock_format: output = self.ext_handler_instance.launch_command(command) self.assertGreaterEqual(len(output), 1024) self.assertLessEqual(len(output), TELEMETRY_MESSAGE_MAX_LEN) mock_format.assert_called_once() args, kwargs = mock_format.call_args # pylint: disable=unused-variable stdout, stderr = args self.assertGreaterEqual(len(stdout), 1024) self.assertLessEqual(len(stdout), TELEMETRY_MESSAGE_MAX_LEN) self.assertGreaterEqual(len(stderr), 1024) self.assertLessEqual(len(stderr), TELEMETRY_MESSAGE_MAX_LEN) def test_it_should_handle_errors_while_reading_the_command_output(self): command = "produce_output.py" self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import sys sys.stdout.write("STDOUT") sys.stderr.write("STDERR") ''') # Mocking the call to file.read() is difficult, so instead we mock the call to_capture_process_output, # which will call file.read() and we force stdout/stderr to be None; this will produce an exception when # trying to use these files. original_capture_process_output = read_output def capture_process_output(stdout_file, stderr_file): # pylint: disable=unused-argument return original_capture_process_output(None, None) with patch('azurelinuxagent.ga.extensionprocessutil.read_output', side_effect=capture_process_output): output = self.ext_handler_instance.launch_command(command) self.assertIn("[stderr]\nCannot read stdout/stderr:", output) def test_it_should_contain_all_helper_environment_variables(self): wire_ip = str(uuid.uuid4()) ext_handler_instance = ExtHandlerInstance(ext_handler=self.ext_handler, protocol=WireProtocol(wire_ip)) helper_env_vars = {ExtCommandEnvVariable.ExtensionSeqNumber: _DEFAULT_SEQ_NO, ExtCommandEnvVariable.ExtensionPath: self.tmp_dir, ExtCommandEnvVariable.ExtensionVersion: ext_handler_instance.ext_handler.version, ExtCommandEnvVariable.WireProtocolAddress: wire_ip} command = """ printenv | grep -E '(%s)' """ % '|'.join(helper_env_vars.keys()) test_file = 'printHelperEnvironments.sh' self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), test_file), command) with patch("subprocess.Popen", wraps=subprocess.Popen) as patch_popen: # Returning empty list for get_agent_supported_features_list_for_extensions as we have a separate test for it with patch("azurelinuxagent.ga.exthandlers.get_agent_supported_features_list_for_extensions", return_value={}): output = ext_handler_instance.launch_command(test_file) args, kwagrs = patch_popen.call_args # pylint: disable=unused-variable without_os_env = dict((k, v) for (k, v) in kwagrs['env'].items() if k not in os.environ) # This check will fail if any helper environment variables are added/removed later on self.assertEqual(helper_env_vars, without_os_env) # This check is checking if the expected values are set for the extension commands for helper_var in helper_env_vars: self.assertIn("%s=%s" % (helper_var, helper_env_vars[helper_var]), output) def test_it_should_pass_supported_features_list_as_environment_variables(self): class TestFeature(AgentSupportedFeature): def __init__(self, name, version, supported): super(TestFeature, self).__init__(name=name, version=version, supported=supported) test_name = str(uuid.uuid4()) test_version = str(uuid.uuid4()) command = "check_env.py" 
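        # Note: judging by how the helper script below parses the variable, the supported-features
        # environment variable is expected to hold a JSON list of the form
        #     [{"Key": "<feature name>", "Value": "<feature version>"}, ...]
        # and the script simply reports whether the test feature appears in that list.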
self.create_script(os.path.join(self.ext_handler_instance.get_base_dir(), command), ''' import os import json import sys features = os.getenv("{0}") if not features: print("{0} not found in environment") sys.exit(0) l = json.loads(features) found = False for feature in l: if feature['Key'] == "{1}" and feature['Value'] == "{2}": found = True break print("Found Feature %s: %s" % ("{1}", found)) '''.format(ExtCommandEnvVariable.ExtensionSupportedFeatures, test_name, test_version)) # It should include all supported features and pass it as Environment Variable to extensions test_supported_features = {test_name: TestFeature(name=test_name, version=test_version, supported=True)} with patch("azurelinuxagent.common.agent_supported_feature.__EXTENSION_ADVERTISED_FEATURES", test_supported_features): output = self.ext_handler_instance.launch_command(command) self.assertIn("[stdout]\nFound Feature {0}: True".format(test_name), output, "Feature not found") # It should not include the feature if feature not supported test_supported_features = { test_name: TestFeature(name=test_name, version=test_version, supported=False), "testFeature": TestFeature(name="testFeature", version="1.2.1", supported=True) } with patch("azurelinuxagent.common.agent_supported_feature.__EXTENSION_ADVERTISED_FEATURES", test_supported_features): output = self.ext_handler_instance.launch_command(command) self.assertIn("[stdout]\nFound Feature {0}: False".format(test_name), output, "Feature wrongfully found") # It should not include the SupportedFeatures Key in Environment variables if no features supported test_supported_features = {test_name: TestFeature(name=test_name, version=test_version, supported=False)} with patch("azurelinuxagent.common.agent_supported_feature.__EXTENSION_ADVERTISED_FEATURES", test_supported_features): output = self.ext_handler_instance.launch_command(command) self.assertIn( "[stdout]\n{0} not found in environment".format(ExtCommandEnvVariable.ExtensionSupportedFeatures), output, "Environment variable should not be found") Azure-WALinuxAgent-2b21de5/tests/ga/test_exthandlers_download_extension.py000066400000000000000000000310651462617747000272030ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. 
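#
# Tests for downloading and expanding extension packages: a fresh download and expansion,
# reuse of a previously downloaded package, recovery from an invalid (corrupt) zip file,
# and preservation of the extension handler state across downloads.
#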
import contextlib
import os
import time
import zipfile

from azurelinuxagent.common.exception import ExtensionDownloadError, ExtensionErrorCodes
from azurelinuxagent.common.protocol.restapi import Extension, ExtHandlerPackage
from azurelinuxagent.common.protocol.wire import WireProtocol
from azurelinuxagent.ga.exthandlers import ExtHandlerInstance, ExtHandlerState
from tests.lib import wire_protocol_data
from tests.lib.mock_wire_protocol import mock_wire_protocol
from tests.lib.tools import AgentTestCase, patch, Mock


class DownloadExtensionTestCase(AgentTestCase):
    """
    Test cases for ExtHandlerInstance.download()
    """
    @classmethod
    def setUpClass(cls):
        AgentTestCase.setUpClass()
        cls.mock_cgroups = patch("azurelinuxagent.ga.exthandlers.CGroupConfigurator")
        cls.mock_cgroups.start()

    @classmethod
    def tearDownClass(cls):
        cls.mock_cgroups.stop()
        AgentTestCase.tearDownClass()

    def setUp(self):
        AgentTestCase.setUp(self)

        ext_handler = Extension(name='Microsoft.CPlat.Core.RunCommandLinux')
        ext_handler.version = "1.0.0"

        protocol = WireProtocol("http://Microsoft.CPlat.Core.RunCommandLinux/foo-bar")
        protocol.client.get_host_plugin = Mock()
        protocol.client.get_artifact_request = Mock(return_value=(None, None))

        # create a dummy goal state, since downloads are done via the GoalState class
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as p:
            goal_state = p.get_goal_state()
            goal_state._wire_client = protocol.client
            protocol.client._goal_state = goal_state

        self.pkg = ExtHandlerPackage()
        self.pkg.uris = [
            'https://zrdfepirv2cy4prdstr00a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0',
            'https://zrdfepirv2cy4prdstr01a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0',
            'https://zrdfepirv2cy4prdstr02a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0',
            'https://zrdfepirv2cy4prdstr03a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0',
            'https://zrdfepirv2cy4prdstr04a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712-foobar/Microsoft.CPlat.Core__RunCommandLinux__1.0.0'
        ]

        self.ext_handler_instance = ExtHandlerInstance(ext_handler=ext_handler, protocol=protocol)
        self.ext_handler_instance.pkg = self.pkg

        self.extension_dir = os.path.join(self.tmp_dir, "Microsoft.CPlat.Core.RunCommandLinux-1.0.0")
        self.mock_get_base_dir = patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_base_dir", return_value=self.extension_dir)
        self.mock_get_base_dir.start()

        self.mock_get_log_dir = patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.get_log_dir", return_value=self.tmp_dir)
        self.mock_get_log_dir.start()

        self.agent_dir = self.tmp_dir
        self.mock_get_lib_dir = patch("azurelinuxagent.ga.exthandlers.conf.get_lib_dir", return_value=self.agent_dir)
        self.mock_get_lib_dir.start()

    def tearDown(self):
        self.mock_get_lib_dir.stop()
        self.mock_get_log_dir.stop()
        self.mock_get_base_dir.stop()
        AgentTestCase.tearDown(self)

    _extension_command = "RunCommandLinux.sh"

    @staticmethod
    def _create_zip_file(filename):
        file = None  # pylint: disable=redefined-builtin
        try:
            file = zipfile.ZipFile(filename, "w")
            info = zipfile.ZipInfo(DownloadExtensionTestCase._extension_command)
            info.date_time = time.localtime(time.time())[:6]
            info.compress_type = zipfile.ZIP_DEFLATED
            file.writestr(info, "#!/bin/sh\necho 'RunCommandLinux executed successfully'\n")
        finally:
            if file is not None:
                file.close()

    @staticmethod
    def
_create_invalid_zip_file(filename): with open(filename, "w") as file: # pylint: disable=redefined-builtin file.write("An invalid ZIP file\n") def _get_extension_base_dir(self): return self.extension_dir def _get_extension_package_file(self): return os.path.join(self.agent_dir, self.ext_handler_instance.get_extension_package_zipfile_name()) def _get_extension_command_file(self): return os.path.join(self.extension_dir, DownloadExtensionTestCase._extension_command) def _assert_download_and_expand_succeeded(self): self.assertTrue(os.path.exists(self._get_extension_base_dir()), "The extension package was not downloaded to the expected location") self.assertTrue(os.path.exists(self._get_extension_command_file()), "The extension package was not expanded to the expected location") @staticmethod @contextlib.contextmanager def create_mock_stream(stream_function): with patch("azurelinuxagent.common.protocol.wire.WireClient.stream", side_effect=stream_function) as mock_stream: mock_stream.download_failures = 0 with patch('time.sleep'): # don't sleep in-between retries yield mock_stream def test_it_should_download_and_expand_extension_package(self): def stream(_, destination, **__): DownloadExtensionTestCase._create_zip_file(destination) return True with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream: with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.report_event") as mock_report_event: self.ext_handler_instance.download() # first download attempt should succeed mock_stream.assert_called_once() mock_report_event.assert_called_once() self._assert_download_and_expand_succeeded() def test_it_should_use_existing_extension_package_when_already_downloaded(self): DownloadExtensionTestCase._create_zip_file(self._get_extension_package_file()) with DownloadExtensionTestCase.create_mock_stream(lambda: None) as mock_stream: with patch("azurelinuxagent.ga.exthandlers.ExtHandlerInstance.report_event") as mock_report_event: self.ext_handler_instance.download() mock_stream.assert_not_called() mock_report_event.assert_not_called() self.assertTrue(os.path.exists(self._get_extension_command_file()), "The extension package was not expanded to the expected location") def test_it_should_ignore_existing_extension_package_when_it_is_invalid(self): def stream(_, destination, **__): DownloadExtensionTestCase._create_zip_file(destination) return True DownloadExtensionTestCase._create_invalid_zip_file(self._get_extension_package_file()) with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream: self.ext_handler_instance.download() mock_stream.assert_called_once() self._assert_download_and_expand_succeeded() def test_it_should_maintain_extension_handler_state_when_good_zip_exists(self): DownloadExtensionTestCase._create_zip_file(self._get_extension_package_file()) self.ext_handler_instance.set_handler_state(ExtHandlerState.NotInstalled) self.ext_handler_instance.download() self._assert_download_and_expand_succeeded() self.assertTrue(os.path.exists(os.path.join(self.ext_handler_instance.get_conf_dir(), "HandlerState")), "Ensure that the HandlerState file exists on disk") self.assertEqual(self.ext_handler_instance.get_handler_state(), ExtHandlerState.NotInstalled, "Ensure that the state is maintained for extension HandlerState") def test_it_should_maintain_extension_handler_state_when_bad_zip_exists_and_recovers_with_good_zip(self): def stream(_, destination, **__): DownloadExtensionTestCase._create_zip_file(destination) return True 
DownloadExtensionTestCase._create_invalid_zip_file(self._get_extension_package_file())
        self.ext_handler_instance.set_handler_state(ExtHandlerState.NotInstalled)

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            self.ext_handler_instance.download()

        mock_stream.assert_called_once()

        self._assert_download_and_expand_succeeded()
        self.assertEqual(self.ext_handler_instance.get_handler_state(), ExtHandlerState.NotInstalled,
                         "Ensure that the state is maintained for extension HandlerState")

    def test_it_should_maintain_extension_handler_state_when_it_downloads_bad_zips(self):
        def stream(_, destination, **__):
            DownloadExtensionTestCase._create_invalid_zip_file(destination)
            return True

        self.ext_handler_instance.set_handler_state(ExtHandlerState.NotInstalled)

        with DownloadExtensionTestCase.create_mock_stream(stream):
            with self.assertRaises(ExtensionDownloadError):
                self.ext_handler_instance.download()

        self.assertFalse(os.path.exists(self._get_extension_package_file()),
                         "The bad zip extension package should not be downloaded to the expected location")
        self.assertFalse(os.path.exists(self._get_extension_command_file()),
                         "The extension package should not be expanded to the expected location due to bad zip")
        self.assertEqual(self.ext_handler_instance.get_handler_state(), ExtHandlerState.NotInstalled,
                         "Ensure that the state is maintained for extension HandlerState")

    def test_it_should_use_alternate_uris_when_download_fails(self):
        def stream(_, destination, **__):
            # fail a few times, then succeed
            if mock_stream.download_failures < 3:
                mock_stream.download_failures += 1
                return None
            DownloadExtensionTestCase._create_zip_file(destination)
            return True

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            self.ext_handler_instance.download()

        self.assertEqual(mock_stream.call_count, mock_stream.download_failures + 1)

        self._assert_download_and_expand_succeeded()

    def test_it_should_use_alternate_uris_when_download_raises_an_exception(self):
        def stream(_, destination, **__):
            # fail a few times, then succeed
            if mock_stream.download_failures < 3:
                mock_stream.download_failures += 1
                raise Exception("Download failed")
            DownloadExtensionTestCase._create_zip_file(destination)
            return True

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            self.ext_handler_instance.download()

        self.assertEqual(mock_stream.call_count, mock_stream.download_failures + 1)

        self._assert_download_and_expand_succeeded()

    def test_it_should_use_alternate_uris_when_it_downloads_an_invalid_package(self):
        def stream(_, destination, **__):
            # fail a few times, then succeed
            if mock_stream.download_failures < 3:
                mock_stream.download_failures += 1
                DownloadExtensionTestCase._create_invalid_zip_file(destination)
            else:
                DownloadExtensionTestCase._create_zip_file(destination)
            return True

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            self.ext_handler_instance.download()

        self.assertEqual(mock_stream.call_count, mock_stream.download_failures + 1)

        self._assert_download_and_expand_succeeded()

    def test_it_should_raise_an_exception_when_all_downloads_fail(self):
        def stream(_, target_file, **___):
            stream.target_file = target_file
            DownloadExtensionTestCase._create_invalid_zip_file(target_file)
            return True

        stream.target_file = None

        with DownloadExtensionTestCase.create_mock_stream(stream) as mock_stream:
            with self.assertRaises(ExtensionDownloadError) as context_manager:
                self.ext_handler_instance.download()

        self.assertEqual(mock_stream.call_count, len(self.pkg.uris))
self.assertRegex(str(context_manager.exception), "Failed to download extension") self.assertEqual(context_manager.exception.code, ExtensionErrorCodes.PluginManifestDownloadError) self.assertFalse(os.path.exists(self.extension_dir), "The extension directory was not removed") self.assertFalse(os.path.exists(stream.target_file), "The extension package was not removed") Azure-WALinuxAgent-2b21de5/tests/ga/test_exthandlers_exthandlerinstance.py000066400000000000000000000142641462617747000271650ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. import os import shutil import sys from azurelinuxagent.common.protocol.restapi import Extension, ExtHandlerPackage from azurelinuxagent.ga.exthandlers import ExtHandlerInstance from tests.lib.tools import AgentTestCase, patch class ExtHandlerInstanceTestCase(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) ext_handler = Extension(name='foo') ext_handler.version = "1.2.3" self.ext_handler_instance = ExtHandlerInstance(ext_handler=ext_handler, protocol=None) pkg_uri = "http://bar/foo__1.2.3" self.ext_handler_instance.pkg = ExtHandlerPackage(ext_handler.version) self.ext_handler_instance.pkg.uris.append(pkg_uri) self.base_dir = self.tmp_dir self.extension_directory = os.path.join(self.tmp_dir, "extension_directory") self.mock_get_base_dir = patch.object(self.ext_handler_instance, "get_base_dir", return_value=self.extension_directory) self.mock_get_base_dir.start() def tearDown(self): self.mock_get_base_dir.stop() super(ExtHandlerInstanceTestCase, self).tearDown() def test_rm_ext_handler_dir_should_remove_the_extension_packages(self): os.mkdir(self.extension_directory) open(os.path.join(self.extension_directory, "extension_file1"), 'w').close() open(os.path.join(self.extension_directory, "extension_file2"), 'w').close() open(os.path.join(self.extension_directory, "extension_file3"), 'w').close() open(os.path.join(self.base_dir, "foo__1.2.3.zip"), 'w').close() self.ext_handler_instance.remove_ext_handler() self.assertFalse(os.path.exists(self.extension_directory)) self.assertFalse(os.path.exists(os.path.join(self.base_dir, "foo__1.2.3.zip"))) def test_rm_ext_handler_dir_should_remove_the_extension_directory(self): os.mkdir(self.extension_directory) os.mknod(os.path.join(self.extension_directory, "extension_file1")) os.mknod(os.path.join(self.extension_directory, "extension_file2")) os.mknod(os.path.join(self.extension_directory, "extension_file3")) self.ext_handler_instance.remove_ext_handler() self.assertFalse(os.path.exists(self.extension_directory)) def test_rm_ext_handler_dir_should_not_report_an_event_if_the_extension_directory_does_not_exist(self): if os.path.exists(self.extension_directory): os.rmdir(self.extension_directory) with patch.object(self.ext_handler_instance, "report_event") as mock_report_event: self.ext_handler_instance.remove_ext_handler() mock_report_event.assert_not_called() def test_rm_ext_handler_dir_should_not_report_an_event_if_a_child_is_removed_asynchronously_while_deleting_the_extension_directory(self): os.mkdir(self.extension_directory) os.mknod(os.path.join(self.extension_directory, "extension_file1")) os.mknod(os.path.join(self.extension_directory, "extension_file2")) os.mknod(os.path.join(self.extension_directory, "extension_file3")) # # Some extensions uninstall asynchronously and the files we are trying to remove may be removed # while shutil.rmtree is traversing the extension's directory. 
Mock this by deleting a file
        # twice (the second call will produce "[Errno 2] No such file or directory", which should not be
        # reported as a telemetry event).
        # In order to mock this, we need to know that remove_ext_handler invokes Python's shutil.rmtree,
        # which in turn invokes os.unlink (Python 3) or os.remove (Python 2)
        #
        remove_api_name = "unlink" if sys.version_info >= (3, 0) else "remove"

        original_remove_api = getattr(shutil.os, remove_api_name)
        extension_directory = self.extension_directory

        def mock_remove(path, dir_fd=None):
            if dir_fd is not None:  # path is relative, make it absolute
                path = os.path.join(extension_directory, path)
            if path.endswith("extension_file2"):
                original_remove_api(path)
                mock_remove.file_deleted_asynchronously = True
            original_remove_api(path)
        mock_remove.file_deleted_asynchronously = False

        with patch.object(shutil.os, remove_api_name, mock_remove):
            with patch.object(self.ext_handler_instance, "report_event") as mock_report_event:
                self.ext_handler_instance.remove_ext_handler()

        mock_report_event.assert_not_called()

        # The next 2 asserts are checks on the mock itself, in case the implementation of remove_ext_handler changes (mocks may need to be updated then)
        self.assertTrue(mock_remove.file_deleted_asynchronously)  # verify the mock was actually called
        self.assertFalse(os.path.exists(self.extension_directory))  # verify the error produced by the mock did not prevent the deletion

    def test_rm_ext_handler_dir_should_report_an_event_if_an_error_occurs_while_deleting_the_extension_directory(self):
        os.mkdir(self.extension_directory)
        os.mknod(os.path.join(self.extension_directory, "extension_file1"))
        os.mknod(os.path.join(self.extension_directory, "extension_file2"))
        os.mknod(os.path.join(self.extension_directory, "extension_file3"))

        # The mock below relies on the knowledge that remove_ext_handler invokes Python's shutil.rmtree,
        # which in turn invokes os.unlink (Python 3) or os.remove (Python 2)
        remove_api_name = "unlink" if sys.version_info >= (3, 0) else "remove"

        original_remove_api = getattr(shutil.os, remove_api_name)

        def mock_remove(path, dir_fd=None):  # pylint: disable=unused-argument
            if path.endswith("extension_file2"):
                raise IOError("A mocked error")
            original_remove_api(path)

        with patch.object(shutil.os, remove_api_name, mock_remove):
            with patch.object(self.ext_handler_instance, "report_event") as mock_report_event:
                self.ext_handler_instance.remove_ext_handler()

        args, kwargs = mock_report_event.call_args  # pylint: disable=unused-variable
        self.assertTrue("A mocked error" in kwargs["message"])
Azure-WALinuxAgent-2b21de5/tests/ga/test_guestagent.py000066400000000000000000000252621462617747000230470ustar00rootroot00000000000000import contextlib
import json
import os
import tempfile

from azurelinuxagent.common import conf
from azurelinuxagent.common.exception import UpdateError
from azurelinuxagent.ga.guestagent import GuestAgent, AGENT_MANIFEST_FILE, AGENT_ERROR_FILE, GuestAgentError, \
    MAX_FAILURE, GuestAgentUpdateAttempt
from azurelinuxagent.common.version import AGENT_NAME
from tests.ga.test_update import UpdateTestCase, EMPTY_MANIFEST, WITH_ERROR, NO_ERROR


class TestGuestAgent(UpdateTestCase):
    def setUp(self):
        UpdateTestCase.setUp(self)
        self.copy_agents(self._get_agent_file_path())
        self.agent_path = os.path.join(self.tmp_dir, self._get_agent_name())

    def test_creation(self):
        with self.assertRaises(UpdateError):
            GuestAgent.from_installed_agent("A very bad file name")

        with self.assertRaises(UpdateError):
GuestAgent.from_installed_agent("{0}-a.bad.version".format(AGENT_NAME)) self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertNotEqual(None, agent) self.assertEqual(self._get_agent_name(), agent.name) self.assertEqual(self._get_agent_version(), agent.version) self.assertEqual(self.agent_path, agent.get_agent_dir()) path = os.path.join(self.agent_path, AGENT_MANIFEST_FILE) self.assertEqual(path, agent.get_agent_manifest_path()) self.assertEqual( os.path.join(self.agent_path, AGENT_ERROR_FILE), agent.get_agent_error_file()) path = ".".join((os.path.join(conf.get_lib_dir(), self._get_agent_name()), "zip")) self.assertEqual(path, agent.get_agent_pkg_path()) self.assertTrue(agent.is_downloaded) self.assertFalse(agent.is_blacklisted) self.assertTrue(agent.is_available) def test_clear_error(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) agent.mark_failure(is_fatal=True) self.assertTrue(agent.error.last_failure > 0.0) self.assertEqual(1, agent.error.failure_count) self.assertTrue(agent.is_blacklisted) self.assertEqual(agent.is_blacklisted, agent.error.is_blacklisted) agent.clear_error() self.assertEqual(0.0, agent.error.last_failure) self.assertEqual(0, agent.error.failure_count) self.assertFalse(agent.is_blacklisted) self.assertEqual(agent.is_blacklisted, agent.error.is_blacklisted) def test_is_available(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertTrue(agent.is_available) agent.mark_failure(is_fatal=True) self.assertFalse(agent.is_available) def test_is_blacklisted(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertFalse(agent.is_blacklisted) self.assertEqual(agent.is_blacklisted, agent.error.is_blacklisted) agent.mark_failure(is_fatal=True) self.assertTrue(agent.is_blacklisted) self.assertEqual(agent.is_blacklisted, agent.error.is_blacklisted) def test_is_downloaded(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertTrue(agent.is_downloaded) def test_mark_failure(self): agent = GuestAgent.from_installed_agent(self.agent_path) agent.mark_failure() self.assertEqual(1, agent.error.failure_count) agent.mark_failure(is_fatal=True) self.assertEqual(2, agent.error.failure_count) self.assertTrue(agent.is_blacklisted) def test_inc_update_attempt_count(self): agent = GuestAgent.from_installed_agent(self.agent_path) agent.inc_update_attempt_count() self.assertEqual(1, agent.update_attempt_data.count) agent.inc_update_attempt_count() self.assertEqual(2, agent.update_attempt_data.count) def test_get_update_count(self): agent = GuestAgent.from_installed_agent(self.agent_path) agent.inc_update_attempt_count() self.assertEqual(1, agent.get_update_attempt_count()) agent.inc_update_attempt_count() self.assertEqual(2, agent.get_update_attempt_count()) def test_load_manifest(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) agent._load_manifest() self.assertEqual(agent.manifest.get_enable_command(), agent.get_agent_cmd()) def test_load_manifest_missing(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) os.remove(agent.get_agent_manifest_path()) self.assertRaises(UpdateError, agent._load_manifest) def test_load_manifest_is_empty(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertTrue(os.path.isfile(agent.get_agent_manifest_path())) with open(agent.get_agent_manifest_path(), "w") as file: # 
pylint: disable=redefined-builtin json.dump(EMPTY_MANIFEST, file) self.assertRaises(UpdateError, agent._load_manifest) def test_load_manifest_is_malformed(self): self.expand_agents() agent = GuestAgent.from_installed_agent(self.agent_path) self.assertTrue(os.path.isfile(agent.get_agent_manifest_path())) with open(agent.get_agent_manifest_path(), "w") as file: # pylint: disable=redefined-builtin file.write("This is not JSON data") self.assertRaises(UpdateError, agent._load_manifest) def test_load_error(self): agent = GuestAgent.from_installed_agent(self.agent_path) agent.error = None agent._load_error() self.assertTrue(agent.error is not None) class TestGuestAgentError(UpdateTestCase): def test_creation(self): self.assertRaises(TypeError, GuestAgentError) self.assertRaises(UpdateError, GuestAgentError, None) with self.get_error_file(error_data=WITH_ERROR) as path: err = GuestAgentError(path.name) err.load() self.assertEqual(path.name, err.path) self.assertNotEqual(None, err) self.assertEqual(WITH_ERROR["last_failure"], err.last_failure) self.assertEqual(WITH_ERROR["failure_count"], err.failure_count) self.assertEqual(WITH_ERROR["was_fatal"], err.was_fatal) return def test_clear(self): with self.get_error_file(error_data=WITH_ERROR) as path: err = GuestAgentError(path.name) err.load() self.assertEqual(path.name, err.path) self.assertNotEqual(None, err) err.clear() self.assertEqual(NO_ERROR["last_failure"], err.last_failure) self.assertEqual(NO_ERROR["failure_count"], err.failure_count) self.assertEqual(NO_ERROR["was_fatal"], err.was_fatal) return def test_save(self): err1 = self.create_error() err1.mark_failure() err1.mark_failure(is_fatal=True) err2 = self.create_error(err1.to_json()) self.assertEqual(err1.last_failure, err2.last_failure) self.assertEqual(err1.failure_count, err2.failure_count) self.assertEqual(err1.was_fatal, err2.was_fatal) def test_mark_failure(self): err = self.create_error() self.assertFalse(err.is_blacklisted) for i in range(0, MAX_FAILURE): # pylint: disable=unused-variable err.mark_failure() # Agent failed >= MAX_FAILURE, it should be blacklisted self.assertTrue(err.is_blacklisted) self.assertEqual(MAX_FAILURE, err.failure_count) return def test_mark_failure_permanent(self): err = self.create_error() self.assertFalse(err.is_blacklisted) # Fatal errors immediately blacklist err.mark_failure(is_fatal=True) self.assertTrue(err.is_blacklisted) self.assertTrue(err.failure_count < MAX_FAILURE) return def test_str(self): err = self.create_error(error_data=NO_ERROR) s = "Last Failure: {0}, Total Failures: {1}, Fatal: {2}, Reason: {3}".format( NO_ERROR["last_failure"], NO_ERROR["failure_count"], NO_ERROR["was_fatal"], NO_ERROR["reason"]) self.assertEqual(s, str(err)) err = self.create_error(error_data=WITH_ERROR) s = "Last Failure: {0}, Total Failures: {1}, Fatal: {2}, Reason: {3}".format( WITH_ERROR["last_failure"], WITH_ERROR["failure_count"], WITH_ERROR["was_fatal"], WITH_ERROR["reason"]) self.assertEqual(s, str(err)) return UPDATE_ATTEMPT = { "count": 2 } NO_ATTEMPT = { "count": 0 } class TestGuestAgentUpdateAttempt(UpdateTestCase): @contextlib.contextmanager def get_attempt_count_file(self, attempt_count=None): if attempt_count is None: attempt_count = NO_ATTEMPT with tempfile.NamedTemporaryFile(mode="w") as fp: json.dump(attempt_count, fp) fp.seek(0) yield fp def test_creation(self): self.assertRaises(TypeError, GuestAgentUpdateAttempt) self.assertRaises(UpdateError, GuestAgentUpdateAttempt, None) with self.get_attempt_count_file(UPDATE_ATTEMPT) as path: update_data = 
GuestAgentUpdateAttempt(path.name) update_data.load() self.assertEqual(path.name, update_data.path) self.assertNotEqual(None, update_data) self.assertEqual(UPDATE_ATTEMPT["count"], update_data.count) def test_clear(self): with self.get_attempt_count_file(UPDATE_ATTEMPT) as path: update_data = GuestAgentUpdateAttempt(path.name) update_data.load() self.assertEqual(path.name, update_data.path) self.assertNotEqual(None, update_data) update_data.clear() self.assertEqual(NO_ATTEMPT["count"], update_data.count) def test_save(self): with self.get_attempt_count_file(UPDATE_ATTEMPT) as path: update_data = GuestAgentUpdateAttempt(path.name) update_data.load() update_data.inc_count() update_data.save() with self.get_attempt_count_file(update_data.to_json()) as path: new_data = GuestAgentUpdateAttempt(path.name) new_data.load() self.assertEqual(update_data.count, new_data.count) def test_inc_count(self): with self.get_attempt_count_file() as path: update_data = GuestAgentUpdateAttempt(path.name) update_data.load() self.assertEqual(0, update_data.count) update_data.inc_count() self.assertEqual(1, update_data.count) update_data.inc_count() self.assertEqual(2, update_data.count) Azure-WALinuxAgent-2b21de5/tests/ga/test_logcollector.py000066400000000000000000000562411462617747000233720ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import shutil import tempfile import zipfile from azurelinuxagent.ga.logcollector import LogCollector from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.utils.fileutil import rm_dirs, mkdir, rm_files from tests.lib.tools import AgentTestCase, is_python_version_26, patch, skip_if_predicate_true, data_dir SMALL_FILE_SIZE = 1 * 1024 * 1024 # 1 MB LARGE_FILE_SIZE = 5 * 1024 * 1024 # 5 MB @skip_if_predicate_true(is_python_version_26, "Disabled on Python 2.6") class TestLogCollector(AgentTestCase): @classmethod def setUpClass(cls): AgentTestCase.setUpClass() prefix = "{0}_".format(cls.__class__.__name__) cls.tmp_dir = tempfile.mkdtemp(prefix=prefix) cls.root_collect_dir = os.path.join(cls.tmp_dir, "files_to_collect") mkdir(cls.root_collect_dir) cls._mock_constants() cls._mock_cgroup() @classmethod def _mock_constants(cls): cls.mock_manifest = patch("azurelinuxagent.ga.logcollector.MANIFEST_NORMAL", cls._build_manifest()) cls.mock_manifest.start() cls.log_collector_dir = os.path.join(cls.tmp_dir, "logcollector") cls.mock_log_collector_dir = patch("azurelinuxagent.ga.logcollector._LOG_COLLECTOR_DIR", cls.log_collector_dir) cls.mock_log_collector_dir.start() cls.truncated_files_dir = os.path.join(cls.tmp_dir, "truncated") cls.mock_truncated_files_dir = patch("azurelinuxagent.ga.logcollector._TRUNCATED_FILES_DIR", cls.truncated_files_dir) cls.mock_truncated_files_dir.start() cls.output_results_file_path = os.path.join(cls.log_collector_dir, "results.txt") cls.mock_output_results_file_path = patch("azurelinuxagent.ga.logcollector.OUTPUT_RESULTS_FILE_PATH", cls.output_results_file_path) cls.mock_output_results_file_path.start() cls.compressed_archive_path = os.path.join(cls.log_collector_dir, "logs.zip") cls.mock_compressed_archive_path = patch("azurelinuxagent.ga.logcollector.COMPRESSED_ARCHIVE_PATH", cls.compressed_archive_path) cls.mock_compressed_archive_path.start() @classmethod def _mock_cgroup(cls): # CPU Cgroups compute usage based on /proc/stat and /sys/fs/cgroup/.../cpuacct.stat; use mock data for those # files original_read_file = fileutil.read_file def mock_read_file(filepath, **args): if filepath == "/proc/stat": filepath = os.path.join(data_dir, "cgroups", "proc_stat_t0") elif filepath.endswith("/cpuacct.stat"): filepath = os.path.join(data_dir, "cgroups", "cpuacct.stat_t0") return original_read_file(filepath, **args) cls._mock_read_cpu_cgroup_file = patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=mock_read_file) cls._mock_read_cpu_cgroup_file.start() @classmethod def tearDownClass(cls): cls.mock_manifest.stop() cls.mock_log_collector_dir.stop() cls.mock_truncated_files_dir.stop() cls.mock_output_results_file_path.stop() cls.mock_compressed_archive_path.stop() cls._mock_read_cpu_cgroup_file.stop() shutil.rmtree(cls.tmp_dir) AgentTestCase.tearDownClass() def setUp(self): AgentTestCase.setUp(self) self._build_test_data() def tearDown(self): rm_dirs(self.root_collect_dir) rm_files(self.compressed_archive_path) AgentTestCase.tearDown(self) @classmethod def _build_test_data(cls): """ Build a dummy file structure which will be used as a foundation for the log collector tests """ cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "waagent.log"), SMALL_FILE_SIZE) # small text file cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "waagent.log.1"), LARGE_FILE_SIZE) # large text file cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, 
"waagent.log.2.gz"), SMALL_FILE_SIZE, binary=True) # small binary file cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "waagent.log.3.gz"), LARGE_FILE_SIZE, binary=True) # large binary file mkdir(os.path.join(cls.root_collect_dir, "another_dir")) cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "less_important_file"), SMALL_FILE_SIZE) cls._create_file_of_specific_size(os.path.join(cls.root_collect_dir, "another_dir", "least_important_file"), SMALL_FILE_SIZE) @classmethod def _build_manifest(cls): """ Files listed in the manifest will be collected, others will be ignored """ files = [ os.path.join(cls.root_collect_dir, "waagent*"), os.path.join(cls.root_collect_dir, "less_important_file*"), os.path.join(cls.root_collect_dir, "another_dir", "least_important_file"), os.path.join(cls.root_collect_dir, "non_existing_file"), ] manifest = "" for file_entry in files: manifest += "copy,{0}\n".format(file_entry) return manifest @staticmethod def _create_file_of_specific_size(file_path, file_size, binary=False): binary_descriptor = "b" if binary else "" data = b'0' if binary else '0' with open(file_path, "w{0}".format(binary_descriptor)) as fh: # pylint: disable=bad-open-mode fh.seek(file_size - 1) fh.write(data) @staticmethod def _truncated_path(normal_path): return "truncated_" + normal_path.replace(os.path.sep, "_") def _assert_files_are_in_archive(self, expected_files): with zipfile.ZipFile(self.compressed_archive_path, "r") as archive: archive_files = archive.namelist() for file in expected_files: # pylint: disable=redefined-builtin if file.lstrip(os.path.sep) not in archive_files: self.fail("File {0} was supposed to be collected, but is not present in the archive!".format(file)) # Assert that results file is always present if "results.txt" not in archive_files: self.fail("File results.txt was supposed to be collected, but is not present in the archive!") self.assertTrue(True) # pylint: disable=redundant-unittest-assert def _assert_files_are_not_in_archive(self, unexpected_files): with zipfile.ZipFile(self.compressed_archive_path, "r") as archive: archive_files = archive.namelist() for file in unexpected_files: # pylint: disable=redefined-builtin if file.lstrip(os.path.sep) in archive_files: self.fail("File {0} wasn't supposed to be collected, but is present in the archive!".format(file)) self.assertTrue(True) # pylint: disable=redundant-unittest-assert def _assert_archive_created(self, archive): with open(self.output_results_file_path, "r") as out: error_message = out.readlines()[-1] self.assertTrue(archive, "Failed to collect logs, error message: {0}".format(error_message)) def _get_uncompressed_file_size(self, file): # pylint: disable=redefined-builtin with zipfile.ZipFile(self.compressed_archive_path, "r") as archive: return archive.getinfo(file.lstrip(os.path.sep)).file_size def _get_number_of_files_in_archive(self): with zipfile.ZipFile(self.compressed_archive_path, "r") as archive: # Exclude results file return len(archive.namelist())-1 def test_log_collector_parses_commands_in_manifest(self): # Ensure familiar commands are parsed and unknowns are ignored (like diskinfo and malformed entries) file_to_collect = os.path.join(self.root_collect_dir, "waagent.log") folder_to_list = self.root_collect_dir manifest = """ echo,### Test header ### unknown command ll,{0} copy,{1} diskinfo,""".format(folder_to_list, file_to_collect) with patch("azurelinuxagent.ga.logcollector.MANIFEST_NORMAL", manifest): with 
patch('azurelinuxagent.ga.logcollector.LogCollector._initialize_telemetry'): log_collector = LogCollector() archive = log_collector.collect_logs_and_get_archive() with open(self.output_results_file_path, "r") as fh: results = fh.readlines() # Assert echo was parsed self.assertTrue(any(line.endswith("### Test header ###\n") for line in results)) # Assert unknown command was reported self.assertTrue(any(line.endswith("ERROR Couldn\'t parse \"unknown command\"\n") for line in results)) # Assert ll was parsed self.assertTrue(any("ls -alF {0}".format(folder_to_list) in line for line in results)) # Assert copy was parsed self._assert_archive_created(archive) self._assert_files_are_in_archive(expected_files=[file_to_collect]) no_files = self._get_number_of_files_in_archive() self.assertEqual(1, no_files, "Expected 1 file in archive, found {0}!".format(no_files)) def test_log_collector_uses_full_manifest_when_full_mode_enabled(self): file_to_collect = os.path.join(self.root_collect_dir, "less_important_file") manifest = """ echo,### Test header ### copy,{0} """.format(file_to_collect) with patch("azurelinuxagent.ga.logcollector.MANIFEST_FULL", manifest): with patch('azurelinuxagent.ga.logcollector.LogCollector._initialize_telemetry'): log_collector = LogCollector(is_full_mode=True) archive = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) self._assert_files_are_in_archive(expected_files=[file_to_collect]) no_files = self._get_number_of_files_in_archive() self.assertEqual(1, no_files, "Expected 1 file in archive, found {0}!".format(no_files)) def test_log_collector_should_collect_all_files(self): # All files in the manifest should be collected, since none of them are over the individual file size limit, # and combined they do not cross the archive size threshold. with patch('azurelinuxagent.ga.logcollector.LogCollector._initialize_telemetry'): log_collector = LogCollector() archive = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.1"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "waagent.log.3.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] self._assert_files_are_in_archive(expected_files) no_files = self._get_number_of_files_in_archive() self.assertEqual(6, no_files, "Expected 6 files in archive, found {0}!".format(no_files)) def test_log_collector_should_truncate_large_text_files_and_ignore_large_binary_files(self): # Set the size limit so that some files are too large to collect in full. 
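        # (Oversized text files are still collected, but truncated and stored under a
        # "truncated_<path>" name; oversized binary files cannot be truncated, so they
        # are dropped altogether, as the expected/unexpected file lists below verify.)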
with patch("azurelinuxagent.ga.logcollector._FILE_SIZE_LIMIT", SMALL_FILE_SIZE): with patch('azurelinuxagent.ga.logcollector.LogCollector._initialize_telemetry'): log_collector = LogCollector() archive = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), self._truncated_path(os.path.join(self.root_collect_dir, "waagent.log.1")), # this file should be truncated os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] unexpected_files = [ os.path.join(self.root_collect_dir, "waagent.log.3.gz") # binary files cannot be truncated, ignore it ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) no_files = self._get_number_of_files_in_archive() self.assertEqual(5, no_files, "Expected 5 files in archive, found {0}!".format(no_files)) def test_log_collector_should_prioritize_important_files_if_archive_too_big(self): # Set the archive size limit so that not all files can be collected. In that case, files will be added to the # archive according to their priority. # Specify files that have priority. The list is ordered, where the first entry has the highest priority. must_collect_files = [ os.path.join(self.root_collect_dir, "waagent*"), os.path.join(self.root_collect_dir, "less_important_file*") ] with patch("azurelinuxagent.ga.logcollector._UNCOMPRESSED_ARCHIVE_SIZE_LIMIT", 10 * 1024 * 1024): with patch("azurelinuxagent.ga.logcollector._MUST_COLLECT_FILES", must_collect_files): with patch('azurelinuxagent.ga.logcollector.LogCollector._initialize_telemetry'): log_collector = LogCollector() archive = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.1"), os.path.join(self.root_collect_dir, "waagent.log.2.gz") ] unexpected_files = [ os.path.join(self.root_collect_dir, "waagent.log.3.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) no_files = self._get_number_of_files_in_archive() self.assertEqual(3, no_files, "Expected 3 files in archive, found {0}!".format(no_files)) # Second collection, if a file got deleted, delete it from the archive and add next file on the priority list # if there is enough space. 
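        # (Deleting waagent.log.3.gz removes it as a candidate and leaves enough of the
        # archive size budget for the two lower-priority files to be collected this time.)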
rm_files(os.path.join(self.root_collect_dir, "waagent.log.3.gz")) with patch("azurelinuxagent.ga.logcollector._UNCOMPRESSED_ARCHIVE_SIZE_LIMIT", 10 * 1024 * 1024): with patch("azurelinuxagent.ga.logcollector._MUST_COLLECT_FILES", must_collect_files): second_archive = log_collector.collect_logs_and_get_archive() expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.1"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] unexpected_files = [ os.path.join(self.root_collect_dir, "waagent.log.3.gz") ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) self._assert_archive_created(second_archive) no_files = self._get_number_of_files_in_archive() self.assertEqual(5, no_files, "Expected 5 files in archive, found {0}!".format(no_files)) def test_log_collector_should_update_archive_when_files_are_new_or_modified_or_deleted(self): # Ensure the archive reflects the state of files on the disk at collection time. If a file was updated, it # needs to be updated in the archive, deleted if removed from disk, and added if not previously seen. with patch('azurelinuxagent.ga.logcollector.LogCollector._initialize_telemetry'): log_collector = LogCollector() first_archive = log_collector.collect_logs_and_get_archive() self._assert_archive_created(first_archive) # Everything should be in the archive expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.1"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "waagent.log.3.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] self._assert_files_are_in_archive(expected_files) no_files = self._get_number_of_files_in_archive() self.assertEqual(6, no_files, "Expected 6 files in archive, found {0}!".format(no_files)) # Update a file and its last modified time to ensure the last modified time and last collection time are not # the same in this test file_to_update = os.path.join(self.root_collect_dir, "waagent.log") self._create_file_of_specific_size(file_to_update, LARGE_FILE_SIZE) # update existing file new_time = os.path.getmtime(file_to_update) + 5 os.utime(file_to_update, (new_time, new_time)) # Create a new file (that is covered by the manifest and will be collected) and delete a file self._create_file_of_specific_size(os.path.join(self.root_collect_dir, "less_important_file.1"), LARGE_FILE_SIZE) rm_files(os.path.join(self.root_collect_dir, "waagent.log.1")) second_archive = log_collector.collect_logs_and_get_archive() self._assert_archive_created(second_archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), os.path.join(self.root_collect_dir, "waagent.log.3.gz"), os.path.join(self.root_collect_dir, "less_important_file"), os.path.join(self.root_collect_dir, "less_important_file.1"), os.path.join(self.root_collect_dir, "another_dir", "least_important_file") ] unexpected_files = [ os.path.join(self.root_collect_dir, "waagent.log.1") ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) file = os.path.join(self.root_collect_dir, "waagent.log") # pylint: disable=redefined-builtin 
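        # waagent.log was rewritten above with LARGE_FILE_SIZE; its uncompressed size
        # in the archive should reflect the update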
new_file_size = self._get_uncompressed_file_size(file) self.assertEqual(LARGE_FILE_SIZE, new_file_size, "File {0} hasn't been updated! Size in archive is {1}, but " "should be {2}.".format(file, new_file_size, LARGE_FILE_SIZE)) no_files = self._get_number_of_files_in_archive() self.assertEqual(6, no_files, "Expected 6 files in archive, found {0}!".format(no_files)) def test_log_collector_should_clean_up_uncollected_truncated_files(self): # Make sure that truncated files that are no longer needed are cleaned up. If an existing truncated file # from a previous run is not collected in the current run, it should be deleted to free up space. # Specify files that have priority. The list is ordered, where the first entry has the highest priority. must_collect_files = [ os.path.join(self.root_collect_dir, "waagent*") ] # Set the archive size limit so that not all files can be collected. In that case, files will be added to the # archive according to their priority. # Set the size limit so that only two files can be collected, of which one needs to be truncated. with patch("azurelinuxagent.ga.logcollector._UNCOMPRESSED_ARCHIVE_SIZE_LIMIT", 2 * SMALL_FILE_SIZE): with patch("azurelinuxagent.ga.logcollector._MUST_COLLECT_FILES", must_collect_files): with patch("azurelinuxagent.ga.logcollector._FILE_SIZE_LIMIT", SMALL_FILE_SIZE): with patch('azurelinuxagent.ga.logcollector.LogCollector._initialize_telemetry'): log_collector = LogCollector() archive = log_collector.collect_logs_and_get_archive() self._assert_archive_created(archive) expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), self._truncated_path(os.path.join(self.root_collect_dir, "waagent.log.1")), # this file should be truncated ] self._assert_files_are_in_archive(expected_files) no_files = self._get_number_of_files_in_archive() self.assertEqual(2, no_files, "Expected 2 files in archive, found {0}!".format(no_files)) # Remove the original file so it is not collected anymore. In the next collection, the truncated file should be # removed both from the archive and from the filesystem. 
rm_files(os.path.join(self.root_collect_dir, "waagent.log.1")) with patch("azurelinuxagent.ga.logcollector._UNCOMPRESSED_ARCHIVE_SIZE_LIMIT", 2 * SMALL_FILE_SIZE): with patch("azurelinuxagent.ga.logcollector._MUST_COLLECT_FILES", must_collect_files): with patch("azurelinuxagent.ga.logcollector._FILE_SIZE_LIMIT", SMALL_FILE_SIZE): with patch('azurelinuxagent.ga.logcollector.LogCollector._initialize_telemetry'): log_collector = LogCollector() second_archive = log_collector.collect_logs_and_get_archive() expected_files = [ os.path.join(self.root_collect_dir, "waagent.log"), os.path.join(self.root_collect_dir, "waagent.log.2.gz"), ] unexpected_files = [ self._truncated_path(os.path.join(self.root_collect_dir, "waagent.log.1")) ] self._assert_files_are_in_archive(expected_files) self._assert_files_are_not_in_archive(unexpected_files) self._assert_archive_created(second_archive) no_files = self._get_number_of_files_in_archive() self.assertEqual(2, no_files, "Expected 2 files in archive, found {0}!".format(no_files)) truncated_files = os.listdir(self.truncated_files_dir) self.assertEqual(0, len(truncated_files), "Uncollected truncated file waagent.log.1 should have been deleted!") Azure-WALinuxAgent-2b21de5/tests/ga/test_monitor.py000066400000000000000000000346421462617747000223720ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import os import random import string from azurelinuxagent.common import event, logger from azurelinuxagent.ga.cgroup import CpuCgroup, MemoryCgroup, MetricValue, _REPORT_EVERY_HOUR from azurelinuxagent.ga.cgroupstelemetry import CGroupsTelemetry from azurelinuxagent.common.event import EVENTS_DIRECTORY from azurelinuxagent.common.protocol.healthservice import HealthService from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.ga.monitor import get_monitor_handler, PeriodicOperation, SendImdsHeartbeat, \ ResetPeriodicLogMessages, SendHostPluginHeartbeat, PollResourceUsage, \ ReportNetworkErrors, ReportNetworkConfigurationChanges, PollSystemWideResourceUsage from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.wire_protocol_data import DATA_FILE from tests.lib.tools import Mock, MagicMock, patch, AgentTestCase, clear_singleton_instances def random_generator(size=6, chars=string.ascii_uppercase + string.digits + string.ascii_lowercase): return ''.join(random.choice(chars) for x in range(size)) @contextlib.contextmanager def _mock_wire_protocol(): # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) with mock_wire_protocol(DATA_FILE) as protocol: protocol_util = MagicMock() protocol_util.get_protocol = Mock(return_value=protocol) with patch("azurelinuxagent.ga.monitor.get_protocol_util", return_value=protocol_util): yield protocol class MonitorHandlerTestCase(AgentTestCase): def test_it_should_invoke_all_periodic_operations(self): def periodic_operation_run(self): invoked_operations.append(self.__class__.__name__) with _mock_wire_protocol(): with patch("azurelinuxagent.ga.monitor.MonitorHandler.stopped", side_effect=[False, True, False, True]): with patch("time.sleep"): with patch.object(PeriodicOperation, "run", side_effect=periodic_operation_run, autospec=True): with patch("azurelinuxagent.common.conf.get_monitor_network_configuration_changes") as monitor_network_changes: for network_changes in [True, False]: monitor_network_changes.return_value = network_changes invoked_operations = [] monitor_handler = get_monitor_handler() monitor_handler.run() monitor_handler.join() expected_operations = [ PollResourceUsage.__name__, PollSystemWideResourceUsage.__name__, ReportNetworkErrors.__name__, ResetPeriodicLogMessages.__name__, SendHostPluginHeartbeat.__name__, SendImdsHeartbeat.__name__, ] if network_changes: expected_operations.append(ReportNetworkConfigurationChanges.__name__) invoked_operations.sort() expected_operations.sort() self.assertEqual(invoked_operations, expected_operations, "The monitor thread did not invoke the expected operations") class SendHostPluginHeartbeatOperationTestCase(AgentTestCase, HttpRequestPredicates): def test_it_should_report_host_ga_health(self): with _mock_wire_protocol() as protocol: def http_post_handler(url, _, **__): if self.is_health_service_request(url): http_post_handler.health_service_posted = True return MockHttpResponse(status=200) return None http_post_handler.health_service_posted = False protocol.set_http_handlers(http_post_handler=http_post_handler) health_service = HealthService(protocol.get_endpoint()) SendHostPluginHeartbeat(protocol, health_service).run() 
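                # run() is expected to POST a heartbeat to the health service endpoint,
                # which the http_post_handler above records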
self.assertTrue(http_post_handler.health_service_posted, "The monitor thread did not report host ga plugin health")

    def test_it_should_report_a_telemetry_event_when_host_plugin_is_not_healthy(self):
        with _mock_wire_protocol() as protocol:
            # the error triggers only after ERROR_STATE_DELTA_DEFAULT
            with patch('azurelinuxagent.common.errorstate.ErrorState.is_triggered', return_value=True):
                with patch('azurelinuxagent.common.event.EventLogger.add_event') as add_event_patcher:
                    def http_get_handler(url, *_, **__):
                        if self.is_host_plugin_health_request(url):
                            return MockHttpResponse(status=503)
                        return None
                    protocol.set_http_handlers(http_get_handler=http_get_handler)

                    health_service = HealthService(protocol.get_endpoint())
                    SendHostPluginHeartbeat(protocol, health_service).run()

                    heartbeat_events = [kwargs for _, kwargs in add_event_patcher.call_args_list if kwargs['op'] == 'HostPluginHeartbeatExtended']
                    self.assertTrue(len(heartbeat_events) == 1, "The monitor thread should have reported exactly 1 telemetry event for an unhealthy host ga plugin")
                    self.assertFalse(heartbeat_events[0]['is_success'], 'The reported event should indicate failure')

    def test_it_should_not_send_a_health_signal_when_the_heartbeat_fails(self):
        with _mock_wire_protocol() as protocol:
            with patch('azurelinuxagent.common.event.EventLogger.add_event') as add_event_patcher:
                health_service_post_requests = []

                def http_get_handler(url, *_, **__):
                    if self.is_host_plugin_health_request(url):
                        del health_service_post_requests[:]  # clear the requests; after this error there should be no more POSTs
                        raise IOError('A CLIENT ERROR')
                    return None

                def http_post_handler(url, _, **__):
                    if self.is_health_service_request(url):
                        health_service_post_requests.append(url)
                        return MockHttpResponse(status=200)
                    return None
                protocol.set_http_handlers(http_get_handler=http_get_handler, http_post_handler=http_post_handler)

                health_service = HealthService(protocol.get_endpoint())
                SendHostPluginHeartbeat(protocol, health_service).run()

                self.assertEqual(0, len(health_service_post_requests), "No health signals should have been posted: {0}".format(health_service_post_requests))

                heartbeat_events = [kwargs for _, kwargs in add_event_patcher.call_args_list if kwargs['op'] == 'HostPluginHeartbeat']
                self.assertTrue(len(heartbeat_events) == 1, "The monitor thread should have reported exactly 1 telemetry event for the failed heartbeat")
                self.assertFalse(heartbeat_events[0]['is_success'], 'The reported event should indicate failure')
                self.assertIn('A CLIENT ERROR', heartbeat_events[0]['message'], 'The failure does not include the expected message')


class ResetPeriodicLogMessagesOperationTestCase(AgentTestCase, HttpRequestPredicates):
    def test_it_should_clear_periodic_log_messages(self):
        logger.reset_periodic()

        # Adding 100 different messages
        expected = 100
        for i in range(expected):
            logger.periodic_info(logger.EVERY_DAY, "Test {0}".format(i))

        actual = len(logger.DEFAULT_LOGGER.periodic_messages)
        if actual != expected:
            raise Exception('Test setup error: the periodic messages were not added.
Got: {0} Expected: {1}'.format(actual, expected)) ResetPeriodicLogMessages().run() self.assertEqual(0, len(logger.DEFAULT_LOGGER.periodic_messages), "The monitor thread did not reset the periodic log messages") @patch('azurelinuxagent.common.osutil.get_osutil') @patch('azurelinuxagent.common.protocol.util.get_protocol_util') @patch("azurelinuxagent.common.protocol.healthservice.HealthService._report") @patch("azurelinuxagent.common.utils.restutil.http_get") class TestExtensionMetricsDataTelemetry(AgentTestCase): def setUp(self): AgentTestCase.setUp(self) event.init_event_logger(os.path.join(self.tmp_dir, EVENTS_DIRECTORY)) CGroupsTelemetry.reset() clear_singleton_instances(ProtocolUtil) protocol = WireProtocol('endpoint') protocol.client.update_goal_state = MagicMock() self.get_protocol = patch('azurelinuxagent.common.protocol.util.ProtocolUtil.get_protocol', return_value=protocol) self.get_protocol.start() def tearDown(self): AgentTestCase.tearDown(self) CGroupsTelemetry.reset() self.get_protocol.stop() @patch('azurelinuxagent.common.event.EventLogger.add_metric') @patch("azurelinuxagent.ga.cgroupstelemetry.CGroupsTelemetry.poll_all_tracked") def test_send_extension_metrics_telemetry(self, patch_poll_all_tracked, # pylint: disable=unused-argument patch_add_metric, *args): patch_poll_all_tracked.return_value = [MetricValue("Process", "% Processor Time", "service", 1), MetricValue("Memory", "Total Memory Usage", "service", 1), MetricValue("Memory", "Max Memory Usage", "service", 1, _REPORT_EVERY_HOUR), MetricValue("Memory", "Swap Memory Usage", "service", 1, _REPORT_EVERY_HOUR) ] PollResourceUsage().run() self.assertEqual(1, patch_poll_all_tracked.call_count) self.assertEqual(4, patch_add_metric.call_count) # Four metrics being sent. @patch('azurelinuxagent.common.event.EventLogger.add_metric') @patch("azurelinuxagent.ga.cgroupstelemetry.CGroupsTelemetry.poll_all_tracked") def test_send_extension_metrics_telemetry_for_empty_cgroup(self, patch_poll_all_tracked, # pylint: disable=unused-argument patch_add_metric, *args): patch_poll_all_tracked.return_value = [] PollResourceUsage().run() self.assertEqual(1, patch_poll_all_tracked.call_count) self.assertEqual(0, patch_add_metric.call_count) @patch('azurelinuxagent.common.event.EventLogger.add_metric') @patch("azurelinuxagent.ga.cgroup.MemoryCgroup.get_memory_usage") @patch('azurelinuxagent.common.logger.Logger.periodic_warn') def test_send_extension_metrics_telemetry_handling_memory_cgroup_exceptions_errno2(self, patch_periodic_warn, # pylint: disable=unused-argument patch_get_memory_usage, patch_add_metric, *args): ioerror = IOError() ioerror.errno = 2 patch_get_memory_usage.side_effect = ioerror CGroupsTelemetry._tracked["/test/path"] = MemoryCgroup("cgroup_name", "/test/path") PollResourceUsage().run() self.assertEqual(0, patch_periodic_warn.call_count) self.assertEqual(0, patch_add_metric.call_count) # No metrics should be sent. 
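    # Note: errno 2 (ENOENT) is expected when a tracked cgroup file disappears (e.g. the
    # tracked service stopped), so the telemetry code treats it as benign: no periodic
    # warning is logged and no metric is emitted, as the tests above and below verify.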
@patch('azurelinuxagent.common.event.EventLogger.add_metric') @patch("azurelinuxagent.ga.cgroup.CpuCgroup.get_cpu_usage") @patch('azurelinuxagent.common.logger.Logger.periodic_warn') def test_send_extension_metrics_telemetry_handling_cpu_cgroup_exceptions_errno2(self, patch_periodic_warn, # pylint: disable=unused-argument patch_cpu_usage, patch_add_metric, *args): ioerror = IOError() ioerror.errno = 2 patch_cpu_usage.side_effect = ioerror CGroupsTelemetry._tracked["/test/path"] = CpuCgroup("cgroup_name", "/test/path") PollResourceUsage().run() self.assertEqual(0, patch_periodic_warn.call_count) self.assertEqual(0, patch_add_metric.call_count) # No metrics should be sent. class TestPollSystemWideResourceUsage(AgentTestCase): @patch('azurelinuxagent.common.event.EventLogger.add_metric') @patch("azurelinuxagent.common.osutil.default.DefaultOSUtil.get_used_and_available_system_memory") def test_send_system_memory_metrics(self, path_get_system_memory, patch_add_metric, *args): # pylint: disable=unused-argument path_get_system_memory.return_value = (234.45, 123.45) PollSystemWideResourceUsage().run() self.assertEqual(1, path_get_system_memory.call_count) self.assertEqual(2, patch_add_metric.call_count) # 2 metrics being sent. @patch('azurelinuxagent.common.event.EventLogger.add_metric') @patch("azurelinuxagent.ga.monitor.PollSystemWideResourceUsage.poll_system_memory_metrics") def test_send_system_memory_metrics_empty(self, path_poll_system_memory_metrics, patch_add_metric, # pylint: disable=unused-argument *args): path_poll_system_memory_metrics.return_value = [] PollSystemWideResourceUsage().run() self.assertEqual(1, path_poll_system_memory_metrics.call_count) self.assertEqual(0, patch_add_metric.call_count) # Zero metrics being sent.Azure-WALinuxAgent-2b21de5/tests/ga/test_multi_config_extension.py000066400000000000000000002400661462617747000254550ustar00rootroot00000000000000import contextlib import json import os.path import re import subprocess import uuid from azurelinuxagent.common import conf from azurelinuxagent.common.event import WALAEventOperation from azurelinuxagent.common.exception import GoalStateAggregateStatusCodes from azurelinuxagent.common.future import ustr from azurelinuxagent.common.protocol.restapi import ExtensionRequestedState, ExtensionState from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.exthandlers import get_exthandlers_handler, ExtensionStatusValue, ExtCommandEnvVariable, \ GoalStateStatus, ExtHandlerInstance from tests.lib.extension_emulator import enable_invocations, extension_emulator, ExtensionCommandNames, Actions, \ extract_extension_info_from_command from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.wire_protocol_data import DATA_FILE, WireProtocolData from tests.lib.tools import AgentTestCase, mock_sleep, patch class TestMultiConfigExtensionsConfigParsing(AgentTestCase): _MULTI_CONFIG_TEST_DATA = os.path.join("wire", "multi-config") def setUp(self): AgentTestCase.setUp(self) self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.0001)) self.mock_sleep.start() self.test_data = DATA_FILE.copy() def tearDown(self): self.mock_sleep.stop() AgentTestCase.tearDown(self) class _TestExtHandlerObject: def __init__(self, name, version, state="enabled"): self.name = name self.version = version self.state = state self.is_invalid_setting = False self.settings = dict() class _TestExtensionObject: def __init__(self, name, seq_no, 
dependency_level="0", state="enabled"): self.name = name self.seq_no = seq_no self.dependency_level = int(dependency_level) self.state = state def _mock_and_assert_ext_handlers(self, expected_handlers): with mock_wire_protocol(self.test_data) as protocol: ext_handlers = protocol.get_goal_state().extensions_goal_state.extensions for ext_handler in ext_handlers: if ext_handler.name not in expected_handlers: continue expected_handler = expected_handlers.pop(ext_handler.name) self.assertEqual(expected_handler.state, ext_handler.state) self.assertEqual(expected_handler.version, ext_handler.version) self.assertEqual(expected_handler.is_invalid_setting, ext_handler.is_invalid_setting) self.assertEqual(len(expected_handler.settings), len(ext_handler.settings)) for extension in ext_handler.settings: self.assertIn(extension.name, expected_handler.settings) expected_extension = expected_handler.settings.pop(extension.name) self.assertEqual(expected_extension.seq_no, extension.sequenceNumber) self.assertEqual(expected_extension.state, extension.state) self.assertEqual(expected_extension.dependency_level, extension.dependencyLevel) self.assertEqual(0, len(expected_handler.settings), "All extensions not verified for handler") self.assertEqual(0, len(expected_handlers), "All handlers not verified") def _get_mock_expected_handler_data(self, rc_extensions, vmaccess_extensions, geneva_extensions): # Set expected handler data run_command_test_handler = self._TestExtHandlerObject("Microsoft.CPlat.Core.RunCommandHandlerWindows", "2.3.0") run_command_test_handler.settings.update(rc_extensions) vm_access_test_handler = self._TestExtHandlerObject("Microsoft.Compute.VMAccessAgent", "2.4.7") vm_access_test_handler.settings.update(vmaccess_extensions) geneva_test_handler = self._TestExtHandlerObject("Microsoft.Azure.Geneva.GenevaMonitoring", "2.20.0.1") geneva_test_handler.settings.update(geneva_extensions) expected_handlers = { run_command_test_handler.name: run_command_test_handler, vm_access_test_handler.name: vm_access_test_handler, geneva_test_handler.name: geneva_test_handler } return expected_handlers def test_it_should_parse_multi_config_settings_properly(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_with_multi_config.xml") rc_extensions = dict() rc_extensions["firstRunCommand"] = self._TestExtensionObject(name="firstRunCommand", seq_no=2) rc_extensions["secondRunCommand"] = self._TestExtensionObject(name="secondRunCommand", seq_no=2, dependency_level="3") rc_extensions["thirdRunCommand"] = self._TestExtensionObject(name="thirdRunCommand", seq_no=1, dependency_level="4") vmaccess_extensions = { "Microsoft.Compute.VMAccessAgent": self._TestExtensionObject(name="Microsoft.Compute.VMAccessAgent", seq_no=1, dependency_level=2)} geneva_extensions = {"Microsoft.Azure.Geneva.GenevaMonitoring": self._TestExtensionObject( name="Microsoft.Azure.Geneva.GenevaMonitoring", seq_no=1)} expected_handlers = self._get_mock_expected_handler_data(rc_extensions, vmaccess_extensions, geneva_extensions) self._mock_and_assert_ext_handlers(expected_handlers) def test_it_should_parse_multi_config_with_disable_state_properly(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_with_disabled_multi_config.xml") rc_extensions = dict() rc_extensions["firstRunCommand"] = self._TestExtensionObject(name="firstRunCommand", seq_no=3) rc_extensions["secondRunCommand"] = self._TestExtensionObject(name="secondRunCommand", seq_no=3, dependency_level="1") 
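        # Note: _TestExtensionObject (defined above) coerces dependency_level with int(), so
        # the expected values may be given either as the strings that appear in the extensions
        # config XML or as plain ints; both forms appear in these tests. For example, these two
        # hypothetical expectations are equivalent ("ext" is an illustrative name only):
        #   self._TestExtensionObject(name="ext", seq_no=1, dependency_level="2")
        #   self._TestExtensionObject(name="ext", seq_no=1, dependency_level=2)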
rc_extensions["thirdRunCommand"] = self._TestExtensionObject(name="thirdRunCommand", seq_no=1, dependency_level="4", state="disabled") vmaccess_extensions = { "Microsoft.Compute.VMAccessAgent": self._TestExtensionObject(name="Microsoft.Compute.VMAccessAgent", seq_no=2, dependency_level="2")} geneva_extensions = {"Microsoft.Azure.Geneva.GenevaMonitoring": self._TestExtensionObject( name="Microsoft.Azure.Geneva.GenevaMonitoring", seq_no=2)} expected_handlers = self._get_mock_expected_handler_data(rc_extensions, vmaccess_extensions, geneva_extensions) self._mock_and_assert_ext_handlers(expected_handlers) class _MultiConfigBaseTestClass(AgentTestCase): _MULTI_CONFIG_TEST_DATA = os.path.join("wire", "multi-config") def setUp(self): AgentTestCase.setUp(self) self.mock_sleep = patch("time.sleep", lambda *_: mock_sleep(0.01)) self.mock_sleep.start() self.test_data = DATA_FILE.copy() def tearDown(self): self.mock_sleep.stop() AgentTestCase.tearDown(self) @contextlib.contextmanager def _setup_test_env(self, mock_manifest=False): with mock_wire_protocol(self.test_data) as protocol: def mock_http_put(url, *args, **_): if HttpRequestPredicates.is_host_plugin_status_request(url): # Skip reading the HostGA request data as its encoded return MockHttpResponse(status=500) protocol.aggregate_status = json.loads(args[0]) return MockHttpResponse(status=201) with patch("azurelinuxagent.common.agent_supported_feature._MultiConfigFeature.is_supported", True): protocol.aggregate_status = None protocol.set_http_handlers(http_put_handler=mock_http_put) exthandlers_handler = get_exthandlers_handler(protocol) no_of_extensions = protocol.mock_wire_data.get_no_of_extensions_in_config() if mock_manifest: with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.supports_multiple_extensions', return_value=True): yield exthandlers_handler, protocol, no_of_extensions else: yield exthandlers_handler, protocol, no_of_extensions def _assert_and_get_handler_status(self, aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", handler_version="1.0.0", status="Ready", expected_count=1, message=None): self.assertIsNotNone(aggregate_status['aggregateStatus'], "No aggregate status found") handlers = [handler for handler in aggregate_status['aggregateStatus']['handlerAggregateStatus'] if handler_name == handler['handlerName'] and handler_version == handler['handlerVersion']] self.assertEqual(expected_count, len(handlers), "Unexpected extension count") self.assertTrue(all(handler['status'] == status for handler in handlers), "Unexpected Status reported for handler {0}".format(handler_name)) if message is not None: self.assertTrue(all(message in handler['formattedMessage']['message'] for handler in handlers), "Status Message mismatch") return handlers def _assert_extension_status(self, handler_statuses, expected_ext_status, multi_config=False): for ext_name, settings_status in expected_ext_status.items(): ext_status = next(handler for handler in handler_statuses if handler['runtimeSettingsStatus']['settingsStatus']['status']['name'] == ext_name) ext_runtime_status = ext_status['runtimeSettingsStatus'] self.assertIsNotNone(ext_runtime_status, "Extension not found") self.assertEqual(settings_status['seq_no'], ext_runtime_status['sequenceNumber'], "Sequence no mismatch") self.assertEqual(settings_status['status'], ext_runtime_status['settingsStatus']['status']['status'], "status mismatch") if 'message' in settings_status and settings_status['message'] is not None: self.assertIn(settings_status['message'], 
ext_runtime_status['settingsStatus']['status']['formattedMessage']['message'], "message mismatch") if multi_config: self.assertEqual(ext_name, ext_runtime_status['extensionName'], "ext name mismatch") else: self.assertNotIn('extensionName', ext_runtime_status, "Extension name should not be reported for SC") handler_statuses.remove(ext_status) self.assertEqual(0, len(handler_statuses), "Unexpected extensions left for handler") class TestMultiConfigExtensions(_MultiConfigBaseTestClass): def __assert_extension_not_present(self, handlers, extensions): for ext_name in extensions: self.assertFalse(all( 'runtimeSettingsStatus' in handler and 'extensionName' in handler['runtimeSettingsStatus'] and handler['runtimeSettingsStatus']['extensionName'] == ext_name for handler in handlers), "Extension status found") def __run_and_assert_generic_case(self, exthandlers_handler, protocol, no_of_extensions, with_message=True): def get_message(msg): return msg if with_message else None exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3) expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.success, "seq_no": 1, "message": get_message("Enabling firstExtension")}, "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2, "message": get_message("Enabling secondExtension")}, "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 3, "message": get_message("Enabling thirdExtension")}, } self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 9, "message": get_message("Enabling SingleConfig extension")} } self._assert_extension_status(sc_handler[:], expected_extensions) return mc_handlers, sc_handler def __setup_and_assert_disable_scenario(self, exthandlers_handler, protocol): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, 'ext_conf_mc_disabled_extensions.xml') protocol.mock_wire_data = WireProtocolData(self.test_data) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", status="Ready", expected_count=2) expected_extensions = { "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99, "message": None}, "fourthExtension": {"status": ExtensionStatusValue.success, "seq_no": 101, "message": None}, } self.__assert_extension_not_present(mc_handlers[:], ["firstExtension", "secondExtension"]) self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", status="Ready") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10, "message": None} } 
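        # For reference, these helpers walk an aggregate-status payload shaped roughly as
        # sketched below. This is inferred from the keys asserted on in this file; the values
        # are illustrative only, and 'extensionName' is only reported for multi-config handlers:
        #   {
        #     "aggregateStatus": {
        #       "handlerAggregateStatus": [{
        #         "handlerName": "OSTCExtensions.ExampleHandlerLinux",
        #         "handlerVersion": "1.0.0",
        #         "status": "Ready",
        #         "formattedMessage": {"message": "..."},
        #         "runtimeSettingsStatus": {
        #           "extensionName": "thirdExtension",
        #           "sequenceNumber": 99,
        #           "settingsStatus": {
        #             "status": {
        #               "name": "thirdExtension",
        #               "status": "success",
        #               "formattedMessage": {"message": "..."}
        #             }
        #           }
        #         }
        #       }]
        #     }
        #   }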
self._assert_extension_status(sc_handler[:], expected_extensions) return mc_handlers, sc_handler @contextlib.contextmanager def __setup_generic_test_env(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension") second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension") third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension") fourth_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension") with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): with enable_invocations(first_ext, second_ext, third_ext, fourth_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") invocation_record.compare( (first_ext, ExtensionCommandNames.INSTALL), (first_ext, ExtensionCommandNames.ENABLE), (second_ext, ExtensionCommandNames.ENABLE), (third_ext, ExtensionCommandNames.ENABLE), (fourth_ext, ExtensionCommandNames.INSTALL), (fourth_ext, ExtensionCommandNames.ENABLE) ) self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions, with_message=False) yield exthandlers_handler, protocol, [first_ext, second_ext, third_ext, fourth_ext] def test_it_should_execute_and_report_multi_config_extensions_properly(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): # Case 1: Install and enable Single and MultiConfig extensions self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions) # Case 2: Disable 2 multi-config extensions and add another for enable self.__setup_and_assert_disable_scenario(exthandlers_handler, protocol) # Case 3: Uninstall Multi-config handler (with enabled extensions) and single config extension protocol.mock_wire_data.set_incarnation(3) protocol.mock_wire_data.set_extensions_config_state(ExtensionRequestedState.Uninstall) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(0, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "No handler/extension status should be reported") def test_it_should_report_unregistered_version_error_per_extension(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions): # Set a random failing extension failing_version = "19.12.1221" protocol.mock_wire_data.set_extensions_config_version(failing_version) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") error_msg_format = '[ExtensionError] Unable to find version {0} in manifest for extension {1}' mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", 
handler_version=failing_version, status="NotReady", expected_count=3, message=error_msg_format.format(failing_version, "OSTCExtensions.ExampleHandlerLinux")) self.assertTrue(all( handler['runtimeSettingsStatus']['settingsStatus']['status']['operation'] == WALAEventOperation.Download and handler['runtimeSettingsStatus']['settingsStatus']['status']['status'] == ExtensionStatusValue.error for handler in mc_handlers), "Incorrect data reported") sc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", handler_version=failing_version, status="NotReady", message=error_msg_format.format(failing_version, "Microsoft.Powershell.ExampleExtension")) self.assertFalse(all("runtimeSettingsStatus" in handler for handler in sc_handlers), "Incorrect status") def test_it_should_not_install_handler_again_if_installed(self): with self.__setup_generic_test_env() as (_, _, _): # Everything is already asserted in the context manager pass def test_it_should_retry_handler_installation_per_extension_if_failed(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions): fail_code, fail_action = Actions.generate_unique_fail() first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", install_action=fail_action, supports_multiple_extensions=True) second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", supports_multiple_extensions=True) third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension", supports_multiple_extensions=True) sc_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", install_action=fail_action) with enable_invocations(first_ext, second_ext, third_ext, sc_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") invocation_record.compare( (first_ext, ExtensionCommandNames.INSTALL), # Should try installation again if first time failed (second_ext, ExtensionCommandNames.INSTALL), (second_ext, ExtensionCommandNames.ENABLE), (third_ext, ExtensionCommandNames.ENABLE), (sc_ext, ExtensionCommandNames.INSTALL) ) mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3, status="Ready") expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.error, "seq_no": 1, "message": fail_code}, "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2, "message": None}, "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 3, "message": None}, } self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True) sc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", status="NotReady", message=fail_code) self.assertFalse(all("runtimeSettingsStatus" in handler for handler in sc_handlers), "Incorrect status") def test_it_should_only_disable_enabled_extensions_on_update(self): with self.__setup_generic_test_env() as (exthandlers_handler, protocol, old_exts): # Update extensions self.test_data['ext_conf'] = 
os.path.join(self._MULTI_CONFIG_TEST_DATA, 'ext_conf_mc_update_extensions.xml') protocol.mock_wire_data = WireProtocolData(self.test_data) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() new_version = "1.1.0" new_first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", version=new_version, supports_multiple_extensions=True) new_second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", version=new_version, supports_multiple_extensions=True) new_third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension", version=new_version, supports_multiple_extensions=True) new_sc_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", version=new_version) with enable_invocations(new_first_ext, new_second_ext, new_third_ext, new_sc_ext, *old_exts) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() old_first, old_second, old_third, old_fourth = old_exts invocation_record.compare( # Disable all enabled commands for MC before updating the Handler (old_first, ExtensionCommandNames.DISABLE), (old_second, ExtensionCommandNames.DISABLE), (old_third, ExtensionCommandNames.DISABLE), (new_first_ext, ExtensionCommandNames.UPDATE), (old_first, ExtensionCommandNames.UNINSTALL), (new_first_ext, ExtensionCommandNames.INSTALL), # No enable for First and Second extension as their state is Disabled in GoalState, # only enabled the ThirdExtension (new_third_ext, ExtensionCommandNames.ENABLE), # Follow the normal update pattern for Single config handlers (old_fourth, ExtensionCommandNames.DISABLE), (new_sc_ext, ExtensionCommandNames.UPDATE), (old_fourth, ExtensionCommandNames.UNINSTALL), (new_sc_ext, ExtensionCommandNames.INSTALL), (new_sc_ext, ExtensionCommandNames.ENABLE) ) mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=1, handler_version=new_version) expected_extensions = { "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99, "message": None} } self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True) self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_version=new_version, handler_name="Microsoft.Powershell.ExampleExtension") def test_it_should_retry_update_sequence_per_extension_if_previous_failed(self): with self.__setup_generic_test_env() as (exthandlers_handler, protocol, old_exts): # Update extensions self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, 'ext_conf_mc_update_extensions.xml') protocol.mock_wire_data = WireProtocolData(self.test_data) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() new_version = "1.1.0" _, fail_action = Actions.generate_unique_fail() # Fail Uninstall of the secondExtension old_exts[1] = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", uninstall_action=fail_action, supports_multiple_extensions=True) # Fail update of the first extension new_first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", version=new_version, update_action=fail_action, supports_multiple_extensions=True) new_second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", version=new_version, supports_multiple_extensions=True) new_third_ext = 
extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension", version=new_version, supports_multiple_extensions=True) new_sc_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", version=new_version) with enable_invocations(new_first_ext, new_second_ext, new_third_ext, new_sc_ext, *old_exts) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() old_first, old_second, old_third, old_fourth = old_exts invocation_record.compare( # Disable all enabled commands for MC before updating the Handler (old_first, ExtensionCommandNames.DISABLE), (old_second, ExtensionCommandNames.DISABLE), (old_third, ExtensionCommandNames.DISABLE), (new_first_ext, ExtensionCommandNames.UPDATE), # Since the extensions have been disabled before, we won't disable them again for Update scenario (new_second_ext, ExtensionCommandNames.UPDATE), # This will fail too as per the mock above (old_second, ExtensionCommandNames.UNINSTALL), (new_third_ext, ExtensionCommandNames.UPDATE), (old_third, ExtensionCommandNames.UNINSTALL), (new_third_ext, ExtensionCommandNames.INSTALL), # No enable for First and Second extension as their state is Disabled in GoalState, # only enabled the ThirdExtension (new_third_ext, ExtensionCommandNames.ENABLE), # Follow the normal update pattern for Single config handlers (old_fourth, ExtensionCommandNames.DISABLE), (new_sc_ext, ExtensionCommandNames.UPDATE), (old_fourth, ExtensionCommandNames.UNINSTALL), (new_sc_ext, ExtensionCommandNames.INSTALL), (new_sc_ext, ExtensionCommandNames.ENABLE) ) # Since firstExtension and secondExtension are Disabled, we won't report their status mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=1, handler_version=new_version) expected_extensions = { "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99, "message": None} } self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_version=new_version, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10, "message": None} } self._assert_extension_status(sc_handler, expected_extensions) def test_it_should_report_disabled_extension_errors_if_update_failed(self): with self.__setup_generic_test_env() as (exthandlers_handler, protocol, old_exts): # Update extensions self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, 'ext_conf_mc_update_extensions.xml') protocol.mock_wire_data = WireProtocolData(self.test_data) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() new_version = "1.1.0" fail_code, fail_action = Actions.generate_unique_fail() # Fail Disable of the firstExtension old_exts[0] = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", disable_action=fail_action, supports_multiple_extensions=True) new_first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", version=new_version, supports_multiple_extensions=True) new_second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", version=new_version, supports_multiple_extensions=True) new_third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension", version=new_version, 
supports_multiple_extensions=True) new_fourth_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", version=new_version) with enable_invocations(new_first_ext, new_second_ext, new_third_ext, new_fourth_ext, *old_exts) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() old_first, _, _, old_fourth = old_exts invocation_record.compare( # Disable for firstExtension should fail 3 times, i.e., once per extension which tries to update the Handler (old_first, ExtensionCommandNames.DISABLE), (old_first, ExtensionCommandNames.DISABLE), (old_first, ExtensionCommandNames.DISABLE), # Since Disable fails for the firstExtension and continueOnUpdate = False, Update should not go through # Follow the normal update pattern for Single config handlers (old_fourth, ExtensionCommandNames.DISABLE), (new_fourth_ext, ExtensionCommandNames.UPDATE), (old_fourth, ExtensionCommandNames.UNINSTALL), (new_fourth_ext, ExtensionCommandNames.INSTALL), (new_fourth_ext, ExtensionCommandNames.ENABLE) ) # Since firstExtension and secondExtension are Disabled, we won't report their status mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=1, handler_version=new_version, status="NotReady", message=fail_code) expected_extensions = { "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 99, "message": fail_code} } self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_version=new_version, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10, "message": None} } self._assert_extension_status(sc_handler, expected_extensions) def test_it_should_report_extension_status_properly(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions) def test_it_should_handle_and_report_enable_errors_properly(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions): fail_code, fail_action = Actions.generate_unique_fail() first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", supports_multiple_extensions=True) second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", supports_multiple_extensions=True) third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension", enable_action=fail_action, supports_multiple_extensions=True) fourth_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension", enable_action=fail_action) with enable_invocations(first_ext, second_ext, third_ext, fourth_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") invocation_record.compare( (first_ext, ExtensionCommandNames.INSTALL), (first_ext, ExtensionCommandNames.ENABLE), 
(second_ext, ExtensionCommandNames.ENABLE), (third_ext, ExtensionCommandNames.ENABLE), (fourth_ext, ExtensionCommandNames.INSTALL), (fourth_ext, ExtensionCommandNames.ENABLE) ) mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3, status="Ready") expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.success, "seq_no": 1, "message": None}, "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2, "message": None}, "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 3, "message": fail_code}, } self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", status="NotReady", message=fail_code) expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.error, "seq_no": 9, "message": fail_code} } self._assert_extension_status(sc_handler, expected_extensions) def test_it_should_cleanup_extension_state_on_disable(self): def __assert_state_file(handler_name, handler_version, extensions, state, not_present=None): config_path = os.path.join(self.tmp_dir, "{0}-{1}".format(handler_name, handler_version), "config") config_files = os.listdir(config_path) for ext_name in extensions: self.assertIn("{0}.settings".format(ext_name), config_files, "settings not found") self.assertEqual( fileutil.read_file(os.path.join(config_path, "{0}.HandlerState".format(ext_name.split(".")[0]))), state, "Invalid state") if not_present is not None: for ext_name in not_present: self.assertNotIn("{0}.HandlerState".format(ext_name), config_files, "Wrongful state found") with self.__setup_generic_test_env() as (ext_handler, protocol, _): __assert_state_file("OSTCExtensions.ExampleHandlerLinux", "1.0.0", ["firstExtension.1", "secondExtension.2", "thirdExtension.3"], ExtensionState.Enabled) self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, 'ext_conf_mc_disabled_extensions.xml') protocol.mock_wire_data = WireProtocolData(self.test_data) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() ext_handler.run() ext_handler.report_ext_handlers_status() mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=2, status="Ready") expected_extensions = { "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99, "message": "Enabling thirdExtension"}, "fourthExtension": {"status": ExtensionStatusValue.success, "seq_no": 101, "message": "Enabling fourthExtension"}, } self._assert_extension_status(mc_handlers, expected_extensions, multi_config=True) __assert_state_file("OSTCExtensions.ExampleHandlerLinux", "1.0.0", ["thirdExtension.99", "fourthExtension.101"], ExtensionState.Enabled, not_present=["firstExtension", "secondExtension"]) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10, "message": "Enabling SingleConfig Extension"} } self._assert_extension_status(sc_handler, expected_extensions) def test_it_should_create_command_execution_log_per_extension(self): with self.__setup_generic_test_env() as (_, _, _): 
            sc_handler_path = os.path.join(conf.get_ext_log_dir(), "Microsoft.Powershell.ExampleExtension")
            mc_handler_path = os.path.join(conf.get_ext_log_dir(), "OSTCExtensions.ExampleHandlerLinux")
            self.assertIn("CommandExecution_firstExtension.log", os.listdir(mc_handler_path),
                          "Command Execution file not found")
            self.assertGreater(os.path.getsize(os.path.join(mc_handler_path, "CommandExecution_firstExtension.log")),
                               0, "Log file not being used")
            self.assertIn("CommandExecution_secondExtension.log", os.listdir(mc_handler_path),
                          "Command Execution file not found")
            self.assertGreater(os.path.getsize(os.path.join(mc_handler_path, "CommandExecution_secondExtension.log")),
                               0, "Log file not being used")
            self.assertIn("CommandExecution_thirdExtension.log", os.listdir(mc_handler_path),
                          "Command Execution file not found")
            self.assertGreater(os.path.getsize(os.path.join(mc_handler_path, "CommandExecution_thirdExtension.log")),
                               0, "Log file not being used")
            self.assertIn("CommandExecution.log", os.listdir(sc_handler_path), "Command Execution file not found")
            self.assertGreater(os.path.getsize(os.path.join(sc_handler_path, "CommandExecution.log")), 0,
                               "Log file not being used")

    def test_it_should_set_relevant_environment_variables_for_mc(self):
        original_popen = subprocess.Popen
        handler_envs = {}

        def __assert_env_variables(handler_name, handler_version="1.0.0", seq_no="1", ext_name=None,
                                   expected_vars=None, not_expected=None):
            original_env_vars = {
                ExtCommandEnvVariable.ExtensionPath: os.path.join(self.tmp_dir,
                                                                  "{0}-{1}".format(handler_name, handler_version)),
                ExtCommandEnvVariable.ExtensionVersion: handler_version,
                ExtCommandEnvVariable.ExtensionSeqNumber: ustr(seq_no),
                ExtCommandEnvVariable.WireProtocolAddress: '168.63.129.16',
                ExtCommandEnvVariable.ExtensionSupportedFeatures: json.dumps(
                    [{"Key": "ExtensionTelemetryPipeline", "Value": "1.0"}])
            }

            full_name = handler_name
            if ext_name is not None:
                original_env_vars[ExtCommandEnvVariable.ExtensionName] = ext_name
                full_name = "{0}.{1}".format(handler_name, ext_name)
            self.assertIn(full_name, handler_envs, "Handler/ext combo not called")
            for commands in handler_envs[full_name]:
                expected_environment_variables = original_env_vars.copy()
                if expected_vars is not None and commands['command'] in expected_vars:
                    for name, val in expected_vars[commands['command']].items():
                        expected_environment_variables[name] = val
                self.assertTrue(all(
                    env_var in commands['data'] and env_val == commands['data'][env_var]
                    for env_var, env_val in expected_environment_variables.items()),
                    "Incorrect data for environment variable for {0}-{1}, incorrect: {2}".format(
                        full_name, commands['command'],
                        [(env_var, env_val) for env_var, env_val in expected_environment_variables.items() if
                         env_var not in commands['data'] or env_val != commands['data'][env_var]]))
                if not_expected is not None and commands['command'] in not_expected:
                    self.assertFalse(any(env_var in commands['data'] for env_var in not_expected),
                                     "Unwanted env variable found")

        def mock_popen(cmd, *_, **kwargs):
            # Mocking the cgroupsapi Popen intercepts every Popen call, which would break the
            # extension emulator logic: the emulator should only be applied to extension
            # commands, not to other commands, even when the env flag is set. The
            # ExtensionVersion check below keeps the emulator out of non-extension operations.
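            # A minimal sketch of this interception pattern (capture_env_popen and captured are
            # hypothetical names introduced for illustration; the patch target is the one this
            # test actually uses):
            #
            #   def capture_env_popen(cmd, *args, **kwargs):
            #       env = kwargs.get('env') or {}
            #       if ExtCommandEnvVariable.ExtensionVersion in env:  # only extension commands carry these vars
            #           captured.append((cmd, env))
            #       return original_popen(cmd, *args, **kwargs)
            #
            #   with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=capture_env_popen):
            #       ...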
if 'env' in kwargs and ExtCommandEnvVariable.ExtensionVersion in kwargs['env']: handler_name, __, command = extract_extension_info_from_command(cmd) name = handler_name if ExtCommandEnvVariable.ExtensionName in kwargs['env']: name = "{0}.{1}".format(handler_name, kwargs['env'][ExtCommandEnvVariable.ExtensionName]) data = { "command": command, "data": kwargs['env'] } if name in handler_envs: handler_envs[name].append(data) else: handler_envs[name] = [data] return original_popen(cmd, *_, **kwargs) self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen): # Case 1: Check normal scenario - Install/Enable mc_handlers, sc_handler = self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions) for handler in mc_handlers: __assert_env_variables(handler['handlerName'], ext_name=handler['runtimeSettingsStatus']['extensionName'], seq_no=handler['runtimeSettingsStatus']['sequenceNumber']) for handler in sc_handler: __assert_env_variables(handler['handlerName'], seq_no=handler['runtimeSettingsStatus']['sequenceNumber']) # Case 2: Check Update Scenario # Clear old test case state handler_envs = {} self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, 'ext_conf_mc_update_extensions.xml') protocol.mock_wire_data = WireProtocolData(self.test_data) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=1, handler_version="1.1.0") expected_extensions = { "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 99, "message": "Enabling thirdExtension"}, } self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", handler_version="1.1.0") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 10, "message": "Enabling SingleConfig extension"} } self._assert_extension_status(sc_handler[:], expected_extensions) for handler in mc_handlers: __assert_env_variables(handler['handlerName'], handler_version="1.1.0", ext_name=handler['runtimeSettingsStatus']['extensionName'], seq_no=handler['runtimeSettingsStatus']['sequenceNumber'], expected_vars={ "disable": { ExtCommandEnvVariable.ExtensionPath: os.path.join(self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")), ExtCommandEnvVariable.ExtensionVersion: '1.0.0' }}) # Assert the environment variables were present even for disabled/uninstalled commands first_ext_expected_vars = { "disable": { ExtCommandEnvVariable.ExtensionPath: os.path.join(self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")), ExtCommandEnvVariable.ExtensionVersion: '1.0.0' }, "uninstall": { ExtCommandEnvVariable.ExtensionPath: os.path.join(self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")), ExtCommandEnvVariable.ExtensionVersion: '1.0.0' }, "update": { ExtCommandEnvVariable.UpdatingFromVersion: "1.0.0", ExtCommandEnvVariable.DisableReturnCodeMultipleExtensions: json.dumps([ {"extensionName": 
"firstExtension", "exitCode": "0"}, {"extensionName": "secondExtension", "exitCode": "0"}, {"extensionName": "thirdExtension", "exitCode": "0"} ]) } } __assert_env_variables(handler['handlerName'], ext_name="firstExtension", expected_vars=first_ext_expected_vars, handler_version="1.1.0", seq_no="1", not_expected={ "update": [ExtCommandEnvVariable.DisableReturnCode] }) __assert_env_variables(handler['handlerName'], ext_name="secondExtension", seq_no="2") for handler in sc_handler: sc_expected_vars = { "disable": { ExtCommandEnvVariable.ExtensionPath: os.path.join(self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")), ExtCommandEnvVariable.ExtensionVersion: '1.0.0' }, "uninstall": { ExtCommandEnvVariable.ExtensionPath: os.path.join(self.tmp_dir, "{0}-{1}".format(handler['handlerName'], "1.0.0")), ExtCommandEnvVariable.ExtensionVersion: '1.0.0' }, "update": { ExtCommandEnvVariable.UpdatingFromVersion: "1.0.0", ExtCommandEnvVariable.DisableReturnCode: "0" } } __assert_env_variables(handler['handlerName'], handler_version="1.1.0", seq_no=handler['runtimeSettingsStatus']['sequenceNumber'], expected_vars=sc_expected_vars, not_expected={ "update": [ExtCommandEnvVariable.DisableReturnCodeMultipleExtensions] }) def test_it_should_ignore_disable_errors_for_multi_config_extensions(self): fail_code, fail_action = Actions.generate_unique_fail() with self.__setup_generic_test_env() as (exthandlers_handler, protocol, exts): # Fail disable of 1st and 2nd extension exts[0] = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", disable_action=fail_action) exts[1] = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", disable_action=fail_action) fourth_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.fourthExtension") with patch.object(ExtHandlerInstance, "report_event", autospec=True) as patch_report_event: with enable_invocations(fourth_ext, *exts) as invocation_record: # Assert even though 2 extensions are failing, we clean their state up properly and enable the # remaining extensions self.__setup_and_assert_disable_scenario(exthandlers_handler, protocol) first_ext, second_ext, third_ext, sc_ext = exts invocation_record.compare( (first_ext, ExtensionCommandNames.DISABLE), (second_ext, ExtensionCommandNames.DISABLE), (third_ext, ExtensionCommandNames.ENABLE), (fourth_ext, ExtensionCommandNames.ENABLE), (sc_ext, ExtensionCommandNames.ENABLE) ) reported_events = [kwargs for _, kwargs in patch_report_event.call_args_list if re.search("Executing command: (.+) with environment variables: ", kwargs['message']) is None] self.assertTrue(all( fail_code in kwargs['message'] for kwargs in reported_events if kwargs['name'] == first_ext.name), "Error not reported") self.assertTrue(all( fail_code in kwargs['message'] for kwargs in reported_events if kwargs['name'] == second_ext.name), "Error not reported") # Make sure fail code is not reported for any other extension self.assertFalse(all( fail_code in kwargs['message'] for kwargs in reported_events if kwargs['name'] == third_ext.name), "Error not reported") def test_it_should_report_transitioning_if_status_file_not_found(self): original_popen = subprocess.Popen def mock_popen(cmd, *_, **kwargs): if 'env' in kwargs: handler_name, handler_version, __ = extract_extension_info_from_command(cmd) ext_name = None if ExtCommandEnvVariable.ExtensionName in kwargs['env']: ext_name = kwargs['env'][ExtCommandEnvVariable.ExtensionName] seq_no = kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber] 
status_file_name = "{0}.status".format(seq_no) status_file_name = "{0}.{1}".format(ext_name, status_file_name) if ext_name is not None else status_file_name status_file = os.path.join(self.tmp_dir, "{0}-{1}".format(handler_name, handler_version), "status", status_file_name) if os.path.exists(status_file): os.remove(status_file) return original_popen("echo " + cmd, *_, **kwargs) self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3) agent_status_message = "This status is being reported by the Guest Agent since no status file was " \ "reported by extension {0}: " \ "[ExtensionStatusError] Status file" expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.transitioning, "seq_no": 1, "message": agent_status_message.format("OSTCExtensions.ExampleHandlerLinux.firstExtension")}, "secondExtension": {"status": ExtensionStatusValue.transitioning, "seq_no": 2, "message": agent_status_message.format("OSTCExtensions.ExampleHandlerLinux.secondExtension")}, "thirdExtension": {"status": ExtensionStatusValue.transitioning, "seq_no": 3, "message": agent_status_message.format("OSTCExtensions.ExampleHandlerLinux.thirdExtension")}, } self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.transitioning, "seq_no": 9, "message": agent_status_message.format("Microsoft.Powershell.ExampleExtension")} } self._assert_extension_status(sc_handler[:], expected_extensions) def test_it_should_report_status_correctly_for_unsupported_goal_state(self): with self.__setup_generic_test_env() as (exthandlers_handler, protocol, _): # Update GS with an ExtensionConfig with 3 Required features to force GA to mark it as unsupported self.test_data['ext_conf'] = "wire/ext_conf_required_features.xml" protocol.mock_wire_data = WireProtocolData(self.test_data) protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() # Assert the extension status is the same as we reported for Incarnation 1. 
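            # For an unsupported goal state the agent keeps reporting the last known extension
            # status and fails only the goal state itself. Judging by the assertions below, the
            # goalStateAggregateStatus blob looks roughly like this sketch (values illustrative):
            #   {"status": "Failed", "code": "GoalStateUnsupportedRequiredFeatures",
            #    "inSvdSeqNo": "2", "formattedMessage": {"message": "Failing GS ..."}}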
self.__run_and_assert_generic_case(exthandlers_handler, protocol, no_of_extensions=4, with_message=False) # Assert the GS was reported as unsupported gs_aggregate_status = protocol.aggregate_status['aggregateStatus']['vmArtifactsAggregateStatus'][ 'goalStateAggregateStatus'] self.assertEqual(gs_aggregate_status['status'], GoalStateStatus.Failed, "Incorrect status") self.assertEqual(gs_aggregate_status['code'], GoalStateAggregateStatusCodes.GoalStateUnsupportedRequiredFeatures, "Incorrect code") self.assertEqual(gs_aggregate_status['inSvdSeqNo'], '2', "Incorrect incarnation reported") self.assertEqual(gs_aggregate_status['formattedMessage']['message'], 'Failing GS incarnation_2 as Unsupported features found: TestRequiredFeature1, TestRequiredFeature2, TestRequiredFeature3', "Incorrect error message reported") def test_it_should_fail_handler_if_handler_does_not_support_mc(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_multi_config_no_dependencies.xml") first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension") second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension") third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension") fourth_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension") with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions): with enable_invocations(first_ext, second_ext, third_ext, fourth_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") invocation_record.compare( # Since we raise a ConfigError, we shouldn't process any of the MC extensions at all (fourth_ext, ExtensionCommandNames.INSTALL), (fourth_ext, ExtensionCommandNames.ENABLE) ) err_msg = 'Handler OSTCExtensions.ExampleHandlerLinux does not support MultiConfig but CRP expects it, failing due to inconsistent data' mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3, status="NotReady", message=err_msg) expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.error, "seq_no": 1, "message": err_msg}, "secondExtension": {"status": ExtensionStatusValue.error, "seq_no": 2, "message": err_msg}, "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 3, "message": err_msg}, } self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 9} } self._assert_extension_status(sc_handler[:], expected_extensions) def test_it_should_check_every_time_if_handler_supports_mc(self): with self.__setup_generic_test_env() as (exthandlers_handler, protocol, old_exts): protocol.mock_wire_data.set_incarnation(2) protocol.client.update_goal_state() # Mock manifest to not support multiple extensions with patch('azurelinuxagent.ga.exthandlers.HandlerManifest.supports_multiple_extensions', return_value=False): with enable_invocations(*old_exts) as invocation_record: (_, _, _, fourth_ext) = old_exts exthandlers_handler.run() 
exthandlers_handler.report_ext_handlers_status() self.assertEqual(4, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") invocation_record.compare( # Since we raise a ConfigError, we shouldn't process any of the MC extensions at all (fourth_ext, ExtensionCommandNames.ENABLE) ) err_msg = 'Handler OSTCExtensions.ExampleHandlerLinux does not support MultiConfig but CRP expects it, failing due to inconsistent data' mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3, status="NotReady", message=err_msg) # Since the extensions were not even executed, their status file should reflect the last status # (Handler status above should always report the error though) expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.success, "seq_no": 1}, "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2}, "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 3}, } self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 9} } self._assert_extension_status(sc_handler[:], expected_extensions) class TestMultiConfigExtensionSequencing(_MultiConfigBaseTestClass): @contextlib.contextmanager def __setup_test_and_get_exts(self): self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_with_multi_config_dependencies.xml") first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", supports_multiple_extensions=True) second_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.secondExtension", supports_multiple_extensions=True) third_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.thirdExtension", supports_multiple_extensions=True) dependent_sc_ext = extension_emulator(name="Microsoft.Powershell.ExampleExtension") independent_sc_ext = extension_emulator(name="Microsoft.Azure.Geneva.GenevaMonitoring", version="1.1.0") with self._setup_test_env() as (exthandlers_handler, protocol, no_of_extensions): yield exthandlers_handler, protocol, no_of_extensions, first_ext, second_ext, third_ext, dependent_sc_ext, independent_sc_ext def test_it_should_process_dependency_chain_extensions_properly(self): with self.__setup_test_and_get_exts() as ( exthandlers_handler, protocol, no_of_extensions, first_ext, second_ext, third_ext, dependent_sc_ext, independent_sc_ext): with enable_invocations(first_ext, second_ext, third_ext, dependent_sc_ext, independent_sc_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") invocation_record.compare( (first_ext, ExtensionCommandNames.INSTALL), (first_ext, ExtensionCommandNames.ENABLE), (independent_sc_ext, ExtensionCommandNames.INSTALL), (independent_sc_ext, ExtensionCommandNames.ENABLE), (dependent_sc_ext, ExtensionCommandNames.INSTALL), (dependent_sc_ext, ExtensionCommandNames.ENABLE), (second_ext, ExtensionCommandNames.ENABLE), (third_ext, ExtensionCommandNames.ENABLE) ) mc_handlers = 
self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3) expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.success, "seq_no": 2}, "secondExtension": {"status": ExtensionStatusValue.success, "seq_no": 2}, "thirdExtension": {"status": ExtensionStatusValue.success, "seq_no": 1}, } self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_dependent_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension") expected_extensions = { "Microsoft.Powershell.ExampleExtension": {"status": ExtensionStatusValue.success, "seq_no": 2} } self._assert_extension_status(sc_dependent_handler[:], expected_extensions) sc_independent_handler = self._assert_and_get_handler_status( aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Azure.Geneva.GenevaMonitoring", handler_version="1.1.0") expected_extensions = { "Microsoft.Azure.Geneva.GenevaMonitoring": {"status": ExtensionStatusValue.success, "seq_no": 1} } self._assert_extension_status(sc_independent_handler[:], expected_extensions) def __assert_invalid_status_scenario(self, protocol, fail_code, mc_status="NotReady", mc_message="Plugin installed but not enabled", err_msg=None): mc_handlers = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="OSTCExtensions.ExampleHandlerLinux", expected_count=3, status=mc_status, message=mc_message) expected_extensions = { "firstExtension": {"status": ExtensionStatusValue.error, "seq_no": 2, "message": fail_code}, "secondExtension": {"status": ExtensionStatusValue.error, "seq_no": 2, "message": err_msg}, "thirdExtension": {"status": ExtensionStatusValue.error, "seq_no": 1, "message": err_msg}, } self._assert_extension_status(mc_handlers[:], expected_extensions, multi_config=True) sc_dependent_handler = self._assert_and_get_handler_status(aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Powershell.ExampleExtension", status="NotReady", message=err_msg) self.assertTrue(all('runtimeSettingsStatus' not in handler for handler in sc_dependent_handler)) sc_independent_handler = self._assert_and_get_handler_status( aggregate_status=protocol.aggregate_status, handler_name="Microsoft.Azure.Geneva.GenevaMonitoring", handler_version="1.1.0", status="NotReady", message=err_msg) self.assertTrue(all('runtimeSettingsStatus' not in handler for handler in sc_independent_handler)) def test_it_should_report_extension_status_failures_for_all_dependent_extensions(self): with self.__setup_test_and_get_exts() as ( exthandlers_handler, protocol, no_of_extensions, first_ext, second_ext, third_ext, dependent_sc_ext, independent_sc_ext): # Fail the enable for firstExtension. 
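            # Actions.generate_unique_fail() (used below) pairs a unique error code with an
            # action that fails carrying that code, so the assertions can match the exact
            # failure in the reported status. Conceptually (a rough sketch, not the real
            # implementation):
            #   fail_code = str(uuid.uuid4())
            #   def fail_action(*args, **kwargs):
            #       raise Exception(fail_code)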
fail_code, fail_action = Actions.generate_unique_fail() first_ext = extension_emulator(name="OSTCExtensions.ExampleHandlerLinux.firstExtension", enable_action=fail_action, supports_multiple_extensions=True) with enable_invocations(first_ext, second_ext, third_ext, dependent_sc_ext, independent_sc_ext) as invocation_record: exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") # Since firstExtension is high up on the dependency chain, no other extensions should be executed invocation_record.compare( (first_ext, ExtensionCommandNames.INSTALL), (first_ext, ExtensionCommandNames.ENABLE) ) err_msg = 'Skipping processing of extensions since execution of dependent extension OSTCExtensions.ExampleHandlerLinux.firstExtension failed' self.__assert_invalid_status_scenario(protocol, fail_code, err_msg=err_msg) def test_it_should_stop_execution_if_status_file_contains_errors(self): # This test tests the scenario where the extensions exit with a success exit code but fail subsequently with an # error in the status file self.test_data['ext_conf'] = os.path.join(self._MULTI_CONFIG_TEST_DATA, "ext_conf_with_multi_config_dependencies.xml") original_popen = subprocess.Popen invocation_records = [] fail_code = str(uuid.uuid4()) def mock_popen(cmd, *_, **kwargs): try: handler_name, handler_version, command_name = extract_extension_info_from_command(cmd) except ValueError: return original_popen(cmd, *_, **kwargs) if 'env' in kwargs: env = kwargs['env'] if ExtCommandEnvVariable.ExtensionName in env: full_name = "{0}.{1}".format(handler_name, env[ExtCommandEnvVariable.ExtensionName]) status_file = "{0}.{1}.status".format(env[ExtCommandEnvVariable.ExtensionName], env[ExtCommandEnvVariable.ExtensionSeqNumber]) status_contents = [{"status": {"status": ExtensionStatusValue.error, "code": fail_code, "formattedMessage": {"message": fail_code, "lang": "en-US"}}}] fileutil.write_file(os.path.join(env[ExtCommandEnvVariable.ExtensionPath], "status", status_file), json.dumps(status_contents)) invocation_records.append((full_name, handler_version, command_name)) # The return code is 0 but the status file should have the error, this it to test the scenario # where the extensions return a success code but fail later. return original_popen(['echo', "works"], *_, **kwargs) invocation_records.append((handler_name, handler_version, command_name)) return original_popen(cmd, *_, **kwargs) with self._setup_test_env(mock_manifest=True) as (exthandlers_handler, protocol, no_of_extensions): with patch('azurelinuxagent.ga.cgroupapi.subprocess.Popen', side_effect=mock_popen): exthandlers_handler.run() exthandlers_handler.report_ext_handlers_status() self.assertEqual(no_of_extensions, len(protocol.aggregate_status['aggregateStatus']['handlerAggregateStatus']), "incorrect extensions reported") # Since we're writing error status for firstExtension, only the firstExtension should be invoked and # everything else should be skipped expected_invocations = [ ('OSTCExtensions.ExampleHandlerLinux.firstExtension', '1.0.0', ExtensionCommandNames.INSTALL), ('OSTCExtensions.ExampleHandlerLinux.firstExtension', '1.0.0', ExtensionCommandNames.ENABLE)] self.assertEqual(invocation_records, expected_invocations, "Invalid invocations found") err_msg = 'Dependent Extension OSTCExtensions.ExampleHandlerLinux.firstExtension did not succeed. 
Status was error' self.__assert_invalid_status_scenario(protocol, fail_code, mc_status="Ready", mc_message="Plugin enabled", err_msg=err_msg) Azure-WALinuxAgent-2b21de5/tests/ga/test_periodic_operation.py000066400000000000000000000156001462617747000245520ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import datetime import time from azurelinuxagent.ga.monitor import PeriodicOperation from tests.lib.tools import AgentTestCase, patch, PropertyMock class TestPeriodicOperation(AgentTestCase): class SaveRunTimestamp(PeriodicOperation): def __init__(self, period): super(TestPeriodicOperation.SaveRunTimestamp, self).__init__(period) self.run_time = None def _operation(self): self.run_time = datetime.datetime.utcnow() def test_it_should_take_a_timedelta_as_period(self): op = TestPeriodicOperation.SaveRunTimestamp(datetime.timedelta(hours=1)) op.run() expected = op.run_time + datetime.timedelta(hours=1) difference = op.next_run_time() - expected self.assertTrue(difference < datetime.timedelta(seconds=1), "The next run time exceeds the expected value by more than 1 second: {0} vs {1}".format(op.next_run_time(), expected)) def test_it_should_take_a_number_of_seconds_as_period(self): op = TestPeriodicOperation.SaveRunTimestamp(3600) op.run() expected = op.run_time + datetime.timedelta(hours=1) difference = op.next_run_time() - expected self.assertTrue(difference < datetime.timedelta(seconds=1), "The next run time exceeds the expected value by more than 1 second: {0} vs {1}".format(op.next_run_time(), expected)) class CountInvocations(PeriodicOperation): def __init__(self, period): super(TestPeriodicOperation.CountInvocations, self).__init__(period) self.invoke_count = 0 def _operation(self): self.invoke_count += 1 def test_it_should_be_invoked_when_run_is_called_first_time(self): op = TestPeriodicOperation.CountInvocations(datetime.timedelta(hours=1)) op.run() self.assertTrue(op.invoke_count > 0, "The operation was not invoked") def test_it_should_not_be_invoked_if_the_period_has_not_elapsed(self): pop = TestPeriodicOperation.CountInvocations(datetime.timedelta(hours=1)) for _ in range(5): pop.run() # the first run() invoked the operation, so the count is 1 self.assertEqual(pop.invoke_count, 1, "The operation was invoked before the period elapsed") def test_it_should_be_invoked_if_the_period_has_elapsed(self): pop = TestPeriodicOperation.CountInvocations(datetime.timedelta(milliseconds=1)) for _ in range(5): pop.run() time.sleep(0.001) self.assertEqual(pop.invoke_count, 5, "The operation was not invoked after the period elapsed") class RaiseException(PeriodicOperation): def _operation(self): raise Exception("A test exception") @staticmethod def _get_number_of_warnings(warn_patcher, message="A test exception"): return len([args for args, _ in warn_patcher.call_args_list if any(message in a for a in args)]) def test_it_should_log_a_warning_if_the_operation_fails(self): with 
patch("azurelinuxagent.common.logger.warn") as warn_patcher: TestPeriodicOperation.RaiseException(datetime.timedelta(hours=1)).run() self.assertEqual(self._get_number_of_warnings(warn_patcher), 1, "The error in the operation was should have been reported exactly once") def test_it_should_not_log_multiple_warnings_when_the_period_has_not_elapsed(self): with patch("azurelinuxagent.common.logger.warn") as warn_patcher: pop = TestPeriodicOperation.RaiseException(datetime.timedelta(hours=1)) for _ in range(5): pop.run() self.assertEqual(self._get_number_of_warnings(warn_patcher), 1, "The error in the operation was should have been reported exactly once") def test_it_should_not_log_multiple_warnings_when_the_period_has_elapsed(self): with patch("azurelinuxagent.common.logger.warn") as warn_patcher: with patch("azurelinuxagent.ga.periodic_operation.PeriodicOperation._LOG_WARNING_PERIOD", new_callable=PropertyMock, return_value=datetime.timedelta(milliseconds=1)): pop = TestPeriodicOperation.RaiseException(datetime.timedelta(milliseconds=1)) for _ in range(5): pop.run() time.sleep(0.001) self.assertEqual(self._get_number_of_warnings(warn_patcher), 5, "The error in the operation was not reported the expected number of times") class RaiseTwoExceptions(PeriodicOperation): def __init__(self, period): super(TestPeriodicOperation.RaiseTwoExceptions, self).__init__(period) self._count = 0 def _operation(self): message = "WARNING {0}".format(self._count) if self._count == 0: self._count += 1 raise Exception(message) def test_it_should_log_warnings_if_they_are_different(self): with patch("azurelinuxagent.common.logger.warn") as warn_patcher: pop = TestPeriodicOperation.RaiseTwoExceptions(0) for _ in range(5): pop.run() self.assertEqual(self._get_number_of_warnings(warn_patcher, "WARNING 0"), 1, "The first error should have been reported exactly 1 time") self.assertEqual(self._get_number_of_warnings(warn_patcher, "WARNING 1"), 1, "The second error should have been reported exactly 1 time") class NoOp(PeriodicOperation): def _operation(self): pass def test_sleep_until_next_operation_should_wait_for_the_closest_operation(self): operations = [ TestPeriodicOperation.NoOp(datetime.timedelta(seconds=60)), TestPeriodicOperation.NoOp(datetime.timedelta(hours=1)), TestPeriodicOperation.NoOp(datetime.timedelta(seconds=10)), # closest operation TestPeriodicOperation.NoOp(datetime.timedelta(minutes=11)), TestPeriodicOperation.NoOp(datetime.timedelta(days=1)) ] for op in operations: op.run() def mock_sleep(seconds): mock_sleep.seconds = seconds mock_sleep.seconds = 0 with patch("azurelinuxagent.ga.periodic_operation.time.sleep", side_effect=mock_sleep): PeriodicOperation.sleep_until_next_operation(operations) self.assertAlmostEqual(mock_sleep.seconds, 10, 0, "did not sleep for the expected time") Azure-WALinuxAgent-2b21de5/tests/ga/test_persist_firewall_rules.py000066400000000000000000000571321462617747000254720ustar00rootroot00000000000000# Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import os import shutil import subprocess import sys import uuid import azurelinuxagent.common.conf as conf from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.ga.persist_firewall_rules import PersistFirewallRulesHandler from azurelinuxagent.common.utils import fileutil, shellutil from azurelinuxagent.common.utils.networkutil import AddFirewallRules, FirewallCmdDirectCommands from tests.lib.tools import AgentTestCase, MagicMock, patch class TestPersistFirewallRulesHandler(AgentTestCase): original_popen = subprocess.Popen def __init__(self, *args, **kwargs): super(TestPersistFirewallRulesHandler, self).__init__(*args, **kwargs) self._expected_service_name = "" self._expected_service_name = "" self._binary_file = "" self._network_service_unit_file = "" def setUp(self): AgentTestCase.setUp(self) # Override for mocking Popen, should be of the form - (True/False, cmd-to-execute-if-True) self.__replace_popen_cmd = lambda *_: (False, "") self.__executed_commands = [] self.__test_dst_ip = "1.2.3.4" self.__test_uid = 9999 self.__test_wait = "-w" self.__systemd_dir = os.path.join(self.tmp_dir, "system") fileutil.mkdir(self.__systemd_dir) self.__agent_bin_dir = os.path.join(self.tmp_dir, "bin") fileutil.mkdir(self.__agent_bin_dir) self.__tmp_conf_lib = os.path.join(self.tmp_dir, "waagent") fileutil.mkdir(self.__tmp_conf_lib) conf.get_lib_dir = MagicMock(return_value=self.__tmp_conf_lib) def tearDown(self): shutil.rmtree(self.__systemd_dir, ignore_errors=True) shutil.rmtree(self.__agent_bin_dir, ignore_errors=True) shutil.rmtree(self.__tmp_conf_lib, ignore_errors=True) AgentTestCase.tearDown(self) def __mock_popen(self, cmd, *args, **kwargs): self.__executed_commands.append(cmd) replace_cmd, replace_with_command = self.__replace_popen_cmd(cmd) if replace_cmd: cmd = replace_with_command return TestPersistFirewallRulesHandler.original_popen(cmd, *args, **kwargs) @contextlib.contextmanager def _get_persist_firewall_rules_handler(self, systemd=True): osutil = DefaultOSUtil() osutil.get_firewall_will_wait = MagicMock(return_value=self.__test_wait) osutil.get_agent_bin_path = MagicMock(return_value=self.__agent_bin_dir) osutil.get_systemd_unit_file_install_path = MagicMock(return_value=self.__systemd_dir) self._expected_service_name = PersistFirewallRulesHandler._AGENT_NETWORK_SETUP_NAME_FORMAT.format( osutil.get_service_name()) self._network_service_unit_file = os.path.join(self.__systemd_dir, self._expected_service_name) self._binary_file = os.path.join(conf.get_lib_dir(), PersistFirewallRulesHandler.BINARY_FILE_NAME) # Just for these tests, ignoring the mode of mkdir to allow non-sudo tests orig_mkdir = fileutil.mkdir with patch("azurelinuxagent.ga.persist_firewall_rules.fileutil.mkdir", side_effect=lambda path, **mode: orig_mkdir(path)): with patch("azurelinuxagent.ga.persist_firewall_rules.get_osutil", return_value=osutil): with patch('azurelinuxagent.common.osutil.systemd.is_systemd', return_value=systemd): with patch("azurelinuxagent.common.utils.shellutil.subprocess.Popen", side_effect=self.__mock_popen): yield PersistFirewallRulesHandler(self.__test_dst_ip, self.__test_uid) def __assert_firewall_called(self, cmd, validate_command_called=True): if validate_command_called: self.assertIn(AddFirewallRules.get_wire_root_accept_rule(command=AddFirewallRules.APPEND_COMMAND, destination=self.__test_dst_ip, owner_uid=self.__test_uid, firewalld_command=cmd), 
self.__executed_commands, "Firewall {0} command not found".format(cmd)) self.assertIn(AddFirewallRules.get_wire_non_root_drop_rule(command=AddFirewallRules.APPEND_COMMAND, destination=self.__test_dst_ip, firewalld_command=cmd), self.__executed_commands, "Firewall {0} command not found".format(cmd)) else: self.assertNotIn(AddFirewallRules.get_wire_root_accept_rule(command=AddFirewallRules.APPEND_COMMAND, destination=self.__test_dst_ip, owner_uid=self.__test_uid, firewalld_command=cmd), self.__executed_commands, "Firewall {0} command found".format(cmd)) self.assertNotIn(AddFirewallRules.get_wire_non_root_drop_rule(command=AddFirewallRules.APPEND_COMMAND, destination=self.__test_dst_ip, firewalld_command=cmd), self.__executed_commands, "Firewall {0} command found".format(cmd)) def __assert_systemctl_called(self, cmd="enable", validate_command_called=True): systemctl_command = ["systemctl", cmd, self._expected_service_name] if validate_command_called: self.assertIn(systemctl_command, self.__executed_commands, "Systemctl command {0} not found".format(cmd)) else: self.assertNotIn(systemctl_command, self.__executed_commands, "Systemctl command {0} found".format(cmd)) def __assert_systemctl_reloaded(self, validate_command_called=True): systemctl_reload = ["systemctl", "daemon-reload"] if validate_command_called: self.assertIn(systemctl_reload, self.__executed_commands, "Systemctl config not reloaded") else: self.assertNotIn(systemctl_reload, self.__executed_commands, "Systemctl config reloaded") def __assert_firewall_cmd_running_called(self, validate_command_called=True): cmd = PersistFirewallRulesHandler._FIREWALLD_RUNNING_CMD if validate_command_called: self.assertIn(cmd, self.__executed_commands, "Firewall state not checked") else: self.assertNotIn(cmd, self.__executed_commands, "Firewall state not checked") def __assert_network_service_setup_properly(self): self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=True) self.__assert_systemctl_called(cmd="enable", validate_command_called=True) self.__assert_systemctl_reloaded() self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.PassThrough, validate_command_called=False) self.assertTrue(os.path.exists(self._network_service_unit_file), "Service unit file should be there") self.assertTrue(os.path.exists(self._binary_file), "Binary file should be there") @staticmethod def __mock_network_setup_service_enabled(cmd): if "firewall-cmd" in cmd: return True, ["echo", "not-running"] if "systemctl" in cmd: return True, ["echo", "enabled"] return False, [] @staticmethod def __mock_network_setup_service_disabled(cmd): if "firewall-cmd" in cmd: return True, ["echo", "not-running"] if "systemctl" in cmd: return True, ["echo", "not enabled"] return False, [] @staticmethod def __mock_firewalld_running_and_not_applied(cmd): if cmd == PersistFirewallRulesHandler._FIREWALLD_RUNNING_CMD: return True, ["echo", "running"] # This is to fail the check if firewalld-rules are already applied cmds_to_fail = ["firewall-cmd", FirewallCmdDirectCommands.QueryPassThrough, "conntrack"] if all(cmd_to_fail in cmd for cmd_to_fail in cmds_to_fail): return True, ["exit", "1"] if "firewall-cmd" in cmd: return True, ["echo", "enabled"] return False, [] @staticmethod def __mock_firewalld_running_and_remove_not_successful(cmd): if cmd == PersistFirewallRulesHandler._FIREWALLD_RUNNING_CMD: return True, ["echo", "running"] # This is to fail the check if firewalld-rules are already applied cmds_to_fail = ["firewall-cmd", FirewallCmdDirectCommands.QueryPassThrough, 
"conntrack"] if all(cmd_to_fail in cmd for cmd_to_fail in cmds_to_fail): return True, ["exit", "1"] # This is to fail the remove if firewalld-rules fails to remove rule cmds_to_fail = ["firewall-cmd", FirewallCmdDirectCommands.RemovePassThrough, "conntrack"] if all(cmd_to_fail in cmd for cmd_to_fail in cmds_to_fail): return True, ["exit", "2"] if "firewall-cmd" in cmd: return True, ["echo", "enabled"] return False, [] def __setup_and_assert_network_service_setup_scenario(self, handler, mock_popen=None): mock_popen = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled if mock_popen is None else mock_popen self.__replace_popen_cmd = mock_popen handler.setup() self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=True) self.__assert_systemctl_called(cmd="enable", validate_command_called=True) self.__assert_systemctl_reloaded(validate_command_called=True) self.__assert_firewall_cmd_running_called(validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.QueryPassThrough, validate_command_called=False) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.RemovePassThrough, validate_command_called=False) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.PassThrough, validate_command_called=False) self.assertTrue(os.path.exists(handler.get_service_file_path()), "Service unit file not found") def test_it_should_skip_setup_if_firewalld_already_enabled(self): self.__replace_popen_cmd = lambda cmd: ("firewall-cmd" in cmd, ["echo", "running"]) with self._get_persist_firewall_rules_handler() as handler: handler.setup() # Assert we verified that rules were set using firewall-cmd self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.QueryPassThrough, validate_command_called=True) # Assert no commands for adding rules using firewall-cmd were called self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.RemovePassThrough, validate_command_called=False) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.PassThrough, validate_command_called=False) # Assert no commands for systemctl were called self.assertFalse(any("systemctl" in cmd for cmd in self.__executed_commands), "Systemctl shouldn't be called") def test_it_should_skip_setup_if_agent_network_setup_service_already_enabled_and_version_same(self): with self._get_persist_firewall_rules_handler() as handler: # 1st time should setup the service self.__setup_and_assert_network_service_setup_scenario(handler) # 2nd time setup should do nothing as service is enabled and no version updated self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_enabled # Reset state self.__executed_commands = [] handler.setup() self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=True) self.__assert_systemctl_called(cmd="enable", validate_command_called=False) self.__assert_systemctl_reloaded(validate_command_called=False) self.__assert_firewall_cmd_running_called(validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.QueryPassThrough, validate_command_called=False) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.RemovePassThrough, validate_command_called=False) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.PassThrough, validate_command_called=False) self.assertTrue(os.path.exists(handler.get_service_file_path()), "Service unit file not found") def test_it_should_always_replace_binary_file_only_if_using_custom_network_service(self): def _find_in_file(file_name, 
line_str): try: with open(file_name, 'r') as fh: content = fh.read() return line_str in content except Exception: # swallow exception pass return False test_str = 'os.system("{py_path} {egg_path} --setup-firewall --dst_ip={wire_ip} --uid={user_id} {wait}")' current_exe_path = os.path.join(os.getcwd(), sys.argv[0]) self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled with self._get_persist_firewall_rules_handler() as handler: self.assertFalse(os.path.exists(self._binary_file), "Binary file should not be there") self.assertFalse(os.path.exists(self._network_service_unit_file), "Unit file should not be present") handler.setup() orig_service_file_contents = "ExecStart={py_path} {binary_path}".format(py_path=sys.executable, binary_path=self._binary_file) self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.PassThrough, validate_command_called=False) self.assertTrue(os.path.exists(self._binary_file), "Binary file should be there") self.assertTrue(_find_in_file(self._binary_file, test_str.format(py_path=sys.executable, egg_path=current_exe_path, wire_ip=self.__test_dst_ip, user_id=self.__test_uid, wait=self.__test_wait)), "Binary file not set correctly") self.assertTrue(_find_in_file(self._network_service_unit_file, orig_service_file_contents), "Service Unit file file not set correctly") # Change test params self.__test_dst_ip = "9.8.7.6" self.__test_uid = 5555 self.__test_wait = "" # The service should say its enabled now self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_enabled with self._get_persist_firewall_rules_handler() as handler: # The Binary file should be available on the 2nd run self.assertTrue(os.path.exists(self._binary_file), "Binary file should be there") handler.setup() self.assertTrue(_find_in_file(self._binary_file, test_str.format(py_path=sys.executable, egg_path=current_exe_path, wire_ip=self.__test_dst_ip, user_id=self.__test_uid, wait=self.__test_wait)), "Binary file not updated correctly") # Unit file should NOT be updated self.assertTrue(_find_in_file(self._network_service_unit_file, orig_service_file_contents), "Service Unit file file should not be updated") def test_it_should_use_firewalld_if_available(self): self.__replace_popen_cmd = self.__mock_firewalld_running_and_not_applied with self._get_persist_firewall_rules_handler() as handler: handler.setup() self.__assert_firewall_cmd_running_called(validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.QueryPassThrough, validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.RemovePassThrough, validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.PassThrough, validate_command_called=True) self.assertFalse(any("systemctl" in cmd for cmd in self.__executed_commands), "Systemctl shouldn't be called") def test_it_should_add_firewalld_rules_if_remove_raises_exception(self): self.__replace_popen_cmd = self.__mock_firewalld_running_and_remove_not_successful with self._get_persist_firewall_rules_handler() as handler: handler.setup() self.__assert_firewall_cmd_running_called(validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.QueryPassThrough, validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.RemovePassThrough, validate_command_called=True) 
self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.PassThrough, validate_command_called=True) self.assertFalse(any("systemctl" in cmd for cmd in self.__executed_commands), "Systemctl shouldn't be called") def test_it_should_set_up_custom_service_if_no_firewalld(self): self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled with self._get_persist_firewall_rules_handler() as handler: self.assertFalse(os.path.exists(self._network_service_unit_file), "Service unit file should not be there") self.assertFalse(os.path.exists(self._binary_file), "Binary file should not be there") handler.setup() self.__assert_network_service_setup_properly() def test_it_should_cleanup_files_on_error(self): orig_write_file = fileutil.write_file files_to_fail = [] def mock_write_file(path, _, *__): if files_to_fail[0] in path: raise IOError("Invalid file: {0}".format(path)) return orig_write_file(path, _, *__) self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled with self._get_persist_firewall_rules_handler() as handler: test_files = [self._binary_file, self._network_service_unit_file] for file_to_fail in test_files: files_to_fail = [file_to_fail] with patch("azurelinuxagent.ga.persist_firewall_rules.fileutil.write_file", side_effect=mock_write_file): with self.assertRaises(Exception) as context_manager: handler.setup() self.assertIn("Invalid file: {0}".format(file_to_fail), ustr(context_manager.exception)) self.assertFalse(os.path.exists(file_to_fail), "File should be deleted: {0}".format(file_to_fail)) # Cleanup remaining files for test clarity for test_file in test_files: try: os.remove(test_file) except Exception: pass def test_it_should_execute_binary_file_successfully(self): # A bare-bone test to ensure no simple syntactical errors in the binary file as its generated dynamically self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled with self._get_persist_firewall_rules_handler() as handler: self.assertFalse(os.path.exists(self._binary_file), "Binary file should not be there") handler.setup() self.assertTrue(os.path.exists(self._binary_file), "Binary file not set properly") shellutil.run_command([sys.executable, self._binary_file]) def test_it_should_not_fail_if_egg_not_found(self): self.__replace_popen_cmd = TestPersistFirewallRulesHandler.__mock_network_setup_service_disabled test_str = str(uuid.uuid4()) with patch("sys.argv", [test_str]): with self._get_persist_firewall_rules_handler() as handler: self.assertFalse(os.path.exists(self._binary_file), "Binary file should not be there") handler.setup() output = shellutil.run_command([sys.executable, self._binary_file], stderr=subprocess.STDOUT) expected_str = "{0} file not found, skipping execution of firewall execution setup for this boot".format( os.path.join(os.getcwd(), test_str)) self.assertIn(expected_str, output, "Unexpected output") def test_it_should_delete_custom_service_files_if_firewalld_enabled(self): with self._get_persist_firewall_rules_handler() as handler: # 1st run - Setup the Custom Service self.__setup_and_assert_network_service_setup_scenario(handler) # 2nd run - Enable Firewalld and ensure the agent sets firewall rules using firewalld and deletes custom service self.__executed_commands = [] self.__replace_popen_cmd = self.__mock_firewalld_running_and_not_applied handler.setup() self.__assert_firewall_cmd_running_called(validate_command_called=True) 
self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.QueryPassThrough, validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.RemovePassThrough, validate_command_called=True) self.__assert_firewall_called(cmd=FirewallCmdDirectCommands.PassThrough, validate_command_called=True) self.__assert_systemctl_called(cmd="is-enabled", validate_command_called=False) self.__assert_systemctl_called(cmd="enable", validate_command_called=False) self.__assert_systemctl_reloaded(validate_command_called=False) self.assertFalse(os.path.exists(handler.get_service_file_path()), "Service unit file found") self.assertFalse(os.path.exists(os.path.join(conf.get_lib_dir(), handler.BINARY_FILE_NAME)), "Binary file found") def test_it_should_reset_service_unit_files_if_version_changed(self): with self._get_persist_firewall_rules_handler() as handler: # 1st step - Setup the service with old Version test_ver = str(uuid.uuid4()) with patch.object(handler, "_UNIT_VERSION", test_ver): self.__setup_and_assert_network_service_setup_scenario(handler) self.assertIn(test_ver, fileutil.read_file(handler.get_service_file_path()), "Test version not found") # 2nd step - Re-run the setup and ensure the service file set up again even if service enabled self.__executed_commands = [] self.__setup_and_assert_network_service_setup_scenario(handler, mock_popen=self.__mock_network_setup_service_enabled) self.assertNotIn(test_ver, fileutil.read_file(handler.get_service_file_path()), "Test version found incorrectly") Azure-WALinuxAgent-2b21de5/tests/ga/test_remoteaccess.py000066400000000000000000000124171462617747000233540ustar00rootroot00000000000000# Copyright Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import xml from azurelinuxagent.common.protocol.goal_state import GoalState, RemoteAccess # pylint: disable=unused-import from tests.lib.tools import AgentTestCase, load_data, patch, Mock # pylint: disable=unused-import from tests.lib import wire_protocol_data from tests.lib.mock_wire_protocol import mock_wire_protocol class TestRemoteAccess(AgentTestCase): def test_parse_remote_access(self): data_str = load_data('wire/remote_access_single_account.xml') remote_access = RemoteAccess(data_str) self.assertNotEqual(None, remote_access) self.assertEqual("1", remote_access.incarnation) self.assertEqual(1, len(remote_access.user_list.users), "User count does not match.") self.assertEqual("testAccount", remote_access.user_list.users[0].name, "Account name does not match") self.assertEqual("encryptedPasswordString", remote_access.user_list.users[0].encrypted_password, "Encrypted password does not match.") self.assertEqual("2019-01-01", remote_access.user_list.users[0].expiration, "Expiration does not match.") def test_goal_state_with_no_remote_access(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: self.assertIsNone(protocol.client.get_remote_access()) def test_parse_two_remote_access_accounts(self): data_str = load_data('wire/remote_access_two_accounts.xml') remote_access = RemoteAccess(data_str) self.assertNotEqual(None, remote_access) self.assertEqual("1", remote_access.incarnation) self.assertEqual(2, len(remote_access.user_list.users), "User count does not match.") self.assertEqual("testAccount1", remote_access.user_list.users[0].name, "Account name does not match") self.assertEqual("encryptedPasswordString", remote_access.user_list.users[0].encrypted_password, "Encrypted password does not match.") self.assertEqual("2019-01-01", remote_access.user_list.users[0].expiration, "Expiration does not match.") self.assertEqual("testAccount2", remote_access.user_list.users[1].name, "Account name does not match") self.assertEqual("encryptedPasswordString", remote_access.user_list.users[1].encrypted_password, "Encrypted password does not match.") self.assertEqual("2019-01-01", remote_access.user_list.users[1].expiration, "Expiration does not match.") def test_parse_ten_remote_access_accounts(self): data_str = load_data('wire/remote_access_10_accounts.xml') remote_access = RemoteAccess(data_str) self.assertNotEqual(None, remote_access) self.assertEqual(10, len(remote_access.user_list.users), "User count does not match.") def test_parse_duplicate_remote_access_accounts(self): data_str = load_data('wire/remote_access_duplicate_accounts.xml') remote_access = RemoteAccess(data_str) self.assertNotEqual(None, remote_access) self.assertEqual(2, len(remote_access.user_list.users), "User count does not match.") self.assertEqual("testAccount", remote_access.user_list.users[0].name, "Account name does not match") self.assertEqual("encryptedPasswordString", remote_access.user_list.users[0].encrypted_password, "Encrypted password does not match.") self.assertEqual("2019-01-01", remote_access.user_list.users[0].expiration, "Expiration does not match.") self.assertEqual("testAccount", remote_access.user_list.users[1].name, "Account name does not match") self.assertEqual("encryptedPasswordString", remote_access.user_list.users[1].encrypted_password, "Encrypted password does not match.") self.assertEqual("2019-01-01", remote_access.user_list.users[1].expiration, "Expiration does not match.") def test_parse_zero_remote_access_accounts(self): data_str = 
load_data('wire/remote_access_no_accounts.xml') remote_access = RemoteAccess(data_str) self.assertNotEqual(None, remote_access) self.assertEqual(0, len(remote_access.user_list.users), "User count does not match.") def test_update_remote_access_conf_remote_access(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_REMOTE_ACCESS) as protocol: self.assertIsNotNone(protocol.client.get_remote_access()) self.assertEqual(1, len(protocol.client.get_remote_access().user_list.users)) self.assertEqual('testAccount', protocol.client.get_remote_access().user_list.users[0].name) self.assertEqual('encryptedPasswordString', protocol.client.get_remote_access().user_list.users[0].encrypted_password) def test_parse_bad_remote_access_data(self): data = "foobar" self.assertRaises(xml.parsers.expat.ExpatError, RemoteAccess, data)Azure-WALinuxAgent-2b21de5/tests/ga/test_remoteaccess_handler.py000066400000000000000000000656621462617747000250630ustar00rootroot00000000000000# Copyright Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # from datetime import timedelta, datetime from mock import Mock, MagicMock from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.protocol.goal_state import RemoteAccess from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.protocol.wire import WireProtocol from azurelinuxagent.ga.remoteaccess import RemoteAccessHandler from tests.lib.tools import AgentTestCase, load_data, patch, clear_singleton_instances from tests.lib.mock_wire_protocol import mock_wire_protocol from tests.lib.wire_protocol_data import DATA_FILE, DATA_FILE_REMOTE_ACCESS class MockOSUtil(DefaultOSUtil): def __init__(self): # pylint: disable=super-init-not-called self.all_users = {} self.sudo_users = set() self.jit_enabled = True def useradd(self, username, expiration=None, comment=None): if username == "": raise Exception("test exception for bad username") if username in self.all_users: raise Exception("test exception, user already exists") self.all_users[username] = (username, None, None, None, comment, None, None, expiration) def conf_sudoer(self, username, nopasswd=False, remove=False): if not remove: self.sudo_users.add(username) else: self.sudo_users.remove(username) def chpasswd(self, username, password, crypt_id=6, salt_len=10): if password == "": raise Exception("test exception for bad password") user = self.all_users[username] self.all_users[username] = (user[0], password, user[2], user[3], user[4], user[5], user[6], user[7]) def del_account(self, username): if username == "": raise Exception("test exception, bad data") if username not in self.all_users: raise Exception("test exception, user does not exist to delete") self.all_users.pop(username) def get_users(self): return self.all_users.values() def get_user_dictionary(users): user_dictionary = {} for user in users: user_dictionary[user[0]] = user return user_dictionary def mock_add_event(name, op, is_success, version, message): 
TestRemoteAccessHandler.eventing_data = (name, op, is_success, version, message) class TestRemoteAccessHandler(AgentTestCase): eventing_data = [()] def setUp(self): super(TestRemoteAccessHandler, self).setUp() # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not # reuse a previous state clear_singleton_instances(ProtocolUtil) for data in TestRemoteAccessHandler.eventing_data: del data # add_user tests @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_add_user(self, *_): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" tstuser = "foobar" expiration_date = datetime.utcnow() + timedelta(days=1) pwd = tstpassword rah._add_user(tstuser, pwd, expiration_date) users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) actual_user = users[tstuser] expected_expiration = (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d") self.assertEqual(actual_user[7], expected_expiration) self.assertEqual(actual_user[4], "JIT_Account") @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_add_user_bad_creation_data(self, *_): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" tstuser = "" expiration = datetime.utcnow() + timedelta(days=1) pwd = tstpassword error = "test exception for bad username" self.assertRaisesRegex(Exception, error, rah._add_user, tstuser, pwd, expiration) self.assertEqual(0, len(rah._os_util.get_users())) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="") def test_add_user_bad_password_data(self, *_): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "" tstuser = "foobar" expiration = datetime.utcnow() + timedelta(days=1) pwd = tstpassword error = "test exception for bad password" self.assertRaisesRegex(Exception, error, rah._add_user, tstuser, pwd, expiration) self.assertEqual(0, len(rah._os_util.get_users())) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_add_user_already_existing(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" tstuser = "foobar" expiration_date = datetime.utcnow() + timedelta(days=1) pwd = tstpassword rah._add_user(tstuser, pwd, expiration_date) users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) self.assertEqual(1, len(users.keys())) actual_user = users[tstuser] self.assertEqual(actual_user[7], (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d")) # add the new duplicate user, ensure it's not created and does not overwrite the existing user. 
# this does not test the user add function as that's mocked, it tests processing skips the remaining # calls after the initial failure new_user_expiration = datetime.utcnow() + timedelta(days=5) self.assertRaises(Exception, rah._add_user, tstuser, pwd, new_user_expiration) # refresh users users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users after dup user attempted".format(tstuser)) self.assertEqual(1, len(users.keys())) actual_user = users[tstuser] self.assertEqual(actual_user[7], (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d")) # delete_user tests @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_delete_user(self, *_): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" tstuser = "foobar" expiration_date = datetime.utcnow() + timedelta(days=1) expected_expiration = (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d") # pylint: disable=unused-variable pwd = tstpassword rah._add_user(tstuser, pwd, expiration_date) users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) rah._remove_user(tstuser) # refresh users users = get_user_dictionary(rah._os_util.get_users()) self.assertFalse(tstuser in users) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_new_user(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_single_account.xml') remote_access = RemoteAccess(data_str) tstuser = remote_access.user_list.users[0].name expiration_date = datetime.utcnow() + timedelta(days=1) expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" remote_access.user_list.users[0].expiration = expiration rah._remote_access = remote_access rah._handle_remote_access() users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) actual_user = users[tstuser] expected_expiration = (expiration_date + timedelta(days=1)).strftime("%Y-%m-%d") self.assertEqual(actual_user[7], expected_expiration) self.assertEqual(actual_user[4], "JIT_Account") def test_do_not_add_expired_user(self): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_single_account.xml') remote_access = RemoteAccess(data_str) expiration = (datetime.utcnow() - timedelta(days=2)).strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" remote_access.user_list.users[0].expiration = expiration rah._remote_access = remote_access rah._handle_remote_access() users = get_user_dictionary(rah._os_util.get_users()) self.assertFalse("testAccount" in users) def test_error_add_user(self): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstuser = "foobar" expiration = datetime.utcnow() + timedelta(days=1) pwd = "bad password" error = r"\[CryptError\] Error decoding secret\nInner error: Incorrect padding" self.assertRaisesRegex(Exception, error, rah._add_user, tstuser, pwd, expiration) users = get_user_dictionary(rah._os_util.get_users()) self.assertEqual(0, len(users)) def test_handle_remote_access_no_users(self): with 
patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_no_accounts.xml') remote_access = RemoteAccess(data_str) rah._remote_access = remote_access rah._handle_remote_access() users = get_user_dictionary(rah._os_util.get_users()) self.assertEqual(0, len(users.keys())) def test_handle_remote_access_validate_jit_user_valid(self): rah = RemoteAccessHandler(Mock()) comment = "JIT_Account" result = rah._is_jit_user(comment) self.assertTrue(result, "Did not identify '{0}' as a JIT_Account".format(comment)) def test_handle_remote_access_validate_jit_user_invalid(self): rah = RemoteAccessHandler(Mock()) test_users = ["John Doe", None, "", " "] failed_results = "" for user in test_users: if rah._is_jit_user(user): failed_results += "incorrectly identified '{0} as a JIT_Account'. ".format(user) if len(failed_results) > 0: self.fail(failed_results) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_remote_access_multiple_users(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_two_accounts.xml') remote_access = RemoteAccess(data_str) testusers = [] count = 0 while count < 2: user = remote_access.user_list.users[count].name expiration_date = datetime.utcnow() + timedelta(days=count + 1) expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" remote_access.user_list.users[count].expiration = expiration testusers.append(user) count += 1 rah._remote_access = remote_access rah._handle_remote_access() users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(testusers[0] in users, "{0} missing from users".format(testusers[0])) self.assertTrue(testusers[1] in users, "{0} missing from users".format(testusers[1])) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") # max fabric supports in the Goal State def test_handle_remote_access_ten_users(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_10_accounts.xml') remote_access = RemoteAccess(data_str) count = 0 for user in remote_access.user_list.users: count += 1 user.name = "tstuser{0}".format(count) expiration_date = datetime.utcnow() + timedelta(days=count) user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" rah._remote_access = remote_access rah._handle_remote_access() users = get_user_dictionary(rah._os_util.get_users()) self.assertEqual(10, len(users.keys())) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_remote_access_user_removed(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_10_accounts.xml') remote_access = RemoteAccess(data_str) count = 0 for user in remote_access.user_list.users: count += 1 user.name = "tstuser{0}".format(count) expiration_date = datetime.utcnow() + timedelta(days=count) user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" rah._remote_access = remote_access rah._handle_remote_access() users = get_user_dictionary(rah._os_util.get_users()) self.assertEqual(10, len(users.keys())) del 
rah._remote_access.user_list.users[:] self.assertEqual(10, len(users.keys())) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_remote_access_bad_data_and_good_data(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_10_accounts.xml') remote_access = RemoteAccess(data_str) count = 0 for user in remote_access.user_list.users: count += 1 user.name = "tstuser{0}".format(count) if count == 2: user.name = "" expiration_date = datetime.utcnow() + timedelta(days=count) user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" rah._remote_access = remote_access rah._handle_remote_access() users = get_user_dictionary(rah._os_util.get_users()) self.assertEqual(9, len(users.keys())) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_remote_access_deleted_user_readded(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_single_account.xml') remote_access = RemoteAccess(data_str) tstuser = remote_access.user_list.users[0].name expiration_date = datetime.utcnow() + timedelta(days=1) expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" remote_access.user_list.users[0].expiration = expiration rah._remote_access = remote_access rah._handle_remote_access() users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) os_util = rah._os_util os_util.__class__ = MockOSUtil os_util.all_users.clear() # pylint: disable=no-member # refresh users users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser not in users) rah._handle_remote_access() # refresh users users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") @patch('azurelinuxagent.common.osutil.get_osutil', return_value=MockOSUtil()) @patch('azurelinuxagent.common.protocol.util.ProtocolUtil.get_protocol', return_value=WireProtocol("12.34.56.78")) @patch('azurelinuxagent.common.protocol.wire.WireClient.get_remote_access', return_value="asdf") def test_remote_access_handler_run_bad_data(self, _1, _2, _3, _4): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) tstpassword = "]aPPEv}uNg1FPnl?" 
tstuser = "foobar" expiration_date = datetime.utcnow() + timedelta(days=1) pwd = tstpassword rah._add_user(tstuser, pwd, expiration_date) users = get_user_dictionary(rah._os_util.get_users()) self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) rah.run() self.assertTrue(tstuser in users, "{0} missing from users".format(tstuser)) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_remote_access_multiple_users_one_removed(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_10_accounts.xml') remote_access = RemoteAccess(data_str) count = 0 for user in remote_access.user_list.users: count += 1 user.name = "tstuser{0}".format(count) expiration_date = datetime.utcnow() + timedelta(days=count) user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" rah._remote_access = remote_access rah._handle_remote_access() users = rah._os_util.get_users() self.assertEqual(10, len(users)) # now remove the user from RemoteAccess deleted_user = rah._remote_access.user_list.users[3] del rah._remote_access.user_list.users[3] rah._handle_remote_access() users = rah._os_util.get_users() self.assertTrue(deleted_user not in users, "{0} still in users".format(deleted_user)) self.assertEqual(9, len(users)) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_remote_access_multiple_users_null_remote_access(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_10_accounts.xml') remote_access = RemoteAccess(data_str) count = 0 for user in remote_access.user_list.users: count += 1 user.name = "tstuser{0}".format(count) expiration_date = datetime.utcnow() + timedelta(days=count) user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" rah._remote_access = remote_access rah._handle_remote_access() users = rah._os_util.get_users() self.assertEqual(10, len(users)) # now remove the user from RemoteAccess rah._remote_access = None rah._handle_remote_access() users = rah._os_util.get_users() self.assertEqual(0, len(users)) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_remote_access_multiple_users_error_with_null_remote_access(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_10_accounts.xml') remote_access = RemoteAccess(data_str) count = 0 for user in remote_access.user_list.users: count += 1 user.name = "tstuser{0}".format(count) expiration_date = datetime.utcnow() + timedelta(days=count) user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" rah._remote_access = remote_access rah._handle_remote_access() users = rah._os_util.get_users() self.assertEqual(10, len(users)) # now remove the user from RemoteAccess rah._remote_access = None rah._handle_remote_access() users = rah._os_util.get_users() self.assertEqual(0, len(users)) def test_remove_user_error(self): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) error = "test exception, bad data" self.assertRaisesRegex(Exception, error, rah._remove_user, "") def 
test_remove_user_not_exists(self): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) user = "bob" error = "test exception, user does not exist to delete" self.assertRaisesRegex(Exception, error, rah._remove_user, user) @patch('azurelinuxagent.common.utils.cryptutil.CryptUtil.decrypt_secret', return_value="]aPPEv}uNg1FPnl?") def test_handle_remote_access_remove_and_add(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): rah = RemoteAccessHandler(Mock()) data_str = load_data('wire/remote_access_10_accounts.xml') remote_access = RemoteAccess(data_str) count = 0 for user in remote_access.user_list.users: count += 1 user.name = "tstuser{0}".format(count) expiration_date = datetime.utcnow() + timedelta(days=count) user.expiration = expiration_date.strftime("%a, %d %b %Y %H:%M:%S ") + "UTC" rah._remote_access = remote_access rah._handle_remote_access() users = rah._os_util.get_users() self.assertEqual(10, len(users)) # now remove the user from RemoteAccess new_user = "tstuser11" deleted_user = rah._remote_access.user_list.users[3] rah._remote_access.user_list.users[3].name = new_user rah._handle_remote_access() users = rah._os_util.get_users() self.assertTrue(deleted_user not in users, "{0} still in users".format(deleted_user)) self.assertTrue(new_user in [u[0] for u in users], "user {0} not in users".format(new_user)) self.assertEqual(10, len(users)) @patch('azurelinuxagent.ga.remoteaccess.add_event', side_effect=mock_add_event) def test_remote_access_handler_run_error(self, _): with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=MockOSUtil()): mock_protocol = WireProtocol("foo.bar") mock_protocol.client.get_remote_access = MagicMock(side_effect=Exception("foobar!")) rah = RemoteAccessHandler(mock_protocol) rah.run() print(TestRemoteAccessHandler.eventing_data) check_message = "foobar!" 
self.assertTrue(check_message in TestRemoteAccessHandler.eventing_data[4], "expected message {0} not found in {1}" .format(check_message, TestRemoteAccessHandler.eventing_data[4])) self.assertEqual(False, TestRemoteAccessHandler.eventing_data[2], "is_success is true") def test_remote_access_handler_should_retrieve_users_when_it_is_invoked_the_first_time(self): mock_os_util = MagicMock() with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=mock_os_util): with mock_wire_protocol(DATA_FILE) as mock_protocol: rah = RemoteAccessHandler(mock_protocol) rah.run() self.assertTrue(len(mock_os_util.get_users.call_args_list) == 1, "The first invocation of remote access should have retrieved the current users") def test_remote_access_handler_should_retrieve_users_when_goal_state_contains_jit_users(self): mock_os_util = MagicMock() with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=mock_os_util): with mock_wire_protocol(DATA_FILE_REMOTE_ACCESS) as mock_protocol: rah = RemoteAccessHandler(mock_protocol) rah.run() self.assertTrue(len(mock_os_util.get_users.call_args_list) > 0, "A goal state with jit users did not retrieve the current users") def test_remote_access_handler_should_not_retrieve_users_when_goal_state_does_not_contain_jit_users(self): mock_os_util = MagicMock() with patch("azurelinuxagent.ga.remoteaccess.get_osutil", return_value=mock_os_util): with mock_wire_protocol(DATA_FILE) as mock_protocol: rah = RemoteAccessHandler(mock_protocol) rah.run() # this will trigger one call to retrieve the users mock_protocol.mock_wire_data.set_incarnation(123) # mock a new goal state; the data file does not include any jit users rah.run() self.assertTrue(len(mock_os_util.get_users.call_args_list) == 1, "A goal state without jit users retrieved the current users") Azure-WALinuxAgent-2b21de5/tests/ga/test_report_status.py000066400000000000000000000173361462617747000236220ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. import json from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.ga.agent_update_handler import get_agent_update_handler from azurelinuxagent.ga.exthandlers import ExtHandlersHandler from azurelinuxagent.ga.update import get_update_handler from tests.lib.mock_update_handler import mock_update_handler from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse from tests.lib.tools import AgentTestCase, patch from tests.lib import wire_protocol_data from tests.lib.http_request_predicates import HttpRequestPredicates class ReportStatusTestCase(AgentTestCase): """ Tests for UpdateHandler._report_status() """ def test_update_handler_should_report_status_when_fetch_goal_state_fails(self): # The test executes the main loop of UpdateHandler.run() twice, failing requests for the goal state # on the second iteration. We expect the 2 iterations to report status, despite the goal state failure. 
fail_goal_state_request = [False] def http_get_handler(url, *_, **__): if HttpRequestPredicates.is_goal_state_request(url) and fail_goal_state_request[0]: return MockHttpResponse(status=410) return None def on_new_iteration(iteration): fail_goal_state_request[0] = iteration == 2 with mock_wire_protocol(wire_protocol_data.DATA_FILE, http_get_handler=http_get_handler) as protocol: exthandlers_handler = ExtHandlersHandler(protocol) with patch.object(exthandlers_handler, "run", wraps=exthandlers_handler.run) as exthandlers_handler_run: with mock_update_handler(protocol, iterations=2, on_new_iteration=on_new_iteration, exthandlers_handler=exthandlers_handler) as update_handler: with patch("azurelinuxagent.common.version.get_daemon_version", return_value=FlexibleVersion("2.2.53")): update_handler.run(debug=True) self.assertEqual(1, exthandlers_handler_run.call_count, "Extensions should have been executed only once.") self.assertEqual(2, len(protocol.mock_wire_data.status_blobs), "Status should have been reported for the 2 iterations.") # # Verify that we reported status for the extension in the test data # first_status = json.loads(protocol.mock_wire_data.status_blobs[0]) handler_aggregate_status = first_status.get('aggregateStatus', {}).get("handlerAggregateStatus") self.assertIsNotNone(handler_aggregate_status, "Could not find the handlerAggregateStatus") self.assertEqual(1, len(handler_aggregate_status), "Expected 1 extension status. Got: {0}".format(handler_aggregate_status)) extension_status = handler_aggregate_status[0] self.assertEqual("OSTCExtensions.ExampleHandlerLinux", extension_status["handlerName"], "The status does not correspond to the test data") # # Verify that we reported the same status (minus timestamps) in the 2 iterations # second_status = json.loads(protocol.mock_wire_data.status_blobs[1]) def remove_timestamps(x): if isinstance(x, list): for v in x: remove_timestamps(v) elif isinstance(x, dict): for k, v in x.items(): if k == "timestampUTC": x[k] = '' else: remove_timestamps(v) remove_timestamps(first_status) remove_timestamps(second_status) self.assertEqual(first_status, second_status) def test_report_status_should_log_errors_only_once_per_goal_state(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol: with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=False): # skip agent update with patch("azurelinuxagent.ga.update.logger.warn") as logger_warn: with patch("azurelinuxagent.common.version.get_daemon_version", return_value=FlexibleVersion("2.2.53")): update_handler = get_update_handler() update_handler._goal_state = protocol.get_goal_state() # these tests skip the initialization of the goal state. 
so do that here exthandlers_handler = ExtHandlersHandler(protocol) agent_update_handler = get_agent_update_handler(protocol) update_handler._report_status(exthandlers_handler, agent_update_handler) self.assertEqual(0, logger_warn.call_count, "UpdateHandler._report_status() should not report WARNINGS when there are no errors") with patch("azurelinuxagent.ga.update.ExtensionsSummary.__init__", side_effect=Exception("TEST EXCEPTION")): # simulate an error during _report_status() get_warnings = lambda: [args[0] for args, _ in logger_warn.call_args_list if "TEST EXCEPTION" in args[0]] update_handler._report_status(exthandlers_handler, agent_update_handler) update_handler._report_status(exthandlers_handler, agent_update_handler) update_handler._report_status(exthandlers_handler, agent_update_handler) self.assertEqual(1, len(get_warnings()), "UpdateHandler._report_status() should report only 1 WARNING when there are multiple errors within the same goal state") exthandlers_handler.protocol.mock_wire_data.set_incarnation(999) update_handler._try_update_goal_state(exthandlers_handler.protocol) update_handler._report_status(exthandlers_handler, agent_update_handler) self.assertEqual(2, len(get_warnings()), "UpdateHandler._report_status() should continue reporting errors after a new goal state") def test_update_handler_should_add_fast_track_to_supported_features_when_it_is_supported(self): with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS) as protocol: self._test_supported_features_includes_fast_track(protocol, True) def test_update_handler_should_not_add_fast_track_to_supported_features_when_it_is_not_supported(self): def http_get_handler(url, *_, **__): if HttpRequestPredicates.is_host_plugin_vm_settings_request(url): return MockHttpResponse(status=404) return None with mock_wire_protocol(wire_protocol_data.DATA_FILE_VM_SETTINGS, http_get_handler=http_get_handler) as protocol: self._test_supported_features_includes_fast_track(protocol, False) def _test_supported_features_includes_fast_track(self, protocol, expected): with mock_update_handler(protocol) as update_handler: update_handler.run(debug=True) status = json.loads(protocol.mock_wire_data.status_blobs[0]) supported_features = status['supportedFeatures'] includes_fast_track = any(f['Key'] == 'FastTrack' for f in supported_features) self.assertEqual(expected, includes_fast_track, "supportedFeatures should {0}include FastTrack. Got: {1}".format("" if expected else "not ", supported_features)) Azure-WALinuxAgent-2b21de5/tests/ga/test_send_telemetry_events.py000066400000000000000000000563721462617747000253160ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import json import os import platform import re import tempfile import time import uuid from datetime import datetime, timedelta from mock import MagicMock, Mock, patch, PropertyMock from azurelinuxagent.common import logger from azurelinuxagent.common.datacontract import get_properties from azurelinuxagent.common.event import WALAEventOperation, EVENTS_DIRECTORY from azurelinuxagent.common.exception import HttpError, ServiceStoppedError from azurelinuxagent.common.future import ustr from azurelinuxagent.common.osutil.factory import get_osutil from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.protocol.wire import event_to_v1_encoded from azurelinuxagent.common.telemetryevent import TelemetryEvent, TelemetryEventParam, \ GuestAgentExtensionEventsSchema from azurelinuxagent.common.utils import restutil, fileutil from azurelinuxagent.common.version import CURRENT_VERSION, DISTRO_NAME, DISTRO_VERSION, AGENT_VERSION, CURRENT_AGENT, \ DISTRO_CODE_NAME from azurelinuxagent.ga.collect_telemetry_events import _CollectAndEnqueueEvents from azurelinuxagent.ga.send_telemetry_events import get_send_telemetry_events_handler from tests.ga.test_monitor import random_generator from tests.lib.mock_wire_protocol import MockHttpResponse, mock_wire_protocol from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.wire_protocol_data import DATA_FILE from tests.lib.tools import AgentTestCase, clear_singleton_instances, mock_sleep from tests.lib.event_logger_tools import EventLoggerTools class TestSendTelemetryEventsHandler(AgentTestCase, HttpRequestPredicates): def setUp(self): AgentTestCase.setUp(self) clear_singleton_instances(ProtocolUtil) self.lib_dir = tempfile.mkdtemp() self.event_dir = os.path.join(self.lib_dir, EVENTS_DIRECTORY) EventLoggerTools.initialize_event_logger(self.event_dir) def tearDown(self): AgentTestCase.tearDown(self) fileutil.rm_dirs(self.lib_dir) _TEST_EVENT_PROVIDER_ID = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" _TEST_EVENT_OPERATION = "TEST_EVENT_OPERATION" @contextlib.contextmanager def _create_send_telemetry_events_handler(self, timeout=0.5, start_thread=True, batching_queue_limit=1): def http_post_handler(url, body, **__): if self.is_telemetry_request(url): send_telemetry_events_handler.event_calls.append((datetime.now(), body)) return MockHttpResponse(status=200) return None with mock_wire_protocol(DATA_FILE, http_post_handler=http_post_handler) as protocol: protocol_util = MagicMock() protocol_util.get_protocol = Mock(return_value=protocol) send_telemetry_events_handler = get_send_telemetry_events_handler(protocol_util) send_telemetry_events_handler.event_calls = [] with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MIN_EVENTS_TO_BATCH", batching_queue_limit): with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MAX_TIMEOUT", timeout): send_telemetry_events_handler.get_mock_wire_protocol = lambda: protocol if start_thread: send_telemetry_events_handler.start() self.assertTrue(send_telemetry_events_handler.is_alive(), "Thread didn't start properly!") yield send_telemetry_events_handler @staticmethod def _stop_handler(telemetry_handler, timeout=0.001): # Giving it some grace time to finish execution and then stopping thread time.sleep(timeout) telemetry_handler.stop() def _assert_test_data_in_event_body(self, telemetry_handler, test_events): # Stop the thread and Wait for the queue and thread to join 
TestSendTelemetryEventsHandler._stop_handler(telemetry_handler) for telemetry_event in test_events: event_str = event_to_v1_encoded(telemetry_event) found = False for _, event_body in telemetry_handler.event_calls: if event_str in event_body: found = True break self.assertTrue(found, "Event {0} not found in any telemetry calls".format(event_str)) def _assert_error_event_reported(self, mock_add_event, expected_msg, operation=WALAEventOperation.ReportEventErrors): found_msg = False for call_args in mock_add_event.call_args_list: _, kwargs = call_args if expected_msg in kwargs['message'] and kwargs['op'] == operation: found_msg = True break self.assertTrue(found_msg, "Error msg: {0} not reported".format(expected_msg)) def _setup_and_assert_bad_request_scenarios(self, http_post_handler, expected_msgs): with self._create_send_telemetry_events_handler() as telemetry_handler: telemetry_handler.get_mock_wire_protocol().set_http_handlers(http_post_handler=http_post_handler) with patch("azurelinuxagent.common.event.add_event") as mock_add_event: telemetry_handler.enqueue_event(TelemetryEvent()) TestSendTelemetryEventsHandler._stop_handler(telemetry_handler) for msg in expected_msgs: self._assert_error_event_reported(mock_add_event, msg) def test_it_should_send_events_properly(self): events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))] with self._create_send_telemetry_events_handler() as telemetry_handler: for test_event in events: telemetry_handler.enqueue_event(test_event) self._assert_test_data_in_event_body(telemetry_handler, events) def test_it_should_send_as_soon_as_events_available_in_queue_with_minimal_batching_limits(self): events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))] with self._create_send_telemetry_events_handler() as telemetry_handler: test_start_time = datetime.now() for test_event in events: telemetry_handler.enqueue_event(test_event) self._assert_test_data_in_event_body(telemetry_handler, events) # Ensure that we send out the data as soon as we enqueue the events for event_time, _ in telemetry_handler.event_calls: elapsed = event_time - test_start_time self.assertLessEqual(elapsed, timedelta(seconds=2), "Request was not sent as soon as possible") def test_thread_should_wait_for_events_to_get_in_queue_before_processing(self): events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))] with self._create_send_telemetry_events_handler(timeout=0.1) as telemetry_handler: # Do nothing for some time time.sleep(0.3) # Ensure that no events were transmitted by the telemetry handler during this time, i.e. 
telemetry thread was idle self.assertEqual(0, len(telemetry_handler.event_calls), "Unwanted calls to telemetry") # Now enqueue data and verify send_telemetry_events sends them asap for test_event in events: telemetry_handler.enqueue_event(test_event) self._assert_test_data_in_event_body(telemetry_handler, events) def test_it_should_honor_batch_time_limits_before_sending_telemetry(self): events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))] wait_time = timedelta(seconds=10) orig_sleep = time.sleep with patch("time.sleep", lambda *_: orig_sleep(0.01)): with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MIN_BATCH_WAIT_TIME", wait_time): with self._create_send_telemetry_events_handler(batching_queue_limit=5) as telemetry_handler: for test_event in events: telemetry_handler.enqueue_event(test_event) self.assertEqual(0, len(telemetry_handler.event_calls), "No events should have been logged") TestSendTelemetryEventsHandler._stop_handler(telemetry_handler, timeout=0.01) wait_time = timedelta(seconds=0.2) with patch("time.sleep", lambda *_: orig_sleep(0.05)): with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MIN_BATCH_WAIT_TIME", wait_time): with self._create_send_telemetry_events_handler(batching_queue_limit=5) as telemetry_handler: test_start_time = datetime.now() for test_event in events: telemetry_handler.enqueue_event(test_event) while not telemetry_handler.event_calls and (test_start_time + timedelta(seconds=1)) > datetime.now(): # Wait for event calls to be made, wait a max of 1 secs orig_sleep(0.1) self.assertGreater(len(telemetry_handler.event_calls), 0, "No event calls made at all!") self._assert_test_data_in_event_body(telemetry_handler, events) for event_time, _ in telemetry_handler.event_calls: elapsed = event_time - test_start_time # Technically we should send out data after 0.2 secs, but keeping a buffer of 1sec while testing self.assertLessEqual(elapsed, timedelta(seconds=1), "Request was not sent properly") def test_it_should_clear_queue_before_stopping(self): events = [TelemetryEvent(eventId=ustr(uuid.uuid4())), TelemetryEvent(eventId=ustr(uuid.uuid4()))] wait_time = timedelta(seconds=10) with patch("time.sleep", lambda *_: mock_sleep(0.01)): with patch("azurelinuxagent.ga.send_telemetry_events.SendTelemetryEventsHandler._MIN_BATCH_WAIT_TIME", wait_time): with self._create_send_telemetry_events_handler(batching_queue_limit=5) as telemetry_handler: for test_event in events: telemetry_handler.enqueue_event(test_event) self.assertEqual(0, len(telemetry_handler.event_calls), "No events should have been logged") TestSendTelemetryEventsHandler._stop_handler(telemetry_handler, timeout=0.01) # After the service is asked to stop, we should send all data in the queue self._assert_test_data_in_event_body(telemetry_handler, events) def test_it_should_honor_batch_queue_limits_before_sending_telemetry(self): batch_limit = 5 with self._create_send_telemetry_events_handler(batching_queue_limit=batch_limit) as telemetry_handler: events = [] for _ in range(batch_limit-1): test_event = TelemetryEvent(eventId=ustr(uuid.uuid4())) events.append(test_event) telemetry_handler.enqueue_event(test_event) self.assertEqual(0, len(telemetry_handler.event_calls), "No events should have been logged") for _ in range(batch_limit): test_event = TelemetryEvent(eventId=ustr(uuid.uuid4())) events.append(test_event) telemetry_handler.enqueue_event(test_event) self._assert_test_data_in_event_body(telemetry_handler, 
events) def test_it_should_raise_on_enqueue_if_service_stopped(self): with self._create_send_telemetry_events_handler(start_thread=False) as telemetry_handler: # Ensure the thread is stopped telemetry_handler.stop() with self.assertRaises(ServiceStoppedError) as context_manager: telemetry_handler.enqueue_event(TelemetryEvent(eventId=ustr(uuid.uuid4()))) exception = context_manager.exception self.assertIn("{0} is stopped, not accepting anymore events".format(telemetry_handler.get_thread_name()), str(exception)) def test_it_should_honour_the_incoming_order_of_events(self): with self._create_send_telemetry_events_handler(timeout=0.3, start_thread=False) as telemetry_handler: for index in range(5): telemetry_handler.enqueue_event(TelemetryEvent(eventId=index)) telemetry_handler.start() self.assertTrue(telemetry_handler.is_alive(), "Thread not alive") TestSendTelemetryEventsHandler._stop_handler(telemetry_handler) _, event_body = telemetry_handler.event_calls[0] event_orders = re.findall(r'', event_body.decode('utf-8')) self.assertEqual(sorted(event_orders), event_orders, "Events not ordered correctly") def test_send_telemetry_events_should_report_event_if_wireserver_returns_http_error(self): test_str = "A test exception, Guid: {0}".format(str(uuid.uuid4())) def http_post_handler(url, _, **__): if self.is_telemetry_request(url): return HttpError(test_str) return None self._setup_and_assert_bad_request_scenarios(http_post_handler, [test_str]) def test_send_telemetry_events_should_report_event_when_http_post_returning_503(self): def http_post_handler(url, _, **__): if self.is_telemetry_request(url): return MockHttpResponse(restutil.httpclient.SERVICE_UNAVAILABLE) return None expected_msgs = ["[ProtocolError] [Wireserver Exception] [ProtocolError] [Wireserver Failed]", "[HTTP Failed] Status Code 503"] self._setup_and_assert_bad_request_scenarios(http_post_handler, expected_msgs) def test_send_telemetry_events_should_add_event_on_unexpected_errors(self): with self._create_send_telemetry_events_handler(timeout=0.1) as telemetry_handler: with patch("azurelinuxagent.ga.send_telemetry_events.add_event") as mock_add_event: with patch("azurelinuxagent.common.protocol.wire.WireClient.report_event") as patch_report_event: test_str = "Test exception, Guid: {0}".format(str(uuid.uuid4())) patch_report_event.side_effect = Exception(test_str) telemetry_handler.enqueue_event(TelemetryEvent()) TestSendTelemetryEventsHandler._stop_handler(telemetry_handler, timeout=0.01) self._assert_error_event_reported(mock_add_event, test_str, operation=WALAEventOperation.UnhandledError) def _create_extension_event(self, size=0, name="DummyExtension", message="DummyMessage"): event_data = self._get_event_data(name=size if size != 0 else name, message=random_generator(size) if size != 0 else message) event_file = os.path.join(self.event_dir, "{0}.tld".format(int(time.time() * 1000000))) with open(event_file, 'wb+') as file_descriptor: file_descriptor.write(event_data.encode('utf-8')) @staticmethod def _get_event_data(message, name): event = TelemetryEvent(1, TestSendTelemetryEventsHandler._TEST_EVENT_PROVIDER_ID) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Name, name)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Version, str(CURRENT_VERSION))) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Operation, TestSendTelemetryEventsHandler._TEST_EVENT_OPERATION)) 
event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.OperationSuccess, True)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Message, message)) event.parameters.append(TelemetryEventParam(GuestAgentExtensionEventsSchema.Duration, 0)) data = get_properties(event) return json.dumps(data) @patch("azurelinuxagent.common.event.TELEMETRY_EVENT_PROVIDER_ID", _TEST_EVENT_PROVIDER_ID) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_it_should_enqueue_and_send_events_properly(self, mock_lib_dir, *_): mock_lib_dir.return_value = self.lib_dir with self._create_send_telemetry_events_handler() as telemetry_handler: monitor_handler = _CollectAndEnqueueEvents(telemetry_handler) self._create_extension_event(message="Message-Test") test_mtime = 1000 # epoch time, in ms test_opcodename = datetime.fromtimestamp(test_mtime).strftime(logger.Logger.LogTimeFormatInUTC) test_eventtid = 42 test_eventpid = 24 test_taskname = "TEST_TaskName" with patch("os.path.getmtime", return_value=test_mtime): with patch('os.getpid', return_value=test_eventpid): with patch("threading.Thread.ident", new_callable=PropertyMock(return_value=test_eventtid)): with patch("threading.Thread.getName", return_value=test_taskname): monitor_handler.run() TestSendTelemetryEventsHandler._stop_handler(telemetry_handler) # Validating the crafted message by the collect_and_send_events call. extension_events = self._get_extension_events(telemetry_handler) self.assertEqual(1, len(extension_events), "Only 1 event should be sent") collected_event = extension_events[0] # Some of those expected values come from the mock protocol and imds client set up during test initialization osutil = get_osutil() osversion = u"{0}:{1}-{2}-{3}:{4}".format(platform.system(), DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME, platform.release()) sample_message = '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ '' \ ']]>'.format(AGENT_VERSION, TestSendTelemetryEventsHandler._TEST_EVENT_OPERATION, CURRENT_AGENT, test_opcodename, test_eventtid, test_eventpid, test_taskname, osversion, int(osutil.get_total_mem()), osutil.get_processor_cores(), json.dumps({"CpuArchitecture": platform.machine()})).encode('utf-8') self.assertIn(sample_message, collected_event) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_collect_and_send_events_with_small_events(self, mock_lib_dir): mock_lib_dir.return_value = self.lib_dir with self._create_send_telemetry_events_handler() as telemetry_handler: sizes = [15, 15, 15, 15] # get the powers of 2 - 2**16 is the limit for power in sizes: size = 2 ** power self._create_extension_event(size) _CollectAndEnqueueEvents(telemetry_handler).run() # The send_event call would be called each time, as we are filling up the buffer up to the brim for each call. 
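            # A hedged sketch of the size arithmetic behind the comment above; the
            # 2**16 ceiling is taken from the comments in this file, not imported
            # from the agent ("_assumed_event_size_limit" is a hypothetical name):
            _assumed_event_size_limit = 2 ** 16
            assert all(2 ** power < _assumed_event_size_limit for power in sizes), \
                "each generated event should fit under the assumed per-event limit"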
TestSendTelemetryEventsHandler._stop_handler(telemetry_handler) self.assertEqual(4, len(self._get_extension_events(telemetry_handler))) @patch("azurelinuxagent.common.conf.get_lib_dir") def test_collect_and_send_events_with_large_events(self, mock_lib_dir): mock_lib_dir.return_value = self.lib_dir with self._create_send_telemetry_events_handler() as telemetry_handler: sizes = [17, 17, 17] # get the powers of 2 for power in sizes: size = 2 ** power self._create_extension_event(size) with patch("azurelinuxagent.common.logger.periodic_warn") as patch_periodic_warn: _CollectAndEnqueueEvents(telemetry_handler).run() TestSendTelemetryEventsHandler._stop_handler(telemetry_handler) self.assertEqual(3, patch_periodic_warn.call_count) # The send_event call should never be called as the events are larger than 2**16. self.assertEqual(0, len(self._get_extension_events(telemetry_handler))) @staticmethod def _get_extension_events(telemetry_handler): return [event_xml for _, event_xml in telemetry_handler.event_calls if TestSendTelemetryEventsHandler._TEST_EVENT_OPERATION in event_xml.decode()] Azure-WALinuxAgent-2b21de5/tests/ga/test_update.py000066400000000000000000004346201462617747000221650ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. All rights reserved. # Licensed under the Apache License. from __future__ import print_function import contextlib import glob import json import os import re import shutil import stat import subprocess import sys import tempfile import time import unittest import uuid import zipfile from datetime import datetime, timedelta from threading import current_thread from azurelinuxagent.ga.guestagent import GuestAgent, GuestAgentError, \ AGENT_ERROR_FILE from tests.common.osutil.test_default import TestOSUtil import azurelinuxagent.common.osutil.default as osutil _ORIGINAL_POPEN = subprocess.Popen from azurelinuxagent.common import conf from azurelinuxagent.common.event import EVENTS_DIRECTORY, WALAEventOperation from azurelinuxagent.common.exception import HttpError, \ ExitException, AgentMemoryExceededException from azurelinuxagent.common.future import ustr, httpclient from azurelinuxagent.ga.persist_firewall_rules import PersistFirewallRulesHandler from azurelinuxagent.common.protocol.hostplugin import HostPluginProtocol from azurelinuxagent.common.protocol.restapi import VMAgentFamily, \ ExtHandlerPackage, ExtHandlerPackageList, Extension, VMStatus, ExtHandlerStatus, ExtensionStatus, \ VMAgentUpdateStatuses from azurelinuxagent.common.protocol.util import ProtocolUtil from azurelinuxagent.common.utils import fileutil, textutil, timeutil, shellutil from azurelinuxagent.common.utils.archive import ARCHIVE_DIRECTORY_NAME, AGENT_STATUS_FILE from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from azurelinuxagent.common.utils.networkutil import FirewallCmdDirectCommands, AddFirewallRules from azurelinuxagent.common.version import AGENT_PKG_GLOB, AGENT_DIR_GLOB, AGENT_NAME, AGENT_DIR_PATTERN, \ AGENT_VERSION, CURRENT_AGENT, CURRENT_VERSION, set_daemon_version, __DAEMON_VERSION_ENV_VARIABLE as DAEMON_VERSION_ENV_VARIABLE from azurelinuxagent.ga.exthandlers import ExtHandlersHandler, ExtHandlerInstance, HandlerEnvironment, ExtensionStatusValue from azurelinuxagent.ga.update import \ get_update_handler, ORPHAN_POLL_INTERVAL, AGENT_PARTITION_FILE, ORPHAN_WAIT_INTERVAL, \ CHILD_LAUNCH_RESTART_MAX, CHILD_HEALTH_INTERVAL, GOAL_STATE_PERIOD_EXTENSIONS_DISABLED, UpdateHandler, \ READONLY_FILE_GLOBS, ExtensionsSummary from 
tests.lib.mock_update_handler import mock_update_handler
from tests.lib.mock_wire_protocol import mock_wire_protocol, MockHttpResponse
from tests.lib.wire_protocol_data import DATA_FILE, DATA_FILE_MULTIPLE_EXT, DATA_FILE_VM_SETTINGS
from tests.lib.tools import AgentTestCase, AgentTestCaseWithGetVmSizeMock, data_dir, DEFAULT, patch, load_bin_data, Mock, MagicMock, \
    clear_singleton_instances, is_python_version_26_or_34, skip_if_predicate_true
from tests.lib import wire_protocol_data
from tests.lib.http_request_predicates import HttpRequestPredicates

NO_ERROR = {
    "last_failure": 0.0,
    "failure_count": 0,
    "was_fatal": False,
    "reason": ''
}

FATAL_ERROR = {
    "last_failure": 42.42,
    "failure_count": 2,
    "was_fatal": True,
    "reason": "Test failure"
}

WITH_ERROR = {
    "last_failure": 42.42,
    "failure_count": 2,
    "was_fatal": False,
    "reason": "Test failure"
}

EMPTY_MANIFEST = {
    "name": "WALinuxAgent",
    "version": 1.0,
    "handlerManifest": {
        "installCommand": "",
        "uninstallCommand": "",
        "updateCommand": "",
        "enableCommand": "",
        "disableCommand": "",
        "rebootAfterInstall": False,
        "reportHeartbeat": False
    }
}

def faux_logger():
    print("STDOUT message")
    print("STDERR message", file=sys.stderr)
    return DEFAULT

@contextlib.contextmanager
def _get_update_handler(iterations=1, test_data=None, protocol=None, autoupdate_enabled=True):
    """
    This function returns a mocked version of the UpdateHandler object to be used for testing.
    It will only run the main loop [iterations] number of times.
    """
    test_data = DATA_FILE if test_data is None else test_data
    with patch.object(HostPluginProtocol, "is_default_channel", False):
        if protocol is None:
            with mock_wire_protocol(test_data) as mock_protocol:
                with mock_update_handler(mock_protocol, iterations=iterations, autoupdate_enabled=autoupdate_enabled) as update_handler:
                    yield update_handler, mock_protocol
        else:
            with mock_update_handler(protocol, iterations=iterations, autoupdate_enabled=autoupdate_enabled) as update_handler:
                yield update_handler, protocol

class UpdateTestCase(AgentTestCaseWithGetVmSizeMock):
    _test_suite_tmp_dir = None
    _agent_zip_dir = None

    @classmethod
    def setUpClass(cls):
        super(UpdateTestCase, cls).setUpClass()
        # copy data_dir/ga/WALinuxAgent-0.0.0.0.zip to _test_suite_tmp_dir/waagent-zip/WALinuxAgent-.zip
        sample_agent_zip = "WALinuxAgent-0.0.0.0.zip"
        test_agent_zip = sample_agent_zip.replace("0.0.0.0", AGENT_VERSION)
        UpdateTestCase._test_suite_tmp_dir = tempfile.mkdtemp()
        UpdateTestCase._agent_zip_dir = os.path.join(UpdateTestCase._test_suite_tmp_dir, "waagent-zip")
        os.mkdir(UpdateTestCase._agent_zip_dir)
        source = os.path.join(data_dir, "ga", sample_agent_zip)
        target = os.path.join(UpdateTestCase._agent_zip_dir, test_agent_zip)
        shutil.copyfile(source, target)
        # The update_handler inherently calls the agent update handler, which in turn reads the daemon version. The daemon version logic now has a fallback if the env variable is not set.
        # The fallback calls popen, which is not mocked, so we set the env variable to avoid the fallback.
        # This will not change any of the test validations. At the end of all update test validations, we reset the env variable.
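        # A hedged sketch of what the call below is relied on to do (an assumption
        # inferred from tearDownClass, which pops DAEMON_VERSION_ENV_VARIABLE):
        #
        #     set_daemon_version("1.2.3.4")
        #     assert os.environ[DAEMON_VERSION_ENV_VARIABLE] == "1.2.3.4"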
set_daemon_version("1.2.3.4") @classmethod def tearDownClass(cls): super(UpdateTestCase, cls).tearDownClass() shutil.rmtree(UpdateTestCase._test_suite_tmp_dir) os.environ.pop(DAEMON_VERSION_ENV_VARIABLE) @staticmethod def _get_agent_pkgs(in_dir=None): if in_dir is None: in_dir = UpdateTestCase._agent_zip_dir path = os.path.join(in_dir, AGENT_PKG_GLOB) return glob.glob(path) @staticmethod def _get_agents(in_dir=None): if in_dir is None: in_dir = UpdateTestCase._agent_zip_dir path = os.path.join(in_dir, AGENT_DIR_GLOB) return [a for a in glob.glob(path) if os.path.isdir(a)] @staticmethod def _get_agent_file_path(): return UpdateTestCase._get_agent_pkgs()[0] @staticmethod def _get_agent_file_name(): return os.path.basename(UpdateTestCase._get_agent_file_path()) @staticmethod def _get_agent_path(): return fileutil.trim_ext(UpdateTestCase._get_agent_file_path(), "zip") @staticmethod def _get_agent_name(): return os.path.basename(UpdateTestCase._get_agent_path()) @staticmethod def _get_agent_version(): return FlexibleVersion(UpdateTestCase._get_agent_name().split("-")[1]) @staticmethod def _add_write_permission_to_goal_state_files(): # UpdateHandler.run() marks some of the files from the goal state as read-only. Those files are overwritten when # a new goal state is fetched. This is not a problem for the agent, since it runs as root, but tests need # to make those files writtable before fetching a new goal state. Note that UpdateHandler.run() fetches a new # goal state, so tests that make multiple calls to that method need to call this function in-between calls. for gb in READONLY_FILE_GLOBS: for path in glob.iglob(os.path.join(conf.get_lib_dir(), gb)): fileutil.chmod(path, stat.S_IRUSR | stat.S_IWUSR) def agent_bin(self, version, suffix): return "bin/{0}-{1}{2}.egg".format(AGENT_NAME, version, suffix) def rename_agent_bin(self, path, dst_v): src_bin = glob.glob(os.path.join(path, self.agent_bin("*.*.*.*", '*')))[0] dst_bin = os.path.join(path, self.agent_bin(dst_v, '')) shutil.move(src_bin, dst_bin) def agents(self): return [GuestAgent.from_installed_agent(path) for path in self.agent_dirs()] def agent_count(self): return len(self.agent_dirs()) def agent_dirs(self): return self._get_agents(in_dir=self.tmp_dir) def agent_dir(self, version): return os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, version)) def agent_paths(self): paths = glob.glob(os.path.join(self.tmp_dir, "*")) paths.sort() return paths def agent_pkgs(self): return self._get_agent_pkgs(in_dir=self.tmp_dir) def agent_versions(self): v = [FlexibleVersion(AGENT_DIR_PATTERN.match(a).group(1)) for a in self.agent_dirs()] v.sort(reverse=True) return v @contextlib.contextmanager def get_error_file(self, error_data=None): if error_data is None: error_data = NO_ERROR with tempfile.NamedTemporaryFile(mode="w") as fp: json.dump(error_data if error_data is not None else NO_ERROR, fp) fp.seek(0) yield fp def create_error(self, error_data=None): if error_data is None: error_data = NO_ERROR with self.get_error_file(error_data) as path: err = GuestAgentError(path.name) err.load() return err def copy_agents(self, *agents): if len(agents) <= 0: agents = self._get_agent_pkgs() for agent in agents: shutil.copy(agent, self.tmp_dir) return def expand_agents(self): for agent in self.agent_pkgs(): path = os.path.join(self.tmp_dir, fileutil.trim_ext(agent, "zip")) zipfile.ZipFile(agent).extractall(path) def prepare_agent(self, version): """ Create a download for the current agent version, copied from test data """ 
self.copy_agents(self._get_agent_pkgs()[0]) self.expand_agents() versions = self.agent_versions() src_v = FlexibleVersion(str(versions[0])) from_path = self.agent_dir(src_v) dst_v = FlexibleVersion(str(version)) to_path = self.agent_dir(dst_v) if from_path != to_path: shutil.move(from_path + ".zip", to_path + ".zip") shutil.move(from_path, to_path) self.rename_agent_bin(to_path, dst_v) return def prepare_agents(self, count=20, is_available=True): # Ensure the test data is copied over agent_count = self.agent_count() if agent_count <= 0: self.copy_agents(self._get_agent_pkgs()[0]) self.expand_agents() count -= 1 # Determine the most recent agent version versions = self.agent_versions() src_v = FlexibleVersion(str(versions[0])) # Create agent packages and directories return self.replicate_agents( src_v=src_v, count=count - agent_count, is_available=is_available) def remove_agents(self): for agent in self.agent_paths(): try: if os.path.isfile(agent): os.remove(agent) else: shutil.rmtree(agent) except: # pylint: disable=bare-except pass return def replicate_agents(self, count=5, src_v=AGENT_VERSION, is_available=True, increment=1): from_path = self.agent_dir(src_v) dst_v = FlexibleVersion(str(src_v)) for i in range(0, count): # pylint: disable=unused-variable dst_v += increment to_path = self.agent_dir(dst_v) shutil.copyfile(from_path + ".zip", to_path + ".zip") shutil.copytree(from_path, to_path) self.rename_agent_bin(to_path, dst_v) if not is_available: GuestAgent.from_installed_agent(to_path).mark_failure(is_fatal=True) return dst_v class TestUpdate(UpdateTestCase): def setUp(self): UpdateTestCase.setUp(self) self.event_patch = patch('azurelinuxagent.common.event.add_event') self.update_handler = get_update_handler() protocol = Mock() self.update_handler.protocol_util = Mock() self.update_handler.protocol_util.get_protocol = Mock(return_value=protocol) self.update_handler._goal_state = Mock() self.update_handler._goal_state.extensions_goal_state = Mock() self.update_handler._goal_state.extensions_goal_state.source = "Fabric" # Since ProtocolUtil is a singleton per thread, we need to clear it to ensure that the test cases do not reuse # a previous state clear_singleton_instances(ProtocolUtil) def test_creation(self): self.assertEqual(0, len(self.update_handler.agents)) self.assertEqual(None, self.update_handler.child_agent) self.assertEqual(None, self.update_handler.child_launch_time) self.assertEqual(0, self.update_handler.child_launch_attempts) self.assertEqual(None, self.update_handler.child_process) self.assertEqual(None, self.update_handler.signal_handler) def test_emit_restart_event_emits_event_if_not_clean_start(self): try: mock_event = self.event_patch.start() self.update_handler._set_sentinel() self.update_handler._emit_restart_event() self.assertEqual(1, mock_event.call_count) except Exception as e: # pylint: disable=unused-variable pass self.event_patch.stop() def _create_protocol(self, count=20, versions=None): latest_version = self.prepare_agents(count=count) if versions is None or len(versions) <= 0: versions = [latest_version] return ProtocolMock(versions=versions) def _test_ensure_no_orphans(self, invocations=3, interval=ORPHAN_WAIT_INTERVAL, pid_count=0): with patch.object(self.update_handler, 'osutil') as mock_util: # Note: # - Python only allows mutations of objects to which a function has # a reference. Incrementing an integer directly changes the # reference. Incrementing an item of a list changes an item to # which the code has a reference. 
# See http://stackoverflow.com/questions/26408941/python-nested-functions-and-variable-scope iterations = [0] def iterator(*args, **kwargs): # pylint: disable=unused-argument iterations[0] += 1 return iterations[0] < invocations mock_util.check_pid_alive = Mock(side_effect=iterator) pid_files = self.update_handler._get_pid_files() self.assertEqual(pid_count, len(pid_files)) with patch('os.getpid', return_value=42): with patch('time.sleep', return_value=None) as mock_sleep: # pylint: disable=redefined-outer-name self.update_handler._ensure_no_orphans(orphan_wait_interval=interval) for pid_file in pid_files: self.assertFalse(os.path.exists(pid_file)) return mock_util.check_pid_alive.call_count, mock_sleep.call_count def test_ensure_no_orphans(self): fileutil.write_file(os.path.join(self.tmp_dir, "0_waagent.pid"), ustr(41)) calls, sleeps = self._test_ensure_no_orphans(invocations=3, pid_count=1) self.assertEqual(3, calls) self.assertEqual(2, sleeps) def test_ensure_no_orphans_skips_if_no_orphans(self): calls, sleeps = self._test_ensure_no_orphans(invocations=3) self.assertEqual(0, calls) self.assertEqual(0, sleeps) def test_ensure_no_orphans_ignores_exceptions(self): with patch('azurelinuxagent.common.utils.fileutil.read_file', side_effect=Exception): calls, sleeps = self._test_ensure_no_orphans(invocations=3) self.assertEqual(0, calls) self.assertEqual(0, sleeps) def test_ensure_no_orphans_kills_after_interval(self): fileutil.write_file(os.path.join(self.tmp_dir, "0_waagent.pid"), ustr(41)) with patch('os.kill') as mock_kill: calls, sleeps = self._test_ensure_no_orphans( invocations=4, interval=3 * ORPHAN_POLL_INTERVAL, pid_count=1) self.assertEqual(3, calls) self.assertEqual(2, sleeps) self.assertEqual(1, mock_kill.call_count) @patch('azurelinuxagent.ga.update.datetime') def test_ensure_partition_assigned(self, mock_time): path = os.path.join(conf.get_lib_dir(), AGENT_PARTITION_FILE) mock_time.utcnow = Mock() self.assertFalse(os.path.exists(path)) for n in range(0, 99): mock_time.utcnow.return_value = Mock(microsecond=n * 10000) self.update_handler._ensure_partition_assigned() self.assertTrue(os.path.exists(path)) s = fileutil.read_file(path) self.assertEqual(n, int(s)) os.remove(path) def test_ensure_readonly_sets_readonly(self): test_files = [ os.path.join(conf.get_lib_dir(), "faux_certificate.crt"), os.path.join(conf.get_lib_dir(), "faux_certificate.p7m"), os.path.join(conf.get_lib_dir(), "faux_certificate.pem"), os.path.join(conf.get_lib_dir(), "faux_certificate.prv"), os.path.join(conf.get_lib_dir(), "ovf-env.xml") ] for path in test_files: fileutil.write_file(path, "Faux content") os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH) self.update_handler._ensure_readonly_files() for path in test_files: mode = os.stat(path).st_mode mode &= (stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO) self.assertEqual(0, mode ^ stat.S_IRUSR) def test_ensure_readonly_leaves_unmodified(self): test_files = [ os.path.join(conf.get_lib_dir(), "faux.xml"), os.path.join(conf.get_lib_dir(), "faux.json"), os.path.join(conf.get_lib_dir(), "faux.txt"), os.path.join(conf.get_lib_dir(), "faux") ] for path in test_files: fileutil.write_file(path, "Faux content") os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH) self.update_handler._ensure_readonly_files() for path in test_files: mode = os.stat(path).st_mode mode &= (stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO) self.assertEqual( stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH, mode) def 
_test_evaluate_agent_health(self, child_agent_index=0): self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.is_available) self.assertFalse(latest_agent.is_blacklisted) self.assertTrue(len(self.update_handler.agents) > 1) child_agent = self.update_handler.agents[child_agent_index] self.assertTrue(child_agent.is_available) self.assertFalse(child_agent.is_blacklisted) self.update_handler.child_agent = child_agent self.update_handler._evaluate_agent_health(latest_agent) def test_evaluate_agent_health_ignores_installed_agent(self): self.update_handler._evaluate_agent_health(None) def test_evaluate_agent_health_raises_exception_for_restarting_agent(self): self.update_handler.child_launch_time = time.time() - (4 * 60) self.update_handler.child_launch_attempts = CHILD_LAUNCH_RESTART_MAX - 1 self.assertRaises(Exception, self._test_evaluate_agent_health) def test_evaluate_agent_health_will_not_raise_exception_for_long_restarts(self): self.update_handler.child_launch_time = time.time() - 24 * 60 self.update_handler.child_launch_attempts = CHILD_LAUNCH_RESTART_MAX self._test_evaluate_agent_health() def test_evaluate_agent_health_will_not_raise_exception_too_few_restarts(self): self.update_handler.child_launch_time = time.time() self.update_handler.child_launch_attempts = CHILD_LAUNCH_RESTART_MAX - 2 self._test_evaluate_agent_health() def test_evaluate_agent_health_resets_with_new_agent(self): self.update_handler.child_launch_time = time.time() - (4 * 60) self.update_handler.child_launch_attempts = CHILD_LAUNCH_RESTART_MAX - 1 self._test_evaluate_agent_health(child_agent_index=1) self.assertEqual(1, self.update_handler.child_launch_attempts) def test_filter_blacklisted_agents(self): self.prepare_agents() self.update_handler._set_and_sort_agents([GuestAgent.from_installed_agent(path) for path in self.agent_dirs()]) self.assertEqual(len(self.agent_dirs()), len(self.update_handler.agents)) kept_agents = self.update_handler.agents[::2] blacklisted_agents = self.update_handler.agents[1::2] for agent in blacklisted_agents: agent.mark_failure(is_fatal=True) self.update_handler._filter_blacklisted_agents() self.assertEqual(kept_agents, self.update_handler.agents) def test_find_agents(self): self.prepare_agents() self.assertTrue(0 <= len(self.update_handler.agents)) self.update_handler._find_agents() self.assertEqual(len(self._get_agents(self.tmp_dir)), len(self.update_handler.agents)) def test_find_agents_does_reload(self): self.prepare_agents() self.update_handler._find_agents() agents = self.update_handler.agents self.update_handler._find_agents() self.assertNotEqual(agents, self.update_handler.agents) def test_find_agents_sorts(self): self.prepare_agents() self.update_handler._find_agents() v = FlexibleVersion("100000") for a in self.update_handler.agents: self.assertTrue(v > a.version) v = a.version def test_get_latest_agent(self): latest_version = self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertEqual(len(self._get_agents(self.tmp_dir)), len(self.update_handler.agents)) self.assertEqual(latest_version, latest_agent.version) def test_get_latest_agent_excluded(self): self.prepare_agent(AGENT_VERSION) self.assertEqual(None, self.update_handler.get_latest_agent_greater_than_daemon()) def test_get_latest_agent_no_updates(self): self.assertEqual(None, self.update_handler.get_latest_agent_greater_than_daemon()) def test_get_latest_agent_skip_updates(self): conf.get_autoupdate_enabled 
= Mock(return_value=False) self.assertEqual(None, self.update_handler.get_latest_agent_greater_than_daemon()) def test_get_latest_agent_skips_unavailable(self): self.prepare_agents() prior_agent = self.update_handler.get_latest_agent_greater_than_daemon() latest_version = self.prepare_agents(count=self.agent_count() + 1, is_available=False) latest_path = os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, latest_version)) self.assertFalse(GuestAgent.from_installed_agent(latest_path).is_available) latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.version < latest_version) self.assertEqual(latest_agent.version, prior_agent.version) def test_get_pid_files(self): pid_files = self.update_handler._get_pid_files() self.assertEqual(0, len(pid_files)) def test_get_pid_files_returns_previous(self): for n in range(1250): fileutil.write_file(os.path.join(self.tmp_dir, str(n) + "_waagent.pid"), ustr(n + 1)) pid_files = self.update_handler._get_pid_files() self.assertEqual(1250, len(pid_files)) pid_dir, pid_name, pid_re = self.update_handler._get_pid_parts() # pylint: disable=unused-variable for p in pid_files: self.assertTrue(pid_re.match(os.path.basename(p))) def test_is_clean_start_returns_true_when_no_sentinel(self): self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) self.assertTrue(self.update_handler._is_clean_start) def test_is_clean_start_returns_false_when_sentinel_exists(self): self.update_handler._set_sentinel(agent=CURRENT_AGENT) self.assertFalse(self.update_handler._is_clean_start) def test_is_clean_start_returns_false_for_exceptions(self): self.update_handler._set_sentinel() with patch("azurelinuxagent.common.utils.fileutil.read_file", side_effect=Exception): self.assertFalse(self.update_handler._is_clean_start) def test_is_orphaned_returns_false_if_parent_exists(self): fileutil.write_file(conf.get_agent_pid_file_path(), ustr(42)) with patch('os.getppid', return_value=42): self.assertFalse(self.update_handler._is_orphaned) def test_is_orphaned_returns_true_if_parent_is_init(self): with patch('os.getppid', return_value=1): self.assertTrue(self.update_handler._is_orphaned) def test_is_orphaned_returns_true_if_parent_does_not_exist(self): fileutil.write_file(conf.get_agent_pid_file_path(), ustr(24)) with patch('os.getppid', return_value=42): self.assertTrue(self.update_handler._is_orphaned) def test_purge_agents(self): self.prepare_agents() self.update_handler._find_agents() # Ensure at least three agents initially exist self.assertTrue(2 < len(self.update_handler.agents)) # Purge every other agent. 
Don't add the current version to agents_to_keep explicitly; # the current version is never purged agents_to_keep = [] kept_agents = [] purged_agents = [] for i in range(0, len(self.update_handler.agents)): if self.update_handler.agents[i].version == CURRENT_VERSION: kept_agents.append(self.update_handler.agents[i]) else: if i % 2 == 0: agents_to_keep.append(self.update_handler.agents[i]) kept_agents.append(self.update_handler.agents[i]) else: purged_agents.append(self.update_handler.agents[i]) # Reload and assert only the kept agents remain on disk self.update_handler.agents = agents_to_keep self.update_handler._purge_agents() self.update_handler._find_agents() self.assertEqual( [agent.version for agent in kept_agents], [agent.version for agent in self.update_handler.agents]) # Ensure both directories and packages are removed for agent in purged_agents: agent_path = os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, agent.version)) self.assertFalse(os.path.exists(agent_path)) self.assertFalse(os.path.exists(agent_path + ".zip")) # Ensure kept agent directories and packages remain for agent in kept_agents: agent_path = os.path.join(self.tmp_dir, "{0}-{1}".format(AGENT_NAME, agent.version)) self.assertTrue(os.path.exists(agent_path)) self.assertTrue(os.path.exists(agent_path + ".zip")) def _test_run_latest(self, mock_child=None, mock_time=None, child_args=None): if mock_child is None: mock_child = ChildMock() if mock_time is None: mock_time = TimeMock() with patch('azurelinuxagent.ga.update.subprocess.Popen', return_value=mock_child) as mock_popen: with patch('time.time', side_effect=mock_time.time): with patch('time.sleep', side_effect=mock_time.sleep): self.update_handler.run_latest(child_args=child_args) agent_calls = [args[0] for (args, _) in mock_popen.call_args_list if "run-exthandlers" in ''.join(args[0])] self.assertEqual(1, len(agent_calls), "Expected a single call to the latest agent; got: {0}. 
All mocked calls: {1}".format( agent_calls, mock_popen.call_args_list)) return mock_popen.call_args def test_run_latest(self): self.prepare_agents() with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=True): agent = self.update_handler.get_latest_agent_greater_than_daemon() args, kwargs = self._test_run_latest() args = args[0] cmds = textutil.safe_shlex_split(agent.get_agent_cmd()) if cmds[0].lower() == "python": cmds[0] = sys.executable self.assertEqual(args, cmds) self.assertTrue(len(args) > 1) self.assertRegex(args[0], r"^(/.*/python[\d.]*)$", "The command doesn't contain full python path") self.assertEqual("-run-exthandlers", args[len(args) - 1]) self.assertEqual(True, 'cwd' in kwargs) self.assertEqual(agent.get_agent_dir(), kwargs['cwd']) self.assertEqual(False, '\x00' in cmds[0]) def test_run_latest_passes_child_args(self): self.prepare_agents() self.update_handler.get_latest_agent_greater_than_daemon() args, _ = self._test_run_latest(child_args="AnArgument") args = args[0] self.assertTrue(len(args) > 1) self.assertRegex(args[0], r"^(/.*/python[\d.]*)$", "The command doesn't contain full python path") self.assertEqual("AnArgument", args[len(args) - 1]) def test_run_latest_polls_and_waits_for_success(self): mock_child = ChildMock(return_value=None) mock_time = TimeMock(time_increment=CHILD_HEALTH_INTERVAL / 3) self._test_run_latest(mock_child=mock_child, mock_time=mock_time) self.assertEqual(2, mock_child.poll.call_count) self.assertEqual(1, mock_child.wait.call_count) def test_run_latest_polling_stops_at_success(self): mock_child = ChildMock(return_value=0) mock_time = TimeMock(time_increment=CHILD_HEALTH_INTERVAL / 3) self._test_run_latest(mock_child=mock_child, mock_time=mock_time) self.assertEqual(1, mock_child.poll.call_count) self.assertEqual(0, mock_child.wait.call_count) def test_run_latest_polling_stops_at_failure(self): mock_child = ChildMock(return_value=42) mock_time = TimeMock() self._test_run_latest(mock_child=mock_child, mock_time=mock_time) self.assertEqual(1, mock_child.poll.call_count) self.assertEqual(0, mock_child.wait.call_count) def test_run_latest_polls_frequently_if_installed_is_latest(self): mock_child = ChildMock(return_value=0) # pylint: disable=unused-variable mock_time = TimeMock(time_increment=CHILD_HEALTH_INTERVAL / 2) self._test_run_latest(mock_time=mock_time) self.assertEqual(1, mock_time.sleep_interval) def test_run_latest_polls_every_second_if_installed_not_latest(self): self.prepare_agents() mock_time = TimeMock(time_increment=CHILD_HEALTH_INTERVAL / 2) self._test_run_latest(mock_time=mock_time) self.assertEqual(1, mock_time.sleep_interval) def test_run_latest_defaults_to_current(self): self.assertEqual(None, self.update_handler.get_latest_agent_greater_than_daemon()) args, kwargs = self._test_run_latest() self.assertEqual(args[0], [sys.executable, "-u", sys.argv[0], "-run-exthandlers"]) self.assertEqual(True, 'cwd' in kwargs) self.assertEqual(os.getcwd(), kwargs['cwd']) def test_run_latest_forwards_output(self): try: tempdir = tempfile.mkdtemp() stdout_path = os.path.join(tempdir, "stdout") stderr_path = os.path.join(tempdir, "stderr") with open(stdout_path, "w") as stdout: with open(stderr_path, "w") as stderr: saved_stdout, sys.stdout = sys.stdout, stdout saved_stderr, sys.stderr = sys.stderr, stderr try: self._test_run_latest(mock_child=ChildMock(side_effect=faux_logger)) finally: sys.stdout = saved_stdout sys.stderr = saved_stderr with open(stdout_path, "r") as stdout: self.assertEqual(1, len(stdout.readlines())) with 
open(stderr_path, "r") as stderr: self.assertEqual(1, len(stderr.readlines())) finally: shutil.rmtree(tempdir, True) def test_run_latest_nonzero_code_does_not_mark_failure(self): self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.is_available) self.assertEqual(0.0, latest_agent.error.last_failure) self.assertEqual(0, latest_agent.error.failure_count) with patch('azurelinuxagent.ga.update.UpdateHandler.get_latest_agent_greater_than_daemon', return_value=latest_agent): self._test_run_latest(mock_child=ChildMock(return_value=1)) self.assertFalse(latest_agent.is_blacklisted, "Agent should not be blacklisted") def test_run_latest_exception_blacklists(self): self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.is_available) self.assertEqual(0.0, latest_agent.error.last_failure) self.assertEqual(0, latest_agent.error.failure_count) verify_string = "Force blacklisting: {0}".format(str(uuid.uuid4())) with patch('azurelinuxagent.ga.update.UpdateHandler.get_latest_agent_greater_than_daemon', return_value=latest_agent): with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=True): self._test_run_latest(mock_child=ChildMock(side_effect=Exception(verify_string))) self.assertFalse(latest_agent.is_available) self.assertTrue(latest_agent.error.is_blacklisted) self.assertNotEqual(0.0, latest_agent.error.last_failure) self.assertEqual(1, latest_agent.error.failure_count) self.assertIn(verify_string, latest_agent.error.reason, "Error reason not found while blacklisting") def test_run_latest_exception_does_not_blacklist_if_terminating(self): self.prepare_agents() latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertTrue(latest_agent.is_available) self.assertEqual(0.0, latest_agent.error.last_failure) self.assertEqual(0, latest_agent.error.failure_count) with patch('azurelinuxagent.ga.update.UpdateHandler.get_latest_agent_greater_than_daemon', return_value=latest_agent): self.update_handler.is_running = False self._test_run_latest(mock_child=ChildMock(side_effect=Exception("Attempt blacklisting"))) self.assertTrue(latest_agent.is_available) self.assertFalse(latest_agent.error.is_blacklisted) self.assertEqual(0.0, latest_agent.error.last_failure) self.assertEqual(0, latest_agent.error.failure_count) @patch('signal.signal') def test_run_latest_captures_signals(self, mock_signal): self._test_run_latest() self.assertEqual(1, mock_signal.call_count) @patch('signal.signal') def test_run_latest_creates_only_one_signal_handler(self, mock_signal): self.update_handler.signal_handler = "Not None" self._test_run_latest() self.assertEqual(0, mock_signal.call_count) def test_get_latest_agent_should_return_latest_agent_even_on_bad_error_json(self): dst_ver = self.prepare_agents() # Add a malformed error.json file in all existing agents for agent_dir in self.agent_dirs(): error_file_path = os.path.join(agent_dir, AGENT_ERROR_FILE) with open(error_file_path, 'w') as f: f.write("") latest_agent = self.update_handler.get_latest_agent_greater_than_daemon() self.assertEqual(latest_agent.version, dst_ver, "Latest agent version is invalid") def test_set_agents_sets_agents(self): self.prepare_agents() self.update_handler._set_and_sort_agents([GuestAgent.from_installed_agent(path) for path in self.agent_dirs()]) self.assertTrue(len(self.update_handler.agents) > 0) self.assertEqual(len(self.agent_dirs()), len(self.update_handler.agents)) 
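    # A hedged illustration, not part of the original suite: the sorting tests
    # around this point walk the agent list with a running "previous version"
    # comparison, which is equivalent to asserting a newest-first sort of
    # FlexibleVersion values. The versions below are hypothetical.
    @staticmethod
    def _example_newest_first_invariant():
        versions = [FlexibleVersion("9.9.9.9"), FlexibleVersion("2.2.53"), FlexibleVersion("1.0.0.0")]
        previous = FlexibleVersion("100000")  # sentinel above any plausible agent version
        for v in versions:
            assert previous > v  # the same running comparison the neighboring tests use
            previous = v
        return versions == sorted(versions, reverse=True)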
def test_set_agents_sorts_agents(self): self.prepare_agents() self.update_handler._set_and_sort_agents([GuestAgent.from_installed_agent(path) for path in self.agent_dirs()]) v = FlexibleVersion("100000") for a in self.update_handler.agents: self.assertTrue(v > a.version) v = a.version def test_set_sentinel(self): self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) self.update_handler._set_sentinel() self.assertTrue(os.path.isfile(self.update_handler._sentinel_file_path())) def test_set_sentinel_writes_current_agent(self): self.update_handler._set_sentinel() self.assertTrue( fileutil.read_file(self.update_handler._sentinel_file_path()), CURRENT_AGENT) def test_shutdown(self): self.update_handler._set_sentinel() self.update_handler._shutdown() self.assertFalse(self.update_handler.is_running) self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) def test_shutdown_ignores_missing_sentinel_file(self): self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) self.update_handler._shutdown() self.assertFalse(self.update_handler.is_running) self.assertFalse(os.path.isfile(self.update_handler._sentinel_file_path())) def test_shutdown_ignores_exceptions(self): self.update_handler._set_sentinel() try: with patch("os.remove", side_effect=Exception): self.update_handler._shutdown() except Exception as e: # pylint: disable=unused-variable self.assertTrue(False, "Unexpected exception") # pylint: disable=redundant-unittest-assert def test_write_pid_file(self): for n in range(1112): fileutil.write_file(os.path.join(self.tmp_dir, str(n) + "_waagent.pid"), ustr(n + 1)) with patch('os.getpid', return_value=1112): pid_files, pid_file = self.update_handler._write_pid_file() self.assertEqual(1112, len(pid_files)) self.assertEqual("1111_waagent.pid", os.path.basename(pid_files[-1])) self.assertEqual("1112_waagent.pid", os.path.basename(pid_file)) self.assertEqual(fileutil.read_file(pid_file), ustr(1112)) def test_write_pid_file_ignores_exceptions(self): with patch('azurelinuxagent.common.utils.fileutil.write_file', side_effect=Exception): with patch('os.getpid', return_value=42): pid_files, pid_file = self.update_handler._write_pid_file() self.assertEqual(0, len(pid_files)) self.assertEqual(None, pid_file) def test_update_happens_when_extensions_disabled(self): """ Although the extension enabled config will not get checked before an update is found, this test attempts to ensure that behavior never changes. 
""" with patch('azurelinuxagent.common.conf.get_extensions_enabled', return_value=False): with patch('azurelinuxagent.ga.agent_update_handler.AgentUpdateHandler.run') as download_agent: with mock_wire_protocol(DATA_FILE) as protocol: with mock_update_handler(protocol, autoupdate_enabled=True) as update_handler: update_handler.run() self.assertEqual(1, download_agent.call_count, "Agent update did not execute (no attempts to download the agent") @staticmethod def _get_test_ext_handler_instance(protocol, name="OSTCExtensions.ExampleHandlerLinux", version="1.0.0"): eh = Extension(name=name) eh.version = version return ExtHandlerInstance(eh, protocol) def test_update_handler_recovers_from_error_with_no_certs(self): data = DATA_FILE.copy() data['goal_state'] = 'wire/goal_state_no_certs.xml' def fail_gs_fetch(url, *_, **__): if HttpRequestPredicates.is_goal_state_request(url): return MockHttpResponse(status=500) return None with mock_wire_protocol(data) as protocol: def fail_fetch_on_second_iter(iteration): if iteration == 2: protocol.set_http_handlers(http_get_handler=fail_gs_fetch) if iteration > 2: # Zero out the fail handler for subsequent iterations. protocol.set_http_handlers(http_get_handler=None) with mock_update_handler(protocol, 3, on_new_iteration=fail_fetch_on_second_iter) as update_handler: with patch("azurelinuxagent.ga.update.logger.error") as patched_error: with patch("azurelinuxagent.ga.update.logger.info") as patched_info: def match_unexpected_errors(): unexpected_msg_fragment = "Error fetching the goal state:" matching_errors = [] for (args, _) in filter(lambda a: len(a) > 0, patched_error.call_args_list): if unexpected_msg_fragment in args[0]: matching_errors.append(args[0]) if len(matching_errors) > 1: self.fail("Guest Agent did not recover, with new error(s): {}"\ .format(matching_errors[1:])) def match_expected_info(): expected_msg_fragment = "Fetching the goal state recovered from previous errors" for (call_args, _) in filter(lambda a: len(a) > 0, patched_info.call_args_list): if expected_msg_fragment in call_args[0]: break else: self.fail("Expected the guest agent to recover with '{}', but it didn't"\ .format(expected_msg_fragment)) update_handler.run(debug=True) match_unexpected_errors() # Match on errors first, they can provide more info. match_expected_info() def test_it_should_recreate_handler_env_on_service_startup(self): iterations = 5 with _get_update_handler(iterations, autoupdate_enabled=False) as (update_handler, protocol): update_handler.run(debug=True) expected_handler = self._get_test_ext_handler_instance(protocol) handler_env_file = expected_handler.get_env_file() self.assertTrue(os.path.exists(expected_handler.get_base_dir()), "Extension not found") # First iteration should install the extension handler and # subsequent iterations should not recreate the HandlerEnvironment file last_modification_time = os.path.getmtime(handler_env_file) self.assertEqual(os.path.getctime(handler_env_file), last_modification_time, "The creation time and last modified time of the HandlerEnvironment file dont match") # Simulate a service restart by getting a new instance of the update handler and protocol and # re-runnning the update handler. Then,ensure that the HandlerEnvironment file is recreated with eventsFolder # flag in HandlerEnvironment.json file. 
        self._add_write_permission_to_goal_state_files()
        with _get_update_handler(iterations=1, autoupdate_enabled=False) as (update_handler, protocol):
            with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True):
                update_handler.run(debug=True)

        self.assertGreater(os.path.getmtime(handler_env_file), last_modification_time,
                           "HandlerEnvironment file didn't get overwritten")

        with open(handler_env_file, 'r') as handler_env_content_file:
            content = json.load(handler_env_content_file)
        self.assertIn(HandlerEnvironment.eventsFolder, content[0][HandlerEnvironment.handlerEnvironment],
                      "{0} not found in HandlerEnv file".format(HandlerEnvironment.eventsFolder))

    def test_it_should_not_setup_persistent_firewall_rules_if_EnableFirewall_is_disabled(self):
        executed_firewall_commands = []

        def _mock_popen(cmd, *args, **kwargs):
            if 'firewall-cmd' in cmd:
                executed_firewall_commands.append(cmd)
                cmd = ["echo", "running"]
            return _ORIGINAL_POPEN(cmd, *args, **kwargs)

        with _get_update_handler(iterations=1) as (update_handler, _):
            with patch("azurelinuxagent.common.logger.info") as patch_info:
                with patch("azurelinuxagent.common.utils.shellutil.subprocess.Popen", side_effect=_mock_popen):
                    with patch('azurelinuxagent.common.conf.enable_firewall', return_value=False):
                        with patch("azurelinuxagent.common.logger.warn") as patch_warn:
                            update_handler.run(debug=True)

        self.assertEqual(0, update_handler.get_exit_code(),
                         "Exit code should be 0; List of all warnings logged by the agent: {0}".format(
                             patch_warn.call_args_list))
        self.assertEqual(0, len(executed_firewall_commands), "firewall-cmd should not be called at all")
        self.assertTrue(any(
            "Not setting up persistent firewall rules as OS.EnableFirewall=False" == args[0]
            for (args, _) in patch_info.call_args_list),
            "Info not logged properly, got: {0}".format(patch_info.call_args_list))
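    # The _mock_popen pattern above (and in the next test) intercepts subprocess.Popen:
    # any 'firewall-cmd' invocation is recorded and then replaced with a harmless
    # `echo running`, so the test can assert on the commands the agent *attempted*
    # without ever touching the host firewall.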
Need to revisit to fix it") def test_it_should_setup_persistent_firewall_rules_on_startup(self): iterations = 1 executed_commands = [] def _mock_popen(cmd, *args, **kwargs): if 'firewall-cmd' in cmd: executed_commands.append(cmd) cmd = ["echo", "running"] return _ORIGINAL_POPEN(cmd, *args, **kwargs) with _get_update_handler(iterations) as (update_handler, _): with patch("azurelinuxagent.common.utils.shellutil.subprocess.Popen", side_effect=_mock_popen) as mock_popen: with patch('azurelinuxagent.common.conf.enable_firewall', return_value=True): with patch('azurelinuxagent.common.osutil.systemd.is_systemd', return_value=True): with patch("azurelinuxagent.common.logger.warn") as patch_warn: update_handler.run(debug=True) self.assertEqual(0, update_handler.get_exit_code(), "Exit code should be 0; List of all warnings logged by the agent: {0}".format( patch_warn.call_args_list)) # Firewall-cmd should only be called 4 times - 1st to check if running, 2nd, 3rd and 4th for the QueryPassThrough cmd self.assertEqual(4, len(executed_commands), "The number of times firewall-cmd should be called is only 4; Executed firewall commands: {0}; All popen calls: {1}".format( executed_commands, mock_popen.call_args_list)) self.assertEqual(PersistFirewallRulesHandler._FIREWALLD_RUNNING_CMD, executed_commands.pop(0), "First command should be to check if firewalld is running") self.assertTrue([FirewallCmdDirectCommands.QueryPassThrough in cmd for cmd in executed_commands], "The remaining commands should only be for querying the firewall commands") def test_it_should_set_dns_tcp_iptable_if_drop_available_accept_unavailable(self): with TestOSUtil._mock_iptables() as mock_iptables: with _get_update_handler(test_data=DATA_FILE) as (update_handler, _): with patch('azurelinuxagent.common.conf.enable_firewall', return_value=True): with patch.object(osutil, '_enable_firewall', True): # drop rule is present mock_iptables.set_command( AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=0) # non root tcp iptable rule is absent mock_iptables.set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait), exit_code=1) update_handler._add_accept_tcp_firewall_rule_if_not_enabled(mock_iptables.destination) drop_check_command = TestOSUtil._command_to_string( AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait)) accept_tcp_check_rule = TestOSUtil._command_to_string( AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND, mock_iptables.destination, wait=mock_iptables.wait)) accept_tcp_insert_rule = TestOSUtil._command_to_string( AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.INSERT_COMMAND, mock_iptables.destination, wait=mock_iptables.wait)) # Filtering the mock iptable command calls with only the ones related to this test. 
                        # Filter the mock iptables command calls down to only the ones related to this test.
                        filtered_mock_iptable_calls = [cmd for cmd in mock_iptables.command_calls if
                                                       cmd in [drop_check_command, accept_tcp_check_rule, accept_tcp_insert_rule]]

                        self.assertEqual(len(filtered_mock_iptable_calls), 3,
                                         "Incorrect number of calls to iptables: [{0}]".format(
                                             mock_iptables.command_calls))
                        self.assertEqual(filtered_mock_iptable_calls[0], drop_check_command,
                                         "The first command should check the drop rule")
                        self.assertEqual(filtered_mock_iptable_calls[1], accept_tcp_check_rule,
                                         "The second command should check the accept rule")
                        self.assertEqual(filtered_mock_iptable_calls[2], accept_tcp_insert_rule,
                                         "The third command should add the accept rule")

    def test_it_should_not_set_dns_tcp_iptable_if_drop_unavailable(self):
        with TestOSUtil._mock_iptables() as mock_iptables:
            with _get_update_handler(test_data=DATA_FILE) as (update_handler, _):
                with patch('azurelinuxagent.common.conf.enable_firewall', return_value=True):
                    with patch.object(osutil, '_enable_firewall', True):
                        # drop rule is not available
                        mock_iptables.set_command(
                            AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND,
                                                                         mock_iptables.destination,
                                                                         wait=mock_iptables.wait), exit_code=1)

                        update_handler._add_accept_tcp_firewall_rule_if_not_enabled(mock_iptables.destination)

                        drop_check_command = TestOSUtil._command_to_string(
                            AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND,
                                                                         mock_iptables.destination,
                                                                         wait=mock_iptables.wait))
                        accept_tcp_check_rule = TestOSUtil._command_to_string(
                            AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND,
                                                                 mock_iptables.destination,
                                                                 wait=mock_iptables.wait))
                        accept_tcp_insert_rule = TestOSUtil._command_to_string(
                            AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.INSERT_COMMAND,
                                                                 mock_iptables.destination,
                                                                 wait=mock_iptables.wait))

                        # Filter the mock iptables command calls down to only the ones related to this test.
                        filtered_mock_iptable_calls = [cmd for cmd in mock_iptables.command_calls if
                                                       cmd in [drop_check_command, accept_tcp_check_rule, accept_tcp_insert_rule]]

                        self.assertEqual(len(filtered_mock_iptable_calls), 1,
                                         "Incorrect number of calls to iptables: [{0}]".format(
                                             mock_iptables.command_calls))
                        self.assertEqual(filtered_mock_iptable_calls[0], drop_check_command,
                                         "The first command should check the drop rule")

    def test_it_should_not_set_dns_tcp_iptable_if_drop_and_accept_available(self):
        with TestOSUtil._mock_iptables() as mock_iptables:
            with _get_update_handler(test_data=DATA_FILE) as (update_handler, _):
                with patch('azurelinuxagent.common.conf.enable_firewall', return_value=True):
                    with patch.object(osutil, '_enable_firewall', True):
                        # drop rule is available
                        mock_iptables.set_command(
                            AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND,
                                                                         mock_iptables.destination,
                                                                         wait=mock_iptables.wait), exit_code=0)
                        # non root tcp iptable rule is available
                        mock_iptables.set_command(AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND,
                                                                                       mock_iptables.destination,
                                                                                       wait=mock_iptables.wait),
                                                  exit_code=0)

                        update_handler._add_accept_tcp_firewall_rule_if_not_enabled(mock_iptables.destination)

                        drop_check_command = TestOSUtil._command_to_string(
                            AddFirewallRules.get_wire_non_root_drop_rule(AddFirewallRules.CHECK_COMMAND,
                                                                         mock_iptables.destination,
                                                                         wait=mock_iptables.wait))
                        accept_tcp_check_rule = TestOSUtil._command_to_string(
                            AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.CHECK_COMMAND,
                                                                 mock_iptables.destination,
                                                                 wait=mock_iptables.wait))
                        accept_tcp_insert_rule = TestOSUtil._command_to_string(
                            AddFirewallRules.get_accept_tcp_rule(AddFirewallRules.INSERT_COMMAND,
                                                                 mock_iptables.destination,
                                                                 wait=mock_iptables.wait))

                        # Filter the mock iptables command calls down to only the ones related to this test.
                        filtered_mock_iptable_calls = [cmd for cmd in mock_iptables.command_calls if
                                                       cmd in [drop_check_command, accept_tcp_check_rule, accept_tcp_insert_rule]]

                        self.assertEqual(len(filtered_mock_iptable_calls), 2,
                                         "Incorrect number of calls to iptables: [{0}]".format(
                                             mock_iptables.command_calls))
                        self.assertEqual(filtered_mock_iptable_calls[0], drop_check_command,
                                         "The first command should check the drop rule")
                        self.assertEqual(filtered_mock_iptable_calls[1], accept_tcp_check_rule,
                                         "The second command should check the accept rule")

    @contextlib.contextmanager
    def _setup_test_for_ext_event_dirs_retention(self):
        try:
            with _get_update_handler(test_data=DATA_FILE_MULTIPLE_EXT, autoupdate_enabled=False) as (update_handler, protocol):
                with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True):
                    update_handler.run(debug=True)
                    expected_events_dirs = glob.glob(os.path.join(conf.get_ext_log_dir(), "*", EVENTS_DIRECTORY))
                    no_of_extensions = protocol.mock_wire_data.get_no_of_plugins_in_extension_config()
                    # Ensure the extensions were installed and the events directories created
                    self.assertEqual(len(expected_events_dirs), no_of_extensions, "Extension events directories don't match")
                    for ext_dir in expected_events_dirs:
                        self.assertTrue(os.path.exists(ext_dir), "Extension directory {0} not created!".format(ext_dir))
                    yield update_handler, expected_events_dirs
        finally:
            # The TestUpdate.setUp() initializes the self.tmp_dir to be used as a placeholder
            # for everything (event logger, status logger, conf.get_lib_dir() and more).
            # Since we add more data to the dir for this test, ensure it's completely clean before exiting the test.
            shutil.rmtree(self.tmp_dir, ignore_errors=True)
            self.tmp_dir = None

    def test_it_should_delete_extension_events_directory_if_extension_telemetry_pipeline_disabled(self):
        # Disable the extension telemetry pipeline and ensure the events directories are deleted
        with self._setup_test_for_ext_event_dirs_retention() as (update_handler, expected_events_dirs):
            with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", False):
                self._add_write_permission_to_goal_state_files()
                update_handler.run(debug=True)
                for ext_dir in expected_events_dirs:
                    self.assertFalse(os.path.exists(ext_dir), "Extension directory {0} still exists!".format(ext_dir))

    def test_it_should_retain_extension_events_directories_if_extension_telemetry_pipeline_enabled(self):
        # Rerun the update handler with the extension telemetry pipeline enabled to ensure we don't delete the events directories
        with self._setup_test_for_ext_event_dirs_retention() as (update_handler, expected_events_dirs):
            self._add_write_permission_to_goal_state_files()
            update_handler.run(debug=True)
            for ext_dir in expected_events_dirs:
                self.assertTrue(os.path.exists(ext_dir), "Extension directory {0} should exist!".format(ext_dir))

    def test_it_should_recreate_extension_event_directories_for_existing_extensions_if_extension_telemetry_pipeline_enabled(self):
        with self._setup_test_for_ext_event_dirs_retention() as (update_handler, expected_events_dirs):
            # Delete the existing events directories
            for ext_dir in expected_events_dirs:
                shutil.rmtree(ext_dir, ignore_errors=True)
                self.assertFalse(os.path.exists(ext_dir), "Extension directory not deleted")

            with patch("azurelinuxagent.common.agent_supported_feature._ETPFeature.is_supported", True):
                self._add_write_permission_to_goal_state_files()
                update_handler.run(debug=True)
                for ext_dir in expected_events_dirs:
                    self.assertTrue(os.path.exists(ext_dir), "Extension directory {0} should exist!".format(ext_dir))

    def test_it_should_report_update_status_in_status_blob(self):
        with mock_wire_protocol(DATA_FILE) as protocol:
            with patch.object(conf, "get_autoupdate_gafamily", return_value="Prod"):
                with patch("azurelinuxagent.common.conf.get_enable_ga_versioning", return_value=True):
                    with patch("azurelinuxagent.common.logger.warn") as patch_warn:

                        protocol.aggregate_status = None
                        protocol.incarnation = 1

                        def get_handler(url, **kwargs):
                            if HttpRequestPredicates.is_agent_package_request(url):
                                return MockHttpResponse(status=httpclient.SERVICE_UNAVAILABLE)
                            return protocol.mock_wire_data.mock_http_get(url, **kwargs)

                        def put_handler(url, *args, **_):
                            if HttpRequestPredicates.is_host_plugin_status_request(url):
                                # Skip reading the HostGA request data as it's encoded
                                return MockHttpResponse(status=500)
                            protocol.aggregate_status = json.loads(args[0])
                            return MockHttpResponse(status=201)

                        def update_goal_state_and_run_handler(autoupdate_enabled=True):
                            protocol.incarnation += 1
                            protocol.mock_wire_data.set_incarnation(protocol.incarnation)
                            self._add_write_permission_to_goal_state_files()
                            with _get_update_handler(iterations=1, protocol=protocol, autoupdate_enabled=autoupdate_enabled) as (update_handler, _):
                                update_handler.run(debug=True)
                                self.assertEqual(0, update_handler.get_exit_code(),
                                                 "Exit code should be 0; List of all warnings logged by the agent: {0}".format(
                                                     patch_warn.call_args_list))

                        protocol.set_http_handlers(http_get_handler=get_handler, http_put_handler=put_handler)
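                        # The three cases below walk the update-status state machine: a missing
                        # rsm version is reported as an error, an rsm version equal to the running
                        # version is a success, and an rsm version the agent cannot move to is an
                        # error carrying the expected version.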
                        # Case 1: rsm version missing in GS when the VM is opted in for rsm upgrades; report a missing rsm version error
                        protocol.mock_wire_data.set_extension_config("wire/ext_conf_version_missing_in_agent_family.xml")
                        update_goal_state_and_run_handler()
                        self.assertTrue("updateStatus" in protocol.aggregate_status['aggregateStatus']['guestAgentStatus'],
                                        "updateStatus should be reported")
                        update_status = protocol.aggregate_status['aggregateStatus']['guestAgentStatus']["updateStatus"]
                        self.assertEqual(VMAgentUpdateStatuses.Error, update_status['status'], "Status should be an error")
                        self.assertEqual(update_status['code'], 1, "incorrect code reported")
                        self.assertIn("missing version property. So, skipping agent update", update_status['formattedMessage']['message'], "incorrect message reported")

                        # Case 2: rsm version in GS == Current Version; updateStatus should be Success
                        protocol.mock_wire_data.set_extension_config("wire/ext_conf_rsm_version.xml")
                        protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION))
                        update_goal_state_and_run_handler()
                        self.assertTrue("updateStatus" in protocol.aggregate_status['aggregateStatus']['guestAgentStatus'],
                                        "updateStatus should be reported if asked in GS")
                        update_status = protocol.aggregate_status['aggregateStatus']['guestAgentStatus']["updateStatus"]
                        self.assertEqual(VMAgentUpdateStatuses.Success, update_status['status'], "Status should be successful")
                        self.assertEqual(update_status['expectedVersion'], str(CURRENT_VERSION), "incorrect version reported")
                        self.assertEqual(update_status['code'], 0, "incorrect code reported")

                        # Case 3: rsm version in GS != Current Version; the update fails and an error is reported
                        protocol.mock_wire_data.set_extension_config("wire/ext_conf_rsm_version.xml")
                        protocol.mock_wire_data.set_version_in_agent_family("5.2.0.1")
                        update_goal_state_and_run_handler()
                        self.assertTrue("updateStatus" in protocol.aggregate_status['aggregateStatus']['guestAgentStatus'],
                                        "updateStatus should be in status blob. Warns: {0}".format(patch_warn.call_args_list))
                        update_status = protocol.aggregate_status['aggregateStatus']['guestAgentStatus']["updateStatus"]
                        self.assertEqual(VMAgentUpdateStatuses.Error, update_status['status'], "Status should be an error")
                        self.assertEqual(update_status['expectedVersion'], "5.2.0.1", "incorrect version reported")
                        self.assertEqual(update_status['code'], 1, "incorrect code reported")

    def test_it_should_wait_to_fetch_first_goal_state(self):
        with _get_update_handler() as (update_handler, protocol):
            with patch("azurelinuxagent.common.logger.error") as patch_error:
                with patch("azurelinuxagent.common.logger.info") as patch_info:
                    # Fail the GS fetch for the first 5 times the agent asks for it
                    update_handler._fail_gs_count = 5

                    def get_handler(url, **kwargs):
                        if HttpRequestPredicates.is_goal_state_request(url) and update_handler._fail_gs_count > 0:
                            update_handler._fail_gs_count -= 1
                            return MockHttpResponse(status=500)
                        return protocol.mock_wire_data.mock_http_get(url, **kwargs)

                    protocol.set_http_handlers(http_get_handler=get_handler)
                    update_handler.run(debug=True)

                    self.assertEqual(0, update_handler.get_exit_code(),
                                     "Exit code should be 0; List of all errors logged by the agent: {0}".format(
                                         patch_error.call_args_list))

                    error_msgs = [args[0] for (args, _) in patch_error.call_args_list
                                  if "Error fetching the goal state" in args[0]]
                    self.assertTrue(len(error_msgs) > 0, "Error should've been reported when failed to retrieve GS")

                    info_msgs = [args[0] for (args, _) in patch_info.call_args_list
                                 if "Fetching the goal state recovered from previous errors." in args[0]]
                    self.assertTrue(len(info_msgs) > 0, "Agent should've logged a message when recovered from GS errors")


class TestUpdateWaitForCloudInit(AgentTestCase):
    @staticmethod
    @contextlib.contextmanager
    def create_mock_run_command(delay=None):
        def run_command_mock(cmd, *args, **kwargs):
            if cmd == ["cloud-init", "status", "--wait"]:
                if delay is not None:
                    original_run_command(['sleep', str(delay)], *args, **kwargs)
                return "cloud-init completed"
            return original_run_command(cmd, *args, **kwargs)
        original_run_command = shellutil.run_command

        with patch("azurelinuxagent.ga.update.shellutil.run_command", side_effect=run_command_mock) as run_command_patch:
            yield run_command_patch

    def test_it_should_not_wait_for_cloud_init_by_default(self):
        update_handler = UpdateHandler()
        with self.create_mock_run_command() as run_command_patch:
            update_handler._wait_for_cloud_init()
            self.assertTrue(run_command_patch.call_count == 0, "'cloud-init status --wait' should not be called by default")

    def test_it_should_wait_for_cloud_init_when_requested(self):
        update_handler = UpdateHandler()
        with patch("azurelinuxagent.ga.update.conf.get_wait_for_cloud_init", return_value=True):
            with self.create_mock_run_command() as run_command_patch:
                update_handler._wait_for_cloud_init()
                self.assertEqual(1, run_command_patch.call_count, "'cloud-init status --wait' should have been called once")
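    # Note: create_mock_run_command() intercepts shellutil.run_command so that
    # 'cloud-init status --wait' never runs for real; the optional delay is simulated
    # with a plain 'sleep', which is what lets the timeout test below exercise the
    # command-timeout path.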
Log calls: {0}".format(mock_logger.call_args_list)) def test_update_handler_should_wait_for_cloud_init_after_agent_update_and_before_extension_processing(self): method_calls = [] agent_update_handler = Mock() agent_update_handler.run = lambda *_, **__: method_calls.append("AgentUpdateHandler.run()") exthandlers_handler = Mock() exthandlers_handler.run = lambda *_, **__: method_calls.append("ExtHandlersHandler.run()") with mock_wire_protocol(DATA_FILE) as protocol: with mock_update_handler(protocol, iterations=1, agent_update_handler=agent_update_handler, exthandlers_handler=exthandlers_handler) as update_handler: with patch('azurelinuxagent.ga.update.UpdateHandler._wait_for_cloud_init', side_effect=lambda *_, **__: method_calls.append("UpdateHandler._wait_for_cloud_init()")): update_handler.run() self.assertListEqual(["AgentUpdateHandler.run()", "UpdateHandler._wait_for_cloud_init()", "ExtHandlersHandler.run()"], method_calls, "Wait for cloud-init should happen after agent update and before extension processing") class UpdateHandlerRunTestCase(AgentTestCase): def _test_run(self, autoupdate_enabled=False, check_daemon_running=False, expected_exit_code=0, emit_restart_event=None): fileutil.write_file(conf.get_agent_pid_file_path(), ustr(42)) with patch('azurelinuxagent.ga.update.get_monitor_handler') as mock_monitor: with patch('azurelinuxagent.ga.remoteaccess.get_remote_access_handler') as mock_ra_handler: with patch('azurelinuxagent.ga.update.get_env_handler') as mock_env: with patch('azurelinuxagent.ga.update.get_collect_logs_handler') as mock_collect_logs: with patch('azurelinuxagent.ga.update.get_send_telemetry_events_handler') as mock_telemetry_send_events: with patch('azurelinuxagent.ga.update.get_collect_telemetry_events_handler') as mock_event_collector: with patch('azurelinuxagent.ga.update.initialize_event_logger_vminfo_common_parameters'): with patch('azurelinuxagent.ga.update.is_log_collection_allowed', return_value=True): with mock_wire_protocol(DATA_FILE) as protocol: mock_exthandlers_handler = Mock() with mock_update_handler( protocol, exthandlers_handler=mock_exthandlers_handler, remote_access_handler=mock_ra_handler, autoupdate_enabled=autoupdate_enabled, check_daemon_running=check_daemon_running ) as update_handler: if emit_restart_event is not None: update_handler._emit_restart_event = emit_restart_event if isinstance(os.getppid, MagicMock): update_handler.run() else: with patch('os.getppid', return_value=42): update_handler.run() self.assertEqual(1, mock_monitor.call_count) self.assertEqual(1, mock_env.call_count) self.assertEqual(1, mock_collect_logs.call_count) self.assertEqual(1, mock_telemetry_send_events.call_count) self.assertEqual(1, mock_event_collector.call_count) self.assertEqual(expected_exit_code, update_handler.get_exit_code()) if update_handler.get_iterations_completed() > 0: # some test cases exit before executing extensions or remote access self.assertEqual(1, mock_exthandlers_handler.run.call_count) self.assertEqual(1, mock_ra_handler.run.call_count) return update_handler def test_run(self): self._test_run() def test_run_stops_if_orphaned(self): with patch('os.getppid', return_value=1): update_handler = self._test_run(check_daemon_running=True) self.assertEqual(0, update_handler.get_iterations_completed()) def test_run_clears_sentinel_on_successful_exit(self): update_handler = self._test_run() self.assertFalse(os.path.isfile(update_handler._sentinel_file_path())) def test_run_leaves_sentinel_on_unsuccessful_exit(self): with 
        with patch('azurelinuxagent.ga.agent_update_handler.AgentUpdateHandler.run', side_effect=Exception):
            update_handler = self._test_run(autoupdate_enabled=True, expected_exit_code=1)
            self.assertTrue(os.path.isfile(update_handler._sentinel_file_path()))

    def test_run_emits_restart_event(self):
        update_handler = self._test_run(emit_restart_event=Mock())
        self.assertEqual(1, update_handler._emit_restart_event.call_count)


class TestAgentUpgrade(UpdateTestCase):

    @contextlib.contextmanager
    def create_conf_mocks(self, autoupdate_frequency, hotfix_frequency, normal_frequency):
        # Disabling extension processing to speed up tests, as this class deals with testing agent upgrades
        with patch("azurelinuxagent.common.conf.get_extensions_enabled", return_value=False):
            with patch("azurelinuxagent.common.conf.get_autoupdate_frequency", return_value=autoupdate_frequency):
                with patch("azurelinuxagent.common.conf.get_self_update_hotfix_frequency", return_value=hotfix_frequency):
                    with patch("azurelinuxagent.common.conf.get_self_update_regular_frequency", return_value=normal_frequency):
                        with patch("azurelinuxagent.common.conf.get_autoupdate_gafamily", return_value="Prod"):
                            with patch("azurelinuxagent.common.conf.get_enable_ga_versioning", return_value=True):
                                yield

    @contextlib.contextmanager
    def __get_update_handler(self, iterations=1, test_data=None, reload_conf=None,
                             autoupdate_frequency=0.001, hotfix_frequency=1.0, normal_frequency=2.0):
        test_data = DATA_FILE if test_data is None else test_data

        with _get_update_handler(iterations, test_data) as (update_handler, protocol):

            protocol.aggregate_status = None

            def get_handler(url, **kwargs):
                if reload_conf is not None:
                    reload_conf(url, protocol)

                if HttpRequestPredicates.is_agent_package_request(url):
                    agent_pkg = load_bin_data(self._get_agent_file_name(), self._agent_zip_dir)
                    protocol.mock_wire_data.call_counts['agentArtifact'] += 1
                    return MockHttpResponse(status=httpclient.OK, body=agent_pkg)
                return protocol.mock_wire_data.mock_http_get(url, **kwargs)

            def put_handler(url, *args, **_):
                if HttpRequestPredicates.is_host_plugin_status_request(url):
                    # Skip reading the HostGA request data as it's encoded
                    return MockHttpResponse(status=500)
                protocol.aggregate_status = json.loads(args[0])
                return MockHttpResponse(status=201)

            protocol.set_http_handlers(http_get_handler=get_handler, http_put_handler=put_handler)

            with self.create_conf_mocks(autoupdate_frequency, hotfix_frequency, normal_frequency):
                with patch("azurelinuxagent.common.event.EventLogger.add_event") as mock_telemetry:
                    update_handler._protocol = protocol
                    yield update_handler, mock_telemetry

    def __assert_exit_code_successful(self, update_handler):
        self.assertEqual(0, update_handler.get_exit_code(), "Exit code should be 0")

    def __assert_upgrade_telemetry_emitted(self, mock_telemetry, upgrade=True, version="9.9.9.10"):
        upgrade_event_msgs = [kwarg['message'] for _, kwarg in mock_telemetry.call_args_list if
                              'Current Agent {0} completed all update checks, exiting current process to {1} to the new Agent version {2}'.format(
                                  CURRENT_VERSION, "upgrade" if upgrade else "downgrade", version) in kwarg['message'] and kwarg[
                                  'op'] == WALAEventOperation.AgentUpgrade]
        self.assertEqual(1, len(upgrade_event_msgs),
                         "Did not find the event indicating that the agent was upgraded. Got: {0}".format(
                             mock_telemetry.call_args_list))
Got: {0}".format( mock_telemetry.call_args_list)) def __assert_agent_directories_available(self, versions): for version in versions: self.assertTrue(os.path.exists(self.agent_dir(version)), "Agent directory {0} not found".format(version)) def __assert_agent_directories_exist_and_others_dont_exist(self, versions): self.__assert_agent_directories_available(versions=versions) other_agents = [agent_dir for agent_dir in self.agent_dirs() if agent_dir not in [self.agent_dir(version) for version in versions]] self.assertFalse(any(other_agents), "All other agents should be purged from agent dir: {0}".format(other_agents)) def __assert_ga_version_in_status(self, aggregate_status, version=str(CURRENT_VERSION)): self.assertIsNotNone(aggregate_status, "Status should be reported") self.assertEqual(aggregate_status['aggregateStatus']['guestAgentStatus']['version'], version, "Status should be reported from the Current version") self.assertEqual(aggregate_status['aggregateStatus']['guestAgentStatus']['status'], 'Ready', "Guest Agent should be reported as Ready") def test_it_should_upgrade_agent_on_process_start_if_auto_upgrade_enabled(self): data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self.__get_update_handler(test_data=data_file, iterations=10) as (update_handler, mock_telemetry): update_handler.run(debug=True) self.__assert_exit_code_successful(update_handler) self.assertEqual(1, update_handler.get_iterations(), "Update handler should've exited after the first run") self.__assert_agent_directories_available(versions=["9.9.9.10"]) self.__assert_upgrade_telemetry_emitted(mock_telemetry) def test_it_should_not_update_agent_with_rsm_if_gs_not_updated_in_next_attempts(self): no_of_iterations = 10 data_file = DATA_FILE.copy() data_file['ext_conf'] = "wire/ext_conf_rsm_version.xml" self.prepare_agents(1) test_frequency = 10 with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, autoupdate_frequency=test_frequency) as (update_handler, _): # Given version which will fail on first attempt, then rsm shouldn't make any futher attempts since GS is not updated update_handler._protocol.mock_wire_data.set_version_in_agent_family("5.2.1.0") update_handler._protocol.mock_wire_data.set_incarnation(2) update_handler.run(debug=True) self.__assert_exit_code_successful(update_handler) self.assertEqual(no_of_iterations, update_handler.get_iterations(), "Update handler should've run its course") self.assertFalse(os.path.exists(self.agent_dir("5.2.0.1")), "New agent directory should not be found") self.assertGreaterEqual(update_handler._protocol.mock_wire_data.call_counts["manifest_of_ga.xml"], 1, "only 1 agent manifest call should've been made - 1 per incarnation") def test_it_should_not_auto_upgrade_if_auto_update_disabled(self): with self.__get_update_handler(iterations=10) as (update_handler, _): with patch("azurelinuxagent.common.conf.get_autoupdate_enabled", return_value=False): update_handler.run(debug=True) self.__assert_exit_code_successful(update_handler) self.assertGreaterEqual(update_handler.get_iterations(), 10, "Update handler should've run 10 times") self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found") def test_it_should_download_only_rsm_version_if_available(self): data_file = wire_protocol_data.DATA_FILE.copy() data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml" with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry): 
        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="9.9.9.10")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["9.9.9.10"])

    def test_it_should_download_largest_version_if_ga_versioning_disabled(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"
        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            with patch.object(conf, "get_enable_ga_versioning", return_value=False):
                update_handler.run(debug=True)

                self.__assert_exit_code_successful(update_handler)
                self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0")
                self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0"])

    def test_it_should_cleanup_all_agents_except_rsm_version_and_current_version(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"

        # Set up the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="9.9.9.10")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["9.9.9.10", str(CURRENT_VERSION)])

    def test_it_should_not_update_if_rsm_version_not_found_in_manifest(self):
        self.prepare_agents(1)
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_version_missing_in_manifest.xml"
        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=[str(CURRENT_VERSION)])
            agent_msgs = [kwarg for _, kwarg in mock_telemetry.call_args_list if
                          kwarg['op'] in (WALAEventOperation.AgentUpgrade, WALAEventOperation.Download)]
            # next() will throw if the corresponding message is not found, so not asserting on that
            rsm_version_found = next(kwarg for kwarg in agent_msgs if
                                     "New agent version:5.2.1.0 requested by RSM in Goal state incarnation_1, will update the agent before processing the goal state" in kwarg['message'])
            self.assertTrue(rsm_version_found['is_success'], "The rsm version found op should be reported as a success")

            skipping_update = next(kwarg for kwarg in agent_msgs if
                                   "No matching package found in the agent manifest for version: 5.2.1.0 in goal state incarnation: incarnation_1, skipping agent update" in kwarg['message'])
            self.assertEqual(skipping_update['version'], str(CURRENT_VERSION),
                             "The not found message should be reported from current agent version")
            self.assertFalse(skipping_update['is_success'], "The not found op should be reported as a failure")

    def test_it_should_try_downloading_rsm_version_on_new_incarnation(self):
        no_of_iterations = 1000

        # Set up the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 10 \
                    and mock_wire_data.call_counts["goalstate"] < 15:
                # Ensure we didn't try to download any agents except during the incarnation change
                self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)])

                # Update the rsm version to "99999.0.0.0"
                update_handler._protocol.mock_wire_data.set_version_in_agent_family("99999.0.0.0")
                reload_conf.call_count += 1
                self._add_write_permission_to_goal_state_files()
                reload_conf.incarnation += 1
                mock_wire_data.set_incarnation(reload_conf.incarnation)

        reload_conf.call_count = 0
        reload_conf.incarnation = 2

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file,
                                       reload_conf=reload_conf) as (update_handler, mock_telemetry):
            update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION))
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.assertGreaterEqual(reload_conf.call_count, 1, "Reload conf not updated as expected")
            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0", str(CURRENT_VERSION)])
            self.assertEqual(update_handler._protocol.mock_wire_data.call_counts['agentArtifact'], 1,
                             "only 1 agent should've been downloaded - 1 per incarnation")
            self.assertGreaterEqual(update_handler._protocol.mock_wire_data.call_counts["manifest_of_ga.xml"], 1,
                                    "only 1 agent manifest call should've been made - 1 per incarnation")

    def test_it_should_update_to_largest_version_if_rsm_version_not_available(self):
        no_of_iterations = 100

        # Set up the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5:
                reload_conf.call_count += 1

                # By this point, the GS with the rsm version should've been executed. Verify that:
                self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)])

                # Update the ga_manifest and incarnation to send the largest-version manifest;
                # this should download the largest version requested in config
                mock_wire_data.data_files["ga_manifest"] = "wire/ga_manifest.xml"
                mock_wire_data.reload()
                self._add_write_permission_to_goal_state_files()
                reload_conf.incarnation += 1
                mock_wire_data.set_incarnation(reload_conf.incarnation)

        reload_conf.call_count = 0
        reload_conf.incarnation = 2

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf.xml"
        data_file["ga_manifest"] = "wire/ga_manifest_no_upgrade.xml"
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file,
                                       reload_conf=reload_conf) as (update_handler, mock_telemetry):
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated")
            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0", str(CURRENT_VERSION)])

    def test_it_should_not_update_largest_version_if_time_window_not_elapsed(self):
        no_of_iterations = 20

        # Set up the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5:
                reload_conf.call_count += 1

                self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)])

                # Update the ga_manifest and incarnation to send the largest-version manifest
                mock_wire_data.data_files["ga_manifest"] = "wire/ga_manifest.xml"
                mock_wire_data.reload()
                self._add_write_permission_to_goal_state_files()
                reload_conf.incarnation += 1
                mock_wire_data.set_incarnation(reload_conf.incarnation)

        reload_conf.call_count = 0
        reload_conf.incarnation = 2

        data_file = wire_protocol_data.DATA_FILE.copy()
        # This is to fail the agent update on the first attempt, so that the agent doesn't go through with the update
        data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml"
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf,
                                       hotfix_frequency=10, normal_frequency=10) as (update_handler, _):
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated")
            self.__assert_exit_code_successful(update_handler)
            self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found")

    def test_it_should_update_largest_version_if_time_window_elapsed(self):
        no_of_iterations = 20

        # Set up the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5:
                reload_conf.call_count += 1

                self.__assert_agent_directories_available(versions=[str(CURRENT_VERSION)])

                # Update the ga_manifest and incarnation to send the largest-version manifest
                mock_wire_data.data_files["ga_manifest"] = "wire/ga_manifest.xml"
                mock_wire_data.reload()
                self._add_write_permission_to_goal_state_files()
                reload_conf.incarnation += 1
                mock_wire_data.set_incarnation(reload_conf.incarnation)

        reload_conf.call_count = 0
        reload_conf.incarnation = 2

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ga_manifest"] = "wire/ga_manifest_no_uris.xml"
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf,
                                       hotfix_frequency=0.001, normal_frequency=0.001) as (update_handler, mock_telemetry):
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated")
            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0", str(CURRENT_VERSION)])

    def test_it_should_not_download_anything_if_rsm_version_is_current_version(self):
        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"

        # Set up the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        with self.__get_update_handler(test_data=data_file) as (update_handler, _):
            update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION))
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.assertFalse(os.path.exists(self.agent_dir("99999.0.0.0")), "New agent directory should not be found")

    def test_it_should_skip_wait_to_update_immediately_if_rsm_version_available(self):
        no_of_iterations = 100

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario.
            # Setting the rsm request to be sent after some iterations
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5:
                reload_conf.call_count += 1

                # Assert the GA version from the status to ensure the agent is running fine from the current version
                self.__assert_ga_version_in_status(protocol.aggregate_status)

                # Update the ext-conf and incarnation, and add the rsm version from the GS
                mock_wire_data.data_files["ext_conf"] = "wire/ext_conf_rsm_version.xml"
                data_file['ga_manifest'] = "wire/ga_manifest.xml"
                mock_wire_data.reload()
                self._add_write_permission_to_goal_state_files()
                mock_wire_data.set_incarnation(2)

        reload_conf.call_count = 0

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file['ga_manifest'] = "wire/ga_manifest_no_upgrade.xml"
        # Setting the prod frequency to mimic a real scenario
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf,
                                       autoupdate_frequency=6000) as (update_handler, mock_telemetry):
            update_handler._protocol.mock_wire_data.set_version_in_ga_manifest(str(CURRENT_VERSION))
            update_handler._protocol.mock_wire_data.set_incarnation(20)
            update_handler.run(debug=True)

            self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated")
            self.assertLess(update_handler.get_iterations(), no_of_iterations,
                            "The code should've exited as soon as the rsm version was found")
            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="9.9.9.10")
    def test_it_should_mark_current_agent_as_bad_version_on_downgrade(self):
        # Create the agent directory for the current agent
        self.prepare_agents(count=1)
        self.assertTrue(os.path.exists(self.agent_dir(CURRENT_VERSION)))
        self.assertFalse(next(agent for agent in self.agents() if agent.version == CURRENT_VERSION).is_blacklisted,
                         "The current agent should not be blacklisted")
        downgraded_version = "2.5.0"

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file["ext_conf"] = "wire/ext_conf_rsm_version.xml"
        with self.__get_update_handler(test_data=data_file) as (update_handler, mock_telemetry):
            update_handler._protocol.mock_wire_data.set_version_in_agent_family(downgraded_version)
            update_handler._protocol.mock_wire_data.set_incarnation(2)
            update_handler.run(debug=True)

            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, upgrade=False, version=downgraded_version)
            current_agent = next(agent for agent in self.agents() if agent.version == CURRENT_VERSION)
            self.assertTrue(current_agent.is_blacklisted, "The current agent should be blacklisted")
            self.assertEqual(current_agent.error.reason,
                             "Marking the agent {0} as bad version since a downgrade was requested in the GoalState, "
                             "suggesting that we really don't want to execute any extensions using this version".format(CURRENT_VERSION),
                             "Invalid reason specified for blacklisting agent")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=[downgraded_version, str(CURRENT_VERSION)])

    def test_it_should_do_self_update_if_vm_opt_out_rsm_upgrades_later(self):
        no_of_iterations = 100

        # Set up the test environment by adding 20 random agents to the agent directory
        self.prepare_agents()
        self.assertEqual(20, self.agent_count(), "Agent directories not set properly")

        def reload_conf(url, protocol):
            mock_wire_data = protocol.mock_wire_data

            # This function reloads the conf mid-run to mimic an actual customer scenario
            if HttpRequestPredicates.is_goal_state_request(url) and mock_wire_data.call_counts["goalstate"] >= 5:
                reload_conf.call_count += 1

                # Assert the GA version from the status to ensure the agent is running fine from the current version
                self.__assert_ga_version_in_status(protocol.aggregate_status)

                # Update the is_vm_enabled_for_rsm_upgrades flag to False
                update_handler._protocol.mock_wire_data.set_extension_config_is_vm_enabled_for_rsm_upgrades("False")
                self._add_write_permission_to_goal_state_files()
                mock_wire_data.set_incarnation(2)

        reload_conf.call_count = 0

        data_file = wire_protocol_data.DATA_FILE.copy()
        data_file['ext_conf'] = "wire/ext_conf_rsm_version.xml"
        with self.__get_update_handler(iterations=no_of_iterations, test_data=data_file, reload_conf=reload_conf) as (update_handler, mock_telemetry):
            update_handler._protocol.mock_wire_data.set_version_in_agent_family(str(CURRENT_VERSION))
            update_handler._protocol.mock_wire_data.set_incarnation(20)
            update_handler.run(debug=True)

            self.assertGreater(reload_conf.call_count, 0, "Reload conf not updated")
            self.assertLess(update_handler.get_iterations(), no_of_iterations,
                            "The code should've exited as soon as the version was found")
            self.__assert_exit_code_successful(update_handler)
            self.__assert_upgrade_telemetry_emitted(mock_telemetry, version="99999.0.0.0")
            self.__assert_agent_directories_exist_and_others_dont_exist(versions=["99999.0.0.0", str(CURRENT_VERSION)])
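# Reminder on the stacked @patch decorators below (general unittest.mock behavior,
# not specific to this suite): they are applied bottom-up, so the bottom-most patch
# produces the first mock argument after self - here get_env_handler -> mock_env,
# then get_monitor_handler -> mock_monitor, and so on up the stack.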
@patch('azurelinuxagent.ga.update.get_collect_telemetry_events_handler')
@patch('azurelinuxagent.ga.update.get_send_telemetry_events_handler')
@patch('azurelinuxagent.ga.update.get_collect_logs_handler')
@patch('azurelinuxagent.ga.update.get_monitor_handler')
@patch('azurelinuxagent.ga.update.get_env_handler')
class MonitorThreadTest(AgentTestCaseWithGetVmSizeMock):
    def setUp(self):
        super(MonitorThreadTest, self).setUp()
        self.event_patch = patch('azurelinuxagent.common.event.add_event')
        current_thread().name = "ExtHandler"
        protocol = Mock()
        self.update_handler = get_update_handler()
        self.update_handler.protocol_util = Mock()
        self.update_handler.protocol_util.get_protocol = Mock(return_value=protocol)
        clear_singleton_instances(ProtocolUtil)

    def _test_run(self, invocations=1):
        def iterator(*_, **__):
            iterator.count += 1
            if iterator.count <= invocations:
                return True
            return False
        iterator.count = 0

        with patch('os.getpid', return_value=42):
            with patch.object(UpdateHandler, '_is_orphaned') as mock_is_orphaned:
                mock_is_orphaned.__get__ = Mock(return_value=False)
                with patch.object(UpdateHandler, 'is_running') as mock_is_running:
                    mock_is_running.__get__ = Mock(side_effect=iterator)
                    with patch('azurelinuxagent.ga.exthandlers.get_exthandlers_handler'):
                        with patch('azurelinuxagent.ga.remoteaccess.get_remote_access_handler'):
                            with patch('azurelinuxagent.ga.agent_update_handler.get_agent_update_handler'):
                                with patch('azurelinuxagent.ga.update.initialize_event_logger_vminfo_common_parameters'):
                                    with patch('azurelinuxagent.ga.cgroupapi.CGroupsApi.cgroups_supported', return_value=False):  # skip all cgroup stuff
                                        with patch('azurelinuxagent.ga.update.is_log_collection_allowed', return_value=True):
                                            with patch('time.sleep'):
                                                with patch('sys.exit'):
                                                    self.update_handler.run()

    def _setup_mock_thread_and_start_test_run(self, mock_thread, is_alive=True, invocations=0):
        thread = MagicMock()
        thread.run = MagicMock()
        thread.is_alive = MagicMock(return_value=is_alive)
        thread.start = MagicMock()
        mock_thread.return_value = thread

        self._test_run(invocations=invocations)
        return thread

    def test_start_threads(self, mock_env, mock_monitor, mock_collect_logs, mock_telemetry_send_events, mock_telemetry_collector):
        def _get_mock_thread():
            thread = MagicMock()
            thread.run = MagicMock()
            return thread

        all_threads = [mock_telemetry_send_events, mock_telemetry_collector, mock_env, mock_monitor, mock_collect_logs]
        for thread in all_threads:
            thread.return_value = _get_mock_thread()

        self._test_run(invocations=0)
        for thread in all_threads:
            self.assertEqual(1, thread.call_count)
            self.assertEqual(1, thread().run.call_count)

    def test_check_if_monitor_thread_is_alive(self, _, mock_monitor, *args):  # pylint: disable=unused-argument
        mock_monitor_thread = self._setup_mock_thread_and_start_test_run(mock_monitor, is_alive=True, invocations=1)
        self.assertEqual(1, mock_monitor.call_count)
        self.assertEqual(1, mock_monitor_thread.run.call_count)
        self.assertEqual(1, mock_monitor_thread.is_alive.call_count)
        self.assertEqual(0, mock_monitor_thread.start.call_count)

    def test_check_if_env_thread_is_alive(self, mock_env, *args):  # pylint: disable=unused-argument
        mock_env_thread = self._setup_mock_thread_and_start_test_run(mock_env, is_alive=True, invocations=1)
        self.assertEqual(1, mock_env.call_count)
        self.assertEqual(1, mock_env_thread.run.call_count)
        self.assertEqual(1, mock_env_thread.is_alive.call_count)
        self.assertEqual(0, mock_env_thread.start.call_count)

    def test_restart_monitor_thread_if_not_alive(self, _, mock_monitor, *args):  # pylint: disable=unused-argument
        mock_monitor_thread = self._setup_mock_thread_and_start_test_run(mock_monitor, is_alive=False, invocations=1)
        self.assertEqual(1, mock_monitor.call_count)
        self.assertEqual(1, mock_monitor_thread.run.call_count)
        self.assertEqual(1, mock_monitor_thread.is_alive.call_count)
        self.assertEqual(1, mock_monitor_thread.start.call_count)

    def test_restart_env_thread_if_not_alive(self, mock_env, *args):  # pylint: disable=unused-argument
        mock_env_thread = self._setup_mock_thread_and_start_test_run(mock_env, is_alive=False, invocations=1)
        self.assertEqual(1, mock_env.call_count)
        self.assertEqual(1, mock_env_thread.run.call_count)
        self.assertEqual(1, mock_env_thread.is_alive.call_count)
        self.assertEqual(1, mock_env_thread.start.call_count)

    def test_restart_monitor_thread(self, _, mock_monitor, *args):  # pylint: disable=unused-argument
        mock_monitor_thread = self._setup_mock_thread_and_start_test_run(mock_monitor, is_alive=False, invocations=1)
        self.assertEqual(True, mock_monitor.called)
        self.assertEqual(True, mock_monitor_thread.run.called)
        self.assertEqual(True, mock_monitor_thread.is_alive.called)
        self.assertEqual(True, mock_monitor_thread.start.called)

    def test_restart_env_thread(self, mock_env, *args):  # pylint: disable=unused-argument
        mock_env_thread = self._setup_mock_thread_and_start_test_run(mock_env, is_alive=False, invocations=1)
        self.assertEqual(True, mock_env.called)
        self.assertEqual(True, mock_env_thread.run.called)
        self.assertEqual(True, mock_env_thread.is_alive.called)
        self.assertEqual(True, mock_env_thread.start.called)


class ChildMock(Mock):
    def __init__(self, return_value=0, side_effect=None):
        Mock.__init__(self, return_value=return_value, side_effect=side_effect)

        self.poll = Mock(return_value=return_value, side_effect=side_effect)
        self.wait = Mock(return_value=return_value, side_effect=side_effect)


class GoalStateMock(object):
    def __init__(self, incarnation, family, versions):
        if versions is None:
            versions = []

        self.incarnation = incarnation
        self.extensions_goal_state = Mock()
        self.extensions_goal_state.id = incarnation
        self.extensions_goal_state.agent_families = GoalStateMock._create_agent_families(family, versions)

        agent_manifest = Mock()
        agent_manifest.pkg_list = GoalStateMock._create_packages(versions)
        self.fetch_agent_manifest = Mock(return_value=agent_manifest)

    @staticmethod
    def _create_agent_families(family, versions):
        families = []

        if len(versions) > 0 and family is not None:
            manifest = VMAgentFamily(name=family)
            for i in range(0, 10):
                manifest.uris.append("https://nowhere.msft/agent/{0}".format(i))
            families.append(manifest)

        return families

    @staticmethod
    def _create_packages(versions):
        packages = ExtHandlerPackageList()
        for version in versions:
            package = ExtHandlerPackage(str(version))
            for i in range(0, 5):
                package_uri = "https://nowhere.msft/agent_pkg/{0}".format(i)
                package.uris.append(package_uri)
            packages.versions.append(package)
        return packages


class ProtocolMock(object):
    def __init__(self, family="TestAgent", etag=42, versions=None, client=None):
        self.family = family
        self.client = client
        self.call_counts = {
            "update_goal_state": 0
        }
        self._goal_state = GoalStateMock(etag, family, versions)
        self.goal_state_is_stale = False
        self.etag = etag
        self.versions = versions if versions is not None else []

    def emulate_stale_goal_state(self):
        self.goal_state_is_stale = True

    def get_protocol(self):
        return self

    def get_goal_state(self):
        return self._goal_state

    def update_goal_state(self):
        self.call_counts["update_goal_state"] += 1


class TimeMock(Mock):
    def __init__(self, time_increment=1):
        Mock.__init__(self)
        self.next_time = time.time()
        self.time_call_count = 0
        self.time_increment = time_increment

        self.sleep_interval = None
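    # TimeMock makes time deterministic for the tests: every time() call advances the
    # clock by time_increment, and sleep() only records the requested interval instead
    # of actually sleeping.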
    def sleep(self, n):
        self.sleep_interval = n

    def time(self):
        self.time_call_count += 1
        current_time = self.next_time
        self.next_time += self.time_increment
        return current_time


class TryUpdateGoalStateTestCase(HttpRequestPredicates, AgentTestCase):
    """
    Tests for UpdateHandler._try_update_goal_state()
    """
    def test_it_should_return_true_on_success(self):
        update_handler = get_update_handler()
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            self.assertTrue(update_handler._try_update_goal_state(protocol), "try_update_goal_state should have succeeded")

    def test_it_should_return_false_on_failure(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            def http_get_handler(url, *_, **__):
                if self.is_goal_state_request(url):
                    return HttpError('Exception to fake an error retrieving the goal state')
                return None
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            update_handler = get_update_handler()
            self.assertFalse(update_handler._try_update_goal_state(protocol), "try_update_goal_state should have failed")

    def test_it_should_update_the_goal_state(self):
        update_handler = get_update_handler()
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            protocol.mock_wire_data.set_incarnation(12345)

            # the first goal state should produce an update
            update_handler._try_update_goal_state(protocol)
            self.assertEqual(update_handler._goal_state.incarnation, '12345',
                             "The goal state was not updated (received unexpected incarnation)")

            # no changes in the goal state should not produce an update
            update_handler._try_update_goal_state(protocol)
            self.assertEqual(update_handler._goal_state.incarnation, '12345',
                             "The goal state should not be updated (received unexpected incarnation)")

            # a new goal state should produce an update
            protocol.mock_wire_data.set_incarnation(6789)
            update_handler._try_update_goal_state(protocol)
            self.assertEqual(update_handler._goal_state.incarnation, '6789',
                             "The goal state was not updated (received unexpected incarnation)")

    def test_it_should_log_errors_only_when_the_error_state_changes(self):
        with mock_wire_protocol(wire_protocol_data.DATA_FILE) as protocol:
            def http_get_handler(url, *_, **__):
                if self.is_goal_state_request(url):
                    if fail_goal_state_request:
                        return HttpError('Exception to fake an error retrieving the goal state')
                return None
            protocol.set_http_handlers(http_get_handler=http_get_handler)

            @contextlib.contextmanager
            def create_log_and_telemetry_mocks():
                with patch("azurelinuxagent.ga.update.logger", autospec=True) as logger_patcher:
                    with patch("azurelinuxagent.ga.update.add_event") as add_event_patcher:
                        yield logger_patcher, add_event_patcher

            calls_to_strings = lambda calls: (str(c) for c in calls)
            filter_calls = lambda calls, regex=None: (c for c in calls_to_strings(calls) if regex is None or re.match(regex, c))
            logger_calls = lambda regex=None: [m for m in filter_calls(logger.method_calls, regex)]  # pylint: disable=used-before-assignment,unnecessary-comprehension
            errors = lambda: logger_calls(r'call.error\(.*Error fetching the goal state.*')
            periodic_errors = lambda: logger_calls(r'call.error\(.*Fetching the goal state is still failing*')
            success_messages = lambda: logger_calls(r'call.info\(.*Fetching the goal state recovered from previous errors.*')
            telemetry_calls = lambda regex=None: [m for m in filter_calls(add_event.mock_calls, regex)]  # pylint: disable=used-before-assignment,unnecessary-comprehension
            goal_state_events = lambda: telemetry_calls(r".*op='FetchGoalState'.*")
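            # The lambdas above turn mock call lists into strings and grep them with
            # regexes, so each scenario below can assert on which log/telemetry
            # messages were (or were not) produced.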
            #
            # Initially calls to retrieve the goal state are successful...
            #
            update_handler = get_update_handler()
            fail_goal_state_request = False
            with create_log_and_telemetry_mocks() as (logger, add_event):
                update_handler._try_update_goal_state(protocol)

                lc = logger_calls()
                self.assertTrue(len(lc) == 0, "A successful call should not produce any log messages: [{0}]".format(lc))

                tc = telemetry_calls()
                self.assertTrue(len(tc) == 0, "A successful call should not produce any telemetry events: [{0}]".format(tc))

            #
            # ... then an error happens...
            #
            fail_goal_state_request = True
            with create_log_and_telemetry_mocks() as (logger, add_event):
                update_handler._try_update_goal_state(protocol)

                e = errors()
                self.assertEqual(1, len(e), "A failure should have produced an error: [{0}]".format(e))

                gs = goal_state_events()
                self.assertTrue(len(gs) == 1 and 'is_success=False' in gs[0], "A failure should produce a telemetry event (success=false): [{0}]".format(gs))

            #
            # ... and errors continue happening...
            #
            with create_log_and_telemetry_mocks() as (logger, add_event):
                for _ in range(5):
                    update_handler._update_goal_state_last_error_report = datetime.now() + timedelta(days=1)
                    update_handler._try_update_goal_state(protocol)

                e = errors()
                pe = periodic_errors()
                self.assertEqual(2, len(e), "Two additional errors should have been reported: [{0}]".format(e))
                self.assertEqual(len(pe), 3, "Subsequent failures should produce periodic errors: [{0}]".format(pe))

                tc = telemetry_calls()
                self.assertTrue(len(tc) == 5, "The failures should have produced telemetry events. Got: [{0}]".format(tc))

            #
            # ... until we finally succeed
            #
            fail_goal_state_request = False
            with create_log_and_telemetry_mocks() as (logger, add_event):
                update_handler._try_update_goal_state(protocol)

                s = success_messages()
                e = errors()
                pe = periodic_errors()
                self.assertEqual(len(s), 1, "Recovering after failures should have produced an info message: [{0}]".format(s))
                self.assertTrue(len(e) == 0 and len(pe) == 0, "Recovering after failures should have not produced any errors: [{0}] [{1}]".format(e, pe))

                gs = goal_state_events()
                self.assertTrue(len(gs) == 1 and 'is_success=True' in gs[0], "Recovering after failures should produce a telemetry event (success=true): [{0}]".format(gs))


def _create_update_handler():
    """
    Creates an UpdateHandler in which agent updates are mocked as a no-op.
    """
    update_handler = get_update_handler()
    update_handler._download_agent_if_upgrade_available = Mock(return_value=False)
    return update_handler
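# Illustrative only - a sketch of how the two module-level helpers (this one and
# _mock_exthandlers_handler below) are typically combined in the test cases that
# follow; the Mock() arguments stand in for the remote-access and agent-update
# handlers:
#
#     with _mock_exthandlers_handler([ExtensionStatusValue.success]) as exthandlers_handler:
#         update_handler = _create_update_handler()
#         update_handler._process_goal_state(exthandlers_handler, Mock(), Mock())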
""" def create_vm_status(extension_status): vm_status = VMStatus(status="Ready", message="Ready") vm_status.vmAgent.extensionHandlers = [ExtHandlerStatus()] vm_status.vmAgent.extensionHandlers[0].extension_status = ExtensionStatus(name="TestExtension") vm_status.vmAgent.extensionHandlers[0].extension_status.status = extension_status return vm_status with mock_wire_protocol(DATA_FILE, save_to_history=save_to_history) as protocol: exthandlers_handler = ExtHandlersHandler(protocol) exthandlers_handler.run = Mock() if extension_statuses is None: exthandlers_handler.report_ext_handlers_status = Mock(return_value=create_vm_status(ExtensionStatusValue.success)) else: exthandlers_handler.report_ext_handlers_status = Mock(side_effect=[create_vm_status(s) for s in extension_statuses]) yield exthandlers_handler class ProcessGoalStateTestCase(AgentTestCase): """ Tests for UpdateHandler._process_goal_state() """ def test_it_should_process_goal_state_only_on_new_goal_state(self): with _mock_exthandlers_handler() as exthandlers_handler: update_handler = _create_update_handler() remote_access_handler = Mock() remote_access_handler.run = Mock() agent_update_handler = Mock() agent_update_handler.run = Mock() # process a goal state update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(1, exthandlers_handler.run.call_count, "exthandlers_handler.run() should have been called on the first goal state") self.assertEqual(1, exthandlers_handler.report_ext_handlers_status.call_count, "exthandlers_handler.report_ext_handlers_status() should have been called on the first goal state") self.assertEqual(1, remote_access_handler.run.call_count, "remote_access_handler.run() should have been called on the first goal state") self.assertEqual(1, agent_update_handler.run.call_count, "agent_update_handler.run() should have been called on the first goal state") # process the same goal state update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(1, exthandlers_handler.run.call_count, "exthandlers_handler.run() should have not been called on the same goal state") self.assertEqual(2, exthandlers_handler.report_ext_handlers_status.call_count, "exthandlers_handler.report_ext_handlers_status() should have been called on the same goal state") self.assertEqual(1, remote_access_handler.run.call_count, "remote_access_handler.run() should not have been called on the same goal state") self.assertEqual(2, agent_update_handler.run.call_count, "agent_update_handler.run() should have been called on the same goal state") # process a new goal state exthandlers_handler.protocol.mock_wire_data.set_incarnation(999) exthandlers_handler.protocol.client.update_goal_state() update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(2, exthandlers_handler.run.call_count, "exthandlers_handler.run() should have been called on a new goal state") self.assertEqual(3, exthandlers_handler.report_ext_handlers_status.call_count, "exthandlers_handler.report_ext_handlers_status() should have been called on a new goal state") self.assertEqual(2, remote_access_handler.run.call_count, "remote_access_handler.run() should have been called on a new goal state") self.assertEqual(3, agent_update_handler.run.call_count, "agent_update_handler.run() should have been called on the new goal state") def test_it_should_write_the_agent_status_to_the_history_folder(self): with 
_mock_exthandlers_handler(save_to_history=True) as exthandlers_handler:
            update_handler = _create_update_handler()
            remote_access_handler = Mock()
            remote_access_handler.run = Mock()
            agent_update_handler = Mock()
            agent_update_handler.run = Mock()

            update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler)

            incarnation = exthandlers_handler.protocol.get_goal_state().incarnation
            matches = glob.glob(os.path.join(conf.get_lib_dir(), ARCHIVE_DIRECTORY_NAME, "*_{0}".format(incarnation)))
            self.assertTrue(len(matches) == 1, "Could not find the history directory for the goal state. Got: {0}".format(matches))

            status_file = os.path.join(matches[0], AGENT_STATUS_FILE)
            self.assertTrue(os.path.exists(status_file), "Could not find {0}".format(status_file))

    @staticmethod
    def _prepare_fast_track_goal_state():
        """
        Creates a set of mock wire data where the most recent goal state is a FastTrack goal state; also
        invokes HostPluginProtocol.fetch_vm_settings() to save the Fast Track status to disk
        """
        # Do a query for the vmSettings; this would retrieve a FastTrack goal state and keep track of its timestamp
        mock_wire_data_file = wire_protocol_data.DATA_FILE_VM_SETTINGS.copy()
        with mock_wire_protocol(mock_wire_data_file) as protocol:
            protocol.mock_wire_data.set_etag("0123456789")
            _ = protocol.client.get_host_plugin().fetch_vm_settings()
        return mock_wire_data_file

    def test_it_should_mark_outdated_goal_states_on_service_restart_when_fast_track_is_disabled(self):
        data_file = self._prepare_fast_track_goal_state()

        with patch("azurelinuxagent.common.conf.get_enable_fast_track", return_value=False):
            with mock_wire_protocol(data_file) as protocol:
                with mock_update_handler(protocol) as update_handler:
                    update_handler.run()

        self.assertTrue(protocol.client.get_goal_state().extensions_goal_state.is_outdated)

    @staticmethod
    def _http_get_vm_settings_handler_not_found(url, *_, **__):
        if HttpRequestPredicates.is_host_plugin_vm_settings_request(url):
            return MockHttpResponse(httpclient.NOT_FOUND)  # HostGAPlugin returns 404 if the API is not supported
        return None

    def test_it_should_mark_outdated_goal_states_on_service_restart_when_host_ga_plugin_stops_supporting_vm_settings(self):
        data_file = self._prepare_fast_track_goal_state()

        with mock_wire_protocol(data_file, http_get_handler=self._http_get_vm_settings_handler_not_found) as protocol:
            with mock_update_handler(protocol) as update_handler:
                update_handler.run()

        self.assertTrue(protocol.client.get_goal_state().extensions_goal_state.is_outdated)

    def test_it_should_clear_the_timestamp_for_the_most_recent_fast_track_goal_state(self):
        data_file = self._prepare_fast_track_goal_state()

        if HostPluginProtocol.get_fast_track_timestamp() == timeutil.create_timestamp(datetime.min):
            raise Exception("The test setup did not save the Fast Track state")

        with patch("azurelinuxagent.common.conf.get_enable_fast_track", return_value=False):
            with patch("azurelinuxagent.common.version.get_daemon_version", return_value=FlexibleVersion("2.2.53")):
                with mock_wire_protocol(data_file) as protocol:
                    with mock_update_handler(protocol) as update_handler:
                        update_handler.run()

        self.assertEqual(HostPluginProtocol.get_fast_track_timestamp(), timeutil.create_timestamp(datetime.min),
                         "The Fast Track state was not cleared")

    def test_it_should_default_fast_track_timestamp_to_datetime_min(self):
        data = DATA_FILE_VM_SETTINGS.copy()
        # TODO: Currently, there's a limitation in the mocks where bumping the incarnation but not the goal
        #       state will cause the agent to error out while trying to
write the certificates to disk. These # files have no dependencies on certs, so using them does not present that issue. # # Note that the scenario this test is representing does not depend on certificates at all, and # can be changed to use the default files when the above limitation is addressed. data["vm_settings"] = "hostgaplugin/vm_settings-fabric-no_thumbprints.json" data['goal_state'] = 'wire/goal_state_no_certs.xml' def vm_settings_no_change(url, *_, **__): if HttpRequestPredicates.is_host_plugin_vm_settings_request(url): return MockHttpResponse(httpclient.NOT_MODIFIED) return None def vm_settings_not_supported(url, *_, **__): if HttpRequestPredicates.is_host_plugin_vm_settings_request(url): return MockHttpResponse(404) return None with mock_wire_protocol(data) as protocol: def mock_live_migration(iteration): if iteration == 1: protocol.mock_wire_data.set_incarnation(2) protocol.set_http_handlers(http_get_handler=vm_settings_no_change) elif iteration == 2: protocol.mock_wire_data.set_incarnation(3) protocol.set_http_handlers(http_get_handler=vm_settings_not_supported) with mock_update_handler(protocol, 3, on_new_iteration=mock_live_migration) as update_handler: with patch("azurelinuxagent.ga.update.logger.error") as patched_error: def check_for_errors(): msg_fragment = "Error fetching the goal state:" for (args, _) in filter(lambda a: len(a) > 0, patched_error.call_args_list): if msg_fragment in args[0]: self.fail("Found error: {}".format(args[0])) update_handler.run(debug=True) check_for_errors() timestamp = protocol.client.get_host_plugin()._fast_track_timestamp self.assertEqual(timestamp, timeutil.create_timestamp(datetime.min), "Expected fast track time stamp to be set to {0}, got {1}".format(datetime.min, timestamp)) class HeartbeatTestCase(AgentTestCase): @patch("azurelinuxagent.common.logger.info") @patch("azurelinuxagent.ga.update.add_event") def test_telemetry_heartbeat_creates_event(self, patch_add_event, patch_info, *_): with mock_wire_protocol(wire_protocol_data.DATA_FILE) as mock_protocol: update_handler = get_update_handler() update_handler.last_telemetry_heartbeat = datetime.utcnow() - timedelta(hours=1) update_handler._send_heartbeat_telemetry(mock_protocol) self.assertEqual(1, patch_add_event.call_count) self.assertTrue(any(call_args[0] == "[HEARTBEAT] Agent {0} is running as the goal state agent {1}" for call_args in patch_info.call_args), "The heartbeat was not written to the agent's log") class AgentMemoryCheckTestCase(AgentTestCase): @patch("azurelinuxagent.common.logger.info") @patch("azurelinuxagent.ga.update.add_event") def test_check_agent_memory_usage_raises_exit_exception(self, patch_add_event, patch_info, *_): with patch("azurelinuxagent.ga.cgroupconfigurator.CGroupConfigurator._Impl.check_agent_memory_usage", side_effect=AgentMemoryExceededException()): with patch('azurelinuxagent.common.conf.get_enable_agent_memory_usage_check', return_value=True): with self.assertRaises(ExitException) as context_manager: update_handler = get_update_handler() update_handler._last_check_memory_usage_time = time.time() - 24 * 60 update_handler._check_agent_memory_usage() self.assertEqual(1, patch_add_event.call_count) self.assertTrue(any("Check on agent memory usage" in call_args[0] for call_args in patch_info.call_args), "The memory check was not written to the agent's log") self.assertIn("Agent {0} is reached memory limit -- exiting".format(CURRENT_AGENT), ustr(context_manager.exception), "An incorrect exception was raised") @patch("azurelinuxagent.common.logger.warn") 
@patch("azurelinuxagent.ga.update.add_event") def test_check_agent_memory_usage_fails(self, patch_add_event, patch_warn, *_): with patch("azurelinuxagent.ga.cgroupconfigurator.CGroupConfigurator._Impl.check_agent_memory_usage", side_effect=Exception()): with patch('azurelinuxagent.common.conf.get_enable_agent_memory_usage_check', return_value=True): update_handler = get_update_handler() update_handler._last_check_memory_usage_time = time.time() - 24 * 60 update_handler._check_agent_memory_usage() self.assertTrue(any("Error checking the agent's memory usage" in call_args[0] for call_args in patch_warn.call_args), "The memory check was not written to the agent's log") self.assertEqual(1, patch_add_event.call_count) add_events = [kwargs for _, kwargs in patch_add_event.call_args_list if kwargs["op"] == WALAEventOperation.AgentMemory] self.assertTrue( len(add_events) == 1, "Exactly 1 event should have been emitted when memory usage check fails. Got: {0}".format(add_events)) self.assertIn( "Error checking the agent's memory usage", add_events[0]["message"], "The error message is not correct when memory usage check failed") @patch("azurelinuxagent.ga.cgroupconfigurator.CGroupConfigurator._Impl.check_agent_memory_usage") @patch("azurelinuxagent.ga.update.add_event") def test_check_agent_memory_usage_not_called(self, patch_add_event, patch_memory_usage, *_): # This test ensures that agent not called immediately on startup, instead waits for CHILD_LAUNCH_INTERVAL with patch('azurelinuxagent.common.conf.get_enable_agent_memory_usage_check', return_value=True): update_handler = get_update_handler() update_handler._check_agent_memory_usage() self.assertEqual(0, patch_memory_usage.call_count) self.assertEqual(0, patch_add_event.call_count) class GoalStateIntervalTestCase(AgentTestCase): def test_initial_goal_state_period_should_default_to_goal_state_period(self): configuration_provider = conf.ConfigurationProvider() test_file = os.path.join(self.tmp_dir, "waagent.conf") with open(test_file, "w") as file_: file_.write("Extensions.GoalStatePeriod=987654321\n") conf.load_conf_from_file(test_file, configuration_provider) self.assertEqual(987654321, conf.get_initial_goal_state_period(conf=configuration_provider)) def test_update_handler_should_use_the_default_goal_state_period(self): update_handler = get_update_handler() default = conf.get_int_default_value("Extensions.GoalStatePeriod") self.assertEqual(default, update_handler._goal_state_period, "The UpdateHanlder is not using the default goal state period") def test_update_handler_should_not_use_the_default_goal_state_period_when_extensions_are_disabled(self): with patch('azurelinuxagent.common.conf.get_extensions_enabled', return_value=False): update_handler = get_update_handler() self.assertEqual(GOAL_STATE_PERIOD_EXTENSIONS_DISABLED, update_handler._goal_state_period, "Incorrect goal state period when extensions are disabled") def test_the_default_goal_state_period_and_initial_goal_state_period_should_be_the_same(self): update_handler = get_update_handler() default = conf.get_int_default_value("Extensions.GoalStatePeriod") self.assertEqual(default, update_handler._goal_state_period, "The UpdateHanlder is not using the default goal state period") def test_update_handler_should_use_the_initial_goal_state_period_when_it_is_different_to_the_goal_state_period(self): with patch('azurelinuxagent.common.conf.get_initial_goal_state_period', return_value=99999): update_handler = get_update_handler() self.assertEqual(99999, update_handler._goal_state_period, 
"Expected the initial goal state period") def test_update_handler_should_use_the_initial_goal_state_period_until_the_goal_state_converges(self): initial_goal_state_period, goal_state_period = 11111, 22222 with patch('azurelinuxagent.common.conf.get_initial_goal_state_period', return_value=initial_goal_state_period): with patch('azurelinuxagent.common.conf.get_goal_state_period', return_value=goal_state_period): with _mock_exthandlers_handler([ExtensionStatusValue.transitioning, ExtensionStatusValue.success]) as exthandlers_handler: remote_access_handler = Mock() agent_update_handler = Mock() update_handler = _create_update_handler() self.assertEqual(initial_goal_state_period, update_handler._goal_state_period, "Expected the initial goal state period") # the extension is transisioning, so we should still be using the initial goal state period update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(initial_goal_state_period, update_handler._goal_state_period, "Expected the initial goal state period when the extension is transitioning") # the goal state converged (the extension succeeded), so we should switch to the regular goal state period update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(goal_state_period, update_handler._goal_state_period, "Expected the regular goal state period after the goal state converged") def test_update_handler_should_switch_to_the_regular_goal_state_period_when_the_goal_state_does_not_converges(self): initial_goal_state_period, goal_state_period = 11111, 22222 with patch('azurelinuxagent.common.conf.get_initial_goal_state_period', return_value=initial_goal_state_period): with patch('azurelinuxagent.common.conf.get_goal_state_period', return_value=goal_state_period): with _mock_exthandlers_handler([ExtensionStatusValue.transitioning, ExtensionStatusValue.transitioning]) as exthandlers_handler: remote_access_handler = Mock() agent_update_handler = Mock() update_handler = _create_update_handler() self.assertEqual(initial_goal_state_period, update_handler._goal_state_period, "Expected the initial goal state period") # the extension is transisioning, so we should still be using the initial goal state period update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(initial_goal_state_period, update_handler._goal_state_period, "Expected the initial goal state period when the extension is transitioning") # a new goal state arrives before the current goal state converged (the extension is transitioning), so we should switch to the regular goal state period exthandlers_handler.protocol.mock_wire_data.set_incarnation(100) update_handler._process_goal_state(exthandlers_handler, remote_access_handler, agent_update_handler) self.assertEqual(goal_state_period, update_handler._goal_state_period, "Expected the regular goal state period when the goal state does not converge") class ExtensionsSummaryTestCase(AgentTestCase): @staticmethod def _create_extensions_summary(extension_statuses): """ Creates an ExtensionsSummary from an array of (extension name, extension status) tuples """ vm_status = VMStatus(status="Ready", message="Ready") vm_status.vmAgent.extensionHandlers = [ExtHandlerStatus()] * len(extension_statuses) for i in range(len(extension_statuses)): vm_status.vmAgent.extensionHandlers[i].extension_status = ExtensionStatus(name=extension_statuses[i][0]) 
        return ExtensionsSummary(vm_status)

    def test_equality_operator_should_return_true_on_items_with_the_same_value(self):
        summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)])
        summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)])

        self.assertTrue(summary1 == summary2, "{0} == {1} should be True".format(summary1, summary2))

    def test_equality_operator_should_return_false_on_items_with_different_values(self):
        summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)])
        summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.success)])

        self.assertFalse(summary1 == summary2, "{0} == {1} should be False".format(summary1, summary2))

        summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success)])
        summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.success)])

        self.assertFalse(summary1 == summary2, "{0} == {1} should be False".format(summary1, summary2))

    def test_inequality_operator_should_return_true_on_items_with_different_values(self):
        summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)])
        summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.success)])

        self.assertTrue(summary1 != summary2, "{0} != {1} should be True".format(summary1, summary2))

        summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success)])
        summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.success)])

        self.assertTrue(summary1 != summary2, "{0} != {1} should be True".format(summary1, summary2))

    def test_inequality_operator_should_return_false_on_items_with_same_value(self):
        summary1 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)])
        summary2 = ExtensionsSummaryTestCase._create_extensions_summary([("Extension 1", ExtensionStatusValue.success), ("Extension 2", ExtensionStatusValue.transitioning)])

        self.assertFalse(summary1 != summary2, "{0} != {1} should be False".format(summary1, summary2))


if __name__ == '__main__':
    unittest.main()
Azure-WALinuxAgent-2b21de5/tests/lib/000077500000000000000000000000001462617747000174405ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/lib/__init__.py000066400000000000000000000011651462617747000215540ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/lib/cgroups_tools.py000066400000000000000000000043721462617747000227220ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os from azurelinuxagent.common.utils import fileutil class CGroupsTools(object): @staticmethod def create_legacy_agent_cgroup(cgroups_file_system_root, controller, daemon_pid): """ Previous versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent; starting from version 2.2.41 we track the agent service in walinuxagent.service instead of WALinuxAgent/WALinuxAgent. This method creates a mock cgroup using the legacy path and adds the given PID to it. """ legacy_cgroup = os.path.join(cgroups_file_system_root, controller, "WALinuxAgent", "WALinuxAgent") if not os.path.exists(legacy_cgroup): os.makedirs(legacy_cgroup) fileutil.append_file(os.path.join(legacy_cgroup, "cgroup.procs"), daemon_pid + "\n") return legacy_cgroup @staticmethod def create_agent_cgroup(cgroups_file_system_root, controller, extension_handler_pid): """ Previous versions of the daemon (2.2.31-2.2.40) wrote their PID to /sys/fs/cgroup/{cpu,memory}/WALinuxAgent/WALinuxAgent; starting from version 2.2.41 we track the agent service in walinuxagent.service instead of WALinuxAgent/WALinuxAgent. This method creates a mock cgroup using the newer path and adds the given PID to it. """ new_cgroup = os.path.join(cgroups_file_system_root, controller, "walinuxagent.service") if not os.path.exists(new_cgroup): os.makedirs(new_cgroup) fileutil.append_file(os.path.join(new_cgroup, "cgroup.procs"), extension_handler_pid + "\n") return new_cgroup Azure-WALinuxAgent-2b21de5/tests/lib/event_logger_tools.py000066400000000000000000000052701462617747000237160ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import platform import azurelinuxagent.common.event as event from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME import tests.lib.tools as tools from tests.lib import wire_protocol_data from tests.lib.mock_wire_protocol import mock_wire_protocol class EventLoggerTools(object): mock_imds_data = { 'location': 'uswest', 'subscriptionId': 'AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE', 'resourceGroupName': 'test-rg', 'vmId': '99999999-8888-7777-6666-555555555555', 'image_origin': 2468 } @staticmethod def initialize_event_logger(event_dir): """ Initializes the event logger using mock data for the common parameters; the goal state fields are taken from wire_protocol_data.DATA_FILE and the IMDS fields from mock_imds_data. """ if not os.path.exists(event_dir): os.mkdir(event_dir) event.init_event_logger(event_dir) mock_imds_info = tools.Mock() mock_imds_info.location = EventLoggerTools.mock_imds_data['location'] mock_imds_info.subscriptionId = EventLoggerTools.mock_imds_data['subscriptionId'] mock_imds_info.resourceGroupName = EventLoggerTools.mock_imds_data['resourceGroupName'] mock_imds_info.vmId = EventLoggerTools.mock_imds_data['vmId'] mock_imds_info.image_origin = EventLoggerTools.mock_imds_data['image_origin'] mock_imds_client = tools.Mock() mock_imds_client.get_compute = tools.Mock(return_value=mock_imds_info) with mock_wire_protocol(wire_protocol_data.DATA_FILE) as mock_protocol: with tools.patch("azurelinuxagent.common.event.get_imds_client", return_value=mock_imds_client): event.initialize_event_logger_vminfo_common_parameters(mock_protocol) @staticmethod def get_expected_os_version(): """ Returns the expected value for the OS Version in telemetry events """ return u"{0}:{1}-{2}-{3}:{4}".format(platform.system(), DISTRO_NAME, DISTRO_VERSION, DISTRO_CODE_NAME, platform.release()) Azure-WALinuxAgent-2b21de5/tests/lib/extension_emulator.py000066400000000000000000000337111462617747000237430ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import json import re import uuid import os import contextlib import subprocess import azurelinuxagent.common.conf as conf from azurelinuxagent.common.future import ustr from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.exthandlers import ExtHandlerInstance, ExtCommandEnvVariable from tests.lib.tools import Mock, patch from tests.lib.wire_protocol_data import WireProtocolData from tests.lib.mock_wire_protocol import MockHttpResponse from tests.lib.http_request_predicates import HttpRequestPredicates class ExtensionCommandNames(object): INSTALL = "install" UNINSTALL = "uninstall" UPDATE = "update" ENABLE = "enable" DISABLE = "disable" class Actions(object): """ A collection of static methods providing some basic functionality for the ExtensionEmulator class' actions. 
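    For illustration, a sketch of how these actions are meant to be plugged into the
    emulator factory defined below (names mirror this module; nothing here is new API):

        # succeed_action can be passed directly as any extension action;
        # generate_unique_fail() pairs a failing action with its expected return code.
        return_code, fail_action = Actions.generate_unique_fail()
        failing_install_ext = extension_emulator(install_action=fail_action)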
""" @staticmethod def succeed_action(*_, **__): """ A nop action with the correct function signature for ExtensionEmulator actions. """ return 0 @staticmethod def generate_unique_fail(): """ Utility function for tracking the return code of a command. Returns both a unique return code, and a function pointer which returns said return code. """ return_code = str(uuid.uuid4()) def fail_action(*_, **__): return return_code return return_code, fail_action def extension_emulator(name="OSTCExtensions.ExampleHandlerLinux", version="1.0.0", update_mode="UpdateWithInstall", report_heartbeat=False, continue_on_update_failure=False, supports_multiple_extensions=False, install_action=Actions.succeed_action, uninstall_action=Actions.succeed_action, enable_action=Actions.succeed_action, disable_action=Actions.succeed_action, update_action=Actions.succeed_action): """ Factory method for ExtensionEmulator objects with sensible defaults. """ # Linter reports too many arguments, but this isn't an issue because all are defaulted; # no caller will have to actually provide all of the arguments listed. return ExtensionEmulator(name, version, update_mode, report_heartbeat, continue_on_update_failure, supports_multiple_extensions, install_action, uninstall_action, enable_action, disable_action, update_action) @contextlib.contextmanager def enable_invocations(*emulators): """ Allows ExtHandlersHandler objects to call the specified emulators and keeps track of the order of those invocations. Returns the invocation record. Note that this method patches subprocess.Popen and ExtHandlerInstance.load_manifest. """ invocation_record = InvocationRecord() patched_popen = generate_patched_popen(invocation_record, *emulators) patched_load_manifest = generate_mock_load_manifest(*emulators) with patch.object(ExtHandlerInstance, "load_manifest", patched_load_manifest): with patch("subprocess.Popen", patched_popen): yield invocation_record def generate_put_handler(*emulators): """ Create a HTTP handler to store status blobs for each provided emulator. For use with tests.lib.mocks.mock_wire_protocol. """ def mock_put_handler(url, *args, **_): if HttpRequestPredicates.is_host_plugin_status_request(url): status_blob = WireProtocolData.get_status_blob_from_hostgaplugin_put_status_request(args[0]) else: status_blob = args[0] handler_statuses = json.loads(status_blob).get("aggregateStatus", {}).get("handlerAggregateStatus", []) for handler_status in handler_statuses: supplied_name = handler_status.get("handlerName", None) supplied_version = handler_status.get("handlerVersion", None) try: matching_ext = _first_matching_emulator(emulators, supplied_name, supplied_version) matching_ext.status_blobs.append(handler_status) except StopIteration: # Tests will want to know that the agent is running an extension they didn't specifically allocate. raise Exception("Extension running, but not present in emulators: {0}, {1}".format(supplied_name, supplied_version)) return MockHttpResponse(status=200) return mock_put_handler class InvocationRecord: def __init__(self): self._queue = [] def add(self, ext_name, ext_ver, ext_cmd): self._queue.append((ext_name, ext_ver, ext_cmd)) def compare(self, *expected_cmds): """ Verifies that any and all recorded invocations appear in the provided command list in that exact ordering. Each cmd in expected_cmds should be a tuple of the form (ExtensionEmulator, ExtensionCommandNames object). 
""" for (expected_ext_emulator, command_name) in expected_cmds: try: (ext_name, ext_ver, ext_cmd) = self._queue.pop(0) if not expected_ext_emulator.matches(ext_name, ext_ver) or command_name != ext_cmd: raise Exception("Unexpected invocation: have ({0}, {1}, {2}), but expected ({3}, {4}, {5})".format( ext_name, ext_ver, ext_cmd, expected_ext_emulator.name, expected_ext_emulator.version, command_name )) except IndexError: raise Exception("No more invocations recorded. Expected ({0}, {1}, {2}).".format(expected_ext_emulator.name, expected_ext_emulator.version, command_name)) if self._queue: raise Exception("Invocation recorded, but not expected: ({0}, {1}, {2})".format( *self._queue[0] )) def _first_matching_emulator(emulators, name, version): for ext in emulators: if ext.matches(name, version): return ext raise StopIteration class ExtensionEmulator: """ A wrapper class for the possible actions and options that an extension might support. """ def __init__(self, name, version, update_mode, report_heartbeat, continue_on_update_failure, supports_multiple_extensions, install_action, uninstall_action, enable_action, disable_action, update_action): # Linter reports too many arguments, but this constructor has its access mediated by # a factory method; the calls affected by the number of arguments here is very # limited in scope. self.name = name self.version = version self.update_mode = update_mode self.report_heartbeat = report_heartbeat self.continue_on_update_failure = continue_on_update_failure self.supports_multiple_extensions = supports_multiple_extensions self._actions = { ExtensionCommandNames.INSTALL: ExtensionEmulator._extend_func(install_action), ExtensionCommandNames.UNINSTALL: ExtensionEmulator._extend_func(uninstall_action), ExtensionCommandNames.UPDATE: ExtensionEmulator._extend_func(update_action), ExtensionCommandNames.ENABLE: ExtensionEmulator._extend_func(enable_action), ExtensionCommandNames.DISABLE: ExtensionEmulator._extend_func(disable_action) } self._status_blobs = [] @property def actions(self): """ A read-only property designed to allow inspection for the emulated extension's actions. `actions` maps an ExtensionCommandNames object to a mock wrapping the function this emulator was initialized with. """ return self._actions @property def status_blobs(self): """ A property for storing and retreiving the status blobs for the extension this object is emulating that are uploaded to the HTTP PUT /status endpoint. """ return self._status_blobs @staticmethod def _extend_func(func): """ Convert a function such that its returned value mimicks a Popen object (i.e. with correct return values for poll() and wait() calls). """ def wrapped_func(cmd, *args, **kwargs): return_value = func(cmd, *args, **kwargs) prefix = kwargs['env'][ExtCommandEnvVariable.ExtensionSeqNumber] if ExtCommandEnvVariable.ExtensionName in kwargs['env']: prefix = "{0}.{1}".format(kwargs['env'][ExtCommandEnvVariable.ExtensionName], prefix) status_file = os.path.join(os.path.dirname(cmd), "status", "{seq}.status".format(seq=prefix)) if return_value == 0: status_contents = [{ "status": {"status": "success"} }] else: try: ec = int(return_value) except Exception: # Error when trying to parse return_value, probably not an integer. 
                    # Failing with -1 and passing the return_value as message
                    ec = -1
                status_contents = [{"status": {"status": "error", "code": ec, "formattedMessage": {"message": return_value, "lang": "en-US"}}}]

            fileutil.write_file(status_file, json.dumps(status_contents))

            return Mock(**{
                "poll.return_value": return_value,
                "wait.return_value": return_value
            })

        # Wrap the function in a mock to allow invocation reflection a la .assert_not_called(), etc.
        return Mock(wraps=wrapped_func)

    def matches(self, name, version):
        return self.name == name and self.version == version


def generate_patched_popen(invocation_record, *emulators):
    """
    Create a mock popen function able to invoke the proper action for an extension
    emulator in emulators.
    """
    original_popen = subprocess.Popen

    def patched_popen(cmd, *args, **kwargs):
        try:
            handler_name, handler_version, command_name = extract_extension_info_from_command(cmd)
        except ValueError:
            return original_popen(cmd, *args, **kwargs)

        try:
            name = handler_name
            # MultiConfig scenario, search for full name - <HandlerName>.<ExtensionName>
            if ExtCommandEnvVariable.ExtensionName in kwargs['env']:
                name = "{0}.{1}".format(handler_name, kwargs['env'][ExtCommandEnvVariable.ExtensionName])

            invocation_record.add(name, handler_version, command_name)
            matching_ext = _first_matching_emulator(emulators, name, handler_version)

            return matching_ext.actions[command_name](cmd, *args, **kwargs)
        except StopIteration:
            raise Exception("Extension('{name}', '{version}') not listed as a parameter. Is it being emulated?".format(
                name=handler_name, version=handler_version
            ))

    return patched_popen


def generate_mock_load_manifest(*emulators):
    original_load_manifest = ExtHandlerInstance.load_manifest

    def mock_load_manifest(self):
        matching_emulator = None
        names = [self.ext_handler.name]
        # In case of MC, search for full names - <HandlerName>.<ExtensionName>
        if self.supports_multi_config:
            names = [self.get_extension_full_name(ext) for ext in self.extensions]

        for name in names:
            try:
                matching_emulator = _first_matching_emulator(emulators, name, self.ext_handler.version)
            except StopIteration:
                continue
            else:
                break

        if matching_emulator is None:
            raise Exception(
                "Extension('{name}', '{version}') not listed as a parameter. Is it being emulated?".format(
                    name=self.ext_handler.name, version=self.ext_handler.version))

        base_manifest = original_load_manifest(self)

        base_manifest.data["handlerManifest"].update({
            "continueOnUpdateFailure": matching_emulator.continue_on_update_failure,
            "reportHeartbeat": matching_emulator.report_heartbeat,
            "updateMode": matching_emulator.update_mode,
            "supportsMultipleExtensions": matching_emulator.supports_multiple_extensions
        })

        return base_manifest

    return mock_load_manifest


def extract_extension_info_from_command(command):
    """
    Parse a command into a tuple of extension info.
    """
    if not isinstance(command, (str, ustr)):
        raise ValueError("Cannot extract extension info from non-string commands")

    # Group layout of the expected command; this lets us grab what we want after a match
    template = r'(?<={base_dir}/)(?P<name>{ext_name})-(?P<ver>{ext_ver})(?:/{script_file} -)(?P<cmd>{ext_cmd})'

    base_dir_regex = conf.get_lib_dir()
    script_file_regex = r'[^\s]+'
    ext_cmd_regex = r'[a-zA-Z]+'
    ext_name_regex = r'[a-zA-Z]+(?:[.a-zA-Z]+)?'
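    # For illustration (hypothetical values, assuming the lib dir is /var/lib/waagent): a command such as
    #   /var/lib/waagent/OSTCExtensions.ExampleHandlerLinux-1.0.0/exampleScript.sh -enable
    # is expected to match and yield ('OSTCExtensions.ExampleHandlerLinux', '1.0.0', 'enable').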
ext_ver_regex = r'[0-9]+(?:[.0-9]+)*' full_regex = template.format( ext_name=ext_name_regex, ext_ver=ext_ver_regex, base_dir=base_dir_regex, script_file=script_file_regex, ext_cmd=ext_cmd_regex ) match_obj = re.search(full_regex, command) if not match_obj: raise ValueError("Command does not match the desired format: {0}".format(command)) return match_obj.group('name', 'ver', 'cmd')Azure-WALinuxAgent-2b21de5/tests/lib/http_request_predicates.py000066400000000000000000000112361462617747000247470ustar00rootroot00000000000000import re from azurelinuxagent.common.utils import restutil class HttpRequestPredicates(object): """ Utility functions to check the urls used by tests """ @staticmethod def is_goal_state_request(url): return url.lower() == 'http://{0}/machine/?comp=goalstate'.format(restutil.KNOWN_WIRESERVER_IP) @staticmethod def is_certificates_request(url): return re.match(r'http://{0}(:80)?/machine/.*?comp=certificates'.format(restutil.KNOWN_WIRESERVER_IP), url, re.IGNORECASE) @staticmethod def is_extensions_config_request(url): return re.match(r'http://{0}(:80)?/machine/.*?comp=config&type=extensionsConfig'.format(restutil.KNOWN_WIRESERVER_IP), url, re.IGNORECASE) @staticmethod def is_hosting_environment_config_request(url): return re.match(r'http://{0}(:80)?/machine/.*?comp=config&type=hostingEnvironmentConfig'.format(restutil.KNOWN_WIRESERVER_IP), url, re.IGNORECASE) @staticmethod def is_shared_config_request(url): return re.match(r'http://{0}(:80)?/machine/.*?comp=config&type=sharedConfig'.format(restutil.KNOWN_WIRESERVER_IP), url, re.IGNORECASE) @staticmethod def is_telemetry_request(url): return url.lower() == 'http://{0}/machine?comp=telemetrydata'.format(restutil.KNOWN_WIRESERVER_IP) @staticmethod def is_health_service_request(url): return url.lower() == 'http://{0}:80/healthservice'.format(restutil.KNOWN_WIRESERVER_IP) @staticmethod def is_in_vm_artifacts_profile_request(url): return re.match(r'https://.+\.blob\.core\.windows\.net/\$system/.+\.(vmSettings|settings)\?.+', url) is not None @staticmethod def _get_host_plugin_request_artifact_location(url, request_kwargs): if 'headers' not in request_kwargs: raise ValueError('Host plugin request is missing HTTP headers ({0})'.format(url)) headers = request_kwargs['headers'] if 'x-ms-artifact-location' not in headers: raise ValueError('Host plugin request is missing the x-ms-artifact-location header ({0})'.format(url)) return headers['x-ms-artifact-location'] @staticmethod def is_host_plugin_vm_settings_request(url): return url.lower() == 'http://{0}:{1}/vmsettings'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_host_plugin_health_request(url): return url.lower() == 'http://{0}:{1}/health'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_host_plugin_extension_artifact_request(url): return url.lower() == 'http://{0}:{1}/extensionartifact'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_status_request(url): return HttpRequestPredicates.is_storage_status_request(url) or HttpRequestPredicates.is_host_plugin_status_request(url) @staticmethod def is_storage_status_request(url): # e.g. 
'https://test.blob.core.windows.net/vhds/test-cs12.test-cs12.test-cs12.status?sr=b&sp=rw&se=9999-01-01&sk=key1&sv=2014-02-14&sig=hfRh7gzUE7sUtYwke78IOlZOrTRCYvkec4hGZ9zZzXo' return re.match(r'^https://.+/.*\.status\?[^/]+$', url, re.IGNORECASE) @staticmethod def is_host_plugin_status_request(url): return url.lower() == 'http://{0}:{1}/status'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_host_plugin_extension_request(request_url, request_kwargs, extension_url): if not HttpRequestPredicates.is_host_plugin_extension_artifact_request(request_url): return False artifact_location = HttpRequestPredicates._get_host_plugin_request_artifact_location(request_url, request_kwargs) return artifact_location == extension_url @staticmethod def is_host_plugin_in_vm_artifacts_profile_request(url, request_kwargs): if not HttpRequestPredicates.is_host_plugin_extension_artifact_request(url): return False artifact_location = HttpRequestPredicates._get_host_plugin_request_artifact_location(url, request_kwargs) return HttpRequestPredicates.is_in_vm_artifacts_profile_request(artifact_location) @staticmethod def is_host_plugin_put_logs_request(url): return url.lower() == 'http://{0}:{1}/vmagentlog'.format(restutil.KNOWN_WIRESERVER_IP, restutil.HOST_PLUGIN_PORT) @staticmethod def is_agent_package_request(url): return re.match(r"^http://mock-goal-state/ga-manifests/OSTCExtensions.WALinuxAgent__([\d.]+)$", url) is not None @staticmethod def is_ga_manifest_request(url): return "manifest_of_ga.xml" in url Azure-WALinuxAgent-2b21de5/tests/lib/miscellaneous_tools.py000066400000000000000000000040161462617747000240760ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # # # Utility functions for the unit tests. # # This module is meant for simple, small tools that don't fit elsewhere. # import datetime import os import time from azurelinuxagent.common.future import ustr def format_processes(pid_list): """ Formats the given PIDs as a sequence of PIDs and their command lines """ def get_command_line(pid): try: cmdline = '/proc/{0}/cmdline'.format(pid) if os.path.exists(cmdline): with open(cmdline, "r") as cmdline_file: return "[PID: {0}] {1}".format(pid, cmdline_file.read()) except Exception: pass return "[PID: {0}] UNKNOWN".format(pid) return ustr([get_command_line(pid) for pid in pid_list]) def wait_for(predicate, timeout=10, frequency=0.01): """ Waits until the given predicate is true or the given timeout elapses. Returns the last evaluation of the predicate. Both the timeout and frequency are in seconds; the latter indicates how often the predicate is evaluated. 
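    For example (illustrative only; '/tmp/some_file' is a hypothetical path):

        file_was_created = wait_for(lambda: os.path.exists('/tmp/some_file'), timeout=5)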
""" def to_seconds(time_delta): return (time_delta.microseconds + (time_delta.seconds + time_delta.days * 24 * 3600) * 10 ** 6) / 10 ** 6 start_time = datetime.datetime.now() while to_seconds(datetime.datetime.now() - start_time) < timeout: if predicate(): return True time.sleep(frequency) return False Azure-WALinuxAgent-2b21de5/tests/lib/mock_cgroup_environment.py000066400000000000000000000127351462617747000247560ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import contextlib import os from tests.lib.tools import patch, data_dir from tests.lib.mock_environment import MockEnvironment, MockCommand _MOCKED_COMMANDS = [ MockCommand(r"^systemctl --version$", '''systemd 237 +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid '''), MockCommand(r"^mount -t cgroup$", '''cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd) cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) '''), MockCommand(r"^mount -t cgroup2$", '''cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime) '''), MockCommand(r"^systemctl show walinuxagent\.service --property Slice", '''Slice=system.slice '''), MockCommand(r"^systemctl show walinuxagent\.service --property CPUAccounting$", '''CPUAccounting=no '''), MockCommand(r"^systemctl show walinuxagent\.service --property CPUQuotaPerSecUSec$", '''CPUQuotaPerSecUSec=infinity '''), MockCommand(r"^systemctl show walinuxagent\.service --property MemoryAccounting$", '''MemoryAccounting=no '''), MockCommand(r"^systemctl show extension\.service --property ControlGroup$", '''ControlGroup=/system.slice/extension.service '''), MockCommand(r"^systemctl daemon-reload", ""), MockCommand(r"^systemctl stop ([^\s]+)"), MockCommand(r"^systemd-run (.+) --unit=([^\s]+) --scope ([^\s]+)", ''' Running scope as unit: TEST_UNIT.scope Thu 28 May 2020 07:25:55 AM PDT '''), ] _MOCKED_FILES = [ 
("/proc/self/cgroup", os.path.join(data_dir, 'cgroups', 'proc_self_cgroup')), (r"/proc/[0-9]+/cgroup", os.path.join(data_dir, 'cgroups', 'proc_pid_cgroup')), ("/sys/fs/cgroup/unified/cgroup.controllers", os.path.join(data_dir, 'cgroups', 'sys_fs_cgroup_unified_cgroup.controllers')) ] _MOCKED_PATHS = [ r"^(/lib/systemd/system)", r"^(/etc/systemd/system)" ] class UnitFilePaths: walinuxagent = "/lib/systemd/system/walinuxagent.service" logcollector = "/lib/systemd/system/azure-walinuxagent-logcollector.slice" azure = "/lib/systemd/system/azure.slice" vmextensions = "/lib/systemd/system/azure-vmextensions.slice" extensionslice = "/lib/systemd/system/azure-vmextensions-Microsoft.CPlat.Extension.slice" slice = "/lib/systemd/system/walinuxagent.service.d/10-Slice.conf" cpu_accounting = "/lib/systemd/system/walinuxagent.service.d/11-CPUAccounting.conf" cpu_quota = "/lib/systemd/system/walinuxagent.service.d/12-CPUQuota.conf" memory_accounting = "/lib/systemd/system/walinuxagent.service.d/13-MemoryAccounting.conf" extension_service_cpu_accounting = '/lib/systemd/system/extension.service.d/11-CPUAccounting.conf' extension_service_cpu_quota = '/lib/systemd/system/extension.service.d/12-CPUQuota.conf' extension_service_memory_accounting = '/lib/systemd/system/extension.service.d/13-MemoryAccounting.conf' extension_service_memory_limit = '/lib/systemd/system/extension.service.d/14-MemoryLimit.conf' @contextlib.contextmanager def mock_cgroup_environment(tmp_dir): """ Creates a mocks environment used by the tests related to cgroups (currently it only provides support for systemd platforms). The command output used in __MOCKED_COMMANDS comes from an Ubuntu 18 system. """ data_files = [ (os.path.join(data_dir, 'init', 'walinuxagent.service'), UnitFilePaths.walinuxagent), (os.path.join(data_dir, 'init', 'azure.slice'), UnitFilePaths.azure), (os.path.join(data_dir, 'init', 'azure-vmextensions.slice'), UnitFilePaths.vmextensions) ] with patch('azurelinuxagent.ga.cgroupapi.CGroupsApi.cgroups_supported', return_value=True): with patch('azurelinuxagent.common.osutil.systemd.is_systemd', return_value=True): with MockEnvironment(tmp_dir, commands=_MOCKED_COMMANDS, paths=_MOCKED_PATHS, files=_MOCKED_FILES, data_files=data_files) as mock: yield mock Azure-WALinuxAgent-2b21de5/tests/lib/mock_command.py000077500000000000000000000011051462617747000224410ustar00rootroot00000000000000#!/usr/bin/env python3 import os import sys if len(sys.argv) != 4: sys.stderr.write("usage: {0} ".format(os.path.basename(__file__))) # W0632: Possible unbalanced tuple unpacking with sequence: left side has 3 label(s), right side has 0 value(s) (unbalanced-tuple-unpacking) # Disabled: Unpacking is balanced: there is a check for the length on line 5 stdout, return_value, stderr = sys.argv[1:] # pylint: disable=W0632 if stdout != '': sys.stdout.write(stdout) if stderr != '': sys.stderr.write(stderr) sys.exit(int(return_value)) Azure-WALinuxAgent-2b21de5/tests/lib/mock_environment.py000066400000000000000000000154501462617747000233740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Requires Python 2.6+ and Openssl 1.0+
#
import os
import re
import shutil
import subprocess

from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.utils import fileutil
from tests.lib.tools import patch, patch_builtin


class MockCommand:
    def __init__(self, command, stdout='', return_value=0, stderr=''):
        self.command = command
        self.stdout = stdout
        self.return_value = return_value
        self.stderr = stderr

    def __str__(self):
        return ' '.join(self.command) if isinstance(self.command, list) else self.command


class MockEnvironment:
    """
    A MockEnvironment is a Context Manager that can be used to mock a set of commands, file system paths,
    and/or files. It can be useful in tests that need to execute commands or access/modify files that are
    not available in all platforms, or require root privileges, or can change global system settings.
    For a sample usage see the mock_cgroup_environment() function.

    Currently, MockEnvironment mocks subprocess.Popen(), fileutil.mkdir(), os.path.exists() and the
    builtin open() function (it mocks fileutil.mkdir() instead of os.mkdir() because the agent's code
    uses the former, since it provides backwards compatibility with Python 2).

    The mock for Popen looks for a match in the given 'commands' and, if found, forwards the call to
    mock_command.py, which produces the output specified by the matching item. Otherwise it forwards
    the call to the original Popen function.

    The mocks for the other functions first look for a match in the given 'files' array and, if found,
    map the file to the corresponding path in the matching item (if the mapping points to an Exception,
    the Exception is raised). If there is no match, then it checks if the file is included in the given
    'paths' array and maps the path to the given 'tmp_dir' (e.g. "/lib/systemd/system" becomes
    "<tmp_dir>/lib/systemd/system".) If there are no matches, the path is not changed. Once this mapping
    has completed the mocks invoke the corresponding original function.

    Matches are done using regular expressions; the regular expressions in 'paths' must create group 1
    to indicate the section of the path that needs to be mapped (i.e. use parenthesis around the section
    that needs to be mapped.)

    The items in the given 'data_files' are copied to the 'tmp_dir'.

    The add_*() methods insert new items into the list of mock objects. Items added by these methods
    take precedence over the items provided to the __init__() method.
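    A minimal usage sketch (the command and file paths below are hypothetical, for illustration only):

        with MockEnvironment(tmp_dir, commands=[MockCommand(r"^some-command$", stdout="mock output")]) as env:
            env.add_file("/etc/some.conf", "/tmp/mock_some.conf")
            # within this block, Popen("some-command", shell=True) produces "mock output" and
            # open("/etc/some.conf") is redirected to "/tmp/mock_some.conf"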
""" def __init__(self, tmp_dir, commands=None, paths=None, files=None, data_files=None): # make a copy of the arrays passed as arguments since individual tests can modify them self.tmp_dir = tmp_dir self.commands = [] if commands is None else commands[:] self.paths = [] if paths is None else paths[:] self.files = [] if files is None else files[:] self._data_files = data_files # get references to the functions we'll mock so that we can call the original implementations self._original_popen = subprocess.Popen self._original_mkdir = fileutil.mkdir self._original_path_exists = os.path.exists self._original_open = open self.patchers = [ patch_builtin("open", side_effect=self._mock_open), patch("subprocess.Popen", side_effect=self._mock_popen), patch("os.path.exists", side_effect=self._mock_path_exists), patch("azurelinuxagent.common.utils.fileutil.mkdir", side_effect=self._mock_mkdir) ] def __enter__(self): if self._data_files is not None: for items in self._data_files: self.add_data_file(items[0], items[1]) try: for patcher in self.patchers: patcher.start() except Exception: self._stop_patchers() raise return self def __exit__(self, *_): self._stop_patchers() def _stop_patchers(self): for patcher in self.patchers: try: patcher.stop() except Exception: pass def add_command(self, command): self.commands.insert(0, command) def add_path(self, mock): self.paths.insert(0, mock) def add_file(self, actual, mock): self.files.insert(0, (actual, mock)) def add_data_file(self, source, target): shutil.copyfile(source, self.get_mapped_path(target)) def get_mapped_path(self, path): for item in self.files: match = re.match(item[0], path) if match is not None: return item[1] for item in self.paths: mapped = re.sub(item, r"{0}\1".format(self.tmp_dir), path) if mapped != path: mapped_parent = os.path.split(mapped)[0] if not self._original_path_exists(mapped_parent): os.makedirs(mapped_parent) return mapped return path def _mock_popen(self, command, *args, **kwargs): if isinstance(command, list): command_string = " ".join(command) else: command_string = command for cmd in self.commands: match = re.match(cmd.command, command_string) if match is not None: mock_script = os.path.join(os.path.split(__file__)[0], "mock_command.py") if 'shell' in kwargs and kwargs['shell']: command = "{0} '{1}' {2} '{3}'".format(mock_script, cmd.stdout, cmd.return_value, cmd.stderr) else: command = [mock_script, cmd.stdout, ustr(cmd.return_value), cmd.stderr] break return self._original_popen(command, *args, **kwargs) def _mock_mkdir(self, path, *args, **kwargs): return self._original_mkdir(self.get_mapped_path(path), *args, **kwargs) def _mock_open(self, path, *args, **kwargs): mapped_path = self.get_mapped_path(path) if isinstance(mapped_path, Exception): raise mapped_path return self._original_open(mapped_path, *args, **kwargs) def _mock_path_exists(self, path): return self._original_path_exists(self.get_mapped_path(path)) Azure-WALinuxAgent-2b21de5/tests/lib/mock_update_handler.py000066400000000000000000000146711462617747000240130ustar00rootroot00000000000000 # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import contextlib from mock import PropertyMock from azurelinuxagent.ga.agent_update_handler import AgentUpdateHandler from azurelinuxagent.ga.exthandlers import ExtHandlersHandler from azurelinuxagent.ga.remoteaccess import RemoteAccessHandler from azurelinuxagent.ga.update import UpdateHandler, get_update_handler from tests.lib.tools import patch, Mock, mock_sleep @contextlib.contextmanager def mock_update_handler(protocol, iterations=1, on_new_iteration=lambda _: None, exthandlers_handler=None, remote_access_handler=None, agent_update_handler=None, autoupdate_enabled=False, check_daemon_running=False, start_background_threads=False, check_background_threads=False ): """ Creates a mock UpdateHandler that executes its main loop for the given 'iterations'. * If 'on_new_iteration' is given, it is invoked at the beginning of each iteration passing the iteration number as argument. * Network requests (e.g. requests for the goal state) are done using the given 'protocol'. * The mock UpdateHandler uses mock no-op ExtHandlersHandler and RemoteAccessHandler, unless they are given by 'exthandlers_handler' and 'remote_access_handler'. * Agent updates are disabled, unless specified otherwise by 'autoupdate_enabled'. * The check for the daemon is skipped, unless specified otherwise by 'check_daemon_running' * Background threads (monitor, env, telemetry, etc) are not started, unless specified otherwise by 'start_background_threads' * The check for background threads is skipped, unless specified otherwise by 'check_background_threads' * The UpdateHandler is augmented with these extra functions: * get_exit_code() - returns the code passed to sys.exit() when the handler exits * get_iterations() - returns the number of iterations executed by the main loop * get_iterations_completed() - returns the number of iterations of the main loop that completed execution (i.e. 
were not interrupted by an exception, return, etc) """ iteration_count = [0] def is_running(*args): # mock for property UpdateHandler.is_running, which controls the main loop if len(args) == 0: # getter enter_loop = iteration_count[0] < iterations if enter_loop: iteration_count[0] += 1 on_new_iteration(iteration_count[0]) return enter_loop else: # setter return None if exthandlers_handler is None: exthandlers_handler = ExtHandlersHandler(protocol) else: exthandlers_handler.protocol = protocol if remote_access_handler is None: remote_access_handler = RemoteAccessHandler(protocol) if agent_update_handler is None: agent_update_handler = AgentUpdateHandler(protocol) cleanup_functions = [] def patch_object(target, attribute): p = patch.object(target, attribute) p.start() cleanup_functions.insert(0, p.stop) try: with patch("azurelinuxagent.ga.exthandlers.get_exthandlers_handler", return_value=exthandlers_handler): with patch("azurelinuxagent.ga.update.get_agent_update_handler", return_value=agent_update_handler): with patch("azurelinuxagent.ga.remoteaccess.get_remote_access_handler", return_value=remote_access_handler): with patch("azurelinuxagent.ga.update.conf.get_autoupdate_enabled", return_value=autoupdate_enabled): with patch.object(UpdateHandler, "is_running", PropertyMock(side_effect=is_running)): with patch('azurelinuxagent.ga.update.time.sleep', side_effect=lambda _: mock_sleep(0.001)) as sleep: with patch('sys.exit', side_effect=lambda _: 0) as mock_exit: if not check_daemon_running: patch_object(UpdateHandler, "_check_daemon_running") if not start_background_threads: patch_object(UpdateHandler, "_start_threads") if not check_background_threads: patch_object(UpdateHandler, "_check_threads_running") def get_exit_code(): if mock_exit.call_count == 0: raise Exception("The UpdateHandler did not exit") if mock_exit.call_count != 1: raise Exception("The UpdateHandler exited multiple times ({0})".format(mock_exit.call_count)) args, _ = mock_exit.call_args return args[0] def get_iterations(): return iteration_count[0] def get_iterations_completed(): return sleep.call_count update_handler = get_update_handler() update_handler.protocol_util.get_protocol = Mock(return_value=protocol) update_handler.get_exit_code = get_exit_code update_handler.get_iterations = get_iterations update_handler.get_iterations_completed = get_iterations_completed yield update_handler finally: for f in cleanup_functions: f() Azure-WALinuxAgent-2b21de5/tests/lib/mock_wire_protocol.py000066400000000000000000000164331462617747000237210ustar00rootroot00000000000000# Copyright 2020 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#
# Requires Python 2.6+ and Openssl 1.0+
#
import contextlib

from azurelinuxagent.common.protocol.wire import WireProtocol
from azurelinuxagent.common.utils import restutil
from tests.lib.tools import patch
from tests.lib import wire_protocol_data


@contextlib.contextmanager
def mock_wire_protocol(mock_wire_data_file, http_get_handler=None, http_post_handler=None, http_put_handler=None,
                       do_not_mock=lambda method, url: False, fail_on_unknown_request=True, save_to_history=False):
    """
    Creates a WireProtocol object that handles requests to the WireServer, the Host GA Plugin, and some requests to
    storage (those requests for which wire_protocol_data.py provides mock data). The data returned by those requests
    is read from the files specified by 'mock_wire_data_file' (which must follow the structure of the data files
    defined in tests/lib/wire_protocol_data.py).

    The caller can also provide handler functions for specific HTTP methods using the http_*_handler arguments. The
    return value of the handler function is interpreted similarly to the "return_value" argument of patch(): if it
    is an exception, the exception is raised or, if it is any object other than None, the value is returned by the
    mock. If the handler function returns None, the call is handled using the mock wireserver data or passed to the
    original restutil.http_request.

    The 'do_not_mock' lambda can be used to skip the mocks for specific requests; if the lambda returns True, the
    mocks won't be applied and the original common.utils.restutil.http_request will be invoked instead.

    The 'save_to_history' parameter is passed through in the call to WireProtocol.detect().

    The returned protocol object maintains a list of "tracked" urls. When a handler function returns a value that is
    not None, the url for the request is automatically added to the tracked list. The handler function can add other
    items to this list using the track_url() method on the mock.

    The return value of this function is an instance of WireProtocol augmented with these properties/methods:

        * mock_wire_data - the WireProtocolData constructed from the mock_wire_data_file parameter.
        * start() - starts the patchers for http_request and CryptUtil
        * stop() - stops the patchers
        * track_url(url) - adds the given item to the list of tracked urls.
        * get_tracked_urls() - returns the list of tracked urls.

    NOTE: This function patches common.utils.restutil.http_request and common.protocol.wire.CryptUtil; you need to
          be aware of this if your tests patch those methods or others in the call stack (e.g.
restutil.get, restutil._http_request, etc.)
    """
    tracked_urls = []

    # use a helper function to keep the HTTP handlers (they need to be modified by set_http_handlers() and
    # Python 2.* does not support nonlocal declarations)
    def http_handlers(get, post, put):
        http_handlers.get = get
        http_handlers.post = post
        http_handlers.put = put
        del tracked_urls[:]
    http_handlers(get=http_get_handler, post=http_post_handler, put=http_put_handler)

    #
    # function used to patch restutil.http_request
    #
    original_http_request = restutil.http_request

    def http_request(method, url, data, timeout, **kwargs):
        # call the original restutil.http_request if the request should not be mocked
        if protocol.do_not_mock(method, url):
            return original_http_request(method, url, data, timeout, **kwargs)

        # if there is a handler for the request, use it
        handler = None
        if method == 'GET':
            handler = http_handlers.get
        elif method == 'POST':
            handler = http_handlers.post
        elif method == 'PUT':
            handler = http_handlers.put
        if handler is not None:
            if method == 'GET':
                return_value = handler(url, **kwargs)
            else:
                return_value = handler(url, data, **kwargs)
            if return_value is not None:
                tracked_urls.append(url)
                if isinstance(return_value, Exception):
                    raise return_value
                return return_value

        # if the request was not handled, try to use the mock wireserver data
        try:
            if method == 'GET':
                return protocol.mock_wire_data.mock_http_get(url, **kwargs)
            if method == 'POST':
                return protocol.mock_wire_data.mock_http_post(url, data, **kwargs)
            if method == 'PUT':
                return protocol.mock_wire_data.mock_http_put(url, data, **kwargs)
        except NotImplementedError:
            pass

        # if there was no response for the request, then fail it or call the original restutil.http_request
        if fail_on_unknown_request:
            raise ValueError('Unknown HTTP request: {0} [{1}]'.format(url, method))
        return original_http_request(method, url, data, timeout, **kwargs)

    #
    # functions to start/stop the mocks
    #
    def start():
        patched = patch("azurelinuxagent.common.utils.restutil.http_request", side_effect=http_request)
        patched.start()
        start.http_request_patch = patched
        patched = patch("azurelinuxagent.common.protocol.wire.CryptUtil", side_effect=protocol.mock_wire_data.mock_crypt_util)
        patched.start()
        start.crypt_util_patch = patched
    start.http_request_patch = None
    start.crypt_util_patch = None

    def stop():
        if start.crypt_util_patch is not None:
            start.crypt_util_patch.stop()
        if start.http_request_patch is not None:
            start.http_request_patch.stop()

    #
    # create the protocol object
    #
    protocol = WireProtocol(restutil.KNOWN_WIRESERVER_IP)
    protocol.mock_wire_data = wire_protocol_data.WireProtocolData(mock_wire_data_file)
    protocol.start = start
    protocol.stop = stop
    protocol.track_url = lambda url: tracked_urls.append(url)  # pylint: disable=unnecessary-lambda
    protocol.get_tracked_urls = lambda: tracked_urls
    protocol.set_http_handlers = lambda http_get_handler=None, http_post_handler=None, http_put_handler=None:\
        http_handlers(get=http_get_handler, post=http_post_handler, put=http_put_handler)
    protocol.do_not_mock = do_not_mock

    # go do it
    try:
        protocol.start()
        protocol.detect(save_to_history=save_to_history)
        yield protocol
    finally:
        protocol.stop()


class MockHttpResponse:
    def __init__(self, status, body=b'', headers=None, reason=None):
        self.body = body
        self.status = status
        self.headers = [] if headers is None else headers
        self.reason = reason

    def read(self, *_):
        return self.body

    def getheaders(self):
        return self.headers
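
# Illustrative sketch (not part of the original module): the typical shape of a test that uses
# mock_wire_protocol() with a custom GET handler. DATA_FILE comes from tests.lib.wire_protocol_data;
# the URL and response body below are made-up values, and the sketch assumes WireProtocol exposes its
# WireClient as 'client'.
#
#     from tests.lib.wire_protocol_data import DATA_FILE
#
#     def http_get_handler(url, **kwargs):
#         if url.endswith('/my/test/url'):  # hypothetical URL
#             return MockHttpResponse(status=200, body=b'mock response')
#         return None  # fall through to the mock wireserver data
#
#     with mock_wire_protocol(DATA_FILE, http_get_handler=http_get_handler) as protocol:
#         protocol.client.update_goal_state()
#         urls = protocol.get_tracked_urls()  # URLs that were answered by the custom handler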
Azure-WALinuxAgent-2b21de5/tests/lib/tools.py000066400000000000000000000507651462617747000211670ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # """ Define util functions for unit test """ import difflib import os import pprint import re import shutil import stat import sys import tempfile import time import unittest from functools import wraps from threading import currentThread import azurelinuxagent.common.conf as conf import azurelinuxagent.common.event as event import azurelinuxagent.common.logger as logger from azurelinuxagent.common.future import range # pylint: disable=redefined-builtin from azurelinuxagent.common.utils import fileutil from azurelinuxagent.common.version import PY_VERSION_MAJOR import tests try: from unittest.mock import Mock, patch, MagicMock, ANY, DEFAULT, call, PropertyMock # pylint: disable=unused-import # Import mock module for Python2 and Python3 from bin.waagent2 import Agent # pylint: disable=unused-import except ImportError: from mock import Mock, patch, MagicMock, ANY, DEFAULT, call, PropertyMock test_dir = tests.__path__[0] data_dir = os.path.join(test_dir, "data") debug = False if os.environ.get('DEBUG') == '1': debug = True # Enable verbose logger to stdout if debug: logger.add_logger_appender(logger.AppenderType.STDOUT, logger.LogLevel.VERBOSE) _MAX_LENGTH = 120 _MAX_LENGTH_SAFE_REPR = 80 # Mock sleep to reduce test execution time _SLEEP = time.sleep def mock_sleep(sec=0.01): """ Mocks the time.sleep method to reduce unit test time :param sec: Time to replace the sleep call with, default = 0.01sec """ _SLEEP(sec) def safe_repr(obj, short=False): try: result = repr(obj) except Exception: result = object.__repr__(obj) if not short or len(result) < _MAX_LENGTH: return result return result[:_MAX_LENGTH_SAFE_REPR] + ' [truncated]...' def skip_if_predicate_false(predicate, message): if not predicate(): if hasattr(unittest, "skip"): return unittest.skip(message) return lambda func: None return lambda func: func def skip_if_predicate_true(predicate, message): if predicate(): if hasattr(unittest, "skip"): return unittest.skip(message) return lambda func: None return lambda func: func def _safe_repr(obj, short=False): """ Copied from Python 3.x """ try: result = repr(obj) except Exception: result = object.__repr__(obj) if not short or len(result) < _MAX_LENGTH: return result return result[:_MAX_LENGTH] + ' [truncated]...' 
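
# Illustrative usage (not part of the original module): the skip_if_predicate_* decorators above wrap
# test methods; for example, combined with i_am_root() (defined below) a test can be skipped when the
# suite is not running as root:
#
#     class MyPrivilegedTest(AgentTestCase):
#         @skip_if_predicate_false(i_am_root, "This test requires root privileges")
#         def test_something_privileged(self):
#             ...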
def i_am_root(): return os.geteuid() == 0 def is_python_version_26(): return sys.version_info[0] == 2 and sys.version_info[1] == 6 def is_python_version_34(): return sys.version_info[0] == 3 and sys.version_info[1] == 4 def is_python_version_26_or_34(): return is_python_version_26() or is_python_version_34() class AgentTestCase(unittest.TestCase): @classmethod def setUpClass(cls): # Setup newer unittest assertions missing in prior versions of Python if not hasattr(cls, "assertRegex"): cls.assertRegex = cls.assertRegexpMatches if hasattr(cls, "assertRegexpMatches") else cls.emulate_assertRegexpMatches if not hasattr(cls, "assertNotRegex"): cls.assertNotRegex = cls.assertNotRegexpMatches if hasattr(cls, "assertNotRegexpMatches") else cls.emulate_assertNotRegexpMatches # pylint: disable=no-member if not hasattr(cls, "assertIn"): cls.assertIn = cls.emulate_assertIn if not hasattr(cls, "assertNotIn"): cls.assertNotIn = cls.emulate_assertNotIn if not hasattr(cls, "assertGreater"): cls.assertGreater = cls.emulate_assertGreater if not hasattr(cls, "assertGreaterEqual"): cls.assertGreaterEqual = cls.emulate_assertGreaterEqual if not hasattr(cls, "assertLess"): cls.assertLess = cls.emulate_assertLess if not hasattr(cls, "assertLessEqual"): cls.assertLessEqual = cls.emulate_assertLessEqual if not hasattr(cls, "assertIsNone"): cls.assertIsNone = cls.emulate_assertIsNone if not hasattr(cls, "assertIsNotNone"): cls.assertIsNotNone = cls.emulate_assertIsNotNone if hasattr(cls, "assertRaisesRegexp"): cls.assertRaisesRegex = cls.assertRaisesRegexp if not hasattr(cls, "assertRaisesRegex"): cls.assertRaisesRegex = cls.emulate_raises_regex if not hasattr(cls, "assertListEqual"): cls.assertListEqual = cls.emulate_assertListEqual if not hasattr(cls, "assertIsInstance"): cls.assertIsInstance = cls.emulate_assertIsInstance if sys.version_info < (2, 7): # assertRaises does not implement a context manager in 2.6; override it with emulate_assertRaises but # keep a pointer to the original implementation to use when a context manager is not requested. 
cls.original_assertRaises = unittest.TestCase.assertRaises cls.assertRaises = cls.emulate_assertRaises cls.assertDictEqual = cls.emulate_assertDictEqual @classmethod def tearDownClass(cls): pass def setUp(self): prefix = "{0}_".format(self.__class__.__name__) self.tmp_dir = tempfile.mkdtemp(prefix=prefix) self.test_file = 'test_file' conf.get_lib_dir = Mock(return_value=self.tmp_dir) ext_log_dir = os.path.join(self.tmp_dir, "azure") conf.get_ext_log_dir = Mock(return_value=ext_log_dir) conf.get_agent_pid_file_path = Mock(return_value=os.path.join(self.tmp_dir, "waagent.pid")) event.init_event_status(self.tmp_dir) event.init_event_logger(self.tmp_dir) def tearDown(self): if not debug and self.tmp_dir is not None: shutil.rmtree(self.tmp_dir) def emulate_assertIn(self, a, b, msg=None): if a not in b: msg = msg if msg is not None else "{0} not found in {1}".format(_safe_repr(a), _safe_repr(b)) self.fail(msg) def emulate_assertNotIn(self, a, b, msg=None): if a in b: msg = msg if msg is not None else "{0} unexpectedly found in {1}".format(_safe_repr(a), _safe_repr(b)) self.fail(msg) def emulate_assertGreater(self, a, b, msg=None): if not a > b: msg = msg if msg is not None else '{0} not greater than {1}'.format(_safe_repr(a), _safe_repr(b)) self.fail(msg) def emulate_assertGreaterEqual(self, a, b, msg=None): if not a >= b: msg = msg if msg is not None else '{0} not greater or equal to {1}'.format(_safe_repr(a), _safe_repr(b)) self.fail(msg) def emulate_assertLess(self, a, b, msg=None): if not a < b: msg = msg if msg is not None else '{0} not less than {1}'.format(_safe_repr(a), _safe_repr(b)) self.fail(msg) def emulate_assertLessEqual(self, a, b, msg=None): if not a <= b: msg = msg if msg is not None else '{0} not less or equal to {1}'.format(_safe_repr(a), _safe_repr(b)) self.fail(msg) def emulate_assertIsNone(self, x, msg=None): if x is not None: msg = msg if msg is not None else '{0} is not None'.format(_safe_repr(x)) self.fail(msg) def emulate_assertIsNotNone(self, x, msg=None): if x is None: msg = msg if msg is not None else '{0} is None'.format(_safe_repr(x)) self.fail(msg) def emulate_assertRegexpMatches(self, text, regexp, msg=None): if re.search(regexp, text) is not None: return msg = msg if msg is not None else "'{0}' does not match '{1}'.".format(text, regexp) self.fail(msg) def emulate_assertNotRegexpMatches(self, text, regexp, msg=None): if re.search(regexp, text, flags=1) is None: return msg = msg if msg is not None else "'{0}' should not match '{1}'.".format(text, regexp) self.fail(msg) class _AssertRaisesContextManager(object): def __init__(self, expected_exception_type, test_case, match_regex=None, regex_flags=0): self._expected_exception_type = expected_exception_type self._test_case = test_case self._match_regex = match_regex self._regex_flags = regex_flags self.exception = None def __enter__(self): return self @staticmethod def _get_type_name(t): return t.__name__ if hasattr(t, "__name__") else str(t) def __exit__(self, exception_type, exception, *_): if exception_type is None: expected = AgentTestCase._AssertRaisesContextManager._get_type_name(self._expected_exception_type) self._test_case.fail("Did not raise an exception; expected '{0}'".format(expected)) if not issubclass(exception_type, self._expected_exception_type): raised = AgentTestCase._AssertRaisesContextManager._get_type_name(exception_type) expected = AgentTestCase._AssertRaisesContextManager._get_type_name(self._expected_exception_type) self._test_case.fail("Raised '{0}', but expected '{1}'".format(raised, 
expected)) if self._match_regex is not None: exception_text = str(exception) if re.search(self._match_regex, exception_text, flags=self._regex_flags) is None: self._test_case.fail("The exception did not match the expected pattern. Expected: r'{0}' Got: '{1}'".format(self._match_regex, exception_text)) self.exception = exception return True def emulate_assertRaises(self, exception_type, function=None, *args, **kwargs): # pylint: disable=keyword-arg-before-vararg # return a context manager only when function is not provided; otherwise use the original assertRaises if function is None: return AgentTestCase._AssertRaisesContextManager(exception_type, self) self.original_assertRaises(exception_type, function, *args, **kwargs) return None def emulate_raises_regex(self, exception_type, regex, function, *args, **kwargs): try: function(*args, **kwargs) except Exception as e: if re.search(regex, str(e), flags=1) is not None: return else: self.fail("Expected exception {0} matching {1}. Actual: {2}".format( exception_type, regex, str(e))) self.fail("No exception was thrown. Expected exception {0} matching {1}".format(exception_type, regex)) def assertRaisesRegexCM(self, exception_type, regex, flags=0): """ Similar to assertRaisesRegex, but returns a context manager (mostly needed for Python 2.*, which does not have a assertRaisesRegex) """ return AgentTestCase._AssertRaisesContextManager(exception_type, self, match_regex=regex, regex_flags=flags) def emulate_assertDictEqual(self, first, second, msg=None): def fail(message): self.fail(self._formatMessage(msg, message)) for k in first.keys(): if k not in second: fail("'{0}' is missing from second".format(k)) if first[k] != second[k]: fail("'{0}' != '{1}' (key: {2})".format(first[k], second[k], k)) for k in second.keys(): if k not in first: fail("'{0}' is missing from first".format(k)) def emulate_assertListEqual(self, seq1, seq2, msg=None, seq_type=None): """An equality assertion for ordered sequences (like lists and tuples). For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator. Args: seq1: The first sequence to compare. seq2: The second sequence to compare. seq_type: The expected datatype of the sequences, or None if no datatype should be enforced. msg: Optional message to use on failure instead of a list of differences. """ if seq_type is not None: seq_type_name = seq_type.__name__ if not isinstance(seq1, seq_type): raise self.failureException('First sequence is not a %s: %s' % (seq_type_name, safe_repr(seq1))) if not isinstance(seq2, seq_type): raise self.failureException('Second sequence is not a %s: %s' % (seq_type_name, safe_repr(seq2))) else: seq_type_name = "sequence" differing = None try: len1 = len(seq1) except (TypeError, NotImplementedError): differing = 'First %s has no length. Non-sequence?' % ( seq_type_name) if differing is None: try: len2 = len(seq2) except (TypeError, NotImplementedError): differing = 'Second %s has no length. Non-sequence?' % ( seq_type_name) if differing is None: if seq1 == seq2: return seq1_repr = safe_repr(seq1) seq2_repr = safe_repr(seq2) if len(seq1_repr) > 30: seq1_repr = seq1_repr[:30] + '...' if len(seq2_repr) > 30: seq2_repr = seq2_repr[:30] + '...' 
elements = (seq_type_name.capitalize(), seq1_repr, seq2_repr) differing = '%ss differ: %s != %s\n' % elements for i in range(min(len1, len2)): try: item1 = seq1[i] except (TypeError, IndexError, NotImplementedError): differing += ('\nUnable to index element %d of first %s\n' % (i, seq_type_name)) break try: item2 = seq2[i] except (TypeError, IndexError, NotImplementedError): differing += ('\nUnable to index element %d of second %s\n' % (i, seq_type_name)) break if item1 != item2: differing += ('\nFirst differing element %d:\n%s\n%s\n' % (i, safe_repr(item1), safe_repr(item2))) break else: if (len1 == len2 and seq_type is None and type(seq1) != type(seq2)): # The sequences are the same, but have differing types. return if len1 > len2: differing += ('\nFirst %s contains %d additional ' 'elements.\n' % (seq_type_name, len1 - len2)) try: differing += ('First extra element %d:\n%s\n' % (len2, safe_repr(seq1[len2]))) except (TypeError, IndexError, NotImplementedError): differing += ('Unable to index element %d ' 'of first %s\n' % (len2, seq_type_name)) elif len1 < len2: differing += ('\nSecond %s contains %d additional ' 'elements.\n' % (seq_type_name, len2 - len1)) try: differing += ('First extra element %d:\n%s\n' % (len1, safe_repr(seq2[len1]))) except (TypeError, IndexError, NotImplementedError): differing += ('Unable to index element %d ' 'of second %s\n' % (len1, seq_type_name)) standardMsg = differing diffMsg = '\n' + '\n'.join( difflib.ndiff(pprint.pformat(seq1).splitlines(), pprint.pformat(seq2).splitlines())) standardMsg = self._truncateMessage(standardMsg, diffMsg) msg = self._formatMessage(msg, standardMsg) self.fail(msg) def emulate_assertIsInstance(self, obj, object_type, msg=None): if not isinstance(obj, object_type): msg = msg if msg is not None else '{0} is not an instance of {1}'.format(_safe_repr(obj), _safe_repr(object_type)) self.fail(msg) @staticmethod def _create_files(tmp_dir, prefix, suffix, count, with_sleep=0): for i in range(count): f = os.path.join(tmp_dir, '.'.join((prefix, str(i), suffix))) fileutil.write_file(f, "faux content") time.sleep(with_sleep) @staticmethod def create_script(script_file, contents): """ Creates an executable script with the given contents. If file ends with ".py", it creates a Python3 script, otherwise it creates a bash script. 
""" with open(script_file, "w") as script: if script_file.endswith(".py"): script.write("#!/usr/bin/env python3\n") else: script.write("#!/usr/bin/env bash\n") script.write(contents) os.chmod(script_file, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR) class AgentTestCaseWithGetVmSizeMock(AgentTestCase): def setUp(self): self._get_vm_size_patch = patch('azurelinuxagent.ga.update.UpdateHandler._get_vm_size', return_value="unknown") self._get_vm_size_patch.start() super(AgentTestCaseWithGetVmSizeMock, self).setUp() def tearDown(self): if self._get_vm_size_patch: self._get_vm_size_patch.stop() super(AgentTestCaseWithGetVmSizeMock, self).tearDown() def load_data(name): """Load test data""" path = os.path.join(data_dir, name) with open(path, "r") as data_file: return data_file.read() def load_bin_data(name, directory=None): """Load test bin data""" if directory is None: directory = data_dir path = os.path.join(directory, name) with open(path, "rb") as data_file: return data_file.read() supported_distro = [ ["ubuntu", "12.04", ""], ["ubuntu", "14.04", ""], ["ubuntu", "14.10", ""], ["ubuntu", "15.10", ""], ["ubuntu", "15.10", "Snappy Ubuntu Core"], ["coreos", "", ""], ["flatcar", "", ""], ["suse", "12", "SUSE Linux Enterprise Server"], ["suse", "13.2", "openSUSE"], ["suse", "11", "SUSE Linux Enterprise Server"], ["suse", "13.1", "openSUSE"], ["debian", "6.0", ""], ["redhat", "6.5", ""], ["redhat", "7.0", ""], ] def open_patch(): open_name = '__builtin__.open' if PY_VERSION_MAJOR == 3: open_name = 'builtins.open' return open_name def patch_builtin(target, *args, **kwargs): prefix = 'builtins' if PY_VERSION_MAJOR >= 3 else '__builtin__' return patch("{0}.{1}".format(prefix, target), *args, **kwargs) def distros(distro_name=".*", distro_version=".*", distro_full_name=".*"): """Run test on multiple distros""" def decorator(test_method): @wraps(test_method) def wrapper(self, *args, **kwargs): for distro in supported_distro: if re.match(distro_name, distro[0]) and \ re.match(distro_version, distro[1]) and \ re.match(distro_full_name, distro[2]): if debug: logger.info("Run {0} on {1}", test_method.__name__, distro) new_args = [] new_args.extend(args) new_args.extend(distro) test_method(self, *new_args, **kwargs) # Call tearDown and setUp to create separated environment # for distro testing self.tearDown() self.setUp() return wrapper return decorator def clear_singleton_instances(cls): # Adding this lock to avoid any race conditions with cls._lock: obj_name = "%s__%s" % (cls.__name__, currentThread().getName()) # Object Name = className__threadName if obj_name in cls._instances: del cls._instances[obj_name] Azure-WALinuxAgent-2b21de5/tests/lib/wire_protocol_data.py000066400000000000000000000505341462617747000237010ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import base64 import datetime import json import re from azurelinuxagent.common.utils import timeutil from azurelinuxagent.common.utils.textutil import parse_doc, find, findall from tests.lib.http_request_predicates import HttpRequestPredicates from tests.lib.tools import load_bin_data, load_data, MagicMock, Mock from azurelinuxagent.common.protocol.imds import IMDS_ENDPOINT from azurelinuxagent.common.exception import HttpError, ResourceGoneError from azurelinuxagent.common.future import httpclient from azurelinuxagent.common.utils.cryptutil import CryptUtil DATA_FILE = { "version_info": "wire/version_info.xml", "goal_state": "wire/goal_state.xml", "hosting_env": "wire/hosting_env.xml", "shared_config": "wire/shared_config.xml", "certs": "wire/certs.xml", "ext_conf": "wire/ext_conf.xml", "manifest": "wire/manifest.xml", "ga_manifest": "wire/ga_manifest.xml", "trans_prv": "wire/trans_prv", "trans_cert": "wire/trans_cert", "test_ext": "ext/sample_ext-1.3.0.zip", "imds_info": "imds/valid.json", "remote_access": None, "in_vm_artifacts_profile": None, "vm_settings": None, "ETag": None } DATA_FILE_IN_VM_ARTIFACTS_PROFILE = DATA_FILE.copy() DATA_FILE_IN_VM_ARTIFACTS_PROFILE["ext_conf"] = "wire/ext_conf_in_vm_artifacts_profile.xml" DATA_FILE_IN_VM_ARTIFACTS_PROFILE["in_vm_artifacts_profile"] = "wire/in_vm_artifacts_profile.json" DATA_FILE_IN_VM_META_DATA = DATA_FILE.copy() DATA_FILE_IN_VM_META_DATA["ext_conf"] = "wire/ext_conf_in_vm_metadata.xml" DATA_FILE_INVALID_VM_META_DATA = DATA_FILE.copy() DATA_FILE_INVALID_VM_META_DATA["ext_conf"] = "wire/ext_conf_invalid_vm_metadata.xml" DATA_FILE_NO_EXT = DATA_FILE.copy() DATA_FILE_NO_EXT["ext_conf"] = "wire/ext_conf_no_extensions-block_blob.xml" DATA_FILE_NOOP_GS = DATA_FILE.copy() DATA_FILE_NOOP_GS["goal_state"] = "wire/goal_state_noop.xml" DATA_FILE_NOOP_GS["ext_conf"] = None DATA_FILE_EXT_NO_SETTINGS = DATA_FILE.copy() DATA_FILE_EXT_NO_SETTINGS["ext_conf"] = "wire/ext_conf_no_settings.xml" DATA_FILE_EXT_NO_PUBLIC = DATA_FILE.copy() DATA_FILE_EXT_NO_PUBLIC["ext_conf"] = "wire/ext_conf_no_public.xml" DATA_FILE_EXT_AUTOUPGRADE = DATA_FILE.copy() DATA_FILE_EXT_AUTOUPGRADE["ext_conf"] = "wire/ext_conf_autoupgrade.xml" DATA_FILE_EXT_INTERNALVERSION = DATA_FILE.copy() DATA_FILE_EXT_INTERNALVERSION["ext_conf"] = "wire/ext_conf_internalversion.xml" DATA_FILE_EXT_AUTOUPGRADE_INTERNALVERSION = DATA_FILE.copy() DATA_FILE_EXT_AUTOUPGRADE_INTERNALVERSION["ext_conf"] = "wire/ext_conf_autoupgrade_internalversion.xml" DATA_FILE_EXT_ROLLINGUPGRADE = DATA_FILE.copy() DATA_FILE_EXT_ROLLINGUPGRADE["ext_conf"] = "wire/ext_conf_upgradeguid.xml" DATA_FILE_EXT_SEQUENCING = DATA_FILE.copy() DATA_FILE_EXT_SEQUENCING["ext_conf"] = "wire/ext_conf_sequencing.xml" DATA_FILE_EXT_ADDITIONAL_LOCATIONS = DATA_FILE.copy() DATA_FILE_EXT_ADDITIONAL_LOCATIONS["ext_conf"] = "wire/ext_conf_additional_locations.xml" DATA_FILE_EXT_DELETION = DATA_FILE.copy() DATA_FILE_EXT_DELETION["manifest"] = "wire/manifest_deletion.xml" DATA_FILE_EXT_SINGLE = DATA_FILE.copy() DATA_FILE_EXT_SINGLE["manifest"] = "wire/manifest_deletion.xml" DATA_FILE_MULTIPLE_EXT = DATA_FILE.copy() DATA_FILE_MULTIPLE_EXT["ext_conf"] = "wire/ext_conf_multiple_extensions.xml" DATA_FILE_CASE_MISMATCH_EXT = DATA_FILE.copy() DATA_FILE_CASE_MISMATCH_EXT["ext_conf"] = "wire/ext_conf_settings_case_mismatch.xml" DATA_FILE_NO_CERT_FORMAT = DATA_FILE.copy() DATA_FILE_NO_CERT_FORMAT["certs"] = "wire/certs_no_format_specified.xml" DATA_FILE_CERT_FORMAT_NOT_PFX = DATA_FILE.copy() 
DATA_FILE_CERT_FORMAT_NOT_PFX["certs"] = "wire/certs_format_not_pfx.xml" DATA_FILE_REMOTE_ACCESS = DATA_FILE.copy() DATA_FILE_REMOTE_ACCESS["goal_state"] = "wire/goal_state_remote_access.xml" DATA_FILE_REMOTE_ACCESS["remote_access"] = "wire/remote_access_single_account.xml" DATA_FILE_PLUGIN_SETTINGS_MISMATCH = DATA_FILE.copy() DATA_FILE_PLUGIN_SETTINGS_MISMATCH["ext_conf"] = "wire/invalid_config/ext_conf_plugin_settings_version_mismatch.xml" DATA_FILE_REQUIRED_FEATURES = DATA_FILE.copy() DATA_FILE_REQUIRED_FEATURES["ext_conf"] = "wire/ext_conf_required_features.xml" DATA_FILE_VM_SETTINGS = DATA_FILE.copy() DATA_FILE_VM_SETTINGS["vm_settings"] = "hostgaplugin/vm_settings.json" DATA_FILE_VM_SETTINGS["ETag"] = "1" DATA_FILE_VM_SETTINGS["ext_conf"] = "hostgaplugin/ext_conf.xml" DATA_FILE_VM_SETTINGS["in_vm_artifacts_profile"] = "hostgaplugin/in_vm_artifacts_profile.json" class WireProtocolData(object): def __init__(self, data_files=None): if data_files is None: data_files = DATA_FILE self.emulate_stale_goal_state = False self.call_counts = { "comp=versions": 0, "/versions": 0, "/health": 0, "/HealthService": 0, "/vmAgentLog": 0, "goalstate": 0, "hostingEnvironmentConfig": 0, "sharedConfig": 0, "certificates": 0, "extensionsConfig": 0, "remoteaccessinfouri": 0, "extensionArtifact": 0, "agentArtifact": 0, "manifest.xml": 0, "manifest_of_ga.xml": 0, "ExampleHandlerLinux": 0, "in_vm_artifacts_profile": 0, "vm_settings": 0 } self.status_blobs = [] self.data_files = data_files self.version_info = None self.goal_state = None self.hosting_env = None self.shared_config = None self.certs = None self.ext_conf = None self.manifest = None self.ga_manifest = None self.trans_prv = None self.trans_cert = None self.ext = None self.remote_access = None self.in_vm_artifacts_profile = None self.vm_settings = None self.etag = None self.prev_etag = None self.imds_info = None self.reload() def reload(self): self.version_info = load_data(self.data_files.get("version_info")) self.goal_state = load_data(self.data_files.get("goal_state")) self.hosting_env = load_data(self.data_files.get("hosting_env")) self.shared_config = load_data(self.data_files.get("shared_config")) self.certs = load_data(self.data_files.get("certs")) self.ext_conf = self.data_files.get("ext_conf") if self.ext_conf is not None: self.ext_conf = load_data(self.ext_conf) self.manifest = load_data(self.data_files.get("manifest")) self.ga_manifest = load_data(self.data_files.get("ga_manifest")) self.trans_prv = load_data(self.data_files.get("trans_prv")) self.trans_cert = load_data(self.data_files.get("trans_cert")) self.imds_info = json.loads(load_data(self.data_files.get("imds_info"))) self.ext = load_bin_data(self.data_files.get("test_ext")) vm_settings = self.data_files.get("vm_settings") if vm_settings is not None: self.vm_settings = load_data(self.data_files.get("vm_settings")) self.etag = self.data_files.get("ETag") remote_access_data_file = self.data_files.get("remote_access") if remote_access_data_file is not None: self.remote_access = load_data(remote_access_data_file) in_vm_artifacts_profile_file = self.data_files.get("in_vm_artifacts_profile") if in_vm_artifacts_profile_file is not None: self.in_vm_artifacts_profile = load_data(in_vm_artifacts_profile_file) def reset_call_counts(self): for counter in self.call_counts: self.call_counts[counter] = 0 def mock_http_get(self, url, *_, **kwargs): content = '' response_headers = [] resp = MagicMock() resp.status = httpclient.OK if "comp=versions" in url: # wire server versions content = 
self.version_info self.call_counts["comp=versions"] += 1 elif "/versions" in url: # HostPlugin versions content = '["2015-09-01"]' self.call_counts["/versions"] += 1 elif url.endswith("/health"): # HostPlugin health content = '' self.call_counts["/health"] += 1 elif "goalstate" in url: content = self.goal_state self.call_counts["goalstate"] += 1 elif HttpRequestPredicates.is_hosting_environment_config_request(url): content = self.hosting_env self.call_counts["hostingEnvironmentConfig"] += 1 elif HttpRequestPredicates.is_shared_config_request(url): content = self.shared_config self.call_counts["sharedConfig"] += 1 elif HttpRequestPredicates.is_certificates_request(url): content = self.certs self.call_counts["certificates"] += 1 elif HttpRequestPredicates.is_extensions_config_request(url): content = self.ext_conf self.call_counts["extensionsConfig"] += 1 elif "remoteaccessinfouri" in url: content = self.remote_access self.call_counts["remoteaccessinfouri"] += 1 elif ".vmSettings" in url or ".settings" in url: content = self.in_vm_artifacts_profile self.call_counts["in_vm_artifacts_profile"] += 1 elif "/vmSettings" in url: if self.vm_settings is None: resp.status = httpclient.NOT_FOUND elif self.call_counts["vm_settings"] > 0 and self.prev_etag == self.etag: resp.status = httpclient.NOT_MODIFIED else: content = self.vm_settings response_headers = [('ETag', self.etag)] self.prev_etag = self.etag self.call_counts["vm_settings"] += 1 elif '{0}/metadata/compute'.format(IMDS_ENDPOINT) in url: content = json.dumps(self.imds_info.get("compute", "{}")) else: # A stale GoalState results in a 400 from the HostPlugin # for which the HTTP handler in restutil raises ResourceGoneError if self.emulate_stale_goal_state: if "extensionArtifact" in url: self.emulate_stale_goal_state = False self.call_counts["extensionArtifact"] += 1 raise ResourceGoneError() else: raise HttpError() # For HostPlugin requests, replace the URL with that passed # via the x-ms-artifact-location header if "extensionArtifact" in url: self.call_counts["extensionArtifact"] += 1 if "headers" not in kwargs: raise ValueError("HostPlugin request is missing the HTTP headers: {0}", kwargs) # pylint: disable=raising-format-tuple if "x-ms-artifact-location" not in kwargs["headers"]: raise ValueError("HostPlugin request is missing the x-ms-artifact-location header: {0}", kwargs) # pylint: disable=raising-format-tuple url = kwargs["headers"]["x-ms-artifact-location"] if "manifest.xml" in url: content = self.manifest self.call_counts["manifest.xml"] += 1 elif HttpRequestPredicates.is_ga_manifest_request(url): content = self.ga_manifest self.call_counts["manifest_of_ga.xml"] += 1 elif "ExampleHandlerLinux" in url: content = self.ext self.call_counts["ExampleHandlerLinux"] += 1 resp.read = Mock(return_value=content) return resp elif ".vmSettings" in url or ".settings" in url: content = self.in_vm_artifacts_profile self.call_counts["in_vm_artifacts_profile"] += 1 else: raise NotImplementedError(url) resp.read = Mock(return_value=content.encode("utf-8")) resp.getheaders = Mock(return_value=response_headers) return resp def mock_http_post(self, url, *_, **__): content = None resp = MagicMock() resp.status = httpclient.OK if url.endswith('/HealthService'): self.call_counts['/HealthService'] += 1 content = '' else: raise NotImplementedError(url) resp.read = Mock(return_value=content.encode("utf-8")) return resp def mock_http_put(self, url, data, **_): content = '' resp = MagicMock() resp.status = httpclient.OK if url.endswith('/vmAgentLog'): 
            self.call_counts['/vmAgentLog'] += 1
        elif HttpRequestPredicates.is_storage_status_request(url):
            self.status_blobs.append(data)
        elif HttpRequestPredicates.is_host_plugin_status_request(url):
            self.status_blobs.append(WireProtocolData.get_status_blob_from_hostgaplugin_put_status_request(data))
        else:
            raise NotImplementedError(url)

        resp.read = Mock(return_value=content.encode("utf-8"))
        return resp

    def mock_crypt_util(self, *args, **kw):
        # Partially patch instance method of class CryptUtil
        cryptutil = CryptUtil(*args, **kw)
        cryptutil.gen_transport_cert = Mock(side_effect=self.mock_gen_trans_cert)
        return cryptutil

    def mock_gen_trans_cert(self, trans_prv_file, trans_cert_file):
        with open(trans_prv_file, 'w+') as prv_file:
            prv_file.write(self.trans_prv)
        with open(trans_cert_file, 'w+') as cert_file:
            cert_file.write(self.trans_cert)

    @staticmethod
    def get_status_blob_from_hostgaplugin_put_status_request(data):
        status_object = json.loads(data)
        content = status_object["content"]
        return base64.b64decode(content)

    def get_no_of_plugins_in_extension_config(self):
        if self.ext_conf is None:
            return 0
        ext_config_doc = parse_doc(self.ext_conf)
        plugins_list = find(ext_config_doc, "Plugins")
        return len(findall(plugins_list, "Plugin"))

    def get_no_of_extensions_in_config(self):
        if self.ext_conf is None:
            return 0
        ext_config_doc = parse_doc(self.ext_conf)
        plugin_settings = find(ext_config_doc, "PluginSettings")
        return len(findall(plugin_settings, "ExtensionRuntimeSettings")) + len(
            findall(plugin_settings, "RuntimeSettings"))

    #
    # Having trouble reading the regular expressions below? You are not alone!
    #
    # For the use of "(?<=" and "(?=" see 7.2.1 in https://docs.python.org/3.1/library/re.html
    # For the use of "\g<1>" see backreferences in https://docs.python.org/3.1/library/re.html#re.sub
    #
    # Note that these regular expressions are not enough to parse all valid XML documents (e.g. they do
    # not account for metacharacters like < or > in the values) but they are good enough for the test
    # data. The functions include some basic checks, but they may fail to match valid XML, or may
    # produce invalid XML, if the input is too complex.
    #
    @staticmethod
    def replace_xml_element_value(xml_document, element_name, element_value):
        element_regex = r'(?<=<{0}>).+(?=</{0}>)'.format(element_name)
        if not re.search(element_regex, xml_document):
            raise Exception("Can't find XML element '{0}' in {1}".format(element_name, xml_document))
        return re.sub(element_regex, element_value, xml_document)

    @staticmethod
    def replace_xml_attribute_value(xml_document, element_name, attribute_name, attribute_value):
        attribute_regex = r'(?<=<{0} )(.*{1}=")[^"]+(?="[^>]*>)'.format(element_name, attribute_name)
        if not re.search(attribute_regex, xml_document):
            raise Exception("Can't find attribute {0} in XML element '{1}'. Document: {2}".format(attribute_name, element_name, xml_document))
        return re.sub(attribute_regex, r'\g<1>{0}'.format(attribute_value), xml_document)

    def set_etag(self, etag, timestamp=None):
        """
        Sets the ETag for the mock response. This function is used to mock a new goal state, and it also
        updates the timestamp (extensionsLastModifiedTickCount) in vmSettings.
""" if timestamp is None: timestamp = datetime.datetime.utcnow() self.etag = etag try: vm_settings = json.loads(self.vm_settings) vm_settings["extensionsLastModifiedTickCount"] = timeutil.datetime_to_ticks(timestamp) self.vm_settings = json.dumps(vm_settings) except ValueError: # some test data include syntax errors; ignore those pass def set_vm_settings_source(self, source): """ Sets the "extensionGoalStatesSource" for the mock vm_settings data """ vm_settings = json.loads(self.vm_settings) vm_settings["extensionGoalStatesSource"] = source self.vm_settings = json.dumps(vm_settings) def set_incarnation(self, incarnation, timestamp=None): """ Sets the incarnation in the goal state, but not on its subcomponents (e.g. hosting env, shared config). This function is used to mock a new goal state, and it also updates the timestamp (createdOnTicks) in ExtensionsConfig. """ self.goal_state = WireProtocolData.replace_xml_element_value(self.goal_state, "Incarnation", str(incarnation)) if self.ext_conf is not None: if timestamp is None: timestamp = datetime.datetime.utcnow() self.ext_conf = WireProtocolData.replace_xml_attribute_value(self.ext_conf, "InVMGoalStateMetaData", "createdOnTicks", timeutil.datetime_to_ticks(timestamp)) def set_container_id(self, container_id): self.goal_state = WireProtocolData.replace_xml_element_value(self.goal_state, "ContainerId", container_id) def set_role_config_name(self, role_config_name): self.goal_state = WireProtocolData.replace_xml_element_value(self.goal_state, "ConfigName", role_config_name) def set_hosting_env_deployment_name(self, deployment_name): self.hosting_env = WireProtocolData.replace_xml_attribute_value(self.hosting_env, "Deployment", "name", deployment_name) def set_shared_config_deployment_name(self, deployment_name): self.shared_config = WireProtocolData.replace_xml_attribute_value(self.shared_config, "Deployment", "name", deployment_name) def set_extensions_config_sequence_number(self, sequence_number): ''' Sets the sequence number for *all* extensions ''' self.ext_conf = WireProtocolData.replace_xml_attribute_value(self.ext_conf, "RuntimeSettings", "seqNo", str(sequence_number)) def set_extensions_config_version(self, version): ''' Sets the version for *all* extensions ''' self.ext_conf = WireProtocolData.replace_xml_attribute_value(self.ext_conf, "Plugin", "version", version) def set_extensions_config_state(self, state): ''' Sets the state for *all* extensions ''' self.ext_conf = WireProtocolData.replace_xml_attribute_value(self.ext_conf, "Plugin", "state", state) def set_manifest_version(self, version): ''' Sets the version of the extension manifest ''' self.manifest = WireProtocolData.replace_xml_element_value(self.manifest, "Version", version) def set_extension_config(self, ext_conf_file): self.ext_conf = load_data(ext_conf_file) def set_ga_manifest(self, ga_manifest): self.ga_manifest = load_data(ga_manifest) def set_version_in_agent_family(self, version): self.ext_conf = WireProtocolData.replace_xml_element_value(self.ext_conf, "Version", version) def set_extension_config_is_vm_enabled_for_rsm_upgrades(self, is_vm_enabled_for_rsm_upgrades): self.ext_conf = WireProtocolData.replace_xml_element_value(self.ext_conf, "IsVMEnabledForRSMUpgrades", is_vm_enabled_for_rsm_upgrades) def set_version_in_ga_manifest(self, version): self.ga_manifest = WireProtocolData.replace_xml_element_value(self.ga_manifest, "Version", version) 
Azure-WALinuxAgent-2b21de5/tests/pa/000077500000000000000000000000001462617747000172725ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests/pa/__init__.py000066400000000000000000000011651462617747000214060ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # Azure-WALinuxAgent-2b21de5/tests/pa/test_deprovision.py000066400000000000000000000127421462617747000232520ustar00rootroot00000000000000# Copyright 2016 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os import tempfile import unittest import azurelinuxagent.common.utils.fileutil as fileutil from azurelinuxagent.pa.deprovision import get_deprovision_handler from azurelinuxagent.pa.deprovision.default import DeprovisionHandler from tests.lib.tools import AgentTestCase, distros, Mock, patch class TestDeprovision(AgentTestCase): @patch('signal.signal') @patch('azurelinuxagent.common.osutil.get_osutil') @patch('azurelinuxagent.common.protocol.util.get_protocol_util') @patch('azurelinuxagent.pa.deprovision.default.read_input') def test_confirmation(self, mock_read, mock_protocol, mock_util, mock_signal): # pylint: disable=unused-argument dh = DeprovisionHandler() dh.setup = Mock() dh.setup.return_value = ([], []) dh.do_actions = Mock() # Do actions if confirmed mock_read.return_value = "y" dh.run() self.assertEqual(1, dh.do_actions.call_count) # Skip actions if not confirmed mock_read.return_value = "n" dh.run() self.assertEqual(1, dh.do_actions.call_count) # Do actions if forced mock_read.return_value = "n" dh.run(force=True) self.assertEqual(2, dh.do_actions.call_count) @distros("ubuntu") @patch('azurelinuxagent.common.conf.get_lib_dir') def test_del_lib_dir_files(self, distro_name, distro_version, distro_full_name, mock_conf): dirs = [ 'WALinuxAgent-2.2.26/config', 'Microsoft.Azure.Extensions.CustomScript-2.0.6/config', 'Microsoft.Azure.Extensions.CustomScript-2.0.6/status' ] files = [ 'HostingEnvironmentConfig.xml', 'Incarnation', 'Protocol', 'SharedConfig.xml', 'WireServerEndpoint', 'Extensions.1.xml', 'ExtensionsConfig.1.xml', 'GoalState.1.xml', 'Extensions.2.xml', 'ExtensionsConfig.2.xml', 'GoalState.2.xml', 'Microsoft.Azure.Extensions.CustomScript-2.0.6/config/42.settings', 'Microsoft.Azure.Extensions.CustomScript-2.0.6/config/HandlerStatus', 'Microsoft.Azure.Extensions.CustomScript-2.0.6/config/HandlerState', 'Microsoft.Azure.Extensions.CustomScript-2.0.6/status/12.notstatus', 
'Microsoft.Azure.Extensions.CustomScript-2.0.6/mrseq', 'WALinuxAgent-2.2.26/config/0.settings' ] tmp = tempfile.mkdtemp() mock_conf.return_value = tmp for d in dirs: fileutil.mkdir(os.path.join(tmp, d)) for f in files: fileutil.write_file(os.path.join(tmp, f), "Value") deprovision_handler = get_deprovision_handler(distro_name, distro_version, distro_full_name) warnings = [] actions = [] deprovision_handler.del_lib_dir_files(warnings, actions) deprovision_handler.del_ext_handler_files(warnings, actions) self.assertTrue(len(warnings) == 0) self.assertTrue(len(actions) == 2) self.assertEqual(fileutil.rm_files, actions[0].func) self.assertEqual(fileutil.rm_files, actions[1].func) self.assertEqual(11, len(actions[0].args)) self.assertEqual(3, len(actions[1].args)) for f in actions[0].args: self.assertTrue(os.path.basename(f) in files) for f in actions[1].args: self.assertTrue(f[len(tmp)+1:] in files) @distros("redhat") def test_deprovision(self, distro_name, distro_version, distro_full_name): deprovision_handler = get_deprovision_handler(distro_name, distro_version, distro_full_name) warnings, actions = deprovision_handler.setup(deluser=False) # pylint: disable=unused-variable assert any("/etc/resolv.conf" in w for w in warnings) @distros("ubuntu") def test_deprovision_ubuntu(self, distro_name, distro_version, distro_full_name): deprovision_handler = get_deprovision_handler(distro_name, distro_version, distro_full_name) with patch("os.path.realpath", return_value="/run/resolvconf/resolv.conf"): warnings, actions = deprovision_handler.setup(deluser=False) # pylint: disable=unused-variable assert any("/etc/resolvconf/resolv.conf.d/tail" in w for w in warnings) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/pa/test_provision.py000066400000000000000000000415421462617747000227410ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Requires Python 2.6+ and Openssl 1.0+ # import os import re import tempfile import unittest import azurelinuxagent.common.conf as conf from azurelinuxagent.common.exception import ProvisionError from azurelinuxagent.common.osutil.default import DefaultOSUtil from azurelinuxagent.common.protocol.util import OVF_FILE_NAME from azurelinuxagent.pa.provision import get_provision_handler from azurelinuxagent.pa.provision.cloudinit import CloudInitProvisionHandler from azurelinuxagent.pa.provision.default import ProvisionHandler from azurelinuxagent.common.utils import fileutil from tests.lib.tools import AgentTestCase, distros, load_data, MagicMock, Mock, patch class TestProvision(AgentTestCase): @distros("redhat") @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.get_instance_id', return_value='B9F3C233-9913-9F42-8EB3-BA656DF32502') def test_provision(self, mock_util, distro_name, distro_version, distro_full_name): # pylint: disable=unused-argument provision_handler = get_provision_handler(distro_name, distro_version, distro_full_name) mock_osutil = MagicMock() mock_osutil.decode_customdata = Mock(return_value="") provision_handler.osutil = mock_osutil provision_handler.protocol_util.osutil = mock_osutil provision_handler.protocol_util.get_protocol = MagicMock() conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir) ovfenv_file = os.path.join(self.tmp_dir, OVF_FILE_NAME) ovfenv_data = load_data("ovf-env.xml") fileutil.write_file(ovfenv_file, ovfenv_data) provision_handler.run() def test_customdata(self): base64data = 'Q3VzdG9tRGF0YQ==' data = DefaultOSUtil().decode_customdata(base64data) fileutil.write_file(tempfile.mktemp(), data) @patch('azurelinuxagent.common.conf.get_provision_enabled', return_value=False) def test_provisioning_is_skipped_when_not_enabled(self, mock_conf): # pylint: disable=unused-argument ph = ProvisionHandler() ph.osutil = DefaultOSUtil() ph.osutil.get_instance_id = Mock( return_value='B9F3C233-9913-9F42-8EB3-BA656DF32502') ph.check_provisioned_file = Mock() ph.report_ready = Mock() ph.write_provisioned = Mock() ph.run() self.assertEqual(0, ph.check_provisioned_file.call_count) self.assertEqual(1, ph.report_ready.call_count) self.assertEqual(1, ph.write_provisioned.call_count) @patch('os.path.isfile', return_value=False) def test_check_provisioned_file_not_provisioned(self, mock_isfile): # pylint: disable=unused-argument ph = ProvisionHandler() self.assertFalse(ph.check_provisioned_file()) @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="B9F3C233-9913-9F42-8EB3-BA656DF32502") @patch('azurelinuxagent.pa.deprovision.get_deprovision_handler') def test_check_provisioned_file_is_provisioned(self, mock_deprovision, mock_read, mock_isfile): # pylint: disable=unused-argument ph = ProvisionHandler() ph.osutil = Mock() ph.osutil.is_current_instance_id = Mock(return_value=True) ph.write_provisioned = Mock() deprovision_handler = Mock() mock_deprovision.return_value = deprovision_handler self.assertTrue(ph.check_provisioned_file()) self.assertEqual(1, ph.osutil.is_current_instance_id.call_count) self.assertEqual(0, deprovision_handler.run_changed_unique_id.call_count) @patch('os.path.isfile', return_value=True) @patch('azurelinuxagent.common.utils.fileutil.read_file', return_value="B9F3C233-9913-9F42-8EB3-BA656DF32502") @patch('azurelinuxagent.pa.deprovision.get_deprovision_handler') def test_check_provisioned_file_not_deprovisioned(self, mock_deprovision, mock_read, mock_isfile): # pylint: 
disable=unused-argument ph = ProvisionHandler() ph.osutil = Mock() ph.osutil.is_current_instance_id = Mock(return_value=False) ph.report_ready = Mock() ph.write_provisioned = Mock() deprovision_handler = Mock() mock_deprovision.return_value = deprovision_handler self.assertTrue(ph.check_provisioned_file()) self.assertEqual(1, ph.osutil.is_current_instance_id.call_count) self.assertEqual(1, deprovision_handler.run_changed_unique_id.call_count) @distros() @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent') def test_provision_telemetry_pga_false(self, distro_name, distro_version, distro_full_name, _): """ ProvisionGuestAgent flag is 'false' """ self._provision_test(distro_name, # pylint: disable=no-value-for-parameter distro_version, distro_full_name, OVF_FILE_NAME, 'false', True) @distros() @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent') def test_provision_telemetry_pga_true(self, distro_name, distro_version, distro_full_name, _): """ ProvisionGuestAgent flag is 'true' """ self._provision_test(distro_name, # pylint: disable=no-value-for-parameter distro_version, distro_full_name, 'ovf-env-2.xml', 'true', True) @distros() @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent') def test_provision_telemetry_pga_empty(self, distro_name, distro_version, distro_full_name, _): """ ProvisionGuestAgent flag is '' """ self._provision_test(distro_name, # pylint: disable=no-value-for-parameter distro_version, distro_full_name, 'ovf-env-3.xml', 'true', False) @distros() @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent') def test_provision_telemetry_pga_bad(self, distro_name, distro_version, distro_full_name, _): """ ProvisionGuestAgent flag is 'bad data' """ self._provision_test(distro_name, # pylint: disable=no-value-for-parameter distro_version, distro_full_name, 'ovf-env-4.xml', 'bad data', True) @patch('azurelinuxagent.common.osutil.default.DefaultOSUtil.get_instance_id', return_value='B9F3C233-9913-9F42-8EB3-BA656DF32502') @patch('azurelinuxagent.pa.provision.default.ProvisionHandler.write_agent_disabled') @patch('azurelinuxagent.pa.provision.default.cloud_init_is_enabled', return_value=False) def _provision_test(self, distro_name, distro_version, distro_full_name, ovf_file, provisionMessage, expect_success, # pylint:disable=unused-argument patch_cloud_init_is_enabled, patch_write_agent_disabled, patch_get_instance_id): """ Assert that the agent issues two telemetry messages as part of a successful provisioning. 1. Provision 2. 
GuestState """ ph = get_provision_handler(distro_name, distro_version, distro_full_name) ph.report_event = MagicMock() ph.reg_ssh_host_key = MagicMock(return_value='--thumprint--') mock_osutil = MagicMock() mock_osutil.decode_customdata = Mock(return_value="") ph.osutil = mock_osutil ph.protocol_util.osutil = mock_osutil ph.protocol_util.get_protocol = MagicMock() conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir) ovfenv_file = os.path.join(self.tmp_dir, OVF_FILE_NAME) ovfenv_data = load_data(ovf_file) fileutil.write_file(ovfenv_file, ovfenv_data) ph.run() if expect_success: self.assertEqual(2, ph.report_event.call_count) positional_args, kw_args = ph.report_event.call_args_list[0] # [call('Provisioning succeeded (146473.68s)', duration=65, is_success=True)] self.assertTrue(re.match(r'Provisioning succeeded \(\d+\.\d+s\)', positional_args[0]) is not None) self.assertTrue(isinstance(kw_args['duration'], int)) self.assertTrue(kw_args['is_success']) positional_args, kw_args = ph.report_event.call_args_list[1] self.assertTrue(kw_args['operation'] == 'ProvisionGuestAgent') self.assertTrue(kw_args['message'] == provisionMessage) self.assertTrue(kw_args['is_success']) expected_disabled = True if provisionMessage == 'false' else False self.assertTrue(patch_write_agent_disabled.call_count == expected_disabled) else: self.assertEqual(1, ph.report_event.call_count) positional_args, kw_args = ph.report_event.call_args_list[0] # [call(u'[ProtocolError] Failed to validate OVF: ProvisionGuestAgent not found')] self.assertTrue('Failed to validate OVF: ProvisionGuestAgent not found' in positional_args[0]) self.assertFalse(kw_args['is_success']) @distros() @patch( 'azurelinuxagent.common.osutil.default.DefaultOSUtil.get_instance_id', return_value='B9F3C233-9913-9F42-8EB3-BA656DF32502') @patch('azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent') @patch('azurelinuxagent.pa.provision.default.cloud_init_is_enabled', return_value=False) def test_provision_telemetry_fail(self, distro_name, distro_version, distro_full_name, # pylint:disable=unused-argument patch_cloud_init_is_enabled, patch_get_provisioning_agent, mock_util): """ Assert that the agent issues one telemetry message as part of a failed provisioning. 1. 
Provision """ ph = get_provision_handler(distro_name, distro_version, distro_full_name) ph.report_event = MagicMock() ph.reg_ssh_host_key = MagicMock(side_effect=ProvisionError( "--unit-test--")) mock_osutil = MagicMock() mock_osutil.decode_customdata = Mock(return_value="") ph.osutil = mock_osutil ph.protocol_util.osutil = mock_osutil ph.protocol_util.get_protocol = MagicMock() conf.get_dvd_mount_point = Mock(return_value=self.tmp_dir) ovfenv_file = os.path.join(self.tmp_dir, OVF_FILE_NAME) ovfenv_data = load_data("ovf-env.xml") fileutil.write_file(ovfenv_file, ovfenv_data) ph.run() positional_args, kw_args = ph.report_event.call_args_list[0] # pylint: disable=unused-variable self.assertTrue(re.match(r'Provisioning failed: \[ProvisionError\] --unit-test-- \(\d+\.\d+s\)', positional_args[0]) is not None) @patch('azurelinuxagent.pa.provision.default.ProvisionHandler.write_agent_disabled') @distros() def test_handle_provision_guest_agent(self, patch_write_agent_disabled, distro_name, distro_version, distro_full_name): ph = get_provision_handler(distro_name, distro_version, distro_full_name) patch_write_agent_disabled.call_count = 0 ph.handle_provision_guest_agent(provision_guest_agent='false') self.assertEqual(1, patch_write_agent_disabled.call_count) ph.handle_provision_guest_agent(provision_guest_agent='False') self.assertEqual(2, patch_write_agent_disabled.call_count) ph.handle_provision_guest_agent(provision_guest_agent='FALSE') self.assertEqual(3, patch_write_agent_disabled.call_count) ph.handle_provision_guest_agent(provision_guest_agent='') self.assertEqual(3, patch_write_agent_disabled.call_count) ph.handle_provision_guest_agent(provision_guest_agent=' ') self.assertEqual(3, patch_write_agent_disabled.call_count) ph.handle_provision_guest_agent(provision_guest_agent=None) self.assertEqual(3, patch_write_agent_disabled.call_count) ph.handle_provision_guest_agent(provision_guest_agent='true') self.assertEqual(3, patch_write_agent_disabled.call_count) ph.handle_provision_guest_agent(provision_guest_agent='True') self.assertEqual(3, patch_write_agent_disabled.call_count) ph.handle_provision_guest_agent(provision_guest_agent='TRUE') self.assertEqual(3, patch_write_agent_disabled.call_count) @patch( 'azurelinuxagent.common.conf.get_provisioning_agent', return_value='auto' ) @patch( 'azurelinuxagent.pa.provision.factory.cloud_init_is_enabled', return_value=False ) def test_get_provision_handler_config_auto_no_cloudinit( self, patch_cloud_init_is_enabled, # pylint: disable=unused-argument patch_get_provisioning_agent): # pylint: disable=unused-argument provisioning_handler = get_provision_handler() self.assertIsInstance(provisioning_handler, ProvisionHandler, 'Auto provisioning handler should be waagent if cloud-init is not enabled') @patch( 'azurelinuxagent.common.conf.get_provisioning_agent', return_value='waagent' ) @patch( 'azurelinuxagent.pa.provision.factory.cloud_init_is_enabled', return_value=True ) def test_get_provision_handler_config_waagent( self, patch_cloud_init_is_enabled, # pylint: disable=unused-argument patch_get_provisioning_agent): # pylint: disable=unused-argument provisioning_handler = get_provision_handler() self.assertIsInstance(provisioning_handler, ProvisionHandler, 'Provisioning handler should be waagent if agent is set to waagent') @patch( 'azurelinuxagent.common.conf.get_provisioning_agent', return_value='auto' ) @patch( 'azurelinuxagent.pa.provision.factory.cloud_init_is_enabled', return_value=True ) def test_get_provision_handler_config_auto_cloudinit( self, 
patch_cloud_init_is_enabled, # pylint: disable=unused-argument patch_get_provisioning_agent): # pylint: disable=unused-argument provisioning_handler = get_provision_handler() self.assertIsInstance(provisioning_handler, CloudInitProvisionHandler, 'Auto provisioning handler should be cloud-init if cloud-init is enabled') @patch( 'azurelinuxagent.common.conf.get_provisioning_agent', return_value='cloud-init' ) def test_get_provision_handler_config_cloudinit( self, patch_get_provisioning_agent): # pylint: disable=unused-argument provisioning_handler = get_provision_handler() self.assertIsInstance(provisioning_handler, CloudInitProvisionHandler, 'Provisioning handler should be cloud-init if agent is set to cloud-init') if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests/test_agent.py000066400000000000000000000330201462617747000213770ustar00rootroot00000000000000# Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Requires Python 2.6+ and Openssl 1.0+ # import os.path from azurelinuxagent.agent import parse_args, Agent, usage, AgentCommands from azurelinuxagent.common import conf from azurelinuxagent.ga import logcollector, cgroupconfigurator from azurelinuxagent.ga.cgroupapi import SystemdCgroupsApi from azurelinuxagent.common.utils import fileutil from azurelinuxagent.ga.collect_logs import CollectLogsHandler from tests.lib.tools import AgentTestCase, data_dir, Mock, patch EXPECTED_CONFIGURATION = \ """AutoUpdate.Enabled = True AutoUpdate.GAFamily = Prod AutoUpdate.UpdateToLatestVersion = True Autoupdate.Frequency = 3600 DVD.MountPoint = /mnt/cdrom/secure Debug.AgentCpuQuota = 50 Debug.AgentCpuThrottledTimeThreshold = 120 Debug.AgentMemoryQuota = 31457280 Debug.AutoUpdateHotfixFrequency = 14400 Debug.AutoUpdateNormalFrequency = 86400 Debug.CgroupCheckPeriod = 300 Debug.CgroupDisableOnProcessCheckFailure = True Debug.CgroupDisableOnQuotaCheckFailure = True Debug.CgroupLogMetrics = False Debug.CgroupMonitorExpiryTime = 2022-03-31 Debug.CgroupMonitorExtensionName = Microsoft.Azure.Monitor.AzureMonitorLinuxAgent Debug.EnableAgentMemoryUsageCheck = False Debug.EnableFastTrack = True Debug.EnableGAVersioning = True Debug.EtpCollectionPeriod = 300 Debug.FirewallRulesLogPeriod = 86400 DetectScvmmEnv = False EnableOverProvisioning = True Extension.LogDir = /var/log/azure Extensions.Enabled = True Extensions.GoalStatePeriod = 6 Extensions.InitialGoalStatePeriod = 6 Extensions.WaitForCloudInit = False Extensions.WaitForCloudInitTimeout = 3600 HttpProxy.Host = None HttpProxy.Port = None Lib.Dir = /var/lib/waagent Logs.Collect = True Logs.CollectPeriod = 3600 Logs.Console = True Logs.Verbose = False OS.AllowHTTP = False OS.CheckRdmaDriver = False OS.EnableFIPS = True OS.EnableFirewall = False OS.EnableFirewallPeriod = 300 OS.EnableRDMA = False OS.HomeDir = /home OS.MonitorDhcpClientRestartPeriod = 30 OS.OpensslPath = /usr/bin/openssl OS.PasswordPath = /etc/shadow OS.RemovePersistentNetRulesPeriod = 30 OS.RootDeviceScsiTimeout = 300 
OS.RootDeviceScsiTimeoutPeriod = 30 OS.SshClientAliveInterval = 42 OS.SshDir = /notareal/path OS.SudoersDir = /etc/sudoers.d OS.UpdateRdmaDriver = False Pid.File = /var/run/waagent.pid Provisioning.Agent = auto Provisioning.AllowResetSysUser = False Provisioning.DecodeCustomData = False Provisioning.DeleteRootPassword = True Provisioning.ExecuteCustomData = False Provisioning.MonitorHostName = True Provisioning.MonitorHostNamePeriod = 30 Provisioning.PasswordCryptId = 6 Provisioning.PasswordCryptSaltLength = 10 Provisioning.RegenerateSshHostKeyPair = True Provisioning.SshHostKeyPairType = rsa ResourceDisk.EnableSwap = False ResourceDisk.EnableSwapEncryption = False ResourceDisk.Filesystem = ext4 ResourceDisk.Format = True ResourceDisk.MountOptions = None ResourceDisk.MountPoint = /mnt/resource ResourceDisk.SwapSizeMB = 0""".split('\n') class TestAgent(AgentTestCase): def test_accepts_configuration_path(self): conf_path = os.path.join(data_dir, "test_waagent.conf") c, f, v, d, cfp, lcm, _ = parse_args(["-configuration-path:" + conf_path]) # pylint: disable=unused-variable self.assertEqual(cfp, conf_path) @patch("os.path.exists", return_value=True) def test_checks_configuration_path(self, mock_exists): conf_path = "/foo/bar-baz/something.conf" c, f, v, d, cfp, lcm, _ = parse_args(["-configuration-path:"+conf_path]) # pylint: disable=unused-variable self.assertEqual(cfp, conf_path) self.assertEqual(mock_exists.call_count, 1) @patch("sys.stderr") @patch("os.path.exists", return_value=False) @patch("sys.exit", side_effect=Exception) def test_rejects_missing_configuration_path(self, mock_exit, mock_exists, mock_stderr): # pylint: disable=unused-argument try: c, f, v, d, cfp, lcm, _ = parse_args(["-configuration-path:/foo/bar.conf"]) # pylint: disable=unused-variable except Exception: self.assertEqual(mock_exit.call_count, 1) def test_configuration_path_defaults_to_none(self): c, f, v, d, cfp, lcm, _ = parse_args([]) # pylint: disable=unused-variable self.assertEqual(cfp, None) def test_agent_accepts_configuration_path(self): Agent(False, conf_file_path=os.path.join(data_dir, "test_waagent.conf")) self.assertTrue(conf.get_fips_enabled()) @patch("azurelinuxagent.common.conf.load_conf_from_file") def test_agent_uses_default_configuration_path(self, mock_load): Agent(False) mock_load.assert_called_once_with("/etc/waagent.conf") @patch("azurelinuxagent.daemon.get_daemon_handler") @patch("azurelinuxagent.common.conf.load_conf_from_file") def test_agent_does_not_pass_configuration_path(self, mock_load, mock_handler): mock_daemon = Mock() mock_daemon.run = Mock() mock_handler.return_value = mock_daemon agent = Agent(False) agent.daemon() mock_daemon.run.assert_called_once_with(child_args=None) self.assertEqual(1, mock_load.call_count) @patch("azurelinuxagent.daemon.get_daemon_handler") @patch("azurelinuxagent.common.conf.load_conf_from_file") def test_agent_passes_configuration_path(self, mock_load, mock_handler): mock_daemon = Mock() mock_daemon.run = Mock() mock_handler.return_value = mock_daemon agent = Agent(False, conf_file_path="/foo/bar.conf") agent.daemon() mock_daemon.run.assert_called_once_with(child_args="-configuration-path:/foo/bar.conf") self.assertEqual(1, mock_load.call_count) @patch("azurelinuxagent.common.conf.get_ext_log_dir") def test_agent_ensures_extension_log_directory(self, mock_dir): ext_log_dir = os.path.join(self.tmp_dir, "FauxLogDir") mock_dir.return_value = ext_log_dir self.assertFalse(os.path.isdir(ext_log_dir)) agent = Agent(False, # pylint: disable=unused-variable 
conf_file_path=os.path.join(data_dir, "test_waagent.conf")) self.assertTrue(os.path.isdir(ext_log_dir)) @patch("azurelinuxagent.common.logger.error") @patch("azurelinuxagent.common.conf.get_ext_log_dir") def test_agent_logs_if_extension_log_directory_is_a_file(self, mock_dir, mock_log): ext_log_dir = os.path.join(self.tmp_dir, "FauxLogDir") mock_dir.return_value = ext_log_dir fileutil.write_file(ext_log_dir, "Foo") self.assertTrue(os.path.isfile(ext_log_dir)) self.assertFalse(os.path.isdir(ext_log_dir)) agent = Agent(False, # pylint: disable=unused-variable conf_file_path=os.path.join(data_dir, "test_waagent.conf")) self.assertTrue(os.path.isfile(ext_log_dir)) self.assertFalse(os.path.isdir(ext_log_dir)) self.assertEqual(1, mock_log.call_count) def test_agent_get_configuration(self): Agent(False, conf_file_path=os.path.join(data_dir, "test_waagent.conf")) actual_configuration = [] configuration = conf.get_configuration() for k in sorted(configuration.keys()): actual_configuration.append("{0} = {1}".format(k, configuration[k])) self.assertListEqual(EXPECTED_CONFIGURATION, actual_configuration) def test_checks_log_collector_mode(self): # Specify full mode c, f, v, d, cfp, lcm, _ = parse_args(["-collect-logs", "-full"]) # pylint: disable=unused-variable self.assertEqual(c, "collect-logs") self.assertEqual(lcm, True) # Defaults to None if mode not specified c, f, v, d, cfp, lcm, _ = parse_args(["-collect-logs"]) # pylint: disable=unused-variable self.assertEqual(c, "collect-logs") self.assertEqual(lcm, False) @patch("sys.stderr") @patch("sys.exit", side_effect=Exception) def test_rejects_invalid_log_collector_mode(self, mock_exit, mock_stderr): # pylint: disable=unused-argument try: c, f, v, d, cfp, lcm, _ = parse_args(["-collect-logs", "-notvalid"]) # pylint: disable=unused-variable except Exception: self.assertEqual(mock_exit.call_count, 1) @patch("os.path.exists", return_value=True) @patch("azurelinuxagent.agent.LogCollector") def test_calls_collect_logs_with_proper_mode(self, mock_log_collector, *args): # pylint: disable=unused-argument agent = Agent(False, conf_file_path=os.path.join(data_dir, "test_waagent.conf")) mock_log_collector.run = Mock() agent.collect_logs(is_full_mode=True) full_mode = mock_log_collector.call_args_list[0][0][0] self.assertTrue(full_mode) agent.collect_logs(is_full_mode=False) full_mode = mock_log_collector.call_args_list[1][0][0] self.assertFalse(full_mode) @patch("azurelinuxagent.agent.LogCollector") def test_calls_collect_logs_on_valid_cgroups(self, mock_log_collector): try: CollectLogsHandler.enable_monitor_cgroups_check() mock_log_collector.run = Mock() def mock_cgroup_paths(*args, **kwargs): if args and args[0] == "self": relative_path = "{0}/{1}".format(cgroupconfigurator.LOGCOLLECTOR_SLICE, logcollector.CGROUPS_UNIT) return (cgroupconfigurator.LOGCOLLECTOR_SLICE, relative_path) return SystemdCgroupsApi.get_process_cgroup_relative_paths(*args, **kwargs) with patch("azurelinuxagent.agent.SystemdCgroupsApi.get_process_cgroup_paths", side_effect=mock_cgroup_paths): agent = Agent(False, conf_file_path=os.path.join(data_dir, "test_waagent.conf")) agent.collect_logs(is_full_mode=True) mock_log_collector.assert_called_once() finally: CollectLogsHandler.disable_monitor_cgroups_check() @patch("azurelinuxagent.agent.LogCollector") def test_doesnt_call_collect_logs_on_invalid_cgroups(self, mock_log_collector): try: CollectLogsHandler.enable_monitor_cgroups_check() mock_log_collector.run = Mock() def mock_cgroup_paths(*args, **kwargs): if args and args[0] == "self": 
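                    # Return a deliberately bogus cgroup path for the agent's own process; anything that does not match the expected log collector slice should make the agent refuse to collect logs.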
return ("NOT_THE_CORRECT_PATH", "NOT_THE_CORRECT_PATH") return SystemdCgroupsApi.get_process_cgroup_relative_paths(*args, **kwargs) with patch("azurelinuxagent.agent.SystemdCgroupsApi.get_process_cgroup_paths", side_effect=mock_cgroup_paths): agent = Agent(False, conf_file_path=os.path.join(data_dir, "test_waagent.conf")) exit_error = RuntimeError("Exiting") with patch("sys.exit", return_value=exit_error) as mock_exit: try: agent.collect_logs(is_full_mode=True) except RuntimeError as re: mock_exit.assert_called_once_with(logcollector.INVALID_CGROUPS_ERRCODE) self.assertEqual(exit_error, re) finally: CollectLogsHandler.disable_monitor_cgroups_check() def test_it_should_parse_setup_firewall_properly(self): test_firewall_meta = { "dst_ip": "1.2.3.4", "uid": "9999", "wait": "-w" } cmd, _, _, _, _, _, firewall_metadata = parse_args( ["-{0}".format(AgentCommands.SetupFirewall), "-dst_ip=1.2.3.4", "-uid=9999", "-w"]) self.assertEqual(cmd, AgentCommands.SetupFirewall) self.assertEqual(firewall_metadata, test_firewall_meta) # Defaults to None if command is different test_firewall_meta = { "dst_ip": None, "uid": None, "wait": "" } cmd, _, _, _, _, _, firewall_metadata = parse_args(["-{0}".format(AgentCommands.Help)]) self.assertEqual(cmd, AgentCommands.Help) self.assertEqual(test_firewall_meta, firewall_metadata) def test_it_should_ignore_empty_arguments(self): test_firewall_meta = { "dst_ip": "1.2.3.4", "uid": "9999", "wait": "" } cmd, _, _, _, _, _, firewall_metadata = parse_args( ["-{0}".format(AgentCommands.SetupFirewall), "-dst_ip=1.2.3.4", "-uid=9999", ""]) self.assertEqual(cmd, AgentCommands.SetupFirewall) self.assertEqual(firewall_metadata, test_firewall_meta) def test_agent_usage_message(self): message = usage() # Python 2.6 does not have assertIn() self.assertTrue("-verbose" in message) self.assertTrue("-force" in message) self.assertTrue("-help" in message) self.assertTrue("-configuration-path" in message) self.assertTrue("-deprovision" in message) self.assertTrue("-register-service" in message) self.assertTrue("-version" in message) self.assertTrue("-daemon" in message) self.assertTrue("-start" in message) self.assertTrue("-run-exthandlers" in message) self.assertTrue("-show-configuration" in message) self.assertTrue("-collect-logs" in message) # sanity check self.assertFalse("-not-a-valid-option" in message) Azure-WALinuxAgent-2b21de5/tests_e2e/000077500000000000000000000000001462617747000174255ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/000077500000000000000000000000001462617747000246615ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/GuestAgentDcrTest.py000066400000000000000000000065111462617747000305750ustar00rootroot00000000000000#!/usr/bin/env python # pylint: disable=all from __future__ import print_function from Utils.WAAgentUtil import waagent import Utils.HandlerUtil as Util import sys import re import traceback import os import datetime ExtensionShortName = "GADcrTestExt" OperationFileName = "operations-{0}.log" def install(): operation = "install" status = "success" msg = "Installed successfully" hutil = parse_context(operation) hutil.log("Start to install.") hutil.log(msg) hutil.do_exit(0, operation, status, '0', msg) def enable(): # Global Variables definition operation = "enable" status = "success" msg = "Enabled successfully." 
# Operations.append(operation) hutil = parse_context(operation) hutil.log("Start to enable.") public_settings = hutil.get_public_settings() name = public_settings.get("name") if name: name = "Name: {0}".format(name) hutil.log(name) msg = "{0} {1}".format(msg, name) print(name) else: hutil.error("The name in public settings is not provided.") # msg = msg % ','.join(Operations) hutil.log(msg) hutil.do_exit(0, operation, status, '0', msg) def disable(): operation = "disable" status = "success" msg = "Disabled successfully." # Operations.append(operation) hutil = parse_context(operation) hutil.log("Start to disable.") # msg % ','.join(Operations) hutil.log(msg) hutil.do_exit(0, operation, status, '0', msg) def uninstall(): operation = "uninstall" status = "success" msg = "Uninstalled successfully." # Operations.append(operation) hutil = parse_context(operation) hutil.log("Start to uninstall.") # msg % ','.join(Operations) hutil.log(msg) hutil.do_exit(0, operation, status, '0', msg) def update(): operation = "update" status = "success" msg = "Updated successfully." # Operations.append(operation) hutil = parse_context(operation) hutil.log("Start to update.") # msg % ','.join(Operations) hutil.log(msg) hutil.do_exit(0, operation, status, '0', msg) def parse_context(operation): hutil = Util.HandlerUtility(waagent.Log, waagent.Error) hutil.do_parse_context(operation) op_log = os.path.join(hutil.get_log_dir(), OperationFileName.format(hutil.get_extension_version())) with open(op_log, 'a+') as oplog_handler: oplog_handler.write("Date:{0}; Operation:{1}; SeqNo:{2}\n" .format(datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"), operation, hutil.get_seq_no())) return hutil def main(): waagent.LoggerInit('/var/log/waagent.log', '/dev/stdout') waagent.Log("%s started to handle." % (ExtensionShortName)) try: for a in sys.argv[1:]: if re.match("^([-/]*)(disable)", a): disable() elif re.match("^([-/]*)(uninstall)", a): uninstall() elif re.match("^([-/]*)(install)", a): install() elif re.match("^([-/]*)(enable)", a): enable() elif re.match("^([-/]*)(update)", a): update() except Exception as e: err_msg = "Failed with error: {0}, {1}".format(e, traceback.format_exc()) waagent.Error(err_msg) if __name__ == '__main__': main() Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/HandlerManifest.json000066400000000000000000000007451462617747000306260ustar00rootroot00000000000000[{ "name": "GuestAgentDcrTestExtension", "version": 1.0, "handlerManifest": { "installCommand": "./GuestAgentDcrTest.py --install", "uninstallCommand": "./GuestAgentDcrTest.py --uninstall", "updateCommand": "./GuestAgentDcrTest.py --update", "enableCommand": "./GuestAgentDcrTest.py --enable", "disableCommand": "./GuestAgentDcrTest.py --disable", "updateMode": "UpdateWithoutInstall", "rebootAfterInstall": false, "reportHeartbeat": false } }] Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Makefile000066400000000000000000000003721462617747000263230ustar00rootroot00000000000000default: build build: $(eval NAME = $(shell grep -Pom1 "(?<=)[^<]+" manifest.xml)) $(eval VERSION = $(shell grep -Pom1 "(?<=)[^<]+" manifest.xml)) @echo "Building '$(NAME)-$(VERSION).zip' ..." 
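# Package all extension files into the versioned zip (-r = recurse into directories, -9 = maximum compression; requires the 'zip' utility).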
zip -r9 $(NAME)-$(VERSION).zip * Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/000077500000000000000000000000001462617747000257615ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/HandlerUtil.py000077500000000000000000000365521462617747000305640ustar00rootroot00000000000000# # Handler library for Linux IaaS # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # pylint: disable=all """ JSON def: HandlerEnvironment.json [{ "name": "ExampleHandlerLinux", "seqNo": "seqNo", "version": "1.0", "handlerEnvironment": { "logFolder": "", "configFolder": "", "statusFolder": "", "heartbeatFile": "", } }] Example ./config/1.settings "{"runtimeSettings":[{"handlerSettings":{"protectedSettingsCertThumbprint":"1BE9A13AA1321C7C515EF109746998BAB6D86FD1","protectedSettings": "MIIByAYJKoZIhvcNAQcDoIIBuTCCAbUCAQAxggFxMIIBbQIBADBVMEExPzA9BgoJkiaJk/IsZAEZFi9XaW5kb3dzIEF6dXJlIFNlcnZpY2UgTWFuYWdlbWVudCBmb3IgR+nhc6VHQTQpCiiV2zANBgkqhkiG9w0BAQEFAASCAQCKr09QKMGhwYe+O4/a8td+vpB4eTR+BQso84cV5KCAnD6iUIMcSYTrn9aveY6v6ykRLEw8GRKfri2d6tvVDggUrBqDwIgzejGTlCstcMJItWa8Je8gHZVSDfoN80AEOTws9Fp+wNXAbSuMJNb8EnpkpvigAWU2v6pGLEFvSKC0MCjDTkjpjqciGMcbe/r85RG3Zo21HLl0xNOpjDs/qqikc/ri43Y76E/Xv1vBSHEGMFprPy/Hwo3PqZCnulcbVzNnaXN3qi/kxV897xGMPPC3IrO7Nc++AT9qRLFI0841JLcLTlnoVG1okPzK9w6ttksDQmKBSHt3mfYV+skqs+EOMDsGCSqGSIb3DQEHATAUBggqhkiG9w0DBwQITgu0Nu3iFPuAGD6/QzKdtrnCI5425fIUy7LtpXJGmpWDUA==","publicSettings":{"port":"3000"}}}]}" Example HeartBeat { "version": 1.0, "heartbeat" : { "status": "ready", "code": 0, "Message": "Sample Handler running. Waiting for a new configuration from user." 
} } Example Status Report: [{"version":"1.0","timestampUTC":"2014-05-29T04:20:13Z","status":{"name":"Chef Extension Handler","operation":"chef-client-run","status":"success","code":0,"formattedMessage":{"lang":"en-US","message":"Chef-client run success"}}}] """ import os import os.path import sys import imp import base64 import json import time import re from xml.etree import ElementTree from os.path import join from Utils.WAAgentUtil import waagent from waagent import LoggerInit DateTimeFormat = "%Y-%m-%dT%H:%M:%SZ" MANIFEST_XML = "manifest.xml" class HandlerContext: def __init__(self, name): self._name = name self._version = '0.0' self._config_dir = None self._log_dir = None self._log_file = None self._status_dir = None self._heartbeat_file = None self._seq_no = -1 self._status_file = None self._settings_file = None self._config = None return class HandlerUtility: def __init__(self, log, error, s_name=None, l_name=None, extension_version=None, logFileName='extension.log', console_logger=None, file_logger=None): self._log = log self._log_to_con = console_logger self._log_to_file = file_logger self._error = error self._logFileName = logFileName if s_name is None or l_name is None or extension_version is None: (l_name, s_name, extension_version) = self._get_extension_info() self._short_name = s_name self._extension_version = extension_version self._log_prefix = '[%s-%s] ' % (l_name, extension_version) def get_extension_version(self): return self._extension_version def _get_log_prefix(self): return self._log_prefix def _get_extension_info(self): if os.path.isfile(MANIFEST_XML): return self._get_extension_info_manifest() ext_dir = os.path.basename(os.getcwd()) (long_name, version) = ext_dir.split('-') short_name = long_name.split('.')[-1] return long_name, short_name, version def _get_extension_info_manifest(self): with open(MANIFEST_XML) as fh: doc = ElementTree.parse(fh) namespace = doc.find('{http://schemas.microsoft.com/windowsazure}ProviderNameSpace').text short_name = doc.find('{http://schemas.microsoft.com/windowsazure}Type').text version = doc.find('{http://schemas.microsoft.com/windowsazure}Version').text long_name = "%s.%s" % (namespace, short_name) return (long_name, short_name, version) def _get_current_seq_no(self, config_folder): seq_no = -1 cur_seq_no = -1 freshest_time = None for subdir, dirs, files in os.walk(config_folder): for file in files: try: cur_seq_no = int(os.path.basename(file).split('.')[0]) if (freshest_time == None): freshest_time = os.path.getmtime(join(config_folder, file)) seq_no = cur_seq_no else: current_file_m_time = os.path.getmtime(join(config_folder, file)) if (current_file_m_time > freshest_time): freshest_time = current_file_m_time seq_no = cur_seq_no except ValueError: continue return seq_no def log(self, message): self._log(self._get_log_prefix() + message) def log_to_console(self, message): if self._log_to_con is not None: self._log_to_con(self._get_log_prefix() + message) else: self.error("Unable to log to console, console log method not set") def log_to_file(self, message): if self._log_to_file is not None: self._log_to_file(self._get_log_prefix() + message) else: self.error("Unable to log to file, file log method not set") def error(self, message): self._error(self._get_log_prefix() + message) @staticmethod def redact_protected_settings(content): redacted_tmp = re.sub('"protectedSettings":\s*"[^"]+=="', '"protectedSettings": "*** REDACTED ***"', content) redacted = re.sub('"protectedSettingsCertThumbprint":\s*"[^"]+"', 
'"protectedSettingsCertThumbprint": "*** REDACTED ***"', redacted_tmp) return redacted def _parse_config(self, ctxt): config = None try: config = json.loads(ctxt) except: self.error('JSON exception decoding ' + HandlerUtility.redact_protected_settings(ctxt)) if config is None: self.error("JSON error processing settings file:" + HandlerUtility.redact_protected_settings(ctxt)) else: handlerSettings = config['runtimeSettings'][0]['handlerSettings'] if 'protectedSettings' in handlerSettings and \ 'protectedSettingsCertThumbprint' in handlerSettings and \ handlerSettings['protectedSettings'] is not None and \ handlerSettings["protectedSettingsCertThumbprint"] is not None: protectedSettings = handlerSettings['protectedSettings'] thumb = handlerSettings['protectedSettingsCertThumbprint'] cert = waagent.LibDir + '/' + thumb + '.crt' pkey = waagent.LibDir + '/' + thumb + '.prv' unencodedSettings = base64.standard_b64decode(protectedSettings) openSSLcmd = "openssl smime -inform DER -decrypt -recip {0} -inkey {1}" cleartxt = waagent.RunSendStdin(openSSLcmd.format(cert, pkey), unencodedSettings)[1] if cleartxt is None: self.error("OpenSSL decode error using thumbprint " + thumb) self.do_exit(1, "Enable", 'error', '1', 'Failed to decrypt protectedSettings') jctxt = '' try: jctxt = json.loads(cleartxt) except: self.error('JSON exception decoding ' + HandlerUtility.redact_protected_settings(cleartxt)) handlerSettings['protectedSettings']=jctxt self.log('Config decoded correctly.') return config def do_parse_context(self, operation): _context = self.try_parse_context() if not _context: self.do_exit(1, operation, 'error', '1', operation + ' Failed') return _context def try_parse_context(self): self._context = HandlerContext(self._short_name) handler_env = None config = None ctxt = None code = 0 # get the HandlerEnvironment.json. 
According to the extension handler spec, it is always in the ./ directory self.log('cwd is ' + os.path.realpath(os.path.curdir)) handler_env_file = './HandlerEnvironment.json' if not os.path.isfile(handler_env_file): self.error("Unable to locate " + handler_env_file) return None ctxt = waagent.GetFileContents(handler_env_file) if ctxt == None: self.error("Unable to read " + handler_env_file) try: handler_env = json.loads(ctxt) except: pass if handler_env == None: self.log("JSON error processing " + handler_env_file) return None if type(handler_env) == list: handler_env = handler_env[0] self._context._name = handler_env['name'] self._context._version = str(handler_env['version']) self._context._config_dir = handler_env['handlerEnvironment']['configFolder'] self._context._log_dir = handler_env['handlerEnvironment']['logFolder'] self._context._log_file = os.path.join(handler_env['handlerEnvironment']['logFolder'], self._logFileName) self._change_log_file() self._context._status_dir = handler_env['handlerEnvironment']['statusFolder'] self._context._heartbeat_file = handler_env['handlerEnvironment']['heartbeatFile'] self._context._seq_no = self._get_current_seq_no(self._context._config_dir) if self._context._seq_no < 0: self.error("Unable to locate a .settings file!") return None self._context._seq_no = str(self._context._seq_no) self.log('sequence number is ' + self._context._seq_no) self._context._status_file = os.path.join(self._context._status_dir, self._context._seq_no + '.status') self._context._settings_file = os.path.join(self._context._config_dir, self._context._seq_no + '.settings') self.log("setting file path is " + self._context._settings_file) ctxt = None ctxt = waagent.GetFileContents(self._context._settings_file) if ctxt == None: error_msg = 'Unable to read ' + self._context._settings_file + '. ' self.error(error_msg) return None self.log("JSON config: " + HandlerUtility.redact_protected_settings(ctxt)) self._context._config = self._parse_config(ctxt) return self._context def _change_log_file(self): self.log("Change log file to " + self._context._log_file) LoggerInit(self._context._log_file, '/dev/stdout') self._log = waagent.Log self._error = waagent.Error def set_verbose_log(self, verbose): if (verbose == "1" or verbose == 1): self.log("Enable verbose log") LoggerInit(self._context._log_file, '/dev/stdout', verbose=True) else: self.log("Disable verbose log") LoggerInit(self._context._log_file, '/dev/stdout', verbose=False) def is_seq_smaller(self): return int(self._context._seq_no) <= self._get_most_recent_seq() def save_seq(self): self._set_most_recent_seq(self._context._seq_no) self.log("set most recent sequence number to " + self._context._seq_no) def exit_if_enabled(self, remove_protected_settings=False): self.exit_if_seq_smaller(remove_protected_settings) def exit_if_seq_smaller(self, remove_protected_settings): if(self.is_seq_smaller()): self.log("Current sequence number, " + self._context._seq_no + ", is not greater than the sequence number of the most recent executed configuration. 
Exiting...") sys.exit(0) self.save_seq() if remove_protected_settings: self.scrub_settings_file() def _get_most_recent_seq(self): if (os.path.isfile('mrseq')): seq = waagent.GetFileContents('mrseq') if (seq): return int(seq) return -1 def is_current_config_seq_greater_inused(self): return int(self._context._seq_no) > self._get_most_recent_seq() def get_inused_config_seq(self): return self._get_most_recent_seq() def set_inused_config_seq(self, seq): self._set_most_recent_seq(seq) def _set_most_recent_seq(self, seq): waagent.SetFileContents('mrseq', str(seq)) def do_status_report(self, operation, status, status_code, message): self.log("{0},{1},{2},{3}".format(operation, status, status_code, message)) tstamp = time.strftime(DateTimeFormat, time.gmtime()) stat = [{ "version": self._context._version, "timestampUTC": tstamp, "status": { "name": self._context._name, "operation": operation, "status": status, "code": status_code, "formattedMessage": { "lang": "en-US", "message": message } } }] stat_rept = json.dumps(stat) if self._context._status_file: tmp = "%s.tmp" % (self._context._status_file) with open(tmp, 'w+') as f: f.write(stat_rept) os.rename(tmp, self._context._status_file) def do_heartbeat_report(self, heartbeat_file, status, code, message): # heartbeat health_report = '[{"version":"1.0","heartbeat":{"status":"' + status + '","code":"' + code + '","Message":"' + message + '"}}]' if waagent.SetFileContents(heartbeat_file, health_report) == None: self.error('Unable to wite heartbeat info to ' + heartbeat_file) def do_exit(self, exit_code, operation, status, code, message): try: self.do_status_report(operation, status, code, message) except Exception as e: self.log("Can't update status: " + str(e)) sys.exit(exit_code) def get_name(self): return self._context._name def get_seq_no(self): return self._context._seq_no def get_log_dir(self): return self._context._log_dir def get_handler_settings(self): if (self._context._config != None): return self._context._config['runtimeSettings'][0]['handlerSettings'] return None def get_protected_settings(self): if (self._context._config != None): return self.get_handler_settings().get('protectedSettings') return None def get_public_settings(self): handlerSettings = self.get_handler_settings() if (handlerSettings != None): return self.get_handler_settings().get('publicSettings') return None def scrub_settings_file(self): content = waagent.GetFileContents(self._context._settings_file) redacted = HandlerUtility.redact_protected_settings(content) waagent.SetFileContents(self._context._settings_file, redacted)Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/LogUtil.py000077500000000000000000000031671462617747000277240ustar00rootroot00000000000000# Logging utilities # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# pylint: disable=all import os import os.path import string import sys OutputSize = 4 * 1024 def tail(log_file, output_size = OutputSize): pos = min(output_size, os.path.getsize(log_file)) with open(log_file, "r") as log: log.seek(0, os.SEEK_END) log.seek(log.tell() - pos, os.SEEK_SET) buf = log.read(output_size) buf = filter(lambda x: x in string.printable, buf) # encoding works different for between interpreter version, we are keeping separate implementation to ensure # backward compatibility if sys.version_info[0] == 3: buf = ''.join(list(buf)).encode('ascii', 'ignore').decode("ascii", "ignore") elif sys.version_info[0] == 2: buf = buf.decode("ascii", "ignore") return buf def get_formatted_log(summary, stdout, stderr): msg_format = ("{0}\n" "---stdout---\n" "{1}\n" "---errout---\n" "{2}\n") return msg_format.format(summary, stdout, stderr)Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/ScriptUtil.py000077500000000000000000000132661462617747000304500ustar00rootroot00000000000000# Script utilities # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # pylint: disable=all import os import os.path import time import subprocess import traceback import string import shlex import sys from Utils import LogUtil from Utils.WAAgentUtil import waagent DefaultStdoutFile = "stdout" DefaultErroutFile = "errout" def run_command(hutil, args, cwd, operation, extension_short_name, version, exit_after_run=True, interval=30, std_out_file_name=DefaultStdoutFile, std_err_file_name=DefaultErroutFile): std_out_file = os.path.join(cwd, std_out_file_name) err_out_file = os.path.join(cwd, std_err_file_name) std_out = None err_out = None try: std_out = open(std_out_file, "w") err_out = open(err_out_file, "w") start_time = time.time() child = subprocess.Popen(args, cwd=cwd, stdout=std_out, stderr=err_out) time.sleep(1) while child.poll() is None: msg = "Command is running..." msg_with_cmd_output = LogUtil.get_formatted_log(msg, LogUtil.tail(std_out_file), LogUtil.tail(err_out_file)) msg_without_cmd_output = msg + " Stdout/Stderr omitted from output." hutil.log_to_file(msg_with_cmd_output) hutil.log_to_console(msg_without_cmd_output) hutil.do_status_report(operation, 'transitioning', '0', msg_without_cmd_output) time.sleep(interval) exit_code = child.returncode if child.returncode and child.returncode != 0: msg = "Command returned an error." msg_with_cmd_output = LogUtil.get_formatted_log(msg, LogUtil.tail(std_out_file), LogUtil.tail(err_out_file)) msg_without_cmd_output = msg + " Stdout/Stderr omitted from output." hutil.error(msg_without_cmd_output) waagent.AddExtensionEvent(name=extension_short_name, op=operation, isSuccess=False, version=version, message="(01302)" + msg_without_cmd_output) else: msg = "Command is finished." msg_with_cmd_output = LogUtil.get_formatted_log(msg, LogUtil.tail(std_out_file), LogUtil.tail(err_out_file)) msg_without_cmd_output = msg + " Stdout/Stderr omitted from output." 
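            # The full command output is written to the log file only; the console and telemetry get the trimmed message.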
hutil.log_to_file(msg_with_cmd_output) hutil.log_to_console(msg_without_cmd_output) waagent.AddExtensionEvent(name=extension_short_name, op=operation, isSuccess=True, version=version, message="(01302)" + msg_without_cmd_output) end_time = time.time() waagent.AddExtensionEvent(name=extension_short_name, op=operation, isSuccess=True, version=version, message=("(01304)Command execution time: " "{0}s").format(str(end_time - start_time))) log_or_exit(hutil, exit_after_run, exit_code, operation, msg_with_cmd_output) except Exception as e: error_msg = ("Failed to launch command with error: {0}," "stacktrace: {1}").format(e, traceback.format_exc()) hutil.error(error_msg) waagent.AddExtensionEvent(name=extension_short_name, op=operation, isSuccess=False, version=version, message="(01101)" + error_msg) exit_code = 1 msg = 'Launch command failed: {0}'.format(e) log_or_exit(hutil, exit_after_run, exit_code, operation, msg) finally: if std_out: std_out.close() if err_out: err_out.close() return exit_code # do_exit calls sys.exit which raises an exception so we do not call it from the finally block def log_or_exit(hutil, exit_after_run, exit_code, operation, msg): status = 'success' if exit_code == 0 else 'failed' if exit_after_run: hutil.do_exit(exit_code, operation, status, str(exit_code), msg) else: hutil.do_status_report(operation, status, str(exit_code), msg) def parse_args(cmd): cmd = filter(lambda x: x in string.printable, cmd) # encoding works different for between interpreter version, we are keeping separate implementation to ensure # backward compatibility if sys.version_info[0] == 3: cmd = ''.join(list(cmd)).encode('ascii', 'ignore').decode("ascii", "ignore") elif sys.version_info[0] == 2: cmd = cmd.decode("ascii", "ignore") args = shlex.split(cmd) # From python 2.6 to python 2.7.2, shlex.split output UCS-4 result like # '\x00\x00a'. Temp workaround is to replace \x00 for idx, val in enumerate(args): if '\x00' in args[idx]: args[idx] = args[idx].replace('\x00', '') return args Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/WAAgentUtil.py000077500000000000000000000103451462617747000304650ustar00rootroot00000000000000# Wrapper module for waagent # # waagent is not written as a module. This wrapper module is created # to use the waagent code as a module. # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
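#
# Typical usage from extension code (a sketch; the extension name is hypothetical):
#
#   from Utils.WAAgentUtil import waagent
#   waagent.AddExtensionEvent(name="MyTestExt", op=waagent.WALAEventOperation.Enable, isSuccess=True, message="enabled")
#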
# pylint: disable=all import imp import os import os.path # # The following code will search and load waagent code and expose # it as a submodule of current module # def searchWAAgent(): # if the extension ships waagent in its package to default to this version first pkg_agent_path = os.path.join(os.getcwd(), 'waagent') if os.path.isfile(pkg_agent_path): return pkg_agent_path agentPath = '/usr/sbin/waagent' if os.path.isfile(agentPath): return agentPath user_paths = os.environ['PYTHONPATH'].split(os.pathsep) for user_path in user_paths: agentPath = os.path.join(user_path, 'waagent') if os.path.isfile(agentPath): return agentPath return None waagent = None agentPath = searchWAAgent() if agentPath: waagent = imp.load_source('waagent', agentPath) else: raise Exception("Can't load waagent.") if not hasattr(waagent, "AddExtensionEvent"): """ If AddExtensionEvent is not defined, provide a dummy impl. """ def _AddExtensionEvent(*args, **kwargs): pass waagent.AddExtensionEvent = _AddExtensionEvent if not hasattr(waagent, "WALAEventOperation"): class _WALAEventOperation: HeartBeat = "HeartBeat" Provision = "Provision" Install = "Install" UnIsntall = "UnInstall" Disable = "Disable" Enable = "Enable" Download = "Download" Upgrade = "Upgrade" Update = "Update" waagent.WALAEventOperation = _WALAEventOperation # Better deal with the silly waagent typo, in anticipation of a proper fix of the typo later on waagent if not hasattr(waagent.WALAEventOperation, 'Uninstall'): if hasattr(waagent.WALAEventOperation, 'UnIsntall'): waagent.WALAEventOperation.Uninstall = waagent.WALAEventOperation.UnIsntall else: # This shouldn't happen, but just in case... waagent.WALAEventOperation.Uninstall = 'Uninstall' def GetWaagentHttpProxyConfigString(): """ Get http_proxy and https_proxy from waagent config. Username and password is not supported now. This code is adopted from /usr/sbin/waagent """ host = None port = None try: waagent.Config = waagent.ConfigurationProvider( None) # Use default waagent conf file (most likely /etc/waagent.conf) host = waagent.Config.get("HttpProxy.Host") port = waagent.Config.get("HttpProxy.Port") except Exception as e: # waagent.ConfigurationProvider(None) will throw an exception on an old waagent # Has to silently swallow because logging is not yet available here # and we don't want to bring that in here. Also if the call fails, then there's # no proxy config in waagent.conf anyway, so it's safe to silently swallow. pass result = '' if host is not None: result = "http://" + host if port is not None: result += ":" + port return result waagent.HttpProxyConfigString = GetWaagentHttpProxyConfigString() # end: waagent http proxy config stuff __ExtensionName__ = None def InitExtensionEventLog(name): global __ExtensionName__ __ExtensionName__ = name def AddExtensionEvent(name=__ExtensionName__, op=waagent.WALAEventOperation.Enable, isSuccess=False, message=None): if name is not None: waagent.AddExtensionEvent(name=name, op=op, isSuccess=isSuccess, message=message) Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/test/000077500000000000000000000000001462617747000267405ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/test/MockUtil.py000077500000000000000000000024161462617747000310470ustar00rootroot00000000000000#!/usr/bin/env python # # Sample Extension # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # pylint: disable=all # TODO: These tests were copied as reference - they are not currently running class MockUtil(): def __init__(self, test): self.test = test def get_log_dir(self): return "/tmp" def log(self, msg): print(msg) def error(self, msg): print(msg) def get_seq_no(self): return "0" def do_status_report(self, operation, status, status_code, message): self.test.assertNotEqual(None, message) self.last = "do_status_report" def do_exit(self,exit_code,operation,status,code,message): self.test.assertNotEqual(None, message) self.last = "do_exit" Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/test/env.py000077500000000000000000000014161462617747000301070ustar00rootroot00000000000000#!/usr/bin/env python # # Sample Extension # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import sys import os #append installer directory to sys.path root = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) sys.path.append(root) Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/test/mock.sh000077500000000000000000000012661462617747000302350ustar00rootroot00000000000000#!/bin/bash # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. echo "Start..." sleep 0.1 echo "Running" >&2 echo "Warning" sleep 0.1 echo "Finished" exit $1 Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/test/test_logutil.py000077500000000000000000000022211462617747000320300ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
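# These tests exercise LogUtil.tail, which returns at most the last output_size printable characters of a log file.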
# pylint: disable=all # TODO: These tests were copied as reference - they are not currently running import unittest import LogUtil as lu class TestLogUtil(unittest.TestCase): def test_tail(self): with open("/tmp/testtail", "w+") as F: F.write(u"abcdefghijklmnopqrstu\u6211vwxyz".encode("utf-8")) tail = lu.tail("/tmp/testtail", 2) self.assertEquals("yz", tail) tail = lu.tail("/tmp/testtail") self.assertEquals("abcdefghijklmnopqrstuvwxyz", tail) if __name__ == '__main__': unittest.main() test_null_protected_settings.py000077500000000000000000000027011462617747000352400ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/test#!/usr/bin/env python # # Sample Extension # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # pylint: disable=all # TODO: These tests were copied as reference - they are not currently running import unittest import HandlerUtil as Util def mock_log(*args, **kwargs): pass class TestNullProtectedSettings(unittest.TestCase): def test_null_protected_settings(self): hutil = Util.HandlerUtility(mock_log, mock_log, "UnitTest", "HandlerUtil.UnitTest", "0.0.1") config = hutil._parse_config(Settings) handlerSettings = config['runtimeSettings'][0]['handlerSettings'] self.assertEquals(handlerSettings["protectedSettings"], None) Settings="""\ { "runtimeSettings":[{ "handlerSettings":{ "protectedSettingsCertThumbprint":null, "protectedSettings":null, "publicSettings":{} } }] } """ if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/test/test_redacted_settings.py000066400000000000000000000041041462617747000340430ustar00rootroot00000000000000#!/usr/bin/env python # # Tests for redacted settings # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
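# These tests verify that redact_protected_settings masks both the protectedSettings blob and the certificate thumbprint before settings are written to logs.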
# pylint: disable=all # TODO: These tests were copied as reference - they are not currently running import unittest import Utils.HandlerUtil as Util class TestRedactedProtectedSettings(unittest.TestCase): def test_redacted_protected_settings(self): redacted = Util.HandlerUtility.redact_protected_settings(settings_original) self.assertIn('"protectedSettings": "*** REDACTED ***"', redacted) self.assertIn('"protectedSettingsCertThumbprint": "*** REDACTED ***"', redacted) settings_original = """\ { "runtimeSettings": [{ "handlerSettings": { "protectedSettingsCertThumbprint": "9310D2O49D7216D4A1CEDCE9D8A7CE5DBD7FB7BF", "protectedSettings": "MIIC4AYJKoZIhvcNAQcWoIIB0TCDEc0CAQAxggFpMIIBZQIBADBNMDkxNzA1BgoJkiaJk/IsZAEZFidXaW5kb3dzIEF6dXJlIENSUCBDZXJ0aWZpY2F0ZSBHZW5lcmF0b3ICEB8f7DyzHLGjSDLnEWd4YeAwDQYJKoZIhvcNAQEBBQAEggEAiZj2gQtT4MpdTaEH8rUVFB/8Ucc8OxGFWu8VKbIdoHLKp1WcDb7Vlzv6fHLBIccgXGuR1XHTvtlD4QiKpSet341tPPug/R5ZtLSRz1pqtXZdrFcuuSxOa6ib/+la5ukdygcVwkEnmNSQaiipPKyqPH2JsuhmGCdXFiKwCSTrgGE6GyCBtaK9KOf48V/tYXHnDGrS9q5a1gRF5KVI2B26UYSO7V7pXjzYCd/Sp9yGj7Rw3Kqf9Lpix/sPuqWjV6e2XFlD3YxaHSeHVnLI/Bkz2E6Ri8yfPYus52r/mECXPL2YXqY9dGyrlKKIaD9AuzMyvvy1A74a9VBq7zxQQ4adEzBbBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDyEf4mRrmWJgDhW4j2nRNTJU4yXxocQm/PhAr39Um7n0pgI2Cn28AabYtsHWjKqr8Al9LX6bKm8cnmnLjqTntphCw==", "publicSettings": {} } }] } """ if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/Utils/test/test_scriptutil.py000077500000000000000000000040641462617747000325620ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2014 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
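# These tests drive ScriptUtil.parse_args, run_command (against the mock.sh helper), and log_or_exit, asserting on exit codes and on the reporting calls recorded by MockUtil.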
# pylint: disable=all # TODO: These tests were copied as reference - they are not currently running import os import os.path import env import ScriptUtil as su import unittest from MockUtil import MockUtil class TestScriptUtil(unittest.TestCase): def test_parse_args(self): print(__file__) cmd = u'sh foo.bar.sh -af bar --foo=bar | more \u6211' args = su.parse_args(cmd.encode('utf-8')) self.assertNotEquals(None, args) self.assertNotEquals(0, len(args)) print(args) def test_run_command(self): hutil = MockUtil(self) test_script = "mock.sh" os.chdir(os.path.join(env.root, "test")) exit_code = su.run_command(hutil, ["sh", test_script, "0"], os.getcwd(), 'RunScript-0', 'TestExtension', '1.0', True, 0.1) self.assertEquals(0, exit_code) self.assertEquals("do_exit", hutil.last) exit_code = su.run_command(hutil, ["sh", test_script, "75"], os.getcwd(), 'RunScript-1', 'TestExtension', '1.0', False, 0.1) self.assertEquals(75, exit_code) self.assertEquals("do_status_report", hutil.last) def test_log_or_exit(self): hutil = MockUtil(self) su.log_or_exit(hutil, True, 0, 'LogOrExit-0', 'Message1') self.assertEquals("do_exit", hutil.last) su.log_or_exit(hutil, False, 0, 'LogOrExit-1', 'Message2') self.assertEquals("do_status_report", hutil.last) if __name__ == '__main__': unittest.main() Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/manifest.xml000066400000000000000000000016571462617747000272220ustar00rootroot00000000000000 Microsoft.Azure.TestExtensions GuestAgentDcrTest 1.4.1 VmRole Microsoft Azure Guest Agent test Extension for testing Linux Virtual Machines in DCR true https://github.com/larohra/GuestAgentDcrTestExtension/blob/master/LICENSE http://www.microsoft.com/privacystatement/en-us/OnlineServices/Default.aspx https://github.com/larohra/GuestAgentDcrTestExtension true Linux Microsoft Azure-WALinuxAgent-2b21de5/tests_e2e/GuestAgentDcrTestExtension/references000066400000000000000000000000601462617747000267210ustar00rootroot00000000000000# TODO: Investigate the use of this file Utils/ Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/000077500000000000000000000000001462617747000221445ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/docker/000077500000000000000000000000001462617747000234135ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/docker/Dockerfile000066400000000000000000000213101462617747000254020ustar00rootroot00000000000000# # * Sample command to build the image: # # docker build -t waagenttests . 
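#
#   (pass --no-cache to docker build to force a full rebuild when the base image or package versions change)
#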
# # * Sample command to execute a container interactively: # # docker run --rm -it -v /home/nam/src/WALinuxAgent:/home/waagent/WALinuxAgent waagenttests bash --login # FROM mcr.microsoft.com/cbl-mariner/base/core:2.0 LABEL description="Test environment for WALinuxAgent" SHELL ["/bin/bash", "-c"] # # Install the required packages as root # USER root RUN \ tdnf -y update && \ # mariner packages can be found in this repository https://cvedashboard.azurewebsites.net/#/packages \ # \ # Install basic dependencies \ # \ tdnf -y install git python3 python3-devel wget bzip2 ca-certificates && \ \ # \ # Install LISA dependencies \ # \ tdnf install -y git gcc gobject-introspection-devel cairo-devel pkg-config python3-devel libvirt-devel \ cairo-gobject binutils kernel-headers glibc-devel python3-pip python3-virtualenv && \ \ # \ # Install test dependencies \ # \ tdnf -y install zip tar && \ \ # \ # Create user waagent, which is used to execute the tests \ # \ groupadd waagent && \ useradd --shell /bin/bash --create-home -g waagent waagent && \ \ # \ # Install the Azure CLI \ # \ tdnf -y install azure-cli && \ tdnf clean all && \ : # # Install LISA as user waagent # USER waagent RUN \ export PATH="$HOME/.local/bin:$PATH" && \ \ # \ # Install LISA. \ # \ # (note that we use a specific commit, which is the version of LISA that has been verified to work with our \ # tests; when taking a new LISA version, make sure to verify that the tests work OK before pushing the \ # Docker image to our registry) \ # \ cd $HOME && \ git clone https://github.com/microsoft/lisa.git && \ cd lisa && \ git checkout 2c16e32001fdefb9572dff61241451b648259dbf && \ \ python3 -m pip install --upgrade pip && \ python3 -m pip install --editable .[azure,libvirt] --config-settings editable_mode=compat && \ \ # \ # Install additional test dependencies \ # \ # (note that we update azure-mgmt-compute to 29.1.0 - LISA installs 26.1; this is needed in order to access \ # osProfile.linuxConfiguration.enableVMAgentPlatformUpdates in the VM model - that property is used by some \ # tests, such as Agent versioning) \ # \ python3 -m pip install distro msrestazure pytz && \ python3 -m pip install azure-mgmt-compute==29.1.0 --upgrade && \ \ # \ # Download Pypy to a known location, from which it will be installed to the test VMs. \ # \ wget https://dcrdata.blob.core.windows.net/python/pypy3.7-x64.tar.bz2 -O /tmp/pypy3.7-x64.tar.bz2 && \ wget https://dcrdata.blob.core.windows.net/python/pypy3.7-arm64.tar.bz2 -O /tmp/pypy3.7-arm64.tar.bz2 && \ \ # \ # Install pudb, which can be useful to debug issues in the image \ # \ python3 -m pip install pudb && \ \ # \ # The setup for the tests depends on a few paths; add those to the profile \ # \ echo 'export PYTHONPATH="$HOME/WALinuxAgent"' >> $HOME/.bash_profile && \ echo 'export PATH="$HOME/.local/bin:$PATH"' >> $HOME/.bash_profile && \ echo 'cd $HOME' >> $HOME/.bash_profile && \ : Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/lib/000077500000000000000000000000001462617747000227125ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/lib/agent_junit.py000066400000000000000000000064721462617747000256040ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Type # # Disable those warnings, since 'lisa' is an external, non-standard, dependency # E0401: Unable to import 'dataclasses_json' (import-error) # E0401: Unable to import 'lisa.notifiers.junit' (import-error) # E0401: Unable to import 'lisa' (import-error) # E0401: Unable to import 'lisa.messages' (import-error) from dataclasses import dataclass # pylint: disable=E0401 from dataclasses_json import dataclass_json # pylint: disable=E0401 from lisa.notifiers.junit import JUnit # pylint: disable=E0401 from lisa import schema # pylint: disable=E0401 from lisa.messages import ( # pylint: disable=E0401 MessageBase, TestResultMessage, TestStatus ) @dataclass_json() @dataclass class AgentJUnitSchema(schema.Notifier): path: str = "agent.junit.xml" include_subtest: bool = True class AgentJUnit(JUnit): @classmethod def type_name(cls) -> str: return "agent.junit" @classmethod def type_schema(cls) -> Type[schema.TypedSchema]: return AgentJUnitSchema def _received_message(self, message: MessageBase) -> None: # The Agent sends its own TestResultMessages setting their type as "AgentTestResultMessage". # Any other message types are sent by LISA. if isinstance(message, TestResultMessage) and message.type != "AgentTestResultMessage": if "Unexpected error in AgentTestSuite" in message.message: # Ignore these errors, they are already reported as AgentTestResultMessages return if "TestFailedException" in message.message: # Ignore these errors, they are already reported as test failures return # Change the suite name to "_Runbook_" for LISA messages in order to separate them # from actual test results. message.suite_full_name = "_Runbook_" message.suite_name = message.suite_full_name image = message.information.get('image') if image is not None: # NOTE: The value of message.information['environment'] is similar to "[generated_2]" and can be correlated # with the main LISA log to find the specific VM for the message. message.full_name = f"{image} [{message.information['environment']}]" message.name = message.full_name # LISA silently skips tests on situations that should be errors (e.g. trying to create a test VM using an image that is not available). # Mark these messages as failed so that the JUnit report shows them as errors. if message.status == TestStatus.SKIPPED: message.status = TestStatus.FAILED super()._received_message(message) Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/lib/agent_test_loader.py000066400000000000000000000446771462617747000267710ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
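#
# ---------------------------------------------------------------------------
# Editorial example (not part of the original source): a minimal, runnable
# sketch of how the RANDOM_IMAGES_RE pattern defined below in this module
# resolves references such as "random(endorsed, 3)". The image set name
# "endorsed" is only an illustration of the names defined in images.yml.
# ---------------------------------------------------------------------------
def _example_parse_random_image_reference() -> None:
    import re
    pattern = re.compile(r"random\((?P<image_set>[^,]+)(\s*,\s*(?P<count>\d+))?\)")
    match = pattern.match("random(endorsed, 3)")
    assert match is not None
    assert match.group("image_set") == "endorsed" and match.group("count") == "3"
    # When no count is given, the 'count' group is None and the caller chooses a default.
    assert pattern.match("random(endorsed)").group("count") is None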
#
import importlib.util
import re
# E0401: Unable to import 'yaml' (import-error)
import yaml  # pylint: disable=E0401

from pathlib import Path
from typing import Any, Dict, List, Type

import tests_e2e
from tests_e2e.tests.lib.agent_test import AgentTest, AgentVmTest, AgentVmssTest


class TestInfo(object):
    """
    Description of a test
    """
    # The class that implements the test
    test_class: Type[AgentVmTest]
    # If True, an error in the test blocks the execution of the test suite (defaults to False)
    blocks_suite: bool

    @property
    def name(self) -> str:
        return self.test_class.__name__

    def __str__(self):
        return self.name


class TestSuiteInfo(object):
    """
    Description of a test suite
    """
    # The name of the test suite
    name: str
    # The tests that comprise the suite
    tests: List[TestInfo]
    # Images or image sets (as defined in images.yml) on which the suite must run.
    images: List[str]
    # The locations (regions) on which the suite must run; if empty, the suite can run on any location
    locations: List[str]
    # Whether this suite must run on its own test VM
    owns_vm: bool
    # If True, the suite must run on a scale set (instead of a single VM)
    executes_on_scale_set: bool
    # Whether to install the test Agent on the test VM
    install_test_agent: bool
    # Customization for the ARM template used when creating the test VM
    template: str
    # Skip the test suite if it is not supposed to run on specific clouds
    skip_on_clouds: List[str]
    # Skip the test suite if it is not supposed to run on specific images
    skip_on_images: List[str]

    def __str__(self):
        return self.name


class VmImageInfo(object):
    # The URN of the image (publisher, offer, sku, version separated by spaces)
    urn: str
    # Indicates that the image is available only on those locations. If empty, the image should be available in all locations
    locations: Dict[str, List[str]]
    # Indicates that the image is available only for those VM sizes. If empty, the image should be available for all VM sizes
    vm_sizes: List[str]

    def __str__(self):
        return self.urn


class AgentTestLoader(object):
    """
    Loads a given set of test suites from the YAML configuration files.
    """
    def __init__(self, test_suites: str, cloud: str):
        """
        Loads the specified 'test_suites', which are given as a string of comma-separated suite names or a YAML description
        of a single test_suite.

        The 'cloud' parameter indicates the cloud on which the tests will run. It is used to validate any restrictions on the
        test suite and/or image locations.

        When given as a comma-separated list, each item must correspond to the name of the YAML file describing a suite
        (those files are located under the .../WALinuxAgent/tests_e2e/test_suites directory). For example, if
        test_suites == "agent_bvt, fast_track" then this method will load files agent_bvt.yml and fast_track.yml.

        When given as a YAML string, the value must correspond to the description of a single test suite, for example

            name: "AgentBvt"
            tests:
              - "bvts/extension_operations.py"
              - "bvts/run_command.py"
              - "bvts/vm_access.py"
        """
        self.__test_suites: List[TestSuiteInfo] = self._load_test_suites(test_suites)
        self.__cloud: str = cloud
        self.__images: Dict[str, List[VmImageInfo]] = self._load_images()
        self._validate()

    _SOURCE_CODE_ROOT: Path = Path(tests_e2e.__path__[0])

    @property
    def test_suites(self) -> List[TestSuiteInfo]:
        return self.__test_suites

    @property
    def images(self) -> Dict[str, List[VmImageInfo]]:
        """
        A dictionary where, for each item, the key is the name of an image or image set and the value is a list of VmImageInfos for
        the corresponding images.
        """
        return self.__images

    # Matches a reference to a random subset of images within a set with an optional count: random(<image_set>[, <count>]), e.g. random(endorsed, 3), random(endorsed)
    RANDOM_IMAGES_RE = re.compile(r"random\((?P<image_set>[^,]+)(\s*,\s*(?P<count>\d+))?\)")

    def _validate(self):
        """
        Performs some basic validations on the data loaded from the YAML description files
        """
        def _parse_image(image: str) -> str:
            """
            Parses a reference to an image or image set and returns the name of the image or image set
            """
            match = AgentTestLoader.RANDOM_IMAGES_RE.match(image)
            if match is not None:
                return match.group('image_set')
            return image

        for suite in self.test_suites:
            # Validate that the images the suite must run on are in images.yml
            for image in suite.images:
                image = _parse_image(image)
                if image not in self.images:
                    raise Exception(f"Invalid image reference in test suite {suite.name}: Can't find {image} in images.yml")

            # If the suite specifies a cloud and its location, validate that the location string starts with "<cloud>:" and then
            # validate that the images it uses are available in that location
            for suite_location in suite.locations:
                if suite_location.startswith(self.__cloud + ":"):
                    suite_location = suite_location.split(":")[1]
                else:
                    continue
                for suite_image in suite.images:
                    suite_image = _parse_image(suite_image)
                    for image in self.images[suite_image]:
                        # If the image has a location restriction, validate that it is available on the location the suite must run on
                        if image.locations:
                            locations = image.locations.get(self.__cloud)
                            if locations is not None and not any(suite_location in l for l in locations):
                                raise Exception(f"Test suite {suite.name} must be executed in {suite_location}, but <{image.urn}> is not available in that location")

            # If the suite specifies skip_on_clouds, validate that those clouds are ones we use in our tests
            for suite_skip_cloud in suite.skip_on_clouds:
                if suite_skip_cloud not in ["AzureCloud", "AzureChinaCloud", "AzureUSGovernment"]:
                    raise Exception(f"Invalid cloud {suite_skip_cloud} in {suite.name}")

            # If the suite specifies skip_on_images, validate that those images are used in our tests
            for suite_skip_image in suite.skip_on_images:
                if suite_skip_image not in self.images:
                    raise Exception(f"Invalid image reference in test suite {suite.name}: Can't find {suite_skip_image} in images.yml")

    @staticmethod
    def _load_test_suites(test_suites: str) -> List[TestSuiteInfo]:
        #
        # Attempt to parse 'test_suites' as the YML description of a single suite
        #
        parsed = yaml.safe_load(test_suites)
        #
        # A comma-separated list (e.g. "foo", "foo, bar", etc.) is valid YAML, but it is parsed as a string. An actual test suite would
        # be parsed as a dictionary. If it is a dict, take it as the YML description of a single test suite
        #
        if isinstance(parsed, dict):
            return [AgentTestLoader._load_test_suite(parsed)]
        #
        # If test_suites is not YML, then it should be a comma-separated list of description files
        #
        description_files: List[Path] = [AgentTestLoader._SOURCE_CODE_ROOT/"test_suites"/f"{t.strip()}.yml" for t in test_suites.split(',')]
        return [AgentTestLoader._load_test_suite(f) for f in description_files]

    @staticmethod
    def _load_test_suite(description_file: Path) -> TestSuiteInfo:
        """
        Loads the description of a TestSuite from its YAML file.

        A test suite is described by the properties listed below.
        Sample test suite:

            name: "AgentBvt"
            tests:
              - "bvts/extension_operations.py"
              - "bvts/run_command.py"
              - "bvts/vm_access.py"
            images: "endorsed"
            locations: "AzureCloud:eastus2euap"
            owns_vm: true
            install_test_agent: true
            template: "bvts/template.py"
            skip_on_clouds: "AzureChinaCloud"
            skip_on_images: "ubuntu_2004"

        * name - A string used to identify the test suite
        * tests - A list of the tests in the suite. Each test can be specified by a string (the path for its source code relative to
          WALinuxAgent/tests_e2e/tests), or a dictionary with two items:
            * source: the path for its source code relative to WALinuxAgent/tests_e2e/tests
            * blocks_suite: [Optional; boolean] If True, a failure on the test will stop execution of the test suite (i.e. the rest of the
              tests in the suite will not be executed). By default, a failure on a test does not stop execution of the test suite.
        * images - A string, or a list of strings, specifying the images on which the test suite must be executed. Each value can be
          the name of a single image (e.g. "ubuntu_2004"), or the name of an image set (e.g. "endorsed"). The names for images and
          image sets are defined in WALinuxAgent/tests_e2e/tests_suites/images.yml.
        * locations - [Optional; string or list of strings] If given, the test suite must be executed on that cloud location
          (e.g. "AzureCloud:eastus2euap"). If not specified, or set to an empty string, the test suite will be executed in the default
          location. This is useful for test suites that exercise a feature that is enabled only in certain regions.
        * owns_vm - [Optional; boolean] By default all suites in a test run are executed on the same test VMs; if this value is set
          to True, new test VMs will be created and will be used exclusively for this test suite. This is useful for suites that modify
          the test VMs in such a way that the setup may cause problems in other test suites (for example, some tests targeted to the
          HGAP block internet access in order to force the agent to use the HGAP).
        * executes_on_scale_set - [Optional; boolean] True indicates that the test runs on a scale set.
        * install_test_agent - [Optional; boolean] By default the setup process installs the test Agent on the test VMs; set this
          property to False to skip the installation.
        * template - [Optional; string] If given, the ARM template for the test VM is customized using the given Python module.
        * skip_on_clouds - [Optional; string or list of strings] If given, the test suite will be skipped in the specified cloud
          (e.g. "AzureCloud"). If not specified, the test suite will be executed in all the clouds that we use. This is useful if you
          want to skip a test suite validation in a particular cloud when a certain feature is not available in that cloud.
        * skip_on_images - [Optional; string or list of strings] If given, the test suite will be skipped on the specified images or
          image sets (e.g. "ubuntu_2004"). If not specified, the test suite will be executed on all the images that we use. This is
          useful if you want to skip a test suite validation on particular images or image sets when a certain feature is not
          available on those images.
        """
        test_suite: Dict[str, Any] = AgentTestLoader._load_file(description_file)

        if any([test_suite.get(p) is None for p in ["name", "tests", "images"]]):
            raise Exception(f"Invalid test suite: {description_file}.
'name', 'tests', and 'images' are required properties") test_suite_info = TestSuiteInfo() test_suite_info.name = test_suite["name"] test_suite_info.tests = [] for test in test_suite["tests"]: test_info = TestInfo() if isinstance(test, str): test_info.test_class = AgentTestLoader._load_test_class(test) test_info.blocks_suite = False else: test_info.test_class = AgentTestLoader._load_test_class(test["source"]) test_info.blocks_suite = test.get("blocks_suite", False) test_suite_info.tests.append(test_info) images = test_suite["images"] if isinstance(images, str): test_suite_info.images = [images] else: test_suite_info.images = images locations = test_suite.get("locations") if locations is None: test_suite_info.locations = [] else: if isinstance(locations, str): test_suite_info.locations = [locations] else: test_suite_info.locations = locations test_suite_info.owns_vm = "owns_vm" in test_suite and test_suite["owns_vm"] test_suite_info.install_test_agent = "install_test_agent" not in test_suite or test_suite["install_test_agent"] test_suite_info.executes_on_scale_set = "executes_on_scale_set" in test_suite and test_suite["executes_on_scale_set"] test_suite_info.template = test_suite.get("template", "") # TODO: Add support for custom templates if test_suite_info.executes_on_scale_set and test_suite_info.template != '': raise Exception(f"Currently custom templates are not supported on scale sets. [Test suite: {test_suite_info.name}]") skip_on_clouds = test_suite.get("skip_on_clouds") if skip_on_clouds is not None: if isinstance(skip_on_clouds, str): test_suite_info.skip_on_clouds = [skip_on_clouds] else: test_suite_info.skip_on_clouds = skip_on_clouds else: test_suite_info.skip_on_clouds = [] skip_on_images = test_suite.get("skip_on_images") if skip_on_images is not None: if isinstance(skip_on_images, str): test_suite_info.skip_on_images = [skip_on_images] else: test_suite_info.skip_on_images = skip_on_images else: test_suite_info.skip_on_images = [] return test_suite_info @staticmethod def _load_test_class(relative_path: str) -> Type[AgentVmTest]: """ Loads an AgentTest from its source code file, which is given as a path relative to WALinuxAgent/tests_e2e/tests. """ full_path: Path = AgentTestLoader._SOURCE_CODE_ROOT/"tests"/relative_path spec = importlib.util.spec_from_file_location(f"tests_e2e.tests.{relative_path.replace('/', '.').replace('.py', '')}", str(full_path)) module = importlib.util.module_from_spec(spec) spec.loader.exec_module(module) # return all the classes in the module that are subclasses of AgentTest but are not AgentVmTest or AgentVmssTest themselves. matches = [v for v in module.__dict__.values() if isinstance(v, type) and issubclass(v, AgentTest) and v != AgentVmTest and v != AgentVmssTest] if len(matches) != 1: raise Exception(f"Error in {full_path} (each test file must contain exactly one class derived from AgentTest)") return matches[0] @staticmethod def _load_images() -> Dict[str, List[VmImageInfo]]: """ Loads images.yml into a dictionary where, for each item, the key is an image or image set and the value is a list of VmImageInfos for the corresponding images. See the comments in image.yml for a description of the structure of each item. 
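        As an illustration (added for this review; the authoritative format is documented in images.yml itself), the parsing
        code below implies entries shaped roughly like the following, where "ubuntu_2004" and "endorsed" are hypothetical names:

            images:
                ubuntu_2004: "Canonical 0001-com-ubuntu-server-focal 20_04-lts latest"
            image-sets:
                endorsed:
                    - "ubuntu_2004"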
""" image_descriptions = AgentTestLoader._load_file(AgentTestLoader._SOURCE_CODE_ROOT/"test_suites"/"images.yml") if "images" not in image_descriptions: raise Exception("images.yml is missing the 'images' item") images = {} # first load the images as 1-item lists for name, description in image_descriptions["images"].items(): i = VmImageInfo() if isinstance(description, str): i.urn = description i.locations = {} i.vm_sizes = [] else: if "urn" not in description: raise Exception(f"Image {name} is missing the 'urn' property: {description}") i.urn = description["urn"] i.locations = description["locations"] if "locations" in description else {} i.vm_sizes = description["vm_sizes"] if "vm_sizes" in description else [] for cloud in i.locations.keys(): if cloud not in ["AzureCloud", "AzureChinaCloud", "AzureUSGovernment"]: raise Exception(f"Invalid cloud {cloud} for image {name} in images.yml") images[name] = [i] # now load the image-sets, mapping them to the images that we just computed for image_set_name, image_list in image_descriptions["image-sets"].items(): # the same name cannot denote an image and an image-set if image_set_name in images: raise Exception(f"Invalid image-set in images.yml: {image_set_name}. The name is used by an existing image") images_in_set = [] for i in image_list: if i not in images: raise Exception(f"Can't find image {i} (referenced by image-set {image_set_name}) in images.yml") images_in_set.extend(images[i]) images[image_set_name] = images_in_set return images @staticmethod def _load_file(file: Path) -> Dict[str, Any]: """Helper to load a YML file""" try: with file.open() as f: return yaml.safe_load(f) except Exception as e: raise Exception(f"Can't load {file}: {e}") Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/lib/agent_test_suite.py000066400000000000000000001340771462617747000266460ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# import datetime import json import logging import time import traceback import uuid from pathlib import Path from threading import RLock from typing import Any, Dict, List, Tuple # Disable those warnings, since 'lisa' is an external, non-standard, dependency # E0401: Unable to import 'lisa' (import-error) # etc from lisa import ( # pylint: disable=E0401 Environment, Logger, notifier, simple_requirement, TestCaseMetadata, TestSuite as LisaTestSuite, TestSuiteMetadata, ) from lisa.environment import EnvironmentStatus # pylint: disable=E0401 from lisa.messages import TestStatus, TestResultMessage # pylint: disable=E0401 from lisa.node import LocalNode # pylint: disable=E0401 from lisa.util.constants import RUN_ID # pylint: disable=E0401 from lisa.sut_orchestrator.azure.common import get_node_context # pylint: disable=E0401 from lisa.sut_orchestrator.azure.platform_ import AzurePlatform # pylint: disable=E0401 import makepkg from azurelinuxagent.common.version import AGENT_VERSION from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient from tests_e2e.tests.lib.virtual_machine_scale_set_client import VirtualMachineScaleSetClient import tests_e2e from tests_e2e.orchestrator.lib.agent_test_loader import TestSuiteInfo from tests_e2e.tests.lib.agent_log import AgentLog, AgentLogRecord from tests_e2e.tests.lib.agent_test import TestSkipped, RemoteTestError from tests_e2e.tests.lib.agent_test_context import AgentTestContext, AgentVmTestContext, AgentVmssTestContext from tests_e2e.tests.lib.logging import log, set_thread_name, set_current_thread_log from tests_e2e.tests.lib.network_security_rule import NetworkSecurityRule from tests_e2e.tests.lib.resource_group_client import ResourceGroupClient from tests_e2e.tests.lib.shell import run_command, CommandError from tests_e2e.tests.lib.ssh_client import SshClient def _initialize_lisa_logger(): """ Customizes the LISA logger. The default behavior of this logger is too verbose, which makes reading the logs difficult. We set up a more succinct formatter and decrease the log level to INFO (the default is VERBOSE). In the future we may consider making this customization settable at runtime in case we need to debug LISA issues. """ logger: Logger = logging.getLogger("lisa") logger.setLevel(logging.INFO) formatter = logging.Formatter('%(asctime)s.%(msecs)03d [%(levelname)s] [%(threadName)s] %(message)s', datefmt="%Y-%m-%dT%H:%M:%SZ") for handler in logger.handlers: handler.setFormatter(formatter) # # We want to customize the LISA logger as early as possible, so we do it when this module is first imported. That will # happen early in the LISA workflow, when it loads the test suites to execute. 
# _initialize_lisa_logger() # # Possible values for the collect_logs parameter # class CollectLogs(object): Always = 'always' # Always collect logs Failed = 'failed' # Collect logs only on test failures No = 'no' # Never collect logs # # Possible values for the keep_environment parameter # class KeepEnvironment(object): Always = 'always' # Do not delete resources created by the test suite Failed = 'failed' # Skip delete only on test failures No = 'no' # Always delete resources created by the test suite class TestFailedException(Exception): def __init__(self, env_name: str, test_cases: List[str]): msg = "Test suite {0} failed.".format(env_name) if test_cases: msg += " Failed tests: " + ','.join(test_cases) super().__init__(msg) class _TestNode(object): """ Name and IP address of a test VM """ def __init__(self, name: str, ip_address: str): self.name = name self.ip_address = ip_address def __str__(self): return f"{self.name}:{self.ip_address}" @TestSuiteMetadata(area="waagent", category="", description="") class AgentTestSuite(LisaTestSuite): """ Manages the setup of test VMs and execution of Agent test suites. This class acts as the interface with the LISA framework, which will invoke the execute() method when a runbook is executed. """ def __init__(self, metadata: TestSuiteMetadata) -> None: super().__init__(metadata) self._working_directory: Path # Root directory for temporary files self._log_path: Path # Root directory for log files self._pypy_x64_path: Path # Path to the Pypy x64 download self._pypy_arm64_path: Path # Path to the Pypy ARM64 download self._test_agent_package_path: Path # Path to the package for the test Agent self._test_source_directory: Path # Root directory of the source code for the end-to-end tests self._test_tools_tarball_path: Path # Path to the tarball with the tools needed on the test node self._runbook_name: str # name of the runbook execution, used as prefix on ARM resources created by the AgentTestSuite self._lisa_log: Logger # Main log for the LISA run self._lisa_environment_name: str # Name assigned by LISA to the test environment, useful for correlation with LISA logs self._environment_name: str # Name assigned by the AgentTestSuiteCombinator to the test environment self._test_suites: List[AgentTestSuite] # Test suites to execute in the environment self._cloud: str # Azure cloud where test VMs are located self._subscription_id: str # Azure subscription where test VMs are located self._location: str # Azure location (region) where test VMs are located self._image: str # Image used to create the test VMs; it can be empty if LISA chose the size, or when using an existing VM self._is_vhd: bool # True when the test VMs were created by LISA from a VHD; this is usually used to validate a new VHD and the test Agent is not installed # username and public SSH key for the admin account used to connect to the test VMs self._user: str self._identity_file: str # If not empty, adds a Network Security Rule allowing SSH access from the specified IP address to any test VMs created by the test suite. self._allow_ssh: str self._skip_setup: bool # If True, skip the setup of the test VMs self._collect_logs: str # Whether to collect logs from the test VMs (one of 'always', 'failed', or 'no') self._keep_environment: str # Whether to skip deletion of the resources created by the test suite (one of 'always', 'failed', or 'no') # Resource group and VM/VMSS for the test machines. self._vm_name and self._vmss_name are mutually exclusive, only one of them will be set. 
        self._resource_group_name: str
        self._vm_name: str
        self._vm_ip_address: str
        self._vmss_name: str

        self._test_nodes: List[_TestNode]  # VMs or scale set instances the tests will run on

        # Whether to create and delete a scale set.
        self._create_scale_set: bool
        self._delete_scale_set: bool

    #
    # Test suites within the same runbook may be executed concurrently, and we need to keep track of how many resource
    # groups are being created. We use this lock and counter to allow only 1 thread to increment the resource group
    # count.
    #
    _rg_count_lock = RLock()
    _rg_count = 0

    def _initialize(self, environment: Environment, variables: Dict[str, Any], lisa_working_path: str, lisa_log_path: str, lisa_log: Logger):
        """
        Initializes the AgentTestSuite from the data passed as arguments by LISA.

        NOTE: All the interface with LISA should be confined to this method. The rest of the test code should not have any dependencies on LISA.
        """
        self._working_directory = self._get_working_directory(lisa_working_path)
        self._log_path = self._get_log_path(variables, lisa_log_path)
        self._test_agent_package_path = self._working_directory/"eggs"/f"WALinuxAgent-{AGENT_VERSION}.zip"
        self._test_source_directory = Path(tests_e2e.__path__[0])
        self._test_tools_tarball_path = self._working_directory/"waagent-tools.tar"
        self._pypy_x64_path = Path("/tmp/pypy3.7-x64.tar.bz2")
        self._pypy_arm64_path = Path("/tmp/pypy3.7-arm64.tar.bz2")

        self._runbook_name = variables["name"]

        self._lisa_log = lisa_log

        self._lisa_environment_name = environment.name
        self._environment_name = variables["c_env_name"]

        self._test_suites = variables["c_test_suites"]

        self._cloud = variables["cloud"]
        self._subscription_id = variables["subscription_id"]
        self._location = variables["c_location"]
        self._image = variables["c_image"]
        self._is_vhd = variables["c_is_vhd"]

        self._user = variables["user"]
        self._identity_file = variables["identity_file"]

        self._allow_ssh = variables["allow_ssh"]

        self._skip_setup = variables["skip_setup"]
        self._keep_environment = variables["keep_environment"]
        self._collect_logs = variables["collect_logs"]

        # The AgentTestSuiteCombinator can create 4 kinds of platform/environment combinations:
        #
        #    * New VM
        #      The VM is created by LISA. The platform will be 'azure' and the environment will contain a single 'remote' node.
        #
        #    * Existing VM
        #      The VM was passed as argument to the runbook. The platform will be 'ready' and the environment will contain a single 'remote' node.
        #
        #    * New VMSS
        #      The AgentTestSuite will create the scale set before executing the tests. The platform will be 'ready' and the environment will contain a single 'local' node.
        #
        #    * Existing VMSS
        #      The VMSS was passed as argument to the runbook. The platform will be 'ready' and the environment will contain a list of 'remote' nodes,
        #      one for each instance of the scale set.
        #
        # Note that _vm_name and _vmss_name are mutually exclusive, only one of them will be set.
        self._vm_name = None
        self._vm_ip_address = None
        self._vmss_name = None
        self._create_scale_set = False
        self._delete_scale_set = False

        if isinstance(environment.nodes[0], LocalNode):
            # We need to create a new VMSS.
            # Use the same naming convention as LISA for the scale set name: lisa-<runbook name>-<run id>-<rg name>-n0,
            # except that, for the "rg_name", LISA uses "e" as prefix (e.g. "e0", "e1", etc.), while we use "w" (for
            # WALinuxAgent, e.g. "w0", "w1", etc.) to avoid name collisions. Also, note that we hardcode the scale set name
            # to "n0" since we are creating a single scale set.
Lastly, the resource group name cannot have any uppercase # characters, because the publicIP cannot have uppercase characters in its domain name label. AgentTestSuite._rg_count_lock.acquire() try: self._resource_group_name = f"lisa-{self._runbook_name.lower()}-{RUN_ID}-w{AgentTestSuite._rg_count}" AgentTestSuite._rg_count += 1 finally: AgentTestSuite._rg_count_lock.release() self._vmss_name = f"{self._resource_group_name}-n0" self._test_nodes = [] # we'll fill this up when the scale set is created self._create_scale_set = True self._delete_scale_set = False # we set it to True once we create the scale set else: # Else we are using a VM that was created by LISA, or an existing VM/VMSS node_context = get_node_context(environment.nodes[0]) if isinstance(environment.nodes[0].features._platform, AzurePlatform): # The test VM was created by LISA self._resource_group_name = node_context.resource_group_name self._vm_name = node_context.vm_name self._vm_ip_address = environment.nodes[0].connection_info['address'] self._test_nodes = [_TestNode(self._vm_name, self._vm_ip_address)] else: # An existing VM/VMSS was passed as argument to the runbook self._resource_group_name = variables["resource_group_name"] if variables["vm_name"] != "": self._vm_name = variables["vm_name"] self._vm_ip_address = environment.nodes[0].connection_info['address'] self._test_nodes = [_TestNode(self._vm_name, self._vm_ip_address)] else: self._vmss_name = variables["vmss_name"] self._test_nodes = [_TestNode(node.name, node.connection_info['address']) for node in environment.nodes.list()] @staticmethod def _get_log_path(variables: Dict[str, Any], lisa_log_path: str) -> Path: # NOTE: If "log_path" is not given as argument to the runbook, use a path derived from LISA's log for the test suite. # That path is derived from LISA's "--log_path" command line argument and has a value similar to # "<--log_path>/20230217/20230217-040022-342/tests/20230217-040119-288-agent_test_suite"; use the directory # 2 levels up. log_path = variables.get("log_path") if log_path is not None and len(log_path) > 0: return Path(log_path) return Path(lisa_log_path).parent.parent @staticmethod def _get_working_directory(lisa_working_path: str) -> Path: # LISA's "working_path" has a value similar to # "<--working_path>/20230322/20230322-194430-287/tests/20230322-194451-333-agent_test_suite # where "<--working_path>" is the value given to the --working_path command line argument. Create the working directory for # the AgentTestSuite as # "<--working_path>/20230322/20230322-194430-287/waagent # This directory will be unique for each execution of the runbook ("20230322-194430" is the timestamp and "287" is a # unique ID per execution) return Path(lisa_working_path).parent.parent/"waagent" # # Test suites within the same runbook may be executed concurrently, and setup needs to be done only once. # We use these locks to allow only 1 thread to do the setup. Setup completion is marked using the 'completed' # file: the thread doing the setup creates the file and threads that find that the file already exists # simply skip setup. # _working_directory_lock = RLock() _setup_lock = RLock() def _create_working_directory(self) -> None: """ Creates the working directory for the test suite. 
""" self._working_directory_lock.acquire() try: if not self._working_directory.exists(): log.info("Creating working directory: %s", self._working_directory) self._working_directory.mkdir(parents=True) finally: self._working_directory_lock.release() def _setup_test_run(self) -> None: """ Prepares the test suite for execution (currently, it just builds the agent package) Returns the path to the agent package. """ self._setup_lock.acquire() try: completed: Path = self._working_directory / "completed" if completed.exists(): log.info("Found %s. Build has already been done, skipping.", completed) return log.info("") log.info("********************************** [Preparing Test Run] **********************************") log.info("") self._lisa_log.info("Building agent package to %s", self._test_agent_package_path) log.info("Building agent package to %s", self._test_agent_package_path) makepkg.run(agent_family="Test", output_directory=str(self._working_directory), log=log) if not self._test_agent_package_path.exists(): # the target path is created by makepkg, ensure we are using the correct value raise Exception(f"The test Agent package was not created at the expected path {self._test_agent_package_path}") # # Ensure that Pypy (both x64 and ARM) has been downloaded to the local machine; it is pre-downloaded to /tmp on # the container image used for Azure Pipelines runs, but for developer runs it may need to be downloaded. # for pypy in [self._pypy_x64_path, self._pypy_arm64_path]: if pypy.exists(): log.info("Found Pypy at %s", pypy) else: pypy_download = f"https://dcrdata.blob.core.windows.net/python/{pypy.name}" self._lisa_log.info("Downloading %s to %s", pypy_download, pypy) log.info("Downloading %s to %s", pypy_download, pypy) run_command(["wget", pypy_download, "-O", pypy]) # # Create a tarball with the tools we need to copy to the test node. The tarball includes two directories: # # * bin - Executables file (Bash and Python scripts) # * lib - Library files (Python modules) # self._lisa_log.info("Creating %s with the tools needed on the test node", self._test_tools_tarball_path) log.info("Creating %s with the tools needed on the test node", self._test_tools_tarball_path) log.info("Adding orchestrator/scripts") command = "cd {0} ; tar cf {1} --transform='s,^,bin/,' *".format(self._test_source_directory/"orchestrator"/"scripts", self._test_tools_tarball_path) log.info("%s", command) run_command(command, shell=True) log.info("Adding tests/scripts") command = "cd {0} ; tar rf {1} --transform='s,^,bin/,' *".format(self._test_source_directory/"tests"/"scripts", self._test_tools_tarball_path) log.info("%s", command) run_command(command, shell=True) log.info("Adding tests/lib") command = "cd {0} ; tar rf {1} --transform='s,^,lib/,' --exclude=__pycache__ tests_e2e/tests/lib".format(self._test_source_directory.parent, self._test_tools_tarball_path) log.info("%s", command) run_command(command, shell=True) log.info("Contents of %s:\n%s", self._test_tools_tarball_path, run_command(['tar', 'tvf', str(self._test_tools_tarball_path)])) log.info("Completed setup, creating %s", completed) completed.touch() finally: self._setup_lock.release() def _clean_up(self, success: bool) -> None: """ Cleans up any items created by the test suite run. 
""" if self._delete_scale_set: if self._keep_environment == KeepEnvironment.Always: log.info("Won't delete the scale set %s, per the test suite configuration.", self._vmss_name) elif self._keep_environment == KeepEnvironment.No or self._keep_environment == KeepEnvironment.Failed and success: try: self._lisa_log.info("Deleting resource group containing the test VMSS: %s", self._resource_group_name) resource_group = ResourceGroupClient(cloud=self._cloud, location=self._location, subscription=self._subscription_id, name=self._resource_group_name) resource_group.delete() except Exception as error: # pylint: disable=broad-except log.warning("Error deleting resource group %s: %s", self._resource_group_name, error) def _setup_test_nodes(self) -> None: """ Prepares the test nodes for execution of the test suite (installs tools and the test agent, etc) """ install_test_agent = self._test_suites[0].install_test_agent # All suites in the environment have the same value for install_test_agent log.info("") log.info("************************************ [Test Nodes Setup] ************************************") log.info("") for node in self._test_nodes: self._lisa_log.info(f"Setting up test node {node}") log.info("Test Node: %s", node.name) log.info("IP Address: %s", node.ip_address) log.info("") ssh_client = SshClient(ip_address=node.ip_address, username=self._user, identity_file=Path(self._identity_file)) self._check_ssh_connectivity(ssh_client) # # Cleanup the test node (useful for developer runs) # log.info('Preparing the test node for setup') # Note that removing lib requires sudo, since a Python cache may have been created by tests using sudo ssh_client.run_command("rm -rvf ~/{bin,lib,tmp}", use_sudo=True) # # Copy Pypy, the test Agent, and the test tools to the test node # ssh_client = SshClient(ip_address=node.ip_address, username=self._user, identity_file=Path(self._identity_file)) if ssh_client.get_architecture() == "aarch64": pypy_path = self._pypy_arm64_path else: pypy_path = self._pypy_x64_path target_path = Path("~")/"tmp" ssh_client.run_command(f"mkdir {target_path}") log.info("Copying %s to %s:%s", pypy_path, node.name, target_path) ssh_client.copy_to_node(pypy_path, target_path) log.info("Copying %s to %s:%s", self._test_agent_package_path, node.name, target_path) ssh_client.copy_to_node(self._test_agent_package_path, target_path) log.info("Copying %s to %s:%s", self._test_tools_tarball_path, node.name, target_path) ssh_client.copy_to_node(self._test_tools_tarball_path, target_path) # # Extract the tarball with the test tools. The tarball includes two directories: # # * bin - Executables file (Bash and Python scripts) # * lib - Library files (Python modules) # # After extracting the tarball on the test node, 'bin' will be added to PATH and PYTHONPATH will be set to 'lib'. # # Note that executables are placed directly under 'bin', while the path for Python modules is preserved under 'lib. 
# log.info('Installing tools on the test node') command = f"tar xvf {target_path/self._test_tools_tarball_path.name} && ~/bin/install-tools" log.info("Remote command [%s] completed:\n%s", command, ssh_client.run_command(command)) if self._is_vhd: log.info("Using a VHD; will not install the Test Agent.") elif not install_test_agent: log.info("Will not install the Test Agent per the test suite configuration.") else: log.info("Installing the Test Agent on the test node") command = f"install-agent --package ~/tmp/{self._test_agent_package_path.name} --version {AGENT_VERSION}" log.info("%s\n%s", command, ssh_client.run_command(command, use_sudo=True)) log.info("Completed test node setup") @staticmethod def _check_ssh_connectivity(ssh_client: SshClient) -> None: # We may be trying to connect to the test node while it is still booting. Execute a simple command to check that SSH is ready, # and raise an exception if it is not after a few attempts. max_attempts = 5 for attempt in range(max_attempts): try: log.info("Checking SSH connectivity to the test node...") ssh_client.run_command("echo 'SSH connectivity check'") log.info("SSH is ready.") break except CommandError as error: # Check for "System is booting up. Unprivileged users are not permitted to log in yet. Please come back later. For technical details, see pam_nologin(8)." if not any(m in error.stderr for m in ["Unprivileged users are not permitted to log in yet", "Permission denied"]): raise if attempt >= max_attempts - 1: raise Exception(f"SSH connectivity check failed after {max_attempts} attempts, giving up [{error}]") log.info("SSH is not ready [%s], will retry after a short delay.", error) time.sleep(15) def _collect_logs_from_test_nodes(self) -> None: """ Collects the test logs from the test nodes and copies them to the local machine """ for node in self._test_nodes: node_name = node.name ssh_client = SshClient(ip_address=node.ip_address, username=self._user, identity_file=Path(self._identity_file)) try: # Collect the logs on the test machine into a compressed tarball self._lisa_log.info("Collecting logs on test node %s", node_name) log.info("Collecting logs on test node %s", node_name) stdout = ssh_client.run_command("collect-logs", use_sudo=True) log.info(stdout) # Copy the tarball to the local logs directory tgz_name = self._environment_name if len(self._test_nodes) > 1: # Append instance of scale set to the end of tarball name tgz_name += '_' + node_name.split('_')[-1] remote_path = "/tmp/waagent-logs.tgz" local_path = self._log_path / '{0}.tgz'.format(tgz_name) log.info("Copying %s:%s to %s", node_name, remote_path, local_path) ssh_client.copy_from_node(remote_path, local_path) except: # pylint: disable=bare-except log.exception("Failed to collect logs from the test machine") # NOTES: # # * environment_status=EnvironmentStatus.Deployed skips most of LISA's initialization of the test node, which is not needed # for agent tests. # # * We need to take the LISA Logger using a parameter named 'log'; this parameter hides tests_e2e.tests.lib.logging.log. # Be aware then, that within this method 'log' refers to the LISA log, and elsewhere it refers to tests_e2e.tests.lib.logging.log. 
# # W0621: Redefining name 'log' from outer scope (line 53) (redefined-outer-name) @TestCaseMetadata(description="", priority=0, requirement=simple_requirement(environment_status=EnvironmentStatus.Deployed)) def main(self, environment: Environment, variables: Dict[str, Any], working_path: str, log_path: str, log: Logger): # pylint: disable=redefined-outer-name """ Entry point from LISA """ self._initialize(environment, variables, working_path, log_path, log) self._execute() def _execute(self) -> None: unexpected_error = False test_suite_success = True # Set the thread name to the name of the environment. The thread name is added to each item in LISA's log. with set_thread_name(self._environment_name): log_path: Path = self._log_path / f"env-{self._environment_name}.log" with set_current_thread_log(log_path): start_time: datetime.datetime = datetime.datetime.now() failed_cases = [] try: # Log the environment's name and the variables received from the runbook (note that we need to expand the names of the test suites) log.info("LISA Environment (for correlation with the LISA log): %s", self._lisa_environment_name) log.info("Test suites: %s", [t.name for t in self._test_suites]) self._create_working_directory() if not self._skip_setup: self._setup_test_run() try: test_context = self._create_test_context() if not self._skip_setup: try: self._setup_test_nodes() except: test_suite_success = False raise check_log_start_time = datetime.datetime.min for suite in self._test_suites: log.info("Executing test suite %s", suite.name) self._lisa_log.info("Executing Test Suite %s", suite.name) case_success, check_log_start_time = self._execute_test_suite(suite, test_context, check_log_start_time) test_suite_success = case_success and test_suite_success if not case_success: failed_cases.append(suite.name) finally: if self._collect_logs == CollectLogs.Always or self._collect_logs == CollectLogs.Failed and not test_suite_success: self._collect_logs_from_test_nodes() except Exception as e: # pylint: disable=bare-except # Report the error and raise an exception to let LISA know that the test errored out. unexpected_error = True log.exception("UNEXPECTED ERROR.") self._report_test_result( self._environment_name, "Unexpected Error", TestStatus.FAILED, start_time, message="UNEXPECTED ERROR.", add_exception_stack_trace=True) raise Exception(f"[{self._environment_name}] Unexpected error in AgentTestSuite: {e}") finally: self._clean_up(test_suite_success and not unexpected_error) if unexpected_error: self._mark_log_as_failed() # Check if any test failures or unexpected errors occurred. If so, raise an Exception here so that # lisa marks the environment as failed. Otherwise, lisa would mark this environment as passed and # clean up regardless of the value of 'keep_environment'. This should be the last thing that # happens during suite execution. if not test_suite_success or unexpected_error: raise TestFailedException(self._environment_name, failed_cases) def _execute_test_suite(self, suite: TestSuiteInfo, test_context: AgentTestContext, check_log_start_time: datetime.datetime) -> Tuple[bool, datetime.datetime]: """ Executes the given test suite and returns a tuple of a bool indicating whether all the tests in the suite succeeded, and the timestamp that should be used for the next check of the agent log. 
""" suite_name = suite.name suite_full_name = f"{suite_name}-{self._environment_name}" suite_start_time: datetime.datetime = datetime.datetime.now() check_log_start_time_override = datetime.datetime.max # tests can override the timestamp for the agent log check with the get_ignore_errors_before_timestamp() method with set_thread_name(suite_full_name): # The thread name is added to the LISA log log_path: Path = self._log_path / f"{suite_full_name}.log" with set_current_thread_log(log_path): suite_success: bool = True try: log.info("") log.info("**************************************** %s ****************************************", suite_name) log.info("") summary: List[str] = [] ignore_error_rules: List[Dict[str, Any]] = [] for test in suite.tests: test_full_name = f"{suite_name}-{test.name}" test_start_time: datetime.datetime = datetime.datetime.now() log.info("******** Executing %s", test.name) self._lisa_log.info("Executing test %s", test_full_name) test_success: bool = True test_instance = test.test_class(test_context) try: test_instance.run() summary.append(f"[Passed] {test.name}") log.info("******** [Passed] %s", test.name) self._lisa_log.info("[Passed] %s", test_full_name) self._report_test_result( suite_full_name, test.name, TestStatus.PASSED, test_start_time) except TestSkipped as e: summary.append(f"[Skipped] {test.name}") log.info("******** [Skipped] %s: %s", test.name, e) self._lisa_log.info("******** [Skipped] %s", test_full_name) self._report_test_result( suite_full_name, test.name, TestStatus.SKIPPED, test_start_time, message=str(e)) except AssertionError as e: test_success = False summary.append(f"[Failed] {test.name}") log.error("******** [Failed] %s: %s", test.name, e) self._lisa_log.error("******** [Failed] %s", test_full_name) self._report_test_result( suite_full_name, test.name, TestStatus.FAILED, test_start_time, message=str(e)) except RemoteTestError as e: test_success = False summary.append(f"[Failed] {test.name}") message = f"UNEXPECTED ERROR IN [{e.command}] {e.stderr}\n{e.stdout}" log.error("******** [Failed] %s: %s", test.name, message) self._lisa_log.error("******** [Failed] %s", test_full_name) self._report_test_result( suite_full_name, test.name, TestStatus.FAILED, test_start_time, message=str(message)) except: # pylint: disable=bare-except test_success = False summary.append(f"[Error] {test.name}") log.exception("UNEXPECTED ERROR IN %s", test.name) self._lisa_log.exception("UNEXPECTED ERROR IN %s", test_full_name) self._report_test_result( suite_full_name, test.name, TestStatus.FAILED, test_start_time, message="Unexpected error.", add_exception_stack_trace=True) log.info("") suite_success = suite_success and test_success ignore_error_rules.extend(test_instance.get_ignore_error_rules()) # Check if the test is requesting to override the timestamp for the agent log check. # Note that if multiple tests in the suite provide an override, we'll use the earliest timestamp. test_check_log_start_time = test_instance.get_ignore_errors_before_timestamp() if test_check_log_start_time != datetime.datetime.min: check_log_start_time_override = min(check_log_start_time_override, test_check_log_start_time) if not test_success and test.blocks_suite: log.warning("%s failed and blocks the suite. 
Stopping suite execution.", test.name) break log.info("") log.info("******** [Test Results]") log.info("") for r in summary: log.info("\t%s", r) log.info("") except: # pylint: disable=bare-except suite_success = False self._report_test_result( suite_full_name, suite_name, TestStatus.FAILED, suite_start_time, message=f"Unhandled exception while executing test suite {suite_name}.", add_exception_stack_trace=True) finally: if not suite_success: self._mark_log_as_failed() next_check_log_start_time = datetime.datetime.utcnow() suite_success = suite_success and self._check_agent_log_on_test_nodes(ignore_error_rules, check_log_start_time_override if check_log_start_time_override != datetime.datetime.max else check_log_start_time) return suite_success, next_check_log_start_time def _check_agent_log_on_test_nodes(self, ignore_error_rules: List[Dict[str, Any]], check_log_start_time: datetime.datetime) -> bool: """ Checks the agent log on the test nodes for errors; returns true on success (no errors in the logs) """ success: bool = True for node in self._test_nodes: node_name = node.name ssh_client = SshClient(ip_address=node.ip_address, username=self._user, identity_file=Path(self._identity_file)) test_result_name = self._environment_name if len(self._test_nodes) > 1: # If there are multiple test nodes, as in a scale set, append the name of the node to the name of the result test_result_name += '_' + node_name.split('_')[-1] start_time: datetime.datetime = datetime.datetime.now() try: message = f"Checking agent log on test node {node_name}, starting at {check_log_start_time.strftime('%Y-%m-%dT%H:%M:%S.%fZ')}" self._lisa_log.info(message) log.info(message) output = ssh_client.run_command("check-agent-log.py -j") errors = json.loads(output, object_hook=AgentLogRecord.from_dictionary) # Filter out errors that occurred before the starting timestamp or that match an ignore rule errors = [e for e in errors if e.timestamp >= check_log_start_time and (len(ignore_error_rules) == 0 or not AgentLog.matches_ignore_rule(e, ignore_error_rules))] if len(errors) == 0: # If no errors, we are done; don't create a log or test result. log.info("There are no errors in the agent log") else: message = f"Detected {len(errors)} error(s) in the agent log on {node_name}" self._lisa_log.error(message) log.error("%s:\n\n%s\n", message, '\n'.join(['\t\t' + e.text.replace('\n', '\n\t\t') for e in errors])) self._mark_log_as_failed() success = False self._report_test_result( test_result_name, "CheckAgentLog", TestStatus.FAILED, start_time, message=message + ' - First few errors:\n' + '\n'.join([e.text for e in errors[0:3]])) except: # pylint: disable=bare-except log.exception("Error checking agent log on %s", node_name) success = False self._report_test_result( test_result_name, "CheckAgentLog", TestStatus.FAILED, start_time, "Error checking agent log", add_exception_stack_trace=True) return success def _create_test_context(self,) -> AgentTestContext: """ Creates the context for the test run. 
""" if self._vm_name is not None: self._lisa_log.info("Creating test context for virtual machine") vm: VirtualMachineClient = VirtualMachineClient( cloud=self._cloud, location=self._location, subscription=self._subscription_id, resource_group=self._resource_group_name, name=self._vm_name) return AgentVmTestContext( working_directory=self._working_directory, vm=vm, ip_address=self._vm_ip_address, username=self._user, identity_file=self._identity_file) else: log.info("Creating test context for scale set") if self._create_scale_set: self._create_test_scale_set() else: log.info("Using existing scale set %s", self._vmss_name) scale_set = VirtualMachineScaleSetClient( cloud=self._cloud, location=self._location, subscription=self._subscription_id, resource_group=self._resource_group_name, name=self._vmss_name) # If we created the scale set, fill up the test nodes if self._create_scale_set: self._test_nodes = [_TestNode(name=i.instance_name, ip_address=i.ip_address) for i in scale_set.get_instances_ip_address()] return AgentVmssTestContext( working_directory=self._working_directory, vmss=scale_set, username=self._user, identity_file=self._identity_file) @staticmethod def _mark_log_as_failed(): """ Adds a message to indicate the log contains errors. """ log.info("MARKER-LOG-WITH-ERRORS") @staticmethod def _report_test_result( suite_name: str, test_name: str, status: TestStatus, start_time: datetime.datetime, message: str = "", add_exception_stack_trace: bool = False ) -> None: """ Reports a test result to the junit notifier """ # The junit notifier requires an initial RUNNING message in order to register the test in its internal cache. msg: TestResultMessage = TestResultMessage() msg.type = "AgentTestResultMessage" msg.id_ = str(uuid.uuid4()) msg.status = TestStatus.RUNNING msg.suite_full_name = suite_name msg.suite_name = msg.suite_full_name msg.full_name = test_name msg.name = msg.full_name msg.elapsed = 0 notifier.notify(msg) # Now send the actual result. The notifier pipeline makes a deep copy of the message so it is OK to re-use the # same object and just update a few fields. If using a different object, be sure that the "id_" is the same. 
msg.status = status msg.message = message if add_exception_stack_trace: msg.stacktrace = traceback.format_exc() msg.elapsed = (datetime.datetime.now() - start_time).total_seconds() notifier.notify(msg) def _create_test_scale_set(self) -> None: """ Creates a scale set for the test run """ self._lisa_log.info("Creating resource group %s", self._resource_group_name) resource_group = ResourceGroupClient(cloud=self._cloud, location=self._location, subscription=self._subscription_id, name=self._resource_group_name) resource_group.create() self._delete_scale_set = True self._lisa_log.info("Creating scale set %s", self._vmss_name) log.info("Creating scale set %s", self._vmss_name) template, parameters = self._get_scale_set_deployment_template(self._vmss_name) resource_group.deploy_template(template, parameters) def _get_scale_set_deployment_template(self, scale_set_name: str) -> Tuple[Dict[str, Any], Dict[str, Any]]: """ Returns the deployment template for scale sets and its parameters """ def read_file(path: str) -> str: with open(path, "r") as file_: return file_.read().strip() publisher, offer, sku, version = self._image.replace(":", " ").split(' ') template: Dict[str, Any] = json.loads(read_file(str(self._test_source_directory/"orchestrator"/"templates/vmss.json"))) # Scale sets for some images need to be deployed with 'plan' property plan_required_images = ["almalinux", "kinvolk", "erockyenterprisesoftwarefoundationinc1653071250513"] if publisher in plan_required_images: resources: List[Dict[str, Any]] = template.get('resources') for resource in resources: if resource.get('type') == "Microsoft.Compute/virtualMachineScaleSets": resource["plan"] = { "name": "[parameters('sku')]", "product": "[parameters('offer')]", "publisher": "[parameters('publisher')]" } if self._allow_ssh != '': NetworkSecurityRule(template, is_lisa_template=False).add_allow_ssh_rule(self._allow_ssh) return template, { "username": {"value": self._user}, "sshPublicKey": {"value": read_file(f"{self._identity_file}.pub")}, "vmName": {"value": scale_set_name}, "publisher": {"value": publisher}, "offer": {"value": offer}, "sku": {"value": sku}, "version": {"value": version} } Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/lib/agent_test_suite_combinator.py000066400000000000000000000657521462617747000310660ustar00rootroot00000000000000# Copyright (c) Microsoft Corporation. # Licensed under the MIT license. 
import datetime import logging import random import re import traceback import urllib.parse import uuid from dataclasses import dataclass, field from typing import Any, Dict, List, Optional, Type # E0401: Unable to import 'dataclasses_json' (import-error) from dataclasses_json import dataclass_json # pylint: disable=E0401 # Disable those warnings, since 'lisa' is an external, non-standard, dependency # E0401: Unable to import 'lisa' (import-error) # etc from lisa import notifier, schema # pylint: disable=E0401 from lisa.combinator import Combinator # pylint: disable=E0401 from lisa.messages import TestStatus, TestResultMessage # pylint: disable=E0401 from lisa.util import field_metadata # pylint: disable=E0401 from tests_e2e.orchestrator.lib.agent_test_loader import AgentTestLoader, VmImageInfo, TestSuiteInfo from tests_e2e.tests.lib.logging import set_thread_name from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient from tests_e2e.tests.lib.virtual_machine_scale_set_client import VirtualMachineScaleSetClient @dataclass_json() @dataclass class AgentTestSuitesCombinatorSchema(schema.Combinator): """ Defines the parameters passed to the combinator from the runbook. The runbook is a static document and always passes all these parameters to the combinator, so they are all marked as required. Optional parameters can pass an empty value to indicate that they are not specified. """ allow_ssh: str = field(default_factory=str, metadata=field_metadata(required=True)) cloud: str = field(default_factory=str, metadata=field_metadata(required=True)) identity_file: str = field(default_factory=str, metadata=field_metadata(required=True)) image: str = field(default_factory=str, metadata=field_metadata(required=True)) keep_environment: str = field(default_factory=str, metadata=field_metadata(required=True)) location: str = field(default_factory=str, metadata=field_metadata(required=True)) resource_group_name: str = field(default_factory=str, metadata=field_metadata(required=True)) subscription_id: str = field(default_factory=str, metadata=field_metadata(required=True)) test_suites: str = field(default_factory=str, metadata=field_metadata(required=True)) user: str = field(default_factory=str, metadata=field_metadata(required=True)) vm_name: str = field(default_factory=str, metadata=field_metadata(required=True)) vm_size: str = field(default_factory=str, metadata=field_metadata(required=True)) vmss_name: str = field(default_factory=str, metadata=field_metadata(required=True)) class AgentTestSuitesCombinator(Combinator): """ The "agent_test_suites" combinator returns a list of variables that specify the test environments (i.e. test VMs) that the test suites must be executed on. These variables are prefixed with "c_" to distinguish them from the command line arguments of the runbook. See the runbook definition for details on each of those variables. The combinator can generate environments for VMs created and managed by LISA, Scale Sets created and managed by the AgentTestSuite, or existing VMs or Scale Sets. 
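    For reference, each generated item is a dictionary of runbook variables; a typical entry
    (all values below are illustrative only) looks like:

        {
            "c_env_name": "ubuntu-2004-agent_bvt",   # name of the test environment
            "c_platform": [...],                     # LISA platform definition ("azure" or "ready")
            "c_environment": {...},                  # LISA environment definition, when applicable
            "c_test_suites": [...],                  # TestSuiteInfo for the suites assigned to this environment
            "c_location": "westus2",
            "c_image": "Canonical 0001-com-ubuntu-server-focal 20_04-lts latest",
            "c_is_vhd": False
        }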
""" def __init__(self, runbook: AgentTestSuitesCombinatorSchema) -> None: super().__init__(runbook) if self.runbook.cloud not in self._DEFAULT_LOCATIONS: raise Exception(f"Invalid cloud: {self.runbook.cloud}") if self.runbook.vm_name != '' and self.runbook.vmss_name != '': raise Exception("Invalid runbook parameters: 'vm_name' and 'vmss_name' are mutually exclusive.") if self.runbook.vm_name != '': if self.runbook.image != '' or self.runbook.vm_size != '': raise Exception("Invalid runbook parameters: The 'vm_name' parameter indicates an existing VM, 'image' and 'vm_size' should not be specified.") if self.runbook.resource_group_name == '': raise Exception("Invalid runbook parameters: The 'vm_name' parameter indicates an existing VM, a 'resource_group_name' must be specified.") if self.runbook.vmss_name != '': if self.runbook.image != '' or self.runbook.vm_size != '': raise Exception("Invalid runbook parameters: The 'vmss_name' parameter indicates an existing VMSS, 'image' and 'vm_size' should not be specified.") if self.runbook.resource_group_name == '': raise Exception("Invalid runbook parameters: The 'vmss_name' parameter indicates an existing VMSS, a 'resource_group_name' must be specified.") self._log: logging.Logger = logging.getLogger("lisa") with set_thread_name("AgentTestSuitesCombinator"): if self.runbook.vm_name != '': self._environments = [self.create_existing_vm_environment()] elif self.runbook.vmss_name != '': self._environments = [self.create_existing_vmss_environment()] else: self._environments = self.create_environment_list() self._index = 0 @classmethod def type_name(cls) -> str: return "agent_test_suites" @classmethod def type_schema(cls) -> Type[schema.TypedSchema]: return AgentTestSuitesCombinatorSchema def _next(self) -> Optional[Dict[str, Any]]: result: Optional[Dict[str, Any]] = None if self._index < len(self._environments): result = self._environments[self._index] self._index += 1 return result _DEFAULT_LOCATIONS = { "AzureCloud": "westus2", "AzureChinaCloud": "chinanorth2", "AzureUSGovernment": "usgovarizona", } _MARKETPLACE_IMAGE_INFORMATION_LOCATIONS = { "AzureCloud": "", # empty indicates the default location used by LISA "AzureChinaCloud": "chinanorth2", "AzureUSGovernment": "usgovarizona", } _SHARED_RESOURCE_GROUP_LOCATIONS = { "AzureCloud": "", # empty indicates the default location used by LISA "AzureChinaCloud": "chinanorth2", "AzureUSGovernment": "usgovarizona", } def create_environment_list(self) -> List[Dict[str, Any]]: """ Examines the test_suites specified in the runbook and returns a list of the environments (i.e. test VMs or scale sets) that need to be created in order to execute these suites. Note that if the runbook provides an 'image', 'location', or 'vm_size', those values override any values provided in the configuration of the test suites. 
""" environments: List[Dict[str, Any]] = [] shared_environments: Dict[str, Dict[str, Any]] = {} # environments shared by multiple test suites loader = AgentTestLoader(self.runbook.test_suites, self.runbook.cloud) runbook_images = self._get_runbook_images(loader) skip_test_suites: List[str] = [] skip_test_suites_images: List[str] = [] for test_suite_info in loader.test_suites: if self.runbook.cloud in test_suite_info.skip_on_clouds: skip_test_suites.append(test_suite_info.name) continue if len(runbook_images) > 0: images_info: List[VmImageInfo] = runbook_images else: images_info: List[VmImageInfo] = self._get_test_suite_images(test_suite_info, loader) skip_images_info: List[VmImageInfo] = self._get_test_suite_skip_images(test_suite_info, loader) if len(skip_images_info) > 0: skip_test_suite_image = f"{test_suite_info.name}: {','.join([i.urn for i in skip_images_info])}" skip_test_suites_images.append(skip_test_suite_image) for image in images_info: if image in skip_images_info: continue # 'image.urn' can actually be the URL to a VHD or an image from a gallery if the runbook provided it in the 'image' parameter if self._is_vhd(image.urn): marketplace_image = "" vhd = image.urn image_name = urllib.parse.urlparse(vhd).path.split('/')[-1] # take the last fragment of the URL's path (e.g. "RHEL_8_Standard-8.3.202006170423.vhd") shared_gallery = "" elif self._is_image_from_gallery(image.urn): marketplace_image = "" vhd = "" image_name = self._get_name_of_image_from_gallery(image.urn) shared_gallery = image.urn else: marketplace_image = image.urn vhd = "" image_name = self._get_image_name(image.urn) shared_gallery = "" if test_suite_info.executes_on_scale_set and (vhd != "" or shared_gallery != ""): raise Exception("VHDS and images from galleries are currently not supported on scale sets.") location: str = self._get_location(test_suite_info, image) if location is None: continue vm_size = self._get_vm_size(image) if test_suite_info.owns_vm or not test_suite_info.install_test_agent: # # Create an environment for exclusive use by this suite # # TODO: Allow test suites that set 'install_test_agent' to False to share environments (we need to ensure that # all the suites in the shared environment have the same value for 'install_test_agent') # if test_suite_info.executes_on_scale_set: env = self.create_vmss_environment( env_name=f"{image_name}-vmss-{test_suite_info.name}", marketplace_image=marketplace_image, location=location, vm_size=vm_size, test_suite_info=test_suite_info) else: env = self.create_vm_environment( env_name=f"{image_name}-{test_suite_info.name}", marketplace_image=marketplace_image, vhd=vhd, shared_gallery=shared_gallery, location=location, vm_size=vm_size, test_suite_info=test_suite_info) environments.append(env) else: # add this suite to the shared environments env_name: str = f"{image_name}-vmss-{location}" if test_suite_info.executes_on_scale_set else f"{image_name}-{location}" env = shared_environments.get(env_name) if env is not None: env["c_test_suites"].append(test_suite_info) else: if test_suite_info.executes_on_scale_set: env = self.create_vmss_environment( env_name=env_name, marketplace_image=marketplace_image, location=location, vm_size=vm_size, test_suite_info=test_suite_info) else: env = self.create_vm_environment( env_name=env_name, marketplace_image=marketplace_image, vhd=vhd, shared_gallery=shared_gallery, location=location, vm_size=vm_size, test_suite_info=test_suite_info) shared_environments[env_name] = env if test_suite_info.template != '': vm_tags = env["vm_tags"] 
if "templates" not in vm_tags: vm_tags["templates"] = test_suite_info.template else: vm_tags["templates"] += "," + test_suite_info.template environments.extend(shared_environments.values()) if len(environments) == 0: raise Exception("No VM images were found to execute the test suites.") # Log a summary of each environment and the suites that will be executed on it format_suites = lambda suites: ", ".join([s.name for s in suites]) summary = [f"{e['c_env_name']}: [{format_suites(e['c_test_suites'])}]" for e in environments] summary.sort() self._log.info("Executing tests on %d environments\n\n%s\n", len(environments), '\n'.join([f"\t{s}" for s in summary])) if len(skip_test_suites) > 0: self._log.info("Skipping test suites %s", skip_test_suites) if len(skip_test_suites_images) > 0: self._log.info("Skipping test suits run on images \n %s", '\n'.join([f"\t{skip}" for skip in skip_test_suites_images])) return environments def create_existing_vm_environment(self) -> Dict[str, Any]: loader = AgentTestLoader(self.runbook.test_suites, self.runbook.cloud) vm: VirtualMachineClient = VirtualMachineClient( cloud=self.runbook.cloud, location=self.runbook.location, subscription=self.runbook.subscription_id, resource_group=self.runbook.resource_group_name, name=self.runbook.vm_name) ip_address = vm.get_ip_address() return { "c_env_name": self.runbook.vm_name, "c_platform": [ { "type": "ready", "capture_vm_information": False } ], "c_environment": { "environments": [ { "nodes": [ { "type": "remote", "name": self.runbook.vm_name, "public_address": ip_address, "public_port": 22, "username": self.runbook.user, "private_key_file": self.runbook.identity_file } ], } ] }, "c_location": self.runbook.location, "c_test_suites": loader.test_suites, } def create_existing_vmss_environment(self) -> Dict[str, Any]: loader = AgentTestLoader(self.runbook.test_suites, self.runbook.cloud) vmss = VirtualMachineScaleSetClient( cloud=self.runbook.cloud, location=self.runbook.location, subscription=self.runbook.subscription_id, resource_group=self.runbook.resource_group_name, name=self.runbook.vmss_name) ip_addresses = vmss.get_instances_ip_address() return { "c_env_name": self.runbook.vmss_name, "c_environment": { "environments": [ { "nodes": [ { "type": "remote", "name": i.instance_name, "public_address": i.ip_address, "public_port": 22, "username": self.runbook.user, "private_key_file": self.runbook.identity_file } for i in ip_addresses ], } ] }, "c_platform": [ { "type": "ready", "capture_vm_information": False } ], "c_location": self.runbook.location, "c_test_suites": loader.test_suites, } def create_vm_environment(self, env_name: str, marketplace_image: str, vhd: str, shared_gallery: str, location: str, vm_size: str, test_suite_info: TestSuiteInfo) -> Dict[str, Any]: # # Custom ARM templates (to create the test VMs) require special handling. These templates are processed by the azure_update_arm_template # hook, which does not have access to the runbook variables. Instead, we use a dummy VM tag named "templates" and pass the # names of the custom templates in its value. The hook can then retrieve the value from the Platform object (see wiki for more details). # We also use a dummy item, "vm_tags" in the environment dictionary in order to concatenate templates from multiple test suites when they # share the same test environment. Similarly, we use a dummy VM tag named "allow_ssh" to pass the value of the "allow_ssh" runbook parameter. 
        #
        vm_tags = {}
        if self.runbook.allow_ssh != '':
            vm_tags["allow_ssh"] = self.runbook.allow_ssh
        environment = {
            "c_platform": [
                {
                    "type": "azure",
                    "admin_username": self.runbook.user,
                    "admin_private_key_file": self.runbook.identity_file,
                    "keep_environment": self.runbook.keep_environment,
                    "capture_vm_information": False,
                    "azure": {
                        "deploy": True,
                        "cloud": self.runbook.cloud,
                        "marketplace_image_information_location": self._MARKETPLACE_IMAGE_INFORMATION_LOCATIONS[self.runbook.cloud],
                        "shared_resource_group_location": self._SHARED_RESOURCE_GROUP_LOCATIONS[self.runbook.cloud],
                        "subscription_id": self.runbook.subscription_id,
                        "wait_delete": False,
                        "vm_tags": vm_tags
                    },
                    "requirement": {
                        "core_count": {
                            "min": 2
                        },
                        "azure": {
                            "marketplace": marketplace_image,
                            "vhd": vhd,
                            "shared_gallery": shared_gallery,
                            "location": location,
                            "vm_size": vm_size
                        }
                    }
                }
            ],
            "c_environment": None,
            "c_env_name": env_name,
            "c_test_suites": [test_suite_info],
            "c_location": location,
            "c_image": marketplace_image,
            "c_is_vhd": vhd != "",
            "vm_tags": vm_tags
        }
        if shared_gallery != '':
            # Currently all the images in our shared gallery require secure boot
            environment['c_platform'][0]['requirement']["features"] = {
                "items": [
                    {
                        "type": "Security_Profile",
                        "security_profile": "secureboot"
                    }
                ]
            }
        return environment

    def create_vmss_environment(self, env_name: str, marketplace_image: str, location: str, vm_size: str, test_suite_info: TestSuiteInfo) -> Dict[str, Any]:
        return {
            "c_platform": [
                {
                    "type": "ready",
                    "capture_vm_information": False
                }
            ],
            "c_environment": {
                "environments": [
                    {
                        "nodes": [
                            {"type": "local"}
                        ],
                    }
                ]
            },
            "c_env_name": env_name,
            "c_test_suites": [test_suite_info],
            "c_location": location,
            "c_image": marketplace_image,
            "c_is_vhd": False,
            "c_vm_size": vm_size,
            "vm_tags": {}
        }

    def _get_runbook_images(self, loader: AgentTestLoader) -> List[VmImageInfo]:
        """
        Returns the images specified in the runbook, or an empty list if none are specified.
        """
        if self.runbook.image == "":
            return []

        images = loader.images.get(self.runbook.image)
        if images is not None:
            return images

        # If it is not an image or image set, it must be a URN, a VHD, or an image from a gallery
        if not self._is_urn(self.runbook.image) and not self._is_vhd(self.runbook.image) and not self._is_image_from_gallery(self.runbook.image):
            raise Exception(f"The 'image' parameter must be an image, image set name, urn, vhd, or an image from a shared gallery: {self.runbook.image}")

        i = VmImageInfo()
        i.urn = self.runbook.image  # Note that this could be a URN or the URI for a VHD, or an image from a shared gallery
        i.locations = []
        i.vm_sizes = []

        return [i]

    @staticmethod
    def _get_test_suite_images(suite: TestSuiteInfo, loader: AgentTestLoader) -> List[VmImageInfo]:
        """
        Returns the images used by a test suite.

        A test suite may reference multiple image sets and sets can intersect; this method eliminates any duplicates.
        """
        unique: Dict[str, VmImageInfo] = {}
        for image in suite.images:
            match = AgentTestLoader.RANDOM_IMAGES_RE.match(image)
            if match is None:
                image_list = loader.images[image]
            else:
                count = match.group('count')
                if count is None:
                    count = 1
                matching_images = loader.images[match.group('image_set')].copy()
                random.shuffle(matching_images)
                image_list = matching_images[0:int(count)]
            for i in image_list:
                unique[i.urn] = i
        return [v for k, v in unique.items()]

    @staticmethod
    def _get_test_suite_skip_images(suite: TestSuiteInfo, loader: AgentTestLoader) -> List[VmImageInfo]:
        """
        Returns images that need to be skipped by the suite.
        A test suite may reference multiple image sets and sets can intersect; this method eliminates any duplicates.
        """
        skip_unique: Dict[str, VmImageInfo] = {}
        for image in suite.skip_on_images:
            image_list = loader.images[image]
            for i in image_list:
                skip_unique[i.urn] = i
        return [v for k, v in skip_unique.items()]

    def _get_location(self, suite_info: TestSuiteInfo, image: VmImageInfo) -> str:
        """
        Returns the location on which the test VM for the given test suite and image should be created.

        If the image is not available on any location, returns None, to indicate that the test suite should be skipped.
        """
        # If the runbook specified a location, use it.
        if self.runbook.location != "":
            return self.runbook.location

        # Then try the suite location, if any.
        for location in suite_info.locations:
            if location.startswith(self.runbook.cloud + ":"):
                return location.split(":")[1]

        # If the image has a location restriction, use any location where it is available.
        # However, if it is not available on any location, skip the image (return None)
        if image.locations:
            image_locations = image.locations.get(self.runbook.cloud)
            if image_locations is not None:
                if len(image_locations) == 0:
                    return None
                return image_locations[0]

        # Else use the default.
        return AgentTestSuitesCombinator._DEFAULT_LOCATIONS[self.runbook.cloud]

    def _get_vm_size(self, image: VmImageInfo) -> str:
        """
        Returns the VM size that should be used to create the test VM for the given image.

        If the size is set to an empty string, LISA will choose an appropriate size
        """
        # If the runbook specified a VM size, use it.
        if self.runbook.vm_size != '':
            return self.runbook.vm_size

        # If the image specifies a list of VM sizes, use any of them.
        if len(image.vm_sizes) > 0:
            return image.vm_sizes[0]

        # Otherwise, set the size to empty and LISA will select an appropriate size.
        return ""

    @staticmethod
    def _get_image_name(urn: str) -> str:
        """
        Creates an image name ("offer-sku") given its URN
        """
        match = AgentTestSuitesCombinator._URN.match(urn)
        if match is None:
            raise Exception(f"Invalid URN: {urn}")
        return f"{match.group('offer')}-{match.group('sku')}"

    _URN = re.compile(r"(?P<publisher>[^\s:]+)[\s:](?P<offer>[^\s:]+)[\s:](?P<sku>[^\s:]+)[\s:](?P<version>[^\s:]+)")

    @staticmethod
    def _is_urn(urn: str) -> bool:
        # URNs can be given as '<publisher> <offer> <sku> <version>' or '<publisher>:<offer>:<sku>:<version>'
        return AgentTestSuitesCombinator._URN.match(urn) is not None

    @staticmethod
    def _is_vhd(vhd: str) -> bool:
        # VHDs are given as URIs to storage; do some basic validation, not intending to be exhaustive.
        parsed = urllib.parse.urlparse(vhd)
        return parsed.scheme == 'https' and parsed.netloc != "" and parsed.path != ""

    # Images from a gallery are given as "<gallery>/<image>/<version>".
    _IMAGE_FROM_GALLERY = re.compile(r"(?P<gallery>[^/]+)/(?P<image>[^/]+)/(?P<version>[^/]+)")

    @staticmethod
    def _is_image_from_gallery(image: str) -> bool:
        return AgentTestSuitesCombinator._IMAGE_FROM_GALLERY.match(image) is not None

    @staticmethod
    def _get_name_of_image_from_gallery(image: str) -> str:
        match = AgentTestSuitesCombinator._IMAGE_FROM_GALLERY.match(image)
        if match is None:
            raise Exception(f"Invalid image from gallery: {image}")
        return match.group('image')

    @staticmethod
    def _report_test_result(
            suite_name: str,
            test_name: str,
            status: TestStatus,
            start_time: datetime.datetime,
            message: str = "",
            add_exception_stack_trace: bool = False
    ) -> None:
        """
        Reports a test result to the junit notifier
        """
        # The junit notifier requires an initial RUNNING message in order to register the test in its internal cache.
        msg: TestResultMessage = TestResultMessage()
        msg.type = "AgentTestResultMessage"
        msg.id_ = str(uuid.uuid4())
        msg.status = TestStatus.RUNNING
        msg.suite_full_name = suite_name
        msg.suite_name = msg.suite_full_name
        msg.full_name = test_name
        msg.name = msg.full_name
        msg.elapsed = 0
        notifier.notify(msg)

        # Now send the actual result. The notifier pipeline makes a deep copy of the message so it is OK to re-use the
        # same object and just update a few fields. If using a different object, be sure that the "id_" is the same.
        msg.status = status
        msg.message = message
        if add_exception_stack_trace:
            msg.stacktrace = traceback.format_exc()
        msg.elapsed = (datetime.datetime.now() - start_time).total_seconds()
        notifier.notify(msg)
Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/lib/update_arm_template_hook.py000066400000000000000000000072611462617747000303260ustar00rootroot00000000000000# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import importlib.util
import logging

from pathlib import Path
from typing import Any

# Disable those warnings, since 'lisa' is an external, non-standard, dependency
#     E0401: Unable to import 'lisa.*' (import-error)
# pylint: disable=E0401
from lisa.environment import Environment
from lisa.util import hookimpl, plugin_manager
from lisa.sut_orchestrator.azure.platform_ import AzurePlatformSchema
# pylint: enable=E0401

import tests_e2e
from tests_e2e.tests.lib.network_security_rule import NetworkSecurityRule
from tests_e2e.tests.lib.update_arm_template import UpdateArmTemplate


class UpdateArmTemplateHook:
    """
    This hook allows customizing the ARM template used to create the test VMs (see wiki for details).
    """
    @hookimpl
    def azure_update_arm_template(self, template: Any, environment: Environment) -> None:
        log: logging.Logger = logging.getLogger("lisa")

        azure_runbook: AzurePlatformSchema = environment.platform.runbook.get_extended_runbook(AzurePlatformSchema)
        vm_tags = azure_runbook.vm_tags

        #
        # Add the allow SSH security rule if requested by the runbook
        #
        allow_ssh: str = vm_tags.get("allow_ssh")
        if allow_ssh is not None:
            log.info("******** Waagent: Adding network security rule to allow SSH connections from %s", allow_ssh)
            NetworkSecurityRule(template, is_lisa_template=True).add_allow_ssh_rule(allow_ssh)

        #
        # Apply any template customizations provided by the tests.
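        #
        # A minimal sketch of what a test-provided customization looks like (the class name and the tag it adds
        # are hypothetical; the base class and its 'update' method come from tests_e2e.tests.lib.update_arm_template):
        #
        #     class AddCustomTag(UpdateArmTemplate):
        #         def update(self, template: Any, is_lisa_template: bool) -> None:
        #             # add a tag to every resource in the ARM template
        #             for resource in template["resources"]:
        #                 resource.setdefault("tags", {})["e2e-test"] = "true"
        #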
        #
        # The "templates" tag is a comma-separated list of the template customizations provided by the tests
        test_templates = vm_tags.get("templates")
        if test_templates is not None:
            log.info("******** Waagent: Applying custom templates '%s' to environment '%s'", test_templates, environment.name)

            for t in test_templates.split(","):
                update_arm_template = self._get_update_arm_template(t)
                update_arm_template().update(template, is_lisa_template=True)

    _SOURCE_CODE_ROOT: Path = Path(tests_e2e.__path__[0])

    @staticmethod
    def _get_update_arm_template(test_template: str) -> UpdateArmTemplate:
        """
        Returns the UpdateArmTemplate class that implements the template customization for the test.
        """
        source_file: Path = UpdateArmTemplateHook._SOURCE_CODE_ROOT/"tests"/test_template

        spec = importlib.util.spec_from_file_location(f"tests_e2e.tests.templates.{source_file.name}", str(source_file))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)

        # find all the classes in the module that are subclasses of UpdateArmTemplate but are not UpdateArmTemplate itself.
        matches = [v for v in module.__dict__.values() if isinstance(v, type) and issubclass(v, UpdateArmTemplate) and v != UpdateArmTemplate]
        if len(matches) != 1:
            raise Exception(f"Error in {source_file}: template files must contain exactly one class derived from UpdateArmTemplate")
        return matches[0]


plugin_manager.register(UpdateArmTemplateHook())
Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/runbook.yml000066400000000000000000000127401462617747000243520ustar00rootroot00000000000000name: $(name)

testcase:
  - criteria:
      area: waagent

extension:
  - "./lib"

variable:
  #
  # The test environments are generated dynamically by the AgentTestSuitesCombinator using the 'platform' and 'environment' variables.
  # Most of the variables below are parameters for the combinator and/or the AgentTestSuite (marked as 'is_case_visible'), but a few of
  # them, such as the runbook name and the SSH proxy variables, are handled by LISA.
  #
  # Many of these variables are optional, depending on the scenario. An empty value indicates that the variable has not been specified.
  #

  #
  # The name of the runbook; it is added as a prefix ("lisa-") to ARM resources created by the test run.
  #
  # Set the name to your email alias when doing developer runs.
  #
  - name: name
    value: "WALinuxAgent"
    is_case_visible: true

  #
  # Test suites to execute
  #
  - name: test_suites
    value: "agent_bvt, no_outbound_connections, extensions_disabled, agent_not_provisioned, fips, agent_ext_workflow, agent_status, multi_config_ext, agent_cgroups, ext_cgroups, agent_firewall, ext_telemetry_pipeline, ext_sequencing, agent_persist_firewall, publish_hostname, agent_update, recover_network_interface"

  #
  # Parameters used to create test VMs
  #
  - name: subscription_id
    value: ""
    is_case_visible: true
  - name: cloud
    value: "AzureCloud"
    is_case_visible: true
  - name: location
    value: ""
  - name: image
    value: ""
  - name: vm_size
    value: ""

  #
  # Whether to skip deletion of the test VMs after the test run completes.
  #
  # Possible values: always, no, failed
  #
  - name: keep_environment
    value: "no"
    is_case_visible: true

  #
  # Username and SSH public key for the admin user on the test VMs
  #
  - name: user
    value: "waagent"
    is_case_visible: true
  - name: identity_file
    value: ""
    is_case_visible: true

  #
  # Set the resource group and vm, or the group and the vmss, to execute the test run on an existing VM or VMSS.
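  #
  # For example, a developer run against an existing VM might look like this (an illustrative LISA
  # invocation; the paths and names are placeholders):
  #
  #   lisa -r runbook.yml -v subscription_id:<subscription> -v identity_file:~/.ssh/id_rsa \
  #        -v resource_group_name:my-rg -v vm_name:my-test-vm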
  #
  - name: resource_group_name
    value: ""
    is_case_visible: true
  - name: vm_name
    value: ""
    is_case_visible: true
  - name: vmss_name
    value: ""
    is_case_visible: true

  #
  # Directory for test logs
  #
  - name: log_path
    value: ""
    is_case_visible: true

  #
  # Whether to collect logs from the test VM
  #
  # Possible values: always, no, failed
  #
  - name: collect_logs
    value: "failed"
    is_case_visible: true

  #
  # Whether to skip setup of the test VMs. This is useful in developer runs when using existing VMs to save initialization time.
  #
  - name: skip_setup
    value: false
    is_case_visible: true

  #
  # Takes an IP address as value; if not empty, it adds a Network Security Rule allowing SSH access from the specified IP address to any test VMs created by the runbook execution.
  #
  - name: allow_ssh
    value: ""
    is_case_visible: true

  #
  # These variables are handled by LISA to use an SSH proxy when executing the runbook
  #
  - name: proxy
    value: False
  - name: proxy_host
    value: ""
  - name: proxy_user
    value: "foo"
  - name: proxy_identity_file
    value: ""
    is_secret: true

  #
  # The variables below are generated by the AgentTestSuitesCombinator combinator. They are
  # prefixed with "c_" to distinguish them from the rest of the variables, whose value can be set from
  # the command line.
  #

  #
  # The combinator generates the test environments using these two variables, which are passed to LISA
  #
  - name: c_environment
    value: {}
  - name: c_platform
    value: []

  #
  # Name of the test environment, used mainly for logging purposes
  #
  - name: c_env_name
    value: ""
    is_case_visible: true

  #
  # Test suites assigned for execution in the current test environment.
  #
  # The combinator splits the test suites specified in the 'test_suites' variable into subsets and assigns each subset
  # to a test environment. The AgentTestSuite uses 'c_test_suites' to execute the suites assigned to the current environment.
  #
  - name: c_test_suites
    value: []
    is_case_visible: true

  #
  # These parameters are used by the AgentTestSuite to create the test scale sets.
  #
  # Note that there are 3 other variables named 'image', 'vm_size' and 'location', which can be passed
  # from the command line. The combinator generates the values for these parameters using test metadata,
  # but they can be overridden with these command line variables. The final values are passed to the
  # AgentTestSuite in the corresponding 'c_*' variables.
# - name: c_image value: "" is_case_visible: true - name: c_vm_size value: "" is_case_visible: true - name: c_location value: "" is_case_visible: true # # True if the image is a VHD (instead of a URN) # - name: c_is_vhd value: false is_case_visible: true environment: $(c_environment) platform: $(c_platform) combinator: type: agent_test_suites allow_ssh: $(allow_ssh) cloud: $(cloud) identity_file: $(identity_file) image: $(image) keep_environment: $(keep_environment) location: $(location) resource_group_name: $(resource_group_name) subscription_id: $(subscription_id) test_suites: $(test_suites) user: $(user) vm_name: $(vm_name) vm_size: $(vm_size) vmss_name: $(vmss_name) concurrency: 32 notifier: - type: agent.junit dev: enabled: $(proxy) mock_tcp_ping: $(proxy) jump_boxes: - private_key_file: $(proxy_identity_file) address: $(proxy_host) username: $(proxy_user) password: "dummy" Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/sample_runbooks/000077500000000000000000000000001462617747000253475ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/sample_runbooks/local_machine/000077500000000000000000000000001462617747000301255ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/sample_runbooks/local_machine/hello_world.py000066400000000000000000000020131462617747000330050ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # E0401: Unable to import 'lisa' (import-error) from lisa import ( # pylint: disable=E0401 Logger, Node, TestCaseMetadata, TestSuite, TestSuiteMetadata, ) @TestSuiteMetadata(area="sample", category="", description="") class HelloWorld(TestSuite): @TestCaseMetadata(description="") def main(self, node: Node, log: Logger) -> None: log.info(f"Hello world from {node.os.name}!") Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/sample_runbooks/local_machine/local.yml000066400000000000000000000014731462617747000317470ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # Executes the test suites on the local machine # extension: - "." 
environment: environments: - nodes: - type: local notifier: - type: console testcase: - criteria: area: sample Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/000077500000000000000000000000001462617747000236335ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/agent-service000077500000000000000000000044321462617747000263200ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # set -euo pipefail # # The service name is walinuxagent in Ubuntu/debian and waagent elsewhere # usage() ( echo "Usage: agent-service command" exit 1 ) if [ "$#" -lt 1 ]; then usage fi cmd=$1 shift if [ "$#" -ne 0 ] || [ -z ${cmd+x} ] ; then usage fi if command -v systemctl &> /dev/null; then service-status() { systemctl --no-pager -l status $1; } service-stop() { systemctl stop $1; } service-restart() { systemctl restart $1; } service-start() { systemctl start $1; } service-disable() { systemctl disable $1; } else service-status() { service $1 status; } service-stop() { service $1 stop; } service-restart() { service $1 restart; } service-start() { service $1 start; } service-disable() { service $1 disable; } fi python=$(get-agent-python) distro=$($python -c 'from azurelinuxagent.common.version import get_distro; print(get_distro()[0])') distro=$(echo $distro | tr '[:upper:]' '[:lower:]') if [[ $distro == *"ubuntu"* || $distro == *"debian"* ]]; then service_name="walinuxagent" else service_name="waagent" fi echo "Service name: $service_name" if [[ "$cmd" == "restart" ]]; then echo "Restarting service..." service-restart $service_name echo "Service status..." service-status $service_name fi if [[ "$cmd" == "start" ]]; then echo "Starting service..." service-start $service_name fi if [[ "$cmd" == "stop" ]]; then echo "Stopping service..." service-stop $service_name fi if [[ "$cmd" == "status" ]]; then echo "Service status..." service-status $service_name fi if [[ "$cmd" == "disable" ]]; then echo "Disabling service..." service-disable $service_name fi Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/check-agent-log.py000077500000000000000000000026641462617747000271500ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
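#
# Usage (a sketch based on the argparse definitions below):
#
#   check-agent-log.py                     # checks /var/log/waagent.log and prints any errors as text
#   check-agent-log.py -j                  # produces a JSON report (this is what the orchestrator consumes)
#   check-agent-log.py /custom/waagent.log # checks an alternate log file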
# import argparse import json import sys from pathlib import Path from tests_e2e.tests.lib.agent_log import AgentLog try: parser = argparse.ArgumentParser() parser.add_argument('path', nargs='?', help='Path of the log file', default='/var/log/waagent.log') parser.add_argument('-j', '--json', action='store_true', help='Produce a JSON report') parser.set_defaults(json=False) args = parser.parse_args() error_list = AgentLog(Path(args.path)).get_errors() if args.json: print(json.dumps(error_list, default=lambda o: o.__dict__)) else: if len(error_list) == 0: print("No errors were found.") else: for e in error_list: print(e.text) except Exception as e: print(f"{e}", file=sys.stderr) sys.exit(1) sys.exit(0) Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/collect-logs000077500000000000000000000020761462617747000261550ustar00rootroot00000000000000#!/usr/bin/env bash # # Collects the logs needed to debug agent issues into a compressed tarball. # # Note that we do "set -euxo pipefail" only after executing "tar". That command exits with code 1 on warnings # and we do not want to consider those as failures. logs_file_name="/tmp/waagent-logs.tgz" echo "Collecting logs to $logs_file_name ..." PYTHON=$(get-agent-python) waagent_conf=$($PYTHON -c 'from azurelinuxagent.common.osutil import get_osutil; print(get_osutil().agent_conf_file_path)') tar --exclude='journal/*' --exclude='omsbundle' --exclude='omsagent' --exclude='mdsd' --exclude='scx*' \ --exclude='*.so' --exclude='*__LinuxDiagnostic__*' --exclude='*.zip' --exclude='*.deb' --exclude='*.rpm' \ --warning=no-file-changed \ -czf "$logs_file_name" \ /var/log \ /var/lib/waagent/ \ $waagent_conf set -euxo pipefail # Ignore warnings (exit code 1) exit_code=$? if [ "$exit_code" == "1" ]; then echo "WARNING: tar exit code is 1" elif [ "$exit_code" != "0" ]; then exit $exit_code fi chmod a+r "$logs_file_name" ls -l "$logs_file_name" Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/get-agent-bin-path000077500000000000000000000026671462617747000271470ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # Returns the path for the 'waagent' command. 
# set -euo pipefail # On most distros, 'waagent' is in PATH if which waagent 2> /dev/null; then exit 0 fi # if the agent is running, get the path from 'cmdline' in the /proc file system if test -e /run/waagent.pid; then cmdline="/proc/$(cat /run/waagent.pid)/cmdline" if test -e "$cmdline"; then # cmdline is a sequence of null-terminated strings; break into lines and look for waagent if tr '\0' '\n' < "$cmdline" | grep waagent; then exit 0 fi fi fi # try some well-known locations declare -a known_locations=( "/usr/sbin/waagent" "/usr/share/oem/bin/waagent" ) for path in "${known_locations[@]}" do if [[ -e $path ]]; then echo "$path" exit 0 fi done echo "Can't find the path for the 'waagent' command" >&2 exit 1 Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/get-agent-modules-path000077500000000000000000000020621462617747000300340ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # Returns the PYTHONPATH on which the azurelinuxagent and associated modules are located. # # To do this, the script walks the site packages for the Python used to execute the agent, # looking for the directory that contains "azurelinuxagent". # set -euo pipefail $(get-agent-python) -c ' import site import os for dir in site.getsitepackages(): if os.path.isdir(dir + "/azurelinuxagent"): print(dir) exit(0) exit(1) ' Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/get-agent-python000077500000000000000000000031141462617747000267520ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # Returns the path of the Python executable used to start the Agent. 
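#
# Typical usage from the other scripts in this directory, e.g.:
#
#   python=$(get-agent-python)
#   $python "$(get-agent-bin-path)" --version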
# set -euo pipefail # if the agent is running, get the python command from 'exe' in the /proc file system if test -e /run/waagent.pid; then exe="/proc/$(cat /run/waagent.pid)/exe" if test -e "$exe"; then # exe is a symbolic link; return its target readlink -f "$exe" exit 0 fi fi # try all the instances of 'python' and 'python3' in $PATH for path in $(echo "$PATH" | tr ':' '\n'); do if [[ -e $path ]]; then for python in $(find "$path" -maxdepth 1 -name python3 -or -name python); do if $python -c 'import azurelinuxagent' 2> /dev/null; then echo "$python" exit 0 fi done fi done # try some well-known locations declare -a known_locations=( "/usr/share/oem/python/bin/python" ) for python in "${known_locations[@]}" do if $python -c 'import azurelinuxagent' 2> /dev/null; then echo "$python" exit 0 fi done exit 1 Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/install-agent000077500000000000000000000133231462617747000263250ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # set -euo pipefail usage() ( echo "Usage: install-agent -p|--package -v|--version " exit 1 ) while [[ $# -gt 0 ]]; do case $1 in -p|--package) shift if [ "$#" -lt 1 ]; then usage fi package=$1 shift ;; -v|--version) shift if [ "$#" -lt 1 ]; then usage fi version=$1 shift ;; *) usage esac done if [ "$#" -ne 0 ] || [ -z ${package+x} ] || [ -z ${version+x} ]; then usage fi # # Find the command to manage services # if command -v systemctl &> /dev/null; then service-status() { systemctl --no-pager -l status $1; } service-stop() { systemctl stop $1; } service-start() { systemctl start $1; } else service-status() { service $1 status; } service-stop() { service $1 stop; } service-start() { service $1 start; } fi # # Find the service name (walinuxagent in Ubuntu and waagent elsewhere) # if service-status walinuxagent > /dev/null 2>&1;then service_name="walinuxagent" else service_name="waagent" fi # # Output the initial version of the agent # python=$(get-agent-python) waagent=$(get-agent-bin-path) echo "========== Initial Status ==========" echo "Service Name: $service_name" echo "Agent Path: $waagent" echo "Agent Version:" $python "$waagent" --version echo "Service Status:" # Sometimes the service can take a while to start; give it a few minutes, started=false for i in {1..6} do if service-status $service_name; then started=true break fi echo "Waiting for service to start..." sleep 30 done if [ $started == false ]; then echo "Service failed to start." exit 1 fi # # Install the package # echo "========== Installing Agent ==========" echo "Installing $package as version $version..." unzip.py "$package" "/var/lib/waagent/WALinuxAgent-$version" python=$(get-agent-python) # Ensure that AutoUpdate is enabled. some distros, e.g. 
Flatcar, have waagent.conf in a different path
waagent_conf_path=$($python -c 'from azurelinuxagent.common.osutil import get_osutil; osutil=get_osutil(); print(osutil.agent_conf_file_path)')
echo "Agent's conf path: $waagent_conf_path"
sed -i 's/AutoUpdate.Enabled=n/AutoUpdate.Enabled=y/g' "$waagent_conf_path"
# By default, the UpdateToLatestVersion flag is set to True, so that the agent goes through the update logic to look for new agents.
# But in e2e tests this flag needs to be off in test version 9.9.9.9 to stop the agent updates, so that our scenarios run on 9.9.9.9.
sed -i '$a AutoUpdate.UpdateToLatestVersion=n' "$waagent_conf_path"

# Log and exit the test setup if the Extensions.Enabled flag is disabled on distros other than debian
if grep -q "Extensions.Enabled=n" $waagent_conf_path; then
    pypy_get_distro=$(pypy3 -c 'from azurelinuxagent.common.version import get_distro; print(get_distro())')
    python_get_distro=$($python -c 'from azurelinuxagent.common.version import get_distro; print(get_distro())')
    # Debian distros disable extensions by default, so we need to enable them to verify agent extension scenarios.
    # If any other distro disables extensions, we exit the test setup to fail the test.
    if [[ $pypy_get_distro == *"debian"* ]] || [[ $python_get_distro == *"debian"* ]]; then
        echo "Extensions.Enabled flag is disabled and this is expected in debian distro, so enabling it"
        update-waagent-conf Extensions.Enabled=y
    else
        echo "Extensions.Enabled flag is disabled which is unexpected in this distro, so exiting test setup to fail the test"
        exit 1
    fi
fi

#
# TODO: Remove this block once the symlink is created in the Flatcar image
#
# Currently, the Agent looks for /usr/share/oem/waagent.conf, but new Flatcar images use /etc/waagent.conf. Flatcar will create
# this symlink in new images, but we need to create it for now.
if [[ $(uname -a) == *"flatcar"* ]]; then
  if [[ ! -f /usr/share/oem/waagent.conf ]]; then
    ln -s "$waagent_conf_path" /usr/share/oem/waagent.conf
  fi
fi

#
# Restart the service
#
echo "Restarting service..."
agent-service stop

# Rename the previous log to ensure the new log starts with the agent we just installed
mv /var/log/waagent.log /var/log/waagent."$(date --iso-8601=seconds)".log

agent-service start

#
# Verify that the new agent is running and output its status.
# Note that the extension handler may take some time to start, so give it a couple of minutes.
#
echo "Verifying agent installation..."

check-version() {
    # We need to wait for the extension handler to start, give it a couple of minutes
    for i in {1..12}
    do
        if waagent-version | grep -E "Goal state agent:\s+$version" > /dev/null; then
            return 0
        fi
        sleep 10
    done

    return 1
}

if check-version "$version"; then
    printf "The agent was installed successfully\n"
    exit_code=0
else
    printf "************************************\n"
    printf " * ERROR: Failed to install agent. *\n"
    printf "************************************\n"
    exit_code=1
fi

printf "\n"
echo "========== Final Status =========="
$python "$waagent" --version
printf "\n"
agent-service status

exit $exit_code
Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/install-tools000077500000000000000000000114221462617747000263650ustar00rootroot00000000000000#!/usr/bin/env bash

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Installs the tools in ~/bin/scripts/* to ~/bin, as well as Pypy.
#
# It also makes Pypy the default python for the current user.
#
set -euo pipefail

PATH="$HOME/bin:$PATH"

python=$(get-agent-python)
echo "Python executable: $python"
echo "Python version: $($python --version)"

#
# Install Pypy as ~/bin/pypy3
#
# Note that bzip2/lbzip2 (used by tar to uncompress *.bz2 files) are not available by default in some distros;
# use Python to uncompress the Pypy tarball.
#
echo "Installing Pypy 3.7"
$python ~/bin/uncompress.py ~/tmp/pypy3.7-*.tar.bz2 ~/tmp/pypy3.7.tar
tar xf ~/tmp/pypy3.7.tar -C ~/bin
echo "Pypy was installed in $(ls -d ~/bin/pypy*)"
ln -s ~/bin/pypy*/bin/pypy3.7 ~/bin/pypy3
echo "Creating symbolic link to Pypy: ~/bin/pypy3"

#
# The 'distro' and 'platform' modules in Pypy have small differences from the ones in the system's Python.
# This can create problems in tests that use the get_distro() method in the Agent's 'version.py' module.
# To work around this, we copy the system's 'distro' module to Pypy.
#
# In the case of 'platform', the 'linux_distribution' method was removed in Python 3.8 so we check the
# system's module and, if the method does not exist, we also remove it from Pypy. Ubuntu 16 and 18 are
# special cases in that the 'platform' module in Pypy identifies the distro as 'debian'; in this case we
# copy the system's 'platform' module to Pypy.
#
distro_path=$($python -c '
try:
    import distro
except:
    exit(0)
print(distro.__file__.replace("__init__.py", "distro.py"))
exit(0)
')
if [[ "$distro_path" != "" ]]; then
  echo "Copying the system's distro module to Pypy"
  cp -v "$distro_path" ~/bin/pypy*/site-packages
else
  echo "The distro module is not installed on the system; skipping."
fi

has_linux_distribution=$($python -c 'import platform; print(hasattr(platform, "linux_distribution"))')
if [[ "$has_linux_distribution" == "False" ]]; then
  echo "Python does not have platform.linux_distribution; removing it from Pypy"
  sed -i 's/def linux_distribution(/def __linux_distribution__(/' ~/bin/pypy*/lib-python/3/platform.py
else
  echo "Python has platform.linux_distribution"
  uname=$(uname -v)
  if [[ "$uname" == *~18*-Ubuntu* || "$uname" == *~16*-Ubuntu* ]]; then
    echo "Copying the system's platform module to Pypy"
    pypy_platform=$(pypy3 -c 'import platform; print(platform.__file__)')
    python_platform=$($python -c 'import platform; print(platform.__file__)')
    cp -v "$python_platform" "$pypy_platform"
  fi
fi

#
# Now install the test Agent as a module package in Pypy.
#
echo "Installing Agent modules to Pypy"
unzip.py ~/tmp/WALinuxAgent-*.zip ~/tmp/WALinuxAgent
unzip.py ~/tmp/WALinuxAgent/bin/WALinuxAgent-*.egg ~/tmp/WALinuxAgent/bin/WALinuxAgent.egg
mv ~/tmp/WALinuxAgent/bin/WALinuxAgent.egg/azurelinuxagent ~/bin/pypy*/site-packages

#
# Log the results of get_distro() in Pypy and Python.
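# (If the two values differ, tests that call version.get_distro() can behave differently under Pypy
# than under the system's Python; the module copies above are meant to keep the two in sync.)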
# pypy_get_distro=$(pypy3 -c 'from azurelinuxagent.common.version import get_distro; print(get_distro())') python_get_distro=$($python -c 'from azurelinuxagent.common.version import get_distro; print(get_distro())') echo "Pypy get_distro(): $pypy_get_distro" echo "Python get_distro(): $python_get_distro" # # Create ~/bin/set-agent-env to set PATH and PYTHONPATH. # # We append $HOME/bin to PATH and set PYTHONPATH to $HOME/lib (bin contains the scripts used by tests, while # lib contains the Python libraries used by tests). # echo "Creating ~/bin/set-agent-env to set PATH and PYTHONPATH" echo " if [[ \$PATH != *\"$HOME/bin\"* ]]; then PATH=\"$HOME/bin:\$PATH:\" fi export PYTHONPATH=\"$HOME/lib\" " > ~/bin/set-agent-env chmod u+x ~/bin/set-agent-env # # Add ~/bin/set-agent-env to .bash_profile to simplify interactive debugging sessions # # Note that in some distros .bash_profile is a symbolic link to a read-only file. Make a copy in that case. # echo "Adding ~/bin/set-agent-env to ~/.bash_profile" if test -e ~/.bash_profile && ls -l .bash_profile | grep '\->'; then cp ~/.bash_profile ~/.bash_profile-bk rm ~/.bash_profile mv ~/.bash_profile-bk ~/.bash_profile fi if ! test -e ~/.bash_profile || ! grep '~/bin/set-agent-env' ~/.bash_profile > /dev/null; then echo 'source ~/bin/set-agent-env ' >> ~/.bash_profile fi Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/prepare-pypy000077500000000000000000000035561462617747000262270ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This script is used to prepare a tarball containing Pypy with the assert-py module pre-installed. # It needs to be run on x64 and arm64 VMs and the resulting tarballs need to be uploaded to storage, # from where they are downloaded and installed to the test VMs (see wiki for detail). 
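#
# An illustrative upload of the resulting tarball (the account, container, and authentication are
# placeholders; the actual storage location is described in the wiki):
#
#   az storage blob upload --account-name <account> --container-name <container> --file /tmp/pypy3.7-x64.tar.bz2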
#
set -euo pipefail

cd /tmp
rm -rf pypy3.7-*

arch=$(uname -m)
printf "Preparing Pypy for architecture %s...\n" $arch

printf "\n*** Downloading Pypy...\n"
if [[ $arch == "aarch64" ]]; then
  tarball="pypy3.7-arm64.tar.bz2"
  wget https://downloads.python.org/pypy/pypy3.7-v7.3.5-aarch64.tar.bz2 -O $tarball
else
  tarball="pypy3.7-x64.tar.bz2"
  wget https://downloads.python.org/pypy/pypy3.7-v7.3.5-linux64.tar.bz2 -O $tarball
fi

printf "\n*** Installing assertpy...\n"
tar xf $tarball
./pypy3.7-v7.3.5-*/bin/pypy -m ensurepip
./pypy3.7-v7.3.5-*/bin/pypy -mpip install assertpy

printf "\n*** Creating new tarball for Pypy...\n"
# remove the cache files created when running Pypy, and set the owner to 0/0, in order to match the original tarball
find pypy3.7-v7.3.5-* -name '*.pyc' -exec rm {} \;
mv -v $tarball "$tarball.original"
tar cf $tarball --bzip2 --owner 0:0 --group 0:0 pypy3.7-v7.3.5-*
rm -rf pypy3.7-v7.3.5-*

printf "\nPypy is ready at %s\n" "$(pwd)/$tarball"
Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/uncompress.py000077500000000000000000000017401462617747000264100ustar00rootroot00000000000000# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Un-compresses a bz2 file
#
import argparse
import bz2
import shutil

parser = argparse.ArgumentParser()
parser.add_argument('source', help='File to uncompress')
parser.add_argument('target', help='Output file')
args = parser.parse_args()

with bz2.BZ2File(args.source, 'rb') as f_in:
    with open(args.target, 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/unzip.py000077500000000000000000000020121462617747000253500ustar00rootroot00000000000000#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
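#
# Usage (as invoked by install-tools, for example):
#
#   unzip.py ~/tmp/WALinuxAgent-*.zip ~/tmp/WALinuxAgent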
#
import argparse
import sys
import zipfile

try:
    parser = argparse.ArgumentParser()
    parser.add_argument('source', help='ZIP package to expand')
    parser.add_argument('target', help='Destination directory')
    args = parser.parse_args()

    zipfile.ZipFile(args.source).extractall(args.target)
except Exception as e:
    print(f"{e}", file=sys.stderr)
    sys.exit(1)

sys.exit(0)
Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/update-waagent-conf000077500000000000000000000030101462617747000274060ustar00rootroot00000000000000#!/usr/bin/env bash

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Updates waagent.conf with the specified setting and value (allows multiple) and restarts the Agent.
#

set -euo pipefail

if [[ $# -lt 1 ]]; then
    echo "Usage: update-waagent-conf <setting=value> [<setting=value>...]"
    exit 1
fi

PYTHON=$(get-agent-python)
waagent_conf=$($PYTHON -c 'from azurelinuxagent.common.osutil import get_osutil; print(get_osutil().agent_conf_file_path)')

for setting_value in "$@"; do
    IFS='=' read -r -a setting_value_array <<< "$setting_value"
    name=${setting_value_array[0]}
    value=${setting_value_array[1]}

    if [[ -z "$name" || -z "$value" ]]; then
        echo "Invalid setting=value: $setting_value"
        exit 1
    fi

    echo "Setting $name=$value in $waagent_conf"
    sed -i -E "/^$name=/d" "$waagent_conf"
    sed -i -E "\$a $name=$value" "$waagent_conf"
    updated=$(grep "$name" "$waagent_conf")
    echo "Updated value: $updated"
done
agent-service restart
Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/scripts/waagent-version000077500000000000000000000014151462617747000266730ustar00rootroot00000000000000#!/usr/bin/env bash

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
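#
# Example output (the exact versions and format are illustrative):
#
#   $ waagent-version
#   WALinuxAgent-2.9.1.1 running on ubuntu 22.04
#   Python: 3.10.12
#   Goal state agent: 2.10.0.8
#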
# # returns the version of the agent # set -euo pipefail python=$(get-agent-python) waagent=$(get-agent-bin-path) $python "$waagent" --versionAzure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/templates/000077500000000000000000000000001462617747000241425ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/orchestrator/templates/vmss.json000066400000000000000000000246661462617747000260430ustar00rootroot00000000000000{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "username": { "type": "string" }, "sshPublicKey": { "type": "string" }, "vmName": { "type": "string" }, "scenarioPrefix": { "type": "string", "defaultValue": "e2e-test" }, "publisher": { "type": "string" }, "offer": { "type": "string" }, "sku": { "type": "string" }, "version": { "type": "string" } }, "variables": { "nicName": "[concat(parameters('scenarioPrefix'),'Nic')]", "vnetAddressPrefix": "10.130.0.0/16", "subnetName": "[concat(parameters('scenarioPrefix'),'Subnet')]", "subnetPrefix": "10.130.0.0/24", "publicIPAddressName": "[concat(parameters('scenarioPrefix'),'PublicIp')]", "lbIpName": "[concat(parameters('scenarioPrefix'),'PublicLbIp')]", "virtualNetworkName": "[concat(parameters('scenarioPrefix'),'Vnet')]", "lbName": "[concat(parameters('scenarioPrefix'),'lb')]", "lbIpId": "[resourceId('Microsoft.Network/publicIPAddresses', variables('lbIpName'))]", "bepoolName": "[concat(variables('lbName'), 'bepool')]", "natpoolName": "[concat(variables('lbName'), 'natpool')]", "feIpConfigName": "[concat(variables('lbName'), 'fepool', 'IpConfig')]", "sshProbeName": "[concat(variables('lbName'), 'probe')]", "vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", "lbId": "[resourceId('Microsoft.Network/loadBalancers', variables('lbName'))]", "bepoolID": "[concat(variables('lbId'), '/backendAddressPools/', variables('bepoolName'))]", "natpoolID": "[concat(variables('lbId'), '/inboundNatPools/', variables('natpoolName'))]", "feIpConfigId": "[concat(variables('lbId'), '/frontendIPConfigurations/', variables('feIpConfigName'))]", "sshProbeId": "[concat(variables('lbId'), '/probes/', variables('sshProbeName'))]", "sshKeyPath": "[concat('/home/', parameters('username'), '/.ssh/authorized_keys')]" }, "resources": [ { "apiVersion": "2023-06-01", "type": "Microsoft.Network/virtualNetworks", "name": "[variables('virtualNetworkName')]", "location": "[resourceGroup().location]", "properties": { "addressSpace": { "addressPrefixes": [ "[variables('vnetAddressPrefix')]" ] }, "subnets": [ { "name": "[variables('subnetName')]", "properties": { "addressPrefix": "[variables('subnetPrefix')]" } } ] } }, { "type": "Microsoft.Network/publicIPAddresses", "name": "[variables('lbIpName')]", "location": "[resourceGroup().location]", "apiVersion": "2023-06-01", "properties": { "publicIPAllocationMethod": "Dynamic", "dnsSettings": { "domainNameLabel": "[parameters('vmName')]" } } }, { "type": "Microsoft.Network/loadBalancers", "name": "[variables('lbName')]", "location": "[resourceGroup().location]", "apiVersion": "2020-06-01", "dependsOn": [ "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]", "[concat('Microsoft.Network/publicIPAddresses/', variables('lbIpName'))]" ], "properties": { "frontendIPConfigurations": [ { "name": "[variables('feIpConfigName')]", "properties": { "PublicIpAddress": { "id": 
"[variables('lbIpId')]" } } } ], "backendAddressPools": [ { "name": "[variables('bepoolName')]" } ], "inboundNatPools": [ { "name": "[variables('natpoolName')]", "properties": { "FrontendIPConfiguration": { "Id": "[variables('feIpConfigId')]" }, "BackendPort": 22, "Protocol": "tcp", "FrontendPortRangeStart": 3500, "FrontendPortRangeEnd": 4500 } } ], "loadBalancingRules": [ { "name": "ProbeRule", "properties": { "frontendIPConfiguration": { "id": "[variables('feIpConfigId')]" }, "backendAddressPool": { "id": "[variables('bepoolID')]" }, "protocol": "Tcp", "frontendPort": 80, "backendPort": 80, "idleTimeoutInMinutes": 5, "probe": { "id": "[variables('sshProbeId')]" } } } ], "probes": [ { "name": "[variables('sshProbeName')]", "properties": { "protocol": "tcp", "port": 22, "intervalInSeconds": 5, "numberOfProbes": 2 } } ] } }, { "apiVersion": "2023-03-01", "type": "Microsoft.Compute/virtualMachineScaleSets", "name": "[parameters('vmName')]", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]", "[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]" ], "sku": { "name": "Standard_D2s_v3", "tier": "Standard", "capacity": 3 }, "properties": { "orchestrationMode": "Uniform", "overprovision": false, "virtualMachineProfile": { "extensionProfile": { "extensions": [] }, "osProfile": { "computerNamePrefix": "[parameters('vmName')]", "adminUsername": "[parameters('username')]", "linuxConfiguration": { "disablePasswordAuthentication": true, "ssh": { "publicKeys": [ { "path": "[variables('sshKeyPath')]", "keyData": "[parameters('sshPublicKey')]" } ] } } }, "storageProfile": { "osDisk": { "osType": "Linux", "createOption": "FromImage", "caching": "ReadWrite", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 64 }, "imageReference": { "publisher": "[parameters('publisher')]", "offer": "[parameters('offer')]", "sku": "[parameters('sku')]", "version": "[parameters('version')]" } }, "diagnosticsProfile": { "bootDiagnostics": { "enabled": true } }, "networkProfile": { "networkInterfaceConfigurations": [ { "name": "[variables('nicName')]", "properties": { "primary": true, "ipConfigurations": [ { "name": "ipconfig1", "properties": { "primary": true, "publicIPAddressConfiguration": { "name": "[variables('publicIPAddressName')]", "properties": { "idleTimeoutInMinutes": 15 } }, "subnet": { "id": "[variables('subnetRef')]" } } } ] } } ] } }, "upgradePolicy": { "mode": "Automatic" }, "platformFaultDomainCount": 1 } } ] } Azure-WALinuxAgent-2b21de5/tests_e2e/pipeline/000077500000000000000000000000001462617747000212325ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/pipeline/pipeline-cleanup.yml000066400000000000000000000037131462617747000252130ustar00rootroot00000000000000# # Pipeline for cleaning up any remaining Resource Groups generated by the Azure.WALinuxAgent pipeline. 
# # Deletes any resource groups that are older than 'older_than' and match the 'name_pattern' regular expression # parameters: - name: name_pattern displayName: Regular expression to match the name of the resource groups to delete type: string default: lisa-WALinuxAgent-.* - name: older_than displayName: Delete resources older than (use the syntax of the "date -d" command) type: string default: 12 hours ago - name: service_connections type: object default: - waagenttests.public - waagenttests.china - waagenttests.gov pool: name: waagent-pool steps: - ${{ each service_connection in parameters.service_connections }}: - task: AzureCLI@2 inputs: azureSubscription: ${{ service_connection }} scriptType: 'bash' scriptLocation: 'inlineScript' inlineScript: | set -euxo pipefail # # We use the REST API to list the resource groups because we need the createdTime and that # property is not available via the az-cli commands. # subscription_id=$(az account list --all --query "[?isDefault].id" -o tsv) date=$(date --utc +%Y-%m-%d'T'%H:%M:%S.%N'Z' -d "${{ parameters.older_than }}") rest_endpoint=$(az cloud show --query "endpoints.resourceManager" -o tsv) pattern="${{ parameters.name_pattern }}" az rest --method GET \ --url "${rest_endpoint}/subscriptions/${subscription_id}/resourcegroups" \ --url-parameters api-version=2021-04-01 \$expand=createdTime \ --output json \ --query value \ | jq --arg date "$date" '.[] | select (.createdTime < $date).name | match("'${pattern}'"; "g").string' \ | xargs -l -t -r az group delete --subscription "${subscription_id}" --no-wait -y -n Azure-WALinuxAgent-2b21de5/tests_e2e/pipeline/pipeline.yml000066400000000000000000000115371462617747000235710ustar00rootroot00000000000000# variables: # # NOTE: When creating the pipeline, "connection_info" must be added as a variable pointing to the # cloud specific service connection; see wiki for details. # parameters: # # See the test wiki for a description of the parameters # # NOTES: # * 'image', 'location' and 'vm_size' override any values in the test suites/images definition # files. Those parameters are useful for 1-off tests, like testing a VHD or checking if # an image is supported in a particular location. # * Azure Pipelines do not allow empty string for the parameter value, using "-" instead. # - name: test_suites displayName: Test Suites (comma-separated list of test suites to run) type: string default: "-" - name: image displayName: Image (image/image set name, URN, or VHD) type: string default: "-" - name: location displayName: Location (region) type: string default: "-" - name: vm_size displayName: VM size type: string default: "-" - name: collect_logs displayName: Collect logs from test VMs type: string default: failed values: - always - failed - no - name: collect_lisa_logs displayName: Collect LISA logs type: boolean default: false - name: keep_environment displayName: Keep the test VMs (do not delete them) type: string default: failed values: - always - failed - no pool: name: waagent-pool jobs: - job: "ExecuteTests" timeoutInMinutes: 90 steps: - task: UsePythonVersion@0 displayName: "Set Python Version" inputs: versionSpec: '3.10' addToPath: true architecture: 'x64' # Extract the Azure cloud from the "connection_info" variable. Its value includes one of # 'public', 'china', or 'gov' as a suffix (the suffix comes after the '.'). 
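# For example (this is the mapping implemented by the step below):
#
#   waagenttests.public -> AzureCloud
#   waagenttests.china  -> AzureChinaCloud
#   waagenttests.gov    -> AzureUSGovernment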
- bash: | case $(echo $CONNECTION_INFO | sed 's/.*\.//') in public) echo "##vso[task.setvariable variable=cloud]AzureCloud" ;; china) echo "##vso[task.setvariable variable=cloud]AzureChinaCloud" ;; gov) echo "##vso[task.setvariable variable=cloud]AzureUSGovernment" ;; *) echo "Invalid CONNECTION_INFO: $CONNECTION_INFO" >&2 exit 1 ;; esac displayName: "Set Cloud type" - task: DownloadSecureFile@1 name: downloadSshKey displayName: "Download SSH key" inputs: secureFile: 'id_rsa' - task: AzureKeyVault@2 displayName: "Fetch connection info" inputs: azureSubscription: $(connection_info) KeyVaultName: 'waagenttests' SecretsFilter: '*' - task: AzureCLI@2 displayName: "Download connection certificate" inputs: azureSubscription: $(connection_info) scriptType: bash scriptLocation: inlineScript inlineScript: | # This temporary directory removed after the pipeline execution mkdir -p $(Agent.TempDirectory)/app az keyvault secret download --file $(Agent.TempDirectory)/app/cert.pem --vault-name waagenttests --name AZURE-CLIENT-CERTIFICATE - bash: $(Build.SourcesDirectory)/tests_e2e/pipeline/scripts/execute_tests.sh displayName: "Execute tests" continueOnError: true env: SUBSCRIPTION_ID: $(SUBSCRIPTION-ID) AZURE_CLIENT_ID: $(AZURE-CLIENT-ID) AZURE_TENANT_ID: $(AZURE-TENANT-ID) CR_USER: $(CR-USER) CR_SECRET: $(CR-SECRET) CLOUD: ${{ variables.cloud }} COLLECT_LOGS: ${{ parameters.collect_logs }} IMAGE: ${{ parameters.image }} KEEP_ENVIRONMENT: ${{ parameters.keep_environment }} LOCATION: ${{ parameters.location }} TEST_SUITES: ${{ parameters.test_suites }} VM_SIZE: ${{ parameters.vm_size }} - bash: $(Build.SourcesDirectory)/tests_e2e/pipeline/scripts/collect_artifacts.sh displayName: "Collect test artifacts" # Collect artifacts even if the previous step is cancelled (e.g. 
timeout) condition: always() env: COLLECT_LISA_LOGS: ${{ parameters.collect_lisa_logs }} - publish: $(Build.ArtifactStagingDirectory) artifact: 'artifacts' displayName: 'Publish test artifacts' condition: always() - task: PublishTestResults@2 displayName: 'Publish test results' condition: always() inputs: testResultsFormat: 'JUnit' testResultsFiles: 'runbook_logs/agent.junit.xml' searchFolder: $(Build.ArtifactStagingDirectory) failTaskOnFailedTests: true Azure-WALinuxAgent-2b21de5/tests_e2e/pipeline/scripts/000077500000000000000000000000001462617747000227215ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/pipeline/scripts/collect_artifacts.sh000077500000000000000000000041511462617747000267460ustar00rootroot00000000000000#!/usr/bin/env bash # # Moves the relevant logs to the staging directory # set -euxo pipefail # # The execute_test.sh script gives ownership of the log directory to the 'waagent' user in # the Docker container; re-take ownership # sudo find "$LOGS_DIRECTORY" -exec chown "$USER" {} \; # # Move the logs for failed tests to a temporary location # mkdir "$BUILD_ARTIFACTSTAGINGDIRECTORY"/tmp for log in $(grep -l MARKER-LOG-WITH-ERRORS "$LOGS_DIRECTORY"/*.log); do mv "$log" "$BUILD_ARTIFACTSTAGINGDIRECTORY"/tmp done # # Move the environment logs to "environment_logs" # if ls "$LOGS_DIRECTORY"/env-*.log > /dev/null 2>&1; then mkdir "$BUILD_ARTIFACTSTAGINGDIRECTORY"/environment_logs mv "$LOGS_DIRECTORY"/env-*.log "$BUILD_ARTIFACTSTAGINGDIRECTORY"/environment_logs fi # # Move the rest of the logs to "test_logs" # if ls "$LOGS_DIRECTORY"/*.log > /dev/null 2>&1; then mkdir "$BUILD_ARTIFACTSTAGINGDIRECTORY"/test_logs mv "$LOGS_DIRECTORY"/*.log "$BUILD_ARTIFACTSTAGINGDIRECTORY"/test_logs fi # # Move the logs for failed tests to the main directory # if ls "$BUILD_ARTIFACTSTAGINGDIRECTORY"/tmp/*.log > /dev/null 2>&1; then mv "$BUILD_ARTIFACTSTAGINGDIRECTORY"/tmp/*.log "$BUILD_ARTIFACTSTAGINGDIRECTORY" fi rmdir "$BUILD_ARTIFACTSTAGINGDIRECTORY"/tmp # # Move the logs collected from the test VMs to vm_logs # if ls "$LOGS_DIRECTORY"/*.tgz > /dev/null 2>&1; then mkdir "$BUILD_ARTIFACTSTAGINGDIRECTORY"/vm_logs mv "$LOGS_DIRECTORY"/*.tgz "$BUILD_ARTIFACTSTAGINGDIRECTORY"/vm_logs fi # # Move the main LISA log and the JUnit report to "runbook_logs" # # Note that files created by LISA are under .../lisa//" # mkdir "$BUILD_ARTIFACTSTAGINGDIRECTORY"/runbook_logs mv "$LOGS_DIRECTORY"/lisa/*/*/lisa-*.log "$BUILD_ARTIFACTSTAGINGDIRECTORY"/runbook_logs mv "$LOGS_DIRECTORY"/lisa/*/*/agent.junit.xml "$BUILD_ARTIFACTSTAGINGDIRECTORY"/runbook_logs # # Move the rest of the LISA logs to "lisa_logs" # if [[ ${COLLECT_LISA_LOGS,,} == 'true' ]]; then # case-insensitive comparison mkdir "$BUILD_ARTIFACTSTAGINGDIRECTORY"/lisa_logs mv "$LOGS_DIRECTORY"/lisa/*/*/* "$BUILD_ARTIFACTSTAGINGDIRECTORY"/lisa_logs fi Azure-WALinuxAgent-2b21de5/tests_e2e/pipeline/scripts/execute_tests.sh000077500000000000000000000056521462617747000261540ustar00rootroot00000000000000#!/usr/bin/env bash set -euxo pipefail echo "Hostname: $(hostname)" echo "\$USER: $USER" # # UID of 'waagent' in the Docker container # WAAGENT_UID=1000 # # Set the correct mode and owner for the private SSH key and generate the public key. # cd "$AGENT_TEMPDIRECTORY" mkdir ssh cp "$DOWNLOADSSHKEY_SECUREFILEPATH" ssh chmod 700 ssh/id_rsa ssh-keygen -y -f ssh/id_rsa > ssh/id_rsa.pub sudo find ssh -exec chown "$WAAGENT_UID" {} \; # # Allow write access to the sources directory. 
This is needed because building the agent package (within the Docker # container) writes the egg info to that directory. # chmod a+w "$BUILD_SOURCESDIRECTORY" # # Create the directory where the Docker container will create the test logs and give ownership to 'waagent' # LOGS_DIRECTORY="$AGENT_TEMPDIRECTORY/logs" echo "##vso[task.setvariable variable=logs_directory]$LOGS_DIRECTORY" mkdir "$LOGS_DIRECTORY" sudo chown "$WAAGENT_UID" "$LOGS_DIRECTORY" # # Give the current user access to the Docker daemon # sudo usermod -aG docker $USER newgrp docker < /dev/null # # Pull the container image used to execute the tests # az acr login --name waagenttests --username "$CR_USER" --password "$CR_SECRET" docker pull waagenttests.azurecr.io/waagenttests:latest # Azure Pipelines does not allow an empty string as the value for a pipeline parameter; instead we use "-" to indicate # an empty value. Change "-" to "" for the variables that capture the parameter values. if [[ $TEST_SUITES == "-" ]]; then TEST_SUITES="" # Don't set the test_suites variable else TEST_SUITES="-v test_suites:\"$TEST_SUITES\"" fi if [[ $IMAGE == "-" ]]; then IMAGE="" fi if [[ $LOCATION == "-" ]]; then LOCATION="" fi if [[ $VM_SIZE == "-" ]]; then VM_SIZE="" fi # # Get the external IP address of the VM. # IP_ADDRESS=$(curl -4 ifconfig.io/ip) # certificate location in the container AZURE_CLIENT_CERTIFICATE_PATH="/home/waagent/app/cert.pem" docker run --rm \ --volume "$BUILD_SOURCESDIRECTORY:/home/waagent/WALinuxAgent" \ --volume "$AGENT_TEMPDIRECTORY"/ssh:/home/waagent/.ssh \ --volume "$AGENT_TEMPDIRECTORY"/app:/home/waagent/app \ --volume "$LOGS_DIRECTORY":/home/waagent/logs \ --env AZURE_CLIENT_ID \ --env AZURE_TENANT_ID \ --env AZURE_CLIENT_CERTIFICATE_PATH=$AZURE_CLIENT_CERTIFICATE_PATH \ waagenttests.azurecr.io/waagenttests \ bash --login -c \ "lisa \ --runbook \$HOME/WALinuxAgent/tests_e2e/orchestrator/runbook.yml \ --log_path \$HOME/logs/lisa \ --working_path \$HOME/tmp \ -v cloud:$CLOUD \ -v subscription_id:$SUBSCRIPTION_ID \ -v identity_file:\$HOME/.ssh/id_rsa \ -v log_path:\$HOME/logs \ -v collect_logs:\"$COLLECT_LOGS\" \ -v keep_environment:\"$KEEP_ENVIRONMENT\" \ -v image:\"$IMAGE\" \ -v location:\"$LOCATION\" \ -v vm_size:\"$VM_SIZE\" \ -v allow_ssh:\"$IP_ADDRESS\" \ $TEST_SUITES" Azure-WALinuxAgent-2b21de5/tests_e2e/pipeline/scripts/setup-agent.sh000077500000000000000000000035431462617747000255210ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # Script to setup the agent VM for the Azure Pipelines agent pool; it simply installs the Azure CLI, the Docker Engine and jq. 
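#
# NOTE: The script assumes an apt-based distro (it uses apt-get and the Ubuntu Docker repository); it is
# typically run once on a freshly provisioned pool VM, e.g.:
#
#   bash ./setup-agent.sh
#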
#

set -euox pipefail

# Add delay per Azure Pipelines documentation
sleep 30

# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install Docker Engine
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Verify that Docker Engine is installed correctly by running the hello-world image.
sudo docker run hello-world

# Install jq; it is used by the cleanup pipeline to parse the JSON output of the Azure CLI
sudo apt-get install -y jq
Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/000077500000000000000000000000001462617747000220005ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_bvt.yml000066400000000000000000000002501462617747000244710ustar00rootroot00000000000000name: "AgentBvt"

tests:
  - "agent_bvt/extension_operations.py"
  - "agent_bvt/run_command.py"
  - "agent_bvt/vm_access.py"
images:
  - "endorsed"
  - "endorsed-arm64"
Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_cgroups.yml000066400000000000000000000005261462617747000253660ustar00rootroot00000000000000#
# This test suite verifies that the agent runs in the expected cgroups and checks that the agent tracks those
# cgroups when polling resource metrics. It also verifies that the agent's CPU quota is set as expected.
#
name: "AgentCgroups"
tests:
  - "agent_cgroups/agent_cgroups.py"
  - "agent_cgroups/agent_cpu_quota.py"
images: "cgroups-endorsed"
owns_vm: true
Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_ext_workflow.yml000066400000000000000000000005741462617747000264370ustar00rootroot00000000000000name: "AgentExtWorkflow"
tests:
  - "agent_ext_workflow/extension_workflow.py"
images:
  - "centos_79"
  - "suse_12"
  - "rhel_79"
  - "ubuntu_1604"
  - "ubuntu_1804"
# This test suite uses the DCR Test Extension, which is only published in the southcentralus region in the public cloud
locations: "AzureCloud:southcentralus"
skip_on_clouds:
  - "AzureChinaCloud"
  - "AzureUSGovernment"
Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_firewall.yml000066400000000000000000000015261462617747000255120ustar00rootroot00000000000000#
# This test verifies that the agent firewall rules are set correctly. The expected firewall rules are:
#   0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
#   0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
#   0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
# The first rule allows tcp traffic to port 53 for non-root users. The second rule allows traffic to the wireserver for the root user.
# The third rule drops all other traffic to the wireserver.
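#
# As a rough sketch (illustrative only, not necessarily the agent's exact invocations), rules of this
# shape could be created with:
#
#   iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
#   iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
#   iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP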
# name: "AgentFirewall" tests: - "agent_firewall/agent_firewall.py" images: - "endorsed" - "endorsed-arm64" owns_vm: true # This vm cannot be shared with other tests because it modifies the firewall rules and agent status.Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_not_provisioned.yml000066400000000000000000000006061462617747000271240ustar00rootroot00000000000000# # Disables Agent provisioning using osProfile.linuxConfiguration.provisionVMAgent and verifies that the agent is disabled # and extension operations are not allowed. # name: "AgentNotProvisioned" tests: - "agent_not_provisioned/agent_not_provisioned.py" images: "random(endorsed)" template: "agent_not_provisioned/disable_agent_provisioning.py" owns_vm: true install_test_agent: false Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_persist_firewall.yml000066400000000000000000000021471462617747000272630ustar00rootroot00000000000000# # Iptable rules that agent add not persisted on reboot. So we use firewalld service if distro supports it otherwise agent creates custom service and only runs on boot before network up. # so that attacker will not have room to contact the wireserver # This test verifies that either of the service is active. Ensure those rules are added on boot and working as expected. # name: "AgentPersistFirewall" tests: - "agent_persist_firewall/agent_persist_firewall.py" images: - "endorsed" - "endorsed-arm64" owns_vm: true # This vm cannot be shared with other tests because it modifies the firewall rules and agent status. # agent persist firewall service not running on flatcar distro since agent can't install custom service due to read only filesystem. # so skipping the test run on flatcar distro. # (2023-11-14T19:04:13.738695Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service) skip_on_images: - "flatcar" - "flatcar_arm64" - "debian_9" # TODO: Reboot is slow on debian_9. Need to investigate further.Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_publish.yml000066400000000000000000000005161462617747000253510ustar00rootroot00000000000000# # This test is used to verify that the agent will be updated after publishing a new version to the agent update channel. 
# name: "AgentPublish" tests: - "agent_publish/agent_publish.py" images: - "random(endorsed, 10)" - "random(endorsed-arm64, 2)" locations: "AzureCloud:centraluseuap" owns_vm: true install_test_agent: falseAzure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_status.yml000066400000000000000000000003031462617747000252200ustar00rootroot00000000000000# # This scenario validates the agent status is updated without any goal state changes # name: "AgentStatus" tests: - "agent_status/agent_status.py" images: - "endorsed" - "endorsed-arm64" Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_update.yml000066400000000000000000000012561462617747000251670ustar00rootroot00000000000000# Scenario validates RSM and Self-updates paths # RSM update: If vm enrolled into RSM, it will validate agent uses RSM to update to target version # Self-update: If vm not enrolled into RSM, it will validate agent uses self-update to update to latest version published name: "AgentUpdate" tests: - "agent_update/rsm_update.py" - "agent_update/self_update.py" images: - "random(endorsed, 10)" # - "random(endorsed-arm64, 2)" TODO: HGPA not deployed on some arm64 hosts(so agent stuck on Vmesttings calls as per contract) and will enable once HGPA deployed there locations: "AzureCloud:eastus2euap" owns_vm: true skip_on_clouds: - "AzureChinaCloud" - "AzureUSGovernment"Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/agent_wait_for_cloud_init.yml000066400000000000000000000013651462617747000277310ustar00rootroot00000000000000# # This test verifies that the Agent waits for cloud-init to complete before it starts processing extensions. # # NOTE: This test is not fully automated. It requires a custom image where the test Agent has been installed and Extensions.WaitForCloudInit is enabled in waagent.conf. # To execute it manually, create a custom image and use the 'image' runbook parameter, for example: "-v: image:gallery/wait-cloud-init/1.0.1". # name: "AgentWaitForCloudInit" tests: - "agent_wait_for_cloud_init/agent_wait_for_cloud_init.py" template: "agent_wait_for_cloud_init/add_cloud_init_script.py" install_test_agent: false # Dummy image, since the parameter is required. The actual image needs to be passed as a parameter to the runbook. images: "ubuntu_2204" Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/ext_cgroups.yml000066400000000000000000000010401462617747000250600ustar00rootroot00000000000000# # The test suite installs the few extensions and # verify those extensions are running in expected cgroups and also, checks agent tracking those cgroups for polling resource metrics. # name: "ExtCgroups" tests: - "ext_cgroups/ext_cgroups.py" images: "cgroups-endorsed" # The DCR test extension installs sample service, so this test suite uses it to test services cgroups but this is only published in southcentralus region in public cloud. locations: "AzureCloud:southcentralus" skip_on_clouds: - "AzureChinaCloud" - "AzureUSGovernment"Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/ext_sequencing.yml000066400000000000000000000005501462617747000255440ustar00rootroot00000000000000# # Adds extensions with multiple dependencies to VMSS using 'provisionAfterExtensions' property and validates they are # enabled in order of dependencies. # name: "ExtSequencing" tests: - "ext_sequencing/ext_sequencing.py" images: "endorsed" # This scenario is executed on instances of a scaleset created by the agent test suite. 
executes_on_scale_set: trueAzure-WALinuxAgent-2b21de5/tests_e2e/test_suites/ext_telemetry_pipeline.yml000066400000000000000000000005071462617747000273040ustar00rootroot00000000000000# # This test ensures that the agent does not throw any errors while trying to transmit events to wireserver. It does not # validate if the events actually make it to wireserver # name: "ExtTelemetryPipeline" tests: - "agent_bvt/vm_access.py" - "ext_telemetry_pipeline/ext_telemetry_pipeline.py" images: "random(endorsed)" Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/extensions_disabled.yml000066400000000000000000000004131462617747000265470ustar00rootroot00000000000000# # The test suite disables extension processing and verifies that extensions # are not processed, but the agent continues reporting status. # name: "ExtensionsDisabled" tests: - "extensions_disabled/extensions_disabled.py" images: "random(endorsed)" owns_vm: true Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/fail.yml000066400000000000000000000002461462617747000234400ustar00rootroot00000000000000name: "Fail" tests: - "samples/fail_test.py" - "samples/fail_remote_test.py" - "samples/error_test.py" - "samples/error_remote_test.py" images: "ubuntu_2004" Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/fips.yml000066400000000000000000000010651462617747000234660ustar00rootroot00000000000000# # FIPS should not affect extension processing. The test enables FIPS and then executes an extension. # # NOTE: Enabling FIPS is very specific to the distro. This test is only executed on Mariner 2. # # TODO: Add other distros. # # NOTE: FIPS can be enabled on RHEL9 using these instructions: see https://access.redhat.com/solutions/137833#rhel9), # but extensions with protected settings do not work end-to-end, since the Agent can't decrypt the tenant # certificate. # name: "FIPS" tests: - source: "fips/fips.py" images: "mariner_2" owns_vm: true Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/images.yml000066400000000000000000000136371462617747000240020ustar00rootroot00000000000000# # Image sets are used to group images # image-sets: # Endorsed distros that are tested on the daily runs endorsed: # # TODO: Add Debian 8 # # - "debian_8" # - "alma_9" - "centos_79" - "centos_82" - "debian_9" - "debian_10" - "debian_11" - "flatcar" - "suse_12" - "mariner_1" - "mariner_2" - "suse_15" - "rhel_79" - "rhel_82" - "rhel_90" - "rocky_9" - "ubuntu_1604" - "ubuntu_1804" - "ubuntu_2004" - "ubuntu_2204" - "ubuntu_2404" # Endorsed distros (ARM64) that are tested on the daily runs endorsed-arm64: - "debian_11_arm64" - "flatcar_arm64" - "mariner_2_arm64" - "rhel_90_arm64" - "ubuntu_2204_arm64" # As of today agent only support and enabled resource governance feature on following distros cgroups-endorsed: - "centos_82" - "rhel_82" - "ubuntu_1604" - "ubuntu_1804" - "ubuntu_2004" # These distros use Python 2.6. Currently they are not tested on the daily runs; this image set is here just for reference. python-26: - "centos_610" - "oracle_610" - "rhel_610" # # An image can be specified by a string giving its urn, as in # # ubuntu_2004: "Canonical 0001-com-ubuntu-server-focal 20_04-lts latest" # # or by an object with 3 properties: urn, locations and vm_sizes, as in # # mariner_2_arm64: # urn: "microsoftcblmariner cbl-mariner cbl-mariner-2-arm64 latest" # locations: # - AzureCloud: ["eastus"] # vm_sizes: # - "Standard_D2pls_v5" # # 'urn' is required, while 'locations' and 'vm_sizes' are optional. 
The latter # two properties can be used to specify that the image is available only in # some locations, or that it can be used only on some VM sizes. # # The 'locations' property consists of 3 items, one for each cloud (AzureCloud, # AzureUSGovernment and AzureChinaCloud). For each of these items: # # - If the item is not present, the image is available in all locations for that cloud. # - If the value is a list of locations, the image is available only in those locations # - If the value is an empty list, the image is not available in that cloud. # # URNs follow the format ' ' or # ':::' # images: alma_9: urn: "almalinux almalinux 9-gen2 latest" locations: AzureChinaCloud: [] centos_610: "OpenLogic CentOS 6.10 latest" centos_75: "OpenLogic CentOS 7.5 latest" centos_79: "OpenLogic CentOS 7_9 latest" centos_82: urn: "OpenLogic CentOS 8_2 latest" vm_sizes: # Since centos derived from redhat, please see the comment for vm size in rhel_82 - "Standard_B2s" debian_8: "credativ Debian 8 latest" debian_9: "credativ Debian 9 latest" debian_10: "Debian debian-10 10 latest" debian_11: "Debian debian-11 11 latest" debian_11_arm64: urn: "Debian debian-11 11-backports-arm64 latest" locations: AzureUSGovernment: [] AzureChinaCloud: [] flatcar: urn: "kinvolk flatcar-container-linux-free stable latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] flatcar_arm64: urn: "kinvolk flatcar-container-linux-corevm stable latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] mariner_1: urn: "microsoftcblmariner cbl-mariner cbl-mariner-1 latest" locations: AzureChinaCloud: [] mariner_2: "microsoftcblmariner cbl-mariner cbl-mariner-2 latest" mariner_2_arm64: urn: "microsoftcblmariner cbl-mariner cbl-mariner-2-arm64 latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] oracle_610: "Oracle Oracle-Linux 6.10 latest" oracle_75: urn: "Oracle Oracle-Linux 7.5 latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] oracle_79: urn: "Oracle Oracle-Linux ol79-gen2 latest" locations: AzureChinaCloud: [] oracle_82: urn: "Oracle Oracle-Linux ol82-gen2 latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] rhel_610: "RedHat RHEL 6.10 latest" rhel_75: urn: "RedHat RHEL 7.5 latest" locations: AzureChinaCloud: [] rhel_79: urn: "RedHat RHEL 7_9 latest" locations: AzureChinaCloud: [] rhel_82: urn: "RedHat RHEL 8.2 latest" locations: AzureChinaCloud: [] vm_sizes: # Previously one user reported agent hang on this VM size for redhat 7+ but not observed in rhel 8. So I'm using same vm size to test agent cgroups scenario for rhel 8 to make sure we don't see any issue in automation. 
- "Standard_B2s" rhel_90: urn: "RedHat RHEL 9_0 latest" locations: AzureChinaCloud: [] rhel_90_arm64: urn: "RedHat rhel-arm64 9_0-arm64 latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] rocky_9: urn: "erockyenterprisesoftwarefoundationinc1653071250513 rockylinux-9 rockylinux-9 latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] suse_12: "SUSE sles-12-sp5-basic gen1 latest" suse_15: "SUSE sles-15-sp2-basic gen2 latest" ubuntu_1604: "Canonical UbuntuServer 16.04-LTS latest" ubuntu_1804: "Canonical UbuntuServer 18.04-LTS latest" ubuntu_2004: "Canonical 0001-com-ubuntu-server-focal 20_04-lts latest" ubuntu_2204: "Canonical 0001-com-ubuntu-server-jammy 22_04-lts latest" ubuntu_2204_arm64: urn: "Canonical 0001-com-ubuntu-server-jammy 22_04-lts-arm64 latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] ubuntu_2404: # TODO: Currently using the daily build, update to the release build once it is available urn: "Canonical 0001-com-ubuntu-server-noble-daily 24_04-daily-lts-gen2 latest" locations: AzureChinaCloud: [] AzureUSGovernment: [] Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/keyvault_certificates.yml000066400000000000000000000004051462617747000271130ustar00rootroot00000000000000# # This test verifies that the Agent can download and extract KeyVault certificates that use different encryption algorithms # name: "KeyvaultCertificates" tests: - "keyvault_certificates/keyvault_certificates.py" images: - "endorsed" - "endorsed-arm64" Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/multi_config_ext.yml000066400000000000000000000005141462617747000260620ustar00rootroot00000000000000# # Multi-config extensions are no longer supported but there are still customers running RCv2 and we don't want to break # them. This test suite is used to verify that the agent processes RCv2 (a multi-config extension) as expected. # name: "MultiConfigExt" tests: - "multi_config_ext/multi_config_ext.py" images: - "endorsed" Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/no_outbound_connections.yml000066400000000000000000000020531462617747000274600ustar00rootroot00000000000000# # This suite is used to test the scenario where outbound connections are blocked on the VM. In this case, # the agent should fallback to the HostGAPlugin to request any downloads. # # The suite uses a custom ARM template to create a VM with a Network Security Group that blocks all outbound # connections. The first test in the suite verifies that the setup of the NSG was successful, then the rest # of the tests exercise different extension operations. The last test in the suite checks the agent log # to verify it did fallback to the HostGAPlugin to execute the extensions. # name: "NoOutboundConnections" tests: - source: "no_outbound_connections/check_no_outbound_connections.py" blocks_suite: true # If the NSG is not setup correctly, there is no point in executing the rest of the tests. 
- "agent_bvt/extension_operations.py" - "agent_bvt/run_command.py" - "agent_bvt/vm_access.py" - "no_outbound_connections/check_fallback_to_hgap.py" images: "random(endorsed)" template: "no_outbound_connections/deny_outbound_connections.py" owns_vm: true Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/pass.yml000066400000000000000000000001471462617747000234730ustar00rootroot00000000000000name: "Pass" tests: - "samples/pass_test.py" - "samples/pass_remote_test.py" images: "ubuntu_2004" Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/publish_hostname.yml000066400000000000000000000002711462617747000260670ustar00rootroot00000000000000# # Changes hostname and checks that the agent published the updated hostname to dns. # name: "PublishHostname" tests: - "publish_hostname/publish_hostname.py" images: - "endorsed"Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/recover_network_interface.yml000066400000000000000000000010611462617747000277570ustar00rootroot00000000000000# # Brings the primary network interface down and checks that the agent can recover the network. # name: "RecoverNetworkInterface" tests: - "recover_network_interface/recover_network_interface.py" images: # TODO: This scenario should be run on all distros which bring the network interface down to publish hostname. Currently, only RedhatOSUtil attempts to recover the network interface if down after hostname publishing. - "centos_79" - "centos_75" - "centos_82" - "rhel_75" - "rhel_79" - "rhel_82" - "oracle_75" - "oracle_79" - "oracle_82"Azure-WALinuxAgent-2b21de5/tests_e2e/test_suites/vmss.yml000066400000000000000000000002021462617747000235050ustar00rootroot00000000000000# # Sample test for scale sets # name: "VMSS" tests: - "samples/vmss_test.py" executes_on_scale_set: true images: "ubuntu_2004" Azure-WALinuxAgent-2b21de5/tests_e2e/tests/000077500000000000000000000000001462617747000205675ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/__init__.py000066400000000000000000000000001462617747000226660ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_bvt/000077500000000000000000000000001462617747000225405ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_bvt/extension_operations.py000077500000000000000000000066141462617747000274030ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # BVT for extension operations (Install/Enable/Update/Uninstall). # # The test executes an older version of an extension, then updates it to a newer version, and lastly # it removes it. The actual extension is irrelevant, but the test uses CustomScript for simplicity, # since it's invocation is trivial and the entire extension workflow can be tested end-to-end by # checking the message in the status produced by the extension. 
# import uuid from assertpy import assert_that from azure.core.exceptions import ResourceNotFoundError from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds, VmExtensionIdentifier from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class ExtensionOperationsBvt(AgentVmTest): def run(self): ssh_client: SshClient = self._context.create_ssh_client() is_arm64: bool = ssh_client.get_architecture() == "aarch64" custom_script_2_0 = VirtualMachineExtensionClient( self._context.vm, VmExtensionIds.CustomScript, resource_name="CustomScript") if is_arm64: log.info("Will skip the update scenario, since currently there is only 1 version of CSE on ARM64") else: log.info("Installing %s", custom_script_2_0) message = f"Hello {uuid.uuid4()}!" custom_script_2_0.enable( protected_settings={ 'commandToExecute': f"echo \'{message}\'" }, auto_upgrade_minor_version=False ) custom_script_2_0.assert_instance_view(expected_version="2.0", expected_message=message) custom_script_2_1 = VirtualMachineExtensionClient( self._context.vm, VmExtensionIdentifier(VmExtensionIds.CustomScript.publisher, VmExtensionIds.CustomScript.type, "2.1"), resource_name="CustomScript") if is_arm64: log.info("Installing %s", custom_script_2_1) else: log.info("Updating %s", custom_script_2_0) message = f"Hello {uuid.uuid4()}!" custom_script_2_1.enable( protected_settings={ 'commandToExecute': f"echo \'{message}\'" } ) custom_script_2_1.assert_instance_view(expected_version="2.1", expected_message=message) custom_script_2_1.delete() assert_that(custom_script_2_1.get_instance_view).\ described_as("Fetching the instance view should fail after removing the extension").\ raises(ResourceNotFoundError) if __name__ == "__main__": ExtensionOperationsBvt.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_bvt/run_command.py000077500000000000000000000070711462617747000254240ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # BVT for RunCommand. # # Note that there are two incarnations of RunCommand (which are actually two different extensions): # Microsoft.CPlat.Core.RunCommandHandlerLinux and Microsoft.CPlat.Core.RunCommandLinux. This test # exercises both using the same strategy: execute the extension to create a file on the test VM, # then fetch the contents of the file over SSH and compare against the known value. 
# import base64 import uuid from assertpy import assert_that, soft_assertions from typing import Callable, Dict from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class RunCommandBvt(AgentVmTest): class TestCase: def __init__(self, extension: VirtualMachineExtensionClient, get_settings: Callable[[str], Dict[str, str]]): self.extension = extension self.get_settings = get_settings def run(self): ssh_client: SshClient = self._context.create_ssh_client() test_cases = [ RunCommandBvt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommand, resource_name="RunCommand"), lambda s: { "script": base64.standard_b64encode(bytearray(s, 'utf-8')).decode('utf-8') }) ] if ssh_client.get_architecture() == "aarch64": log.info("Skipping test case for %s, since it has not been published on ARM64", VmExtensionIds.RunCommandHandler) else: test_cases.append( RunCommandBvt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="RunCommandHandler"), lambda s: { "source": { "script": s } })) with soft_assertions(): for t in test_cases: log.info("Test case: %s", t.extension) unique = str(uuid.uuid4()) test_file = f"/tmp/waagent-test.{unique}" script = f"echo '{unique}' > {test_file}" log.info("Script to execute: %s", script) t.extension.enable(settings=t.get_settings(script)) t.extension.assert_instance_view() log.info("Verifying contents of the file created by the extension") contents = ssh_client.run_command(f"cat {test_file}").rstrip() # remove the \n assert_that(contents).\ described_as("Contents of the file created by the extension").\ is_equal_to(unique) log.info("The contents match") if __name__ == "__main__": RunCommandBvt.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_bvt/vm_access.py000077500000000000000000000061671462617747000250720ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # BVT for the VmAccess extension # # The test executes VmAccess to add a user and then verifies that an SSH connection to the VM can # be established with that user's identity. 
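#
# For reference, the protected settings passed to VmAccess have this shape (a minimal sketch; the
# actual test generates a fresh user name and key pair):
#
#     protected_settings = {"username": "test-user", "ssh_key": "<public key>", "reset_ssh": "false"}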
# import uuid from assertpy import assert_that from pathlib import Path from tests_e2e.tests.lib.agent_test import AgentVmTest, TestSkipped from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class VmAccessBvt(AgentVmTest): def run(self): ssh_client: SshClient = self._context.create_ssh_client() if not VmExtensionIds.VmAccess.supports_distro(ssh_client.run_command("get_distro.py").rstrip()): raise TestSkipped("Currently VMAccess is not supported on this distro") # Try to use a unique username for each test run (note that we truncate to 32 chars to # comply with the rules for usernames) log.info("Generating a new username and SSH key") username: str = f"test-{uuid.uuid4()}"[0:32] log.info("Username: %s", username) # Create an SSH key for the user and fetch the public key private_key_file: Path = self._context.working_directory/f"{username}_rsa" public_key_file: Path = self._context.working_directory/f"{username}_rsa.pub" log.info("Generating SSH key as %s", private_key_file) ssh_client = SshClient(ip_address=self._context.ip_address, username=username, identity_file=private_key_file) ssh_client.generate_ssh_key(private_key_file) with public_key_file.open() as f: public_key = f.read() # Invoke the extension vm_access = VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.VmAccess, resource_name="VmAccess") vm_access.enable( protected_settings={ 'username': username, 'ssh_key': public_key, 'reset_ssh': 'false' } ) vm_access.assert_instance_view() # Verify the user was added correctly by starting an SSH session to the VM log.info("Verifying SSH connection to the test VM") stdout = ssh_client.run_command("echo -n $USER") assert_that(stdout).described_as("Output from SSH command").is_equal_to(username) log.info("SSH command output ($USER): %s", stdout) if __name__ == "__main__": VmAccessBvt.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_cgroups/000077500000000000000000000000001462617747000234275ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_cgroups/agent_cgroups.py000066400000000000000000000031711462617747000266430ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.logging import log class AgentCgroups(AgentVmTest): """ This test verifies that the agent is running in the expected cgroups. 
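    It restarts the agent service (so that the service starts with the configuration set up by the
    cgroupconfigurator) and then runs a remote script on the test VM that checks the agent's cgroup membership.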
""" def __init__(self, context: AgentVmTestContext): super().__init__(context) self._ssh_client = self._context.create_ssh_client() def run(self): log.info("=====Prepare agent=====") log.info("Restarting agent service to make sure service starts with new configuration that was setup by the cgroupconfigurator") self._ssh_client.run_command("agent-service restart", use_sudo=True) log.info("=====Validating agent cgroups=====") self._run_remote_test(self._ssh_client, "agent_cgroups-check_cgroups_agent.py") log.info("Successfully Verified that agent present in correct cgroups") if __name__ == "__main__": AgentCgroups.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_cgroups/agent_cpu_quota.py000066400000000000000000000041071462617747000271610ustar00rootroot00000000000000from typing import List, Dict, Any from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.logging import log class AgentCPUQuota(AgentVmTest): """ The test verify that the agent detects when it is throttled for using too much CPU, that it detects processes that do belong to the agent's cgroup, and that resource metrics are generated. """ def __init__(self, context: AgentVmTestContext): super().__init__(context) self._ssh_client = self._context.create_ssh_client() def run(self): log.info("=====Validating agent cpu quota checks") self._run_remote_test(self._ssh_client, "agent_cpu_quota-check_agent_cpu_quota.py", use_sudo=True) log.info("Successfully Verified that agent running in expected CPU quotas") def get_ignore_error_rules(self) -> List[Dict[str, Any]]: ignore_rules = [ # This is produced by the test, so it is expected # Examples: # 2023-10-03T17:59:03.007572Z INFO MonitorHandler ExtHandler [CGW] Disabling resource usage monitoring. Reason: Check on cgroups failed: # [CGroupsException] The agent's cgroup includes unexpected processes: ['[PID: 3190] /usr/bin/python3\x00/home/azureuser/bin/agent_cpu_quota-start_servi', '[PID: 3293] dd\x00if=/dev/zero\x00of=/dev/null\x00'] # [CGroupsException] The agent has been throttled for 5.7720997 seconds {'message': r"Disabling resource usage monitoring. Reason: Check on cgroups failed"}, # This may happen during service stop while terminating the process # Example: # 2022-03-11T21:11:11.713161Z ERROR E2ETest [Errno 3] No such process: {'message': r'E2ETest.*No such process'}, # 2022-10-26T15:38:39.655677Z ERROR E2ETest 'dd if=/dev/zero of=/dev/null' failed: -15 (): {'message': r"E2ETest.*dd.*failed: -15"} ] return ignore_rules if __name__ == "__main__": AgentCPUQuota.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_ext_workflow/000077500000000000000000000000001462617747000244775ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_ext_workflow/README.md000066400000000000000000000041501462617747000257560ustar00rootroot00000000000000# Agent Extension Worflow Test This scenario tests if the correct extension workflow sequence is being executed from the agent. ### GuestAgentDcrTestExtension This is a test extension that exists for the sole purpose of testing the extension workflow of agent. This is currently deployed to SCUS only. All the extension does is prints the settings['name'] out to stdout. It is run everytime enable is called. Another important feature of this extension is that it maintains a `operations-.log` **for every operation that the agent executes on that extension**. 
We use this to confirm that the agent executed the correct sequence of operations. Sample operations-.log file snippet - ```text Date:2019-07-30T21:54:03Z; Operation:install; SeqNo:0 Date:2019-07-30T21:54:05Z; Operation:enable; SeqNo:0 Date:2019-07-30T21:54:37Z; Operation:enable; SeqNo:1 Date:2019-07-30T21:55:20Z; Operation:disable; SeqNo:1 Date:2019-07-30T21:55:22Z; Operation:uninstall; SeqNo:1 ``` The setting for this extension is of the format - ```json { "name": String } ``` ##### Repo link https://github.com/larohra/GuestAgentDcrTestExtension ##### Available Versions: - 1.1.5 - Version with Basic functionalities as mentioned above - 1.2.0 - Same functionalities as above with `"updateMode": "UpdateWithInstall"` in HandlerManifest.json to test update case - 1.3.0 - Same functionalities as above with `"updateMode": "UpdateWithoutInstall"` in HandlerManifest.json to test update case ### Test Sequence - Install the test extension on the VM - Assert the extension status by checking if our Enable string matches the status message (We receive the status message by using the Azure SDK by polling for the VM instance view and parsing the extension status message) The Enable string of our test is of the following format (this is set in the `Settings` object when we call enable from the tests ) - ```text [ExtensionName]-[Version], Count: [Enable-count] ``` - Match the operation sequence as per the test and make sure they are in the correct chronological order - Restart the agent and verify if the correct operation sequence is followedAzure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_ext_workflow/extension_workflow.py000066400000000000000000000523141462617747000310240ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from azure.mgmt.compute.models import VirtualMachineExtensionInstanceView from assertpy import assert_that, soft_assertions from random import choice import uuid from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds, VmExtensionIdentifier from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class ExtensionWorkflow(AgentVmTest): """ This scenario tests if the correct extension workflow sequence is being executed from the agent. It installs the GuestAgentDcrTestExtension on the test VM and makes requests to install, enable, update, and delete the extension from the VM. The GuestAgentDcrTestExtension maintains a local `operations-.log` for every operation that the agent executes on that extension. We use this to confirm that the agent executed the correct sequence of operations. 
Sample operations-<version>.log file snippet -
        Date:2019-07-30T21:54:03Z; Operation:install; SeqNo:0
        Date:2019-07-30T21:54:05Z; Operation:enable; SeqNo:0
        Date:2019-07-30T21:54:37Z; Operation:enable; SeqNo:1
        Date:2019-07-30T21:55:20Z; Operation:disable; SeqNo:1
        Date:2019-07-30T21:55:22Z; Operation:uninstall; SeqNo:1

    The setting for the GuestAgentDcrTestExtension is of the format -
        {
          "name": String
        }

    Test sequence -
    - Install the test extension on the VM
    - Assert the extension status by checking if our Enable string matches the status message
        - The Enable string of our test is of the following format (this is set in the `Settings` object when we call
        enable from the tests): [ExtensionName]-[Version], Count: [Enable-count]
    - Match the operation sequence as per the test and make sure they are in the correct chronological order
    - Restart the agent and verify if the correct operation sequence is followed
    """
    def __init__(self, context: AgentVmTestContext):
        super().__init__(context)
        self._ssh_client = context.create_ssh_client()

    # This class represents the GuestAgentDcrTestExtension running on the test VM
    class GuestAgentDcrTestExtension:
        COUNT_KEY_NAME = "Count"
        NAME_KEY_NAME = "name"
        DATA_KEY_NAME = "data"

        def __init__(self, extension: VirtualMachineExtensionClient, ssh_client: SshClient, version: str):
            self.extension = extension
            self.name = "GuestAgentDcrTestExt"
            self.version = version
            self.expected_message = ""
            self.enable_count = 0
            self.ssh_client = ssh_client
            self.data = None

        def modify_ext_settings_and_enable(self, data=None):
            self.enable_count += 1

            # Settings follow this format: [ExtensionName]-[Version], Count: [Enable-count]
            setting_name = "%s-%s, %s: %s" % (self.name, self.version, self.COUNT_KEY_NAME, self.enable_count)

            # We include data in the settings to test the special characters case. The settings with data follow this
            # format: [ExtensionName]-[Version], Count: [Enable-count], data: [data]
            if data is not None:
                setting_name = "{0}, {1}: {2}".format(setting_name, self.DATA_KEY_NAME, data)

            self.expected_message = setting_name
            settings = {self.NAME_KEY_NAME: setting_name.encode('utf-8')}

            log.info("")
            log.info("Add or update extension {0}, version {1}, settings {2}".format(self.extension, self.version, settings))
            self.extension.enable(settings=settings, auto_upgrade_minor_version=False)

        def assert_instance_view(self, data=None):
            log.info("")

            # If data is not None, we want to assert the instance view has the expected data
            if data is None:
                log.info("Assert instance view has expected message for test extension. Expected version: {0}, "
                         "Expected message: {1}".format(self.version, self.expected_message))
                self.extension.assert_instance_view(expected_version=self.version, expected_message=self.expected_message)
            else:
                self.data = data
                log.info("Assert instance view has expected data for test extension.
Expected version: {0}, " "Expected data: {1}".format(self.version, data)) self.extension.assert_instance_view(expected_version=self.version, assert_function=self.assert_data_in_instance_view) def assert_data_in_instance_view(self, instance_view: VirtualMachineExtensionInstanceView): log.info("Asserting extension status ...") status_message = instance_view.statuses[0].message log.info("Status message: %s" % status_message) with soft_assertions(): expected_ext_version = "%s-%s" % (self.name, self.version) assert_that(expected_ext_version in status_message).described_as( f"Specific extension version name should be in the InstanceView message ({expected_ext_version})").is_true() expected_count = "%s: %s" % (self.COUNT_KEY_NAME, self.enable_count) assert_that(expected_count in status_message).described_as( f"Expected count should be in the InstanceView message ({expected_count})").is_true() if self.data is not None: expected_data = "{0}: {1}".format(self.DATA_KEY_NAME, self.data) assert_that(expected_data in status_message).described_as( f"Expected data should be in the InstanceView message ({expected_data})").is_true() def execute_assertion_script(self, file_name, args): log.info("") log.info("Running {0} remotely with arguments {1}".format(file_name, args)) result = self.ssh_client.run_command(f"{file_name} {args}", use_sudo=True) log.info(result) log.info("Assertion completed successfully") def assert_scenario(self, file_name: str, command_args: str, assert_status: bool = False, restart_agent: list = None, data: str = None): # Assert the extension status by checking if our Enable string matches the status message in the instance # view if assert_status: self.assert_instance_view(data=data) # Remotely execute the assertion script self.execute_assertion_script(file_name, command_args) # Restart the agent and test the status again if enabled (by checking the operations.log file in the VM) # Restarting agent should just run enable again and rerun the same settings if restart_agent is not None: log.info("") log.info("Restarting the agent...") output = self.ssh_client.run_command("agent-service restart", use_sudo=True) log.info("Restart completed:\n%s", output) for args in restart_agent: self.execute_assertion_script('agent_ext_workflow-assert_operation_sequence.py', args) if assert_status: self.assert_instance_view() def update_ext_version(self, extension: VirtualMachineExtensionClient, version: str): self.extension = extension self.version = version def run(self): is_arm64: bool = self._ssh_client.get_architecture() == "aarch64" if is_arm64: log.info("Skipping test case for %s, since it has not been published on ARM64", VmExtensionIds.GuestAgentDcrTestExtension) else: log.info("") log.info("*******Verifying the extension install scenario*******") # Record the time we start the test start_time = self._ssh_client.run_command("date '+%Y-%m-%dT%TZ'").rstrip() # Create DcrTestExtension with version 1.1.5 dcr_test_ext_id_1_1 = VmExtensionIdentifier( VmExtensionIds.GuestAgentDcrTestExtension.publisher, VmExtensionIds.GuestAgentDcrTestExtension.type, "1.1" ) dcr_test_ext_client = VirtualMachineExtensionClient( self._context.vm, dcr_test_ext_id_1_1, resource_name="GuestAgentDcrTestExt" ) dcr_ext = ExtensionWorkflow.GuestAgentDcrTestExtension( extension=dcr_test_ext_client, ssh_client=self._ssh_client, version="1.1.5" ) # Install test extension on the VM dcr_ext.modify_ext_settings_and_enable() # command_args are the args we pass to the agent_ext_workflow-assert_operation_sequence.py file to verify # the 
operation sequence for the current test
            command_args = f"--start-time {start_time} " \
                           f"normal_ops_sequence " \
                           f"--version {dcr_ext.version} " \
                           f"--ops install enable"
            # restart_agent_command_args are the args we pass to the agent_ext_workflow-assert_operation_sequence.py file
            # to verify the operation sequence after restarting the agent. Restarting the agent should just run enable again
            # and rerun the same settings
            restart_agent_command_args = [f"--start-time {start_time} "
                                          f"normal_ops_sequence "
                                          f"--version {dcr_ext.version} "
                                          f"--ops install enable enable"]

            # Assert the operation sequence to confirm the agent executed the operations in the correct chronological
            # order
            dcr_ext.assert_scenario(
                file_name='agent_ext_workflow-assert_operation_sequence.py',
                command_args=command_args,
                assert_status=True,
                restart_agent=restart_agent_command_args
            )

            log.info("")
            log.info("*******Verifying the extension enable scenario*******")

            # Record the time we start the test
            start_time = self._ssh_client.run_command("date '+%Y-%m-%dT%TZ'").rstrip()

            # Enable test extension on the VM
            dcr_ext.modify_ext_settings_and_enable()

            command_args = f"--start-time {start_time} " \
                           f"normal_ops_sequence " \
                           f"--version {dcr_ext.version} " \
                           f"--ops enable"
            restart_agent_command_args = [f"--start-time {start_time} "
                                          f"normal_ops_sequence "
                                          f"--version {dcr_ext.version} "
                                          f"--ops enable enable"]

            dcr_ext.assert_scenario(
                file_name='agent_ext_workflow-assert_operation_sequence.py',
                command_args=command_args,
                assert_status=True,
                restart_agent=restart_agent_command_args
            )

            log.info("")
            log.info("*******Verifying the extension enable with special characters scenario*******")

            test_guid = str(uuid.uuid4())
            random_special_char_sentences = [
                "Quizdeltagerne spiste jordbær med fløde, mens cirkusklovnen Wolther spillede på xylofon.",
                "Falsches Üben von Xylophonmusik quält jeden größeren Zwerg",
                "Zwölf Boxkämpfer jagten Eva quer über den Sylter Deich",
                "Heizölrückstoßabdämpfung",
                "Γαζέες καὶ μυρτιὲς δὲν θὰ βρῶ πιὰ στὸ χρυσαφὶ ξέφωτο",
                "Ξεσκεπάζω τὴν ψυχοφθόρα βδελυγμία",
                "El pingüino Wenceslao hizo kilómetros bajo exhaustiva lluvia y frío, añoraba a su querido cachorro.",
                "Portez ce vieux whisky au juge blond qui fume sur son île intérieure, à côté de l'alcôve ovoïde, où les bûches"
            ]
            sentence = choice(random_special_char_sentences)
            test_str = "{0}; Special chars: {1}".format(test_guid, sentence)

            # Enable test extension on the VM
            dcr_ext.modify_ext_settings_and_enable(data=test_str)

            command_args = f"--data {test_guid}"

            # We first ensure that the stdout contains the special characters and then we check if the test_guid is
            # logged at least once in the agent log to ensure that there were no errors when handling special characters
            # in the agent
            dcr_ext.assert_scenario(
                file_name='agent_ext_workflow-check_data_in_agent_log.py',
                command_args=command_args,
                assert_status=True,
                data=test_guid
            )

            log.info("")
            log.info("*******Verifying the extension uninstall scenario*******")

            # Record the time we start the test
            start_time = self._ssh_client.run_command("date '+%Y-%m-%dT%TZ'").rstrip()

            # Remove the test extension on the VM
            log.info("Delete %s from VM", dcr_test_ext_client)
            dcr_ext.extension.delete()

            command_args = f"--start-time {start_time} " \
                           f"normal_ops_sequence " \
                           f"--version {dcr_ext.version} " \
                           f"--ops disable uninstall"
            restart_agent_command_args = [f"--start-time {start_time} "
                                          f"normal_ops_sequence "
                                          f"--version {dcr_ext.version} "
                                          f"--ops disable uninstall"]

            dcr_ext.assert_scenario(
file_name='agent_ext_workflow-assert_operation_sequence.py', command_args=command_args, restart_agent=restart_agent_command_args ) log.info("") log.info("*******Verifying the extension update with install scenario*******") # Record the time we start the test start_time = self._ssh_client.run_command("date '+%Y-%m-%dT%TZ'").rstrip() # Version 1.2.0 of the test extension has the same functionalities as 1.1.5 with # "updateMode": "UpdateWithInstall" in HandlerManifest.json to test update case new_version_update_mode_with_install = "1.2.0" old_version = "1.1.5" # Create DcrTestExtension with version 1.1 and 1.2 dcr_test_ext_id_1_2 = VmExtensionIdentifier( VmExtensionIds.GuestAgentDcrTestExtension.publisher, VmExtensionIds.GuestAgentDcrTestExtension.type, "1.2" ) dcr_test_ext_client_1_2 = VirtualMachineExtensionClient( self._context.vm, dcr_test_ext_id_1_2, resource_name="GuestAgentDcrTestExt" ) dcr_ext = ExtensionWorkflow.GuestAgentDcrTestExtension( extension=dcr_test_ext_client, ssh_client=self._ssh_client, version=old_version ) # Install test extension v1.1.5 on the VM and assert instance view dcr_ext.modify_ext_settings_and_enable() dcr_ext.assert_instance_view() # Update extension object & version to new version dcr_ext.update_ext_version(dcr_test_ext_client_1_2, new_version_update_mode_with_install) # Install test extension v1.2.0 on the VM dcr_ext.modify_ext_settings_and_enable() command_args = f"--start-time {start_time} " \ f"update_sequence " \ f"--old-version {old_version} " \ f"--old-ver-ops disable uninstall " \ f"--new-version {new_version_update_mode_with_install} " \ f"--new-ver-ops update install enable " \ f"--final-ops disable update uninstall install enable" restart_agent_command_args = [ f"--start-time {start_time} " f"normal_ops_sequence " f"--version {old_version} " f"--ops disable uninstall", f"--start-time {start_time} " f"normal_ops_sequence " f"--version {new_version_update_mode_with_install} " f"--ops update install enable enable" ] dcr_ext.assert_scenario( file_name='agent_ext_workflow-assert_operation_sequence.py', command_args=command_args, assert_status=True, restart_agent=restart_agent_command_args ) log.info("") log.info("Delete %s from VM", dcr_test_ext_client_1_2) dcr_ext.extension.delete() log.info("") log.info("*******Verifying the extension update without install scenario*******") # Record the time we start the test start_time = self._ssh_client.run_command("date '+%Y-%m-%dT%TZ'").rstrip() # Version 1.3.0 of the test extension has the same functionalities as 1.1.5 with # "updateMode": "UpdateWithoutInstall" in HandlerManifest.json to test update case new_version_update_mode_without_install = "1.3.0" # Create DcrTestExtension with version 1.1 and 1.3 dcr_test_ext_id_1_3 = VmExtensionIdentifier( VmExtensionIds.GuestAgentDcrTestExtension.publisher, VmExtensionIds.GuestAgentDcrTestExtension.type, "1.3") dcr_test_ext_client_1_3 = VirtualMachineExtensionClient( self._context.vm, dcr_test_ext_id_1_3, resource_name="GuestAgentDcrTestExt" ) dcr_ext = ExtensionWorkflow.GuestAgentDcrTestExtension( extension=dcr_test_ext_client, ssh_client=self._ssh_client, version=old_version ) # Install test extension v1.1.5 on the VM and assert instance view dcr_ext.modify_ext_settings_and_enable() dcr_ext.assert_instance_view() # Update extension object & version to new version dcr_ext.update_ext_version(dcr_test_ext_client_1_3, new_version_update_mode_without_install) # Install test extension v1.3.0 on the VM dcr_ext.modify_ext_settings_and_enable() command_args = f"--start-time 
{start_time} " \ f"update_sequence " \ f"--old-version {old_version} " \ f"--old-ver-ops disable uninstall " \ f"--new-version {new_version_update_mode_without_install} " \ f"--new-ver-ops update enable " \ f"--final-ops disable update uninstall enable" restart_agent_command_args = [ f"--start-time {start_time} " f"normal_ops_sequence " f"--version {old_version} " f"--ops disable uninstall", f"--start-time {start_time} " f"normal_ops_sequence " f"--version {new_version_update_mode_without_install} " f"--ops update enable enable" ] dcr_ext.assert_scenario( file_name='agent_ext_workflow-assert_operation_sequence.py', command_args=command_args, assert_status=True, restart_agent=restart_agent_command_args ) log.info("") log.info("*******Verifying no lag between agent start and gs processing*******") log.info("") log.info("Running agent_ext_workflow-validate_no_lag_between_agent_start_and_gs_processing.py remotely...") result = self._ssh_client.run_command("agent_ext_workflow-validate_no_lag_between_agent_start_and_gs_processing.py", use_sudo=True) log.info(result) log.info("Validation for no lag time between agent start and gs processing completed successfully") if __name__ == "__main__": ExtensionWorkflow.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_firewall/000077500000000000000000000000001462617747000235525ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_firewall/agent_firewall.py000066400000000000000000000027711462617747000271160ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.logging import log class AgentFirewall(AgentVmTest): """ This test verifies the agent firewall rules are added properly. It checks each firewall rule is present and working as expected. """ def __init__(self, context: AgentVmTestContext): super().__init__(context) self._ssh_client = self._context.create_ssh_client() def run(self): log.info("Checking iptable rules added by the agent") self._run_remote_test(self._ssh_client, f"agent_firewall-verify_all_firewall_rules.py --user {self._context.username}", use_sudo=True) log.info("Successfully verified all rules present and working as expected.") if __name__ == "__main__": AgentFirewall.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_not_provisioned/000077500000000000000000000000001462617747000251665ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_not_provisioned/agent_not_provisioned.py000077500000000000000000000112651462617747000321470ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from assertpy import fail, assert_that
from typing import Any, Dict, List

from azure.mgmt.compute.models import VirtualMachineInstanceView

from tests_e2e.tests.lib.agent_test import AgentVmTest
from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.shell import CommandError
from tests_e2e.tests.lib.ssh_client import SshClient
from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient


class AgentNotProvisioned(AgentVmTest):
    """
    When osProfile.linuxConfiguration.provisionVMAgent is set to 'false', this test verifies that the agent is disabled
    and that extension operations are not allowed.
    """
    def run(self):
        #
        # Check the agent's log for the messages that indicate it is disabled.
        #
        ssh_client: SshClient = self._context.create_ssh_client()
        log.info("Checking the Agent's log to verify that it is disabled.")
        try:
            output = ssh_client.run_command("""
                # We need to wait for the agent to start and hit the disable code, give it a few minutes
                n=18
                for i in $(seq $n); do
                    grep -E 'WARNING.*Daemon.*Disabling guest agent in accordance with ovf-env.xml' /var/log/waagent.log || \
                    grep -E 'WARNING.*Daemon.*Disabling the guest agent by sleeping forever; to re-enable, remove /var/lib/waagent/disable_agent and restart' /var/log/waagent.log
                    if [[ $? == 0 ]]; then
                        exit 0
                    fi
                    echo "Did not find the expected message in the agent's log, retrying after sleeping for a few seconds (attempt $i/$n)..."
                    sleep 10
                done
                echo "Did not find the expected message in the agent's log, giving up."
                exit 1
            """)
            log.info("The Agent is disabled, log message: [%s]", output.rstrip())
        except CommandError as e:
            fail(f"The agent's log does not contain the expected messages: {e}")

        #
        # Validate that the agent is not reporting status.
        #
        log.info("Verifying that the Agent status is 'Not Ready' (i.e. it is not reporting status).")
        instance_view: VirtualMachineInstanceView = self._context.vm.get_instance_view()
        log.info("Instance view of VM Agent:\n%s", instance_view.vm_agent.serialize())
        assert_that(instance_view.vm_agent.statuses).described_as("The VM agent should have exactly 1 status").is_length(1)
        assert_that(instance_view.vm_agent.statuses[0].code).described_as("The VM Agent should not be available").is_equal_to('ProvisioningState/Unavailable')
        assert_that(instance_view.vm_agent.statuses[0].display_status).described_as("The VM Agent should not be ready").is_equal_to('Not Ready')
        log.info("The Agent status is 'Not Ready'")

        #
        # Validate that extensions cannot be executed.
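        # (The CustomScript enable call below is expected to fail with an OperationNotAllowed
        # error, since extension processing is disabled when provisionVMAgent is false.)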
# log.info("Verifying that extension processing is disabled.") log.info("Executing CustomScript; it should fail.") custom_script = VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript, resource_name="CustomScript") try: custom_script.enable(settings={'commandToExecute': "date"}, force_update=True, timeout=20 * 60) fail("CustomScript should have failed") except Exception as error: assert_that("OperationNotAllowed" in str(error)) \ .described_as(f"Expected an OperationNotAllowed: {error}") \ .is_true() log.info("CustomScript failed, as expected: %s", error) def get_ignore_error_rules(self) -> List[Dict[str, Any]]: return [ {'message': 'Disabling guest agent in accordance with ovf-env.xml'}, {'message': 'Disabling the guest agent by sleeping forever; to re-enable, remove /var/lib/waagent/disable_agent and restart'} ] if __name__ == "__main__": AgentNotProvisioned.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_not_provisioned/disable_agent_provisioning.py000077500000000000000000000047001462617747000331330ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from typing import Any, Dict from tests_e2e.tests.lib.update_arm_template import UpdateArmTemplate class DisableAgentProvisioning(UpdateArmTemplate): """ Updates the ARM template to set osProfile.linuxConfiguration.provisionVMAgent to false. """ def update(self, template: Dict[str, Any], is_lisa_template: bool) -> None: if not is_lisa_template: raise Exception('This test can only customize LISA ARM templates.') # # NOTE: LISA's template uses this function to generate the value for osProfile.linuxConfiguration. The function is # under the 'lisa' namespace. We set 'provisionVMAgent' to False. # # "getLinuxConfiguration": { # "parameters": [ # ... # ], # "output": { # "type": "object", # "value": { # "disablePasswordAuthentication": true, # "ssh": { # "publicKeys": [ # { # "path": "[parameters('keyPath')]", # "keyData": "[parameters('publicKeyData')]" # } # ] # }, # "provisionVMAgent": true # } # } # } # get_linux_configuration = self.get_lisa_function(template, 'getLinuxConfiguration') output = self.get_function_output(get_linux_configuration) if output.get('customData') is not None: raise Exception(f"The getOSProfile function already has a 'customData'. Won't override it. Definition: {get_linux_configuration}") output['provisionVMAgent'] = False Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_persist_firewall/000077500000000000000000000000001462617747000253235ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_persist_firewall/agent_persist_firewall.py000066400000000000000000000075221462617747000324370ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient class AgentPersistFirewallTest(AgentVmTest): """ This test verifies agent setup persist firewall rules using custom network setup service or firewalld service. Ensure those rules are added on boot and working as expected. """ def __init__(self, context: AgentVmTestContext): super().__init__(context) self._ssh_client: SshClient = self._context.create_ssh_client() def run(self): self._test_setup() # Test case 1: After test agent install, verify firewalld or network.setup is running self._verify_persist_firewall_service_running() # Test case 2: Perform reboot and ensure firewall rules added on boot and working as expected self._context.vm.restart(wait_for_boot=True, ssh_client=self._ssh_client) self._verify_persist_firewall_service_running() self._verify_firewall_rules_on_boot("first_boot") # Test case 3: Disable the agent(so that agent won't get started after reboot) # perform reboot and ensure firewall rules added on boot even after agent is disabled self._disable_agent() self._context.vm.restart(wait_for_boot=True, ssh_client=self._ssh_client) self._verify_persist_firewall_service_running() self._verify_firewall_rules_on_boot("second_boot") # Test case 4: perform firewalld rules deletion and ensure deleted rules added back to rule set after agent start self._verify_firewall_rules_readded() def _test_setup(self): log.info("Doing test setup") self._run_remote_test(self._ssh_client, f"agent_persist_firewall-test_setup {self._context.username}", use_sudo=True) log.info("Successfully completed test setup\n") def _verify_persist_firewall_service_running(self): log.info("Verifying persist firewall service is running") self._run_remote_test(self._ssh_client, "agent_persist_firewall-verify_persist_firewall_service_running.py", use_sudo=True) log.info("Successfully verified persist firewall service is running\n") def _verify_firewall_rules_on_boot(self, boot_name): log.info("Verifying firewall rules on {0}".format(boot_name)) self._run_remote_test(self._ssh_client, f"agent_persist_firewall-verify_firewall_rules_on_boot.py --user {self._context.username} --boot_name {boot_name}", use_sudo=True) log.info("Successfully verified firewall rules on {0}".format(boot_name)) def _disable_agent(self): log.info("Disabling agent") self._run_remote_test(self._ssh_client, "agent-service disable", use_sudo=True) log.info("Successfully disabled agent\n") def _verify_firewall_rules_readded(self): log.info("Verifying firewall rules readded") self._run_remote_test(self._ssh_client, "agent_persist_firewall-verify_firewalld_rules_readded.py", use_sudo=True) log.info("Successfully verified firewall rules readded\n") 
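For reference, here is a minimal sketch of the kind of check the remote `agent_persist_firewall-*` scripts perform. It is not part of the test suite; the `security` iptables table and the exact rule layout are assumptions for illustration, with the WireServer address (168.63.129.16) being the agent's well-known firewall target:

```python
# Minimal sketch (assumptions noted above): look for any rule in the iptables
# 'security' table OUTPUT chain that references the Azure WireServer address.
import subprocess

WIRESERVER_IP = "168.63.129.16"  # well-known Azure WireServer address


def wireserver_rules_present() -> bool:
    # '-w' waits for the xtables lock, '-n' skips name resolution
    result = subprocess.run(
        ["iptables", "-w", "-t", "security", "-L", "OUTPUT", "-n"],
        check=True, capture_output=True, text=True)
    return any(WIRESERVER_IP in line for line in result.stdout.splitlines())


if __name__ == "__main__":
    print("agent firewall rules present:", wireserver_rules_present())
```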
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_publish/000077500000000000000000000000001462617747000234135ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_publish/agent_publish.py000066400000000000000000000102611462617747000266110ustar00rootroot00000000000000#!/usr/bin/env python3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import uuid
from datetime import datetime
from typing import Any, Dict, List

from tests_e2e.tests.lib.agent_test import AgentVmTest
from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext
from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds, VmExtensionIdentifier
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.ssh_client import SshClient
from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient


class AgentPublishTest(AgentVmTest):
    """
    This script verifies that the agent update was performed on the VM.
    """
    def __init__(self, context: AgentVmTestContext):
        super().__init__(context)
        self._ssh_client: SshClient = self._context.create_ssh_client()

    def run(self):
        """
        We run the scenario in the following steps:
            1. Print the current agent version before the update
            2. Prepare the agent for the update
            3. Check for agent update from the log
            4. Print the agent version after the update
            5. Ensure CSE is working
        """
        self._get_agent_info()
        self._prepare_agent()
        self._check_update()
        self._get_agent_info()
        self._check_cse()

    def get_ignore_errors_before_timestamp(self) -> datetime:
        timestamp = self._ssh_client.run_command("agent_publish-get_agent_log_record_timestamp.py")
        return datetime.strptime(timestamp.strip(), u'%Y-%m-%d %H:%M:%S.%f')

    def _get_agent_info(self) -> None:
        stdout: str = self._ssh_client.run_command("waagent-version", use_sudo=True)
        log.info('Agent info \n%s', stdout)

    def _prepare_agent(self) -> None:
        log.info("Modifying agent update related config flags and renaming the log file")
        self._run_remote_test(self._ssh_client, "sh -c 'agent-service stop && mv /var/log/waagent.log /var/log/waagent.$(date --iso-8601=seconds).log && update-waagent-conf AutoUpdate.UpdateToLatestVersion=y AutoUpdate.GAFamily=Test AutoUpdate.Enabled=y Extensions.Enabled=y'", use_sudo=True)
        log.info('Renamed the log file and updated the agent-update related config flags')

    def _check_update(self) -> None:
        log.info("Verifying the agent update status")
        self._run_remote_test(self._ssh_client, "agent_publish-check_update.py")
        log.info('Successfully checked the agent update')

    def _check_cse(self) -> None:
        custom_script_2_1 = VirtualMachineExtensionClient(
            self._context.vm,
            VmExtensionIdentifier(VmExtensionIds.CustomScript.publisher, VmExtensionIds.CustomScript.type, "2.1"),
            resource_name="CustomScript")

        log.info("Installing %s", custom_script_2_1)
        message = f"Hello {uuid.uuid4()}!"
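        # Using a unique message ties the instance-view assertion below to this specific
        # enable operation, rather than matching output left over from an earlier run.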
custom_script_2_1.enable(
            settings={
                'commandToExecute': f"echo \'{message}\'"
            },
            auto_upgrade_minor_version=False
        )
        custom_script_2_1.assert_instance_view(expected_version="2.1", expected_message=message)

    def get_ignore_error_rules(self) -> List[Dict[str, Any]]:
        ignore_rules = [
            #
            # This is expected, as the latest version can be less than the test version
            #
            # WARNING ExtHandler ExtHandler Agent WALinuxAgent-9.9.9.9 is permanently blacklisted
            #
            {
                'message': r"Agent WALinuxAgent-9.9.9.9 is permanently blacklisted"
            }
        ]
        return ignore_rules


if __name__ == "__main__":
    AgentPublishTest.run_from_command_line()
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_status/000077500000000000000000000000001462617747000232705ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_status/agent_status.py000066400000000000000000000217701462617747000263520ustar00rootroot00000000000000#!/usr/bin/env python3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Validates the agent status is updated without processing additional goal states (aside from the first goal state
# from fabric)
#
from azure.mgmt.compute.models import VirtualMachineInstanceView, InstanceViewStatus, VirtualMachineAgentInstanceView
from assertpy import assert_that
from datetime import datetime, timedelta
from time import sleep
import json

from tests_e2e.tests.lib.agent_test import AgentVmTest
from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext
from tests_e2e.tests.lib.logging import log


class RetryableAgentStatusException(BaseException):
    pass


class AgentStatus(AgentVmTest):
    def __init__(self, context: AgentVmTestContext):
        super().__init__(context)
        self._ssh_client = self._context.create_ssh_client()

    def validate_instance_view_vmagent_status(self, instance_view: VirtualMachineInstanceView):
        status: InstanceViewStatus = instance_view.vm_agent.statuses[0]

        # Validate message field
        if status.message is None:
            raise RetryableAgentStatusException("Agent status is invalid: 'message' property in instance view is None")
        elif 'unresponsive' in status.message:
            raise RetryableAgentStatusException("Agent status is invalid: Instance view shows unresponsive agent")

        # Validate display status field
        if status.display_status is None:
            raise RetryableAgentStatusException("Agent status is invalid: 'display_status' property in instance view is None")
        elif 'Not Ready' in status.display_status:
            raise RetryableAgentStatusException("Agent status is invalid: Instance view shows agent status is not ready")

        # Validate time field
        if status.time is None:
            raise RetryableAgentStatusException("Agent status is invalid: 'time' property in instance view is None")

    def validate_instance_view_vmagent(self, instance_view: VirtualMachineInstanceView):
        """
        Checks that instance view has vm_agent.statuses and vm_agent.vm_agent_version properties which report the
        Guest Agent as running and Ready:

        "vm_agent": {
            "extension_handlers": [],
            "vm_agent_version": "9.9.9.9",
            "statuses": [
                {
                    "level": "Info",
"time": "2023-08-11T09:13:01.000Z", "message": "Guest Agent is running", "code": "ProvisioningState/succeeded", "display_status": "Ready" } ] } """ # Using dot operator for properties here because azure.mgmt.compute.models has classes for InstanceViewStatus # and VirtualMachineAgentInstanceView. All the properties we validate are attributes of these classes and # initialized to None if instance_view.vm_agent is None: raise RetryableAgentStatusException("Agent status is invalid: 'vm_agent' property in instance view is None") # Validate vm_agent_version field vm_agent: VirtualMachineAgentInstanceView = instance_view.vm_agent if vm_agent.vm_agent_version is None: raise RetryableAgentStatusException("Agent status is invalid: 'vm_agent_version' property in instance view is None") elif 'Unknown' in vm_agent.vm_agent_version: raise RetryableAgentStatusException("Agent status is invalid: Instance view shows agent version is unknown") # Validate statuses field if vm_agent.statuses is None: raise RetryableAgentStatusException("Agent status is invalid: 'statuses' property in instance view is None") elif len(instance_view.vm_agent.statuses) < 1: raise RetryableAgentStatusException("Agent status is invalid: Instance view is missing an agent status entry") else: self.validate_instance_view_vmagent_status(instance_view=instance_view) log.info("Instance view has valid agent status, agent version: {0}, status: {1}" .format(vm_agent.vm_agent_version, vm_agent.statuses[0].display_status)) def check_status_updated(self, status_timestamp: datetime, prev_status_timestamp: datetime, gs_processed_log: str, prev_gs_processed_log: str): log.info("") log.info("Check that the agent status updated without processing any additional goal states...") # If prev_ variables are not updated, then this is the first reported agent status if prev_status_timestamp is not None and prev_gs_processed_log is not None: # The agent status timestamp should be greater than the prev timestamp if status_timestamp > prev_status_timestamp: log.info( "Current agent status timestamp {0} is greater than previous status timestamp {1}" .format(status_timestamp, prev_status_timestamp)) else: raise RetryableAgentStatusException("Agent status failed to update: Current agent status timestamp {0} " "is not greater than previous status timestamp {1}" .format(status_timestamp, prev_status_timestamp)) # The last goal state processed in the agent log should be the same as before if prev_gs_processed_log == gs_processed_log: log.info( "The last processed goal state is the same as the last processed goal state in the last agent " "status update: \n{0}".format(gs_processed_log) .format(status_timestamp, prev_status_timestamp)) else: raise Exception("Agent status failed to update without additional goal state: The agent processed an " "additional goal state since the last agent status update. 
\n{0}" "".format(gs_processed_log)) log.info("") log.info("The agent status successfully updated without additional goal states") def run(self): log.info("") log.info("*******Verifying the agent status updates 3 times*******") timeout = datetime.now() + timedelta(minutes=6) instance_view_exception = None status_updated = 0 prev_status_timestamp = None prev_gs_processed_log = None # Retry validating agent status updates 2 times with timeout of 6 minutes while datetime.now() <= timeout and status_updated < 2: instance_view = self._context.vm.get_instance_view() log.info("") log.info( "Check instance view to validate that the Guest Agent reports valid status...") log.info("Instance view of VM is:\n%s", json.dumps(instance_view.serialize(), indent=2)) try: # Validate the guest agent reports valid status self.validate_instance_view_vmagent(instance_view) status_timestamp = instance_view.vm_agent.statuses[0].time gs_processed_log = self._ssh_client.run_command( "agent_status-get_last_gs_processed.py", use_sudo=True) self.check_status_updated(status_timestamp, prev_status_timestamp, gs_processed_log, prev_gs_processed_log) # Update variables with timestamps for this update status_updated += 1 prev_status_timestamp = status_timestamp prev_gs_processed_log = gs_processed_log # Sleep 30s to allow agent status to update before we check again sleep(30) except RetryableAgentStatusException as e: instance_view_exception = str(e) log.info("") log.info(instance_view_exception) log.info("Waiting 30s before retry...") sleep(30) # If status_updated is 0, we know the agent status in the instance view was never valid log.info("") assert_that(status_updated > 0).described_as( "Timeout has expired, instance view has invalid agent status: {0}".format( instance_view_exception)).is_true() # Fail the test if we weren't able to validate the agent status updated 3 times assert_that(status_updated == 2).described_as( "Timeout has expired, the agent status failed to update 2 times").is_true() if __name__ == "__main__": AgentStatus.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_update/000077500000000000000000000000001462617747000232275ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_update/rsm_update.py000066400000000000000000000341271462617747000257530ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # BVT for the agent update scenario # # The test verifies agent update for rsm workflow. This test covers three scenarios downgrade, upgrade and no update. # For each scenario, we initiate the rsm request with target version and then verify agent updated to that target version. 
#

import json
import re
from typing import List, Dict, Any

import requests
from assertpy import assert_that, fail
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute.models import VirtualMachine
from msrestazure.azure_cloud import Cloud

from tests_e2e.tests.lib.agent_test import AgentVmTest
from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext
from tests_e2e.tests.lib.azure_clouds import AZURE_CLOUDS
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.retry import retry_if_false
from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient


class RsmUpdateBvt(AgentVmTest):

    def __init__(self, context: AgentVmTestContext):
        super().__init__(context)
        self._ssh_client = self._context.create_ssh_client()
        self._installed_agent_version = "9.9.9.9"
        self._downgrade_version = "9.9.9.9"

    def get_ignore_error_rules(self) -> List[Dict[str, Any]]:
        ignore_rules = [
            #
            # This is expected as we validate the downgrade scenario
            #
            # WARNING ExtHandler ExtHandler Agent WALinuxAgent-9.9.9.9 is permanently blacklisted
            # Note: Version varies depending on the pipeline branch the test is running on
            {
                'message': rf"Agent WALinuxAgent-{self._installed_agent_version} is permanently blacklisted",
                'if': lambda r: r.prefix == 'ExtHandler' and self._installed_agent_version > self._downgrade_version
            },
            # We don't allow downgrades below the daemon version
            # 2023-07-11T02:28:21.249836Z WARNING ExtHandler ExtHandler [AgentUpdateError] The Agent received a request to downgrade to version 1.4.0.0, but downgrading to a version less than the Agent installed on the image (1.4.0.1) is not supported. Skipping downgrade.
            #
            {
                'message': r"downgrading to a version less than the Agent installed on the image.* is not supported"
            }
        ]
        return ignore_rules

    def run(self) -> None:
        # Retrieve the installed agent version on the VM before running the scenario
        self._retrieve_installed_agent_version()
        # Allow agent to send supported feature flag
        self._verify_agent_reported_supported_feature_flag()

        log.info("*******Verifying the Agent Downgrade scenario*******")
        stdout: str = self._ssh_client.run_command("waagent-version", use_sudo=True)
        log.info("Current agent version running on the vm before update is \n%s", stdout)
        self._downgrade_version: str = "2.3.15.0"
        log.info("Attempting downgrade version %s", self._downgrade_version)
        self._request_rsm_update(self._downgrade_version)
        self._check_rsm_gs(self._downgrade_version)
        self._prepare_agent()

        # Verify downgrade scenario
        self._verify_guest_agent_update(self._downgrade_version)
        self._verify_agent_reported_update_status(self._downgrade_version)

        # Verify upgrade scenario
        log.info("*******Verifying the Agent Upgrade scenario*******")
        stdout: str = self._ssh_client.run_command("waagent-version", use_sudo=True)
        log.info("Current agent version running on the vm before update is \n%s", stdout)
        upgrade_version: str = "2.3.15.1"
        log.info("Attempting upgrade version %s", upgrade_version)
        self._request_rsm_update(upgrade_version)
        self._check_rsm_gs(upgrade_version)
        self._verify_guest_agent_update(upgrade_version)
        self._verify_agent_reported_update_status(upgrade_version)

        # Verify no version update.
log.info("*******Verifying the no version update scenario*******") stdout: str = self._ssh_client.run_command("waagent-version", use_sudo=True) log.info("Current agent version running on the vm before update is \n%s", stdout) current_version: str = "2.3.15.1" log.info("Attempting update version same as current version %s", current_version) self._request_rsm_update(current_version) self._check_rsm_gs(current_version) self._verify_guest_agent_update(current_version) self._verify_agent_reported_update_status(current_version) # verify requested version below daemon version # All the daemons set to 2.2.53, so requesting version below daemon version log.info("*******Verifying requested version below daemon version scenario*******") stdout: str = self._ssh_client.run_command("waagent-version", use_sudo=True) log.info("Current agent version running on the vm before update is \n%s", stdout) version: str = "1.5.0.0" log.info("Attempting requested version %s", version) self._request_rsm_update(version) self._check_rsm_gs(version) self._verify_no_guest_agent_update(version) self._verify_agent_reported_update_status(version) def _check_rsm_gs(self, requested_version: str) -> None: # This checks if RSM GS available to the agent after we send the rsm update request log.info( 'Executing wait_for_rsm_gs.py remote script to verify latest GS contain requested version after rsm update requested') self._run_remote_test(self._ssh_client, f"agent_update-wait_for_rsm_gs.py --version {requested_version}", use_sudo=True) log.info('Verified latest GS contain requested version after rsm update requested') def _prepare_agent(self) -> None: """ This method is to ensure agent is ready for accepting rsm updates. As part of that we update following flags 1) Changing daemon version since daemon has a hard check on agent version in order to update agent. It doesn't allow versions which are less than daemon version. 2) Updating GAFamily type "Test" and GAUpdates flag to process agent updates on test versions. """ log.info( 'Executing modify_agent_version remote script to update agent installed version to lower than requested version') output: str = self._ssh_client.run_command("agent_update-modify_agent_version 2.2.53", use_sudo=True) log.info('Successfully updated agent installed version \n%s', output) log.info( 'Executing update-waagent-conf remote script to update agent update config flags to allow and download test versions') output: str = self._ssh_client.run_command( "update-waagent-conf AutoUpdate.UpdateToLatestVersion=y Debug.EnableGAVersioning=y AutoUpdate.GAFamily=Test", use_sudo=True) log.info('Successfully updated agent update config \n %s', output) @staticmethod def _verify_agent_update_flag_enabled(vm: VirtualMachineClient) -> bool: result: VirtualMachine = vm.get_model() flag: bool = result.os_profile.linux_configuration.enable_vm_agent_platform_updates if flag is None: return False return flag def _enable_agent_update_flag(self, vm: VirtualMachineClient) -> None: osprofile = { "location": self._context.vm.location, # location is required field "properties": { "osProfile": { "linuxConfiguration": { "enableVMAgentPlatformUpdates": True } } } } log.info("updating the vm with osProfile property:\n%s", osprofile) vm.update(osprofile) def _request_rsm_update(self, requested_version: str) -> None: """ This method is to simulate the rsm request. 
First we ensure PlatformUpdates is enabled on the VM and then make a request using the REST API
        """
        if not self._verify_agent_update_flag_enabled(self._context.vm):
            # enable the flag
            log.info("Attempting vm update to set the enableVMAgentPlatformUpdates flag")
            self._enable_agent_update_flag(self._context.vm)
            log.info("Updated the enableVMAgentPlatformUpdates flag to True")
        else:
            log.info("Already enableVMAgentPlatformUpdates flag set to True")

        cloud: Cloud = AZURE_CLOUDS[self._context.vm.cloud]
        credential: DefaultAzureCredential = DefaultAzureCredential(authority=cloud.endpoints.active_directory)
        token = credential.get_token(cloud.endpoints.resource_manager + "/.default")
        headers = {'Authorization': 'Bearer ' + token.token, 'Content-Type': 'application/json'}
        # Later this api call will be replaced by azure-python-sdk wrapper
        base_url = cloud.endpoints.resource_manager
        url = base_url + "/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Compute/virtualMachines/{2}/" \
                         "UpgradeVMAgent?api-version=2022-08-01".format(self._context.vm.subscription,
                                                                        self._context.vm.resource_group,
                                                                        self._context.vm.name)
        data = {
            "target": "Microsoft.OSTCLinuxAgent.Test",
            "targetVersion": requested_version
        }

        log.info("Attempting rsm upgrade post request to endpoint: {0} with data: {1}".format(url, data))
        response = requests.post(url, data=json.dumps(data), headers=headers)

        if response.status_code == 202:
            log.info("RSM upgrade request accepted")
        else:
            raise Exception("Error occurred while making RSM upgrade request. Status code : {0} and msg: {1}".format(
                response.status_code, response.content))

    def _verify_guest_agent_update(self, requested_version: str) -> None:
        """
        Verify the current agent version running is the RSM requested version
        """
        def _check_agent_version(requested_version: str) -> bool:
            waagent_version: str = self._ssh_client.run_command("waagent-version", use_sudo=True)
            expected_version = f"Goal state agent: {requested_version}"
            if expected_version in waagent_version:
                return True
            else:
                return False

        log.info("Verifying agent updated to requested version: {0}".format(requested_version))
        success: bool = retry_if_false(lambda: _check_agent_version(requested_version))
        if not success:
            # Fetch the version that is actually running so the failure message is useful
            waagent_version: str = self._ssh_client.run_command("waagent-version", use_sudo=True)
            fail("Guest agent didn't update to requested version {0} but found \n {1}. \n "
                 "To debug verify if CRP has upgrade operation around that time and also check if agent log has any errors ".format(
                     requested_version, waagent_version))
        waagent_version: str = self._ssh_client.run_command("waagent-version", use_sudo=True)
        log.info(
            f"Successfully verified agent updated to requested version.
Current agent version running:\n {waagent_version}") def _verify_no_guest_agent_update(self, version: str) -> None: """ verify current agent version is not updated to requested version """ log.info("Verifying no update happened to agent") current_agent: str = self._ssh_client.run_command("waagent-version", use_sudo=True) assert_that(current_agent).does_not_contain(version).described_as( f"Agent version changed.\n Current agent {current_agent}") log.info("Verified agent was not updated to requested version") def _verify_agent_reported_supported_feature_flag(self): """ RSM update rely on supported flag that agent sends to CRP.So, checking if GA reports feature flag from the agent log """ log.info( "Executing verify_versioning_supported_feature.py remote script to verify agent reported supported feature flag, so that CRP can send RSM update request") self._run_remote_test(self._ssh_client, "agent_update-verify_versioning_supported_feature.py", use_sudo=True) log.info("Successfully verified that Agent reported VersioningGovernance supported feature flag") def _verify_agent_reported_update_status(self, version: str): """ Verify if the agent reported update status to CRP after update performed """ log.info( "Executing verify_agent_reported_update_status.py remote script to verify agent reported update status for version {0}".format( version)) self._run_remote_test(self._ssh_client, f"agent_update-verify_agent_reported_update_status.py --version {version}", use_sudo=True) log.info("Successfully Agent reported update status for version {0}".format(version)) def _retrieve_installed_agent_version(self): """ Retrieve the installed agent version """ log.info("Retrieving installed agent version") stdout: str = self._ssh_client.run_command("waagent-version", use_sudo=True) log.info("Retrieved installed agent version \n {0}".format(stdout)) match = re.search(r'.*Goal state agent: (\S*)', stdout) if match: self._installed_agent_version = match.groups()[0] else: log.warning("Unable to retrieve installed agent version and set to default value {0}".format( self._installed_agent_version)) if __name__ == "__main__": RsmUpdateBvt.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_update/self_update.py000066400000000000000000000204551462617747000261020ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#

import shutil
from pathlib import Path
from threading import RLock

from assertpy import fail

import azurelinuxagent
from tests_e2e.tests.lib.agent_test import AgentVmTest
from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.retry import retry_if_false
from tests_e2e.tests.lib.shell import run_command


class SelfUpdateBvt(AgentVmTest):
    """
    This test case verifies that the agent can update itself to the latest version using the self-update path when
    the VM is not enrolled in RSM updates
    """

    def __init__(self, context: AgentVmTestContext):
        super().__init__(context)
        self._ssh_client = self._context.create_ssh_client()
        self._test_version = "2.8.9.9"
        self._test_pkg_name = f"WALinuxAgent-{self._test_version}.zip"

    _setup_lock = RLock()

    def run(self):
        log.info("Verifying agent updated to latest version from custom test version")
        self._test_setup()
        self._verify_agent_updated_to_latest_version()

        log.info("Verifying agent remains on custom test version when AutoUpdate.UpdateToLatestVersion=n")
        self._test_setup_and_update_to_latest_version_false()
        self._verify_agent_remains_on_custom_test_version()

    def _test_setup(self) -> None:
        """
        Builds the custom test agent pkg as some lower version and installs it on the vm
        """
        self._build_custom_test_agent()
        output: str = self._ssh_client.run_command(
            f"agent_update-self_update_test_setup --package ~/tmp/{self._test_pkg_name} --version {self._test_version} --update_to_latest_version y",
            use_sudo=True)
        log.info("Successfully installed custom test agent pkg version \n%s", output)

    def _build_custom_test_agent(self) -> None:
        """
        Builds the custom test pkg
        """
        with self._setup_lock:
            agent_source_path: Path = self._context.working_directory / "source"
            source_pkg_path: Path = agent_source_path / "eggs" / f"{self._test_pkg_name}"
            if source_pkg_path.exists():
                log.info("The test pkg already exists at %s, skipping build", source_pkg_path)
            else:
                if agent_source_path.exists():
                    shutil.rmtree(agent_source_path)  # Remove the whole tree if a partial build exists (os.rmdir would fail on a non-empty directory)
                source_directory: Path = Path(azurelinuxagent.__path__[0]).parent
                copy_cmd: str = f"cp -r {source_directory} {agent_source_path}"
                log.info("Copying agent source %s to %s", source_directory, agent_source_path)
                run_command(copy_cmd, shell=True)
                if not agent_source_path.exists():
                    raise Exception(
                        f"The agent source was not copied to the expected path {agent_source_path}")
                version_file: Path = agent_source_path / "azurelinuxagent" / "common" / "version.py"
                version_cmd = rf"""sed -E -i "s/^AGENT_VERSION\s+=\s+'[0-9.]+'/AGENT_VERSION = '{self._test_version}'/g" {version_file}"""
                log.info("Setting agent version to %s to build new pkg", self._test_version)
                run_command(version_cmd, shell=True)
                makepkg_file: Path = agent_source_path / "makepkg.py"
                build_cmd: str = f"env PYTHONPATH={agent_source_path} python3 {makepkg_file} -o {agent_source_path}"
                log.info("Building custom test agent pkg version %s", self._test_version)
                run_command(build_cmd, shell=True)
                if not source_pkg_path.exists():
                    raise Exception(
                        f"The test pkg was not created at the expected path {source_pkg_path}")
            target_path: Path = Path("~") / "tmp"
            log.info("Copying %s to %s:%s", source_pkg_path, self._context.vm, target_path)
            self._ssh_client.copy_to_node(source_pkg_path, target_path)

    def _verify_agent_updated_to_latest_version(self) -> None:
        """
        Verifies the agent updated to the latest version from the custom test version.
We retrieve latest version from goal state and compare with current agent version running as that latest version """ latest_version: str = self._ssh_client.run_command("agent_update-self_update_latest_version.py", use_sudo=True).rstrip() self._verify_guest_agent_update(latest_version) # Verify agent updated to latest version by custom test agent self._ssh_client.run_command( "agent_update-self_update_check.py --latest-version {0} --current-version {1}".format(latest_version, self._test_version)) def _verify_guest_agent_update(self, latest_version: str) -> None: """ Verify current agent version running on latest version """ def _check_agent_version(latest_version: str) -> bool: waagent_version: str = self._ssh_client.run_command("waagent-version", use_sudo=True) expected_version = f"Goal state agent: {latest_version}" if expected_version in waagent_version: return True else: return False waagent_version: str = "" log.info("Verifying agent updated to latest version: {0}".format(latest_version)) success: bool = retry_if_false(lambda: _check_agent_version(latest_version), delay=60) if not success: fail("Guest agent didn't update to latest version {0} but found \n {1}".format( latest_version, waagent_version)) waagent_version: str = self._ssh_client.run_command("waagent-version", use_sudo=True) log.info( f"Successfully verified agent updated to latest version. Current agent version running:\n {waagent_version}") def _test_setup_and_update_to_latest_version_false(self) -> None: """ Builds the custom test agent pkg as some lower version and installs it on the vm Also modify the configuration AutoUpdate.UpdateToLatestVersion=n """ self._build_custom_test_agent() output: str = self._ssh_client.run_command( f"agent_update-self_update_test_setup --package ~/tmp/{self._test_pkg_name} --version {self._test_version} --update_to_latest_version n", use_sudo=True) log.info("Successfully installed custom test agent pkg version \n%s", output) def _verify_agent_remains_on_custom_test_version(self) -> None: """ Verifies the agent remains on custom test version when UpdateToLatestVersion=n """ def _check_agent_version(version: str) -> bool: waagent_version: str = self._ssh_client.run_command("waagent-version", use_sudo=True) expected_version = f"Goal state agent: {version}" if expected_version in waagent_version: return True else: return False waagent_version: str = "" log.info("Verifying if current agent on version: {0}".format(self._test_version)) success: bool = retry_if_false(lambda: _check_agent_version(self._test_version), delay=60) if not success: fail("Guest agent was on different version than expected version {0} and found \n {1}".format( self._test_version, waagent_version)) waagent_version: str = self._ssh_client.run_command("waagent-version", use_sudo=True) log.info( f"Successfully verified agent stayed on test version. Current agent version running:\n {waagent_version}") if __name__ == "__main__": SelfUpdateBvt.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_wait_for_cloud_init/000077500000000000000000000000001462617747000257705ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_wait_for_cloud_init/add_cloud_init_script.py000077500000000000000000000052701462617747000326760ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import base64 from typing import Any, Dict from tests_e2e.tests.agent_wait_for_cloud_init.agent_wait_for_cloud_init import AgentWaitForCloudInit from tests_e2e.tests.lib.update_arm_template import UpdateArmTemplate class AddCloudInitScript(UpdateArmTemplate): """ Adds AgentWaitForCloudInit.CloudInitScript to the ARM template as osProfile.customData. """ def update(self, template: Dict[str, Any], is_lisa_template: bool) -> None: if not is_lisa_template: raise Exception('This test can only customize LISA ARM templates.') # # cloud-init configuration needs to be added in the osProfile.customData property as a base64-encoded string. # # LISA uses the getOSProfile function to generate the value for osProfile; add customData to its output, checking that we do not # override any existing value (the current LISA template does not have any). # # "getOSProfile": { # "parameters": [ # ... # ], # "output": { # "type": "object", # "value": { # "computername": "[parameters('computername')]", # "adminUsername": "[parameters('admin_username')]", # "adminPassword": "[if(parameters('has_password'), parameters('admin_password'), json('null'))]", # "linuxConfiguration": "[if(parameters('has_linux_configuration'), parameters('linux_configuration'), json('null'))]" # } # } # } # encoded_script = base64.b64encode(AgentWaitForCloudInit.CloudInitScript.encode('utf-8')).decode('utf-8') get_os_profile = self.get_lisa_function(template, 'getOSProfile') output = self.get_function_output(get_os_profile) if output.get('customData') is not None: raise Exception(f"The getOSProfile function already has a 'customData'. Won't override it. Definition: {get_os_profile}") output['customData'] = encoded_script Azure-WALinuxAgent-2b21de5/tests_e2e/tests/agent_wait_for_cloud_init/agent_wait_for_cloud_init.py000077500000000000000000000073201462617747000335500ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import time from assertpy import fail from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.ssh_client import SshClient class AgentWaitForCloudInit(AgentVmTest): """ This test verifies that the Agent waits for cloud-init to complete before it starts processing extensions. To do this, it adds 'CloudInitScript' in cloud-init's custom data. The script ensures first that the Agent is waiting for cloud-init, and then sleeps for a couple of minutes before completing. 
    The script appends a set of known messages to waagent.log, and the test simply verifies that the messages are
    present in the log in the expected order, and that they occur before the Agent reports that it is processing extensions.
    """
    CloudInitScript = """#!/usr/bin/env bash
set -euox pipefail

echo ">>> $(date) cloud-init script begin" >> /var/log/waagent.log
while ! grep 'Waiting for cloud-init to complete' /var/log/waagent.log; do
    sleep 15
done
echo ">>> $(date) The Agent is waiting for cloud-init, will pause for a couple of minutes" >> /var/log/waagent.log
sleep 120
echo ">>> $(date) cloud-init script end" >> /var/log/waagent.log
"""

    def run(self):
        ssh_client: SshClient = self._context.create_ssh_client()

        log.info("Waiting for Agent to start processing extensions")
        for _ in range(15):
            try:
                ssh_client.run_command("grep 'ProcessExtensionsGoalState started' /var/log/waagent.log")
                break
            except CommandError:
                log.info("The Agent has not started to process extensions, will check again after a short delay")
                time.sleep(60)
        else:
            raise Exception("Timeout while waiting for the Agent to start processing extensions")

        log.info("The Agent has started to process extensions")

        output = ssh_client.run_command(
            "grep -E '^>>>|" +
            "INFO ExtHandler ExtHandler cloud-init completed|" +
            "INFO ExtHandler ExtHandler ProcessExtensionsGoalState started' /var/log/waagent.log")

        output = output.rstrip().splitlines()

        expected = [
            'cloud-init script begin',
            'The Agent is waiting for cloud-init, will pause for a couple of minutes',
            'cloud-init script end',
            'cloud-init completed',
            'ProcessExtensionsGoalState started'
        ]

        indent = lambda lines: "\n".join([f" {ln}" for ln in lines])
        if len(output) == len(expected) and all([expected[i] in output[i] for i in range(len(expected))]):
            log.info("The Agent waited for cloud-init before processing extensions.\nLog messages:\n%s", indent(output))
        else:
            fail(f"The Agent did not wait for cloud-init before processing extensions.\nExpected:\n{indent(expected)}\nActual:\n{indent(output)}")


if __name__ == "__main__":
    AgentWaitForCloudInit.run_from_command_line()
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/ext_cgroups/000077500000000000000000000000001462617747000231315ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/ext_cgroups/ext_cgroups.py000066400000000000000000000032241462617747000260460ustar00rootroot00000000000000#!/usr/bin/env python3
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from tests_e2e.tests.ext_cgroups.install_extensions import InstallExtensions
from tests_e2e.tests.lib.agent_test import AgentVmTest
from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext
from tests_e2e.tests.lib.logging import log


class ExtCgroups(AgentVmTest):
    """
    This test verifies that the installed extensions are assigned correctly to their cgroups.
""" def __init__(self, context: AgentVmTestContext): super().__init__(context) self._ssh_client = self._context.create_ssh_client() def run(self): log.info("=====Installing extensions to validate ext cgroups scenario") InstallExtensions(self._context).run() log.info("=====Executing remote script check_cgroups_extensions.py to validate extension cgroups") self._run_remote_test(self._ssh_client, "ext_cgroups-check_cgroups_extensions.py", use_sudo=True) log.info("Successfully verified that extensions present in correct cgroup") if __name__ == "__main__": ExtCgroups.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/ext_cgroups/install_extensions.py000066400000000000000000000114761462617747000274410ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from datetime import datetime, timedelta from pathlib import Path from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class InstallExtensions: """ This test installs the multiple extensions in order to verify extensions cgroups in the next test. """ def __init__(self, context: AgentVmTestContext): self._context = context self._ssh_client = self._context.create_ssh_client() def run(self): self._prepare_agent() # Install the GATest extension to test service cgroups self._install_gatest_extension() # Install the Azure Monitor Agent to test long running process cgroup self._install_ama() # Install the VM Access extension to test sample extension self._install_vmaccess() # Install the CSE extension to test extension cgroup self._install_cse() def _prepare_agent(self): log.info("=====Executing update-waagent-conf remote script to update monitoring deadline flag for tracking azuremonitoragent service") future_date = datetime.utcnow() + timedelta(days=2) expiry_time = future_date.date().strftime("%Y-%m-%d") # Agent needs extension info and it's services info in the handlermanifest.xml to monitor and limit the resource usage. # As part of pilot testing , agent hardcoded azuremonitoragent service name to monitor it for sometime in production without need of manifest update from extesnion side. # So that they can get sense of resource usage for their extensions. This we did for few months and now we no logner monitoring it in production. # But we are changing the config flag expiry time to future date in this test. So that test agent will start track the cgroups that is used by the service. 
result = self._ssh_client.run_command(f"update-waagent-conf Debug.CgroupMonitorExpiryTime={expiry_time}", use_sudo=True) log.info(result) log.info("Updated agent cgroups config(CgroupMonitorExpiryTime)") def _install_ama(self): ama_extension = VirtualMachineExtensionClient( self._context.vm, VmExtensionIds.AzureMonitorLinuxAgent, resource_name="AMAAgent") log.info("Installing %s", ama_extension) ama_extension.enable() ama_extension.assert_instance_view() def _install_vmaccess(self): # fetch the public key public_key_file: Path = Path(self._context.identity_file).with_suffix(".pub") with public_key_file.open() as f: public_key = f.read() # Invoke the extension vm_access = VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.VmAccess, resource_name="VmAccess") log.info("Installing %s", vm_access) vm_access.enable( protected_settings={ 'username': self._context.username, 'ssh_key': public_key, 'reset_ssh': 'false' } ) vm_access.assert_instance_view() def _install_gatest_extension(self): gatest_extension = VirtualMachineExtensionClient( self._context.vm, VmExtensionIds.GATestExtension, resource_name="GATestExt") log.info("Installing %s", gatest_extension) gatest_extension.enable() gatest_extension.assert_instance_view() def _install_cse(self): # Use custom script to output the cgroups assigned to it at runtime and save to /var/lib/waagent/tmp/custom_script_check. script_contents = """ mkdir /var/lib/waagent/tmp cp /proc/$$/cgroup /var/lib/waagent/tmp/custom_script_check """ custom_script_2_0 = VirtualMachineExtensionClient( self._context.vm, VmExtensionIds.CustomScript, resource_name="CustomScript") log.info("Installing %s", custom_script_2_0) custom_script_2_0.enable( protected_settings={ 'commandToExecute': f"echo \'{script_contents}\' | bash" } ) custom_script_2_0.assert_instance_view() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/ext_sequencing/000077500000000000000000000000001462617747000236105ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/ext_sequencing/ext_seq_test_cases.py000066400000000000000000000245711462617747000300600ustar00rootroot00000000000000def add_one_dependent_ext_without_settings(): # Dependent extensions without settings should be enabled with dependencies return [ { "name": "AzureMonitorLinuxAgent", "properties": { "provisionAfterExtensions": ["CustomScript"], "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True } }, { "name": "CustomScript", "properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript", "typeHandlerVersion": "2.1", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } } ] def add_two_extensions_with_dependencies(): # Checks that extensions are enabled in the correct order when there is only one valid sequence return [ { "name": "AzureMonitorLinuxAgent", "properties": { "provisionAfterExtensions": [], "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True } }, { "name": "RunCommandLinux", "properties": { "provisionAfterExtensions": ["AzureMonitorLinuxAgent"], "publisher": "Microsoft.CPlat.Core", "type": "RunCommandLinux", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } }, { "name": "CustomScript", "properties": { "provisionAfterExtensions": ["RunCommandLinux", "AzureMonitorLinuxAgent"], "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript", 
"typeHandlerVersion": "2.1", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } } ] def remove_one_dependent_extension(): # Checks that remaining extensions with dependencies are enabled in the correct order after removing a dependent # extension return [ { "name": "AzureMonitorLinuxAgent", "properties": { "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True } }, { "name": "CustomScript", "properties": { "provisionAfterExtensions": ["AzureMonitorLinuxAgent"], "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript", "typeHandlerVersion": "2.1", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } } ] def remove_all_dependencies(): # Checks that extensions are enabled after adding and removing dependencies return [ { "name": "AzureMonitorLinuxAgent", "properties": { "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True } }, { "name": "RunCommandLinux", "properties": { "publisher": "Microsoft.CPlat.Core", "type": "RunCommandLinux", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } }, { "name": "CustomScript", "properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript", "typeHandlerVersion": "2.1", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } } ] def add_one_dependent_extension(): # Checks that a valid enable sequence occurs when only one extension has dependencies return [ { "name": "AzureMonitorLinuxAgent", "properties": { "provisionAfterExtensions": ["RunCommandLinux", "CustomScript"], "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True } }, { "name": "RunCommandLinux", "properties": { "publisher": "Microsoft.CPlat.Core", "type": "RunCommandLinux", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } }, { "name": "CustomScript", "properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript", "typeHandlerVersion": "2.1", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } } ] def add_single_dependencies(): # Checks that extensions are enabled in the correct order when there is only one valid sequence and each extension # has no more than one dependency return [ { "name": "AzureMonitorLinuxAgent", "properties": { "provisionAfterExtensions": [], "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True } }, { "name": "RunCommandLinux", "properties": { "provisionAfterExtensions": ["CustomScript"], "publisher": "Microsoft.CPlat.Core", "type": "RunCommandLinux", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } }, { "name": "CustomScript", "properties": { "provisionAfterExtensions": ["AzureMonitorLinuxAgent"], "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript", "typeHandlerVersion": "2.1", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } } ] def remove_all_dependent_extensions(): # Checks that remaining extensions with dependencies are enabled in the correct order after removing all dependent # extension return [ { "name": "AzureMonitorLinuxAgent", "properties": { "publisher": "Microsoft.Azure.Monitor", "type": 
"AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True } } ] def add_failing_dependent_extension_with_one_dependency(): # This case tests that extensions dependent on a failing extensions are skipped, but extensions that are not # dependent on the failing extension still get enabled return [ { "name": "AzureMonitorLinuxAgent", "properties": { "provisionAfterExtensions": ["CustomScript"], "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True, "settings": {} } }, { "name": "RunCommandLinux", "properties": { "publisher": "Microsoft.CPlat.Core", "type": "RunCommandLinux", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } }, { "name": "CustomScript", "properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript", "typeHandlerVersion": "2.1", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "exit 1" } } } ] def add_failing_dependent_extension_with_two_dependencies(): # This case tests that all extensions dependent on a failing extensions are skipped return [ { "name": "AzureMonitorLinuxAgent", "properties": { "provisionAfterExtensions": ["CustomScript"], "publisher": "Microsoft.Azure.Monitor", "type": "AzureMonitorLinuxAgent", "typeHandlerVersion": "1.5", "autoUpgradeMinorVersion": True } }, { "name": "RunCommandLinux", "properties": { "provisionAfterExtensions": ["CustomScript"], "publisher": "Microsoft.CPlat.Core", "type": "RunCommandLinux", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "date" } } }, { "name": "CustomScript", "properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript", "typeHandlerVersion": "2.1", "autoUpgradeMinorVersion": True, "settings": { "commandToExecute": "exit 1" } } } ] Azure-WALinuxAgent-2b21de5/tests_e2e/tests/ext_sequencing/ext_sequencing.py000066400000000000000000000424351462617747000272130ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This test adds extensions with multiple dependencies to a VMSS using the 'provisionAfterExtensions' property and # validates they are enabled in order of dependencies. 
# import copy import re import uuid from datetime import datetime from typing import List, Dict, Any from assertpy import fail from azure.mgmt.compute.models import VirtualMachineScaleSetVMExtensionsSummary from tests_e2e.tests.ext_sequencing.ext_seq_test_cases import add_one_dependent_ext_without_settings, add_two_extensions_with_dependencies, \ remove_one_dependent_extension, remove_all_dependencies, add_one_dependent_extension, \ add_single_dependencies, remove_all_dependent_extensions, add_failing_dependent_extension_with_one_dependency, add_failing_dependent_extension_with_two_dependencies from tests_e2e.tests.lib.agent_test import AgentVmssTest, TestSkipped from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.virtual_machine_scale_set_client import VmssInstanceIpAddress from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.resource_group_client import ResourceGroupClient from tests_e2e.tests.lib.ssh_client import SshClient class ExtSequencing(AgentVmssTest): def __init__(self, context: AgentVmTestContext): super().__init__(context) self._scenario_start = datetime.min # Cases to test different dependency scenarios _test_cases = [ add_one_dependent_ext_without_settings, add_two_extensions_with_dependencies, # remove_one_dependent_extension should only be run after another test case which has RunCommandLinux in the # model remove_one_dependent_extension, # remove_all_dependencies should only be run after another test case which has extension dependencies in the # model remove_all_dependencies, add_one_dependent_extension, add_single_dependencies, # remove_all_dependent_extensions should only be run after another test case which has dependent extension in # the model remove_all_dependent_extensions, add_failing_dependent_extension_with_one_dependency, add_failing_dependent_extension_with_two_dependencies ] @staticmethod def _get_dependency_map(extensions: List[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]: dependency_map: Dict[str, Dict[str, Any]] = dict() for ext in extensions: ext_name = ext['name'] provisioned_after = ext['properties'].get('provisionAfterExtensions') depends_on = provisioned_after if provisioned_after else [] # We know an extension should fail if commandToExecute is exactly "exit 1" ext_settings = ext['properties'].get("settings") ext_command = ext['properties']['settings'].get("commandToExecute") if ext_settings else None should_fail = ext_command == "exit 1" dependency_map[ext_name] = {"should_fail": should_fail, "depends_on": depends_on} return dependency_map @staticmethod def _get_sorted_extension_names(extensions: List[VirtualMachineScaleSetVMExtensionsSummary], ssh_client: SshClient, test_case_start: datetime) -> List[str]: # Using VmExtensionIds to get publisher for each ext to be used in remote script extension_full_names = { "AzureMonitorLinuxAgent": VmExtensionIds.AzureMonitorLinuxAgent, "RunCommandLinux": VmExtensionIds.RunCommand, "CustomScript": VmExtensionIds.CustomScript } enabled_times = [] for ext in extensions: # Only check extensions which succeeded provisioning if "succeeded" in ext.statuses_summary[0].code: enabled_time = ssh_client.run_command(f"ext_sequencing-get_ext_enable_time.py --ext '{extension_full_names[ext.name]}'", use_sudo=True) formatted_time = datetime.strptime(enabled_time.strip(), u'%Y-%m-%dT%H:%M:%SZ') if formatted_time < test_case_start: fail("Extension {0} was not 
enabled".format(extension_full_names[ext.name])) enabled_times.append( { "name": ext.name, "enabled_time": formatted_time } ) # sort the extensions based on their enabled datetime sorted_extensions = sorted(enabled_times, key=lambda ext_: ext_["enabled_time"]) log.info("") log.info("Extensions sorted by time they were enabled: {0}".format( ', '.join(["{0}: {1}".format(ext["name"], ext["enabled_time"]) for ext in sorted_extensions]))) sorted_extension_names = [ext["name"] for ext in sorted_extensions] return sorted_extension_names @staticmethod def _validate_extension_sequencing(dependency_map: Dict[str, Dict[str, Any]], sorted_extension_names: List[str], relax_check: bool): installed_ext = dict() # Iterate through the extensions in the enabled order and validate if their depending extensions are already # enabled prior to that. for ext in sorted_extension_names: # Check if the depending extension are already installed if ext not in dependency_map: # There should not be any unexpected extensions on the scale set, even in the case we share the VMSS, # because we update the scale set model with the extensions. Any extensions that are not in the scale # set model would be disabled. fail("Unwanted extension found in VMSS Instance view: {0}".format(ext)) if dependency_map[ext] is not None: dependencies = dependency_map[ext].get('depends_on') for dep in dependencies: if installed_ext.get(dep) is None: # The depending extension is not installed prior to the current extension if relax_check: log.info("{0} is not installed prior to {1}".format(dep, ext)) else: fail("{0} is not installed prior to {1}".format(dep, ext)) # Mark the current extension as installed installed_ext[ext] = ext # Validate that only extensions expected to fail, and their dependent extensions, failed for ext, details in dependency_map.items(): failing_ext_dependencies = [dep for dep in details['depends_on'] if dependency_map[dep]['should_fail']] if ext not in installed_ext: if details['should_fail']: log.info("Extension {0} failed as expected".format(ext)) elif failing_ext_dependencies: log.info("Extension {0} failed as expected because it is dependent on {1}".format(ext, ' and '.join(failing_ext_dependencies))) else: fail("{0} unexpectedly failed. 
Only extensions that are expected to fail or depend on a failing extension should fail".format(ext))

        log.info("Validated extension sequencing")

    def run(self):
        instances_ip_address: List[VmssInstanceIpAddress] = self._context.vmss.get_instances_ip_address()
        ssh_clients: Dict[str, SshClient] = dict()
        for instance in instances_ip_address:
            ssh_clients[instance.instance_name] = SshClient(ip_address=instance.ip_address, username=self._context.username, identity_file=self._context.identity_file)

        if not VmExtensionIds.AzureMonitorLinuxAgent.supports_distro(next(iter(ssh_clients.values())).run_command("get_distro.py").rstrip()):
            raise TestSkipped("Currently AzureMonitorLinuxAgent is not supported on this distro")

        # This is the base ARM template that's used for deploying extensions for this scenario
        base_extension_template = {
            "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
            "contentVersion": "1.0.0.0",
            "resources": [
                {
                    "type": "Microsoft.Compute/virtualMachineScaleSets",
                    "name": f"{self._context.vmss.name}",
                    "location": "[resourceGroup().location]",
                    "apiVersion": "2018-06-01",
                    "properties": {
                        "virtualMachineProfile": {
                            "extensionProfile": {
                                "extensions": []
                            }
                        }
                    }
                }
            ]
        }

        for case in self._test_cases:
            test_case_start = datetime.now()
            if self._scenario_start == datetime.min:
                self._scenario_start = test_case_start

            # Assign unique guid to forceUpdateTag for each extension to make sure they're always unique to force CRP
            # to generate a new sequence number each time
            test_guid = str(uuid.uuid4())
            extensions = case()
            for ext in extensions:
                ext["properties"].update({
                    "forceUpdateTag": test_guid
                })

            # We update the extension template here with extensions that are specific to the scenario that we want to
            # test out
            log.info("")
            log.info("Test case: {0}".format(case.__name__.replace('_', ' ')))
            ext_template = copy.deepcopy(base_extension_template)
            ext_template['resources'][0]['properties']['virtualMachineProfile']['extensionProfile'][
                'extensions'] = extensions

            # Log the dependency map for the extensions in this test case
            dependency_map = self._get_dependency_map(extensions)
            log.info("")
            log.info("The dependency map of the extensions for this test case is:")
            for ext, details in dependency_map.items():
                dependencies = details.get('depends_on')
                dependency_list = "-" if not dependencies else ' and '.join(dependencies)
                log.info("{0} depends on {1}".format(ext, dependency_list))

            # Deploy updated extension template to the scale set.
            log.info("")
            log.info("Deploying extensions with the above dependencies to the scale set...")
            rg_client = ResourceGroupClient(self._context.vmss.cloud, self._context.vmss.subscription,
                                            self._context.vmss.resource_group, self._context.vmss.location)
            try:
                rg_client.deploy_template(template=ext_template)
            except Exception as e:
                # We only expect to catch an exception during deployment if we are forcing one of the extensions to
                # fail. We know an extension should fail if "failing" is in the case name. Otherwise, report the
                # failure.
                deployment_failure_pattern = r"[\s\S]*\"details\": [\s\S]* \"code\": \"(?P<code>.*)\"[\s\S]* \"message\": \"(?P<msg>.*)\"[\s\S]*"
                msg_pattern = r"Multiple VM extensions failed to be provisioned on the VM. Please see the VM extension instance view for other failures. The first extension failed due to the error: VM Extension '.*' is marked as failed since it depends upon the VM Extension 'CustomScript' which has failed."
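                # Illustration of the named groups in deployment_failure_pattern (the sample payload below is
                # abridged/hypothetical; 'code' and 'msg' are the group names consumed right after this):
                #
                #   sample = '{ "details": [ { "code": "VMExtensionProvisioningError", "message": "VM Extension ... has failed." } ] }'
                #   m = re.match(deployment_failure_pattern, sample)
                #   m.group("code")  # -> 'VMExtensionProvisioningError'
                #   m.group("msg")   # -> 'VM Extension ... has failed.'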
deployment_failure_match = re.match(deployment_failure_pattern, str(e)) if "failing" not in case.__name__: fail("Extension template deployment unexpectedly failed: {0}".format(e)) elif not deployment_failure_match or deployment_failure_match.group("code") != "VMExtensionProvisioningError" or not re.match(msg_pattern, deployment_failure_match.group("msg")): fail("Extension template deployment failed as expected, but with an unexpected error: {0}".format(e)) # Get the extensions on the VMSS from the instance view log.info("") instance_view_extensions = self._context.vmss.get_instance_view().extensions # Validate that the extensions were enabled in the correct order on each instance of the scale set for instance_name, ssh_client in ssh_clients.items(): log.info("") log.info("Validate extension sequencing on {0}:{1}...".format(instance_name, ssh_client.ip_address)) # Sort the VM extensions by the time they were enabled sorted_extension_names = self._get_sorted_extension_names(instance_view_extensions, ssh_client, test_case_start) # Validate that the extensions were enabled in the correct order. We relax this check if no settings # are provided for a dependent extension, since the guest agent currently ignores dependencies in this # case. relax_check = True if "settings" in case.__name__ else False self._validate_extension_sequencing(dependency_map, sorted_extension_names, relax_check) log.info("------") def get_ignore_errors_before_timestamp(self) -> datetime: # Ignore errors in the agent log before the first test case starts return self._scenario_start def get_ignore_error_rules(self) -> List[Dict[str, Any]]: ignore_rules = [ # # WARNING ExtHandler ExtHandler Missing dependsOnExtension on extension Microsoft.Azure.Monitor.AzureMonitorLinuxAgent # This message appears when an extension doesn't depend on another extension # { 'message': r"Missing dependsOnExtension on extension .*" }, # # WARNING ExtHandler ExtHandler Extension Microsoft.Azure.Monitor.AzureMonitorLinuxAgent does not have any settings. Will ignore dependency (dependency level: 1) # We currently ignore dependencies for extensions without settings # { 'message': r"Extension .* does not have any settings\. Will ignore dependency \(dependency level: \d\)" }, # # 2023-10-31T17:46:59.675959Z WARNING ExtHandler ExtHandler Dependent extension Microsoft.Azure.Extensions.CustomScript failed or timed out, will skip processing the rest of the extensions # We intentionally make CustomScript fail to test that dependent extensions are skipped # { 'message': r"Dependent extension Microsoft.Azure.Extensions.CustomScript failed or timed out, will skip processing the rest of the extensions" }, # # 2023-10-31T17:48:13.349214Z ERROR ExtHandler ExtHandler Event: name=Microsoft.Azure.Extensions.CustomScript, op=ExtensionProcessing, message=Dependent Extension Microsoft.Azure.Extensions.CustomScript did not succeed. Status was error, duration=0 # We intentionally make CustomScript fail to test that dependent extensions are skipped # { 'message': r"Event: name=Microsoft.Azure.Extensions.CustomScript, op=ExtensionProcessing, message=Dependent Extension Microsoft.Azure.Extensions.CustomScript did not succeed. 
Status was error, duration=0" }, # # 2023-10-31T17:47:07.689083Z WARNING ExtHandler ExtHandler [PERIODIC] This status is being reported by the Guest Agent since no status file was reported by extension Microsoft.Azure.Monitor.AzureMonitorLinuxAgent: [ExtensionStatusError] Status file /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-1.28.11/status/6.status does not exist # We expect extensions that are dependent on a failing extension to not report status # { 'message': r"\[PERIODIC\] This status is being reported by the Guest Agent since no status file was reported by extension .*: \[ExtensionStatusError\] Status file \/var\/lib\/waagent\/.*\/status\/\d.status does not exist" }, # # 2023-10-31T17:48:11.306835Z WARNING ExtHandler ExtHandler A new goal state was received, but not all the extensions in the previous goal state have completed: [('Microsoft.Azure.Extensions.CustomScript', 'error'), ('Microsoft.Azure.Monitor.AzureMonitorLinuxAgent', 'transitioning'), ('Microsoft.CPlat.Core.RunCommandLinux', 'success')] # This message appears when the previous test scenario had failing extensions due to extension dependencies # { 'message': r"A new goal state was received, but not all the extensions in the previous goal state have completed: \[(\(u?'.*', u?'(error|transitioning|success)'\),?)+\]" } ] return ignore_rules if __name__ == "__main__": ExtSequencing.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/ext_telemetry_pipeline/000077500000000000000000000000001462617747000253465ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/ext_telemetry_pipeline/ext_telemetry_pipeline.py000077500000000000000000000126761462617747000325160ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This test ensures that the agent does not throw any errors while trying to transmit events to wireserver. 
It does not # validate if the events actually make it to wireserver # TODO: Update this test suite to verify that the agent picks up AND sends telemetry produced by extensions # (work item https://dev.azure.com/msazure/One/_workitems/edit/24903999) # import random from typing import List, Dict, Any from azurelinuxagent.common.conf import get_etp_collection_period from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class ExtTelemetryPipeline(AgentVmTest): def run(self): ssh_client: SshClient = self._context.create_ssh_client() # Extensions we will create events for extensions = ["Microsoft.Azure.Extensions.CustomScript"] if VmExtensionIds.VmAccess.supports_distro(ssh_client.run_command("get_distro.py").rstrip()): extensions.append("Microsoft.OSTCExtensions.VMAccessForLinux") # Set the etp collection period to 30 seconds instead of default 5 minutes default_collection_period = get_etp_collection_period() log.info("") log.info("Set ETP collection period to 30 seconds on the test VM [%s]", self._context.vm.name) output = ssh_client.run_command("update-waagent-conf Debug.EtpCollectionPeriod=30", use_sudo=True) log.info("Updated waagent conf with Debug.ETPCollectionPeriod=30 completed:\n%s", output) # Add CSE to the test VM twice to ensure its events directory still exists after re-enabling log.info("") log.info("Add CSE to the test VM...") cse = VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript, resource_name="CustomScript") cse.enable(settings={'commandToExecute': "echo 'enable'"}, protected_settings={}) cse.assert_instance_view() log.info("") log.info("Add CSE to the test VM again...") cse.enable(settings={'commandToExecute': "echo 'enable again'"}, protected_settings={}) cse.assert_instance_view() # Check agent log to verify ETP is enabled command = "agent_ext_workflow-check_data_in_agent_log.py --data 'Extension Telemetry pipeline enabled: True'" log.info("") log.info("Check agent log to verify ETP is enabled...") log.info("Remote command [%s] completed:\n%s", command, ssh_client.run_command(command)) # Add good extension events for each extension and check that the TelemetryEventsCollector collects them # TODO: Update test suite to check that the agent is picking up the events generated by the extension, instead # of generating on the extensions' behalf # (work item - https://dev.azure.com/msazure/One/_workitems/edit/24903999) log.info("") log.info("Add good extension events and check they are reported...") max_events = random.randint(10, 50) self._run_remote_test(ssh_client, f"ext_telemetry_pipeline-add_extension_events.py " f"--extensions {','.join(extensions)} " f"--num_events_total {max_events}", use_sudo=True) log.info("") log.info("Good extension events were successfully reported.") # Add invalid events for each extension and check that the TelemetryEventsCollector drops them log.info("") log.info("Add bad extension events and check they are reported...") self._run_remote_test(ssh_client, f"ext_telemetry_pipeline-add_extension_events.py " f"--extensions {','.join(extensions)} " f"--num_events_total {max_events} " f"--num_events_bad {random.randint(5, max_events-5)}", use_sudo=True) log.info("") log.info("Bad extension events were successfully dropped.") # Reset the etp collection period to the 
default value so this VM can be shared with other suites log.info("") log.info("Reset ETP collection period to {0} seconds on the test VM [{1}]".format(default_collection_period, self._context.vm.name)) output = ssh_client.run_command("update-waagent-conf Debug.EtpCollectionPeriod={0}".format(default_collection_period), use_sudo=True) log.info("Updated waagent conf with default collection period completed:\n%s", output) def get_ignore_error_rules(self) -> List[Dict[str, Any]]: return [ {'message': r"Dropped events for Extension.*"} ] if __name__ == "__main__": ExtTelemetryPipeline.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/extensions_disabled/000077500000000000000000000000001462617747000246155ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/extensions_disabled/extensions_disabled.py000077500000000000000000000153431462617747000312260ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This test disables extension processing on waagent.conf and verifies that extensions are not processed, but the # agent continues reporting status. # import datetime import pytz import uuid from assertpy import assert_that, fail from typing import Any from azure.mgmt.compute.models import VirtualMachineInstanceView from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class ExtensionsDisabled(AgentVmTest): class TestCase: def __init__(self, extension: VirtualMachineExtensionClient, settings: Any): self.extension = extension self.settings = settings def run(self): ssh_client: SshClient = self._context.create_ssh_client() # Disable extension processing on the test VM log.info("") log.info("Disabling extension processing on the test VM [%s]", self._context.vm.name) output = ssh_client.run_command("update-waagent-conf Extensions.Enabled=n", use_sudo=True) log.info("Disable completed:\n%s", output) disabled_timestamp: datetime.datetime = datetime.datetime.utcnow() - datetime.timedelta(minutes=60) # Prepare test cases unique = str(uuid.uuid4()) test_file = f"waagent-test.{unique}" test_cases = [ ExtensionsDisabled.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript, resource_name="CustomScript"), {'commandToExecute': f"echo '{unique}' > /tmp/{test_file}"} ), ExtensionsDisabled.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="RunCommandHandler"), {'source': {'script': f"echo '{unique}' > /tmp/{test_file}"}} ) ] for t in test_cases: log.info("") log.info("Test case: %s", t.extension) # # Validate that the agent is not processing extensions by attempting to enable extension & checking that # provisioning fails fast # 
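            # (For reference: the switch flipped earlier lands in /etc/waagent.conf as the line
            # "Extensions.Enabled=n"; with it set, the agent still processes goal states and reports
            # status, but refuses to run any extension, which is what the fail-fast check below relies on.)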
            log.info(
                "Executing {0}; the agent should report a VMExtensionProvisioningError without processing the extension"
                .format(t.extension.__str__()))
            try:
                t.extension.enable(settings=t.settings, force_update=True, timeout=6 * 60)
                fail("The agent should have reported an error processing the goal state")
            except Exception as error:
                assert_that("VMExtensionProvisioningError" in str(error)) \
                    .described_as(f"Expected a VMExtensionProvisioningError error, but actual error was: {error}") \
                    .is_true()
                assert_that("Extension will not be processed since extension processing is disabled" in str(error)) \
                    .described_as(
                    f"Error message should communicate that extension will not be processed, but actual error "
                    f"was: {error}").is_true()
                log.info("Goal state processing for {0} failed as expected".format(t.extension.__str__()))

            #
            # Validate the agent did not process the extension by checking it did not execute the extension settings
            #
            output = ssh_client.run_command("dir /tmp", use_sudo=True)
            assert_that(output) \
                .described_as(
                f"Contents of '/tmp' on test VM contains {test_file}. Contents: {output}. \n This indicates "
                f"{t.extension.__str__()} was unexpectedly processed") \
                .does_not_contain(f"{test_file}")
            log.info("The agent did not process the extension settings for {0} as expected".format(t.extension.__str__()))

        #
        # Validate that the agent continued reporting status even if it is not processing extensions
        #
        log.info("")
        instance_view: VirtualMachineInstanceView = self._context.vm.get_instance_view()
        log.info("Instance view of VM Agent:\n%s", instance_view.vm_agent.serialize())
        assert_that(instance_view.vm_agent.statuses).described_as("The VM agent should have exactly 1 status").is_length(1)
        assert_that(instance_view.vm_agent.statuses[0].display_status).described_as("The VM Agent should be ready").is_equal_to('Ready')
        # The time in the status is time zone aware and 'disabled_timestamp' is not; we need to make the latter time zone aware before comparing them
        assert_that(instance_view.vm_agent.statuses[0].time)\
            .described_as("The VM Agent should have reported status even after extensions were disabled")\
            .is_greater_than(pytz.utc.localize(disabled_timestamp))
        log.info("The VM Agent reported status after extensions were disabled, as expected.")

        #
        # Validate that the agent processes extensions after re-enabling extension processing
        #
        log.info("")
        log.info("Enabling extension processing on the test VM [%s]", self._context.vm.name)
        output = ssh_client.run_command("update-waagent-conf Extensions.Enabled=y", use_sudo=True)
        log.info("Enable completed:\n%s", output)

        for t in test_cases:
            try:
                log.info("")
                log.info("Executing {0}; the agent should process the extension".format(t.extension.__str__()))
                t.extension.enable(settings=t.settings, force_update=True, timeout=15 * 60)
                log.info("Goal state processing for {0} succeeded as expected".format(t.extension.__str__()))
            except Exception as error:
                fail(f"Unexpected error while processing {t.extension.__str__()} after re-enabling extension "
                     f"processing: {error}")


if __name__ == "__main__":
    ExtensionsDisabled.run_from_command_line()
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/fips/000077500000000000000000000000001462617747000215305ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/fips/fips.py000077500000000000000000000054421462617747000230530ustar00rootroot00000000000000#!/usr/bin/env python3
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import uuid

from assertpy import fail

from tests_e2e.tests.lib.agent_test import AgentVmTest
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.shell import CommandError
from tests_e2e.tests.lib.ssh_client import SshClient
from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient
from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds


class Fips(AgentVmTest):
    """
    Enables FIPS on the test VM, which is a Mariner 2 VM, and verifies that extensions with protected settings
    are handled correctly under FIPS.
    """

    def run(self):
        ssh_client: SshClient = self._context.create_ssh_client()

        try:
            command = "fips-enable_fips_mariner"
            log.info("Enabling FIPS on the test VM [%s]", command)
            output = ssh_client.run_command(command)
            log.info("Enable FIPS completed\n%s", output)
        except CommandError as e:
            raise Exception(f"Failed to enable FIPS: {e}")

        log.info("Restarting test VM")
        self._context.vm.restart(wait_for_boot=True, ssh_client=ssh_client)

        try:
            command = "fips-check_fips_mariner"
            log.info("Verifying that FIPS is enabled [%s]", command)
            output = ssh_client.run_command(command).rstrip()
            if output != "FIPS mode is enabled.":
                fail(f"FIPS is not enabled - '{command}' returned '{output}'")
            log.info(output)
        except CommandError as e:
            raise Exception(f"Failed to verify that FIPS is enabled: {e}")

        # Execute an extension with protected settings to ensure the tenant certificate can be decrypted under FIPS
        custom_script = VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript, resource_name="CustomScript")
        log.info("Installing %s", custom_script)
        message = f"Hello {uuid.uuid4()}!"
        custom_script.enable(
            protected_settings={
                'commandToExecute': f"echo \'{message}\'"
            }
        )
        custom_script.assert_instance_view(expected_message=message)


if __name__ == "__main__":
    Fips.run_from_command_line()
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/keyvault_certificates/000077500000000000000000000000001462617747000251605ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/keyvault_certificates/keyvault_certificates.py000077500000000000000000000112761462617747000321330ustar00rootroot00000000000000#!/usr/bin/env python3
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# This test verifies that the Agent can download and extract KeyVault certificates that use different encryption algorithms (currently EC and RSA).
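#
# For context, the on-disk layout the test checks for: the agent materializes each certificate from the
# goal state as a pair of files under /var/lib/waagent, named after the certificate's thumbprint, e.g.
#
#   /var/lib/waagent/C49A06B3044BD1778081366929B53EBF154133B3.crt   (public certificate)
#   /var/lib/waagent/C49A06B3044BD1778081366929B53EBF154133B3.prv   (private key)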
# from assertpy import fail from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.ssh_client import SshClient class KeyvaultCertificates(AgentVmTest): def run(self): test_certificates = { 'C49A06B3044BD1778081366929B53EBF154133B3': { 'AzureCloud': 'https://waagenttests.vault.azure.net/secrets/ec-cert/39862f0c6dff4b35bc8a83a5770c2102', 'AzureChinaCloud': 'https://waagenttests.vault.azure.cn/secrets/ec-cert/bb610217ef70412bb3b3c8d7a7fabfdc', 'AzureUSGovernment': 'https://waagenttests.vault.usgovcloudapi.net/secrets/ec-cert/9c20ef55c7074a468f04a168b3488933' }, '2F846E657258E50C7011E1F68EA9AD129BA4AB31': { 'AzureCloud': 'https://waagenttests.vault.azure.net/secrets/rsa-cert/0b5eac1e66fb457bb3c3419fce17e705', 'AzureChinaCloud': 'https://waagenttests.vault.azure.cn/secrets/rsa-cert/98679243f8d6493e95281a852d8cee00', 'AzureUSGovernment': 'https://waagenttests.vault.usgovcloudapi.net/secrets/rsa-cert/463a8a6be3b3436d85d3d4e406621c9e' } } thumbprints = test_certificates.keys() certificate_urls = [u[self._context.vm.cloud] for u in test_certificates.values()] # The test certificates should be downloaded to these locations expected_certificates = " ".join([f"/var/lib/waagent/{t}.{{crt,prv}}" for t in thumbprints]) # The test may be running on a VM that has already been tested (e.g. while debugging the test), so we need to delete any existing test certificates first # (note that rm -f does not fail if the given files do not exist) ssh_client: SshClient = self._context.create_ssh_client() log.info("Deleting any existing test certificates on the test VM.") existing_certificates = ssh_client.run_command(f"rm -f -v {expected_certificates}", use_sudo=True) if existing_certificates == "": log.info("No existing test certificates were found on the test VM.") else: log.info("Some test certificates had already been downloaded to the test VM (they have been deleted now):\n%s", existing_certificates) osprofile = { "location": self._context.vm.location, "properties": { "osProfile": { "secrets": [ { "sourceVault": { "id": f"/subscriptions/{self._context.vm.subscription}/resourceGroups/waagent-tests/providers/Microsoft.KeyVault/vaults/waagenttests" }, "vaultCertificates": [{"certificateUrl": url} for url in certificate_urls] } ], } } } log.info("updating the vm's osProfile with the certificates to download:\n%s", osprofile) self._context.vm.update(osprofile) # If the test has already run on the VM, force a new goal state to ensure the certificates are downloaded since the VM model most likely already had the certificates # and the update operation would not have triggered a goal state if existing_certificates != "": log.info("Reapplying the goal state to ensure the test certificates are downloaded.") self._context.vm.reapply() try: output = ssh_client.run_command(f"ls {expected_certificates}", use_sudo=True) log.info("Found all the expected certificates:\n%s", output) except CommandError as error: if error.stdout != "": log.info("Found some of the expected certificates:\n%s", error.stdout) fail(f"Failed to find certificates\n{error.stderr}") if __name__ == "__main__": KeyvaultCertificates.run_from_command_line() 
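# A small, self-contained illustration of the brace-expansion trick used above: the doubled braces in
# f"/var/lib/waagent/{t}.{{crt,prv}}" produce a literal '{crt,prv}', which the remote shell then expands
# into the .crt/.prv pair. The helper below (not part of the test) computes the same list in pure Python:
import itertools


def expand_cert_paths(thumbprints):
    """Return the file list the remote shell would produce for '/var/lib/waagent/<t>.{crt,prv}'."""
    return [f"/var/lib/waagent/{t}.{ext}" for t, ext in itertools.product(thumbprints, ("crt", "prv"))]

# expand_cert_paths(["C49A06B3044BD1778081366929B53EBF154133B3"]) ->
#   ['/var/lib/waagent/C49A06B3044BD1778081366929B53EBF154133B3.crt',
#    '/var/lib/waagent/C49A06B3044BD1778081366929B53EBF154133B3.prv']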
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/000077500000000000000000000000001462617747000213355ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/__init__.py000066400000000000000000000000001462617747000234340ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/agent_log.py000066400000000000000000000761101462617747000236530ustar00rootroot00000000000000# # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import os import re from datetime import datetime from pathlib import Path from typing import Any, AnyStr, Dict, Iterable, List, Match from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION class AgentLogRecord: """ Represents an entry in the Agent's log (note that entries can span multiple lines in the log) Sample message: 2023-03-13T15:44:04.906673Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 9.9.9.9) """ text: str # Full text of the record when: str # Timestamp (as text) level: str # Level (INFO, ERROR, etc) thread: str # Thread name (e.g. 'Daemon', 'ExtHandler') prefix: str # Prefix (e.g. 'Daemon', 'ExtHandler', ) message: str # Message @staticmethod def from_match(match: Match[AnyStr]): """Builds a record from a regex match""" record = AgentLogRecord() record.text = match.string record.when = match.group("when") record.level = match.group("level") record.thread = match.group("thread") record.prefix = match.group("prefix") record.message = match.group("message") return record @staticmethod def from_dictionary(dictionary: Dict[str, str]): """Deserializes from a dict""" record = AgentLogRecord() record.text = dictionary["text"] record.when = dictionary["when"] record.level = dictionary["level"] record.thread = dictionary["thread"] record.prefix = dictionary["prefix"] record.message = dictionary["message"] return record @property def timestamp(self) -> datetime: # Extension logs may follow different timestamp formats # 2023/07/10 20:50:13.459260 ext_timestamp_regex_1 = r"\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}[.\d]+" # 2023/07/10 20:50:13 ext_timestamp_regex_2 = r"\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}" if re.match(ext_timestamp_regex_1, self.when): return datetime.strptime(self.when, u'%Y/%m/%d %H:%M:%S.%f') elif re.match(ext_timestamp_regex_2, self.when): return datetime.strptime(self.when, u'%Y/%m/%d %H:%M:%S') # Logs from agent follow this format: 2023-07-10T20:50:13.038599Z return datetime.strptime(self.when, u'%Y-%m-%dT%H:%M:%S.%fZ') def __str__(self): return self.text class AgentLog(object): """ Provides facilities to parse and/or extract errors from the agent's log. """ def __init__(self, path: Path = Path('/var/log/waagent.log')): self._path: Path = path self._counter_table: Dict[str, int] = {} def get_errors(self) -> List[AgentLogRecord]: """ Returns any ERRORs or WARNINGs in the agent log. The function filters out known/uninteresting errors, which are kept in the 'ignore_list' variable. """ # # Items in this list are known errors and they are ignored. 
# # * 'message' - A regular expression matched using re.search; be sure to escape any regex metacharacters. A positive match indicates # that the error should be ignored # * 'if' - A lambda that takes as parameter an AgentLogRecord representing an error and returns true if the error should be ignored # ignore_rules = [ # # NOTE: This list was taken from the older agent tests and needs to be cleaned up. Feel free to un-comment rules as new tests are added. # # # The following message is expected to log an error if systemd is not enabled on it # { # 'message': r"Did not detect Systemd, unable to set wa(|linux)agent-network-setup.service", # 'if': lambda _: not self._is_systemd() # }, # # # # Journalctl in Debian 8.11 does not have the --utc option by default. # # Ignoring this error for Deb 8 as its not a blocker and since Deb 8 is old and not widely used # { # 'message': r"journalctl: unrecognized option '--utc'", # 'if': lambda r: re.match(r"(debian8\.11)\D*", DISTRO_NAME, flags=re.IGNORECASE) is not None and r.level == "WARNING" # }, # # Sometimes it takes the Daemon some time to identify primary interface and the route to Wireserver, # # ignoring those errors if they come from the Daemon. # { # 'message': r"(No route exists to \d+\.\d+\.\d+\.\d+|" # r"Could not determine primary interface, please ensure \/proc\/net\/route is correct|" # r"Contents of \/proc\/net\/route:|Primary interface examination will retry silently)", # 'if': lambda r: r.prefix == "Daemon" # }, # # # This happens in CENTOS and RHEL when waagent attempt to format and mount the error while cloud init is already doing it # # 2021-09-20T06:45:57.253801Z WARNING Daemon Daemon Could not mount resource disk: mount: /dev/sdb1 is already mounted or /mnt/resource busy # # /dev/sdb1 is already mounted on /mnt/resource # { # 'message': r"Could not mount resource disk: mount: \/dev\/sdb1 is already mounted or \/mnt\/resource busy", # 'if': lambda r: # re.match(r"((centos7\.8)|(redhat7\.8)|(redhat7\.6)|(redhat8\.2))\D*", DISTRO_NAME, flags=re.IGNORECASE) # and r.level == "WARNING" # and r.prefix == "Daemon" # }, # # # # 2021-09-20T06:45:57.246593Z ERROR Daemon Daemon Command: [mkfs.ext4 -F /dev/sdb1], return code: [1], result: [mke2fs 1.42.9 (28-Dec-2013) # # /dev/sdb1 is mounted; will not make a filesystem here! # { # 'message': r"Command: \[mkfs.ext4 -F \/dev\/sdb1\], return code: \[1\]", # 'if': lambda r: # re.match(r"((centos7\.8)|(redhat7\.8)|(redhat7\.6)|(redhat8\.2))\D*", DISTRO_NAME, flags=re.IGNORECASE) # and r.level == "ERROR" # and r.prefix == "Daemon" # }, # # 2023-06-28T09:31:38.903835Z WARNING EnvHandler ExtHandler Move rules file 75-persistent-net-generator.rules to /var/lib/waagent/75-persistent-net-generator.rules # The environment thread performs this operation periodically # { 'message': r"Move rules file (70|75)-persistent.*.rules to /var/lib/waagent/(70|75)-persistent.*.rules", 'if': lambda r: r.level == "WARNING" }, # # Probably the agent should log this as INFO, but for now it is a warning # e.g. # 2021-07-29T04:40:17.190879Z WARNING EnvHandler ExtHandler Dhcp client is not running. # old agents logs don't have a prefix of thread and/or logger names. 
{ 'message': r"Dhcp client is not running.", 'if': lambda r: r.level == "WARNING" }, # Known bug fixed in the current agent, but still present in older daemons # { 'message': r"\[CGroupsException\].*Error: join\(\) argument must be str, bytes, or os.PathLike object, not 'NoneType'", 'if': lambda r: r.level == "WARNING" and r.prefix == "Daemon" }, # This warning is expected on when WireServer gives us the incomplete goalstate without roleinstance data { 'message': r"\[ProtocolError\] Fetched goal state without a RoleInstance", }, # # Download warnings (manifest and zips). # # Examples: # 2021-03-31T03:48:35.216494Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] [HTTP Failed] GET https://zrdfepirv2cbn04prdstr01a.blob.core.windows.net/f72653efd9e349ed9842c8b99e4c1712/Microsoft.CPlat.Core_NullSeqA_useast2euap_manifest.xml -- IOError ('The read operation timed out',) -- 1 attempts made # 2021-03-31T06:54:29.655861Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] [HTTP Retry] GET http://168.63.129.16:32526/extensionArtifact -- Status Code 502 -- 1 attempts made # 2021-03-31T06:43:17.806663Z WARNING ExtHandler ExtHandler Download failed, switching to host plugin { 'message': r"(Fetch failed: \[HttpError\] .+ GET .+ -- [0-9]+ attempts made)|(Download failed, switching to host plugin)", 'if': lambda r: r.level == "WARNING" and r.prefix == "ExtHandler" and r.thread == "ExtHandler" }, # 2021-07-09T01:46:53.307959Z INFO MonitorHandler ExtHandler [CGW] Disabling resource usage monitoring. Reason: Check on cgroups failed: # [CGroupsException] The agent's cgroup includes unexpected processes: ['[PID: 2367] UNKNOWN'] { 'message': r"The agent's cgroup includes unexpected processes: \[('\[PID:\s?\d+\]\s*UNKNOWN'(,\s*)?)+\]" }, # 2021-12-20T07:46:23.020197Z INFO ExtHandler ExtHandler [CGW] The agent's process is not within a memory cgroup # Ignoring this since memory cgroup(MemoryAccounting) not enabled. { 'message': r"The agent's process is not within a memory cgroup", 'if': lambda r: re.match(r"(((centos|redhat)7\.[48])|(redhat7\.6)|(redhat8\.2))\D*", DISTRO_NAME, flags=re.IGNORECASE) }, # # Ubuntu 22 uses cgroups v2, so we need to ignore these: # # 2023-03-15T20:47:56.684849Z INFO ExtHandler ExtHandler [CGW] The CPU cgroup controller is not mounted # 2023-03-15T20:47:56.685392Z INFO ExtHandler ExtHandler [CGW] The memory cgroup controller is not mounted # 2023-03-15T20:47:56.688576Z INFO ExtHandler ExtHandler [CGW] The agent's process is not within a CPU cgroup # 2023-03-15T20:47:56.688981Z INFO ExtHandler ExtHandler [CGW] The agent's process is not within a memory cgroup # { 'message': r"\[CGW\]\s*(The (CPU|memory) cgroup controller is not mounted)|(The agent's process is not within a (CPU|memory) cgroup)", 'if': lambda r: DISTRO_NAME == 'ubuntu' and DISTRO_VERSION >= '22.00' }, # # Old daemons can produce this message # # 2023-05-24T18:04:27.467009Z WARNING Daemon Daemon Could not mount cgroups: [Errno 1] Operation not permitted: '/sys/fs/cgroup/cpu,cpuacct' -> '/sys/fs/cgroup/cpu' # { 'message': r"Could not mount cgroups: \[Errno 1\] Operation not permitted", 'if': lambda r: r.prefix == 'Daemon' }, # # The daemon does not need the artifacts profile blob, but the request is done as part of protocol initialization. This timeout can be ignored, if the issue persist the log would include additional instances. 
# # 2022-01-20T06:52:21.515447Z WARNING Daemon Daemon Fetch failed: [HttpError] [HTTP Failed] GET https://dcrgajhx62.blob.core.windows.net/$system/edprpwqbj6.5c2ddb5b-d6c3-4d73-9468-54419ca87a97.vmSettings -- IOError timed out -- 6 attempts made # { 'message': r"\[HTTP Failed\] GET https://.*\.vmSettings -- IOError timed out", 'if': lambda r: r.level == "WARNING" and r.prefix == "Daemon" }, # # 2022-02-09T04:50:37.384810Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] GET vmSettings [correlation ID: 2bed9b62-188e-4668-b1a8-87c35cfa4927 eTag: 7031887032544600793]: [Internal error in HostGAPlugin] [HTTP Failed] [502: Bad Gateway] b'{ "errorCode": "VMArtifactsProfileBlobContentNotFound", "message": "VM artifacts profile blob has no content in it.", "details": ""}' # # Fetching the goal state may catch the HostGAPlugin in the process of computing the vmSettings. This can be ignored, if the issue persist the log would include other errors as well. # { 'message': r"\[ProtocolError\] GET vmSettings.*VMArtifactsProfileBlobContentNotFound", 'if': lambda r: r.level == "ERROR" }, # # 2022-11-01T02:45:55.513692Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] GET vmSettings [correlation ID: 616873cc-be87-41b6-83b7-ef3a76370628 eTag: 3693655388249891516]: [Internal error in HostGAPlugin] [HTTP Failed] [502: Bad Gateway] { "errorCode": "InternalError", "message": "The server encountered an internal error. Please retry the request.", "details": ""} # # Fetching the goal state may catch the HostGAPlugin in the process of computing the vmSettings. This can be ignored, if the issue persist the log would include other errors as well. # { 'message': r"\[ProtocolError\] GET vmSettings.*Please retry the request", 'if': lambda r: r.level == "ERROR" }, # # 2022-08-16T01:50:10.759502Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] GET vmSettings [correlation ID: e162f7c3-8d0c-4a9b-a987-8f9ec0699dae eTag: 9757461589808963322]: Timeout # # Fetching the goal state may hit timeouts in the HostGAPlugin's vmSettings. This can be ignored, if the issue persist the log would include other errors as well. # { 'message': r"\[ProtocolError\] GET vmSettings.*Timeout", 'if': lambda r: r.level == "ERROR" }, # # 2021-12-29T06:50:49.904601Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] Error fetching goal state Inner error: [ResourceGoneError] [HTTP Failed] [410: Gone] The page you requested was removed. # 2022-03-21T02:44:03.770017Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] Error fetching goal state Inner error: [ResourceGoneError] Resource is gone # 2022-02-16T04:46:50.477315Z WARNING Daemon Daemon Fetching the goal state failed: [ResourceGoneError] [HTTP Failed] [410: Gone] b'\n\n ResourceNotAvailable\n The resource requested is no longer available. Please refresh your cache.\n
' # # ResourceGone can happen if we are fetching one of the URIs in the goal state and a new goal state arrives { 'message': r"(?s)(Fetching the goal state failed|Error fetching goal state|Error fetching the goal state).*(\[ResourceGoneError\]|\[410: Gone\]|Resource is gone)", 'if': lambda r: r.level in ("WARNING", "ERROR") }, # # 2022-12-02T05:45:51.771876Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] [Wireserver Exception] [HttpError] [HTTP Failed] GET http://168.63.129.16/machine/ -- IOError [Errno 104] Connection reset by peer -- 6 attempts made # { 'message': r"\[HttpError\] \[HTTP Failed\] GET http://168.63.129.16/machine/ -- IOError \[Errno 104\] Connection reset by peer", 'if': lambda r: r.level in ("WARNING", "ERROR") }, # # 2022-03-08T03:03:23.036161Z WARNING ExtHandler ExtHandler Fetch failed from [http://168.63.129.16:32526/extensionArtifact]: [HTTP Failed] [400: Bad Request] b'' # 2022-03-08T03:03:23.042008Z WARNING ExtHandler ExtHandler Fetch failed: [ProtocolError] Fetch failed from [http://168.63.129.16:32526/extensionArtifact]: [HTTP Failed] [400: Bad Request] b'' # # Warning downloading extension manifest. If the issue persists, this would cause errors elsewhere so safe to ignore { 'message': r"\[http://168.63.129.16:32526/extensionArtifact\]: \[HTTP Failed\] \[400: Bad Request\]", 'if': lambda r: r.level == "WARNING" }, # # 2022-03-29T05:52:10.089958Z WARNING ExtHandler ExtHandler An error occurred while retrieving the goal state: [ProtocolError] GET vmSettings [correlation ID: da106cf5-83a0-44ec-9484-d0e9223847ab eTag: 9856274988128027586]: Timeout # # Ignore warnings about timeouts in vmSettings; if the condition persists, an error will occur elsewhere. # { 'message': r"GET vmSettings \[[^]]+\]: Timeout", 'if': lambda r: r.level == "WARNING" }, # # 2022-09-30T02:48:33.134649Z WARNING MonitorHandler ExtHandler Error in SendHostPluginHeartbeat: [HttpError] [HTTP Failed] GET http://168.63.129.16:32526/health -- IOError timed out -- 1 attempts made --- [NOTE: Will not log the same error for the next hour] # # Ignore timeouts in the HGAP's health API... those are tracked in the HGAP dashboard so no need to worry about them on test runs # { 'message': r"SendHostPluginHeartbeat:.*GET http://168.63.129.16:32526/health.*timed out", 'if': lambda r: r.level == "WARNING" }, # # 2022-09-30T03:09:25.013398Z WARNING MonitorHandler ExtHandler Error in SendHostPluginHeartbeat: [ResourceGoneError] [HTTP Failed] [410: Gone] # # ResourceGone should not happen very often, since the monitor thread already refreshes the goal state before sending the HostGAPlugin heartbeat. Errors can still happen, though, since the goal state # can change in-between the time at which the monitor thread refreshes and the time at which it sends the heartbeat. Ignore these warnings unless there are 2 or more of them. 
# { 'message': r"SendHostPluginHeartbeat:.*ResourceGoneError.*410", 'if': lambda r: r.level == "WARNING" and self._increment_counter("SendHostPluginHeartbeat-ResourceGoneError-410") < 2 # ignore unless there are 2 or more instances }, # # 2023-01-18T02:58:25.589492Z ERROR SendTelemetryHandler ExtHandler Event: name=WALinuxAgent, op=ReportEventErrors, message=DroppedEventsCount: 1 # Reasons (first 5 errors): [ProtocolError] [Wireserver Exception] [ProtocolError] [Wireserver Failed] URI http://168.63.129.16/machine?comp=telemetrydata [HTTP Failed] Status Code 400: Traceback (most recent call last): # { 'message': r"(?s)\[ProtocolError\].*http://168.63.129.16/machine\?comp=telemetrydata.*Status Code 400", 'if': lambda r: r.thread == 'SendTelemetryHandler' and self._increment_counter("SendTelemetryHandler-telemetrydata-Status Code 400") < 2 # ignore unless there are 2 or more instances }, # # 2023-07-26T22:05:42.841692Z ERROR SendTelemetryHandler ExtHandler Event: name=WALinuxAgent, op=ReportEventErrors, message=DroppedEventsCount: 1 # Reasons (first 5 errors): [ProtocolError] Failed to send events:[ResourceGoneError] [HTTP Failed] [410: Gone] b'\n\n ResourceNotAvailable\n The resource requested is no longer available. Please refresh your cache.\n
': Traceback (most recent call last): # { 'message': r"(?s)\[ProtocolError\].*Failed to send events.*\[410: Gone\]", 'if': lambda r: r.thread == 'SendTelemetryHandler' and self._increment_counter("SendTelemetryHandler-telemetrydata-Status Code 410") < 2 # ignore unless there are 2 or more instances }, # # Ignore these errors in flatcar: # # 1) 2023-03-16T14:30:33.091427Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology # 2) 2023-03-16T14:30:33.091708Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 # 3) 2023-03-16T14:30:34.660976Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required # 4) 2023-03-16T14:30:34.800112Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' # # 1, 2) under investigation # 3) There seems to be a configuration issue in flatcar that prevents python from using HTTPS when trying to reach storage. This does not produce any actual errors, since the agent fallbacks to the HGAP. # 4) Remove this when bug 17523033 is fixed. # { 'message': r"(Failed to mount resource disk)|(unable to detect disk topology)", 'if': lambda r: r.prefix == 'Daemon' and DISTRO_NAME == 'flatcar' }, { 'message': r"(HTTPS is unavailable and required)|(Unable to setup the persistent firewall rules.*Read-only file system)", 'if': lambda r: DISTRO_NAME == 'flatcar' }, # # AzureSecurityLinuxAgent fails to install on a few distros (e.g. Debian 11) # # 2023-03-16T14:29:48.798415Z ERROR ExtHandler ExtHandler Event: name=Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent, op=Install, message=[ExtensionOperationError] Non-zero exit code: 56, /var/lib/waagent/Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent-2.21.115/handler.sh install # { 'message': r"Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent.*op=Install.*Non-zero exit code: 56,", }, # # Ignore LogCollector failure to fetch vmSettings if it recovers # # 2023-08-27T08:13:42.520557Z WARNING MainThread LogCollector Fetch failed: [HttpError] [HTTP Failed] GET https://md-hdd-tkst3125n3x0.blob.core.chinacloudapi.cn/$system/lisa-WALinuxAgent-20230827-080144-029-e0-n0.cb9a406f-584b-4702-98bb-41a3ad5e334f.vmSettings -- IOError timed out -- 6 attempts made # { 'message': r"Fetch failed:.*GET.*vmSettings.*timed out", 'if': lambda r: r.prefix == 'LogCollector' and self.agent_log_contains("LogCollector Log collection successfully completed", after_timestamp=r.timestamp) }, # # In tests, we use both autoupdate flags to install test agent with different value and changing it depending on the scenario. So, we can ignore this warning. # # 2024-01-30T22:22:37.299911Z WARNING ExtHandler ExtHandler AutoUpdate.Enabled property is **Deprecated** now but it's set to different value from AutoUpdate.UpdateToLatestVersion. 
# Please consider removing it if added by mistake
            {
                'message': r"AutoUpdate.Enabled property is \*\*Deprecated\*\* now but it's set to different value from AutoUpdate.UpdateToLatestVersion",
                'if': lambda r: r.prefix == 'ExtHandler' and r.thread == 'ExtHandler'
            }
        ]

        def is_error(r: AgentLogRecord) -> bool:
            return r.level in ('ERROR', 'WARNING') or any(err in r.text for err in ['Exception', 'Traceback', '[CGW]'])

        errors = []
        primary_interface_error = None
        provisioning_complete = False

        for record in self.read():
            if is_error(record) and not self.matches_ignore_rule(record, ignore_rules):
                # Handle "/proc/net/route contains no routes" and "/proc/net/route is missing headers" as a special case
                # since it can take time for the primary interface to come up, and we don't want to report transient
                # errors as actual errors. The last of these errors in the log will be reported.
                if ("/proc/net/route contains no routes" in record.text or "/proc/net/route is missing headers" in record.text) and record.prefix == "Daemon":
                    primary_interface_error = record
                    provisioning_complete = False
                else:
                    errors.append(record)

            if "Provisioning complete" in record.text and record.prefix == "Daemon":
                provisioning_complete = True

        # Keep the "no routes found" error as a genuine error message if it was never corrected
        if primary_interface_error is not None and not provisioning_complete:
            errors.append(primary_interface_error)

        return errors

    def agent_log_contains(self, data: str, after_timestamp: datetime = datetime.min):
        """
        Looks for the specified string in the WALinuxAgent log and returns whether it was found.

        :param data: The string to look for in the agent log
        :param after_timestamp: Only log records with a timestamp later than this value are considered
        :return: True if the string is found in the agent log after 'after_timestamp', False otherwise
        """
        for record in self.read():
            if data in record.text and record.timestamp > after_timestamp:
                return True
        return False

    @staticmethod
    def _is_systemd():
        # Taken from azurelinuxagent/common/osutil/systemd.py; repeated here because it is available only on agents >= 2.3
        return os.path.exists("/run/systemd/system/")

    def _increment_counter(self, counter_name) -> int:
        """
        Keeps a table of counters indexed by the given 'counter_name'. Each call to the function increments the value
        of that counter and returns the new value.
        """
        count = self._counter_table.get(counter_name)
        count = 1 if count is None else count + 1
        self._counter_table[counter_name] = count
        return count

    @staticmethod
    def matches_ignore_rule(record: AgentLogRecord, ignore_rules: List[Dict[str, Any]]) -> bool:
        """
        Returns True if the given 'record' matches any of the 'ignore_rules'
        """
        return any(re.search(rule['message'], record.message) is not None and ('if' not in rule or rule['if'](record)) for rule in ignore_rules)

    # The format of the log has changed over time and the current log may include records from different sources. Most records are single-line, but some of them
    # can span across multiple lines. We will assume records always start with a line similar to the examples below; any other lines will be assumed to be part
    # of the record that is being currently parsed.
    #
    #     Newer Agent: 2019-11-27T22:22:48.123985Z VERBOSE ExtHandler ExtHandler Report vm agent status
    #                  2021-03-30T19:45:33.793213Z INFO ExtHandler [Microsoft.Azure.Security.Monitoring.AzureSecurityLinuxAgent-2.14.64] Target handler state: enabled [incarnation 3]
    #
    #     2.2.46: the date time was changed to ISO-8601 format but the thread name was not added.
    #                  2021-05-28T01:17:40.683072Z INFO ExtHandler Wire server endpoint:168.63.129.16
    #                  2021-05-28T01:17:40.683823Z WARNING ExtHandler Move rules file 70-persistent-net.rules to /var/lib/waagent/70-persistent-net.rules
    #                  2021-05-28T01:17:40.767600Z INFO ExtHandler Successfully added Azure fabric firewall rules
    #
    #     Older Agent: 2021/03/30 19:35:35.971742 INFO Daemon Azure Linux Agent Version:2.2.45
    #
    #     Oldest Agent: 2023/06/07 08:04:35.336313 WARNING Disabling guest agent in accordance with ovf-env.xml
    #
    #     Extension: 2021/03/30 19:45:31 Azure Monitoring Agent for Linux started to handle.
    #                2021/03/30 19:45:31 [Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-1.7.0] cwd is /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-1.7.0
    #
    _NEWER_AGENT_RECORD = re.compile(r'(?P<when>[\d-]+T[\d:.]+Z)\s(?P<level>VERBOSE|INFO|WARNING|ERROR)\s(?P<thread>\S+)\s(?P<prefix>(Daemon)|(ExtHandler)|(LogCollector)|(\[\S+\]))\s(?P<message>.*)')
    _2_2_46_AGENT_RECORD = re.compile(r'(?P<when>[\d-]+T[\d:.]+Z)\s(?P<level>VERBOSE|INFO|WARNING|ERROR)\s(?P<thread>)(?P<prefix>Daemon|ExtHandler|\[\S+\])\s(?P<message>.*)')
    _OLDER_AGENT_RECORD = re.compile(r'(?P<when>[\d/]+\s[\d:.]+)\s(?P<level>VERBOSE|INFO|WARNING|ERROR)\s(?P<thread>)(?P<prefix>Daemon|ExtHandler)\s(?P<message>.*)')
    _OLDEST_AGENT_RECORD = re.compile(r'(?P<when>[\d/]+\s[\d:.]+)\s(?P<level>VERBOSE|INFO|WARNING|ERROR)\s(?P<thread>)(?P<prefix>)(?P<message>.*)')
    _EXTENSION_RECORD = re.compile(r'(?P<when>[\d/]+\s[\d:.]+)\s(?P<level>)(?P<thread>)((?P<prefix>\[[^\]]+\])\s)?(?P<message>.*)')

    def read(self) -> Iterable[AgentLogRecord]:
        """
        Generator function that returns each of the entries in the agent log parsed as AgentLogRecords.

        The function can be used following this pattern:

            for record in AgentLog().read():
                ... do something ...
        """
        if not self._path.exists():
            raise IOError('{0} does not exist'.format(self._path))

        def match_record():
            for regex in [self._NEWER_AGENT_RECORD, self._2_2_46_AGENT_RECORD, self._OLDER_AGENT_RECORD, self._OLDEST_AGENT_RECORD]:
                m = regex.match(line)
                if m is not None:
                    return m
            # The extension regex also matches the old agent records, so it needs to be last
            return self._EXTENSION_RECORD.match(line)

        def complete_record():
            record.text = record.text.rstrip()  # the text includes \n
            if extra_lines != "":
                record.text = record.text + "\n" + extra_lines.rstrip()
                record.message = record.message + "\n" + extra_lines.rstrip()
            return record

        with self._path.open() as file_:
            record = None
            extra_lines = ""

            line = file_.readline()
            while line != "":  # while not EOF
                match = match_record()
                if match is not None:
                    if record is not None:
                        yield complete_record()
                    record = AgentLogRecord.from_match(match)
                    extra_lines = ""
                else:
                    extra_lines = extra_lines + line
                line = file_.readline()

            if record is not None:
                yield complete_record()
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/agent_test.py000066400000000000000000000101231462617747000240410ustar00rootroot00000000000000#!/usr/bin/env python3
# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
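#
# A minimal usage sketch for AgentLog (defined in agent_log.py above). This block is illustrative only, not part of
# the test framework; it simply scans the agent log for errors that the ignore rules do not filter out:
#
#     from tests_e2e.tests.lib.agent_log import AgentLog
#
#     agent_log = AgentLog()  # defaults to /var/log/waagent.log
#     for error in agent_log.get_errors():
#         print("{0} {1}: {2}".format(error.timestamp, error.level, error.message))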
#
import sys

from abc import ABC, abstractmethod
from datetime import datetime
from assertpy import fail
from typing import Any, Dict, List

from tests_e2e.tests.lib.agent_test_context import AgentTestContext, AgentVmTestContext, AgentVmssTestContext
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.remote_test import FAIL_EXIT_CODE
from tests_e2e.tests.lib.shell import CommandError
from tests_e2e.tests.lib.ssh_client import ATTEMPTS, ATTEMPT_DELAY, SshClient


class TestSkipped(Exception):
    """
    Tests can raise this exception to indicate they should not be executed (for example, when trying to execute them
    on an unsupported distro)
    """


class RemoteTestError(CommandError):
    """
    Raised when a remote test fails with an unexpected error.
    """


class AgentTest(ABC):
    """
    Abstract base class for Agent tests
    """
    def __init__(self, context: AgentTestContext):
        self._context: AgentTestContext = context

    @abstractmethod
    def run(self):
        """
        Tests must define this method, which is used to execute the test.
        """

    def get_ignore_error_rules(self) -> List[Dict[str, Any]]:
        """
        Tests can override this method to return a list with rules to ignore errors in the agent log (see agent_log.py for sample rules).
        """
        return []

    def get_ignore_errors_before_timestamp(self) -> datetime:
        # Ignore errors in the agent log before this timestamp
        return datetime.min

    @classmethod
    def run_from_command_line(cls):
        """
        Convenience method to execute the test when it is being invoked directly from the command line (as opposed to
        being invoked from a test framework or library).
        """
        try:
            if issubclass(cls, AgentVmTest):
                cls(AgentVmTestContext.from_args()).run()
            elif issubclass(cls, AgentVmssTest):
                cls(AgentVmssTestContext.from_args()).run()
            else:
                raise Exception(f"Class {cls.__name__} is not a valid test class")
        except SystemExit:  # Bad arguments
            pass
        except AssertionError as e:
            log.error("%s", e)
            sys.exit(1)
        except:  # pylint: disable=bare-except
            log.exception("Test failed")
            sys.exit(1)

        sys.exit(0)

    def _run_remote_test(self, ssh_client: SshClient, command: str, use_sudo: bool = False, attempts: int = ATTEMPTS, attempt_delay: int = ATTEMPT_DELAY) -> None:
        """
        Derived classes can use this method to execute a remote test (a test that runs over SSH).
        """
        try:
            output = ssh_client.run_command(command=command, use_sudo=use_sudo, attempts=attempts, attempt_delay=attempt_delay)
            log.info("*** PASSED: [%s]\n%s", command, self._indent(output))
        except CommandError as error:
            if error.exit_code == FAIL_EXIT_CODE:
                fail(f"[{command}] {error.stderr}{self._indent(error.stdout)}")
            raise RemoteTestError(command=error.command, exit_code=error.exit_code, stdout=self._indent(error.stdout), stderr=error.stderr)

    @staticmethod
    def _indent(text: str, indent: str = " " * 8):
        return "\n".join(f"{indent}{line}" for line in text.splitlines())


class AgentVmTest(AgentTest):
    """
    Base class for Agent tests that run on a single VM
    """


class AgentVmssTest(AgentTest):
    """
    Base class for Agent tests that run on a scale set
    """
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/agent_test_context.py000066400000000000000000000126771462617747000256140ustar00rootroot00000000000000# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import argparse import os from abc import ABC from pathlib import Path from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient from tests_e2e.tests.lib.virtual_machine_scale_set_client import VirtualMachineScaleSetClient from tests_e2e.tests.lib.ssh_client import SshClient class AgentTestContext(ABC): """ Base class for the execution context of agent tests; includes the working directories and SSH info for the tests. """ DEFAULT_SSH_PORT = 22 def __init__(self, working_directory: Path, username: str, identity_file: Path, ssh_port: int): self.working_directory: Path = working_directory self.username: str = username self.identity_file: Path = identity_file self.ssh_port: int = ssh_port @staticmethod def _create_argument_parser() -> argparse.ArgumentParser: """ Creates an ArgumentParser that includes the arguments common to the concrete classes derived from AgentTestContext """ parser = argparse.ArgumentParser() parser.add_argument('-c', '--cloud', dest="cloud", required=False, choices=['AzureCloud', 'AzureChinaCloud', 'AzureUSGovernment'], default="AzureCloud") parser.add_argument('-g', '--group', required=True) parser.add_argument('-l', '--location', required=True) parser.add_argument('-s', '--subscription', required=True) parser.add_argument('-w', '--working-directory', dest="working_directory", required=False, default=str(Path().home() / "tmp")) parser.add_argument('-u', '--username', required=False, default=os.getenv("USER")) parser.add_argument('-k', '--identity-file', dest="identity_file", required=False, default=str(Path.home() / ".ssh" / "id_rsa")) parser.add_argument('-p', '--ssh-port', dest="ssh_port", required=False, default=AgentTestContext.DEFAULT_SSH_PORT) return parser class AgentVmTestContext(AgentTestContext): """ Execution context for agent tests targeted to individual VMs. """ def __init__(self, working_directory: Path, vm: VirtualMachineClient, ip_address: str, username: str, identity_file: Path, ssh_port: int = AgentTestContext.DEFAULT_SSH_PORT): super().__init__(working_directory, username, identity_file, ssh_port) self.vm: VirtualMachineClient = vm self.ip_address: str = ip_address def create_ssh_client(self) -> SshClient: """ Convenience method to create an SSH client using the connection info from the context. """ return SshClient( ip_address=self.ip_address, username=self.username, identity_file=self.identity_file, port=self.ssh_port) @staticmethod def from_args(): """ Creates an AgentVmTestContext from the command line arguments. 
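
        A hypothetical invocation from a development shell (all argument values below are made up):

            my_test.py -g my-resource-group -l westus2 -s 00000000-0000-0000-0000-000000000000 --vm my-test-vm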
""" parser = AgentTestContext._create_argument_parser() parser.add_argument('-vm', '--vm', required=True) parser.add_argument('-a', '--ip-address', dest="ip_address", required=False) # Use the vm name as default args = parser.parse_args() working_directory: Path = Path(args.working_directory) if not working_directory.exists(): working_directory.mkdir(exist_ok=True) vm: VirtualMachineClient = VirtualMachineClient(cloud=args.cloud, location=args.location, subscription=args.subscription, resource_group=args.group, name=args.vm) ip_address = args.ip_address if args.ip_address is not None else args.vm return AgentVmTestContext(working_directory=working_directory, vm=vm, ip_address=ip_address, username=args.username, identity_file=Path(args.identity_file), ssh_port=args.ssh_port) class AgentVmssTestContext(AgentTestContext): """ Execution context for agent tests targeted to VM Scale Sets. """ def __init__(self, working_directory: Path, vmss: VirtualMachineScaleSetClient, username: str, identity_file: Path, ssh_port: int = AgentTestContext.DEFAULT_SSH_PORT): super().__init__(working_directory, username, identity_file, ssh_port) self.vmss: VirtualMachineScaleSetClient = vmss @staticmethod def from_args(): """ Creates an AgentVmssTestContext from the command line arguments. """ parser = AgentTestContext._create_argument_parser() parser.add_argument('-vmss', '--vmss', required=True) args = parser.parse_args() working_directory: Path = Path(args.working_directory) if not working_directory.exists(): working_directory.mkdir(exist_ok=True) vmss = VirtualMachineScaleSetClient(cloud=args.cloud, location=args.location, subscription=args.subscription, resource_group=args.group, name=args.vmss) return AgentVmssTestContext(working_directory=working_directory, vmss=vmss, username=args.username, identity_file=Path(args.identity_file), ssh_port=args.ssh_port) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/azure_clouds.py000066400000000000000000000016121462617747000244060ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from typing import Dict from msrestazure.azure_cloud import Cloud, AZURE_PUBLIC_CLOUD, AZURE_CHINA_CLOUD, AZURE_US_GOV_CLOUD AZURE_CLOUDS: Dict[str, Cloud] = { "AzureCloud": AZURE_PUBLIC_CLOUD, "AzureChinaCloud": AZURE_CHINA_CLOUD, "AzureUSGovernment": AZURE_US_GOV_CLOUD } Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/azure_sdk_client.py000066400000000000000000000043071462617747000252400ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from typing import Any, Callable

from azure.identity import DefaultAzureCredential
from azure.core.polling import LROPoller

from tests_e2e.tests.lib.azure_clouds import AZURE_CLOUDS
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.retry import execute_with_retry


class AzureSdkClient:
    """
    Base class for classes implementing clients of the Azure SDK.
    """
    _DEFAULT_TIMEOUT = 10 * 60  # (in seconds)

    @staticmethod
    def create_client(client_type: type, cloud: str, subscription_id: str):
        """
        Creates an SDK client of the given 'client_type'
        """
        azure_cloud = AZURE_CLOUDS[cloud]
        return client_type(
            base_url=azure_cloud.endpoints.resource_manager,
            credential=DefaultAzureCredential(authority=azure_cloud.endpoints.active_directory),
            credential_scopes=[azure_cloud.endpoints.resource_manager + "/.default"],
            subscription_id=subscription_id)

    @staticmethod
    def _execute_async_operation(operation: Callable[[], LROPoller], operation_name: str, timeout: int) -> Any:
        """
        Starts an async operation and waits for its completion. Returns the operation's result.
        """
        log.info("Starting [%s]", operation_name)
        poller: LROPoller = execute_with_retry(operation)
        log.info("Waiting for [%s]", operation_name)
        poller.wait(timeout=timeout)
        if not poller.done():
            raise TimeoutError(f"[{operation_name}] did not complete within {timeout} seconds")
        log.info("[%s] completed", operation_name)
        return poller.result()
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/cgroup_helpers.py000066400000000000000000000135631462617747000247340ustar00rootroot00000000000000import os
import re

from assertpy import assert_that, fail

from azurelinuxagent.common.osutil import systemd
from azurelinuxagent.common.utils import shellutil
from azurelinuxagent.common.version import DISTRO_NAME, DISTRO_VERSION
from tests_e2e.tests.lib.agent_log import AgentLog
from tests_e2e.tests.lib.logging import log

BASE_CGROUP = '/sys/fs/cgroup'
AGENT_CGROUP_NAME = 'WALinuxAgent'
AGENT_SERVICE_NAME = systemd.get_agent_unit_name()
AGENT_CONTROLLERS = ['cpu', 'memory']
EXT_CONTROLLERS = ['cpu', 'memory']

CGROUP_TRACKED_PATTERN = re.compile(r'Started tracking cgroup ([^\s]+)\s+\[(?P<path>[^\s]+)\]')

GATESTEXT_FULL_NAME = "Microsoft.Azure.Extensions.Edp.GATestExtGo"
GATESTEXT_SERVICE = "gatestext.service"
AZUREMONITOREXT_FULL_NAME = "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent"
AZUREMONITORAGENT_SERVICE = "azuremonitoragent.service"
MDSD_SERVICE = "mdsd.service"


def verify_if_distro_supports_cgroup():
    """
    Checks that the agent is running on a distro that supports cgroups
    """
    log.info("===== Checking if distro supports cgroups")

    base_cgroup_fs_exists = os.path.exists(BASE_CGROUP)

    assert_that(base_cgroup_fs_exists).is_true().described_as("Cgroup file system:{0} not found in Distro {1}-{2}".format(BASE_CGROUP, DISTRO_NAME, DISTRO_VERSION))

    log.info('Distro %s-%s supports cgroups\n', DISTRO_NAME, DISTRO_VERSION)


def print_cgroups():
    """
    Logs the mounted cgroups information
    """
    log.info("====== Currently mounted cgroups ======")
    for m in shellutil.run_command(['mount']).splitlines():
        # output is similar to
        #   mount
        #   sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
        #   proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
        #   devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=1842988k,nr_inodes=460747,mode=755)
        #   cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
        #   cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
        #   cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
        #   cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
        #   cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
        if 'type cgroup' in m:
            log.info('\t%s', m)


def print_service_status():
    log.info("====== Agent Service status ======")
    output = shellutil.run_command(["systemctl", "status", systemd.get_agent_unit_name()])
    for line in output.splitlines():
        log.info("\t%s", line)


def get_agent_cgroup_mount_path():
    return os.path.join('/', 'azure.slice', AGENT_SERVICE_NAME)


def get_extension_cgroup_mount_path(extension_name):
    return os.path.join('/', 'azure.slice/azure-vmextensions.slice', "azure-vmextensions-" + extension_name + ".slice")


def get_unit_cgroup_mount_path(unit_name):
    """
    Returns the cgroup mount path for the given unit
    """
    output = shellutil.run_command(["systemctl", "show", unit_name, "--property", "ControlGroup"])
    # Output is similar to
    #   systemctl show walinuxagent.service --property ControlGroup
    #   ControlGroup=/azure.slice/walinuxagent.service
    # The regex below matches the output and extracts the right-hand side value
    match = re.match("[^=]+=(?P<value>.+)", output)
    if match is not None:
        return match.group('value')
    return None


def verify_agent_cgroup_assigned_correctly():
    """
    Checks that the agent is running and is assigned to the correct cgroup, using the service status output
    """
    log.info("===== Verifying the daemon and the agent are assigned to the same correct cgroup using systemd")

    service_status = shellutil.run_command(["systemctl", "status", systemd.get_agent_unit_name()])
    log.info("Agent service status output:\n%s", service_status)

    is_active = False
    is_cgroup_assigned = False
    cgroup_mount_path = get_agent_cgroup_mount_path()
    is_active_pattern = re.compile(r".*Active:\s+active.*")

    for line in service_status.splitlines():
        if re.match(is_active_pattern, line):
            is_active = True
        elif cgroup_mount_path in line:
            is_cgroup_assigned = True

    if not is_active:
        fail('walinuxagent service was not active/running. Service status:{0}'.format(service_status))
    if not is_cgroup_assigned:
        fail('walinuxagent service was not assigned to the expected cgroup:{0}'.format(cgroup_mount_path))

    log.info("Successfully verified the agent cgroup assigned correctly by systemd\n")


def get_agent_cpu_quota():
    """
    Returns the cpu quota for the agent service
    """
    output = shellutil.run_command(["systemctl", "show", AGENT_SERVICE_NAME, "--property", "CPUQuotaPerSecUSec"])
    # Output is similar to
    #   systemctl show walinuxagent --property CPUQuotaPerSecUSec
    #   CPUQuotaPerSecUSec=infinity
    match = re.match("[^=]+=(?P<value>.+)", output)
    if match is not None:
        return match.group('value')
    return None


def check_agent_quota_disabled():
    """
    Returns True if the cpu quota is infinity
    """
    cpu_quota = get_agent_cpu_quota()
    # the quota can be expressed as seconds (s) or milliseconds (ms); no quota is expressed as "infinity"
    return cpu_quota == 'infinity'


def check_cgroup_disabled_with_unknown_process():
    """
    Returns True if cgroup monitoring was disabled because of an unknown process
    """
    for record in AgentLog().read():
        match = re.search("Disabling resource usage monitoring. Reason: Check on cgroups failed:.+UNKNOWN", record.message, flags=re.DOTALL)
        if match is not None:
            log.info("Found message:\n\t%s", record.text.replace("\n", "\n\t"))
            return True
    return False
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/firewall_helpers.py000066400000000000000000000166011462617747000252420ustar00rootroot00000000000000from typing import List, Tuple

from assertpy import fail

from azurelinuxagent.common.future import ustr
from azurelinuxagent.common.utils import shellutil
from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.retry import retry_if_false

WIRESERVER_ENDPOINT_FILE = '/var/lib/waagent/WireServerEndpoint'
WIRESERVER_IP = '168.63.129.16'
FIREWALL_PERIOD = 30


# helper methods shared by multiple tests
class IPTableRules(object):
    # -D deletes the specific rule in the iptable chain
    DELETE_COMMAND = "-D"

    # -C checks if a specific rule exists
    CHECK_COMMAND = "-C"


class FirewalldRules(object):
    # checks if a specific rule exists
    QUERY_PASSTHROUGH = "--query-passthrough"

    # removes a specific rule
    REMOVE_PASSTHROUGH = "--remove-passthrough"


def get_wireserver_ip() -> str:
    try:
        with open(WIRESERVER_ENDPOINT_FILE, 'r') as f:
            wireserver_ip = f.read()
    except Exception:
        wireserver_ip = WIRESERVER_IP
    return wireserver_ip


def get_root_accept_rule_command(command: str) -> List[str]:
    return ['sudo', 'iptables', '-t', 'security', command, 'OUTPUT', '-d', get_wireserver_ip(), '-p', 'tcp', '-m', 'owner', '--uid-owner', '0', '-j', 'ACCEPT', '-w']


def get_non_root_accept_rule_command(command: str) -> List[str]:
    return ['sudo', 'iptables', '-t', 'security', command, 'OUTPUT', '-d', get_wireserver_ip(), '-p', 'tcp', '--destination-port', '53', '-j', 'ACCEPT', '-w']


def get_non_root_drop_rule_command(command: str) -> List[str]:
    return ['sudo', 'iptables', '-t', 'security', command, 'OUTPUT', '-d', get_wireserver_ip(), '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'DROP', '-w']


def get_non_root_accept_tcp_firewalld_rule(command):
    return ["firewall-cmd", "--permanent", "--direct", command, "ipv4", "-t", "security", "-A", "OUTPUT", "-d", get_wireserver_ip(), "-p", "tcp", "--destination-port", "53", "-j", "ACCEPT"]


def get_root_accept_firewalld_rule(command):
    return ["firewall-cmd", "--permanent", "--direct", command, "ipv4", "-t", "security", "-A", "OUTPUT", "-d", get_wireserver_ip(), "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"]


def get_non_root_drop_firewalld_rule(command):
    return ["firewall-cmd", "--permanent", "--direct", command, "ipv4", "-t", "security", "-A", "OUTPUT", "-d", get_wireserver_ip(), "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"]


def execute_cmd(cmd: List[str]):
    """
    Note: shellutil.run_command returns stdout when exit_code == 0; otherwise it raises an exception
    """
    return shellutil.run_command(cmd, track_process=False)


def execute_cmd_return_err_code(cmd: List[str]):
    """
    Note: returns the err_code plus stdout/stderr
    """
    try:
        stdout = execute_cmd(cmd)
        return 0, stdout
    except Exception as error:
        return -1, ustr(error)


def check_if_iptable_rule_is_available(full_command: List[str]) -> bool:
    """
    Checks if the given rule is present in the iptable rule set; "-C" returns exit code 0 if the rule is available.
    """
    exit_code, _ = execute_cmd_return_err_code(full_command)
    return exit_code == 0


def print_current_iptable_rules() -> None:
    """
    Prints the current iptable rules
    """
    try:
        cmd = ["sudo", "iptables", "-t", "security", "-L", "-nxv"]
        stdout = execute_cmd(cmd)
        for line in stdout.splitlines():
            log.info(str(line))
    except Exception as error:
        log.warning("Error -- Failed to fetch the ip table rule set {0}".format(error))


def get_all_iptable_rule_commands(command: str) -> Tuple[List[str], List[str], List[str]]:
    return get_root_accept_rule_command(command), get_non_root_accept_rule_command(command), get_non_root_drop_rule_command(command)


def verify_all_rules_exist() -> None:
    """
    Verifies that all the iptable rules are present in the rule set
    """
    def check_all_iptables() -> bool:
        root_accept, non_root_accept, non_root_drop = get_all_iptable_rule_commands(IPTableRules.CHECK_COMMAND)
        found: bool = check_if_iptable_rule_is_available(root_accept) and check_if_iptable_rule_is_available(non_root_accept) and check_if_iptable_rule_is_available(non_root_drop)
        return found

    log.info("Verifying all ip table rules are present in rule set")
    # The agent re-adds the rules within OS.EnableFirewallPeriod, so wait that time plus some buffer
    found: bool = retry_if_false(check_all_iptables, attempts=2, delay=FIREWALL_PERIOD + 15)
    if not found:
        # log the current rules to aid debugging, then fail (print_current_iptable_rules logs the rules and returns None)
        print_current_iptable_rules()
        fail("IP table rules missing in rule set; see the current iptable rules in the log above.")

    log.info("Verified all ip table rules are present in rule set")


def firewalld_service_running():
    """
    Checks if the firewalld service is running on the VM
    Eg: firewall-cmd --state
        > running
    """
    cmd = ["firewall-cmd", "--state"]
    exit_code, output = execute_cmd_return_err_code(cmd)
    if exit_code != 0:
        log.warning("Firewall service not running: {0}".format(output))
    return exit_code == 0 and output.rstrip() == "running"


def get_all_firewalld_rule_commands(command):
    return get_root_accept_firewalld_rule(command), get_non_root_accept_tcp_firewalld_rule(command), get_non_root_drop_firewalld_rule(command)


def check_if_firewalld_rule_is_available(command):
    """
    Checks if the given firewalld rule is present in the rule set; --query-passthrough returns exit code 0 if the rule is available.
    """
    exit_code, _ = execute_cmd_return_err_code(command)
    return exit_code == 0


def verify_all_firewalld_rules_exist():
    """
    Verifies that all the firewalld rules are present in the rule set
    """
    def check_all_firewalld_rules():
        root_accept, non_root_accept, non_root_drop = get_all_firewalld_rule_commands(FirewalldRules.QUERY_PASSTHROUGH)
        found = check_if_firewalld_rule_is_available(root_accept) and check_if_firewalld_rule_is_available(non_root_accept) and check_if_firewalld_rule_is_available(non_root_drop)
        return found

    log.info("Verifying all firewalld rules are present in rule set")
    found = retry_if_false(check_all_firewalld_rules, attempts=2)
    if not found:
        # log the current rules to aid debugging, then fail (print_current_firewalld_rules logs the rules and returns None)
        print_current_firewalld_rules()
        fail("Firewalld rules missing in rule set; see the current firewalld rules in the log above.")

    log.info("Verified all firewalld rules are present in rule set")


def print_current_firewalld_rules():
    """
    Prints the current firewalld rules
    """
    try:
        cmd = ["firewall-cmd", "--permanent", "--direct", "--get-all-passthroughs"]
        exit_code, stdout = execute_cmd_return_err_code(cmd)
        if exit_code != 0:
            log.warning("Warning -- Failed to fetch firewalld rules with error code: %s and error: %s", exit_code, stdout)
        else:
            log.info("Current firewalld rules:")
            for line in stdout.splitlines():
                log.info(str(line))
    except Exception as error:
        raise Exception("Error -- Failed to fetch the firewalld rule set {0}".format(error))
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/logging.py000066400000000000000000000145071462617747000233430ustar00rootroot00000000000000# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# This module defines a single object, 'log', of type AgentLogger, which the end-to-end tests and libraries use
# for logging.
#
import contextlib
import sys

from logging import FileHandler, Formatter, Handler, Logger, StreamHandler, INFO
from pathlib import Path
from threading import current_thread
from typing import Dict, Callable


class _AgentLoggingHandler(Handler):
    """
    AgentLoggingHandler is a helper class for AgentLogger.

    This handler simply redirects logging to other handlers. It maintains a set of FileHandlers associated to specific
    threads. When a thread emits a log record, the AgentLoggingHandler passes through the call to the FileHandler
    associated with that thread, or to a StreamHandler that outputs to stdout if there is not a FileHandler for that
    thread.

    Threads can set a FileHandler for themselves using _AgentLoggingHandler.set_current_thread_log() and remove that
    handler using _AgentLoggingHandler.close_current_thread_log().

    The _AgentLoggingHandler simply passes through calls to setLevel, setFormatter, flush, and close to the handlers
    it maintains.

    AgentLoggingHandler is meant to be primarily used in multithreaded scenarios and is thread-safe.
""" def __init__(self): super().__init__() self.formatter: Formatter = Formatter('%(asctime)s.%(msecs)03d [%(levelname)s] %(message)s', datefmt="%Y-%m-%dT%H:%M:%SZ") self.default_handler = StreamHandler(sys.stdout) self.default_handler.setFormatter(self.formatter) self.per_thread_handlers: Dict[int, FileHandler] = {} def set_thread_log(self, thread_ident: int, log_file: Path) -> None: self.close_current_thread_log() handler: FileHandler = FileHandler(str(log_file)) handler.setFormatter(self.formatter) self.per_thread_handlers[thread_ident] = handler def get_thread_log(self, thread_ident: int) -> Path: handler = self.per_thread_handlers.get(thread_ident) if handler is None: return None return Path(handler.baseFilename) def close_thread_log(self, thread_ident: int) -> None: handler = self.per_thread_handlers.pop(thread_ident, None) if handler is not None: handler.close() def set_current_thread_log(self, log_file: Path) -> None: self.set_thread_log(current_thread().ident, log_file) def get_current_thread_log(self) -> Path: return self.get_thread_log(current_thread().ident) def close_current_thread_log(self) -> None: self.close_thread_log(current_thread().ident) def emit(self, record) -> None: handler = self.per_thread_handlers.get(current_thread().ident) if handler is None: handler = self.default_handler handler.emit(record) def setLevel(self, level) -> None: self._for_each_handler(lambda h: h.setLevel(level)) def setFormatter(self, fmt) -> None: self._for_each_handler(lambda h: h.setFormatter(fmt)) def flush(self) -> None: self._for_each_handler(lambda h: h.flush()) def close(self) -> None: self._for_each_handler(lambda h: h.close()) def _for_each_handler(self, op: Callable[[Handler], None]) -> None: op(self.default_handler) # copy of the values into a new list in case the dictionary changes while we are iterating for handler in list(self.per_thread_handlers.values()): op(handler) class AgentLogger(Logger): """ AgentLogger is a Logger customized for agent test scenarios. When tests are executed from the command line (for example, during development) the AgentLogger can be used with its default configuration, which simply outputs to stdout. When tests are executed from the test framework, typically there are multiple test suites executed concurrently on different threads, and each test suite must have its own log file; in that case, each thread can call AgentLogger.set_current_thread_log() to send all the logging from that thread to a particular file. 
""" def __init__(self): super().__init__(name="waagent", level=INFO) self._handler: _AgentLoggingHandler = _AgentLoggingHandler() self.addHandler(self._handler) def set_thread_log(self, thread_ident: int, log_file: Path) -> None: self._handler.set_thread_log(thread_ident, log_file) def get_thread_log_file(self, thread_ident: int) -> Path: """ Returns the Path of the log file for the current thread, or None if a log has not been set """ return self._handler.get_thread_log(thread_ident) def close_thread_log(self, thread_ident: int) -> None: self._handler.close_thread_log(thread_ident) def set_current_thread_log(self, log_file: Path) -> None: self._handler.set_current_thread_log(log_file) def get_current_thread_log(self) -> Path: return self._handler.get_current_thread_log() def close_current_thread_log(self) -> None: self._handler.close_current_thread_log() log: AgentLogger = AgentLogger() @contextlib.contextmanager def set_current_thread_log(log_file: Path): """ Context Manager to set the log file for the current thread temporarily """ initial_value = log.get_current_thread_log() log.set_current_thread_log(log_file) try: yield finally: log.close_current_thread_log() if initial_value is not None: log.set_current_thread_log(initial_value) @contextlib.contextmanager def set_thread_name(name: str): """ Context Manager to change the name of the current thread temporarily """ initial_name = current_thread().name current_thread().name = name try: yield finally: current_thread().name = initial_name Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/network_security_rule.py000066400000000000000000000172171462617747000263660ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import json from typing import Any, Dict, List from tests_e2e.tests.lib.update_arm_template import UpdateArmTemplate class NetworkSecurityRule: """ Provides methods to add network security rules to the given ARM template. The security rules are added under _NETWORK_SECURITY_GROUP, which is also added to the template. 
""" def __init__(self, template: Dict[str, Any], is_lisa_template: bool): self._template = template self._is_lisa_template = is_lisa_template _NETWORK_SECURITY_GROUP: str = "waagent-nsg" def add_allow_ssh_rule(self, ip_address: str) -> None: self.add_security_rule( json.loads(f"""{{ "name": "waagent-ssh", "properties": {{ "description": "Allows inbound SSH connections from the orchestrator machine.", "protocol": "Tcp", "sourcePortRange": "*", "destinationPortRange": "22", "sourceAddressPrefix": "{ip_address}", "destinationAddressPrefix": "*", "access": "Allow", "priority": 100, "direction": "Inbound" }} }}""")) def add_security_rule(self, security_rule: Dict[str, Any]) -> None: self._get_network_security_group()["properties"]["securityRules"].append(security_rule) def _get_network_security_group(self) -> Dict[str, Any]: resources: List[Dict[str, Any]] = self._template["resources"] # # If the NSG already exists, just return it # try: return UpdateArmTemplate.get_resource_by_name(resources, self._NETWORK_SECURITY_GROUP, "Microsoft.Network/networkSecurityGroups") except KeyError: pass # # Otherwise, create it and append it to the list of resources # network_security_group = json.loads(f"""{{ "type": "Microsoft.Network/networkSecurityGroups", "name": "{self._NETWORK_SECURITY_GROUP}", "location": "[resourceGroup().location]", "apiVersion": "2020-05-01", "properties": {{ "securityRules": [] }} }}""") resources.append(network_security_group) # # Add a dependency on the NSG to the virtual network # network_resource = UpdateArmTemplate.get_resource(resources, "Microsoft.Network/virtualNetworks") network_resource_dependencies = network_resource.get("dependsOn") nsg_reference = f"[resourceId('Microsoft.Network/networkSecurityGroups', '{self._NETWORK_SECURITY_GROUP}')]" if network_resource_dependencies is None: network_resource["dependsOn"] = [nsg_reference] else: network_resource_dependencies.append(nsg_reference) # # Add a reference to the NSG to the properties of the subnets. 
# nsg_reference = json.loads(f"""{{ "networkSecurityGroup": {{ "id": "[resourceId('Microsoft.Network/networkSecurityGroups', '{self._NETWORK_SECURITY_GROUP}')]" }} }}""") if self._is_lisa_template: # The subnets are a copy property of the virtual network in LISA's ARM template: # # { # "condition": "[empty(parameters('virtual_network_resource_group'))]", # "apiVersion": "2020-05-01", # "type": "Microsoft.Network/virtualNetworks", # "name": "[parameters('virtual_network_name')]", # "location": "[parameters('location')]", # "properties": { # "addressSpace": { # "addressPrefixes": [ # "10.0.0.0/16" # ] # }, # "copy": [ # { # "name": "subnets", # "count": "[parameters('subnet_count')]", # "input": { # "name": "[concat(parameters('subnet_prefix'), copyIndex('subnets'))]", # "properties": { # "addressPrefix": "[concat('10.0.', copyIndex('subnets'), '.0/24')]" # } # } # } # ] # } # } # subnets_copy = network_resource["properties"].get("copy") if network_resource.get("properties") is not None else None if subnets_copy is None: raise Exception("Cannot find the copy property of the virtual network in the ARM template") subnets = [i for i in subnets_copy if "name" in i and i["name"] == 'subnets'] if len(subnets) == 0: raise Exception("Cannot find the subnets of the virtual network in the ARM template") subnets_input = subnets[0].get("input") if subnets_input is None: raise Exception("Cannot find the input property of the subnets in the ARM template") subnets_properties = subnets_input.get("properties") if subnets_properties is None: subnets_input["properties"] = nsg_reference else: subnets_properties.update(nsg_reference) else: # # The subnets are simple property of the virtual network in template for scale sets: # { # "apiVersion": "2023-06-01", # "type": "Microsoft.Network/virtualNetworks", # "name": "[variables('virtualNetworkName')]", # "location": "[resourceGroup().location]", # "properties": { # "addressSpace": { # "addressPrefixes": [ # "[variables('vnetAddressPrefix')]" # ] # }, # "subnets": [ # { # "name": "[variables('subnetName')]", # "properties": { # "addressPrefix": "[variables('subnetPrefix')]", # } # } # ] # } # } subnets = network_resource["properties"].get("subnets") if network_resource.get("properties") is not None else None if subnets is None: raise Exception("Cannot find the subnets property of the virtual network in the ARM template") subnets_properties = subnets[0].get("properties") if subnets_properties is None: subnets["properties"] = nsg_reference else: subnets_properties.update(nsg_reference) return network_security_group Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/remote_test.py000066400000000000000000000026161462617747000242460ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
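#
# A minimal sketch of how a remote test script would drive run_remote_test (defined below); 'check_agent_log_exists'
# is a hypothetical test function and the sketch assumes 'os' is imported:
#
#     def check_agent_log_exists() -> None:
#         if not os.path.exists("/var/log/waagent.log"):
#             raise AssertionError("the agent log is missing")
#
#     run_remote_test(check_agent_log_exists)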
# import sys from typing import Callable from tests_e2e.tests.lib.logging import log SUCCESS_EXIT_CODE = 0 FAIL_EXIT_CODE = 100 ERROR_EXIT_CODE = 200 def run_remote_test(test_method: Callable[[], None]) -> None: """ Helper function to run a remote test; implements coding conventions for remote tests, e.g. error message goes to stderr, test log goes to stdout, etc. """ try: test_method() log.info("*** PASSED") except AssertionError as e: print(f"{e}", file=sys.stderr) log.error("%s", e) sys.exit(FAIL_EXIT_CODE) except Exception as e: print(f"UNEXPECTED ERROR: {e}", file=sys.stderr) log.exception("*** UNEXPECTED ERROR") sys.exit(ERROR_EXIT_CODE) sys.exit(SUCCESS_EXIT_CODE) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/resource_group_client.py000066400000000000000000000057331462617747000263200ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This module includes facilities to create a resource group and deploy an arm template to it # from typing import Dict, Any from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.resource import ResourceManagementClient from azure.mgmt.resource.resources.models import DeploymentProperties, DeploymentMode from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.logging import log class ResourceGroupClient(AzureSdkClient): """ Provides operations on resource groups (create, template deployment, etc). 
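
    A minimal usage sketch (the subscription ID, group name, and 'template' below are made up):

        resource_group = ResourceGroupClient(cloud="AzureCloud", subscription="00000000-0000-0000-0000-000000000000",
                                             name="waagent-tests-rg", location="westus2")
        resource_group.create()
        resource_group.deploy_template(template)
        resource_group.delete()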
""" def __init__(self, cloud: str, subscription: str, name: str, location: str = ""): super().__init__() self.cloud: str = cloud self.location = location self.subscription: str = subscription self.name: str = name self._compute_client = AzureSdkClient.create_client(ComputeManagementClient, cloud, subscription) self._resource_client = AzureSdkClient.create_client(ResourceManagementClient, cloud, subscription) def create(self) -> None: """ Creates a resource group """ log.info("Creating resource group %s", self) self._resource_client.resource_groups.create_or_update(self.name, {"location": self.location}) def deploy_template(self, template: Dict[str, Any], parameters: Dict[str, Any] = None): """ Deploys an ARM template to the resource group """ if parameters: properties = DeploymentProperties(template=template, parameters=parameters, mode=DeploymentMode.incremental) else: properties = DeploymentProperties(template=template, mode=DeploymentMode.incremental) log.info("Deploying template to resource group %s...", self) self._execute_async_operation( operation=lambda: self._resource_client.deployments.begin_create_or_update(self.name, 'TestDeployment', {'properties': properties}), operation_name=f"Deploy template to resource group {self}", timeout=AzureSdkClient._DEFAULT_TIMEOUT) def delete(self) -> None: """ Deletes the resource group """ log.info("Deleting resource group %s (no wait)", self) self._resource_client.resource_groups.begin_delete(self.name) # Do not wait for the deletion to complete def __str__(self): return f"{self.name}" Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/retry.py000066400000000000000000000066161462617747000230650ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import time from typing import Callable, Any from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.shell import CommandError def execute_with_retry(operation: Callable[[], Any]) -> Any: """ Some Azure errors (e.g. throttling) are retryable; this method attempts the given operation retrying a few times (after a short delay) if the error includes the string "RetryableError" """ attempts = 3 while attempts > 0: attempts -= 1 try: return operation() except Exception as e: # TODO: Do we need to retry on msrestazure.azure_exceptions.CloudError? if "RetryableError" not in str(e) or attempts == 0: raise log.warning("The operation failed with a RetryableError, retrying in 30 secs. 
Error: %s", e) time.sleep(30) def retry_ssh_run(operation: Callable[[], Any], attempts: int, attempt_delay: int) -> Any: """ This method attempts to retry ssh run command a few times if operation failed with connection time out """ i = 0 while True: i += 1 try: return operation() except CommandError as e: retryable = ((e.exit_code == 255 and ("Connection timed out" in e.stderr or "Connection refused" in e.stderr)) or "Unprivileged users are not permitted to log in yet" in e.stderr) if not retryable or i >= attempts: raise log.warning("The SSH operation failed, retrying in %s secs [Attempt %s/%s].\n%s", attempt_delay, i, attempts, e) time.sleep(attempt_delay) def retry_if_false(operation: Callable[[], bool], attempts: int = 5, delay: int = 30) -> bool: """ This method attempts the given operation retrying a few times (after a short delay) Note: Method used for operations which are return True or False """ success: bool = False while attempts > 0 and not success: attempts -= 1 try: success = operation() except Exception as e: log.warning("Error in operation: %s", e) if attempts == 0: raise if not success and attempts != 0: log.info("Current operation failed, retrying in %s secs.", delay) time.sleep(delay) return success def retry(operation: Callable[[], Any], attempts: int = 5, delay: int = 30) -> Any: """ This method attempts the given operation retrying a few times on exceptions. Returns the value returned by the operation. """ while attempts > 0: attempts -= 1 try: return operation() except Exception as e: if attempts == 0: raise log.warning("Error in operation, retrying in %s secs: %s", delay, e) time.sleep(delay) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/shell.py000066400000000000000000000037661462617747000230320ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from subprocess import Popen, PIPE from typing import Any class CommandError(Exception): """ Exception raised by run_command when the command returns an error """ def __init__(self, command: Any, exit_code: int, stdout: str, stderr: str): super().__init__(f"'{command}' failed (exit code: {exit_code}): {stderr}") self.command: Any = command self.exit_code: int = exit_code self.stdout: str = stdout self.stderr: str = stderr def __str__(self): return f"'{self.command}' failed (exit code: {self.exit_code})\nstdout:\n{self.stdout}\nstderr:\n{self.stderr}\n" def run_command(command: Any, shell=False) -> str: """ This function is a thin wrapper around Popen/communicate in the subprocess module. It executes the given command and returns its stdout. If the command returns a non-zero exit code, the function raises a CommandError. Similarly to Popen, the 'command' can be a string or a list of strings, and 'shell' indicates whether to execute the command through the shell. NOTE: The command's stdout and stderr are read as text streams. 
""" process = Popen(command, stdout=PIPE, stderr=PIPE, shell=shell, text=True) stdout, stderr = process.communicate() if process.returncode != 0: raise CommandError(command, process.returncode, stdout, stderr) return stdout Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/ssh_client.py000066400000000000000000000076551462617747000240570ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import re from pathlib import Path from tests_e2e.tests.lib import shell from tests_e2e.tests.lib.retry import retry_ssh_run ATTEMPTS: int = 3 ATTEMPT_DELAY: int = 30 class SshClient(object): def __init__(self, ip_address: str, username: str, identity_file: Path, port: int = 22): self.ip_address: str = ip_address self.username: str = username self.identity_file: Path = identity_file self.port: int = port def run_command(self, command: str, use_sudo: bool = False, attempts: int = ATTEMPTS, attempt_delay: int = ATTEMPT_DELAY) -> str: """ Executes the given command over SSH and returns its stdout. If the command returns a non-zero exit code, the function raises a CommandError. """ if re.match(r"^\s*sudo\s*", command): raise Exception("Do not include 'sudo' in the 'command' argument, use the 'use_sudo' parameter instead") destination = f"ssh://{self.username}@{self.ip_address}:{self.port}" # Note that we add ~/bin to the remote PATH, since Python (Pypy) and other test tools are installed there. 
# Note, too, that when using sudo we need to carry over the value of PATH to the sudo session sudo = "sudo env PATH=$PATH PYTHONPATH=$PYTHONPATH" if use_sudo else '' command = [ "ssh", "-o", "StrictHostKeyChecking=no", "-i", self.identity_file, destination, f"if [[ -e ~/bin/set-agent-env ]]; then source ~/bin/set-agent-env; fi; {sudo} {command}" ] return retry_ssh_run(lambda: shell.run_command(command), attempts, attempt_delay) @staticmethod def generate_ssh_key(private_key_file: Path): """ Generates an SSH key on the given Path """ shell.run_command( ["ssh-keygen", "-m", "PEM", "-t", "rsa", "-b", "4096", "-q", "-N", "", "-f", str(private_key_file)]) def get_architecture(self): return self.run_command("uname -m").rstrip() def copy_to_node(self, local_path: Path, remote_path: Path, recursive: bool = False, attempts: int = ATTEMPTS, attempt_delay: int = ATTEMPT_DELAY) -> None: """ File copy to a remote node """ self._copy(local_path, remote_path, remote_source=False, remote_target=True, recursive=recursive, attempts=attempts, attempt_delay=attempt_delay) def copy_from_node(self, remote_path: Path, local_path: Path, recursive: bool = False, attempts: int = ATTEMPTS, attempt_delay: int = ATTEMPT_DELAY) -> None: """ File copy from a remote node """ self._copy(remote_path, local_path, remote_source=True, remote_target=False, recursive=recursive, attempts=attempts, attempt_delay=attempt_delay) def _copy(self, source: Path, target: Path, remote_source: bool, remote_target: bool, recursive: bool, attempts: int, attempt_delay: int) -> None: if remote_source: source = f"{self.username}@{self.ip_address}:{source}" if remote_target: target = f"{self.username}@{self.ip_address}:{target}" command = ["scp", "-o", "StrictHostKeyChecking=no", "-i", self.identity_file] if recursive: command.append("-r") command.extend([str(source), str(target)]) return retry_ssh_run(lambda: shell.run_command(command), attempts, attempt_delay) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/update_arm_template.py000066400000000000000000000140531462617747000257260ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from abc import ABC, abstractmethod from typing import Any, Dict, List class UpdateArmTemplate(ABC): @abstractmethod def update(self, template: Dict[str, Any], is_lisa_template: bool) -> None: """ Derived classes implement this method to customize the ARM template used to create the test VMs. The 'template' parameter is a dictionary created from the template's JSON document, as parsed by json.loads(). If the 'is_lisa_template' parameter is True, the template was created by LISA. The original JSON document is located at https://github.com/microsoft/lisa/blob/main/lisa/sut_orchestrator/azure/arm_template.json """ @staticmethod def get_resource(resources: List[Dict[str, Any]], type_name: str) -> Any: """ Returns the first resource of the specified type in the given 'resources' list. 
Raises KeyError if no resource of the specified type is found. """ for item in resources: if item["type"] == type_name: return item raise KeyError(f"Cannot find a resource of type {type_name} in the ARM template") @staticmethod def get_resource_by_name(resources: List[Dict[str, Any]], resource_name: str, type_name: str) -> Any: """ Returns the first resource of the specified type and name in the given 'resources' list. Raises KeyError if no resource of the specified type and name is found. """ for item in resources: if item["type"] == type_name and item["name"] == resource_name: return item raise KeyError(f"Cannot find a resource {resource_name} of type {type_name} in the ARM template") @staticmethod def get_lisa_function(template: Dict[str, Any], function_name: str) -> Dict[str, Any]: """ Looks for the given function name in the LISA namespace and returns its definition. Raises KeyError if the function is not found. """ # # NOTE: LISA's functions are in the "lisa" namespace, for example: # # "functions": [ # { # "namespace": "lisa", # "members": { # "getOSProfile": { # "parameters": [ # { # "name": "computername", # "type": "string" # }, # etc. # ], # "output": { # "type": "object", # "value": { # "computername": "[parameters('computername')]", # "adminUsername": "[parameters('admin_username')]", # "adminPassword": "[if(parameters('has_password'), parameters('admin_password'), json('null'))]", # "linuxConfiguration": "[if(parameters('has_linux_configuration'), parameters('linux_configuration'), json('null'))]" # } # } # }, # } # } # ] functions = template.get("functions") if functions is None: raise Exception('Cannot find "functions" in the LISA template.') for namespace in functions: name = namespace.get("namespace") if name is None: raise Exception(f'Cannot find "namespace" in the LISA template: {namespace}') if name == "lisa": lisa_functions = namespace.get('members') if lisa_functions is None: raise Exception(f'Cannot find the members of the lisa namespace in the LISA template: {namespace}') function_definition = lisa_functions.get(function_name) if function_definition is None: raise KeyError(f'Cannot find function {function_name} in the lisa namespace in the LISA template: {namespace}') return function_definition raise Exception(f'Cannot find the "lisa" namespace in the LISA template: {functions}') @staticmethod def get_function_output(function: Dict[str, Any]) -> Dict[str, Any]: """ Returns the "value" property of the output for the given function. Sample function: { "parameters": [ { "name": "computername", "type": "string" }, etc. 
], "output": { "type": "object", "value": { "computername": "[parameters('computername')]", "adminUsername": "[parameters('admin_username')]", "adminPassword": "[if(parameters('has_password'), parameters('admin_password'), json('null'))]", "linuxConfiguration": "[if(parameters('has_linux_configuration'), parameters('linux_configuration'), json('null'))]" } } } """ output = function.get('output') if output is None: raise Exception(f'Cannot find the "output" of the given function: {function}') value = output.get('value') if value is None: raise Exception(f"Cannot find the output's value of the given function: {function}") return value Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/virtual_machine_client.py000066400000000000000000000216511462617747000264240ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This module includes facilities to execute operations on virtual machines (list extensions, restart, etc). # import datetime import json import time from typing import Any, Dict, List from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.compute.models import VirtualMachineExtension, VirtualMachineInstanceView, VirtualMachine from azure.mgmt.network import NetworkManagementClient from azure.mgmt.network.models import NetworkInterface, PublicIPAddress from azure.mgmt.resource import ResourceManagementClient from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import execute_with_retry from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.ssh_client import SshClient class VirtualMachineClient(AzureSdkClient): """ Provides operations on virtual machines (get instance view, update, restart, etc). 
""" def __init__(self, cloud: str, location: str, subscription: str, resource_group: str, name: str): super().__init__() self.cloud: str = cloud self.location = location self.subscription: str = subscription self.resource_group: str = resource_group self.name: str = name self._compute_client = AzureSdkClient.create_client(ComputeManagementClient, cloud, subscription) self._resource_client = AzureSdkClient.create_client(ResourceManagementClient, cloud, subscription) self._network_client = AzureSdkClient.create_client(NetworkManagementClient, cloud, subscription) def get_ip_address(self) -> str: """ Retrieves the public IP address of the virtual machine """ vm_model = self.get_model() nic: NetworkInterface = self._network_client.network_interfaces.get( resource_group_name=self.resource_group, network_interface_name=vm_model.network_profile.network_interfaces[0].id.split('/')[-1]) # the name of the interface is the last component of the id public_ip: PublicIPAddress = self._network_client.public_ip_addresses.get( resource_group_name=self.resource_group, public_ip_address_name=nic.ip_configurations[0].public_ip_address.id.split('/')[-1]) # the name of the ip address is the last component of the id return public_ip.ip_address def get_private_ip_address(self) -> str: """ Retrieves the private IP address of the virtual machine """ vm_model = self.get_model() nic: NetworkInterface = self._network_client.network_interfaces.get( resource_group_name=self.resource_group, network_interface_name=vm_model.network_profile.network_interfaces[0].id.split('/')[ -1]) # the name of the interface is the last component of the id private_ip = nic.ip_configurations[0].private_ip_address return private_ip def get_model(self) -> VirtualMachine: """ Retrieves the model of the virtual machine. 
""" log.info("Retrieving VM model for %s", self) return execute_with_retry( lambda: self._compute_client.virtual_machines.get( resource_group_name=self.resource_group, vm_name=self.name)) def get_instance_view(self) -> VirtualMachineInstanceView: """ Retrieves the instance view of the virtual machine """ log.info("Retrieving instance view for %s", self) return execute_with_retry(lambda: self._compute_client.virtual_machines.get( resource_group_name=self.resource_group, vm_name=self.name, expand="instanceView" ).instance_view) def get_extensions(self) -> List[VirtualMachineExtension]: """ Retrieves the extensions installed on the virtual machine """ log.info("Retrieving extensions for %s", self) return execute_with_retry( lambda: self._compute_client.virtual_machine_extensions.list( resource_group_name=self.resource_group, vm_name=self.name)) def update(self, properties: Dict[str, Any], timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Updates a set of properties on the virtual machine """ # location is a required by begin_create_or_update, always add it properties_copy = properties.copy() properties_copy["location"] = self.location log.info("Updating %s with properties: %s", self, properties_copy) self._execute_async_operation( lambda: self._compute_client.virtual_machines.begin_create_or_update( self.resource_group, self.name, properties_copy), operation_name=f"Update {self}", timeout=timeout) def reapply(self, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Reapplies the goal state on the virtual machine """ self._execute_async_operation( lambda: self._compute_client.virtual_machines.begin_reapply(self.resource_group, self.name), operation_name=f"Reapply {self}", timeout=timeout) def restart( self, wait_for_boot, ssh_client: SshClient = None, boot_timeout: datetime.timedelta = datetime.timedelta(minutes=5), timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Restarts (reboots) the virtual machine. NOTES: * If wait_for_boot is True, an SshClient must be provided in order to verify that the restart was successful. * 'timeout' is the timeout for the restart operation itself, while 'boot_timeout' is the timeout for waiting the boot to complete. """ if wait_for_boot and ssh_client is None: raise ValueError("An SshClient must be provided if wait_for_boot is True") before_restart = datetime.datetime.utcnow() self._execute_async_operation( lambda: self._compute_client.virtual_machines.begin_restart( resource_group_name=self.resource_group, vm_name=self.name), operation_name=f"Restart {self}", timeout=timeout) if not wait_for_boot: return start = datetime.datetime.utcnow() while datetime.datetime.utcnow() < start + boot_timeout: log.info("Waiting for VM %s to boot", self) time.sleep(15) # Note that we always sleep at least 1 time, to give the reboot time to start instance_view = self.get_instance_view() power_state = [s.code for s in instance_view.statuses if "PowerState" in s.code] if len(power_state) != 1: raise Exception(f"Could not find PowerState in the instance view statuses:\n{json.dumps(instance_view.statuses)}") log.info("VM's Power State: %s", power_state[0]) if power_state[0] == "PowerState/running": # We may get an instance view captured before the reboot actually happened; verify # that the reboot actually happened by checking the system's uptime. 
log.info("Verifying VM's uptime to ensure the reboot has completed...") try: uptime = ssh_client.run_command("cat /proc/uptime | sed 's/ .*//'", attempts=1).rstrip() # The uptime is the first field in the file log.info("Uptime: %s", uptime) boot_time = datetime.datetime.utcnow() - datetime.timedelta(seconds=float(uptime)) if boot_time > before_restart: log.info("VM %s completed boot and is running. Boot time: %s", self, boot_time) return log.info("The VM has not rebooted yet. Restart time: %s. Boot time: %s", before_restart, boot_time) except CommandError as e: if (e.exit_code == 255 and ("Connection refused" in str(e) or "Connection timed out" in str(e))) or "Unprivileged users are not permitted to log in yet" in str(e): log.info("VM %s is not yet accepting SSH connections", self) else: raise raise Exception(f"VM {self} did not boot after {boot_timeout}") def __str__(self): return f"{self.resource_group}:{self.name}" Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/virtual_machine_extension_client.py000066400000000000000000000166721462617747000305270ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This module includes facilities to execute VM extension operations (enable, remove, etc). # import json import uuid from assertpy import assert_that, soft_assertions from typing import Any, Callable, Dict from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.compute.models import VirtualMachineExtension, VirtualMachineExtensionInstanceView from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIdentifier from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import execute_with_retry from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient class VirtualMachineExtensionClient(AzureSdkClient): """ Client for operations virtual machine extensions. 
""" def __init__(self, vm: VirtualMachineClient, extension: VmExtensionIdentifier, resource_name: str = None): super().__init__() self._vm: VirtualMachineClient = vm self._identifier = extension self._resource_name = resource_name or extension.type self._compute_client: ComputeManagementClient = AzureSdkClient.create_client(ComputeManagementClient, self._vm.cloud, self._vm.subscription) def get_instance_view(self) -> VirtualMachineExtensionInstanceView: """ Retrieves the instance view of the extension """ log.info("Retrieving instance view for %s...", self._identifier) return execute_with_retry(lambda: self._compute_client.virtual_machine_extensions.get( resource_group_name=self._vm.resource_group, vm_name=self._vm.name, vm_extension_name=self._resource_name, expand="instanceView" ).instance_view) def enable( self, settings: Dict[str, Any] = None, protected_settings: Dict[str, Any] = None, auto_upgrade_minor_version: bool = True, force_update: bool = False, force_update_tag: str = None, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT ) -> None: """ Performs an enable operation on the extension. NOTE: 'force_update' is not a parameter of the actual ARM API. It is provided here for convenience: If set to True, the 'force_update_tag' can be left unspecified and this method will generate a random tag. """ if force_update_tag is not None and not force_update: raise ValueError("If force_update_tag is provided then force_update must be set to true") if force_update and force_update_tag is None: force_update_tag = str(uuid.uuid4()) extension_parameters = VirtualMachineExtension( publisher=self._identifier.publisher, location=self._vm.location, type_properties_type=self._identifier.type, type_handler_version=self._identifier.version, auto_upgrade_minor_version=auto_upgrade_minor_version, settings=settings, protected_settings=protected_settings, force_update_tag=force_update_tag) # Hide the protected settings from logging if protected_settings is not None: extension_parameters.protected_settings = "*****[REDACTED]*****" log.info("Enabling %s", self._identifier) log.info("%s", extension_parameters) # Now set the actual protected settings before invoking the extension extension_parameters.protected_settings = protected_settings result: VirtualMachineExtension = self._execute_async_operation( lambda: self._compute_client.virtual_machine_extensions.begin_create_or_update( self._vm.resource_group, self._vm.name, self._resource_name, extension_parameters), operation_name=f"Enable {self._identifier}", timeout=timeout) log.info("Provisioning state: %s", result.provisioning_state) def delete(self, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Performs a delete operation on the extension """ self._execute_async_operation( lambda: self._compute_client.virtual_machine_extensions.begin_delete( self._vm.resource_group, self._vm.name, self._resource_name), operation_name=f"Delete {self._identifier}", timeout=timeout) def assert_instance_view( self, expected_status_code: str = "ProvisioningState/succeeded", expected_version: str = None, expected_message: str = None, assert_function: Callable[[VirtualMachineExtensionInstanceView], None] = None ) -> None: """ Asserts that the extension's instance view matches the given expected values. If 'expected_version' and/or 'expected_message' are omitted, they are not validated. If 'assert_function' is provided, it is invoked passing as parameter the instance view. This function can be used to perform additional validations. 
""" # Sometimes we get incomplete instance view with only 'name' property which causes issues during assertions. # Retry attempt to get instance view if only 'name' property is populated. attempt = 1 instance_view = self.get_instance_view() while instance_view.name is not None and instance_view.type_handler_version is None and instance_view.statuses is None and attempt < 3: log.info("Instance view is incomplete: %s\nRetrying attempt to get instance view...", instance_view.serialize()) instance_view = self.get_instance_view() attempt += 1 log.info("Instance view:\n%s", json.dumps(instance_view.serialize(), indent=4)) with soft_assertions(): if expected_version is not None: # Compare only the major and minor versions (i.e. the first 2 items in the result of split()) installed_version = instance_view.type_handler_version assert_that(expected_version.split(".")[0:2]).described_as("Unexpected extension version").is_equal_to(installed_version.split(".")[0:2]) assert_that(instance_view.statuses).described_as(f"Expected 1 status, got: {instance_view.statuses}").is_length(1) status = instance_view.statuses[0] if expected_message is not None: assert_that(expected_message in status.message).described_as(f"{expected_message} should be in the InstanceView message ({status.message})").is_true() assert_that(status.code).described_as("InstanceView status code").is_equal_to(expected_status_code) if assert_function is not None: assert_function(instance_view) log.info("The instance view matches the expected values") def __str__(self): return f"{self._identifier}" Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/virtual_machine_scale_set_client.py000066400000000000000000000113111462617747000304360ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This module includes facilities to execute operations on virtual machines scale sets (list instances, delete, etc). # import re from typing import List from azure.mgmt.compute import ComputeManagementClient from azure.mgmt.compute.models import VirtualMachineScaleSetVM, VirtualMachineScaleSetInstanceView from azure.mgmt.network import NetworkManagementClient from tests_e2e.tests.lib.azure_sdk_client import AzureSdkClient from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import execute_with_retry class VmssInstanceIpAddress(object): """ IP address of a virtual machine scale set instance """ def __init__(self, instance_name: str, ip_address: str): self.instance_name: str = instance_name self.ip_address: str = ip_address def __str__(self): return f"{self.instance_name}:{self.ip_address}" class VirtualMachineScaleSetClient(AzureSdkClient): """ Provides operations on virtual machine scale sets. 
""" def __init__(self, cloud: str, location: str, subscription: str, resource_group: str, name: str): super().__init__() self.cloud: str = cloud self.location = location self.subscription: str = subscription self.resource_group: str = resource_group self.name: str = name self._compute_client = AzureSdkClient.create_client(ComputeManagementClient, cloud, subscription) self._network_client = AzureSdkClient.create_client(NetworkManagementClient, cloud, subscription) def list_vms(self) -> List[VirtualMachineScaleSetVM]: """ Returns the VM instances of the virtual machine scale set """ log.info("Retrieving instances of scale set %s", self) return list(self._compute_client.virtual_machine_scale_set_vms.list(resource_group_name=self.resource_group, virtual_machine_scale_set_name=self.name)) def get_instances_ip_address(self) -> List[VmssInstanceIpAddress]: """ Returns a list containing the IP addresses of scale set instances """ log.info("Retrieving IP addresses of scale set %s", self) ip_addresses = self._network_client.public_ip_addresses.list_virtual_machine_scale_set_public_ip_addresses(resource_group_name=self.resource_group, virtual_machine_scale_set_name=self.name) ip_addresses = list(ip_addresses) def parse_instance(resource_id: str) -> str: # the resource_id looks like /subscriptions/{subs}}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmss}/virtualMachines/{instance}/networkInterfaces/{netiace}/ipConfigurations/ipconfig1/publicIPAddresses/{name} match = re.search(r'virtualMachines/(?P[0-9])/networkInterfaces', resource_id) if match is None: raise Exception(f"Unable to parse instance from IP address ID:{resource_id}") return match.group('instance') return [VmssInstanceIpAddress(instance_name=f"{self.name}_{parse_instance(a.id)}", ip_address=a.ip_address) for a in ip_addresses if a.ip_address is not None] def delete_extension(self, extension: str, timeout: int = AzureSdkClient._DEFAULT_TIMEOUT) -> None: """ Deletes the given operation """ log.info("Deleting extension %s from %s", extension, self) self._execute_async_operation( operation=lambda: self._compute_client.virtual_machine_scale_set_extensions.begin_delete(resource_group_name=self.resource_group, vm_scale_set_name=self.name, vmss_extension_name=extension), operation_name=f"Delete {extension} from {self}", timeout=timeout) def get_instance_view(self) -> VirtualMachineScaleSetInstanceView: """ Retrieves the instance view of the virtual machine """ log.info("Retrieving instance view for %s", self) return execute_with_retry(lambda: self._compute_client.virtual_machine_scale_sets.get_instance_view( resource_group_name=self.resource_group, vm_scale_set_name=self.name )) def __str__(self): return f"{self.resource_group}:{self.name}" Azure-WALinuxAgent-2b21de5/tests_e2e/tests/lib/vm_extension_identifier.py000066400000000000000000000065401462617747000266340ustar00rootroot00000000000000# Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# from typing import Dict, List class VmExtensionIdentifier(object): """ Represents the information that identifies an extension to the ARM APIs publisher - e.g. Microsoft.Azure.Extensions type - e.g. CustomScript version - e.g. 2.1, 2.* name - arbitrary name for the extension ARM resource """ def __init__(self, publisher: str, ext_type: str, version: str): self.publisher: str = publisher self.type: str = ext_type self.version: str = version unsupported_distros: Dict[str, List[str]] = { "Microsoft.OSTCExtensions.VMAccessForLinux": ["flatcar"], "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent": ["flatcar", "mariner_1", "ubuntu_2404"] } def supports_distro(self, system_info: str) -> bool: """ Returns true if an unsupported distro name for the extension is found in the provided system info """ ext_unsupported_distros = VmExtensionIdentifier.unsupported_distros.get(self.publisher + "." + self.type) if ext_unsupported_distros is not None and any(distro in system_info for distro in ext_unsupported_distros): return False return True def __str__(self): return f"{self.publisher}.{self.type}" class VmExtensionIds(object): """ A set of extensions used by the tests, listed here for convenience (easy to reference them by name). Only the major version is specified, and the minor version is set to 0 (set autoUpgradeMinorVersion to True in the call to enable to use the latest version) """ CustomScript: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.Extensions', ext_type='CustomScript', version="2.0") # Older run command extension, still used by the Portal as of Dec 2022 RunCommand: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.CPlat.Core', ext_type='RunCommandLinux', version="1.0") # New run command extension, with support for multi-config RunCommandHandler: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.CPlat.Core', ext_type='RunCommandHandlerLinux', version="1.0") VmAccess: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.OSTCExtensions', ext_type='VMAccessForLinux', version="1.0") GuestAgentDcrTestExtension: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.TestExtensions.Edp', ext_type='GuestAgentDcrTest', version='1.0') AzureMonitorLinuxAgent: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.Monitor', ext_type='AzureMonitorLinuxAgent', version="1.5") GATestExtension: VmExtensionIdentifier = VmExtensionIdentifier(publisher='Microsoft.Azure.Extensions.Edp', ext_type='GATestExtGo', version="1.2") Azure-WALinuxAgent-2b21de5/tests_e2e/tests/multi_config_ext/000077500000000000000000000000001462617747000241265ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/multi_config_ext/multi_config_ext.py000066400000000000000000000200011462617747000300300ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # # This test adds multiple instances of RCv2 and verifies that the extensions are processed and deleted as expected. # import uuid from typing import Dict, Callable, Any from assertpy import fail from azure.mgmt.compute.models import VirtualMachineInstanceView from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.virtual_machine_client import VirtualMachineClient from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient class MultiConfigExt(AgentVmTest): class TestCase: def __init__(self, extension: VirtualMachineExtensionClient, get_settings: Callable[[str], Dict[str, str]]): self.extension = extension self.get_settings = get_settings self.test_guid: str = str(uuid.uuid4()) def enable_and_assert_test_cases(self, cases_to_enable: Dict[str, TestCase], cases_to_assert: Dict[str, TestCase], delete_extensions: bool = False): for resource_name, test_case in cases_to_enable.items(): log.info("") log.info("Adding {0} to the test VM. guid={1}".format(resource_name, test_case.test_guid)) test_case.extension.enable(settings=test_case.get_settings(test_case.test_guid)) test_case.extension.assert_instance_view() log.info("") log.info("Check that each extension has the expected guid in its status message...") for resource_name, test_case in cases_to_assert.items(): log.info("") log.info("Checking {0} has expected status message with {1}".format(resource_name, test_case.test_guid)) test_case.extension.assert_instance_view(expected_message=f"{test_case.test_guid}") # Delete each extension on the VM if delete_extensions: log.info("") log.info("Delete each extension...") self.delete_extensions(cases_to_assert) def delete_extensions(self, test_cases: Dict[str, TestCase]): for resource_name, test_case in test_cases.items(): log.info("") log.info("Deleting {0} from the test VM".format(resource_name)) test_case.extension.delete() log.info("") vm: VirtualMachineClient = VirtualMachineClient( cloud=self._context.vm.cloud, location=self._context.vm.location, subscription=self._context.vm.subscription, resource_group=self._context.vm.resource_group, name=self._context.vm.name) instance_view: VirtualMachineInstanceView = vm.get_instance_view() if instance_view.extensions is not None: for ext in instance_view.extensions: if ext.name in test_cases.keys(): fail("Extension was not deleted: \n{0}".format(ext)) log.info("") log.info("All extensions were successfully deleted.") def run(self): # Create 3 different RCv2 extensions and a single config extension (CSE) and assign each a unique guid. Each # extension will have settings that echo its assigned guid. We will use this guid to verify the extension # statuses later. 
mc_settings: Callable[[Any], Dict[str, Dict[str, str]]] = lambda s: { "source": {"script": f"echo {s}"}} sc_settings: Callable[[Any], Dict[str, str]] = lambda s: {'commandToExecute': f"echo {s}"} test_cases: Dict[str, MultiConfigExt.TestCase] = { "MCExt1": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt1"), mc_settings), "MCExt2": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt2"), mc_settings), "MCExt3": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt3"), mc_settings), "CSE": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript), sc_settings) } # Add each extension to the VM and validate the instance view has succeeded status with its assigned guid in the # status message log.info("") log.info("Add CSE and 3 instances of RCv2 to the VM. Each instance will echo a unique guid...") self.enable_and_assert_test_cases(cases_to_enable=test_cases, cases_to_assert=test_cases) # Update MCExt3 and CSE with new guids and add a new instance of RCv2 to the VM updated_test_cases: Dict[str, MultiConfigExt.TestCase] = { "MCExt3": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt3"), mc_settings), "MCExt4": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt4"), mc_settings), "CSE": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript), sc_settings) } test_cases.update(updated_test_cases) # Enable only the updated extensions, verify every extension has the correct test guid is in status message, and # remove all extensions from the test vm log.info("") log.info("Update MCExt3 and CSE with new guids and add a new instance of RCv2 to the VM...") self.enable_and_assert_test_cases(cases_to_enable=updated_test_cases, cases_to_assert=test_cases, delete_extensions=True) # Enable, verify, and remove only multi config extensions log.info("") log.info("Add only multi-config extensions to the VM...") mc_test_cases: Dict[str, MultiConfigExt.TestCase] = { "MCExt5": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt5"), mc_settings), "MCExt6": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.RunCommandHandler, resource_name="MCExt6"), mc_settings) } self.enable_and_assert_test_cases(cases_to_enable=mc_test_cases, cases_to_assert=mc_test_cases, delete_extensions=True) # Enable, verify, and delete only single config extensions log.info("") log.info("Add only single-config extension to the VM...") sc_test_cases: Dict[str, MultiConfigExt.TestCase] = { "CSE": MultiConfigExt.TestCase( VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript), sc_settings) } self.enable_and_assert_test_cases(cases_to_enable=sc_test_cases, cases_to_assert=sc_test_cases, delete_extensions=True) if __name__ == "__main__": MultiConfigExt.run_from_command_line() 
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/no_outbound_connections/000077500000000000000000000000001462617747000255245ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/no_outbound_connections/check_fallback_to_hgap.py000077500000000000000000000043631462617747000325040ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from assertpy import assert_that from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient class CheckFallbackToHGAP(AgentVmTest): """ Check the agent log to verify that the default channel was changed to HostGAPlugin before executing any extensions. """ def run(self): # 2023-04-14T14:49:43.005530Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. # 2023-04-14T14:49:44.625061Z INFO ExtHandler [Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-1.25.2] Target handler state: enabled [incarnation_2] ssh_client: SshClient = self._context.create_ssh_client() log.info("Parsing agent log on the test VM") output = ssh_client.run_command("grep -E 'INFO ExtHandler.*(Default channel changed to HostGAPlugin)|(Target handler state:)' /var/log/waagent.log | head").split('\n') log.info("Output (first 10 lines) from the agent log:\n\t\t%s", '\n\t\t'.join(output)) assert_that(len(output) > 1).is_true().described_as( "The agent log should contain multiple matching records" ) assert_that(output[0]).contains("Default channel changed to HostGAPlugin").described_as( "The agent log should contain a record indicating that the default channel was changed to HostGAPlugin before executing any extensions" ) log.info("The agent log indicates that the default channel was changed to HostGAPlugin before executing any extensions") if __name__ == "__main__": CheckFallbackToHGAP.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/no_outbound_connections/check_no_outbound_connections.py000077500000000000000000000043561462617747000342030ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# from assertpy import fail from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.ssh_client import SshClient class CheckNoOutboundConnections(AgentVmTest): """ Verifies that there is no outbound connectivity on the test VM. """ def run(self): # This script is executed on the test VM. It tries to connect to a well-known DNS server (DNS is on port 53). script: str = """ import socket, sys try: socket.setdefaulttimeout(5) socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(("8.8.8.8", 53)) except socket.timeout: print("No outbound connectivity [expected]") exit(0) print("There is outbound connectivity [unexpected: the custom ARM template should not allow it]", file=sys.stderr) exit(1) """ ssh_client: SshClient = self._context.create_ssh_client() try: log.info("Verifying that there is no outbound connectivity on the test VM") ssh_client.run_command("pypy3 -c '{0}'".format(script.replace('"', '\"'))) log.info("There is no outbound connectivity, as expected.") except CommandError as e: if e.exit_code == 1 and "There is outbound connectivity" in e.stderr: fail("There is outbound connectivity on the test VM, the custom ARM template should not allow it") else: raise Exception(f"Unexpected error while checking outbound connectivity on the test VM: {e}") if __name__ == "__main__": CheckNoOutboundConnections.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/no_outbound_connections/deny_outbound_connections.py000077500000000000000000000032641462617747000333660ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import json from typing import Any, Dict from tests_e2e.tests.lib.network_security_rule import NetworkSecurityRule from tests_e2e.tests.lib.update_arm_template import UpdateArmTemplate class DenyOutboundConnections(UpdateArmTemplate): """ Updates the ARM template to add a security rule that denies all outbound connections. 
""" def update(self, template: Dict[str, Any], is_lisa_template: bool) -> None: NetworkSecurityRule(template, is_lisa_template).add_security_rule( json.loads("""{ "name": "waagent-no-outbound", "properties": { "description": "Denies all outbound connections.", "protocol": "*", "sourcePortRange": "*", "destinationPortRange": "*", "sourceAddressPrefix": "*", "destinationAddressPrefix": "Internet", "access": "Deny", "priority": 200, "direction": "Outbound" } }""")) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/publish_hostname/000077500000000000000000000000001462617747000241335ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/publish_hostname/publish_hostname.py000066400000000000000000000255531462617747000300630ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This test updates the hostname and checks that the agent published the hostname to DNS. It also checks that the # primary network is up after publishing the hostname. This test was added in response to a bug in publishing the # hostname on fedora distros, where there was a race condition between NetworkManager restart and Network Interface # restart which caused the primary interface to go down. # import datetime import re from assertpy import fail from time import sleep from tests_e2e.tests.lib.shell import CommandError from tests_e2e.tests.lib.agent_test import AgentVmTest, TestSkipped from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.logging import log class PublishHostname(AgentVmTest): def __init__(self, context: AgentVmTestContext): super().__init__(context) self._context = context self._ssh_client = context.create_ssh_client() self._private_ip = context.vm.get_private_ip_address() self._vm_password = "" def add_vm_password(self): # Add password to VM to help with debugging in case of failure # REMOVE PWD FROM LOGS IF WE EVER MAKE THESE RUNS/LOGS PUBLIC username = self._ssh_client.username pwd = self._ssh_client.run_command("openssl rand -base64 32 | tr : .").rstrip() self._vm_password = pwd log.info("VM Username: {0}; VM Password: {1}".format(username, pwd)) self._ssh_client.run_command("echo '{0}:{1}' | sudo -S chpasswd".format(username, pwd)) def check_and_install_dns_tools(self): lookup_cmd = "dig -x {0}".format(self._private_ip) dns_regex = r"[\S\s]*;; ANSWER SECTION:\s.*PTR\s*(?P.*)\.internal\.(cloudapp\.net|chinacloudapp\.cn|usgovcloudapp\.net).*[\S\s]*" # Not all distros come with dig. 
Install dig if not on machine try: self._ssh_client.run_command("dig -v") except CommandError as e: if "dig: command not found" in e.stderr: distro = self._ssh_client.run_command("get_distro.py").rstrip().lower() if "debian_9" in distro: # Debian 9 hostname look up needs to be done with "host" instead of dig lookup_cmd = "host {0}".format(self._private_ip) dns_regex = r".*pointer\s(?P.*)\.internal\.(cloudapp\.net|chinacloudapp\.cn|usgovcloudapp\.net).*" elif "debian" in distro: self._ssh_client.run_command("apt install -y dnsutils", use_sudo=True) elif "alma" in distro or "rocky" in distro: self._ssh_client.run_command("dnf install -y bind-utils", use_sudo=True) else: raise else: raise return lookup_cmd, dns_regex def check_agent_reports_status(self): status_updated = False last_agent_status_time = self._context.vm.get_instance_view().vm_agent.statuses[0].time log.info("Agent reported status at {0}".format(last_agent_status_time)) retries = 3 while retries > 0 and not status_updated: agent_status_time = self._context.vm.get_instance_view().vm_agent.statuses[0].time if agent_status_time != last_agent_status_time: status_updated = True log.info("Agent reported status at {0}".format(last_agent_status_time)) else: retries -= 1 sleep(60) if not status_updated: fail("Agent hasn't reported status since {0} and ssh connection failed. Use the serial console in portal " "to check the contents of '/sys/class/net/eth0/operstate'. If the contents of this file are 'up', " "no further action is needed. If contents are 'down', that indicates the network interface is down " "and more debugging needs to be done to confirm this is not caused by the agent.\n VM: {1}\n RG: {2}" "\nSubscriptionId: {3}\nUsername: {4}\nPassword: {5}".format(last_agent_status_time, self._context.vm, self._context.vm.resource_group, self._context.vm.subscription, self._context.username, self._vm_password)) def retry_ssh_if_connection_reset(self, command: str, use_sudo=False): # The agent may bring the network down and back up to publish the hostname, which can reset the ssh connection. # Adding retry here for connection reset. retries = 3 while retries > 0: try: return self._ssh_client.run_command(command, use_sudo=use_sudo) except CommandError as e: retries -= 1 retryable = e.exit_code == 255 and "Connection reset by peer" in e.stderr if not retryable or retries == 0: raise log.warning("The SSH operation failed, retrying in 30 secs") sleep(30) def run(self): # TODO: Investigate why hostname is not being published on Ubuntu, alma, and rocky as expected distros_with_known_publishing_issues = ["ubuntu", "alma", "rocky"] distro = self._ssh_client.run_command("get_distro.py").lower() if any(d in distro for d in distros_with_known_publishing_issues): raise TestSkipped("Known issue with hostname publishing on this distro. Will skip test until we continue " "investigation.") # Add password to VM and log. This allows us to debug with serial console if necessary self.add_vm_password() # This test looks up what hostname is published to dns. Check that the tools necessary to get hostname are # installed, and if not install them. lookup_cmd, dns_regex = self.check_and_install_dns_tools() # Check if this distro monitors hostname changes. If it does, we should check that the agent detects the change # and publishes the host name. If it doesn't, we should check that the hostname is automatically published. 
monitors_hostname = self._ssh_client.run_command("get-waagent-conf-value Provisioning.MonitorHostName", use_sudo=True).rstrip().lower() hostname_change_ctr = 0 # Update the hostname 3 times while hostname_change_ctr < 3: try: hostname = "hostname-monitor-{0}".format(hostname_change_ctr) log.info("Update hostname to {0}".format(hostname)) self.retry_ssh_if_connection_reset("hostnamectl set-hostname {0}".format(hostname), use_sudo=True) # Wait for the agent to detect the hostname change for up to 2 minutes if hostname monitoring is enabled if monitors_hostname == "y" or monitors_hostname == "yes": log.info("Agent hostname monitoring is enabled") timeout = datetime.datetime.now() + datetime.timedelta(minutes=2) hostname_detected = "" while datetime.datetime.now() <= timeout: try: hostname_detected = self.retry_ssh_if_connection_reset("grep -n 'Detected hostname change:.*-> {0}' /var/log/waagent.log".format(hostname), use_sudo=True) if hostname_detected: log.info("Agent detected hostname change: {0}".format(hostname_detected)) break except CommandError as e: # Exit code 1 indicates grep did not find a match. Sleep if exit code is 1, otherwise raise. if e.exit_code != 1: raise sleep(15) if not hostname_detected: fail("Agent did not detect hostname change: {0}".format(hostname)) else: log.info("Agent hostname monitoring is disabled") # Check that the expected hostname is published with 4 minute timeout timeout = datetime.datetime.now() + datetime.timedelta(minutes=4) published_hostname = "" while datetime.datetime.now() <= timeout: try: dns_info = self.retry_ssh_if_connection_reset(lookup_cmd) actual_hostname = re.match(dns_regex, dns_info) if actual_hostname: # Compare published hostname to expected hostname published_hostname = actual_hostname.group('hostname') if hostname == published_hostname: log.info("SUCCESS Hostname {0} was published successfully".format(hostname)) break else: log.info("Unable to parse the dns info: {0}".format(dns_info)) except CommandError as e: if "NXDOMAIN" in e.stdout: log.info("DNS Lookup could not find domain. Will try again.") else: raise sleep(30) if published_hostname == "" or published_hostname != hostname: fail("Hostname {0} was not published successfully. Actual host name is: {1}".format(hostname, published_hostname)) hostname_change_ctr += 1 except CommandError as e: # If failure is ssh issue, we should confirm that the VM did not lose network connectivity due to the # agent's operations on the network. If agent reports status after this failure, then we know the # network is up. if e.exit_code == 255 and ("Connection timed out" in e.stderr or "Connection refused" in e.stderr): self.check_agent_reports_status() raise if __name__ == "__main__": PublishHostname.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/recover_network_interface/000077500000000000000000000000001462617747000260255ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/recover_network_interface/recover_network_interface.py000066400000000000000000000165001462617747000336370ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # This test uses CSE to bring the network down and call check_and_recover_nic_state to bring the network back into an # 'up' and 'connected' state. The intention of the test is to alert us if there is some change in newer distros which # affects this logic. # import json from typing import List, Dict, Any from assertpy import fail, assert_that from time import sleep from tests_e2e.tests.lib.agent_test import AgentVmTest, TestSkipped from tests_e2e.tests.lib.agent_test_context import AgentVmTestContext from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.virtual_machine_extension_client import VirtualMachineExtensionClient from tests_e2e.tests.lib.vm_extension_identifier import VmExtensionIds class RecoverNetworkInterface(AgentVmTest): def __init__(self, context: AgentVmTestContext): super().__init__(context) self._context = context self._ssh_client = context.create_ssh_client() self._private_ip = context.vm.get_private_ip_address() self._vm_password = "" def add_vm_password(self): # Add password to VM to help with debugging in case of failure # REMOVE PWD FROM LOGS IF WE EVER MAKE THESE RUNS/LOGS PUBLIC username = self._ssh_client.username pwd = self._ssh_client.run_command("openssl rand -base64 32 | tr : .").rstrip() self._vm_password = pwd log.info("VM Username: {0}; VM Password: {1}".format(username, pwd)) self._ssh_client.run_command("echo '{0}:{1}' | sudo -S chpasswd".format(username, pwd)) def check_agent_reports_status(self): status_updated = False last_agent_status_time = self._context.vm.get_instance_view().vm_agent.statuses[0].time log.info("Agent reported status at {0}".format(last_agent_status_time)) retries = 3 while retries > 0 and not status_updated: agent_status_time = self._context.vm.get_instance_view().vm_agent.statuses[0].time if agent_status_time != last_agent_status_time: status_updated = True log.info("Agent reported status at {0}".format(last_agent_status_time)) else: retries -= 1 sleep(60) if not status_updated: fail("Agent hasn't reported status since {0} and ssh connection failed. Use the serial console in portal " "to debug".format(last_agent_status_time)) def run(self): # Add password to VM and log. This allows us to debug with serial console if necessary log.info("") log.info("Adding password to the VM to use for debugging in case necessary...") self.add_vm_password() # Skip the test if NM_CONTROLLED=n. The current recover logic does not work in this case result = self._ssh_client.run_command("recover_network_interface-get_nm_controlled.py", use_sudo=True) if "Interface is NOT NM controlled" in result: raise TestSkipped("Current recover method will not work on interfaces where NM_Controlled=n") # Get the primary network interface name ifname = self._ssh_client.run_command("pypy3 -c 'from azurelinuxagent.common.osutil.redhat import RedhatOSUtil; print(RedhatOSUtil().get_if_name())'").rstrip() # The interface name needs to be in double quotes for the pypy portion of the script formatted_ifname = f'"{ifname}"' # The script should bring the primary network interface down and use the agent to recover the interface. 
These # commands will bring the network down, so they should be executed on the machine using CSE instead of ssh. script = f""" set -euxo pipefail ifdown {ifname}; nic_state=$(nmcli -g general.state device show {ifname}) echo Primary network interface state before recovering: $nic_state source /home/{self._context.username}/bin/set-agent-env; pypy3 -c 'from azurelinuxagent.common.osutil.redhat import RedhatOSUtil; RedhatOSUtil().check_and_recover_nic_state({formatted_ifname})'; nic_state=$(nmcli -g general.state device show {ifname}); echo Primary network interface state after recovering: $nic_state """ log.info("") log.info("Using CSE to bring the primary network interface down and call the OSUtil to bring the interface back up. Command to execute: {0}".format(script)) custom_script = VirtualMachineExtensionClient(self._context.vm, VmExtensionIds.CustomScript, resource_name="CustomScript") custom_script.enable(protected_settings={'commandToExecute': script}, settings={}) # Check that the interface was down and brought back up in instance view log.info("") log.info("Checking the instance view to confirm the primary network interface was brought down and successfully recovered by the agent...") instance_view = custom_script.get_instance_view() log.info("Instance view for custom script after enable is: {0}".format(json.dumps(instance_view.serialize(), indent=4))) assert_that(len(instance_view.statuses)).described_as("Instance view should have a status for CustomScript").is_greater_than(0) assert_that(instance_view.statuses[0].message).described_as("The primary network interface should be in a disconnected state before the attempt to recover").contains("Primary network interface state before recovering: 30 (disconnected)") assert_that(instance_view.statuses[0].message).described_as("The primary network interface should be in a connected state after the attempt to recover").contains("Primary network interface state after recovering: 100 (connected)") # Check that the agent is successfully reporting status after recovering the network log.info("") log.info("Checking that the agent is reporting status after recovering the network...") self.check_agent_reports_status() log.info("") log.info("The primary network interface was successfully recovered by the agent.") def get_ignore_error_rules(self) -> List[Dict[str, Any]]: ignore_rules = [ # # We may see temporary network unreachable warnings since we are bringing the network interface down # 2024-02-01T23:40:03.563499Z ERROR ExtHandler ExtHandler Error fetching the goal state: [ProtocolError] GET vmSettings [correlation ID: ac21bdd7-1a7a-4bba-b307-b9d5bc30da33 eTag: 941323814975149980]: Request failed: [Errno 101] Network is unreachable # { 'message': r"Error fetching the goal state: \[ProtocolError\] GET vmSettings.*Request failed: \[Errno 101\] Network is unreachable" } ] return ignore_rules if __name__ == "__main__": RecoverNetworkInterface.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/samples/000077500000000000000000000000001462617747000222335ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/samples/error_remote_test.py000077500000000000000000000017231462617747000263560ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest class ErrorRemoteTest(AgentVmTest): """ A trivial remote test that fails """ def run(self): self._run_remote_test(self._context.create_ssh_client(), "samples-error_remote_test.py") if __name__ == "__main__": ErrorRemoteTest.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/samples/error_test.py000077500000000000000000000016561462617747000250100ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest class ErrorTest(AgentVmTest): """ A trivial test that errors out """ def run(self): raise Exception("* TEST ERROR *") # simulate an unexpected error if __name__ == "__main__": ErrorTest.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/samples/fail_remote_test.py000077500000000000000000000017201462617747000261350ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest class FailRemoteTest(AgentVmTest): """ A trivial remote test that fails """ def run(self): self._run_remote_test(self._context.create_ssh_client(), "samples-fail_remote_test.py") if __name__ == "__main__": FailRemoteTest.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/samples/fail_test.py000077500000000000000000000016271462617747000245700ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # from assertpy import fail from tests_e2e.tests.lib.agent_test import AgentVmTest class FailTest(AgentVmTest): """ A trivial test that fails """ def run(self): fail("* TEST FAILED *") if __name__ == "__main__": FailTest.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/samples/pass_remote_test.py000077500000000000000000000017231462617747000261730ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest class PassRemoteTest(AgentVmTest): """ A trivial remote test that succeeds """ def run(self): self._run_remote_test(self._context.create_ssh_client(), "samples-pass_remote_test.py") if __name__ == "__main__": PassRemoteTest.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/samples/pass_test.py000077500000000000000000000016521462617747000246210ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from tests_e2e.tests.lib.agent_test import AgentVmTest from tests_e2e.tests.lib.logging import log class PassTest(AgentVmTest): """ A trivial test that passes. """ def run(self): log.info("* PASSED *") if __name__ == "__main__": PassTest.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/samples/vmss_test.py000077500000000000000000000024551462617747000246450ustar00rootroot00000000000000#!/usr/bin/env python3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# from tests_e2e.tests.lib.agent_test import AgentVmssTest from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.ssh_client import SshClient class VmssTest(AgentVmssTest): """ Sample test for scale sets """ def run(self): for address in self._context.vmss.get_instances_ip_address(): ssh_client: SshClient = SshClient(ip_address=address.ip_address, username=self._context.username, identity_file=self._context.identity_file) log.info("%s: Hostname: %s", address.instance_name, ssh_client.run_command("hostname").strip()) log.info("* PASSED *") if __name__ == "__main__": VmssTest.run_from_command_line() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/000077500000000000000000000000001462617747000222565ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_cgroups-check_cgroups_agent.py000077500000000000000000000112261462617747000314700ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import os import re from assertpy import fail from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.cgroup_helpers import BASE_CGROUP, AGENT_CONTROLLERS, get_agent_cgroup_mount_path, \ AGENT_SERVICE_NAME, verify_if_distro_supports_cgroup, print_cgroups, \ verify_agent_cgroup_assigned_correctly from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test def verify_if_cgroup_controllers_are_mounted(): """ Checks that the CPU and Memory controllers used by the agent are mounted in the system """ log.info("===== Verifying the cgroup controllers used by the agent are mounted in the system") all_controllers_present = os.path.exists(BASE_CGROUP) missing_controllers = [] mounted_controllers = [] for controller in AGENT_CONTROLLERS: controller_path = os.path.join(BASE_CGROUP, controller) if not os.path.exists(controller_path): all_controllers_present = False missing_controllers.append(controller_path) else: mounted_controllers.append(controller_path) if not all_controllers_present: fail('Not all of the controllers {0} are mounted in the expected cgroups. 
Mounted controllers are: {1}.\n ' 'Missing controllers are: {2} \n System mounted cgroups are:\n{3}'.format(AGENT_CONTROLLERS, mounted_controllers, missing_controllers, print_cgroups())) log.info('Verified all cgroup controllers are present.\n {0}'.format(mounted_controllers)) def verify_agent_cgroup_created_on_file_system(): """ Checks that the agent service is running in the azure.slice/{agent_service} cgroup and is mounted under the same system cgroup controller mount paths """ log.info("===== Verifying the agent cgroup paths exist on file system") agent_cgroup_mount_path = get_agent_cgroup_mount_path() all_agent_cgroup_controllers_path_exist = True missing_agent_cgroup_controllers_path = [] verified_agent_cgroup_controllers_path = [] log.info("expected agent cgroup mount path: %s", agent_cgroup_mount_path) for controller in AGENT_CONTROLLERS: agent_controller_path = os.path.join(BASE_CGROUP, controller, agent_cgroup_mount_path[1:]) if not os.path.exists(agent_controller_path): all_agent_cgroup_controllers_path_exist = False missing_agent_cgroup_controllers_path.append(agent_controller_path) else: verified_agent_cgroup_controllers_path.append(agent_controller_path) if not all_agent_cgroup_controllers_path_exist: fail("Agent's cgroup paths couldn't be found on file system. Missing agent cgroups path: {0}.\n Verified agent cgroups path: {1}".format(missing_agent_cgroup_controllers_path, verified_agent_cgroup_controllers_path)) log.info('Verified all agent cgroup paths are present.\n {0}'.format(verified_agent_cgroup_controllers_path)) def verify_agent_cgroups_tracked(): """ Checks that the agent is tracking its cgroup paths for polling resource usage. This is verified by checking the agent log for the message "Started tracking cgroup" """ log.info("===== Verifying agent started tracking cgroups from the log") tracking_agent_cgroup_message_re = r'Started tracking cgroup [^\s]+\s+\[(?P<path>[^\s]+)\]' tracked_cgroups = [] for record in AgentLog().read(): match = re.search(tracking_agent_cgroup_message_re, record.message) if match is not None: tracked_cgroups.append(match.group('path')) for controller in AGENT_CONTROLLERS: if not any(AGENT_SERVICE_NAME in cgroup_path and controller in cgroup_path for cgroup_path in tracked_cgroups): fail('Agent {0} is not being tracked. Tracked cgroups:{1}'.format(controller, tracked_cgroups)) log.info("Agent is tracking cgroups correctly.\n%s", tracked_cgroups) def main(): verify_if_distro_supports_cgroup() verify_if_cgroup_controllers_are_mounted() verify_agent_cgroup_created_on_file_system() verify_agent_cgroup_assigned_correctly() verify_agent_cgroups_tracked() run_remote_test(main) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_cpu_quota-check_agent_cpu_quota.py000077500000000000000000000244531462617747000323300ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# import datetime import os import re import shutil import time from assertpy import fail from azurelinuxagent.common.osutil import systemd from azurelinuxagent.common.utils import shellutil from azurelinuxagent.ga.cgroupconfigurator import _DROP_IN_FILE_CPU_QUOTA from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.cgroup_helpers import check_agent_quota_disabled, \ get_agent_cpu_quota from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false def prepare_agent(): # This function prepares the agent: # 1) It modifies the service unit file to wrap the agent process with a script that starts the actual agent and then # launches an instance of the dummy process to consume the CPU. Since all these processes are in the same cgroup, # this has the same effect as the agent itself consuming the CPU. # # The process tree is similar to # # /usr/bin/python3 /home/azureuser/bin/agent_cpu_quota-start_service.py /usr/bin/python3 -u /usr/sbin/waagent -daemon # ├─/usr/bin/python3 -u /usr/sbin/waagent -daemon # │ └─python3 -u bin/WALinuxAgent-9.9.9.9-py3.8.egg -run-exthandlers # │ └─4*[{python3}] # ├─dd if=/dev/zero of=/dev/null # │ # └─{python3} # # And the agent's cgroup looks like # # CGroup: /azure.slice/walinuxagent.service # ├─10507 /usr/bin/python3 /home/azureuser/bin/agent_cpu_quota-start_service.py /usr/bin/python3 -u /usr/sbin/waagent -daemon # ├─10508 /usr/bin/python3 -u /usr/sbin/waagent -daemon # ├─10516 python3 -u bin/WALinuxAgent-9.9.9.9-py3.8.egg -run-exthandlers # ├─10711 dd if=/dev/zero of=/dev/null # # 2) It turns on a few debug flags and restarts the agent log.info("***Preparing agent for testing cpu quota") # # Create a drop-in file to wrap "start-service.py" around the actual agent: This will override the ExecStart line in the agent's unit file # # ExecStart= (needs to be empty to clear the original ExecStart) # ExecStart=/home/.../agent_cpu_quota-start_service.py /usr/bin/python3 -u /usr/sbin/waagent -daemon # service_file = systemd.get_agent_unit_file() exec_start = None with open(service_file, "r") as file_: for line in file_: match = re.match("ExecStart=(.+)", line) if match is not None: exec_start = match.group(1) break else: file_.seek(0) raise Exception("Could not find ExecStart in {0}\n:{1}".format(service_file, file_.read())) agent_python = exec_start.split()[0] current_directory = os.path.dirname(os.path.abspath(__file__)) start_service_script = os.path.join(current_directory, "agent_cpu_quota-start_service.py") drop_in_file = os.path.join(systemd.get_agent_drop_in_path(), "99-ExecStart.conf") log.info("Creating %s...", drop_in_file) with open(drop_in_file, "w") as file_: file_.write(""" [Service] ExecStart= ExecStart={0} {1} {2} """.format(agent_python, start_service_script, exec_start)) log.info("Executing daemon-reload") shellutil.run_command(["systemctl", "daemon-reload"]) # Disable all checks on cgroups and enable log metrics every 20 sec log.info("Executing script update-waagent-conf to enable agent cgroups config flag") result = shellutil.run_command(["update-waagent-conf", "Debug.CgroupCheckPeriod=20", "Debug.CgroupLogMetrics=y", "Debug.CgroupDisableOnProcessCheckFailure=n", "Debug.CgroupDisableOnQuotaCheckFailure=n"]) log.info("Successfully enabled agent cgroups config flag: {0}".format(result)) def verify_agent_reported_metrics(): """ This method verifies that the agent reports % Processor Time and Throttled Time metrics """ log.info("** Verifying 
agent reported metrics") log.info("Parsing agent log for metrics") processor_time = [] throttled_time = [] def check_agent_log_for_metrics() -> bool: for record in AgentLog().read(): match = re.search(r"% Processor Time\s*\[walinuxagent.service\]\s*=\s*([0-9.]+)", record.message) if match is not None: processor_time.append(float(match.group(1))) else: match = re.search(r"Throttled Time\s*\[walinuxagent.service\]\s*=\s*([0-9.]+)", record.message) if match is not None: throttled_time.append(float(match.group(1))) if len(processor_time) < 1 or len(throttled_time) < 1: return False return True found: bool = retry_if_false(check_agent_log_for_metrics) if found: log.info("%% Processor Time: %s", processor_time) log.info("Throttled Time: %s", throttled_time) log.info("Successfully verified agent reported resource metrics") else: fail( "The agent doesn't seem to be collecting % Processor Time and Throttled Time metrics. Agent found Processor Time: {0}, Throttled Time: {1}".format( processor_time, throttled_time)) def wait_for_log_message(message, timeout=datetime.timedelta(minutes=5)): log.info("Checking agent's log for message matching [%s]", message) start_time = datetime.datetime.now() while datetime.datetime.now() - start_time <= timeout: for record in AgentLog().read(): match = re.search(message, record.message, flags=re.DOTALL) if match is not None: log.info("Found message:\n\t%s", record.text.replace("\n", "\n\t")) return time.sleep(30) fail("The agent did not find [{0}] in its log within the allowed timeout".format(message)) def verify_process_check_on_agent_cgroups(): """ This method checks agent detect unexpected processes in its cgroup and disables the CPUQuota """ log.info("***Verifying process check on agent cgroups") log.info("Ensuring agent CPUQuota is enabled and backup the drop-in file to restore later in further tests") if check_agent_quota_disabled(): fail("The agent's CPUQuota is not enabled: {0}".format(get_agent_cpu_quota())) quota_drop_in = os.path.join(systemd.get_agent_drop_in_path(), _DROP_IN_FILE_CPU_QUOTA) quota_drop_in_backup = quota_drop_in + ".bk" log.info("Backing up %s to %s...", quota_drop_in, quota_drop_in_backup) shutil.copy(quota_drop_in, quota_drop_in_backup) # # Re-enable Process checks on cgroups and verify that the agent detects unexpected processes in its cgroup and disables the CPUQuota wehen # that happens # shellutil.run_command(["update-waagent-conf", "Debug.CgroupDisableOnProcessCheckFailure=y"]) # The log message indicating the check failed is similar to # 2021-03-29T23:33:15.603530Z INFO MonitorHandler ExtHandler Disabling resource usage monitoring. Reason: Check on cgroups failed: # [CGroupsException] The agent's cgroup includes unexpected processes: ['[PID: 25826] python3\x00/home/nam/Compute-Runtime-Tux-Pipeline/dungeon_crawler/s'] wait_for_log_message( "Disabling resource usage monitoring. 
Reason: Check on cgroups failed:.+The agent's cgroup includes unexpected processes") disabled: bool = retry_if_false(check_agent_quota_disabled) if not disabled: fail("The agent did not disable its CPUQuota: {0}".format(get_agent_cpu_quota())) def verify_throttling_time_check_on_agent_cgroups(): """ This method checks that the agent disables its CPUQuota when it exceeds its throttling limit """ log.info("***Verifying CPU throttling check on agent cgroups") # Now disable the check on unexpected processes and enable the check on throttled time and verify that the agent disables its CPUQuota when it exceeds its throttling limit log.info("Re-enabling CPUQuota...") quota_drop_in = os.path.join(systemd.get_agent_drop_in_path(), _DROP_IN_FILE_CPU_QUOTA) quota_drop_in_backup = quota_drop_in + ".bk" log.info("Restoring %s from %s...", quota_drop_in, quota_drop_in_backup) shutil.copy(quota_drop_in_backup, quota_drop_in) shellutil.run_command(["systemctl", "daemon-reload"]) shellutil.run_command(["update-waagent-conf", "Debug.CgroupDisableOnProcessCheckFailure=n", "Debug.CgroupDisableOnQuotaCheckFailure=y", "Debug.AgentCpuThrottledTimeThreshold=5"]) # The log message indicating the check failed is similar to # 2021-04-01T20:47:55.892569Z INFO MonitorHandler ExtHandler Disabling resource usage monitoring. Reason: Check on cgroups failed: # [CGroupsException] The agent has been throttled for 121.339916938 seconds # # After that, we need to wait a little longer for the agent to update systemd: # 2021-04-14T01:51:44.399860Z INFO MonitorHandler ExtHandler Executing systemctl daemon-reload... # wait_for_log_message( "Disabling resource usage monitoring. Reason: Check on cgroups failed:.+The agent has been throttled", timeout=datetime.timedelta(minutes=10)) wait_for_log_message("Stopped tracking cgroup walinuxagent.service", timeout=datetime.timedelta(minutes=10)) wait_for_log_message("Executing systemctl daemon-reload...", timeout=datetime.timedelta(minutes=5)) disabled: bool = retry_if_false(check_agent_quota_disabled) if not disabled: fail("The agent did not disable its CPUQuota: {0}".format(get_agent_cpu_quota())) def main(): prepare_agent() verify_agent_reported_metrics() verify_process_check_on_agent_cgroups() verify_throttling_time_check_on_agent_cgroups() run_remote_test(main) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_cpu_quota-start_service.py000077500000000000000000000057031462617747000306700ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # This script starts the actual agent and then launches an instance of the dummy process periodically to consume the CPU # import signal import subprocess import sys import threading import time import traceback from azurelinuxagent.common import logger class CpuConsumer(threading.Thread): def __init__(self): threading.Thread.__init__(self) self._stopped = False def run(self): threading.current_thread().setName("*Stress*") while not self._stopped: try: # Dummy operation (reads from /dev/zero and discards the output) which creates load on the CPU dd_command = ["dd", "if=/dev/zero", "of=/dev/null"] logger.info("Starting dummy dd command: {0} to stress CPU", ' '.join(dd_command)) subprocess.Popen(dd_command) logger.info("dd command started; sleeping...") i = 0 while i < 30 and not self._stopped: time.sleep(1) i += 1 except Exception as exception: logger.error("{0}:\n{1}", exception, traceback.format_exc()) def stop(self): self._stopped = True try: threading.current_thread().setName("*StartService*") logger.set_prefix("E2ETest") logger.add_logger_appender(logger.AppenderType.FILE, logger.LogLevel.INFO, "/var/log/waagent.log") agent_command_line = sys.argv[1:] logger.info("Starting Agent: {0}", ' '.join(agent_command_line)) agent_process = subprocess.Popen(agent_command_line) # sleep a little to give the agent a chance to initialize time.sleep(15) cpu_consumer = CpuConsumer() cpu_consumer.start() def forward_signal(signum, _): if signum == signal.SIGTERM: logger.info("Stopping stress thread...") cpu_consumer.stop() logger.info("Forwarding signal {0} to Agent", signum) agent_process.send_signal(signum) signal.signal(signal.SIGTERM, forward_signal) agent_process.wait() logger.info("Agent completed") cpu_consumer.stop() cpu_consumer.join() logger.info("Stress completed") logger.info("Exiting...") sys.exit(agent_process.returncode) except Exception as exception: logger.error("Unexpected error occurred while starting agent service: {0}", exception) raise Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_ext_workflow-assert_operation_sequence.py000077500000000000000000000170221462617747000340140ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # The DcrTestExtension maintains an `operations-<version>.log` for every operation that the agent executes on that # extension. This script asserts that the operations sequence in the log file matches the expected operations given as # input to this script. We do this to confirm that the agent executed the correct sequence of operations. 
# # Sample operations-<version>.log file snippet - # Date:2019-07-30T21:54:03Z; Operation:install; SeqNo:0 # Date:2019-07-30T21:54:05Z; Operation:enable; SeqNo:0 # Date:2019-07-30T21:54:37Z; Operation:enable; SeqNo:1 # Date:2019-07-30T21:55:20Z; Operation:disable; SeqNo:1 # Date:2019-07-30T21:55:22Z; Operation:uninstall; SeqNo:1 # import argparse import os import sys import time from datetime import datetime from typing import Any, Dict, List DELIMITER = ";" OPS_FILE_DIR = "/var/log/azure/Microsoft.Azure.TestExtensions.Edp.GuestAgentDcrTest/" OPS_FILE_PATTERN = ["operations-%s.log", "%s/operations-%s.log"] MAX_RETRY = 5 SLEEP_TIMER = 30 def parse_ops_log(ops_version: str, input_ops: List[str], start_time: datetime): # input_ops are the expected operations that we expect to see in the operations log file ver = (ops_version,) ops_file_name = None for file_pat in OPS_FILE_PATTERN: ops_file_name = os.path.join(OPS_FILE_DIR, file_pat % ver) if not os.path.exists(ops_file_name): ver = ver + (ops_version,) ops_file_name = None continue break if not ops_file_name: raise IOError("Operations File %s not found" % os.path.join(OPS_FILE_DIR, OPS_FILE_PATTERN[0] % ops_version)) ops = [] with open(ops_file_name, 'r') as ops_log: # we get the last len(input_ops) from the log file and ensure they match with the input_ops # Example of a line in the log file - `Date:2019-07-30T21:54:03Z; Operation:install; SeqNo:0` content = ops_log.readlines()[-len(input_ops):] for op_log in content: data = op_log.split(DELIMITER) date = datetime.strptime(data[0].split("Date:")[1], "%Y-%m-%dT%H:%M:%SZ") op = data[1].split("Operation:")[1] seq_no = data[2].split("SeqNo:")[1].strip('\n') # We only capture the operations whose date is > the start_time of the test if start_time > date: continue ops.append({'date': date, 'op': op, 'seq_no': seq_no}) return ops def assert_ops_in_sequence(actual_ops: List[Dict[str, Any]], expected_ops: List[str]): exit_code = 0 if len(actual_ops) != len(expected_ops): print("Operation sequence length doesn't match, exit code 2") exit_code = 2 last_date = datetime(70, 1, 1) for idx, val in enumerate(actual_ops): if exit_code != 0: break if val['date'] < last_date or val['op'] != expected_ops[idx]: print("Operation sequence doesn't match, exit code 2") exit_code = 2 last_date = val['date'] return exit_code def check_update_sequence(args): # old_ops_file_name = OPS_FILE_PATTERN % args.old_version # new_ops_file_name = OPS_FILE_PATTERN % args.new_version actual_ops = parse_ops_log(args.old_version, args.old_ops, args.start_time) actual_ops.extend(parse_ops_log(args.new_version, args.new_ops, args.start_time)) actual_ops = sorted(actual_ops, key=lambda op: op['date']) exit_code = assert_ops_in_sequence(actual_ops, args.ops) return exit_code, actual_ops def check_operation_sequence(args): # ops_file_name = OPS_FILE_PATTERN % args.version actual_ops = parse_ops_log(args.version, args.ops, args.start_time) exit_code = assert_ops_in_sequence(actual_ops, args.ops) return exit_code, actual_ops def main(): # There are 2 main ways you can call this file - normal_ops_sequence or update_sequence parser = argparse.ArgumentParser() cmd_parsers = parser.add_subparsers(help="sub-command help", dest="command") # We use start_time to make sure we're checking the correct test run and not some other test parser.add_argument("--start-time", dest='start_time', required=True) # Normal_ops_sequence gets the version of the ext and parses the corresponding operations file to get the operation # sequence that was run on the extension 
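# (Illustrative note added in this edit, not part of the original script.) A hedged example of how the
# normal_ops_sequence subcommand defined below might be invoked; the version and operation values are
# hypothetical:
#
#     ./agent_ext_workflow-assert_operation_sequence.py --start-time 2019-07-30T21:54:00Z \
#         normal_ops_sequence --version 1.1.5 --ops install enable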
normal_ops_sequence_parser = cmd_parsers.add_parser("normal_ops_sequence", help="Test the normal operation sequence") normal_ops_sequence_parser.add_argument('--version', dest='version') normal_ops_sequence_parser.add_argument('--ops', nargs='*', dest='ops', default=argparse.SUPPRESS) # Update_sequence mode is used to check for the update scenario. We get the expected old operations, expected # new operations and the final operation list and verify if the expected operations match the actual ones update_sequence_parser = cmd_parsers.add_parser("update_sequence", help="Test the update operation sequence") update_sequence_parser.add_argument("--old-version", dest="old_version") update_sequence_parser.add_argument("--new-version", dest="new_version") update_sequence_parser.add_argument("--old-ver-ops", nargs="*", dest="old_ops", default=argparse.SUPPRESS) update_sequence_parser.add_argument("--new-ver-ops", nargs="*", dest="new_ops", default=argparse.SUPPRESS) update_sequence_parser.add_argument("--final-ops", nargs="*", dest="ops", default=argparse.SUPPRESS) args, unknown = parser.parse_known_args() if unknown: # Print any unknown arguments passed to this script and fix them with low priority print("[Low Priority][To-Fix] Found unknown args: %s" % ', '.join(unknown)) args.start_time = datetime.strptime(args.start_time, "%Y-%m-%dT%H:%M:%SZ") exit_code = 999 actual_ops = [] for i in range(0, MAX_RETRY): if args.command == "update_sequence": exit_code, actual_ops = check_update_sequence(args) elif args.command == "normal_ops_sequence": exit_code, actual_ops = check_operation_sequence(args) else: print("No such command %s, exit code 5\n" % args.command) exit_code, actual_ops = 5, [] break if exit_code == 0: break print("{0} test failed with exit code: {1}; Retry attempt: {2}; Retrying in {3} secs".format(args.command, exit_code, i, SLEEP_TIMER)) time.sleep(SLEEP_TIMER) if exit_code != 0: print("Expected Operations: %s" % ", ".join(args.ops)) print("Actual Operations: %s" % ','.join(["[%s, Date: %s]" % (op['op'], op['date'].strftime("%Y-%m-%dT%H:%M:%SZ")) for op in actual_ops])) print("Assertion completed, exiting with code: %s" % exit_code) sys.exit(exit_code) if __name__ == "__main__": print("Asserting operations\n") main() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_ext_workflow-check_data_in_agent_log.py000077500000000000000000000027371462617747000333250ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Checks that the input data is found in the agent log # import argparse import sys from pathlib import Path from tests_e2e.tests.lib.agent_log import AgentLog def main(): parser = argparse.ArgumentParser() parser.add_argument("--data", dest='data', required=True) args, _ = parser.parse_known_args() print("Verifying data: {0} in waagent.log".format(args.data)) found = False try: found = AgentLog(Path('/var/log/waagent.log')).agent_log_contains(args.data) if found: print("Found data: {0} in agent log".format(args.data)) else: print("Did not find data: {0} in agent log".format(args.data)) except Exception as e: print("Error thrown when searching for test data in agent log: {0}".format(str(e))) sys.exit(0 if found else 1) if __name__ == "__main__": main()agent_ext_workflow-validate_no_lag_between_agent_start_and_gs_processing.py000077500000000000000000000117641462617747000414540ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Asserts that goal state processing completed no more than 15 seconds after agent start # from datetime import timedelta import re import sys import time from pathlib import Path from tests_e2e.tests.lib.agent_log import AgentLog def main(): success = True needs_retry = True retry = 3 while retry >= 0 and needs_retry: success = True needs_retry = False agent_started_time = [] agent_msg = [] time_diff_max_secs = 15 last_agent_log_timestamp = None # Example: Agent WALinuxAgent-2.2.47.2 is running as the goal state agent agent_started_regex = r"Azure Linux Agent \(Goal State Agent version [0-9.]+\)" gs_completed_regex = r"ProcessExtensionsGoalState completed\s\[(?P<id>[a-z]+_\d+)\s(?P<duration>\d+)\sms\]" verified_atleast_one_log_line = False verified_atleast_one_agent_started_log_line = False verified_atleast_one_gs_complete_log_line = False agent_log = AgentLog(Path('/var/log/waagent.log')) try: for agent_record in agent_log.read(): last_agent_log_timestamp = agent_record.timestamp verified_atleast_one_log_line = True agent_started = re.match(agent_started_regex, agent_record.message) verified_atleast_one_agent_started_log_line = verified_atleast_one_agent_started_log_line or agent_started if agent_started: agent_started_time.append(agent_record.timestamp) agent_msg.append(agent_record.text) gs_complete = re.match(gs_completed_regex, agent_record.message) verified_atleast_one_gs_complete_log_line = verified_atleast_one_gs_complete_log_line or gs_complete if agent_started_time and gs_complete: duration = gs_complete.group('duration') diff = agent_record.timestamp - agent_started_time.pop() # Reduce the duration it took to complete the Goalstate, essentially we should only care about how long # the agent took after start/restart to start processing GS diff -= timedelta(milliseconds=int(duration)) agent_msg_line = agent_msg.pop() if diff.seconds > time_diff_max_secs: success = False print("Found delay between agent start and 
GoalState Processing > {0}secs: " "Messages: \n {1} {2}".format(time_diff_max_secs, agent_msg_line, agent_record.text)) except IOError as e: print("Unable to validate no lag time: {0}".format(str(e))) if not verified_atleast_one_log_line: success = False print("Didn't parse a single log line, ensure the log_parser is working fine and verify log regex") if not verified_atleast_one_agent_started_log_line: success = False print("Didn't parse a single agent started log line, ensure the Regex is working fine: {0}" .format(agent_started_regex)) if not verified_atleast_one_gs_complete_log_line: success = False print("Didn't parse a single GS completed log line, ensure the Regex is working fine: {0}" .format(gs_completed_regex)) if agent_started_time or agent_msg: # If agent_started_time or agent_msg is not empty, there is a mismatch in the number of agent start messages # and GoalState Processing messages # If another check hasn't already failed, and the last parsed log is less than 15 seconds after the # mismatched agent start log, we should retry after sleeping for 5s to give the agent time to finish # GoalState processing if success and last_agent_log_timestamp < (agent_started_time[-1] + timedelta(seconds=15)): needs_retry = True print("Sleeping for 5 seconds to allow goal state processing to complete...") time.sleep(5) else: success = False print("Mismatch between number of agent start messages and number of GoalState Processing messages\n " "Agent Start Messages: \n {0}".format('\n'.join(agent_msg))) retry -= 1 sys.exit(0 if success else 1) if __name__ == "__main__": main() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_firewall-verify_all_firewall_rules.py000077500000000000000000000355411462617747000330570ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # This script checks that all the agent firewall rules are added properly and working as expected # import argparse import os import pwd import socket from typing import List from azurelinuxagent.common.utils import shellutil from azurelinuxagent.common.utils.textutil import format_exception from tests_e2e.tests.lib.firewall_helpers import get_root_accept_rule_command, get_non_root_accept_rule_command, \ get_non_root_drop_rule_command, print_current_iptable_rules, get_wireserver_ip, get_all_iptable_rule_commands, \ check_if_iptable_rule_is_available, IPTableRules, verify_all_rules_exist, FIREWALL_PERIOD, execute_cmd from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test import http.client as httpclient from tests_e2e.tests.lib.retry import retry ROOT_USER = 'root' VERSIONS_PATH = '/?comp=versions' def switch_user(user: str) -> None: """ This function switches the effective user of the process to the given user """ try: uid = pwd.getpwnam(user)[2] log.info("uid:%s and user name:%s", uid, user) os.seteuid(uid) except Exception as e: raise Exception("Error -- failed to switch user to {0}: Failed with exception {1}".format(user, e)) def verify_rules_deleted_successfully(commands: List[List[str]] = None) -> None: """ This function is used to verify that the provided rules, or all iptable rules if none are specified, are deleted successfully. """ log.info("-----Verifying requested rules deleted successfully") if commands is None: commands = [] if not commands: root_accept, non_root_accept, non_root_drop = get_all_iptable_rule_commands(IPTableRules.CHECK_COMMAND) commands.extend([root_accept, non_root_accept, non_root_drop]) # "-C" returns error code 1 when the rule is not present, which is expected after deletion for command in commands: if check_if_iptable_rule_is_available(command): raise Exception("Deletion of ip table rules not successful.\nCurrent ip table rules:\n" + print_current_iptable_rules()) log.info("ip table rules deleted successfully \n %s", commands) def delete_iptable_rules(commands: List[List[str]] = None) -> None: """ This function is used to delete the provided rules, or all iptable rules if none are specified """ if commands is None: commands = [] if not commands: root_accept, non_root_accept, non_root_drop = get_all_iptable_rule_commands(IPTableRules.DELETE_COMMAND) commands.extend([root_accept, non_root_accept, non_root_drop]) log.info("-----Deleting ip table rules \n %s", commands) try: cmd = None for command in commands: cmd = command retry(lambda: execute_cmd(cmd=cmd), attempts=3) except Exception as e: raise Exception("Error -- Failed to Delete the ip table rule set {0}".format(e)) log.info("Success -- Deletion of ip table rules") def verify_dns_tcp_to_wireserver_is_allowed(user: str) -> None: """ This function is used to verify that tcp to wireserver is allowed for the given user """ log.info("-----Verifying DNS tcp to wireserver is allowed") switch_user(user) try: socket.create_connection((get_wireserver_ip(), 53), timeout=30) except Exception as e: raise Exception( "Error -- while using DNS TCP request as user:({0}), make sure the firewall rules are set correctly {1}".format(user, e)) log.info("Success -- can connect to wireserver port 53 using TCP as a user:(%s)", user) def verify_dns_tcp_to_wireserver_is_blocked(user: str) -> None: """ This function is used to verify that tcp to wireserver is blocked for the given user """ log.info("-----Verifying DNS tcp to wireserver is blocked") switch_user(user) try: socket.create_connection((get_wireserver_ip(), 53), timeout=10) raise 
Exception("Error -- unprivileged user:({0}) could connect to wireserver port 53 using TCP".format(user)) except Exception as e: # Expected timeout if unprivileged user reaches wireserver if isinstance(e, socket.timeout): log.info("Success -- unprivileged user:(%s) access to wireserver port 53 using TCP is blocked", user) else: raise Exception("Unexpected error while connecting to wireserver: {0}".format(format_exception(e))) def verify_http_to_wireserver_blocked(user: str) -> None: """ This function is used to verify if http to wireserver is blocked for the given user """ log.info("-----Verifying http request to wireserver is blocked") switch_user(user) try: client = httpclient.HTTPConnection(get_wireserver_ip(), timeout=10) except Exception as e: raise Exception("Error -- failed to create HTTP connection with user: {0} \n {1}".format(user, e)) try: blocked = False client.request('GET', VERSIONS_PATH) except Exception as e: # if we get timeout exception, it means the request is blocked if isinstance(e, socket.timeout): blocked = True else: raise Exception("Unexpected error while connecting to wireserver: {0}".format(format_exception(e))) if not blocked: raise Exception("Error -- unprivileged user:({0}) could connect to wireserver, make sure the firewall rules are set correctly".format(user)) log.info("Success -- unprivileged user:(%s) access to wireserver is blocked", user) def verify_http_to_wireserver_allowed(user: str) -> None: """ This function is used to verify if http to wireserver is allowed for the given user """ log.info("-----Verifying http request to wireserver is allowed") switch_user(user) try: client = httpclient.HTTPConnection(get_wireserver_ip(), timeout=30) except Exception as e: raise Exception("Error -- failed to create HTTP connection with user:{0} \n {1}".format(user, e)) try: client.request('GET', VERSIONS_PATH) except Exception as e: # if we get exception, it means the request is failed to connect raise Exception("Error -- unprivileged user:({0}) access to wireserver failed:\n {1}".format(user, e)) log.info("Success -- privileged user:(%s) access to wireserver is allowed", user) def verify_non_root_accept_rule(): """ This function verifies the non root accept rule and make sure it is re added by agent after deletion """ log.info("-----Verifying non root accept rule behavior") log.info("Before deleting the non root accept rule , ensure a non root user can do a tcp to wireserver but cannot do a http request") verify_dns_tcp_to_wireserver_is_allowed(NON_ROOT_USER) verify_http_to_wireserver_blocked(NON_ROOT_USER) # switch to root user required to stop the agent switch_user(ROOT_USER) # stop the agent, so that it won't re-add rules while checking log.info("Stop Guest Agent service") # agent-service is script name and stop is argument stop_agent = ["agent-service", "stop"] shellutil.run_command(stop_agent) # deleting non root accept rule non_root_accept_delete_cmd = get_non_root_accept_rule_command(IPTableRules.DELETE_COMMAND) delete_iptable_rules([non_root_accept_delete_cmd]) # verifying deletion successful non_root_accept_check_cmd = get_non_root_accept_rule_command(IPTableRules.CHECK_COMMAND) verify_rules_deleted_successfully([non_root_accept_check_cmd]) log.info("** Current IP table rules\n") print_current_iptable_rules() log.info("After deleting the non root accept rule , ensure a non root user cannot do a tcp to wireserver request") verify_dns_tcp_to_wireserver_is_blocked(NON_ROOT_USER) switch_user(ROOT_USER) # restart the agent to re-add the deleted rules 
log.info("Restart Guest Agent service to re-add the deleted rules") # agent-service is script name and start is argument start_agent = ["agent-service", "start"] shellutil.run_command(start_agent) verify_all_rules_exist() log.info("** Current IP table rules \n") print_current_iptable_rules() log.info("After appending the rule back , ensure a non root user can do a tcp to wireserver but cannot do a http request\n") verify_dns_tcp_to_wireserver_is_allowed(NON_ROOT_USER) verify_http_to_wireserver_blocked(NON_ROOT_USER) log.info("Ensuring missing rules are re-added by the running agent") # deleting non root accept rule non_root_accept_delete_cmd = get_non_root_accept_rule_command(IPTableRules.DELETE_COMMAND) delete_iptable_rules([non_root_accept_delete_cmd]) verify_all_rules_exist() log.info("** Current IP table rules \n") print_current_iptable_rules() log.info("non root accept rule verified successfully\n") def verify_root_accept_rule(): """ This function verifies the root accept rule and make sure it is re added by agent after deletion """ log.info("-----Verifying root accept rule behavior") log.info("Before deleting the root accept rule , ensure a root user can do a http request but non root user cannot") verify_http_to_wireserver_allowed(ROOT_USER) verify_http_to_wireserver_blocked(NON_ROOT_USER) # switch to root user required to stop the agent switch_user(ROOT_USER) # stop the agent, so that it won't re-add rules while checking log.info("Stop Guest Agent service") # agent-service is script name and stop is argument stop_agent = ["agent-service", "stop"] shellutil.run_command(stop_agent) # deleting root accept rule root_accept_delete_cmd = get_root_accept_rule_command(IPTableRules.DELETE_COMMAND) # deleting drop rule too otherwise after restart, the daemon will go into loop since it cannot connect to wireserver. 
This would block the agent initialization drop_delete_cmd = get_non_root_drop_rule_command(IPTableRules.DELETE_COMMAND) delete_iptable_rules([root_accept_delete_cmd, drop_delete_cmd]) # verifying deletion successful root_accept_check_cmd = get_root_accept_rule_command(IPTableRules.CHECK_COMMAND) drop_check_cmd = get_non_root_drop_rule_command(IPTableRules.CHECK_COMMAND) verify_rules_deleted_successfully([root_accept_check_cmd, drop_check_cmd]) log.info("** Current IP table rules\n") print_current_iptable_rules() # restart the agent to re-add the deleted rules log.info("Restart Guest Agent service to re-add the deleted rules") # agent-service is script name and start is argument start_agent = ["agent-service", "start"] shellutil.run_command(start_agent) verify_all_rules_exist() log.info("** Current IP table rules \n") print_current_iptable_rules() log.info("After appending the rule back, ensure a root user can do an http request but a non root user cannot") verify_dns_tcp_to_wireserver_is_allowed(NON_ROOT_USER) verify_http_to_wireserver_blocked(NON_ROOT_USER) verify_http_to_wireserver_allowed(ROOT_USER) log.info("Ensuring missing rules are re-added by the running agent") # deleting root accept rule root_accept_delete_cmd = get_root_accept_rule_command(IPTableRules.DELETE_COMMAND) delete_iptable_rules([root_accept_delete_cmd]) verify_all_rules_exist() log.info("** Current IP table rules \n") print_current_iptable_rules() log.info("root accept rule verified successfully\n") def verify_non_root_drop_rule(): """ This function verifies the non root drop rule and makes sure it is re-added by the agent after deletion """ log.info("-----Verifying non root drop rule behavior") # switch to root user required to stop the agent switch_user(ROOT_USER) # stop the agent, so that it won't re-add rules while checking log.info("Stop Guest Agent service") # agent-service is script name and stop is argument stop_agent = ["agent-service", "stop"] shellutil.run_command(stop_agent) # deleting non root drop rule non_root_drop_delete_cmd = get_non_root_drop_rule_command(IPTableRules.DELETE_COMMAND) delete_iptable_rules([non_root_drop_delete_cmd]) # verifying deletion successful non_root_drop_check_cmd = get_non_root_drop_rule_command(IPTableRules.CHECK_COMMAND) verify_rules_deleted_successfully([non_root_drop_check_cmd]) log.info("** Current IP table rules\n") print_current_iptable_rules() log.info("After deleting the non root drop rule, ensure a non root user can do an http request to wireserver") verify_http_to_wireserver_allowed(NON_ROOT_USER) # restart the agent to re-add the deleted rules log.info("Restart Guest Agent service to re-add the deleted rules") # agent-service is script name and start is argument start_agent = ["agent-service", "start"] shellutil.run_command(start_agent) verify_all_rules_exist() log.info("** Current IP table rules\n") print_current_iptable_rules() log.info("After appending the rule back, ensure a non root user can do a tcp request to wireserver but cannot do an http request") verify_dns_tcp_to_wireserver_is_allowed(NON_ROOT_USER) verify_http_to_wireserver_blocked(NON_ROOT_USER) verify_http_to_wireserver_allowed(ROOT_USER) log.info("Ensuring missing rules are re-added by the running agent") # deleting non root drop rule non_root_drop_delete_cmd = get_non_root_drop_rule_command(IPTableRules.DELETE_COMMAND) delete_iptable_rules([non_root_drop_delete_cmd]) verify_all_rules_exist() log.info("** Current IP table rules\n") print_current_iptable_rules() log.info("non root drop rule verified successfully\n") def 
prepare_agent(): log.info("Executing script update-waagent-conf to enable agent firewall config flag") # Changing the firewall period from default 5 mins to 1 min, so that test won't wait for that long to verify rules shellutil.run_command(["update-waagent-conf", "OS.EnableFirewall=y", f"OS.EnableFirewallPeriod={FIREWALL_PERIOD}"]) log.info("Successfully enabled agent firewall config flag") def main(): prepare_agent() log.info("** Current IP table rules\n") print_current_iptable_rules() verify_all_rules_exist() verify_non_root_accept_rule() verify_root_accept_rule() verify_non_root_drop_rule() parser = argparse.ArgumentParser() parser.add_argument('-u', '--user', required=True, help="Non root user") args = parser.parse_args() NON_ROOT_USER = args.user run_remote_test(main) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_persist_firewall-access_wireserver000077500000000000000000000050661462617747000324630ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Helper script which tries to access Wireserver on system reboot. Also prints out iptable rules if non-root and still # able to access Wireserver USER=$(whoami) echo "$(date --utc +%FT%T.%3NZ): Running as user: $USER" function check_online { ping 8.8.8.8 -c 1 -i .2 -t 30 > /dev/null 2>&1 && echo 0 || echo 1 } # Check more, sleep less MAX_CHECKS=10 # Initial starting value for checks CHECKS=0 IS_ONLINE=$(check_online) # Loop while we're not online. while [ "$IS_ONLINE" -eq 1 ]; do CHECKS=$((CHECKS + 1)) if [ $CHECKS -gt $MAX_CHECKS ]; then break fi echo "$(date --utc +%FT%T.%3NZ): Network still not accessible" # We're offline. Sleep for a bit, then check again sleep 1; IS_ONLINE=$(check_online) done if [ "$IS_ONLINE" -eq 1 ]; then # We will never be able to get online. Kill script. echo "Unable to connect to network, exiting now" echo "ExitCode: 1" exit 1 fi echo "Finally online, Time: $(date --utc +%FT%T.%3NZ)" echo "Trying to contact Wireserver as $USER to see if accessible" echo "" echo "IPTables before accessing Wireserver" sudo iptables -t security -L -nxv echo "" WIRE_IP=$(cat /var/lib/waagent/WireServerEndpoint 2>/dev/null || echo '168.63.129.16' | tr -d '[:space:]') if command -v wget >/dev/null 2>&1; then wget --tries=3 "http://$WIRE_IP/?comp=versions" --timeout=5 -O "/tmp/wire-versions-$USER.xml" else curl --retry 3 --retry-delay 5 --connect-timeout 5 "http://$WIRE_IP/?comp=versions" -o "/tmp/wire-versions-$USER.xml" fi WIRE_EC=$? echo "ExitCode: $WIRE_EC" if [[ "$USER" != "root" && "$WIRE_EC" == 0 ]]; then echo "Wireserver should not be accessible for non-root user ($USER)" fi if [[ "$USER" != "root" ]]; then echo "" echo "checking tcp traffic to wireserver port 53 for non-root user ($USER)" echo -n 2>/dev/null < /dev/tcp/$WIRE_IP/53 && echo 0 || echo 1 # Establish network connection for port 53 TCP_EC=$? 
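# Note: /dev/tcp/<host>/<port> is a bash pseudo-device; redirecting to it makes
# bash attempt a TCP connection to that host and port. A minimal standalone
# sketch of the same probe (illustrative only; 168.63.129.16 is the well-known
# wireserver address, used here just as an example):
#
#   if echo -n 2>/dev/null < /dev/tcp/168.63.129.16/53; then
#       echo "TCP connection to port 53 succeeded"
#   fi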
echo "TCP 53 Connection ExitCode: $TCP_EC" fiAzure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_persist_firewall-test_setup000077500000000000000000000022211462617747000311320ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Script adds cron job on reboot to make sure iptables rules are added to allow access to Wireserver and also, enable the firewall config flag # if [[ $# -ne 1 ]]; then echo "Usage: agent_persist_firewall-test_setup " exit 1 fi echo "@reboot /home/$1/bin/agent_persist_firewall-access_wireserver > /tmp/reboot-cron-root.log 2>&1" | crontab -u root - echo "@reboot /home/$1/bin/agent_persist_firewall-access_wireserver > /tmp/reboot-cron-$1.log 2>&1" | crontab -u $1 - update-waagent-conf OS.EnableFirewall=yagent_persist_firewall-verify_firewall_rules_on_boot.py000077500000000000000000000152551462617747000354400ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script checks firewall rules are set on boot through cron job logs.And also capture the logs for debugging purposes. 
# import argparse import os import re import shutil from assertpy import fail from azurelinuxagent.common.utils import shellutil from tests_e2e.tests.lib.firewall_helpers import verify_all_rules_exist from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry def move_cron_logs_to_var_log(): # Move the cron logs to /var/log log.info("Moving cron logs to /var/log for debugging purposes") for cron_log in [ROOT_CRON_LOG, NON_ROOT_CRON_LOG, NON_ROOT_WIRE_XML, ROOT_WIRE_XML]: try: shutil.move(src=cron_log, dst=os.path.join("/var", "log", "{0}.{1}".format(os.path.basename(cron_log), BOOT_NAME))) except Exception as e: log.info("Unable to move cron log to /var/log; {0}".format(e)) def check_wireserver_versions_file_exist(wire_version_file): log.info("Checking wire-versions file exist: {0}".format(wire_version_file)) if not os.path.exists(wire_version_file): log.info("File: {0} not found".format(wire_version_file)) return False if os.stat(wire_version_file).st_size > 0: return True return False def verify_data_in_cron_logs(cron_log, verify, err_msg): log.info("Verifying Cron logs") def cron_log_checks(): if not os.path.exists(cron_log): raise Exception("Cron log file not found: {0}".format(cron_log)) with open(cron_log) as f: cron_logs_lines = list(map(lambda _: _.strip(), f.readlines())) if not cron_logs_lines: raise Exception("Empty cron file, looks like cronjob didnt run") if any("Unable to connect to network, exiting now" in line for line in cron_logs_lines): raise Exception("VM was unable to connect to network on startup. Skipping test validation") if not any("ExitCode" in line for line in cron_logs_lines): raise Exception("Cron logs still incomplete, will try again in a minute") if not any(verify(line) for line in cron_logs_lines): fail("Verification failed! (UNEXPECTED): {0}".format(err_msg)) log.info("Verification succeeded. 
Cron logs as expected") retry(cron_log_checks) def verify_wireserver_ip_reachable_for_root(): """ For root logs - Ensure the /var/log/wire-versions-root.xml is not-empty (generated by the cron job) Ensure the exit code in the /var/log/reboot-cron-root.log file is 0 """ log.info("Verifying Wireserver IP is reachable from root user") def check_exit_code(line): match = re.match("ExitCode:\\s(\\d+)", line) return match is not None and int(match.groups()[0]) == 0 verify_data_in_cron_logs(cron_log=ROOT_CRON_LOG, verify=check_exit_code, err_msg="Exit Code should be 0 for root based cron job!") if not check_wireserver_versions_file_exist(ROOT_WIRE_XML): fail("Wire version file should not be empty for root user!") def verify_wireserver_ip_unreachable_for_non_root(): """ For non-root - Ensure the /tmp/wire-versions-non-root.xml is empty (generated by the cron job) Ensure the exit code in the /tmp/reboot-cron-non-root.log file is non-0 """ log.info("Verifying WireServer IP is unreachable from non-root user") def check_exit_code(line): match = re.match("ExitCode:\\s(\\d+)", line) return match is not None and int(match.groups()[0]) != 0 verify_data_in_cron_logs(cron_log=NON_ROOT_CRON_LOG, verify=check_exit_code, err_msg="Exit Code should be non-0 for non-root cron job!") if check_wireserver_versions_file_exist(NON_ROOT_WIRE_XML): fail("Wire version file should be empty for non-root user!") def verify_tcp_connection_to_wireserver_for_non_root(): """ For non-root - Ensure the TCP 53 Connection exit code in the /tmp/reboot-cron-non-root.log file is 0 """ log.info("Verifying TCP connection to Wireserver port for non-root user") def check_exit_code(line): match = re.match("TCP 53 Connection ExitCode:\\s(\\d+)", line) return match is not None and int(match.groups()[0]) == 0 verify_data_in_cron_logs(cron_log=NON_ROOT_CRON_LOG, verify=check_exit_code, err_msg="TCP 53 Connection Exit Code should be 0 for non-root cron job!") def generate_svg(): """ This is a good to have, but not must have. 
The test is not failed if we're unable to generate an SVG """ log.info("Running systemd-analyze plot command to get the svg for boot execution order") dest_dir = os.path.join("/var", "log", "svgs") if not os.path.exists(dest_dir): os.makedirs(dest_dir) svg_name = os.path.join(dest_dir, "{0}.svg".format(BOOT_NAME)) cmd = ["systemd-analyze plot > {0}".format(svg_name)] err_code, stdout = shellutil.run_get_output(cmd) if err_code != 0: log.info("Unable to generate svg: {0}".format(stdout)) else: log.info("SVG generated successfully") def main(): try: # Verify firewall rules are set on boot through cron job logs verify_wireserver_ip_unreachable_for_non_root() verify_wireserver_ip_reachable_for_root() verify_tcp_connection_to_wireserver_for_non_root() verify_all_rules_exist() finally: # save the logs to /var/log so they are captured by collect-logs; this might be useful for debugging move_cron_logs_to_var_log() generate_svg() parser = argparse.ArgumentParser() parser.add_argument('-u', '--user', required=True, help="Non root user") parser.add_argument('-bn', '--boot_name', required=True, help="Boot Name") args = parser.parse_args() NON_ROOT_USER = args.user BOOT_NAME = args.boot_name ROOT_CRON_LOG = "/tmp/reboot-cron-root.log" NON_ROOT_CRON_LOG = f"/tmp/reboot-cron-{NON_ROOT_USER}.log" NON_ROOT_WIRE_XML = f"/tmp/wire-versions-{NON_ROOT_USER}.xml" ROOT_WIRE_XML = "/tmp/wire-versions-root.xml" main() agent_persist_firewall-verify_firewalld_rules_readded.py000077500000000000000000000147271462617747000355330ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
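# The firewalld rules exercised below are "direct passthrough" rules managed via
# firewall-cmd. The command shape looks roughly like this (illustrative only;
# the exact rule arguments come from tests_e2e.tests.lib.firewall_helpers, and
# <wireserver-ip> is a placeholder):
#
#   firewall-cmd --permanent --direct --query-passthrough ipv4 -t security -A OUTPUT -d <wireserver-ip> -p tcp -m owner --uid-owner 0 -j ACCEPT
#   firewall-cmd --permanent --direct --remove-passthrough ipv4 -t security -A OUTPUT -d <wireserver-ip> -p tcp -m owner --uid-owner 0 -j ACCEPT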
# # This script deletes the firewalld rules and ensures the deleted rules are added back to the firewalld rule set after an agent start # from azurelinuxagent.common.osutil import get_osutil from tests_e2e.tests.lib.firewall_helpers import firewalld_service_running, print_current_firewalld_rules, \ get_non_root_accept_tcp_firewalld_rule, get_all_firewalld_rule_commands, FirewalldRules, execute_cmd, \ check_if_firewalld_rule_is_available, verify_all_firewalld_rules_exist, get_root_accept_firewalld_rule, \ get_non_root_drop_firewalld_rule from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry def delete_firewalld_rules(commands=None): """ This function deletes the provided rules, or all of them if none are specified, from the firewalld rule set """ if commands is None: commands = [] if not commands: root_accept, non_root_accept, non_root_drop = get_all_firewalld_rule_commands(FirewalldRules.REMOVE_PASSTHROUGH) commands.extend([root_accept, non_root_accept, non_root_drop]) log.info("Deleting firewalld rules \n %s", commands) try: cmd = None for command in commands: cmd = command retry(lambda: execute_cmd(cmd=cmd), attempts=3) except Exception as e: raise Exception("Error -- Failed to delete the firewalld rule set {0}".format(e)) log.info("Success -- Deleted firewalld rules") def verify_rules_deleted_successfully(commands=None): """ This function verifies that the provided rules, or all of them if none are specified, were deleted successfully. """ log.info("Verifying requested rules deleted successfully") if commands is None: commands = [] if not commands: root_accept, non_root_accept, non_root_drop = get_all_firewalld_rule_commands(FirewalldRules.QUERY_PASSTHROUGH) commands.extend([root_accept, non_root_accept, non_root_drop]) # "--query-passthrough" returns error code 1 when the rule is not present, which is expected after deletion for command in commands: if check_if_firewalld_rule_is_available(command): raise Exception("Deletion of firewalld rules not successful.\nCurrent firewalld rules:\n" + print_current_firewalld_rules()) log.info("firewalld rules deleted successfully \n %s", commands) def verify_non_root_accept_rule(): """ This function verifies the non-root accept rule and makes sure it is re-added by the agent after deletion """ log.info("verifying non root accept rule") agent_name = get_osutil().get_service_name() # stop the agent, so that it won't re-add rules while checking log.info("stop the agent, so that it won't re-add rules while checking") cmd = ["systemctl", "stop", agent_name] execute_cmd(cmd) # deleting tcp rule accept_tcp_rule_with_delete = get_non_root_accept_tcp_firewalld_rule(FirewalldRules.REMOVE_PASSTHROUGH) delete_firewalld_rules([accept_tcp_rule_with_delete]) # verifying deletion successful accept_tcp_rule_with_check = get_non_root_accept_tcp_firewalld_rule(FirewalldRules.QUERY_PASSTHROUGH) verify_rules_deleted_successfully([accept_tcp_rule_with_check]) # restart the agent to re-add the deleted rules log.info("restart the agent to re-add the deleted rules") cmd = ["systemctl", "restart", agent_name] execute_cmd(cmd=cmd) verify_all_firewalld_rules_exist() def verify_root_accept_rule(): """ This function verifies the root accept rule and makes sure it is re-added by the agent after deletion """ log.info("Verifying root accept rule") agent_name = get_osutil().get_service_name() # stop the agent, so that it won't re-add rules while checking log.info("stop the agent, so that it won't re-add rules while checking") cmd = ["systemctl", "stop", agent_name]
execute_cmd(cmd) # deleting root accept rule root_accept_rule_with_delete = get_root_accept_firewalld_rule(FirewalldRules.REMOVE_PASSTHROUGH) delete_firewalld_rules([root_accept_rule_with_delete]) # verifying deletion successful root_accept_rule_with_check = get_root_accept_firewalld_rule(FirewalldRules.QUERY_PASSTHROUGH) verify_rules_deleted_successfully([root_accept_rule_with_check]) # restart the agent to re-add the deleted rules log.info("restart the agent to re-add the deleted rules") cmd = ["systemctl", "restart", agent_name] execute_cmd(cmd=cmd) verify_all_firewalld_rules_exist() def verify_non_root_drop_rule(): """ This function verifies drop rule and make sure it is re added by agent after deletion """ log.info("Verifying non root drop rule") agent_name = get_osutil().get_service_name() # stop the agent, so that it won't re-add rules while checking log.info("stop the agent, so that it won't re-add rules while checking") cmd = ["systemctl", "stop", agent_name] execute_cmd(cmd) # deleting non-root drop rule non_root_drop_with_delete = get_non_root_drop_firewalld_rule(FirewalldRules.REMOVE_PASSTHROUGH) delete_firewalld_rules([non_root_drop_with_delete]) # verifying deletion successful non_root_drop_with_check = get_non_root_drop_firewalld_rule(FirewalldRules.QUERY_PASSTHROUGH) verify_rules_deleted_successfully([non_root_drop_with_check]) # restart the agent to re-add the deleted rules log.info("restart the agent to re-add the deleted rules") cmd = ["systemctl", "restart", agent_name] execute_cmd(cmd=cmd) verify_all_firewalld_rules_exist() def main(): if firewalld_service_running(): log.info("Displaying current firewalld rules") print_current_firewalld_rules() verify_non_root_accept_rule() verify_root_accept_rule() verify_non_root_drop_rule() else: log.info("firewalld.service is not running and skipping test") if __name__ == "__main__": main() agent_persist_firewall-verify_persist_firewall_service_running.py000077500000000000000000000053541462617747000375370ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
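# The enablement check below can be reproduced by hand (illustrative; the unit
# name depends on the distro's agent service name, e.g. walinuxagent or waagent):
#
#   $ systemctl is-enabled walinuxagent-network-setup.service
#   enabled
#
# systemctl prints the enablement state on stdout and exits with 0 when the unit
# is enabled, which is exactly what verify_network_setup_service_enabled() asserts.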
# # This script verifies the firewalld rules set on the VM if the firewalld service is running; if it is not running, it verifies that the network-setup service is enabled by the agent # from assertpy import fail from azurelinuxagent.common.osutil import get_osutil from azurelinuxagent.common.utils import shellutil from tests_e2e.tests.lib.firewall_helpers import execute_cmd_return_err_code, \ firewalld_service_running, verify_all_firewalld_rules_exist from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry_if_false def verify_network_setup_service_enabled(): """ Checks if the network-setup service is enabled on the VM """ agent_name = get_osutil().get_service_name() service_name = "{0}-network-setup.service".format(agent_name) cmd = ["systemctl", "is-enabled", service_name] def op(cmd): exit_code, output = execute_cmd_return_err_code(cmd) return exit_code == 0 and output.rstrip() == "enabled" try: status = retry_if_false(lambda: op(cmd), attempts=5, delay=30) except Exception as e: log.warning("Error -- while checking network-setup.service is-enabled status {0}".format(e)) status = False if not status: cmd = ["systemctl", "status", service_name] fail("network-setup.service is not enabled! Current status: {0}".format(shellutil.run_command(cmd))) log.info("network-setup.service is enabled") def verify_firewall_service_running(): log.info("Ensure the test agent initialized the firewalld/network service setup") # Check if the firewall service is active on the VM log.info("Checking if firewall service is active on the VM") if firewalld_service_running(): # Checking if firewalld rules are present in the rule set if the firewall service is active verify_all_firewalld_rules_exist() else: # Checking if the network-setup service is enabled if the firewall service is not active log.info("Checking if network-setup service is enabled by the agent since firewall service is not active") verify_network_setup_service_enabled() if __name__ == "__main__": verify_firewall_service_running() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_publish-check_update.py000077500000000000000000000100521462617747000300720ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import re from assertpy import fail from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false # pylint: disable=W0105 """ Post the _LOG_PATTERN_00 changes, the last group sometimes might not have the 'Agent' part at the start of the sentence; thus making it optional.
> WALinuxAgent-2.2.18 discovered WALinuxAgent-2.2.47 as an update and will exit (None, 'WALinuxAgent-2.2.18', '2.2.47') """ _UPDATE_PATTERN_00 = re.compile(r'(.*Agent\s)?(\S*)\sdiscovered\sWALinuxAgent-(\S*)\sas an update and will exit') """ > Agent WALinuxAgent-2.2.45 discovered update WALinuxAgent-2.2.47 -- exiting ('Agent', 'WALinuxAgent-2.2.45', '2.2.47') """ _UPDATE_PATTERN_01 = re.compile(r'(.*Agent)?\s(\S*) discovered update WALinuxAgent-(\S*) -- exiting') """ > Normal Agent upgrade discovered, updating to WALinuxAgent-2.9.1.0 -- exiting ('Normal Agent', WALinuxAgent, '2.9.1.0 ') """ _UPDATE_PATTERN_02 = re.compile(r'(.*Agent) upgrade discovered, updating to (WALinuxAgent)-(\S*) -- exiting') """ > Agent update found, exiting current process to downgrade to the new Agent version 1.3.0.0 (Agent, 'downgrade', '1.3.0.0') """ _UPDATE_PATTERN_03 = re.compile(r'(.*Agent) update found, exiting current process to (\S*) to the new Agent version (\S*)') """ Current Agent 2.8.9.9 completed all update checks, exiting current process to upgrade to the new Agent version 2.10.0.7 ('2.8.9.9', 'upgrade', '2.10.0.7') """ _UPDATE_PATTERN_04 = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to (\S*) to the new Agent version (\S*)') """ > Agent WALinuxAgent-2.2.47 is running as the goal state agent ('2.2.47',) """ _RUNNING_PATTERN_00 = re.compile(r'.*Agent\sWALinuxAgent-(\S*)\sis running as the goal state agent') def verify_agent_update_from_log(): exit_code = 0 detected_update = False update_successful = False update_version = '' agentlog = AgentLog() for record in agentlog.read(): if 'TelemetryData' in record.text: continue for p in [_UPDATE_PATTERN_00, _UPDATE_PATTERN_01, _UPDATE_PATTERN_02, _UPDATE_PATTERN_03, _UPDATE_PATTERN_04]: update_match = re.match(p, record.message) if update_match: detected_update = True update_version = update_match.groups()[2] log.info('found the agent update log: %s', record.text) break if detected_update: running_match = re.match(_RUNNING_PATTERN_00, record.text) if running_match and update_version == running_match.groups()[0]: update_successful = True log.info('found the agent started new version log: %s', record.text) if detected_update: log.info('update was detected: %s', update_version) if update_successful: log.info('update was successful') else: log.warning('update was not successful') exit_code = 1 else: log.warning('update was not detected') exit_code = 1 return exit_code == 0 # This method will trace agent update messages in the agent log and determine if the update was successful or not. def main(): found: bool = retry_if_false(verify_agent_update_from_log) if not found: fail('update was not found in the logs') run_remote_test(main) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_publish-get_agent_log_record_timestamp.py000077500000000000000000000052561462617747000337040ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # import re from datetime import datetime from tests_e2e.tests.lib.agent_log import AgentLog # pylint: disable=W0105 """ > WALinuxAgent-2.2.18 discovered WALinuxAgent-2.2.47 as an update and will exit (None, 'WALinuxAgent-2.2.18', '2.2.47') """ _UPDATE_PATTERN_00 = re.compile(r'(.*Agent\s)?(\S*)\sdiscovered\sWALinuxAgent-(\S*)\sas an update and will exit') """ > Agent WALinuxAgent-2.2.45 discovered update WALinuxAgent-2.2.47 -- exiting ('Agent', 'WALinuxAgent-2.2.45', '2.2.47') """ _UPDATE_PATTERN_01 = re.compile(r'(.*Agent)?\s(\S*) discovered update WALinuxAgent-(\S*) -- exiting') """ > Normal Agent upgrade discovered, updating to WALinuxAgent-2.9.1.0 -- exiting ('Normal Agent', WALinuxAgent, '2.9.1.0 ') """ _UPDATE_PATTERN_02 = re.compile(r'(.*Agent) upgrade discovered, updating to (WALinuxAgent)-(\S*) -- exiting') """ > Agent update found, exiting current process to downgrade to the new Agent version 1.3.0.0 (Agent, 'downgrade', '1.3.0.0') """ _UPDATE_PATTERN_03 = re.compile( r'(.*Agent) update found, exiting current process to (\S*) to the new Agent version (\S*)') """ Current Agent 2.8.9.9 completed all update checks, exiting current process to upgrade to the new Agent version 2.10.0.7 ('2.8.9.9', 'upgrade', '2.10.0.7') """ _UPDATE_PATTERN_04 = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to (\S*) to the new Agent version (\S*)') """ This script return timestamp of update message in the agent log """ def main(): try: agentlog = AgentLog() for record in agentlog.read(): for p in [_UPDATE_PATTERN_00, _UPDATE_PATTERN_01, _UPDATE_PATTERN_02, _UPDATE_PATTERN_03, _UPDATE_PATTERN_04]: update_match = re.match(p, record.message) if update_match: return record.timestamp return datetime.min except Exception as e: raise Exception("Error thrown when searching for update pattern in agent log to get record timestamp: {0}".format(str(e))) if __name__ == "__main__": timestamp = main() print(timestamp) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_status-get_last_gs_processed.py000077500000000000000000000025231462617747000316760ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # Writes the last goal state processed line in the log to stdout # import re import sys from tests_e2e.tests.lib.agent_log import AgentLog def main(): gs_completed_regex = r"ProcessExtensionsGoalState completed\s\[[a-z_\d]{13,14}\s\d+\sms\]" last_gs_processed = None agent_log = AgentLog() try: for agent_record in agent_log.read(): gs_complete = re.match(gs_completed_regex, agent_record.message) if gs_complete is not None: last_gs_processed = agent_record.text except IOError as e: print("Unable to get last goal state processed: {0}".format(str(e))) print(last_gs_processed) sys.exit(0) if __name__ == "__main__": main() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_update-modify_agent_version000077500000000000000000000024021462617747000310520ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script to update necessary flags to make agent ready for rsm updates # set -euo pipefail if [[ $# -ne 1 ]]; then echo "Usage: agent_update-modify_agent_version " exit 1 fi version=$1 PYTHON=$(get-agent-python) echo "Agent's Python: $PYTHON" # some distros return .pyc byte file instead source file .py. So, I retrieve parent directory first. version_file_dir=$($PYTHON -c 'import azurelinuxagent.common.version as v; import os; print(os.path.dirname(v.__file__))') version_file_full_path="$version_file_dir/version.py" sed -E -i "s/^AGENT_VERSION\s+=\s+'[0-9.]+'/AGENT_VERSION = '$version'/" $version_file_full_pathAzure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_update-self_update_check.py000077500000000000000000000044121462617747000307220ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
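# Implementation note: the update pattern used below can be sanity-checked in a
# REPL (illustrative session):
#
#   >>> import re
#   >>> p = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to upgrade to the new Agent version (\S*)')
#   >>> m = p.match('Current Agent 2.8.9.9 completed all update checks, exiting current process to upgrade to the new Agent version 2.10.0.7')
#   >>> m.groups()
#   ('2.8.9.9', '2.10.0.7')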
# # Script verifies agent update was done by test agent # import argparse import re from assertpy import fail from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.retry import retry_if_false #2023-12-28T04:34:23.535652Z INFO ExtHandler ExtHandler Current Agent 2.8.9.9 completed all update checks, exiting current process to upgrade to the new Agent version 2.10.0.7 _UPDATE_PATTERN = re.compile(r'Current Agent (\S*) completed all update checks, exiting current process to upgrade to the new Agent version (\S*)') def verify_agent_update_from_log(latest_version, current_version) -> bool: """ Checks if the agent updated to the latest version from current version """ agentlog = AgentLog() for record in agentlog.read(): update_match = re.match(_UPDATE_PATTERN, record.message) if update_match: log.info('found the agent update log: %s', record.text) if update_match.groups()[0] == current_version and update_match.groups()[1] == latest_version: return True return False def main() -> None: parser = argparse.ArgumentParser() parser.add_argument('-l', '--latest-version', required=True) parser.add_argument('-c', '--current-version', required=True) args = parser.parse_args() found: bool = retry_if_false(lambda: verify_agent_update_from_log(args.latest_version, args.current_version)) if not found: fail('agent update was not found in the logs for latest version {0} from current version {1}'.format(args.latest_version, args.current_version)) if __name__ == "__main__": main() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_update-self_update_latest_version.py000077500000000000000000000046721462617747000327160ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
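# Implementation note: get_largest_version() below relies on FlexibleVersion
# ordering, which compares version fields numerically rather than as strings
# (illustrative REPL session, assuming the standard flexible_version semantics):
#
#   >>> from azurelinuxagent.common.utils.flexible_version import FlexibleVersion
#   >>> FlexibleVersion("2.10.0.7") > FlexibleVersion("2.9.1.1")
#   True
#   >>> str(max(FlexibleVersion(v) for v in ("2.9.1.1", "2.10.0.7")))
#   '2.10.0.7'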
# # Returns the latest published agent version # from azurelinuxagent.common.protocol.goal_state import GoalStateProperties from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.utils.flexible_version import FlexibleVersion from tests_e2e.tests.lib.retry import retry def get_agent_family_manifest(goal_state): """ Get the agent family manifest from the last goal state for the 'Test' family """ agent_families = goal_state.extensions_goal_state.agent_families agent_family_manifests = [] for m in agent_families: if m.name == 'Test': if len(m.uris) > 0: agent_family_manifests.append(m) return agent_family_manifests[0] def get_largest_version(agent_manifest): """ Get the largest version from the agent manifest """ largest_version = FlexibleVersion("0.0.0.0") for pkg in agent_manifest.pkg_list.versions: pkg_version = FlexibleVersion(pkg.version) if pkg_version > largest_version: largest_version = pkg_version return largest_version def main(): try: protocol = get_protocol_util().get_protocol(init_goal_state=False) retry(lambda: protocol.client.reset_goal_state( goal_state_properties=GoalStateProperties.ExtensionsGoalState)) goal_state = protocol.client.get_goal_state() agent_family = get_agent_family_manifest(goal_state) agent_manifest = goal_state.fetch_agent_manifest(agent_family.name, agent_family.uris) largest_version = get_largest_version(agent_manifest) print(str(largest_version)) except Exception as e: raise Exception("Unable to verify the agent updated to the latest version since the test failed to get the latest version from the agent manifest: {0}".format(e)) if __name__ == "__main__": main() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_update-self_update_test_setup000077500000000000000000000041401462617747000314130ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script prepares the new agent and installs it on the VM # set -euo pipefail usage() ( echo "Usage: agent_update-self_update_test_setup -p|--package <package> -v|--version <version> -u|--update_to_latest_version <y|n>" exit 1 ) while [[ $# -gt 0 ]]; do case $1 in -p|--package) shift if [ "$#" -lt 1 ]; then usage fi package=$1 shift ;; -v|--version) shift if [ "$#" -lt 1 ]; then usage fi version=$1 shift ;; -u|--update_to_latest_version) shift if [ "$#" -lt 1 ]; then usage fi update_to_latest_version=$1 shift ;; *) usage esac done if [ "$#" -ne 0 ] || [ -z ${package+x} ] || [ -z ${version+x} ]; then usage fi echo "updating the flags related to self-update" update-waagent-conf AutoUpdate.UpdateToLatestVersion=$update_to_latest_version Debug.EnableGAVersioning=n Debug.SelfUpdateHotfixFrequency=120 Debug.SelfUpdateRegularFrequency=120 Autoupdate.Frequency=120 agent-service stop mv /var/log/waagent.log /var/log/waagent.$(date --iso-8601=seconds).log echo "Cleaning up the existing agents" rm -rf /var/lib/waagent/WALinuxAgent-* echo "Installing $package as version $version..."
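# unzip.py is a helper expected to be on the test VM's PATH (put there by the
# test framework); the call below assumes it extracts the agent zip package into
# the given target directory, e.g. (illustrative paths):
#   unzip.py /tmp/WALinuxAgent-9.9.9.9.zip /var/lib/waagent/WALinuxAgent-9.9.9.9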
unzip.py $package /var/lib/waagent/WALinuxAgent-$version agent-service restart agent_update-verify_agent_reported_update_status.py000077500000000000000000000043301462617747000345450ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Verify if the agent reported update status to CRP via status file # import argparse import glob import json from assertpy import fail from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false def check_agent_reported_update_status(expected_version: str) -> bool: agent_status_file = "/var/lib/waagent/history/*/waagent_status.json" file_paths = glob.glob(agent_status_file, recursive=True) for file in file_paths: with open(file, 'r') as f: data = json.load(f) log.info("Agent status file is %s and it's content %s", file, data) guest_agent_status = data["aggregateStatus"]["guestAgentStatus"] if "updateStatus" in guest_agent_status.keys(): if guest_agent_status["updateStatus"]["expectedVersion"] == expected_version: log.info("we found the expected version %s in agent status file", expected_version) return True log.info("we did not find the expected version %s in agent status file", expected_version) return False def main(): parser = argparse.ArgumentParser() parser.add_argument('-v', '--version', required=True) args = parser.parse_args() log.info("checking agent status file to verify if agent reported update status") found: bool = retry_if_false(lambda: check_agent_reported_update_status(args.version)) if not found: fail("Agent failed to report update status, so skipping rest of the agent update validations") run_remote_test(main) agent_update-verify_versioning_supported_feature.py000077500000000000000000000036501462617747000346050ustar00rootroot00000000000000Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
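# The status-file fragment this script looks for has roughly the following shape
# (illustrative and trimmed; the Value shown is a placeholder and the real file
# carries many more fields):
#
#   {
#       "supportedFeatures": [
#           {"Key": "VersioningGovernance", "Value": "1.0"}
#       ]
#   }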
# # Verify if the agent reported supportedfeature VersioningGovernance flag to CRP via status file # import glob import json from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false def check_agent_supports_versioning() -> bool: agent_status_file = "/var/lib/waagent/history/*/waagent_status.json" file_paths = glob.glob(agent_status_file, recursive=True) for file in file_paths: with open(file, 'r') as f: data = json.load(f) log.info("Agent status file is %s and it's content %s", file, data) supported_features = data["supportedFeatures"] for supported_feature in supported_features: if supported_feature["Key"] == "VersioningGovernance": return True return False def main(): log.info("checking agent status file for VersioningGovernance supported feature flag") found: bool = retry_if_false(check_agent_supports_versioning) if not found: raise Exception("Agent failed to report supported feature flag. So, skipping agent update validations " "since CRP will not send RSM requested version in GS if feature flag not found in status") run_remote_test(main) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/agent_update-wait_for_rsm_gs.py000077500000000000000000000053551462617747000304650ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
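# Typical invocation from the test driver (the version number is illustrative):
#
#   agent_update-wait_for_rsm_gs.py --version 2.10.0.7
#
# The script polls the goal state until the 'Test' agent family manifest reports
# the requested version, retrying via retry_if_false.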
# # Verify the latest goal state included rsm requested version and if not, retry # import argparse from azurelinuxagent.common.protocol.util import get_protocol_util from azurelinuxagent.common.protocol.goal_state import GoalState, GoalStateProperties from azurelinuxagent.common.protocol.wire import WireProtocol from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test from tests_e2e.tests.lib.retry import retry_if_false, retry def get_requested_version(gs: GoalState) -> str: agent_families = gs.extensions_goal_state.agent_families agent_family_manifests = [m for m in agent_families if m.name == "Test" and len(m.uris) > 0] if len(agent_family_manifests) == 0: raise Exception( u"No manifest links found for agent family Test, skipping agent update verification") manifest = agent_family_manifests[0] if manifest.version is not None: return str(manifest.version) return "" def verify_rsm_requested_version(wire_protocol: WireProtocol, expected_version: str) -> bool: log.info("fetching the goal state to check if it includes rsm requested version") wire_protocol.client.update_goal_state() goal_state = wire_protocol.client.get_goal_state() requested_version = get_requested_version(goal_state) if requested_version == expected_version: return True else: return False def main(): parser = argparse.ArgumentParser() parser.add_argument('-v', '--version', required=True) args = parser.parse_args() protocol = get_protocol_util().get_protocol(init_goal_state=False) retry(lambda: protocol.client.reset_goal_state( goal_state_properties=GoalStateProperties.ExtensionsGoalState)) found: bool = retry_if_false(lambda: verify_rsm_requested_version(protocol, args.version)) if not found: raise Exception("The latest goal state didn't contain requested version after we submit the rsm request for: {0}.".format(args.version)) else: log.info("Successfully verified that latest GS contains rsm requested version : %s", args.version) run_remote_test(main) Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/ext_cgroups-check_cgroups_extensions.py000077500000000000000000000254541462617747000323030ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
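# The custom-script check below parses controller mount lines in the format
# written by /proc/<pid>/cgroup, e.g. (illustrative, cgroup v1):
#
#   4:cpu,cpuacct:/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.Azure.Extensions.CustomScript
#   2:memory:/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.Azure.Extensions.CustomScript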
# import os from assertpy import fail from tests_e2e.tests.lib.agent_log import AgentLog from tests_e2e.tests.lib.cgroup_helpers import verify_if_distro_supports_cgroup, \ verify_agent_cgroup_assigned_correctly, BASE_CGROUP, EXT_CONTROLLERS, get_unit_cgroup_mount_path, \ GATESTEXT_SERVICE, AZUREMONITORAGENT_SERVICE, MDSD_SERVICE, check_agent_quota_disabled, \ check_cgroup_disabled_with_unknown_process, CGROUP_TRACKED_PATTERN, AZUREMONITOREXT_FULL_NAME, GATESTEXT_FULL_NAME, \ print_cgroups from tests_e2e.tests.lib.logging import log from tests_e2e.tests.lib.remote_test import run_remote_test def verify_custom_script_cgroup_assigned_correctly(): """ This method verifies that the CSE script created the expected folder after install, and also checks that CSE ran under the expected cgroups """ log.info("===== Verifying custom script was assigned to the correct cgroups") # CSE creates this folder to save the cgroup information of the process that executed the CSE script. Since the CSE process exits after execution, # and cgroup paths get cleaned up by the system, this information is saved at run time, when the extension executes. check_temporary_folder_exists() cpu_mounted = False memory_mounted = False log.info("custom script cgroup mounts:") with open('/var/lib/waagent/tmp/custom_script_check') as fh: controllers = fh.read() log.info("%s", controllers) extension_path = "/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.Azure.Extensions.CustomScript" correct_cpu_mount_v1 = "cpu,cpuacct:{0}".format(extension_path) correct_cpu_mount_v2 = "cpuacct,cpu:{0}".format(extension_path) correct_memory_mount = "memory:{0}".format(extension_path) for mounted_controller in controllers.split("\n"): if correct_cpu_mount_v1 in mounted_controller or correct_cpu_mount_v2 in mounted_controller: log.info('Custom script extension mounted under correct cgroup ' 'for CPU: %s', mounted_controller) cpu_mounted = True elif correct_memory_mount in mounted_controller: log.info('Custom script extension mounted under correct cgroup ' 'for Memory: %s', mounted_controller) memory_mounted = True if not cpu_mounted: fail('Custom script not mounted correctly for CPU! Expected {0} or {1}'.format(correct_cpu_mount_v1, correct_cpu_mount_v2)) if not memory_mounted: fail('Custom script not mounted correctly for Memory! Expected {0}'.format(correct_memory_mount)) def check_temporary_folder_exists(): tmp_folder = "/var/lib/waagent/tmp" if not os.path.exists(tmp_folder): fail("Temporary folder {0} was not created, which means the CSE script did not run!".format(tmp_folder)) def verify_ext_cgroup_controllers_created_on_file_system(): """ This method ensures that the extension cgroup controllers are created on the file system after extension install """ log.info("===== Verifying ext cgroup controllers exist on file system") all_controllers_present = os.path.exists(BASE_CGROUP) missing_controllers_path = [] verified_controllers_path = [] for controller in EXT_CONTROLLERS: controller_path = os.path.join(BASE_CGROUP, controller) if not os.path.exists(controller_path): all_controllers_present = False missing_controllers_path.append(controller_path) else: verified_controllers_path.append(controller_path) if not all_controllers_present: fail('Expected all of the extension controller: {0} paths present in the file system after extension install.
But missing cgroups paths are :{1}\n' 'and verified cgroup paths are: {2} \nSystem mounted cgroups are \n{3}'.format(EXT_CONTROLLERS, missing_controllers_path, verified_controllers_path, print_cgroups())) log.info('Verified all extension cgroup controller paths are present and they are: \n {0}'.format(verified_controllers_path)) def verify_extension_service_cgroup_created_on_file_system(): """ This method ensure that extension service cgroup paths are created on file system after running extension """ log.info("===== Verifying the extension service cgroup paths exist on file system") # GA Test Extension Service gatestext_cgroup_mount_path = get_unit_cgroup_mount_path(GATESTEXT_SERVICE) verify_extension_service_cgroup_created(GATESTEXT_SERVICE, gatestext_cgroup_mount_path) # Azure Monitor Extension Service azuremonitoragent_cgroup_mount_path = get_unit_cgroup_mount_path(AZUREMONITORAGENT_SERVICE) azuremonitoragent_service_name = AZUREMONITORAGENT_SERVICE # Old versions of AMA extension has different service name if azuremonitoragent_cgroup_mount_path is None: azuremonitoragent_cgroup_mount_path = get_unit_cgroup_mount_path(MDSD_SERVICE) azuremonitoragent_service_name = MDSD_SERVICE verify_extension_service_cgroup_created(azuremonitoragent_service_name, azuremonitoragent_cgroup_mount_path) log.info('Verified all extension service cgroup paths created in file system .\n') def verify_extension_service_cgroup_created(service_name, cgroup_mount_path): log.info("expected extension service cgroup mount path: %s", cgroup_mount_path) all_controllers_present = True missing_cgroups_path = [] verified_cgroups_path = [] for controller in EXT_CONTROLLERS: # cgroup_mount_path is similar to /azure.slice/walinuxagent.service # cgroup_mount_path[1:] = azure.slice/walinuxagent.service # expected extension_service_controller_path similar to /sys/fs/cgroup/cpu/azure.slice/walinuxagent.service extension_service_controller_path = os.path.join(BASE_CGROUP, controller, cgroup_mount_path[1:]) if not os.path.exists(extension_service_controller_path): all_controllers_present = False missing_cgroups_path.append(extension_service_controller_path) else: verified_cgroups_path.append(extension_service_controller_path) if not all_controllers_present: fail("Extension service: [{0}] cgroup paths couldn't be found on file system. Missing cgroup paths are: {1} \n Verified cgroup paths are: {2} \n " "System mounted cgroups are \n{3}".format(service_name, missing_cgroups_path, verified_cgroups_path, print_cgroups())) def verify_ext_cgroups_tracked(): """ Checks if ext cgroups are tracked by the agent. 
This is verified by checking the agent log for the message "Started tracking cgroup {extension_name}" """ log.info("===== Verifying ext cgroups tracked") cgroups_added_for_telemetry = [] gatestext_cgroups_tracked = False azuremonitoragent_cgroups_tracked = False gatestext_service_cgroups_tracked = False azuremonitoragent_service_cgroups_tracked = False for record in AgentLog().read(): # Cgroup tracking logged as # 2021-11-14T13:09:59.351961Z INFO ExtHandler ExtHandler Started tracking cgroup Microsoft.Azure.Extensions.Edp.GATestExtGo-1.0.0.2 # [/sys/fs/cgroup/cpu,cpuacct/azure.slice/azure-vmextensions.slice/azure-vmextensions-Microsoft.Azure.Extensions.Edp.GATestExtGo_1.0.0.2.slice] cgroup_tracked_match = CGROUP_TRACKED_PATTERN.findall(record.message) if len(cgroup_tracked_match) != 0: name, path = cgroup_tracked_match[0][0], cgroup_tracked_match[0][1] if name.startswith(GATESTEXT_FULL_NAME): gatestext_cgroups_tracked = True elif name.startswith(AZUREMONITOREXT_FULL_NAME): azuremonitoragent_cgroups_tracked = True elif name.startswith(GATESTEXT_SERVICE): gatestext_service_cgroups_tracked = True elif name.startswith(AZUREMONITORAGENT_SERVICE) or name.startswith(MDSD_SERVICE): azuremonitoragent_service_cgroups_tracked = True cgroups_added_for_telemetry.append((name, path)) # agent, gatest extension, azuremonitor extension and extension service cgroups if len(cgroups_added_for_telemetry) < 1: fail('Expected cgroups were not tracked, according to the agent log. ' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) if not gatestext_cgroups_tracked: fail('Expected gatestext cgroups were not tracked, according to the agent log. ' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) if not azuremonitoragent_cgroups_tracked: fail('Expected azuremonitoragent cgroups were not tracked, according to the agent log. ' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) if not gatestext_service_cgroups_tracked: fail('Expected gatestext service cgroups were not tracked, according to the agent log. ' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) if not azuremonitoragent_service_cgroups_tracked: fail('Expected azuremonitoragent service cgroups were not tracked, according to the agent log. 
' 'Pattern searched for: {0} and found \n{1}'.format(CGROUP_TRACKED_PATTERN.pattern, cgroups_added_for_telemetry)) log.info("Extension cgroups tracked as expected\n%s", cgroups_added_for_telemetry) def main(): verify_if_distro_supports_cgroup() verify_ext_cgroup_controllers_created_on_file_system() verify_custom_script_cgroup_assigned_correctly() verify_agent_cgroup_assigned_correctly() verify_extension_service_cgroup_created_on_file_system() verify_ext_cgroups_tracked() try: run_remote_test(main) except Exception as e: # It is possible that agent cgroup can be disabled due to UNKNOWN process or throttled before we run this check, in that case, we should ignore the validation if check_agent_quota_disabled() and check_cgroup_disabled_with_unknown_process(): log.info("Cgroup is disabled due to UNKNOWN process, ignoring ext cgroups validations") else: raise Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/ext_sequencing-get_ext_enable_time.py000077500000000000000000000061531462617747000316420ustar00rootroot00000000000000#!/usr/bin/env pypy3 # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Gets the timestamp for when the provided extension was enabled # import argparse import json import os import sys from pathlib import Path def main(): """ Returns the timestamp of when the provided extension was enabled """ parser = argparse.ArgumentParser() parser.add_argument("--ext", dest='ext', required=True) args, _ = parser.parse_known_args() # Extension enabled time is in extension extension status file ext_dirs = [item for item in os.listdir(Path('/var/lib/waagent')) if item.startswith(args.ext)] if not ext_dirs: print("Extension {0} directory does not exist".format(args.ext), file=sys.stderr) sys.exit(1) ext_status_path = Path('/var/lib/waagent/' + ext_dirs[0] + '/status') ext_status_files = os.listdir(ext_status_path) ext_status_files.sort() if not ext_status_files: # Extension did not report a status print("Extension {0} did not report a status".format(args.ext), file=sys.stderr) sys.exit(1) latest_ext_status_path = os.path.join(ext_status_path, ext_status_files[-1]) ext_status_file = open(latest_ext_status_path, 'r') ext_status = json.loads(ext_status_file.read()) # Example status file # [ # { # "status": { # "status": "success", # "formattedMessage": { # "lang": "en-US", # "message": "Enable succeeded" # }, # "operation": "Enable", # "code": "0", # "name": "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent" # }, # "version": "1.0", # "timestampUTC": "2023-12-12T23:14:45Z" # } # ] msg = "" if len(ext_status) == 0 or not ext_status[0]['status']: msg = "Extension {0} did not report a status".format(args.ext) elif not ext_status[0]['status']['operation'] or ext_status[0]['status']['operation'] != 'Enable': msg = "Extension {0} did not report a status for enable operation".format(args.ext) elif ext_status[0]['status']['status'] != 'success': msg = "Extension {0} did not report success for the enable operation".format(args.ext) elif not 
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/ext_telemetry_pipeline-add_extension_events.py
#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Adds extension events for each provided extension and verifies the TelemetryEventsCollector collected or dropped them
#

import argparse
import json
import os
import sys
import time
import uuid

from assertpy import fail
from datetime import datetime, timedelta
from random import choice
from typing import List

from tests_e2e.tests.lib.agent_log import AgentLog
from tests_e2e.tests.lib.logging import log


def add_extension_events(extensions: List[str], bad_event_count=0, no_of_events_per_extension=50):
    def missing_key(bad_event):
        key = choice(list(bad_event.keys()))
        del bad_event[key]
        return "MissingKeyError: {0}".format(key)

    def oversize_error(bad_event):
        bad_event["EventLevel"] = "ThisIsAnOversizeError\n" * 300
        return "OversizeEventError"

    def empty_message(bad_event):
        bad_event["Message"] = ""
        return "EmptyMessageError"

    errors = [
        missing_key,
        oversize_error,
        empty_message
    ]

    sample_ext_event = {
        "EventLevel": "INFO",
        "Message": "Starting IaaS ScriptHandler Extension v1",
        "Version": "1.0",
        "TaskName": "Extension Info",
        "EventPid": "3228",
        "EventTid": "1",
        "OperationId": "519e4beb-018a-4bd9-8d8e-c5226cf7f56e",
        "TimeStamp": "2019-12-12T01:20:05.0950244Z"
    }

    sample_messages = [
        "Starting IaaS ScriptHandler Extension v1",
        "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.",
        "The quick brown fox jumps over the lazy dog",
        "Cursus risus at ultrices mi.",
        "Doing Something",
        "Iaculis eu non diam phasellus.",
        "Doing other thing",
        "Look ma, lemons",
        "Pretium quam vulputate dignissim suspendisse.",
        "Man this is insane",
        "I wish it worked as it should and not as it ain't",
        "Ut faucibus pulvinar elementum integer enim neque volutpat ac tincidunt.",
        "Did you get any of that?",
        "Non-English message - 此文字不是英文的",
        "κόσμε",
        "�",
        "Quizdeltagerne spiste jordbær med fløde, mens cirkusklovnen Wolther spillede på xylofon.",
        "Falsches Üben von Xylophonmusik quält jeden größeren Zwerg",
        "Zwölf Boxkämpfer jagten Eva quer über den Sylter Deich",
        "Heizölrückstoßabdämpfung",
        "Γαζέες καὶ μυρτιὲς δὲν θὰ βρῶ πιὰ στὸ χρυσαφὶ ξέφωτο",
        "Ξεσκεπάζω τὴν ψυχοφθόρα βδελυγμία",
        "El pingüino Wenceslao hizo kilómetros bajo exhaustiva lluvia y frío, añoraba a su querido cachorro.",
        "Portez ce vieux whisky au juge blond qui fume sur son île intérieure, à côté de l'alcôve ovoïde, où les bûches",
        "se consument dans l'âtre, ce qui lui permet de penser à la cænogenèse de l'être dont il est question",
        "dans la cause ambiguë entendue à Moÿ, dans un capharnaüm qui, pense-t-il, diminue çà et là la qualité de son œuvre.",
        "D'fhuascail Íosa, Úrmhac na hÓighe Beannaithe, pór Éava agus Ádhaimh",
        "Árvíztűrő tükörfúrógép",
        "Kæmi ný öxi hér ykist þjófum nú bæði víl og ádrepa",
        "Sævör grét áðan því úlpan var ónýt",
        "いろはにほへとちりぬるを わかよたれそつねならむ うゐのおくやまけふこえて あさきゆめみしゑひもせす",
        "イロハニホヘト チリヌルヲ ワカヨタレソ ツネナラム ウヰノオクヤマ ケフコエテ アサキユメミシ ヱヒモセスン",
        "? דג סקרן שט בים מאוכזב ולפתע מצא לו חברה איך הקליטה",
        "Pchnąć w tę łódź jeża lub ośm skrzyń fig",
        "В чащах юга жил бы цитрус? Да, но фальшивый экземпляр!",
        "๏ เป็นมนุษย์สุดประเสริฐเลิศคุณค่า กว่าบรรดาฝูงสัตว์เดรัจฉาน",
        "Pijamalı hasta, yağız şoföre çabucak güvendi."
    ]

    for ext in extensions:
        bad_count = bad_event_count
        event_dir = os.path.join("/var/log/azure/", ext, "events")
        if not os.path.isdir(event_dir):
            fail(f"Expected events dir: {event_dir} does not exist")
        log.info("")
        log.info("Expected dir: {0} exists".format(event_dir))
        log.info("Creating random extension events for {0}. No of Good Events: {1}, No of Bad Events: {2}".format(
            ext, no_of_events_per_extension - bad_event_count, bad_event_count))

        new_opr_id = str(uuid.uuid4())
        event_list = []

        for _ in range(no_of_events_per_extension):
            event = sample_ext_event.copy()
            event["OperationId"] = new_opr_id
            event["TimeStamp"] = datetime.utcnow().strftime(u'%Y-%m-%dT%H:%M:%S.%fZ')
            event["Message"] = choice(sample_messages)

            if bad_count != 0:
                # Make this event a bad event
                reason = choice(errors)(event)
                bad_count -= 1

                # The missing key error might have deleted the TaskName key from the event
                if "TaskName" in event:
                    event["TaskName"] = "{0}. This is a bad event: {1}".format(event["TaskName"], reason)
                else:
                    event["EventLevel"] = "{0}. This is a bad event: {1}".format(event["EventLevel"], reason)

            event_list.append(event)

        file_name = os.path.join(event_dir, '{0}.json'.format(int(time.time() * 1000000)))
        log.info("Create json with extension events in event directory: {0}".format(file_name))
        with open("{0}.tmp".format(file_name), 'w+') as f:
            json.dump(event_list, f)
        os.rename("{0}.tmp".format(file_name), file_name)
def wait_for_extension_events_dir_empty(extensions: List[str]):
    # By ensuring the events dir is empty, we verify that the telemetry events collector has completed its run
    start_time = datetime.now()
    timeout = timedelta(minutes=2)
    ext_event_dirs = [os.path.join("/var/log/azure/", ext, "events") for ext in extensions]

    while (start_time + timeout) >= datetime.now():
        log.info("")
        log.info("Waiting for extension event directories to be empty...")
        all_dir_empty = True
        for event_dir in ext_event_dirs:
            if not os.path.exists(event_dir) or len(os.listdir(event_dir)) != 0:
                log.info("Dir: {0} is not yet empty".format(event_dir))
                all_dir_empty = False

        if all_dir_empty:
            log.info("Extension event directories are empty: \n{0}".format(ext_event_dirs))
            return

        time.sleep(20)

    fail("Extension events dir not empty before 2 minute timeout")


def main():
    # This test is a best-effort test to ensure that the agent does not throw any errors while trying to transmit
    # events to wireserver. We're not validating whether the events actually make it to wireserver.
    parser = argparse.ArgumentParser()
    parser.add_argument("--extensions", dest='extensions', type=str, required=True)
    parser.add_argument("--num_events_total", dest='num_events_total', type=int, required=True)
    parser.add_argument("--num_events_bad", dest='num_events_bad', type=int, required=False, default=0)
    args, _ = parser.parse_known_args()

    extensions = args.extensions.split(',')
    add_extension_events(extensions=extensions, bad_event_count=args.num_events_bad,
                         no_of_events_per_extension=args.num_events_total)

    # Ensure that the event collector ran after adding the events
    wait_for_extension_events_dir_empty(extensions=extensions)

    # Sleep for a minute to ensure that the TelemetryService has enough time to send events and report errors, if any
    time.sleep(60)

    found_error = False
    agent_log = AgentLog()

    log.info("")
    log.info("Check that the TelemetryEventsCollector did not emit any errors while collecting and reporting events...")
    telemetry_event_collector_name = "TelemetryEventsCollector"
    for agent_record in agent_log.read():
        if agent_record.thread == telemetry_event_collector_name and agent_record.level == "ERROR":
            found_error = True
            log.info("waagent.log contains the following errors emitted by the {0} thread: \n{1}".format(telemetry_event_collector_name, agent_record))

    if found_error:
        fail("Found error(s) emitted by the TelemetryEventsCollector, but none were expected.")
    log.info("The TelemetryEventsCollector did not emit any errors while collecting and reporting events")

    for ext in extensions:
        good_count = args.num_events_total - args.num_events_bad
        log.info("")
        if not agent_log.agent_log_contains("Collected {0} events for extension: {1}".format(good_count, ext)):
            fail("The TelemetryEventsCollector did not collect the expected number of events: {0} for {1}".format(good_count, ext))
        log.info("All {0} good events for {1} were collected by the TelemetryEventsCollector".format(good_count, ext))

        if args.num_events_bad != 0:
            log.info("")
            if not agent_log.agent_log_contains("Dropped events for Extension: {0}".format(ext)):
                fail("The TelemetryEventsCollector did not drop bad events for {0} as expected".format(ext))
            log.info("The TelemetryEventsCollector dropped bad events for {0} as expected".format(ext))

    sys.exit(0)


if __name__ == "__main__":
    main()
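# A minimal invocation sketch (an illustration; the extension name is an example
# reused from above and must already have an events directory under
# /var/log/azure/<ext>/events):
#
#     $ sudo ./ext_telemetry_pipeline-add_extension_events.py \
#           --extensions Microsoft.Azure.Monitor.AzureMonitorLinuxAgent \
#           --num_events_total 50 --num_events_bad 5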
expected".format(ext)) log.info("The TelemetryEventsCollector dropped bad events for {0} as expected".format(ext)) sys.exit(0) if __name__ == "__main__": main() Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/fips-check_fips_mariner000077500000000000000000000036001462617747000267550ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Verifies whether FIPS is enabled on Mariner 2.0 # set -euo pipefail # Check if FIPS mode is enabled by the kernel (returns 1 if enabled) fips_enabled=$(sudo cat /proc/sys/crypto/fips_enabled) if [ "$fips_enabled" != "1" ]; then echo "FIPS is not enabled by the kernel: $fips_enabled" exit 1 fi # Check if sysctl is configured (returns crypto.fips_enabled = 1 if enabled) sysctl_configured=$(sudo sysctl crypto.fips_enabled) if [ "$sysctl_configured" != "crypto.fips_enabled = 1" ]; then echo "sysctl is not configured for FIPS: $sysctl_configured" exit 1 fi # Check if openssl library is running in FIPS mode # MD5 should fail; the command's output should be similar to: # Error setting digest # 131590634539840:error:060800C8:digital envelope routines:EVP_DigestInit_ex:disabled for FIPS:crypto/evp/digest.c:135: openssl=$(openssl md5 < /dev/null 2>&1 || true) if [[ "$openssl" != *"disabled for FIPS"* ]]; then echo "openssl is not running in FIPS mode: $openssl" exit 1 fi # Check if dracut-fips is installed (returns dracut-fips-) dracut_fips=$( (rpm -qa | grep dracut-fips) || true ) if [[ "$dracut_fips" != *"dracut-fips"* ]]; then echo "dracut-fips is not installed: $dracut_fips" exit 1 fi echo "FIPS mode is enabled."Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/fips-enable_fips_mariner000077500000000000000000000027351462617747000271360ustar00rootroot00000000000000#!/usr/bin/env bash # Microsoft Azure Linux Agent # # Copyright 2018 Microsoft Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Enables FIPS on Mariner 2.0 # set -euo pipefail echo "Installing packages required packages to enable FIPS..." 
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/fips-enable_fips_mariner
#!/usr/bin/env bash

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Enables FIPS on Mariner 2.0
#

set -euo pipefail

echo "Installing required packages to enable FIPS..."
sudo tdnf install -y grubby dracut-fips

#
# Set the boot_uuid variable for the boot partition if it differs from the root partition
#
boot_dev="$(df /boot/ | tail -1 | cut -d' ' -f1)"
echo "Boot partition: $boot_dev"

root_dev="$(df / | tail -1 | cut -d' ' -f1)"
echo "Root partition: $root_dev"

boot_uuid=""
if [ "$boot_dev" != "$root_dev" ]; then
    boot_uuid="boot=UUID=$(blkid $boot_dev -s UUID -o value)"
    echo "Boot UUID: $boot_uuid"
fi

#
# Enable FIPS and set the boot= parameter
#
echo "Enabling FIPS..."
if sudo grub2-editenv - list | grep -q kernelopts; then
    set -x
    sudo grub2-editenv - set "$(sudo grub2-editenv - list | grep kernelopts) fips=1 $boot_uuid"
else
    set -x
    sudo grubby --update-kernel=ALL --args="fips=1 $boot_uuid"
fi
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/get-waagent-conf-value
#!/usr/bin/env bash

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Echoes the value in waagent.conf for the specified setting, if it exists.
#

set -euo pipefail

if [[ $# -lt 1 ]]; then
    echo "Usage: get-waagent-conf-value <setting>"
    exit 1
fi

PYTHON=$(get-agent-python)
waagent_conf=$($PYTHON -c 'from azurelinuxagent.common.osutil import get_osutil; print(get_osutil().agent_conf_file_path)')

cat $waagent_conf | while read line
do
    if [[ $line == $1* ]]; then
        IFS='=' read -a values <<< "$line"
        echo ${values[1]}
        exit 0
    fi
done
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/get_distro.py
#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Prints the distro and version of the machine
#

import sys

from azurelinuxagent.common.version import get_distro


def main():
    # Prints '<distro>_<version>' (the version with dots removed)
    distro = get_distro()
    print(distro[0] + "_" + distro[1].replace('.', ''))
    sys.exit(0)


if __name__ == "__main__":
    main()
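# Example output (an illustration; the actual value depends on the machine the
# script runs on):
#
#     $ ./get_distro.py
#     ubuntu_2204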
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/recover_network_interface-get_nm_controlled.py
#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Reports whether the primary network interface is controlled by NetworkManager
#

import sys

from azurelinuxagent.common.osutil import get_osutil


def main():
    os_util = get_osutil()
    ifname = os_util.get_if_name()
    nm_controlled = os_util.get_nm_controlled(ifname)
    if nm_controlled:
        print("Interface is NM controlled")
    else:
        print("Interface is NOT NM controlled")
    sys.exit(0)


if __name__ == "__main__":
    main()
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/samples-error_remote_test.py
#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A sample remote test that simulates an unexpected error
#

from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.remote_test import run_remote_test


def main():
    log.info("Setting up test")
    log.info("Doing some operation")
    log.warning("Something went wrong, but the test can continue")
    log.info("Doing some other operation")
    raise Exception("Something went wrong")  # simulate an unexpected error


run_remote_test(main)
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/samples-fail_remote_test.py
#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A sample remote test that fails
#

from assertpy import fail

from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.remote_test import run_remote_test


def main():
    log.info("Setting up test")
    log.info("Doing some operation")
    log.warning("Something went wrong, but the test can continue")
    log.info("Doing some other operation")
    fail("Verification of the operation failed")


run_remote_test(main)
Azure-WALinuxAgent-2b21de5/tests_e2e/tests/scripts/samples-pass_remote_test.py
#!/usr/bin/env pypy3

# Microsoft Azure Linux Agent
#
# Copyright 2018 Microsoft Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A sample remote test that passes
#

from tests_e2e.tests.lib.logging import log
from tests_e2e.tests.lib.remote_test import run_remote_test


def main():
    log.info("Setting up test")
    log.info("Doing some operation")
    log.warning("Something went wrong, but the test can continue")
    log.info("Doing some other operation")
    log.info("All verifications succeeded")


run_remote_test(main)
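# A note on the three samples above (an assumption about run_remote_test's
# behavior, which is defined in tests_e2e.tests.lib.remote_test rather than
# shown here): the harness is expected to exit 0 when main returns normally
# (pass) and non-zero when main raises, whether through assertpy's fail()
# (a test failure) or any other exception (a test error).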