pax_global_header: comment=ba54d42d9e23a9d812cd7289a00a628cb842862f

lxd-2.0.2/.github/ISSUE_TEMPLATE.md

The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn't apply to you and add more information where it makes sense.

# Required information

 * Distribution:
 * Distribution version:
 * The output of "lxc info" or if that fails:
   * Kernel version:
   * LXC version:
   * LXD version:
   * Storage backend in use:

# Issue description

A brief description of what failed or what could be improved.

# Steps to reproduce

 1. Step one
 2. Step two
 3. Step three

# Information to attach

 - [ ] any relevant kernel output (dmesg)
 - [ ] container log (lxc info NAME --show-log)
 - [ ] main daemon log (/var/log/lxd.log)
 - [ ] output of the client with --debug
 - [ ] output of the daemon with --debug

lxd-2.0.2/.gitignore

*.swp
po/*.mo
po/*.po~
lxd-*.tar.gz
.vagrant
test/deps/devlxd-client
*~
tags

# For Atom ctags
.tags
.tags1

lxd-2.0.2/.travis.yml

language: go

os:
  - osx

go:
  - 1.5
  - 1.6
  - tip

matrix:
  fast_finish: true
  allow_failures:
    - go: tip

install:
  - "mkdir -p $GOPATH/github.com/lxc"
  - "rsync -az ${TRAVIS_BUILD_DIR}/ $HOME/gopath/src/github.com/lxc/lxd/"

script:
  - "make client"

notifications:
  webhooks: https://linuxcontainers.org/webhook-lxcbot/

lxd-2.0.2/AUTHORS

Unless mentioned otherwise in a specific file's header, all code in this project is released under
the Apache 2.0 license.

The list of authors and contributors can be retrieved from the git commit history and in some cases, the file headers.

lxd-2.0.2/CONTRIBUTING.md

# Pull requests:

Changes to this project should be proposed as pull requests on GitHub at: https://github.com/lxc/lxd

Proposed changes will then go through code review there and once acked, be merged in the main branch.

# License and copyright:

By default, any contribution to this project is made under the Apache 2.0 license.

The author of a change remains the copyright holder of their code (no copyright assignment).

# Developer Certificate of Origin:

To improve tracking of contributions to this project we use the DCO 1.1 and use a "sign-off" procedure for all changes going into the branch.

The sign-off is a simple line at the end of the explanation for the commit which certifies that you wrote it or otherwise have the right to pass it on as an open-source contribution.

> Developer Certificate of Origin
> Version 1.1
>
> Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
> 660 York Street, Suite 102,
> San Francisco, CA 94110 USA
>
> Everyone is permitted to copy and distribute verbatim copies of this
> license document, but changing it is not allowed.
>
> Developer's Certificate of Origin 1.1
>
> By making a contribution to this project, I certify that:
>
> (a) The contribution was created in whole or in part by me and I
>     have the right to submit it under the open source license
>     indicated in the file; or
>
> (b) The contribution is based upon previous work that, to the best
>     of my knowledge, is covered under an appropriate open source
>     license and I have the right under that license to submit that
>     work with modifications, whether created in whole or in part
>     by me, under the same open source license (unless I am
>     permitted to submit under a different license), as indicated
>     in the file; or
>
> (c) The contribution was provided directly to me by some other
>     person who certified (a), (b) or (c) and I have not modified
>     it.
>
> (d) I understand and agree that this project and the contribution
>     are public and that a record of the contribution (including all
>     personal information I submit with it, including my sign-off) is
>     maintained indefinitely and may be redistributed consistent with
>     this project or the open source license(s) involved.

An example of a valid sign-off line is:

    Signed-off-by: Random J Developer

Use your real name and a valid e-mail address. Sorry, no pseudonyms or anonymous contributions are allowed.

We also require each commit to be individually signed off by its author, even when part of a larger set. You may find `git commit -s` useful.

lxd-2.0.2/COPYING

                              Apache License
                        Version 2.0, January 2004
                     http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

lxd-2.0.2/Makefile

DOMAIN=lxd
POFILES=$(wildcard po/*.po)
MOFILES=$(patsubst %.po,%.mo,$(POFILES))
LINGUAS=$(basename $(POFILES))
POTFILE=po/$(DOMAIN).pot

# dist is primarily for use when packaging; for development we still manage
# dependencies via `go get` explicitly.
# TODO: use git describe for versioning
VERSION=$(shell grep "var Version" shared/flex.go | cut -d'"' -f2)
ARCHIVE=lxd-$(VERSION).tar

.PHONY: default
default:
	# Must run twice due to go get race
	-go get -t -v -d ./...
	-go get -t -v -d ./...
	go install -v $(DEBUG) ./...
	@echo "LXD built successfully"

.PHONY: client
client:
	# Must run twice due to go get race
	-go get -t -v -d ./...
	-go get -t -v -d ./...
	go install -v $(DEBUG) ./lxc
	@echo "LXD client built successfully"

.PHONY: update
update:
	# Must run twice due to go get race
	-go get -t -v -d -u ./...
	go get -t -v -d -u ./...
	@echo "Dependencies updated"

# This only needs to be done when migrate.proto is actually changed; since we
# commit the .pb.go in the tree and it's not expected to change very often,
# it's not a default build step.
.PHONY: protobuf
protobuf:
	protoc --go_out=. ./lxd/migrate.proto

.PHONY: check
check: default
	go get -v -x github.com/remyoudompheng/go-misc/deadcode
	go test -v ./...
	cd test && ./main.sh

gccgo:
	go build -compiler gccgo ./...
	@echo "LXD built successfully with gccgo"

.PHONY: dist
dist:
	rm -Rf lxd-$(VERSION) $(ARCHIVE) $(ARCHIVE).gz
	mkdir -p lxd-$(VERSION)/dist
	-GOPATH=$(shell pwd)/lxd-$(VERSION)/dist go get -t -v -d ./...
	GOPATH=$(shell pwd)/lxd-$(VERSION)/dist go get -t -v -d ./...
	rm -rf $(shell pwd)/lxd-$(VERSION)/dist/src/github.com/lxc/lxd
	ln -s ../../../.. ./lxd-$(VERSION)/dist/src/github.com/lxc/lxd
	git archive --prefix=lxd-$(VERSION)/ --output=$(ARCHIVE) HEAD
	tar -uf $(ARCHIVE) --exclude-vcs lxd-$(VERSION)/
	gzip -9 $(ARCHIVE)
	rm -Rf dist lxd-$(VERSION) $(ARCHIVE)

.PHONY: i18n update-po update-pot build-mo static-analysis
i18n: update-pot

po/%.mo: po/%.po
	msgfmt --statistics -o $@ $<

po/%.po: po/$(DOMAIN).pot
	msgmerge -U po/$*.po po/$(DOMAIN).pot

update-po:
	-for lang in $(LINGUAS); do\
	    msgmerge -U $$lang.po po/$(DOMAIN).pot; \
	    rm -f $$lang.po~; \
	done

update-pot:
	go get -v -x github.com/ubuntu-core/snappy/i18n/xgettext-go/
	xgettext-go -o po/$(DOMAIN).pot --add-comments-tag=TRANSLATORS: --sort-output --package-name=$(DOMAIN) --msgid-bugs-address=lxc-devel@lists.linuxcontainers.org --keyword=i18n.G --keyword-plural=i18n.NG *.go shared/*.go lxc/*.go lxd/*.go

build-mo: $(MOFILES)

static-analysis:
	/bin/bash -x -c ". test/static_analysis.sh; static_analysis"

tags: *.go lxd/*.go shared/*.go lxc/*.go
	find .
| grep \.go | grep -v git | grep -v .swp | grep -v vagrant | xargs gotags > tags

lxd-2.0.2/README.md

# LXD

REST API, command line tool and OpenStack integration plugin for LXC.

LXD is pronounced lex-dee.

To easily see what LXD is about, you can [try it online](https://linuxcontainers.org/lxd/try-it).

## CI status

 * Travis: [![Build Status](https://travis-ci.org/lxc/lxd.svg?branch=master)](https://travis-ci.org/lxc/lxd)
 * Jenkins: [![Build Status](https://jenkins.linuxcontainers.org/job/lxd-github-commit/badge/icon)](https://jenkins.linuxcontainers.org/job/lxd-github-commit/)

## Getting started with LXD

Since LXD development is happening at such a rapid pace, we only provide daily builds right now. They're available via:

    sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master && sudo apt-get update
    sudo apt-get install lxd

Because group membership is only applied at login, you then either need to close and re-open your user session or use the "newgrp lxd" command in the shell you're going to interact with lxd from.

    newgrp lxd

After you've got LXD installed and a session with the right permissions, you can take your [first steps](#first-steps).

## Using the REST API

The LXD REST API can be used locally via an unauthenticated Unix socket or remotely via SSL encapsulated TCP.

#### via Unix socket

Note that the Unix socket speaks plain HTTP (there is no TLS on the local socket), so the URL scheme is `http` and the hostname is arbitrary:

```bash
curl --unix-socket /var/lib/lxd/unix.socket \
    -H "Content-Type: application/json" \
    -X POST \
    -d @hello-ubuntu.json \
    "http://lxd/1.0/containers"
```

#### via TCP

TCP requires some additional configuration and is not enabled by default.
```bash
lxc config set core.https_address "[::]:8443"
```

```bash
curl -k -L -I \
    --cert ~/.config/lxc/client.crt \
    --key ~/.config/lxc/client.key \
    -H "Content-Type: application/json" \
    -X POST \
    -d @hello-ubuntu.json \
    "https://127.0.0.1:8443/1.0/containers"
```

#### JSON payload

The `hello-ubuntu.json` file referenced above could contain something like:

```json
{
    "name": "some-ubuntu",
    "ephemeral": true,
    "config": {
        "limits.cpu": "2"
    },
    "source": {
        "type": "image",
        "mode": "pull",
        "protocol": "simplestreams",
        "server": "https://cloud-images.ubuntu.com/releases",
        "alias": "14.04"
    }
}
```

## Building from source

We recommend having the latest versions of liblxc (>= 1.1 required) and CRIU (>= 1.7 recommended) available for LXD development. Additionally, LXD requires Golang 1.5 or later to work.

All the dependencies, at the right versions, are available via the LXD PPA:

    sudo apt-get install software-properties-common
    sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
    sudo apt-get update
    sudo apt-get install golang lxc lxc-dev mercurial git pkg-config protobuf-compiler golang-goprotobuf-dev xz-utils tar acl make

There are a few storage backends for LXD besides the default "directory" backend. Installing these tools adds a bit to initramfs and may slow down your host boot, but they are needed if you'd like to use a particular backend:

    sudo apt-get install lvm2 thin-provisioning-tools
    sudo apt-get install btrfs-tools

To run the testsuite, you'll also need:

    sudo apt-get install curl gettext jq sqlite3 uuid-runtime pyflakes pep8 shellcheck bzr

### Building the tools

LXD consists of two binaries, a client called `lxc` and a server called `lxd`. These live in the source tree in the `lxc/` and `lxd/` dirs, respectively.
To get the code, set up your go environment:

    mkdir -p ~/go
    export GOPATH=~/go

And then download it as usual:

    go get github.com/lxc/lxd
    cd $GOPATH/src/github.com/lxc/lxd
    make

...which will give you two binaries in $GOPATH/bin, `lxd` the daemon binary, and `lxc` a command line client to that daemon.

### Machine Setup

You'll need sub{u,g}ids for root, so that LXD can create the unprivileged containers:

    echo "root:1000000:65536" | sudo tee -a /etc/subuid /etc/subgid

Now you can run the daemon (the --group sudo bit allows everyone in the sudo group to talk to LXD; you can create your own group if you want):

    sudo -E $GOPATH/bin/lxd --group sudo

## First steps

LXD has two parts, the daemon (the `lxd` binary), and the client (the `lxc` binary). Now that the daemon is all configured and running (either via the packaging or via the from-source instructions above), you can create a container:

    $GOPATH/bin/lxc launch ubuntu:14.04

Alternatively, you can also use a remote LXD host as a source of images. One comes pre-configured in LXD, called "images" (images.linuxcontainers.org):

    $GOPATH/bin/lxc launch images:centos/7/amd64 centos

## Bug reports

Bug reports can be filed at https://github.com/lxc/lxd/issues/new

## Contributing

Fixes and new features are greatly appreciated but please read our [contributing guidelines](CONTRIBUTING.md) first.

Contributions to this project should be sent as pull requests on GitHub.
## Hacking

Sometimes it is useful to view the raw response that LXD sends; you can do this by:

    lxc config set core.trust_password foo
    lxc remote add local 127.0.0.1:8443
    wget --no-check-certificate https://127.0.0.1:8443/1.0 --certificate=$HOME/.config/lxc/client.crt --private-key=$HOME/.config/lxc/client.key -O - -q

## Upgrading

The `lxd` and `lxc` (`lxd-client`) binaries should be upgraded at the same time with:

    apt-get update
    apt-get install lxd lxd-client

## Support and discussions

We use the LXC mailing-lists for developer and user discussions; you can find and subscribe to those at: https://lists.linuxcontainers.org

If you prefer live discussions, some of us also hang out in [#lxcontainers](http://webchat.freenode.net/?channels=#lxcontainers) on irc.freenode.net.

## FAQ

#### How to enable LXD server for remote access?

By default, the LXD server is not accessible from the network as it only listens on a local unix socket. You can make LXD available from the network by specifying additional addresses to listen to. This is done with the `core.https_address` config variable.

To see the current server configuration, run:

    lxc config show

To set the address to listen to, find out what addresses are available and use the `config set` command on the server:

    ip addr
    lxc config set core.https_address 192.168.1.15

#### When I do a `lxc remote add` over https, it asks for a password?

By default, LXD has no password for security reasons, so you can't do a remote add this way. In order to set a password, do:

    lxc config set core.trust_password SECRET

on the host LXD is running on. This will set the remote password that you can then use to do `lxc remote add`.

You can also access the server without setting a password by copying the client certificate from `.config/lxc/client.crt` to the server and adding it with:

    lxc config trust add client.crt

#### How do I configure alternative storage backends for LXD?
LXD supports various storage backends; below are instructions on how to configure some of them. By default, we use a simple directory backed storage mechanism, but we recommend using ZFS for best results.

###### ZFS

First, you need to install the ZFS tooling. On Wily and above this is just:

    sudo apt-get install zfsutils-linux

ZFS has many different ways to procure a zpool, which is what you need to feed LXD. For example, if you have an extra block device lying around, you can just:

    sudo zpool create lxd /dev/sdc6 -m none

However, if you want to test things out on a laptop or don't have an extra disk lying around, ZFS has its own loopback driver and can be used directly on a (sparse) file. To do this, first create the sparse file:

    sudo truncate -s 100G /var/lib/lxd.img

then,

    sudo zpool create lxd /var/lib/lxd.img -m none

Finally, whichever method you used to create your zpool, you need to tell LXD to use it:

    lxc config set storage.zfs_pool_name lxd

###### BTRFS

The setup for btrfs is fairly simple: just mount /var/lib/lxd (or whatever your chosen `LXD_DIR` is) as a btrfs filesystem before you start LXD, and you're good to go.

First, install the btrfs userspace tools:

    sudo apt-get install btrfs-tools

Now, you need to create a btrfs filesystem. If you don't have an extra disk lying around, you'll have to create your own loopback device manually:

    sudo truncate -s 100G /var/lib/lxd.img
    sudo losetup /dev/loop0 /var/lib/lxd.img

Once you've got a loopback device (or an actual device), you can create the btrfs filesystem and mount it:

    sudo mkfs.btrfs /dev/loop0  # or your real device
    sudo mount /dev/loop0 /var/lib/lxd

###### LVM

To set up LVM, the instructions are similar to the above.
First, install the userspace tools:

    sudo apt-get install lvm2 thin-provisioning-tools

Then, if you have a block device lying around:

    sudo pvcreate /dev/sdc6
    sudo vgcreate lxd /dev/sdc6
    lxc config set storage.lvm_vg_name lxd

Alternatively, if you want to try it via a loopback device, there is a script provided in [/scripts/lxd-setup-lvm-storage](https://raw.githubusercontent.com/lxc/lxd/master/scripts/lxd-setup-lvm-storage) which will do it for you. It can be run via:

    sudo apt-get install lvm2
    ./scripts/lxd-setup-lvm-storage -s 10G

And it has a --destroy argument to clean up the bits as well:

    ./scripts/lxd-setup-lvm-storage --destroy

#### How can I live migrate a container using LXD?

Live migration requires a tool installed on both hosts called [CRIU](http://criu.org), which is available in Ubuntu via:

    sudo apt-get install criu

Then, launch your container with the following:

    lxc launch ubuntu $somename
    sleep 5s # let the container get to an interesting state
    lxc move host1:$somename host2:$somename

And with luck you'll have migrated the container :). Migration is still in experimental stages and may not work for all workloads. Please report bugs on lxc-devel, and we can escalate to CRIU lists as necessary.

#### Can I bind mount my home directory in a container?

Yes. The easiest way to do that is using a privileged container:

    lxc launch ubuntu priv -c security.privileged=true
    lxc config device add priv homedir disk source=/home/$USER path=/home/ubuntu

#### How can I run docker inside an LXD container?

To run docker inside an LXD container, you must be running a kernel with cgroup namespaces (Ubuntu 4.4 kernel or newer, or upstream 4.6 or newer), and must apply the docker profile to your container:

    lxc launch ubuntu:xenial my-docker-host -p default -p docker

Note that the docker profile does not provide a network interface, so the common case will want to compose the default and docker profiles.
The container must be using the Ubuntu 1.10.2-0ubuntu4 or newer docker package.

lxd-2.0.2/Vagrantfile

Vagrant.configure('2') do |config|
  # grab Ubuntu 14.04 boxcutter image: https://atlas.hashicorp.com/boxcutter
  config.vm.box = "boxcutter/ubuntu1404" # Ubuntu 14.04

  # fix issues with slow dns https://www.virtualbox.org/ticket/13002
  config.vm.provider :virtualbox do |vb, override|
    vb.customize ["modifyvm", :id, "--natdnsproxy1", "off"]
  end

  config.vm.network "forwarded_port", guest: 443, host: 8443

  config.vm.provision :shell, :privileged => false, :path => "scripts/vagrant/install-go.sh"
  config.vm.provision :shell, :privileged => false, :path => "scripts/vagrant/install-lxd.sh"
end

lxd-2.0.2/client.go

package lxd

import (
	"bytes"
	"crypto/x509"
	"encoding/base64"
	"encoding/json"
	"encoding/pem"
	"fmt"
	"io"
	"io/ioutil"
	"mime"
	"mime/multipart"
	"net"
	"net/http"
	"net/url"
	"os"
	"path"
	"path/filepath"
	"strconv"
	"strings"

	"github.com/gorilla/websocket"

	"github.com/lxc/lxd/shared"
)

// Client can talk to a LXD daemon.
type Client struct {
	BaseURL     string
	BaseWSURL   string
	Config      Config
	Name        string
	Remote      *RemoteConfig
	Transport   string
	Certificate string

	Http            http.Client
	websocketDialer websocket.Dialer
	simplestreams   *shared.SimpleStreams
}

type ResponseType string

const (
	Sync  ResponseType = "sync"
	Async ResponseType = "async"
	Error ResponseType = "error"
)

var (
	// LXDErrors are special errors; the client library hoists error codes
	// to these errors internally so that user code can compare against
	// them. We probably shouldn't hoist BadRequest or InternalError, since
	// LXD passes an error string along which is more informative than
	// whatever static error message we would put here.
	LXDErrors = map[int]error{
		http.StatusNotFound: fmt.Errorf("not found"),
	}
)

type Response struct {
	Type ResponseType `json:"type"`

	/* Valid only for Sync responses */
	Status     string `json:"status"`
	StatusCode int    `json:"status_code"`

	/* Valid only for Async responses */
	Operation string `json:"operation"`

	/* Valid only for Error responses */
	Code  int    `json:"error_code"`
	Error string `json:"error"`

	/* Valid for Sync and Error responses */
	Metadata json.RawMessage `json:"metadata"`
}

func (r *Response) MetadataAsMap() (*shared.Jmap, error) {
	ret := shared.Jmap{}
	if err := json.Unmarshal(r.Metadata, &ret); err != nil {
		return nil, err
	}
	return &ret, nil
}

func (r *Response) MetadataAsOperation() (*shared.Operation, error) {
	op := shared.Operation{}
	if err := json.Unmarshal(r.Metadata, &op); err != nil {
		return nil, err
	}

	return &op, nil
}

// ParseResponse parses a lxd style response out of an http.Response. Note that
// this does _not_ automatically convert error responses to golang errors. To
// do that, use ParseError. Internal client library uses should probably use
// HoistResponse, unless they are interested in accessing the underlying Error
// response (e.g. to inspect the error code).
func ParseResponse(r *http.Response) (*Response, error) {
	if r == nil {
		return nil, fmt.Errorf("no response!")
	}
	defer r.Body.Close()
	ret := Response{}

	s, err := ioutil.ReadAll(r.Body)
	if err != nil {
		return nil, err
	}
	shared.Debugf("Raw response: %s", string(s))

	if err := json.Unmarshal(s, &ret); err != nil {
		return nil, err
	}

	return &ret, nil
}

// HoistResponse hoists a regular http response into a response of type rtype
// or returns a golang error.
func HoistResponse(r *http.Response, rtype ResponseType) (*Response, error) {
	resp, err := ParseResponse(r)
	if err != nil {
		return nil, err
	}

	if resp.Type == Error {
		// Try and use a known error if we have one for this code.
		err, ok := LXDErrors[resp.Code]
		if !ok {
			return nil, fmt.Errorf(resp.Error)
		}
		return nil, err
	}

	if resp.Type != rtype {
		return nil, fmt.Errorf("got bad response type, expected %s got %s", rtype, resp.Type)
	}

	return resp, nil
}

// NewClient returns a new LXD client.
func NewClient(config *Config, remote string) (*Client, error) {
	if remote == "" {
		return nil, fmt.Errorf("A remote name must be provided.")
	}

	r, ok := config.Remotes[remote]
	if !ok {
		return nil, fmt.Errorf("unknown remote name: %q", remote)
	}

	info := ConnectInfo{
		Name:         remote,
		RemoteConfig: r,
	}

	if strings.HasPrefix(r.Addr, "unix:") {
		// replace "unix://" with the official "unix:/var/lib/lxd/unix.socket"
		if info.RemoteConfig.Addr == "unix://" {
			info.RemoteConfig.Addr = fmt.Sprintf("unix:%s", shared.VarPath("unix.socket"))
		}
	} else {
		// Read the client certificate (if it exists)
		clientCertPath := path.Join(config.ConfigDir, "client.crt")
		if shared.PathExists(clientCertPath) {
			certBytes, err := ioutil.ReadFile(clientCertPath)
			if err != nil {
				return nil, err
			}

			info.ClientPEMCert = string(certBytes)
		}

		// Read the client key (if it exists)
		clientKeyPath := path.Join(config.ConfigDir, "client.key")
		if shared.PathExists(clientKeyPath) {
			keyBytes, err := ioutil.ReadFile(clientKeyPath)
			if err != nil {
				return nil, err
			}

			info.ClientPEMKey = string(keyBytes)
		}

		// Read the server certificate (if it exists)
		serverCertPath := config.ServerCertPath(remote)
		if shared.PathExists(serverCertPath) {
			cert, err := shared.ReadCert(serverCertPath)
			if err != nil {
				return nil, err
			}

			info.ServerPEMCert = string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: cert.Raw}))
		}
	}

	c, err := NewClientFromInfo(info)
	if err != nil {
		return nil, err
	}
	c.Config = *config

	return c, nil
}

// ConnectInfo contains the information we need to connect to a specific LXD server
type ConnectInfo struct {
	// Name is a simple identifier for the remote server.
In 'lxc' it is // the name used to lookup the address and other information in the // config.yml file. Name string // RemoteConfig is the information about the Remote that we are // connecting to. This includes information like if the remote is // Public and/or Static. RemoteConfig RemoteConfig // ClientPEMCert is the PEM encoded bytes of the client's certificate. // If Addr indicates a Unix socket, the certificate and key bytes will // not be used. ClientPEMCert string // ClientPEMKey is the PEM encoded private bytes of the client's key associated with its certificate ClientPEMKey string // ServerPEMCert is the PEM encoded server certificate that we are // connecting to. It can be the empty string if we do not know the // server's certificate yet. ServerPEMCert string } func connectViaUnix(c *Client, remote *RemoteConfig) error { c.BaseURL = "http://unix.socket" c.BaseWSURL = "ws://unix.socket" c.Transport = "unix" uDial := func(network, addr string) (net.Conn, error) { // The arguments 'network' and 'addr' are ignored because // they are the wrong information. // addr is generated from BaseURL which becomes // 'unix.socket:80' which is certainly not what we want. 
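The address normalization this dialer relies on (together with the `unix://` default applied in `NewClient`) accepts three spellings of a socket address. A self-contained sketch, with `normalizeUnixAddr` as a hypothetical helper mirroring that logic:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeUnixAddr mirrors the dialer's handling of the three accepted
// spellings: unix:///path, unix:/path and unix:path all resolve to a
// plain filesystem path suitable for net.ResolveUnixAddr.
func normalizeUnixAddr(addr string) string {
	path := strings.TrimPrefix(addr, "unix:")
	if strings.HasPrefix(path, "///") {
		// translate unix:///path/to into /path/to
		path = path[2:]
	}
	return path
}

func main() {
	for _, a := range []string{
		"unix:///var/lib/lxd/unix.socket",
		"unix:/var/lib/lxd/unix.socket",
		"unix:path/to/socket",
	} {
		fmt.Println(normalizeUnixAddr(a))
	}
}
```

Note that `unix://` with an empty path never reaches this helper: `NewClient` rewrites it to the default socket path first.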
// handle: // unix:///path/to/socket // unix:/path/to/socket // unix:path/to/socket path := strings.TrimPrefix(remote.Addr, "unix:") if strings.HasPrefix(path, "///") { // translate unix:///path/to, to just "/path/to" path = path[2:] } raddr, err := net.ResolveUnixAddr("unix", path) if err != nil { return nil, err } return net.DialUnix("unix", nil, raddr) } c.Http.Transport = &http.Transport{Dial: uDial} c.websocketDialer.NetDial = uDial c.Remote = remote st, err := c.ServerStatus() if err != nil { return err } c.Certificate = st.Environment.Certificate return nil } func connectViaHttp(c *Client, remote *RemoteConfig, clientCert, clientKey, serverCert string) error { tlsconfig, err := shared.GetTLSConfigMem(clientCert, clientKey, serverCert) if err != nil { return err } tr := &http.Transport{ TLSClientConfig: tlsconfig, Dial: shared.RFC3493Dialer, Proxy: shared.ProxyFromEnvironment, } c.websocketDialer.NetDial = shared.RFC3493Dialer c.websocketDialer.TLSClientConfig = tlsconfig justAddr := strings.TrimPrefix(remote.Addr, "https://") c.BaseURL = "https://" + justAddr c.BaseWSURL = "wss://" + justAddr c.Transport = "https" c.Http.Transport = tr c.Remote = remote c.Certificate = serverCert // We don't actually need to connect yet, defer that until someone // needs something from the server. return nil } // NewClientFromInfo returns a new LXD client. 
func NewClientFromInfo(info ConnectInfo) (*Client, error) { c := &Client{ // Config: *config, Http: http.Client{}, Config: Config{ Remotes: DefaultRemotes, Aliases: map[string]string{}, }, } c.Name = info.Name var err error if strings.HasPrefix(info.RemoteConfig.Addr, "unix:") { err = connectViaUnix(c, &info.RemoteConfig) } else { err = connectViaHttp(c, &info.RemoteConfig, info.ClientPEMCert, info.ClientPEMKey, info.ServerPEMCert) } if err != nil { return nil, err } if info.RemoteConfig.Protocol == "simplestreams" { ss, err := shared.SimpleStreamsClient(c.Remote.Addr, shared.ProxyFromEnvironment) if err != nil { return nil, err } c.simplestreams = ss } return c, nil } func (c *Client) Addresses() ([]string, error) { addresses := make([]string, 0) if c.Transport == "unix" { serverStatus, err := c.ServerStatus() if err != nil { return nil, err } addresses = serverStatus.Environment.Addresses } else if c.Transport == "https" { addresses = append(addresses, c.BaseURL[8:]) } else { return nil, fmt.Errorf("unknown transport type: %s", c.Transport) } if len(addresses) == 0 { return nil, fmt.Errorf("The source remote isn't available over the network") } return addresses, nil } func (c *Client) get(base string) (*Response, error) { uri := c.url(shared.APIVersion, base) return c.baseGet(uri) } func (c *Client) baseGet(getUrl string) (*Response, error) { req, err := http.NewRequest("GET", getUrl, nil) if err != nil { return nil, err } req.Header.Set("User-Agent", shared.UserAgent) resp, err := c.Http.Do(req) if err != nil { return nil, err } return HoistResponse(resp, Sync) } func (c *Client) put(base string, args interface{}, rtype ResponseType) (*Response, error) { uri := c.url(shared.APIVersion, base) buf := bytes.Buffer{} err := json.NewEncoder(&buf).Encode(args) if err != nil { return nil, err } shared.Debugf("Putting %s to %s", buf.String(), uri) req, err := http.NewRequest("PUT", uri, &buf) if err != nil { return nil, err } req.Header.Set("User-Agent", 
shared.UserAgent) req.Header.Set("Content-Type", "application/json") resp, err := c.Http.Do(req) if err != nil { return nil, err } return HoistResponse(resp, rtype) } func (c *Client) post(base string, args interface{}, rtype ResponseType) (*Response, error) { uri := c.url(shared.APIVersion, base) buf := bytes.Buffer{} err := json.NewEncoder(&buf).Encode(args) if err != nil { return nil, err } shared.Debugf("Posting %s to %s", buf.String(), uri) req, err := http.NewRequest("POST", uri, &buf) if err != nil { return nil, err } req.Header.Set("User-Agent", shared.UserAgent) req.Header.Set("Content-Type", "application/json") resp, err := c.Http.Do(req) if err != nil { return nil, err } return HoistResponse(resp, rtype) } func (c *Client) getRaw(uri string) (*http.Response, error) { req, err := http.NewRequest("GET", uri, nil) if err != nil { return nil, err } req.Header.Set("User-Agent", shared.UserAgent) raw, err := c.Http.Do(req) if err != nil { return nil, err } // because it is raw data, we need to check for http status if raw.StatusCode != 200 { resp, err := HoistResponse(raw, Sync) if err != nil { return nil, err } return nil, fmt.Errorf("expected error, got %v", *resp) } return raw, nil } func (c *Client) delete(base string, args interface{}, rtype ResponseType) (*Response, error) { uri := c.url(shared.APIVersion, base) buf := bytes.Buffer{} err := json.NewEncoder(&buf).Encode(args) if err != nil { return nil, err } shared.Debugf("Deleting %s to %s", buf.String(), uri) req, err := http.NewRequest("DELETE", uri, &buf) if err != nil { return nil, err } req.Header.Set("User-Agent", shared.UserAgent) req.Header.Set("Content-Type", "application/json") resp, err := c.Http.Do(req) if err != nil { return nil, err } return HoistResponse(resp, rtype) } func (c *Client) websocket(operation string, secret string) (*websocket.Conn, error) { query := url.Values{"secret": []string{secret}} url := c.BaseWSURL + path.Join(operation, "websocket") + "?" 
+ query.Encode() return WebsocketDial(c.websocketDialer, url) } func (c *Client) url(elem ...string) string { path := strings.Join(elem, "/") uri := c.BaseURL + "/" + path if strings.HasPrefix(path, "1.0/images/aliases") { return uri } if strings.Contains(path, "?") { return uri } return strings.TrimSuffix(uri, "/") } func (c *Client) GetServerConfig() (*Response, error) { if c.Remote.Protocol == "simplestreams" { return nil, fmt.Errorf("This function isn't supported by simplestreams remote.") } return c.baseGet(c.url(shared.APIVersion)) } func (c *Client) AmTrusted() bool { resp, err := c.GetServerConfig() if err != nil { return false } shared.Debugf("%s", resp) jmap, err := resp.MetadataAsMap() if err != nil { return false } auth, err := jmap.GetString("auth") if err != nil { return false } return auth == "trusted" } func (c *Client) IsPublic() bool { resp, err := c.GetServerConfig() if err != nil { return false } shared.Debugf("%s", resp) jmap, err := resp.MetadataAsMap() if err != nil { return false } public, err := jmap.GetBool("public") if err != nil { return false } return public } func (c *Client) ListContainers() ([]shared.ContainerInfo, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } resp, err := c.get("containers?recursion=1") if err != nil { return nil, err } var result []shared.ContainerInfo if err := json.Unmarshal(resp.Metadata, &result); err != nil { return nil, err } return result, nil } func (c *Client) CopyImage(image string, dest *Client, copy_aliases bool, aliases []string, public bool, autoUpdate bool, progressHandler func(progress string)) error { source := shared.Jmap{ "type": "image", "mode": "pull", "server": c.BaseURL, "protocol": c.Remote.Protocol, "certificate": c.Certificate, "fingerprint": image} target := c.GetAlias(image) if target != "" { image = target } info, err := c.GetImageInfo(image) if err != nil { return err } if c.Remote.Protocol != "simplestreams" && !info.Public 
{ var secret string resp, err := c.post("images/"+image+"/secret", nil, Async) if err != nil { return err } op, err := resp.MetadataAsOperation() if err != nil { return err } secret, err = op.Metadata.GetString("secret") if err != nil { return err } source["secret"] = secret source["fingerprint"] = image } addresses, err := c.Addresses() if err != nil { return err } operation := "" handler := func(msg interface{}) { if msg == nil { return } event := msg.(map[string]interface{}) if event["type"].(string) != "operation" { return } if event["metadata"] == nil { return } md := event["metadata"].(map[string]interface{}) if !strings.HasSuffix(operation, md["id"].(string)) { return } if md["metadata"] == nil { return } opMd := md["metadata"].(map[string]interface{}) _, ok := opMd["download_progress"] if ok { progressHandler(opMd["download_progress"].(string)) } } if progressHandler != nil { go dest.Monitor([]string{"operation"}, handler) } for _, addr := range addresses { sourceUrl := "https://" + addr source["server"] = sourceUrl body := shared.Jmap{"public": public, "auto_update": autoUpdate, "source": source} resp, err := dest.post("images", body, Async) if err != nil { continue } operation = resp.Operation err = dest.WaitForSuccess(resp.Operation) if err != nil { return err } break } if err != nil { return err } /* copy aliases from source image */ if copy_aliases { for _, alias := range info.Aliases { dest.DeleteAlias(alias.Name) err = dest.PostAlias(alias.Name, alias.Description, info.Fingerprint) if err != nil { return fmt.Errorf("Error adding alias %s: %s", alias.Name, err) } } } /* add new aliases */ for _, alias := range aliases { dest.DeleteAlias(alias) err = dest.PostAlias(alias, alias, info.Fingerprint) if err != nil { return fmt.Errorf("Error adding alias %s: %s\n", alias, err) } } return err } func (c *Client) ExportImage(image string, target string) (string, error) { if c.Remote.Protocol == "simplestreams" && c.simplestreams != nil { return 
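The progress handler above digs a `download_progress` string out of a deeply nested event map, bailing out at each level that is missing. A self-contained sketch of that extraction (`downloadProgress` is a hypothetical helper; the real handler additionally matches the operation id, which is omitted here):

```go
package main

import "fmt"

// downloadProgress mirrors the Monitor handler's type assertions: it
// returns ok=false whenever any level of the nested metadata is absent
// or of the wrong type, instead of panicking.
func downloadProgress(msg interface{}) (string, bool) {
	event, ok := msg.(map[string]interface{})
	if !ok || event["type"] != "operation" || event["metadata"] == nil {
		return "", false
	}
	md, ok := event["metadata"].(map[string]interface{})
	if !ok || md["metadata"] == nil {
		return "", false
	}
	opMd, ok := md["metadata"].(map[string]interface{})
	if !ok {
		return "", false
	}
	p, ok := opMd["download_progress"].(string)
	return p, ok
}

func main() {
	ev := map[string]interface{}{
		"type": "operation",
		"metadata": map[string]interface{}{
			"id": "op1",
			"metadata": map[string]interface{}{
				"download_progress": "42%",
			},
		},
	}
	p, ok := downloadProgress(ev)
	fmt.Println(p, ok) // 42% true
}
```

Checked extraction like this matters because event payloads arrive as `map[string]interface{}` from `json.Unmarshal`, and unchecked type assertions on them would crash the monitor goroutine.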
		c.simplestreams.ExportImage(image, target)
	}

	uri := c.url(shared.APIVersion, "images", image, "export")
	raw, err := c.getRaw(uri)
	if err != nil {
		return "", err
	}

	ctype, ctypeParams, err := mime.ParseMediaType(raw.Header.Get("Content-Type"))
	if err != nil {
		ctype = "application/octet-stream"
	}

	// Deal with split images
	if ctype == "multipart/form-data" {
		if !shared.IsDir(target) {
			return "", fmt.Errorf("Split images can only be written to a directory.")
		}

		// Parse the POST data
		mr := multipart.NewReader(raw.Body, ctypeParams["boundary"])

		// Get the metadata tarball
		part, err := mr.NextPart()
		if err != nil {
			return "", err
		}
		if part.FormName() != "metadata" {
			return "", fmt.Errorf("Invalid multipart image")
		}

		imageTarf, err := os.OpenFile(filepath.Join(target, part.FileName()), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
		if err != nil {
			return "", err
		}

		_, err = io.Copy(imageTarf, part)
		imageTarf.Close()
		if err != nil {
			return "", err
		}

		// Get the rootfs tarball
		part, err = mr.NextPart()
		if err != nil {
			return "", err
		}
		if part.FormName() != "rootfs" {
			return "", fmt.Errorf("Invalid multipart image")
		}

		rootfsTarf, err := os.OpenFile(filepath.Join(target, part.FileName()), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
		if err != nil {
			return "", err
		}

		_, err = io.Copy(rootfsTarf, part)
		rootfsTarf.Close()
		if err != nil {
			return "", err
		}

		return target, nil
	}

	// Deal with unified images
	var wr io.Writer
	var destpath string
	if target == "-" {
		wr = os.Stdout
		destpath = "stdout"
	} else if fi, err := os.Stat(target); err == nil {
		// file exists, so check if folder
		switch mode := fi.Mode(); {
		case mode.IsDir():
			// save in directory; the content-disposition header cannot
			// be empty and will carry a filename
			cd := strings.Split(raw.Header["Content-Disposition"][0], "=")

			// write filename from header
			destpath = filepath.Join(target, cd[1])
			f, err := os.Create(destpath)
			if err != nil {
				return "", err
			}
			defer f.Close()
			wr = f
		default:
			// overwrite file
			destpath = target
			f, err :=
os.OpenFile(destpath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600) defer f.Close() if err != nil { return "", err } wr = f } } else { // write as simple file destpath = target f, err := os.Create(destpath) defer f.Close() wr = f if err != nil { return "", err } } _, err = io.Copy(wr, raw.Body) if err != nil { return "", err } // it streams to stdout or file, so no response returned return destpath, nil } func (c *Client) PostImageURL(imageFile string, public bool, aliases []string) (string, error) { if c.Remote.Public { return "", fmt.Errorf("This function isn't supported by public remotes.") } source := shared.Jmap{ "type": "url", "mode": "pull", "url": imageFile} body := shared.Jmap{"public": public, "source": source} resp, err := c.post("images", body, Async) if err != nil { return "", err } op, err := c.WaitFor(resp.Operation) if err != nil { return "", err } if op.Metadata == nil { return "", fmt.Errorf("Missing operation metadata") } fingerprint, err := op.Metadata.GetString("fingerprint") if err != nil { return "", err } /* add new aliases */ for _, alias := range aliases { c.DeleteAlias(alias) err = c.PostAlias(alias, alias, fingerprint) if err != nil { return "", fmt.Errorf("Error adding alias %s: %s", alias, err) } } return fingerprint, nil } func (c *Client) PostImage(imageFile string, rootfsFile string, properties []string, public bool, aliases []string, progressHandler func(percent int)) (string, error) { if c.Remote.Public { return "", fmt.Errorf("This function isn't supported by public remotes.") } uri := c.url(shared.APIVersion, "images") var err error var fImage *os.File var fRootfs *os.File var req *http.Request if rootfsFile != "" { fImage, err = os.Open(imageFile) if err != nil { return "", err } defer fImage.Close() fRootfs, err = os.Open(rootfsFile) if err != nil { return "", err } defer fRootfs.Close() body, err := ioutil.TempFile("", "lxc_image_") if err != nil { return "", err } defer os.Remove(body.Name()) w := multipart.NewWriter(body) // 
Metadata file fw, err := w.CreateFormFile("metadata", path.Base(imageFile)) if err != nil { return "", err } _, err = io.Copy(fw, fImage) if err != nil { return "", err } // Rootfs file fw, err = w.CreateFormFile("rootfs", path.Base(rootfsFile)) if err != nil { return "", err } _, err = io.Copy(fw, fRootfs) if err != nil { return "", err } w.Close() size, err := body.Seek(0, 2) if err != nil { return "", err } _, err = body.Seek(0, 0) if err != nil { return "", err } progress := &shared.TransferProgress{Reader: body, Length: size, Handler: progressHandler} req, err = http.NewRequest("POST", uri, progress) req.Header.Set("Content-Type", w.FormDataContentType()) } else { fImage, err = os.Open(imageFile) if err != nil { return "", err } defer fImage.Close() stat, err := fImage.Stat() if err != nil { return "", err } progress := &shared.TransferProgress{Reader: fImage, Length: stat.Size(), Handler: progressHandler} req, err = http.NewRequest("POST", uri, progress) req.Header.Set("X-LXD-filename", filepath.Base(imageFile)) req.Header.Set("Content-Type", "application/octet-stream") } if err != nil { return "", err } req.Header.Set("User-Agent", shared.UserAgent) if public { req.Header.Set("X-LXD-public", "1") } else { req.Header.Set("X-LXD-public", "0") } if len(properties) != 0 { imgProps := url.Values{} for _, value := range properties { eqIndex := strings.Index(value, "=") // props must be in key=value format // if not, request will not be accepted if eqIndex > -1 { imgProps.Set(value[:eqIndex], value[eqIndex+1:]) } else { return "", fmt.Errorf("Bad image property: %s", value) } } req.Header.Set("X-LXD-properties", imgProps.Encode()) } raw, err := c.Http.Do(req) if err != nil { return "", err } resp, err := HoistResponse(raw, Async) if err != nil { return "", err } jmap, err := c.AsyncWaitMeta(resp) if err != nil { return "", err } fingerprint, err := jmap.GetString("fingerprint") if err != nil { return "", err } /* add new aliases */ for _, alias := range aliases { 
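The `X-LXD-properties` handling above requires every property to be in `key=value` form and ships the set as a single URL-encoded header value. A self-contained sketch (`encodeImageProperties` is a hypothetical helper mirroring that loop):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// encodeImageProperties mirrors the X-LXD-properties handling: each
// property must contain "=", and the resulting pairs are encoded into a
// single header-safe string via url.Values.
func encodeImageProperties(properties []string) (string, error) {
	imgProps := url.Values{}
	for _, value := range properties {
		eqIndex := strings.Index(value, "=")
		if eqIndex < 0 {
			// props must be in key=value format
			return "", fmt.Errorf("Bad image property: %s", value)
		}
		imgProps.Set(value[:eqIndex], value[eqIndex+1:])
	}
	return imgProps.Encode(), nil
}

func main() {
	h, err := encodeImageProperties([]string{"os=ubuntu", "release=xenial"})
	if err != nil {
		panic(err)
	}
	fmt.Println(h) // os=ubuntu&release=xenial
}
```

`url.Values.Encode` sorts keys and percent-escapes the values, so arbitrary property values survive the trip through an HTTP header.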
c.DeleteAlias(alias) err = c.PostAlias(alias, alias, fingerprint) if err != nil { return "", fmt.Errorf("Error adding alias %s: %s", alias, err) } } return fingerprint, nil } func (c *Client) GetImageInfo(image string) (*shared.ImageInfo, error) { if c.Remote.Protocol == "simplestreams" && c.simplestreams != nil { return c.simplestreams.GetImageInfo(image) } resp, err := c.get(fmt.Sprintf("images/%s", image)) if err != nil { return nil, err } info := shared.ImageInfo{} if err := json.Unmarshal(resp.Metadata, &info); err != nil { return nil, err } return &info, nil } func (c *Client) PutImageInfo(name string, p shared.BriefImageInfo) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } _, err := c.put(fmt.Sprintf("images/%s", name), p, Sync) return err } func (c *Client) ListImages() ([]shared.ImageInfo, error) { if c.Remote.Protocol == "simplestreams" && c.simplestreams != nil { return c.simplestreams.ListImages() } resp, err := c.get("images?recursion=1") if err != nil { return nil, err } var result []shared.ImageInfo if err := json.Unmarshal(resp.Metadata, &result); err != nil { return nil, err } return result, nil } func (c *Client) DeleteImage(image string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } resp, err := c.delete(fmt.Sprintf("images/%s", image), nil, Async) if err != nil { return err } return c.WaitForSuccess(resp.Operation) } func (c *Client) PostAlias(alias string, desc string, target string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{"description": desc, "target": target, "name": alias} _, err := c.post("images/aliases", body, Sync) return err } func (c *Client) DeleteAlias(alias string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } _, err := c.delete(fmt.Sprintf("images/aliases/%s", alias), nil, Sync) return err } 
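The split-image layout that `ExportImage` reads and `PostImage` writes is ordinary `multipart/form-data` with two parts named `metadata` and `rootfs`. This self-contained sketch round-trips such a body in memory using the standard `mime/multipart` package (`splitImageFields` is a hypothetical helper for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"mime/multipart"
)

// splitImageFields builds a split-image style multipart body, then reads
// it back and returns the form field names in order.
func splitImageFields() ([]string, error) {
	buf := &bytes.Buffer{}
	w := multipart.NewWriter(buf)
	for _, p := range []struct{ field, file, data string }{
		{"metadata", "meta.tar.xz", "metadata bytes"},
		{"rootfs", "rootfs.tar.xz", "rootfs bytes"},
	} {
		fw, err := w.CreateFormFile(p.field, p.file)
		if err != nil {
			return nil, err
		}
		if _, err := fw.Write([]byte(p.data)); err != nil {
			return nil, err
		}
	}
	w.Close()

	mr := multipart.NewReader(buf, w.Boundary())
	fields := []string{}
	for {
		part, err := mr.NextPart()
		if err != nil {
			break // io.EOF once both parts are consumed
		}
		if _, err := ioutil.ReadAll(part); err != nil {
			return nil, err
		}
		fields = append(fields, part.FormName())
	}
	return fields, nil
}

func main() {
	fields, err := splitImageFields()
	if err != nil {
		panic(err)
	}
	fmt.Println(fields) // [metadata rootfs]
}
```

Part order matters: `ExportImage` insists on `metadata` first and `rootfs` second, which is why it checks `FormName()` after each `NextPart` call rather than scanning for parts by name.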
func (c *Client) ListAliases() (shared.ImageAliases, error) { if c.Remote.Protocol == "simplestreams" && c.simplestreams != nil { return c.simplestreams.ListAliases() } resp, err := c.get("images/aliases?recursion=1") if err != nil { return nil, err } var result shared.ImageAliases if err := json.Unmarshal(resp.Metadata, &result); err != nil { return nil, err } return result, nil } func (c *Client) CertificateList() ([]shared.CertInfo, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } resp, err := c.get("certificates?recursion=1") if err != nil { return nil, err } var result []shared.CertInfo if err := json.Unmarshal(resp.Metadata, &result); err != nil { return nil, err } return result, nil } func (c *Client) AddMyCertToServer(pwd string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{"type": "client", "password": pwd} _, err := c.post("certificates", body, Sync) return err } func (c *Client) CertificateAdd(cert *x509.Certificate, name string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } b64 := base64.StdEncoding.EncodeToString(cert.Raw) _, err := c.post("certificates", shared.Jmap{"type": "client", "certificate": b64, "name": name}, Sync) return err } func (c *Client) CertificateRemove(fingerprint string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } _, err := c.delete(fmt.Sprintf("certificates/%s", fingerprint), nil, Sync) return err } func (c *Client) IsAlias(alias string) (bool, error) { _, err := c.get(fmt.Sprintf("images/aliases/%s", alias)) if err != nil { if err == LXDErrors[http.StatusNotFound] { return false, nil } return false, err } return true, nil } func (c *Client) GetAlias(alias string) string { if c.Remote.Protocol == "simplestreams" && c.simplestreams != nil { return c.simplestreams.GetAlias(alias) } resp, err 
:= c.get(fmt.Sprintf("images/aliases/%s", alias)) if err != nil { return "" } if resp.Type == Error { return "" } var result shared.ImageAliasesEntry if err := json.Unmarshal(resp.Metadata, &result); err != nil { return "" } return result.Target } // Init creates a container from either a fingerprint or an alias; you must // provide at least one. func (c *Client) Init(name string, imgremote string, image string, profiles *[]string, config map[string]string, ephem bool) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } var tmpremote *Client var err error serverStatus, err := c.ServerStatus() if err != nil { return nil, err } architectures := serverStatus.Environment.Architectures source := shared.Jmap{"type": "image"} if image == "" { image = "default" } if imgremote != c.Name { source["type"] = "image" source["mode"] = "pull" tmpremote, err = NewClient(&c.Config, imgremote) if err != nil { return nil, err } if tmpremote.Remote.Protocol != "simplestreams" { target := tmpremote.GetAlias(image) if target == "" { target = image } imageinfo, err := tmpremote.GetImageInfo(target) if err != nil { return nil, err } if len(architectures) != 0 && !shared.StringInSlice(imageinfo.Architecture, architectures) { return nil, fmt.Errorf("The image architecture is incompatible with the target server") } if !imageinfo.Public { var secret string image = target resp, err := tmpremote.post("images/"+image+"/secret", nil, Async) if err != nil { return nil, err } op, err := resp.MetadataAsOperation() if err != nil { return nil, err } secret, err = op.Metadata.GetString("secret") if err != nil { return nil, err } source["secret"] = secret } } source["server"] = tmpremote.BaseURL source["protocol"] = tmpremote.Remote.Protocol source["certificate"] = tmpremote.Certificate source["fingerprint"] = image } else { fingerprint := c.GetAlias(image) if fingerprint == "" { fingerprint = image } imageinfo, err := 
c.GetImageInfo(fingerprint) if err != nil { return nil, fmt.Errorf("can't get info for image '%s': %s", image, err) } if len(architectures) != 0 && !shared.StringInSlice(imageinfo.Architecture, architectures) { return nil, fmt.Errorf("The image architecture is incompatible with the target server") } source["fingerprint"] = fingerprint } body := shared.Jmap{"source": source} if name != "" { body["name"] = name } if profiles != nil { body["profiles"] = *profiles } if config != nil { body["config"] = config } if ephem { body["ephemeral"] = ephem } var resp *Response if imgremote != c.Name { var addresses []string addresses, err = tmpremote.Addresses() if err != nil { return nil, err } for _, addr := range addresses { body["source"].(shared.Jmap)["server"] = "https://" + addr resp, err = c.post("containers", body, Async) if err != nil { continue } break } } else { resp, err = c.post("containers", body, Async) } if err != nil { if LXDErrors[http.StatusNotFound] == err { return nil, fmt.Errorf("image doesn't exist") } return nil, err } return resp, nil } func (c *Client) LocalCopy(source string, name string, config map[string]string, profiles []string, ephemeral bool) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{ "source": shared.Jmap{ "type": "copy", "source": source, }, "name": name, "config": config, "profiles": profiles, "ephemeral": ephemeral, } return c.post("containers", body, Async) } func (c *Client) Monitor(types []string, handler func(interface{})) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } url := c.BaseWSURL + path.Join("/", "1.0", "events") if len(types) != 0 { url += "?type=" + strings.Join(types, ",") } conn, err := WebsocketDial(c.websocketDialer, url) if err != nil { return err } defer conn.Close() for { message := make(map[string]interface{}) _, data, err := conn.ReadMessage() if err != nil { return err 
} err = json.Unmarshal(data, &message) if err != nil { return err } handler(message) } } // Exec runs a command inside the LXD container. For "interactive" use such as // `lxc exec ...`, one should pass a controlHandler that talks over the control // socket and handles things like SIGWINCH. If running non-interactive, passing // a nil controlHandler will cause Exec to return when all of the command // output is sent to the output buffers. func (c *Client) Exec(name string, cmd []string, env map[string]string, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser, controlHandler func(*Client, *websocket.Conn), width int, height int) (int, error) { if c.Remote.Public { return -1, fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{ "command": cmd, "wait-for-websocket": true, "interactive": controlHandler != nil, "environment": env, } if width > 0 && height > 0 { body["width"] = width body["height"] = height } resp, err := c.post(fmt.Sprintf("containers/%s/exec", name), body, Async) if err != nil { return -1, err } var fds shared.Jmap op, err := resp.MetadataAsOperation() if err != nil { return -1, err } fds, err = op.Metadata.GetMap("fds") if err != nil { return -1, err } if controlHandler != nil { var control *websocket.Conn if wsControl, ok := fds["control"]; ok { control, err = c.websocket(resp.Operation, wsControl.(string)) if err != nil { return -1, err } defer control.Close() go controlHandler(c, control) } conn, err := c.websocket(resp.Operation, fds["0"].(string)) if err != nil { return -1, err } shared.WebsocketSendStream(conn, stdin) <-shared.WebsocketRecvStream(stdout, conn) conn.Close() } else { conns := make([]*websocket.Conn, 3) dones := make([]chan bool, 3) conns[0], err = c.websocket(resp.Operation, fds[strconv.Itoa(0)].(string)) if err != nil { return -1, err } defer conns[0].Close() dones[0] = shared.WebsocketSendStream(conns[0], stdin) outputs := []io.WriteCloser{stdout, stderr} for i := 1; i < 3; i++ { 
conns[i], err = c.websocket(resp.Operation, fds[strconv.Itoa(i)].(string)) if err != nil { return -1, err } defer conns[i].Close() dones[i] = shared.WebsocketRecvStream(outputs[i-1], conns[i]) } /* * We'll get a read signal from each of stdout, stderr when they've * both died. We need to wait for these in addition to the operation, * because the server may indicate that the operation is done before we * can actually read the last bits of data off these sockets and print * it to the screen. * * We don't wait for stdin here, because if we're interactive, the user * may not have closed it (e.g. if the command exits but the user * didn't ^D). */ for i := 1; i < 3; i++ { <-dones[i] } // Once we're done, we explicitly close stdin, to signal the websockets // we're done. stdin.Close() } // Now, get the operation's status too. op, err = c.WaitFor(resp.Operation) if err != nil { return -1, err } if op.StatusCode == shared.Failure { return -1, fmt.Errorf(op.Err) } if op.StatusCode != shared.Success { return -1, fmt.Errorf("got bad op status %s", op.Status) } if op.Metadata == nil { return -1, fmt.Errorf("no metadata received") } return op.Metadata.GetInt("return") } func (c *Client) Action(name string, action shared.ContainerAction, timeout int, force bool, stateful bool) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{ "action": action, "timeout": timeout, "force": force} if shared.StringInSlice(string(action), []string{"start", "stop"}) { body["stateful"] = stateful } return c.put(fmt.Sprintf("containers/%s/state", name), body, Async) } func (c *Client) Delete(name string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } var url string s := strings.SplitN(name, "/", 2) if len(s) == 2 { url = fmt.Sprintf("containers/%s/snapshots/%s", s[0], s[1]) } else { url = fmt.Sprintf("containers/%s", name) } return 
c.delete(url, nil, Async) } func (c *Client) ServerStatus() (*shared.ServerState, error) { ss := shared.ServerState{} resp, err := c.GetServerConfig() if err != nil { return nil, err } if err := json.Unmarshal(resp.Metadata, &ss); err != nil { return nil, err } return &ss, nil } func (c *Client) ContainerInfo(name string) (*shared.ContainerInfo, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } ct := shared.ContainerInfo{} resp, err := c.get(fmt.Sprintf("containers/%s", name)) if err != nil { return nil, err } if err := json.Unmarshal(resp.Metadata, &ct); err != nil { return nil, err } return &ct, nil } func (c *Client) ContainerState(name string) (*shared.ContainerState, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } ct := shared.ContainerState{} resp, err := c.get(fmt.Sprintf("containers/%s/state", name)) if err != nil { return nil, err } if err := json.Unmarshal(resp.Metadata, &ct); err != nil { return nil, err } return &ct, nil } func (c *Client) GetLog(container string, log string) (io.Reader, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } uri := c.url(shared.APIVersion, "containers", container, "logs", log) resp, err := c.getRaw(uri) if err != nil { return nil, err } return resp.Body, nil } func (c *Client) ProfileConfig(name string) (*shared.ProfileConfig, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } ct := shared.ProfileConfig{} resp, err := c.get(fmt.Sprintf("profiles/%s", name)) if err != nil { return nil, err } if err := json.Unmarshal(resp.Metadata, &ct); err != nil { return nil, err } return &ct, nil } func (c *Client) PushFile(container string, p string, gid int, uid int, mode os.FileMode, buf io.ReadSeeker) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } query := 
url.Values{"path": []string{p}} uri := c.url(shared.APIVersion, "containers", container, "files") + "?" + query.Encode() req, err := http.NewRequest("POST", uri, buf) if err != nil { return err } req.Header.Set("User-Agent", shared.UserAgent) req.Header.Set("X-LXD-mode", fmt.Sprintf("%04o", mode.Perm())) req.Header.Set("X-LXD-uid", strconv.FormatUint(uint64(uid), 10)) req.Header.Set("X-LXD-gid", strconv.FormatUint(uint64(gid), 10)) raw, err := c.Http.Do(req) if err != nil { return err } _, err = HoistResponse(raw, Sync) return err } func (c *Client) PullFile(container string, p string) (int, int, int, io.ReadCloser, error) { if c.Remote.Public { return 0, 0, 0, nil, fmt.Errorf("This function isn't supported by public remotes.") } uri := c.url(shared.APIVersion, "containers", container, "files") query := url.Values{"path": []string{p}} r, err := c.getRaw(uri + "?" + query.Encode()) if err != nil { return 0, 0, 0, nil, err } uid, gid, mode := shared.ParseLXDFileHeaders(r.Header) return uid, gid, mode, r.Body, nil } func (c *Client) GetMigrationSourceWS(container string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{"migration": true} url := fmt.Sprintf("containers/%s", container) if shared.IsSnapshot(container) { pieces := strings.SplitN(container, shared.SnapshotDelimiter, 2) if len(pieces) != 2 { return nil, fmt.Errorf("invalid snapshot name %s", container) } url = fmt.Sprintf("containers/%s/snapshots/%s", pieces[0], pieces[1]) } return c.post(url, body, Async) } func (c *Client) MigrateFrom(name string, operation string, certificate string, secrets map[string]string, architecture string, config map[string]string, devices shared.Devices, profiles []string, baseImage string, ephemeral bool) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } source := shared.Jmap{ "type": "migration", "mode": "pull", 
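`PushFile` above encodes file ownership and permissions into `X-LXD-*` request headers: the mode as four octal digits, uid and gid as decimal strings. A self-contained sketch (`pushFileHeaders` is a hypothetical helper mirroring those `Header.Set` calls):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// pushFileHeaders mirrors the X-LXD-* headers PushFile sets: the
// permission bits as zero-padded octal, uid/gid as decimal strings.
func pushFileHeaders(uid, gid int, mode os.FileMode) map[string]string {
	return map[string]string{
		"X-LXD-mode": fmt.Sprintf("%04o", mode.Perm()),
		"X-LXD-uid":  strconv.FormatUint(uint64(uid), 10),
		"X-LXD-gid":  strconv.FormatUint(uint64(gid), 10),
	}
}

func main() {
	h := pushFileHeaders(1000, 1000, 0644)
	fmt.Println(h["X-LXD-mode"], h["X-LXD-uid"], h["X-LXD-gid"]) // 0644 1000 1000
}
```

Using `mode.Perm()` strips any file-type bits, so only the permission portion of the mode crosses the wire.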
"operation": operation, "certificate": certificate, "secrets": secrets, "base-image": baseImage, } body := shared.Jmap{ "architecture": architecture, "config": config, "devices": devices, "ephemeral": ephemeral, "name": name, "profiles": profiles, "source": source, } return c.post("containers", body, Async) } func (c *Client) Rename(name string, newName string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } oldNameParts := strings.SplitN(name, "/", 2) newNameParts := strings.SplitN(newName, "/", 2) if len(oldNameParts) != len(newNameParts) { return nil, fmt.Errorf("Attempting to rename container to snapshot or vice versa.") } if len(oldNameParts) == 1 { body := shared.Jmap{"name": newName} return c.post(fmt.Sprintf("containers/%s", name), body, Async) } if oldNameParts[0] != newNameParts[0] { return nil, fmt.Errorf("Attempting to rename snapshot of one container into a snapshot of another container.") } body := shared.Jmap{"name": newNameParts[1]} return c.post(fmt.Sprintf("containers/%s/snapshots/%s", oldNameParts[0], oldNameParts[1]), body, Async) } /* Wait for an operation */ func (c *Client) WaitFor(waitURL string) (*shared.Operation, error) { if len(waitURL) < 1 { return nil, fmt.Errorf("invalid wait url %s", waitURL) } /* For convenience, waitURL is expected to be in the form of a * Response.Operation string, i.e. it already has * "//operations/" in it; we chop off the leading / and pass * it to url directly. 
*/ shared.Debugf(path.Join(waitURL[1:], "wait")) resp, err := c.baseGet(c.url(waitURL, "wait")) if err != nil { return nil, err } return resp.MetadataAsOperation() } func (c *Client) WaitForSuccess(waitURL string) error { op, err := c.WaitFor(waitURL) if err != nil { return err } if op.StatusCode == shared.Success { return nil } return fmt.Errorf(op.Err) } func (c *Client) RestoreSnapshot(container string, snapshotName string, stateful bool) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{"restore": snapshotName, "stateful": stateful} return c.put(fmt.Sprintf("containers/%s", container), body, Async) } func (c *Client) Snapshot(container string, snapshotName string, stateful bool) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{"name": snapshotName, "stateful": stateful} return c.post(fmt.Sprintf("containers/%s/snapshots", container), body, Async) } func (c *Client) ListSnapshots(container string) ([]shared.SnapshotInfo, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } qUrl := fmt.Sprintf("containers/%s/snapshots?recursion=1", container) resp, err := c.get(qUrl) if err != nil { return nil, err } var result []shared.SnapshotInfo if err := json.Unmarshal(resp.Metadata, &result); err != nil { return nil, err } return result, nil } func (c *Client) SnapshotInfo(snapName string) (*shared.SnapshotInfo, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } pieces := strings.SplitN(snapName, shared.SnapshotDelimiter, 2) if len(pieces) != 2 { return nil, fmt.Errorf("invalid snapshot name %s", snapName) } qUrl := fmt.Sprintf("containers/%s/snapshots/%s", pieces[0], pieces[1]) resp, err := c.get(qUrl) if err != nil { return nil, err } var result shared.SnapshotInfo if err := 
json.Unmarshal(resp.Metadata, &result); err != nil { return nil, err } return &result, nil } func (c *Client) GetServerConfigString() ([]string, error) { var resp []string ss, err := c.ServerStatus() if err != nil { return resp, err } if ss.Auth == "untrusted" { return resp, nil } if len(ss.Config) == 0 { resp = append(resp, "No config variables set.") } for k, v := range ss.Config { resp = append(resp, fmt.Sprintf("%s = %v", k, v)) } return resp, nil } func (c *Client) SetServerConfig(key string, value string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } ss, err := c.ServerStatus() if err != nil { return nil, err } ss.Config[key] = value return c.put("", ss, Sync) } func (c *Client) UpdateServerConfig(ss shared.BriefServerState) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } return c.put("", ss, Sync) } /* * return string array representing a container's full configuration */ func (c *Client) GetContainerConfig(container string) ([]string, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } var resp []string st, err := c.ContainerInfo(container) if err != nil { return resp, err } profiles := strings.Join(st.Profiles, ",") pstr := fmt.Sprintf("Profiles: %s", profiles) resp = append(resp, pstr) for k, v := range st.Config { str := fmt.Sprintf("%s = %s", k, v) resp = append(resp, str) } return resp, nil } func (c *Client) SetContainerConfig(container, key, value string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ContainerInfo(container) if err != nil { return err } if value == "" { delete(st.Config, key) } else { st.Config[key] = value } /* * Although container config is an async operation (we PUT to restore a * snapshot), we expect config to be a sync operation, so let's just * handle it here. 
*/ resp, err := c.put(fmt.Sprintf("containers/%s", container), st, Async) if err != nil { return err } return c.WaitForSuccess(resp.Operation) } func (c *Client) UpdateContainerConfig(container string, st shared.BriefContainerInfo) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } resp, err := c.put(fmt.Sprintf("containers/%s", container), st, Async) if err != nil { return err } return c.WaitForSuccess(resp.Operation) } func (c *Client) ProfileCreate(p string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } body := shared.Jmap{"name": p} _, err := c.post("profiles", body, Sync) return err } func (c *Client) ProfileDelete(p string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } _, err := c.delete(fmt.Sprintf("profiles/%s", p), nil, Sync) return err } func (c *Client) GetProfileConfig(profile string) (map[string]string, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ProfileConfig(profile) if err != nil { return nil, err } return st.Config, nil } func (c *Client) SetProfileConfigItem(profile, key, value string) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ProfileConfig(profile) if err != nil { shared.Debugf("Error getting profile %s to update", profile) return err } if value == "" { delete(st.Config, key) } else { st.Config[key] = value } _, err = c.put(fmt.Sprintf("profiles/%s", profile), st, Sync) return err } func (c *Client) PutProfile(name string, profile shared.ProfileConfig) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } if profile.Name != name { return fmt.Errorf("Cannot change profile name") } _, err := c.put(fmt.Sprintf("profiles/%s", name), profile, Sync) return err } func (c *Client) ListProfiles() ([]string, 
error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } resp, err := c.get("profiles") if err != nil { return nil, err } var result []string if err := json.Unmarshal(resp.Metadata, &result); err != nil { return nil, err } names := []string{} for _, url := range result { toScan := strings.Replace(url, "/", " ", -1) version := "" name := "" count, err := fmt.Sscanf(toScan, " %s profiles %s", &version, &name) if err != nil { return nil, err } if count != 2 { return nil, fmt.Errorf("bad profile url %s", url) } if version != shared.APIVersion { return nil, fmt.Errorf("bad version in profile url") } names = append(names, name) } return names, nil } func (c *Client) ApplyProfile(container, profile string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ContainerInfo(container) if err != nil { return nil, err } st.Profiles = strings.Split(profile, ",") return c.put(fmt.Sprintf("containers/%s", container), st, Async) } func (c *Client) ContainerDeviceDelete(container, devname string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ContainerInfo(container) if err != nil { return nil, err } delete(st.Devices, devname) return c.put(fmt.Sprintf("containers/%s", container), st, Async) } func (c *Client) ContainerDeviceAdd(container, devname, devtype string, props []string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ContainerInfo(container) if err != nil { return nil, err } newdev := shared.Device{} for _, p := range props { results := strings.SplitN(p, "=", 2) if len(results) != 2 { return nil, fmt.Errorf("no value found in %q", p) } k := results[0] v := results[1] newdev[k] = v } if st.Devices != nil && st.Devices.ContainsName(devname) { return nil, fmt.Errorf("device already 
exists") } newdev["type"] = devtype if st.Devices == nil { st.Devices = shared.Devices{} } st.Devices[devname] = newdev return c.put(fmt.Sprintf("containers/%s", container), st, Async) } func (c *Client) ContainerListDevices(container string) ([]string, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ContainerInfo(container) if err != nil { return nil, err } devs := []string{} for n, d := range st.Devices { devs = append(devs, fmt.Sprintf("%s: %s", n, d["type"])) } return devs, nil } func (c *Client) ProfileDeviceDelete(profile, devname string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ProfileConfig(profile) if err != nil { return nil, err } for n, _ := range st.Devices { if n == devname { delete(st.Devices, n) } } return c.put(fmt.Sprintf("profiles/%s", profile), st, Sync) } func (c *Client) ProfileDeviceAdd(profile, devname, devtype string, props []string) (*Response, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ProfileConfig(profile) if err != nil { return nil, err } newdev := shared.Device{} for _, p := range props { results := strings.SplitN(p, "=", 2) if len(results) != 2 { return nil, fmt.Errorf("no value found in %q", p) } k := results[0] v := results[1] newdev[k] = v } if st.Devices != nil && st.Devices.ContainsName(devname) { return nil, fmt.Errorf("device already exists") } newdev["type"] = devtype if st.Devices == nil { st.Devices = shared.Devices{} } st.Devices[devname] = newdev return c.put(fmt.Sprintf("profiles/%s", profile), st, Sync) } func (c *Client) ProfileListDevices(profile string) ([]string, error) { if c.Remote.Public { return nil, fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ProfileConfig(profile) if err != nil { return nil, err } devs := []string{} for n, d := range 
st.Devices { devs = append(devs, fmt.Sprintf("%s: %s", n, d["type"])) } return devs, nil } // WebsocketDial attempts to dial a websocket to a LXD instance, parsing // LXD-style errors and returning them as go errors. func WebsocketDial(dialer websocket.Dialer, url string) (*websocket.Conn, error) { conn, raw, err := dialer.Dial(url, http.Header{}) if err != nil { _, err2 := HoistResponse(raw, Error) if err2 != nil { /* The response isn't one we understand, so return * whatever the original error was. */ return nil, err } return nil, err } return conn, err } func (c *Client) ProfileCopy(name, newname string, dest *Client) error { if c.Remote.Public { return fmt.Errorf("This function isn't supported by public remotes.") } st, err := c.ProfileConfig(name) if err != nil { return err } body := shared.Jmap{"config": st.Config, "name": newname, "devices": st.Devices} _, err = dest.post("profiles", body, Sync) return err } func (c *Client) AsyncWaitMeta(resp *Response) (*shared.Jmap, error) { op, err := c.WaitFor(resp.Operation) if err != nil { return nil, err } if op.StatusCode == shared.Failure { return nil, fmt.Errorf(op.Err) } if op.StatusCode != shared.Success { return nil, fmt.Errorf("got bad op status %s", op.Status) } return op.Metadata, nil } func (c *Client) ImageFromContainer(cname string, public bool, aliases []string, properties map[string]string) (string, error) { if c.Remote.Public { return "", fmt.Errorf("This function isn't supported by public remotes.") } source := shared.Jmap{"type": "container", "name": cname} if shared.IsSnapshot(cname) { source["type"] = "snapshot" } body := shared.Jmap{"public": public, "source": source, "properties": properties} resp, err := c.post("images", body, Async) if err != nil { return "", err } jmap, err := c.AsyncWaitMeta(resp) if err != nil { return "", err } fingerprint, err := jmap.GetString("fingerprint") if err != nil { return "", err } /* add new aliases */ for _, alias := range aliases { c.DeleteAlias(alias) err = 
c.PostAlias(alias, alias, fingerprint)
		if err != nil {
			return "", fmt.Errorf("Error adding alias %s: %s", alias, err)
		}
	}

	return fingerprint, nil
}
lxd-2.0.2/config.go000066400000000000000000000104601272140510300140720ustar00rootroot00000000000000package lxd

import (
	"fmt"
	"io/ioutil"
	"os"
	"path"
	"path/filepath"
	"strings"

	"gopkg.in/yaml.v2"

	"github.com/lxc/lxd/shared"
)

// Config holds settings to be used by a client or daemon.
type Config struct {
	// DefaultRemote holds the remote daemon name from the Remotes map
	// that the client should communicate with by default.
	// If empty it defaults to "local".
	DefaultRemote string `yaml:"default-remote"`

	// Remotes defines a map of remote daemon names to the details for
	// communication with the named daemon.
	// The implicit "local" remote is always available and communicates
	// with the local daemon over a unix socket.
	Remotes map[string]RemoteConfig `yaml:"remotes"`

	// Command line aliases for `lxc`
	Aliases map[string]string `yaml:"aliases"`

	// This is the path to the config directory, so the client can find
	// previously stored server certs, give good error messages, and save
	// new server certs, etc.
	//
	// We don't need to store it, because of course once we've loaded this
	// structure we already know where it is :)
	ConfigDir string `yaml:"-"`
}

// RemoteConfig holds details for communication with a remote daemon.
type RemoteConfig struct {
	Addr     string `yaml:"addr"`
	Public   bool   `yaml:"public"`
	Protocol string `yaml:"protocol,omitempty"`
	Static   bool   `yaml:"-"`
}

var LocalRemote = RemoteConfig{
	Addr:   "unix://",
	Static: true,
	Public: false}

var ImagesRemote = RemoteConfig{
	Addr:   "https://images.linuxcontainers.org",
	Public: true}

var UbuntuRemote = RemoteConfig{
	Addr:     "https://cloud-images.ubuntu.com/releases",
	Static:   true,
	Public:   true,
	Protocol: "simplestreams"}

var UbuntuDailyRemote = RemoteConfig{
	Addr:     "https://cloud-images.ubuntu.com/daily",
	Static:   true,
	Public:   true,
	Protocol: "simplestreams"}

var StaticRemotes = map[string]RemoteConfig{
	"local":        LocalRemote,
	"ubuntu":       UbuntuRemote,
	"ubuntu-daily": UbuntuDailyRemote}

var DefaultRemotes = map[string]RemoteConfig{
	"images":       ImagesRemote,
	"local":        LocalRemote,
	"ubuntu":       UbuntuRemote,
	"ubuntu-daily": UbuntuDailyRemote}

var DefaultConfig = Config{
	Remotes:       DefaultRemotes,
	DefaultRemote: "local",
	Aliases:       map[string]string{},
}

// LoadConfig reads the configuration from the config path; if the path does
// not exist, it returns a default configuration.
func LoadConfig(path string) (*Config, error) {
	data, err := ioutil.ReadFile(path)
	if os.IsNotExist(err) {
		// A missing file is equivalent to the default configuration.
		withPath := DefaultConfig
		withPath.ConfigDir = filepath.Dir(path)
		return &withPath, nil
	}
	if err != nil {
		return nil, fmt.Errorf("cannot read config file: %v", err)
	}

	var c Config
	err = yaml.Unmarshal(data, &c)
	if err != nil {
		return nil, fmt.Errorf("cannot parse configuration: %v", err)
	}
	if c.Remotes == nil {
		c.Remotes = make(map[string]RemoteConfig)
	}
	c.ConfigDir = filepath.Dir(path)

	for k, v := range StaticRemotes {
		c.Remotes[k] = v
	}

	return &c, nil
}

// SaveConfig writes the provided configuration to the config file.
func SaveConfig(c *Config, fname string) error {
	for k := range StaticRemotes {
		delete(c.Remotes, k)
	}

	// Ignore errors on these two calls. Create will report any problems.
	os.Remove(fname + ".new")
	os.Mkdir(filepath.Dir(fname), 0700)

	f, err := os.Create(fname + ".new")
	if err != nil {
		return fmt.Errorf("cannot create config file: %v", err)
	}

	// If there are any errors, do not leave it around.
	defer f.Close()
	defer os.Remove(fname + ".new")

	data, err := yaml.Marshal(c)
	if err != nil {
		return fmt.Errorf("cannot marshal configuration: %v", err)
	}

	_, err = f.Write(data)
	if err != nil {
		return fmt.Errorf("cannot write configuration: %v", err)
	}

	f.Close()

	err = shared.FileMove(fname+".new", fname)
	if err != nil {
		return fmt.Errorf("cannot rename temporary config file: %v", err)
	}
	return nil
}

func (c *Config) ParseRemoteAndContainer(raw string) (string, string) {
	result := strings.SplitN(raw, ":", 2)
	if len(result) == 1 {
		return c.DefaultRemote, result[0]
	}
	return result[0], result[1]
}

func (c *Config) ParseRemote(raw string) string {
	return strings.SplitN(raw, ":", 2)[0]
}

func (c *Config) ConfigPath(file string) string {
	return path.Join(c.ConfigDir, file)
}

func (c *Config) ServerCertPath(name string) string {
	return path.Join(c.ConfigDir, "servercerts", fmt.Sprintf("%s.crt", name))
}
lxd-2.0.2/config/000077500000000000000000000000001272140510300135425ustar00rootroot00000000000000lxd-2.0.2/config/bash/000077500000000000000000000000001272140510300144575ustar00rootroot00000000000000lxd-2.0.2/config/bash/lxd-client000066400000000000000000000045211272140510300164470ustar00rootroot00000000000000_have lxc && {
_lxd_complete()
{
    _lxd_names()
    {
        COMPREPLY=( $( compgen -W \
            "$( lxc list --fast | tail -n +4 | awk '{print $2}' | egrep -v '^(\||^$)' )" "$cur" ) )
    }

    _lxd_images()
    {
        COMPREPLY=( $( compgen -W \
            "$( lxc image list | tail -n +4 | awk '{print $2}' | egrep -v '^(\||^$)' )" "$cur" ) )
    }

    local cur prev

    COMPREPLY=()
    cur=${COMP_WORDS[COMP_CWORD]}
    prev=${COMP_WORDS[COMP_CWORD-1]}
    lxc_cmds="config copy delete exec file help image info init launch \
        list move profile publish remote restart restore snapshot start stop \
        version"

    if [ $COMP_CWORD -eq 1 ]; then
        COMPREPLY=( $(compgen -W "$lxc_cmds" -- $cur) )
    elif [
$COMP_CWORD -eq 2 ]; then
        case "$prev" in
            "config")
                COMPREPLY=( $(compgen -W "device edit get set show trust" -- $cur) )
                ;;
            "copy")
                _lxd_names
                ;;
            "delete")
                _lxd_names
                ;;
            "exec")
                _lxd_names
                ;;
            "file")
                COMPREPLY=( $(compgen -W "pull push edit" -- $cur) )
                ;;
            "help")
                COMPREPLY=( $(compgen -W "$lxc_cmds" -- $cur) )
                ;;
            "image")
                COMPREPLY=( $(compgen -W "import copy delete edit export info list show alias" -- $cur) )
                ;;
            "info")
                _lxd_names
                ;;
            "init")
                _lxd_images
                ;;
            "launch")
                _lxd_images
                ;;
            "move")
                _lxd_names
                ;;
            "profile")
                COMPREPLY=( $(compgen -W \
                    "list show create edit copy get set delete apply device" -- $cur) )
                ;;
            "publish")
                _lxd_names
                ;;
            "remote")
                COMPREPLY=( $(compgen -W \
                    "add remove list rename set-url set-default get-default" -- $cur) )
                ;;
            "restart")
                _lxd_names
                ;;
            "restore")
                _lxd_names
                ;;
            "snapshot")
                _lxd_names
                ;;
            "start")
                # should check if containers are stopped
                _lxd_names
                ;;
            "stop")
                # should check if containers are started
                _lxd_names
                ;;
            *)
                ;;
        esac
    fi

    return 0
}

complete -o default -F _lxd_complete
}
lxd-2.0.2/doc/000077500000000000000000000000001272140510300130425ustar00rootroot00000000000000lxd-2.0.2/doc/architectures.md000066400000000000000000000027071272140510300162370ustar00rootroot00000000000000# Introduction

LXD, just like LXC, can run on just about any architecture that's supported
by the Linux kernel and by Go.

Some objects in LXD are tied to an architecture, such as containers,
container snapshots and images.

This document lists all the supported architectures, their unique
identifier (used in the database), how they should be named and some
notes.

Please note that what LXD cares about is the kernel architecture, not
the particular userspace flavor as determined by the toolchain.

That means that LXD considers armv7 hard-float to be the same as armv7
soft-float and refers to both as "armv7". If useful to the user, the
exact userspace ABI may be set as an image and container property,
allowing easy query.
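The hard-float/soft-float collapsing described above amounts to a small normalization lookup. The sketch below is purely illustrative — the function name and alias set are assumptions, not LXD's actual implementation:

```go
package main

import "fmt"

// normalizeArchitecture collapses kernel/toolchain machine names into a
// single canonical architecture name, the way LXD treats armv7
// hard-float and soft-float as the same architecture. The alias set
// here is a hypothetical sample, not an exhaustive list.
func normalizeArchitecture(machine string) string {
	aliases := map[string]string{
		"armhf": "armv7l", // ARM hard-float userspace
		"armel": "armv7l", // ARM soft-float userspace
		"amd64": "x86_64",
		"arm64": "aarch64",
	}
	if canonical, ok := aliases[machine]; ok {
		return canonical
	}
	// Anything else is assumed to already be a kernel architecture name.
	return machine
}

func main() {
	fmt.Println(normalizeArchitecture("armhf")) // armv7l
	fmt.Println(normalizeArchitecture("s390x")) // s390x
}
```

The exact userspace ABI, if it matters, would then be carried separately as an image or container property rather than in the architecture name.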
# Architectures

ID   | Name     | Notes                       | Personalities
:--- | :---     | :----                       | :------------
1    | i686     | 32bit Intel x86             |
2    | x86\_64  | 64bit Intel x86             | x86
3    | armv7l   | 32bit ARMv7 little-endian   |
4    | aarch64  | 64bit ARMv8 little-endian   | armv7 (optional)
5    | ppc      | 32bit PowerPC big-endian    |
6    | ppc64    | 64bit PowerPC big-endian    | powerpc
7    | ppc64le  | 64bit PowerPC little-endian |
8    | s390x    | 64bit ESA/390 big-endian    |

The architecture names above are typically aligned with the Linux kernel
architecture names.
lxd-2.0.2/doc/configuration.md000066400000000000000000000450431272140510300162410ustar00rootroot00000000000000# Introduction

LXD currently stores two kinds of configuration:

 - Server configuration (the LXD daemon itself)
 - Container configuration

The server configuration is a simple set of keys and values.

The container configuration is a bit more complex as it uses both
key/value configuration and some more complex configuration structures
for devices, network interfaces and storage volumes.
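Server keys use dotted, namespaced names (e.g. core.https\_address, as detailed in the next section). A minimal, illustrative sketch of splitting such a key into namespace and name — the helper is an assumption for clarity, not LXD's real validation code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitConfigKey separates a namespaced key such as "core.https_address"
// into its namespace ("core") and key name ("https_address"). Keys
// without a dot are returned with an empty namespace.
func splitConfigKey(key string) (namespace string, name string) {
	parts := strings.SplitN(key, ".", 2)
	if len(parts) != 2 {
		return "", key
	}
	return parts[0], parts[1]
}

func main() {
	ns, name := splitConfigKey("core.https_address")
	fmt.Printf("%s / %s\n", ns, name) // core / https_address
}
```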
# Server configuration
## Key/value configuration

The key/value configuration is namespaced, with the following namespaces
currently supported:

 - core (core daemon configuration)
 - images (image configuration)
 - storage (storage configuration)

Key                          | Type    | Default   | Description
:--                          | :---    | :------   | :----------
core.https\_address          | string  | -         | Address to bind for the remote API
core.https\_allowed\_origin  | string  | -         | Access-Control-Allow-Origin http header value
core.https\_allowed\_methods | string  | -         | Access-Control-Allow-Methods http header value
core.https\_allowed\_headers | string  | -         | Access-Control-Allow-Headers http header value
core.proxy\_https            | string  | -         | https proxy to use, if any (falls back to HTTPS\_PROXY environment variable)
core.proxy\_http             | string  | -         | http proxy to use, if any (falls back to HTTP\_PROXY environment variable)
core.proxy\_ignore\_hosts    | string  | -         | hosts which don't need the proxy for use (similar format to NO\_PROXY, e.g. 1.2.3.4,1.2.3.5, falls back to NO\_PROXY environment variable)
core.trust\_password         | string  | -         | Password to be provided by clients to setup a trust
storage.lvm\_vg\_name        | string  | -         | LVM Volume Group name to be used for container and image storage. A default Thin Pool is created using 100% of the free space in the Volume Group, unless `storage.lvm_thinpool_name` is set.
storage.lvm\_thinpool\_name  | string  | "LXDPool" | LVM Thin Pool to use within the Volume Group specified in `storage.lvm_vg_name`, if the default pool parameters are undesirable.
storage.lvm\_fstype          | string  | ext4      | Format LV with filesystem; for now its value can only be ext4 (default) or xfs.
storage.lvm\_volume\_size     | string  | 10GiB | Size of the logical volume
storage.zfs\_pool\_name       | string  | -     | ZFS pool name
images.compression\_algorithm | string  | gzip  | Compression algorithm to use for new images (bzip2, gzip, lzma, xz or none)
images.remote\_cache\_expiry  | integer | 10    | Number of days after which an unused cached remote image will be flushed
images.auto\_update\_interval | integer | 6     | Interval in hours at which to look for update to cached images (0 disables it)
images.auto\_update\_cached   | boolean | true  | Whether to automatically update any image that LXD caches

Those keys can be set using the lxc tool with:

    lxc config set <key> <value>

# Container configuration
## Properties

The following are direct container properties and can't be part of a
profile:

 - name
 - architecture

Name is the container name and can only be changed by renaming the
container.

## Key/value configuration

The key/value configuration is namespaced, with the following namespaces
currently supported:

 - boot (boot related options, timing, dependencies, ...)
 - environment (environment variables)
 - limits (resource limits)
 - raw (raw container configuration overrides)
 - security (security policies)
 - user (storage for user properties, searchable)
 - volatile (used internally by LXD to store settings that are specific
   to a particular container instance)

The currently supported keys are:

Key                     | Type    | Default      | Live update | Description
:--                     | :---    | :------      | :---------- | :----------
boot.autostart          | boolean | false        | n/a         | Always start the container when LXD starts
boot.autostart.delay    | integer | 0            | n/a         | Number of seconds to wait after the container started before starting the next one
boot.autostart.priority | integer | 0            | n/a         | What order to start the containers in (starting with highest)
environment.\*          | string  | -            | yes (exec)  | key/value environment variables to export to the container and set on exec
limits.cpu              | string  | - (all)      | yes         | Number or range of CPUs to expose to the container
limits.cpu.allowance    | string  | 100%         | yes         | How much of the CPU can be used. Can be a percentage (e.g. 50%) for a soft limit or a hard chunk of time (25ms/100ms)
limits.cpu.priority     | integer | 10 (maximum) | yes         | CPU scheduling priority compared to other containers sharing the same CPUs (overcommit)
limits.disk.priority    | integer | 5 (medium)   | yes         | When under load, how much priority to give to the container's I/O requests
limits.memory           | string  | - (all)      | yes         | Percentage of the host's memory or fixed value in bytes (supports kB, MB, GB, TB, PB and EB suffixes)
limits.memory.enforce   | string  | hard         | yes         | If hard, the container can't exceed its memory limit. If soft, the container can exceed its memory limit when extra host memory is available.
limits.memory.swap          | boolean | true         | yes | Whether to allow some of the container's memory to be swapped out to disk
limits.memory.swap.priority | integer | 10 (maximum) | yes | The higher this is set, the less likely the container is to be swapped to disk
limits.network.priority     | integer | 0 (minimum)  | yes | When under load, how much priority to give to the container's network requests
limits.processes            | integer | - (max)      | yes | Maximum number of processes that can run in the container
linux.kernel\_modules       | string  | -            | yes | Comma separated list of kernel modules to load before starting the container
raw.apparmor                | blob    | -            | yes | Apparmor profile entries to be appended to the generated profile
raw.lxc                     | blob    | -            | no  | Raw LXC configuration to be appended to the generated one
security.nesting            | boolean | false        | yes | Support running lxd (nested) inside the container
security.privileged         | boolean | false        | no  | Runs the container in privileged mode
user.\*                     | string  | -            | n/a | Free form user key/value storage (can be used in search)

The following volatile keys are currently used internally by LXD:

Key                       | Type   | Default | Description
:--                       | :---   | :------ | :----------
volatile.\<name\>.hwaddr  | string | -       | Network device MAC address (when no hwaddr property is set on the device itself)
volatile.\<name\>.name    | string | -       | Network device name (when no name property is set on the device itself)
volatile.apply\_template  | string | -       | The name of a template hook which should be triggered upon next startup
volatile.base\_image      | string | -       | The hash of the image the container was created from, if any.
volatile.last\_state.idmap | string | -       | Serialized container uid/gid map
volatile.last\_state.power | string | -       | Container state as of last host shutdown

Additionally, those user keys have become common with images (support
isn't guaranteed):

Key                | Type   | Default        | Description
:--                | :---   | :------        | :----------
user.network\_mode | string | dhcp           | One of "dhcp" or "link-local". Used to configure network in supported images.
user.meta-data     | string | -              | Cloud-init meta-data, content is appended to seed value.
user.user-data     | string | #!cloud-config | Cloud-init user-data, content is used as seed value.
user.vendor-data   | string | #!cloud-config | Cloud-init vendor-data, content is used as seed value.

Note that while a type is defined above as a convenience, all values are
stored as strings and should be exported over the REST API as strings
(which makes it possible to support any extra values without breaking
backward compatibility).

Those keys can be set using the lxc tool with:

    lxc config set <container> <key> <value>

Volatile keys can't be set by the user and can only be set directly
against a container.

The raw keys allow direct interaction with the backend features that LXD
itself uses. Setting those may very well break LXD in non-obvious ways
and should be avoided whenever possible.

## Devices configuration

LXD will always provide the container with the basic devices which are
required for a standard POSIX system to work. These aren't visible in
container or profile configuration and may not be overridden.

Those include:

 - /dev/null (character device)
 - /dev/zero (character device)
 - /dev/full (character device)
 - /dev/console (character device)
 - /dev/tty (character device)
 - /dev/random (character device)
 - /dev/urandom (character device)
 - lo (network interface)

Anything else has to be defined in the container configuration or in one
of its profiles. The default profile will typically contain a network
interface to become eth0 in the container.
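Each device entry is a flat map of string properties; the client code in this tree (e.g. ContainerDeviceAdd) builds such a map from key=value pairs given on the command line. A self-contained sketch of that parsing, mirroring the strings.SplitN logic used by the client:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDeviceProps builds a device entry from "key=value" pairs plus a
// device type, splitting each pair on the first "=" and rejecting
// pairs with no value — the same approach the LXD client takes.
func parseDeviceProps(devtype string, props []string) (map[string]string, error) {
	newdev := map[string]string{}
	for _, p := range props {
		results := strings.SplitN(p, "=", 2)
		if len(results) != 2 {
			return nil, fmt.Errorf("no value found in %q", p)
		}
		newdev[results[0]] = results[1]
	}
	newdev["type"] = devtype
	return newdev, nil
}

func main() {
	dev, err := parseDeviceProps("disk", []string{"path=/data", "source=/srv/data"})
	if err != nil {
		panic(err)
	}
	fmt.Println(dev["type"], dev["path"], dev["source"])
}
```

Splitting on the first "=" only means values themselves may contain "=" (useful for things like UUID= sources).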
To add extra devices to a container, device entries can be added
directly to a container, or to a profile. Devices may be added or
removed while the container is running.

Every device entry is identified by a unique name. If the same name is
used in a subsequent profile or in the container's own configuration,
the whole entry is overridden by the new definition.

Device entries are added through:

    lxc config device add <container> <name> <type> [key=value]...
    lxc profile device add <profile> <name> <type> [key=value]...

### Device types

LXD supports the following device types:

ID (database) | Name       | Description
:--           | :--        | :--
0             | none       | Inheritance blocker
1             | nic        | Network interface
2             | disk       | Mountpoint inside the container
3             | unix-char  | Unix character device
4             | unix-block | Unix block device

### Type: none

A none type device doesn't have any property and doesn't create
anything inside the container.

Its only purpose is to stop inheritance of devices coming from
profiles.

To do so, just add a none type device with the same name as the one you
wish to skip inheriting. It can be added in a profile being applied
after the profile it originated from, or directly on the container.

### Type: nic

LXD supports different kinds of network devices:

 - physical: Straight physical device passthrough from the host. The
   targeted device will vanish from the host and appear in the container.
 - bridged: Uses an existing bridge on the host and creates a virtual
   device pair to connect the host bridge to the container.
 - macvlan: Sets up a new network device based on an existing one but
   using a different MAC address.
 - p2p: Creates a virtual device pair, putting one side in the container
   and leaving the other side on the host.
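The name-based override and the none-type inheritance blocker described above can be sketched as a simple ordered merge. The helper and types below are illustrative assumptions, not LXD's actual code:

```go
package main

import "fmt"

// Device is a flat map of string properties, as in this document.
type Device map[string]string

// mergeDevices applies device maps in profile order (container config
// last); a later entry with the same name fully replaces the earlier
// one. Entries of type "none" block inheritance: nothing is created
// for that name in the final view.
func mergeDevices(layers ...map[string]Device) map[string]Device {
	merged := map[string]Device{}
	for _, layer := range layers {
		for name, dev := range layer {
			merged[name] = dev
		}
	}
	// Drop inheritance blockers: a "none" device creates nothing.
	for name, dev := range merged {
		if dev["type"] == "none" {
			delete(merged, name)
		}
	}
	return merged
}

func main() {
	defaultProfile := map[string]Device{
		"eth0": {"type": "nic", "nictype": "bridged", "parent": "lxdbr0"},
	}
	container := map[string]Device{
		"eth0": {"type": "none"}, // block the inherited nic
	}
	devs := mergeDevices(defaultProfile, container)
	fmt.Println(len(devs)) // 0
}
```

Because the whole entry is replaced rather than merged key-by-key, a later definition must restate every property it wants to keep.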
Different network interface types have different additional properties;
the current list is:

Key            | Type    | Default           | Required | Used by                    | Description
:--            | :--     | :--               | :--      | :--                        | :--
nictype        | string  | -                 | yes      | all                        | The device type, one of "physical", "bridged", "macvlan" or "p2p"
limits.ingress | string  | -                 | no       | bridged, p2p               | I/O limit in bit/s (supports kbit, Mbit, Gbit suffixes)
limits.egress  | string  | -                 | no       | bridged, p2p               | I/O limit in bit/s (supports kbit, Mbit, Gbit suffixes)
limits.max     | string  | -                 | no       | bridged, p2p               | Same as modifying both limits.ingress and limits.egress
name           | string  | kernel assigned   | no       | all                        | The name of the interface inside the container
host\_name     | string  | randomly assigned | no       | bridged, p2p, macvlan      | The name of the interface inside the host
hwaddr         | string  | randomly assigned | no       | all                        | The MAC address of the new interface
mtu            | integer | parent MTU        | no       | all                        | The MTU of the new interface
parent         | string  | -                 | yes      | physical, bridged, macvlan | The name of the host device or bridge

### Type: disk

Disk entries are essentially mountpoints inside the container. They can
either be a bind-mount of an existing file or directory on the host,
or, if the source is a block device, a regular mount.
The following properties exist:

Key         | Type      | Default   | Required  | Description
:--         | :--       | :--       | :--       | :--
limits.read | string    | -         | no        | I/O limit in byte/s (supports kB, MB, GB, TB, PB and EB suffixes) or in iops (must be suffixed with "iops")
limits.write | string   | -         | no        | I/O limit in byte/s (supports kB, MB, GB, TB, PB and EB suffixes) or in iops (must be suffixed with "iops")
limits.max  | string    | -         | no        | Same as modifying both limits.read and limits.write
path        | string    | -         | yes       | Path inside the container where the disk will be mounted
source      | string    | -         | yes       | Path on the host, either to a file/directory or to a block device
optional    | boolean   | false     | no        | Controls whether to fail if the source doesn't exist
readonly    | boolean   | false     | no        | Controls whether to make the mount read-only
size        | string    | -         | no        | Disk size in bytes (supports kB, MB, GB, TB, PB and EB suffixes). This is only supported for the rootfs (/).
recursive   | boolean   | false     | no        | Whether or not to recursively mount the source path

If multiple disks, backed by the same block device, have I/O limits
set, the average of the limits will be used.

### Type: unix-char
Unix character device entries simply make the requested character
device appear in the container's /dev and allow read/write operations
to it.

The following properties exist:

Key     | Type      | Default           | Required  | Description
:--     | :--       | :--               | :--       | :--
path    | string    | -                 | yes       | Path inside the container
major   | int       | device on host    | no        | Device major number
minor   | int       | device on host    | no        | Device minor number
uid     | int       | 0                 | no        | UID of the device owner in the container
gid     | int       | 0                 | no        | GID of the device owner in the container
mode    | int       | 0660              | no        | Mode of the device in the container

### Type: unix-block
Unix block device entries simply make the requested block device appear
in the container's /dev and allow read/write operations to it.
The following properties exist:

Key     | Type      | Default           | Required  | Description
:--     | :--       | :--               | :--       | :--
path    | string    | -                 | yes       | Path inside the container
major   | int       | device on host    | no        | Device major number
minor   | int       | device on host    | no        | Device minor number
uid     | int       | 0                 | no        | UID of the device owner in the container
gid     | int       | 0                 | no        | GID of the device owner in the container
mode    | int       | 0660              | no        | Mode of the device in the container

## Profiles
Profiles can store any configuration that a container can (key/value or
devices) and any number of profiles can be applied to a container.

Profiles are applied in the order they are specified so the last
profile to specify a specific key wins.

In any case, resource-specific configuration always overrides that
coming from the profiles.

If not present, LXD will create a "default" profile which comes with a
network interface connected to LXD's default bridge (lxdbr0).

The "default" profile is set for any new container created which
doesn't specify a different profiles list.

## JSON representation
A representation of a container using all the different types of
configurations would look like:

    {
        "name": "my-container",
        "profiles": ["default"],
        "architecture": "x86_64",
        "config": {
            "limits.cpu": "3",
            "security.privileged": "true"
        },
        "devices": {
            "nic-lxdbr0": {
                "type": "none"
            },
            "nic-mybr0": {
                "type": "nic",
                "mtu": "9000",
                "parent": "mybr0"
            },
            "rootfs": {
                "type": "disk",
                "path": "/",
                "source": "UUID=8f7fdf5e-dc60-4524-b9fe-634f82ac2fb6"
            }
        },
        "status": {
            "status": "Running",
            "status_code": 103,
            "ips": [{"interface": "eth0",
                     "protocol": "INET6",
                     "address": "2001:470:b368:1020:1::2"},
                    {"interface": "eth0",
                     "protocol": "INET",
                     "address": "172.16.15.30"}]
        }
    }

# Introduction
This specification covers some of the daemon's behavior, such as
reaction to given signals, crashes, ...
# Startup
On every start, LXD checks that its directory structure exists. If it
doesn't, it'll create the required directories, generate a keypair and
initialize the database.

Once the daemon is ready for work, LXD will scan the containers table
for any container for which the stored power state differs from the
current one. If a container's power state was recorded as running and
the container isn't running, LXD will start it.

# Signal handling
## SIGINT, SIGQUIT, SIGTERM
For those signals, LXD assumes that it's being temporarily stopped and
will be restarted at a later time to continue handling the containers.

The containers will keep running and LXD will close all connections and
exit cleanly.

## SIGPWR
Indicates to LXD that the host is going down.

LXD will attempt a clean shutdown of all the containers. After 30s, it
will kill any remaining container.

The container power\_state in the containers table is kept as it was so
that LXD, after the host is done rebooting, can restore the containers
as they were.

## SIGUSR1
Write a memory profile dump to the file specified with \-\-memprofile.

# Introduction
So first of all, why a database?

Rather than keeping the configuration and state within each container's
directory as is traditionally done by LXC, LXD has an internal database
which stores all of that information. This allows very quick queries
against all containers' configuration.

An example is the rather obvious question "what containers are using
br0?". To answer that question without a database, LXD would have to
iterate through every single container, load and parse its
configuration and then look at what network devices are defined in
there.

While that may be quick with a few containers, imagine how many
filesystem accesses would be required for 2000 containers.
Instead, with a database, it's only a matter of accessing the already
cached database with a pretty simple query.

# Database engine
As this is a purely internal database with a single client and very
little data, we'll be using sqlite3.

We have no interest in replication or other HA features offered by the
bigger database engines as LXD runs on each compute node and having the
database accessible when the compute node itself isn't, wouldn't be
terribly useful.

# Design
The design of the database is made to be as close as possible to the
REST API.

The main table and field names are exact matches for the REST API.

However this database isn't an exact match of the API, mostly because
any runtime or external piece of information will not be stored in the
database (as this would require constant polling and wouldn't gain us
anything).

We make no guarantee of stability for the database schema. This is a
purely internal database which only LXD should ever use. Updating LXD
may cause a schema update and data being shuffled. In those cases, LXD
will make a copy of the old database as ".old" to allow for a revert.

# Tables
The list of tables is:

 * certificates
 * config
 * containers
 * containers\_config
 * containers\_devices
 * containers\_devices\_config
 * containers\_profiles
 * images
 * images\_properties
 * images\_aliases
 * images\_source
 * profiles
 * profiles\_config
 * profiles\_devices
 * profiles\_devices\_config
 * schema

You'll notice that compared to the REST API, there are four main
differences:

 1. The extra "\*\_config" tables which are there for key/value config
    storage.
 2. The extra "images\_properties" table which is there for key/value
    property storage.
 3. The extra "schema" table which is used for database schema version
    tracking.
 4. There is no "snapshots" table. That's because snapshots are a copy
    of a container at a given point in time, including its
    configuration and on-disk state. So having snapshots in a separate
    table would only be needless duplication.
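The schema-version tracking provided by the "schema" table can be
illustrated with a minimal self-contained sqlite3 sketch. This is not
LXD's actual schema code; the column set mirrors the "schema" table
described below, and the update logic is a made-up example.

```python
# Illustrative sketch (not LXD's code) of schema version tracking:
# every applied schema version is recorded in the "schema" table, and
# the highest recorded version decides whether an update is needed.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE schema (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    version INTEGER NOT NULL,
    updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    UNIQUE (version))""")

# Record three successive schema updates.
for version in (1, 2, 3):
    db.execute("INSERT INTO schema (version) VALUES (?)", (version,))

# The current schema version is the highest one recorded.
current = db.execute("SELECT MAX(version) FROM schema").fetchone()[0]
print(current)
```

Before applying an update, the daemon would compare `current` against
the version its code expects (and, as noted above, keep a ".old" copy
of the database file to allow a revert).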
# Notes on sqlite3
sqlite3 only supports 5 storage classes: NULL, INTEGER, REAL, TEXT and
BLOB. There are then a set of aliases for each of those storage classes
which is what we use below.

# Schema
## certificates

Column      | Type          | Default   | Constraint    | Description
:-----      | :---          | :------   | :---------    | :----------
id          | INTEGER       | SERIAL    | NOT NULL      | SERIAL
fingerprint | VARCHAR(255)  | -         | NOT NULL      | HEX encoded certificate fingerprint
type        | INTEGER       | -         | NOT NULL      | Certificate type (0 = client)
name        | VARCHAR(255)  | -         | NOT NULL      | Certificate name (defaults to CN)
certificate | TEXT          | -         | NOT NULL      | PEM encoded certificate

Index: UNIQUE ON id AND fingerprint

## config (server configuration)

Column  | Type          | Default   | Constraint    | Description
:-----  | :---          | :------   | :---------    | :----------
id      | INTEGER       | SERIAL    | NOT NULL      | SERIAL
key     | VARCHAR(255)  | -         | NOT NULL      | Configuration key
value   | TEXT          | -         |               | Configuration value (NULL for unset)

Index: UNIQUE ON id AND key

## containers

Column          | Type          | Default   | Constraint    | Description
:-----          | :---          | :------   | :---------    | :----------
id              | INTEGER       | SERIAL    | NOT NULL      | SERIAL
name            | VARCHAR(255)  | -         | NOT NULL      | Container name
architecture    | INTEGER       | -         | NOT NULL      | Container architecture
type            | INTEGER       | 0         | NOT NULL      | Container type (0 = container, 1 = container snapshot)
ephemeral       | INTEGER       | 0         | NOT NULL      | Whether the container is ephemeral (0 = persistent, 1 = ephemeral)
stateful        | INTEGER       | 0         | NOT NULL      | Whether the snapshot contains state (snapshot only)
creation\_date  | DATETIME      | -         |               | Container creation date (user supplied, 0 = unknown)

Index: UNIQUE ON id AND name

## containers\_config

Column          | Type          | Default   | Constraint    | Description
:-----          | :---          | :------   | :---------    | :----------
id              | INTEGER       | SERIAL    | NOT NULL      | SERIAL
container\_id   | INTEGER       | -         | NOT NULL      | containers.id FK
key             | VARCHAR(255)  | -         | NOT NULL      | Configuration key
value           | TEXT          | -         |               | Configuration value (NULL for unset)

Index: UNIQUE ON id AND container\_id + key

Foreign keys: container\_id REFERENCES containers(id)

## containers\_devices

Column          | Type          | Default   | Constraint    | Description
:-----          | :---          | :------   | :---------    | :----------
id              | INTEGER       | SERIAL    | NOT NULL      | SERIAL
container\_id   | INTEGER       | -         | NOT NULL      | containers.id FK
name            | VARCHAR(255)  | -         | NOT NULL      | Device name
type            | INTEGER       | 0         | NOT NULL      | Device type (see configuration.md)

Index: UNIQUE ON id AND container\_id + name

Foreign keys: container\_id REFERENCES containers(id)

## containers\_devices\_config

Column                  | Type          | Default   | Constraint    | Description
:-----                  | :---          | :------   | :---------    | :----------
id                      | INTEGER       | SERIAL    | NOT NULL      | SERIAL
container\_device\_id   | INTEGER       | -         | NOT NULL      | containers\_devices.id FK
key                     | VARCHAR(255)  | -         | NOT NULL      | Configuration key
value                   | TEXT          | -         |               | Configuration value (NULL for unset)

Index: UNIQUE ON id AND container\_device\_id + key

Foreign keys: container\_device\_id REFERENCES containers\_devices(id)

## containers\_profiles

Column          | Type          | Default   | Constraint    | Description
:-----          | :---          | :------   | :---------    | :----------
id              | INTEGER       | SERIAL    | NOT NULL      | SERIAL
container\_id   | INTEGER       | -         | NOT NULL      | containers.id FK
profile\_id     | INTEGER       | -         | NOT NULL      | profiles.id FK
apply\_order    | INTEGER       | 0         | NOT NULL      | Profile ordering

Index: UNIQUE ON id AND container\_id + profile\_id

Foreign keys: container\_id REFERENCES containers(id) and profile\_id REFERENCES profiles(id)

## images

Column          | Type          | Default   | Constraint    | Description
:-----          | :---          | :------   | :---------    | :----------
id              | INTEGER       | SERIAL    | NOT NULL      | SERIAL
cached          | INTEGER       | 0         | NOT NULL      | Whether this is a cached image
fingerprint     | VARCHAR(255)  | -         | NOT NULL      | Tarball fingerprint
filename        | VARCHAR(255)  | -         | NOT NULL      | Tarball filename
size            | INTEGER       | -         | NOT NULL      | Tarball size
public          | INTEGER       | 0         | NOT NULL      | Whether the image is public or not
auto\_update    | INTEGER       | 0         | NOT NULL      | Whether to update from the source of this image
architecture    | INTEGER       | -         | NOT NULL      | Image architecture
creation\_date  | DATETIME      | -         |               | Image creation date (user supplied, 0 = unknown)
expiry\_date    | DATETIME      | -         |               | Image expiry (user supplied, 0 = never)
upload\_date    | DATETIME      | -         | NOT NULL      | Image entry creation date
last\_use\_date | DATETIME      | -         |               | Last time the image was used to spawn a container

Index: UNIQUE ON id AND fingerprint

## images\_aliases

Column      | Type          | Default   | Constraint    | Description
:-----      | :---          | :------   | :---------    | :----------
id          | INTEGER       | SERIAL    | NOT NULL      | SERIAL
name        | VARCHAR(255)  | -         | NOT NULL      | Alias name
image\_id   | INTEGER       | -         | NOT NULL      | images.id FK
description | VARCHAR(255)  | -         |               | Description of the alias

Index: UNIQUE ON id AND name

Foreign keys: image\_id REFERENCES images(id)

## images\_properties

Column      | Type          | Default   | Constraint    | Description
:-----      | :---          | :------   | :---------    | :----------
id          | INTEGER       | SERIAL    | NOT NULL      | SERIAL
image\_id   | INTEGER       | -         | NOT NULL      | images.id FK
type        | INTEGER       | 0         | NOT NULL      | Property type (0 = string, 1 = text)
key         | VARCHAR(255)  | -         | NOT NULL      | Property name
value       | TEXT          | -         |               | Property value (NULL for unset)

Index: UNIQUE ON id

Foreign keys: image\_id REFERENCES images(id)

## images\_source

Column      | Type          | Default   | Constraint    | Description
:-----      | :---          | :------   | :---------    | :----------
id          | INTEGER       | SERIAL    | NOT NULL      | SERIAL
image\_id   | INTEGER       | -         | NOT NULL      | images.id FK
server      | TEXT          | -         | NOT NULL      | Server URL
protocol    | INTEGER       | 0         | NOT NULL      | Protocol to access the remote (0 = lxd, 1 = direct, 2 = simplestreams)
alias       | VARCHAR(255)  | -         | NOT NULL      | What remote alias to use as the source
certificate | TEXT          | -         |               | PEM encoded certificate of the server

Index: UNIQUE ON id

Foreign keys: image\_id REFERENCES images(id)

## profiles

Column      | Type          | Default   | Constraint    | Description
:-----      | :---          | :------   | :---------    | :----------
id          | INTEGER       | SERIAL    | NOT NULL      | SERIAL
name        | VARCHAR(255)  | -         | NOT NULL      | Profile name
description | TEXT          | -         |               | Description of the profile

Index: UNIQUE on id AND name

## profiles\_config

Column      | Type          | Default   | Constraint    | Description
:-----      | :---          | :------   | :---------    | :----------
id          | INTEGER       | SERIAL    | NOT NULL      | SERIAL
profile\_id | INTEGER       | -         | NOT NULL      | profiles.id FK
key         | VARCHAR(255)  | -         | NOT NULL      | Configuration key
value       | VARCHAR(255)  | -         |               | Configuration value (NULL for unset)

Index: UNIQUE ON id AND profile\_id + key

Foreign keys: profile\_id REFERENCES profiles(id)

## profiles\_devices

Column      | Type          | Default   | Constraint    | Description
:-----      | :---          | :------   | :---------    | :----------
id          | INTEGER       | SERIAL    | NOT NULL      | SERIAL
profile\_id | INTEGER       | -         | NOT NULL      | profiles.id FK
name        | VARCHAR(255)  | -         | NOT NULL      | Device name
type        | INTEGER       | 0         | NOT NULL      | Device type (see configuration.md)

Index: UNIQUE ON id AND profile\_id + name

Foreign keys: profile\_id REFERENCES profiles(id)

## profiles\_devices\_config

Column              | Type          | Default   | Constraint    | Description
:-----              | :---          | :------   | :---------    | :----------
id                  | INTEGER       | SERIAL    | NOT NULL      | SERIAL
profile\_device\_id | INTEGER       | -         | NOT NULL      | profiles\_devices.id FK
key                 | VARCHAR(255)  | -         | NOT NULL      | Configuration key
value               | TEXT          | -         |               | Configuration value (NULL for unset)

Index: UNIQUE ON id AND profile\_device\_id + key

Foreign keys: profile\_device\_id REFERENCES profiles\_devices(id)

## schema

Column      | Type      | Default   | Constraint    | Description
:-----      | :---      | :------   | :---------    | :----------
id          | INTEGER   | SERIAL    | NOT NULL      | SERIAL
version     | INTEGER   | -         | NOT NULL      | Schema version
updated\_at | DATETIME  | -         | NOT NULL      | When the schema update was done

Index: UNIQUE ON id AND version

# Introduction
Communication between the hosted workload (container) and its host
while not strictly
needed is a pretty useful feature. In LXD, this feature is implemented
through a /dev/lxd/sock node which is created and setup for all LXD
containers.

This file is a Unix socket which processes inside the container can
connect to. It's multi-threaded so multiple clients can be connected at
the same time.

# Implementation details
LXD on the host binds /var/lib/lxd/devlxd and starts listening for new
connections on it.

This socket is then bind-mounted into every single container started by
LXD at /dev/lxd/sock.

The bind-mount is required so we can exceed 4096 containers, otherwise,
LXD would have to bind a different socket for every container, quickly
reaching the FD limit.

# Authentication
Queries on /dev/lxd/sock will only return information related to the
requesting container. To figure out where a request comes from, LXD
will extract the initial socket ucred and compare that to the list of
containers it manages.

# Protocol
The protocol on /dev/lxd/sock is plain-text HTTP with JSON messaging,
so very similar to the local version of the LXD protocol.

Unlike the main LXD API, there is no background operation and no
authentication support in the /dev/lxd/sock API.

# REST-API
## API structure
 * /
   * /1.0
     * /1.0/config
       * /1.0/config/{key}
     * /1.0/meta-data

## API details
### /
#### GET
 * Description: List of supported APIs
 * Return: list of supported API endpoint URLs (by default ['/1.0'])

Return value:

    [
        "/1.0"
    ]

### /1.0
#### GET
 * Description: Information about the 1.0 API
 * Return: dict

Return value:

    {
        "api_version": "1.0"
    }

### /1.0/config
#### GET
 * Description: List of configuration keys
 * Return: list of configuration key URLs

Note that the configuration key names match those in the container
config, however not all configuration namespaces will be exported to
/dev/lxd/sock. Currently only the user.\* keys are accessible to the
container.

At this time, there also aren't any container-writable namespaces.
Return value:

    [
        "/1.0/config/user.a"
    ]

### /1.0/config/{key}
#### GET
 * Description: Value of that key
 * Return: Plain-text value

Return value:

    blah

### /1.0/meta-data
#### GET
 * Description: Container meta-data compatible with cloud-init
 * Return: cloud-init meta-data

Return value:

    #cloud-config
    instance-id: abc
    local-hostname: abc

# Introduction
The LXD client and daemon respect some environment variables to adapt
to the user's environment and to turn some advanced features on and
off.

# Common
Name            | Description
:---            | :----
LXD\_DIR        | The LXD data directory
PATH            | List of paths to look into when resolving binaries
http\_proxy     | Proxy server URL for HTTP
https\_proxy    | Proxy server URL for HTTPS
no\_proxy       | List of domains that don't require the use of a proxy

# Client environment variables
Name    | Description
:---    | :----
EDITOR  | What text editor to use
VISUAL  | What text editor to use (if EDITOR isn't set)

# Server environment variables
Name                            | Description
:---                            | :----
LXD\_SECURITY\_APPARMOR         | If set to "false", forces AppArmor off
LXD\_LXC\_TEMPLATE\_CONFIG      | Path to the LXC template configuration directory

# Introduction
LXD uses an image based workflow. It comes with a built-in image store
where the user or external tools can import images.

Containers are then started from those images. It's possible to spawn
remote containers using local images or local containers using remote
images. In such cases, the image may be cached on the target LXD.

# Caching
When spawning a container from a remote image, the remote image is
downloaded into the local image store with the cached bit set.
The image will be kept locally as a private image until either it's
been unused (no new container spawned) for the number of days set in
images.remote\_cache\_expiry or until the image's expiry is reached,
whichever comes first.

LXD keeps track of image usage by updating the last\_used\_at image
property every time a new container is spawned from the image.

# Auto-update
LXD can keep images up to date. By default, any image which comes from
a remote server and was requested through an alias will be
automatically updated by LXD. This can be changed with
images.auto\_update\_cached.

On startup and then every 6 hours (unless images.auto\_update\_interval
is set), the LXD daemon will go look for more recent versions of all
the images in the store which are marked as auto-update and have a
recorded source server.

When a new image is found, it is downloaded into the image store, the
aliases pointing to the old image are moved to the new one and the old
image is removed from the store.

The user can also request that a particular image be kept up to date
when manually copying an image from a remote server.

# Image format
LXD currently supports two LXD-specific image formats.

The first is a unified tarball, where a single tarball contains both
the container rootfs and the needed metadata.

The second is a split model, using two tarballs instead, one containing
the rootfs, the other containing the metadata.

The former is what's produced by LXD itself and what people should be
using for LXD-specific images.

The latter is designed to allow for easy image building from existing
non-LXD rootfs tarballs already available today.

## Unified tarball
Tarball, can be compressed and contains:

 - rootfs/
 - metadata.yaml
 - templates/ (optional)

In this mode, the image identifier is the SHA-256 of the tarball.

## Split tarballs
Two (possibly compressed) tarballs. One for metadata, one for the
rootfs.
metadata.tar contains:

 - metadata.yaml
 - templates/ (optional)

rootfs.tar contains a Linux root filesystem at its root.

In this mode the image identifier is the SHA-256 of the concatenation
of the metadata and rootfs tarball (in that order).

## Content
The rootfs directory (or tarball) contains a full file system tree of
what will become the container's /.

The templates directory contains pongo2-formatted templates of files
inside the container.

metadata.yaml contains information relevant to running the image under
LXD, at the moment, this contains:

    architecture: x86_64
    creation_date: 1424284563
    properties:
      description: Ubuntu 14.04 LTS Intel 64bit
      os: Ubuntu
      release:
      - trusty
      - 14.04
    templates:
      /etc/hosts:
        when:
        - create
        - rename
        template: hosts.tpl
        properties:
          foo: bar
      /etc/hostname:
        when:
        - start
        template: hostname.tpl
      /etc/network/interfaces:
        when:
        - create
        template: interfaces.tpl
        create_only: true

The architecture and creation\_date fields are mandatory, the
properties are just a set of default properties for the image. The os,
release, name and description fields, while not mandatory in any way,
should be pretty common.
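The split-tarball identifier described above (the SHA-256 of the
metadata tarball's bytes followed by the rootfs tarball's bytes) can be
computed with a few lines of code. This is an illustrative sketch; the
byte strings stand in for real tarball contents.

```python
# Sketch of the split-image identifier: SHA-256 over the metadata
# tarball followed by the rootfs tarball, in that order. Feeding the
# hash incrementally is equivalent to hashing the concatenation.
import hashlib

def split_image_fingerprint(metadata_bytes, rootfs_bytes):
    h = hashlib.sha256()
    h.update(metadata_bytes)   # metadata.tar first
    h.update(rootfs_bytes)     # then rootfs.tar
    return h.hexdigest()

fp = split_image_fingerprint(b"metadata.tar contents", b"rootfs.tar contents")

# Same result as hashing the concatenated bytes in one go.
assert fp == hashlib.sha256(b"metadata.tar contents"
                            + b"rootfs.tar contents").hexdigest()
print(fp)
```

Note that the order matters: swapping the two tarballs produces a
different identifier.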
For templates, the "when" key can be one or more of:

 - create (run at the time a new container is created from the image)
 - copy (run when a container is created from an existing one)
 - start (run every time the container is started)

The templates will always receive the following context:

 - trigger: name of the event which triggered the template (string)
 - path: path of the file being templated (string)
 - container: key/value map of container properties (name, architecture, privileged and ephemeral) (map[string]string)
 - config: key/value map of the container's configuration (map[string]string)
 - devices: key/value map of the devices assigned to this container (map[string]map[string]string)
 - properties: key/value map of the template properties specified in metadata.yaml (map[string]string)

The "create\_only" key can be set to have LXD only create missing files
but not overwrite an existing file.

As a general rule, you should never template a file which is owned by a
package or is otherwise expected to be overwritten by normal operation
of the container.

# Introduction
Local communications over the UNIX socket happen over a cleartext HTTP
socket and access is restricted by socket ownership and mode.

Remote communications with the LXD daemon happen using JSON over
HTTPS. The supported protocol must be TLS 1.2 or better.

All communications must use perfect forward secrecy and ciphers must be
limited to strong elliptic curve ones (such as ECDHE-RSA or
ECDHE-ECDSA).

Any generated key should be at least 4096-bit RSA and when using
signatures, only SHA-2 signatures should be trusted.

Since we control both client and server, there is no reason to support
any backward compatibility to broken protocols or ciphers.

Both the client and the server will generate a keypair the first time
they're launched.
The server will use that keypair for all HTTPS connections to the LXD
socket and the client will use its certificate as a client certificate
for any client-server communication.

# Adding a remote with a default setup
In the default setup, when the user adds a new server with "lxc remote
add", the server will be contacted over HTTPS, its certificate
downloaded and the fingerprint will be shown to the user.

The user will then be asked to confirm that this is indeed the server's
fingerprint, which they can manually check by connecting to the server
or by asking someone with access to it to run the status command and
compare the fingerprints.

After that, the user must enter the trust password for that server, if
it matches, the client certificate is added to the server's trust store
and the client can now connect to the server without having to provide
any additional credentials.

This is a workflow that's very similar to that of ssh where an initial
connection to an unknown server triggers a prompt.

A possible extension to that is to support something similar to ssh's
fingerprint in DNS feature where the certificate fingerprint is added
as a TXT record, then if the domain is signed by DNSSEC, the client
will automatically accept the fingerprint if it matches that in the DNS
record.

# Adding a remote with a PKI based setup
In the PKI setup, a system administrator is managing a central PKI,
that PKI then issues client certificates for all the lxc clients and
server certificates for all the LXD daemons.

Those certificates and keys are manually put in place on the various
machines, replacing the automatically generated ones.

The CA certificate is also added to all machines. A CRL may also
accompany the CA certificate.

In that mode, any connection to a LXD daemon will be done using the
preseeded CA certificate.

If the server certificate isn't signed by the CA, or if it has been
revoked, the connection will simply go through the normal
authentication mechanism.
If the server certificate is valid and signed by the CA, then the
connection continues without prompting the user for the certificate.

After that, the user must enter the trust password for that server, if
it matches, the client certificate is added to the server's trust store
and the client can now connect to the server without having to provide
any additional credentials.

# Password prompt
To establish a new trust relationship, a password must be set on the
server and sent by the client when adding itself.

A remote add operation should therefore go like this:

 1. Call GET /1.0
 2. If we're not in a PKI setup ask the user to confirm the
    fingerprint.
 3. Look at the dict we received back from the server. If "auth" is
    "untrusted", ask the user for the server's password and do a POST
    to /1.0/certificates, then call /1.0 again to check that we're
    indeed trusted.
 4. Remote is now ready

# Failure scenarios
## Server certificate changes
This will typically happen in two cases:

 * The server was fully reinstalled and so changed certificate
 * The connection is being intercepted (MITM)

In such cases the client will refuse to connect to the server since the
certificate fingerprint will not match that in the config for this
remote.

It is then up to the user to contact the server administrator to check
if the certificate did in fact change. If it did, then the certificate
can be replaced by the new one or the remote be removed altogether and
re-added.

## Server trust relationship revoked
In this case, the server still uses the same certificate but all API
calls return a 403 with an error indicating that the client isn't
trusted.

This happens if another trusted client or the local server
administrator removed the trust entry on the server.
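The "remote add" sequence from the password prompt section can be
sketched against a stubbed server. This is an illustration only, not
the lxc client's real code: `StubServer`, `remote_add` and the sample
password are all made up for the example.

```python
# Illustrative walk-through of the remote-add flow described above:
# GET /1.0; if "auth" is "untrusted", POST the trust password to
# /1.0/certificates; then GET /1.0 again to confirm we are trusted.

class StubServer:
    """Hypothetical stand-in for a LXD daemon's /1.0 endpoint."""
    def __init__(self, password):
        self._password = password
        self._trusted = False

    def get_root(self):                  # GET /1.0
        return {"auth": "trusted" if self._trusted else "untrusted"}

    def post_certificate(self, password):  # POST /1.0/certificates
        if password == self._password:
            self._trusted = True
        return self._trusted

def remote_add(server, password):
    if server.get_root()["auth"] == "untrusted":
        if not server.post_certificate(password):
            raise RuntimeError("wrong trust password")
    # Call /1.0 again to check that we're indeed trusted.
    return server.get_root()["auth"] == "trusted"

server = StubServer(password="sekret")
print(remote_add(server, "sekret"))
```

Once trusted, subsequent `get_root()` calls report "trusted" without
any password, mirroring how the client certificate in the server's
trust store removes the need for further credentials.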
# Live Migration in LXD
## Overview
Migration has two pieces, a "source", that is, the host that already
has the container, and a "sink", the host that's getting the container.

Currently, in the 'pull' mode, the source sets up an operation, and the
sink connects to the source and pulls the container.

There are three websockets (channels) used in migration:

 1. the control stream
 2. the criu images stream
 3. the filesystem stream

When a migration is initiated, information about the container, its
configuration, etc. are sent over the control channel (a full
description of this process is below), the criu images and container
filesystem are synced over their respective channels, and the result of
the restore operation is sent from the sink to the source over the
control channel.

In particular, the protocol that is spoken over the criu channel and
filesystem channel can vary, depending on what is negotiated over the
control socket. For example, if both the source's and the sink's LXD
directories are on btrfs, the filesystem socket can speak
btrfs-send/receive.

Additionally, although we do a "stop the world" type migration right
now, support for criu's p.haul protocol will happen over the criu
socket at some later time.

## Control Socket
Once all three websockets are connected between the two endpoints, the
source sends a MigrationHeader (protobuf description found in
`/lxd/migration/migrate.proto`). This header contains the container
configuration which will be added to the new container.

There are also two fields indicating the filesystem and criu protocol
to speak. For example, if a server is hosted on a btrfs filesystem, it
can indicate that it wants to do a `btrfs send` instead of a simple
rsync (similarly, it could indicate that it wants to speak the p.haul
protocol, instead of just rsyncing the images over slowly).
The sink then examines this message and responds with whatever it
supports. Continuing our example, if the sink is not on a btrfs
filesystem, it responds with the lowest common denominator (rsync, in
this case), and the source is to send the root filesystem using rsync.
Similarly with the criu connection; if the sink doesn't have support
for the p.haul protocol (or whatever), we fall back to rsync.

# Requirements
## Go
LXD requires Go 1.5 or higher.

Both the golang and gccgo compilers are supported.

## Kernel requirements
The minimum supported kernel version is 3.13.

LXD requires a kernel with support for:

 * Namespaces (pid, net, uts, ipc and mount)
 * Seccomp

The following optional features also require extra kernel options:

 * Namespaces (user and cgroup)
 * AppArmor (including Ubuntu patch for mount mediation)
 * Control Groups (blkio, cpuset, devices, memory, pids and net\_prio)
 * CRIU (exact details to be found with CRIU upstream)

As well as any other kernel feature required by the LXC version in use.

## LXC
LXD requires LXC 1.1.5 or higher with the following build options:

 * apparmor (if using LXD's apparmor support)
 * seccomp

To run recent versions of various distributions, including Ubuntu,
LXCFS should also be installed.

# Introduction
All the communications between LXD and its clients happen using a
RESTful API over HTTP which is then encapsulated over either SSL for
remote operations or a unix socket for local operations.
Not all of the REST interface requires authentication:

 * GET to / is allowed for everyone (lists the API endpoints)
 * GET to /1.0 is allowed for everyone (but result varies)
 * POST to /1.0/certificates is allowed for everyone with a client certificate
 * GET to /1.0/images/\* is allowed for everyone but only returns public images for unauthenticated users

Unauthenticated endpoints are clearly identified as such below.

# API versioning
The list of supported major API versions can be retrieved using GET /.

The reason for a major API bump is if the API breaks backward
compatibility.

Feature additions done without breaking backward compatibility only
result in additions to api\_extensions which can be used by the client
to check if a given feature is supported by the server.

# Return values
There are three standard return types:

 * Standard return value
 * Background operation
 * Error

### Standard return value
For a standard synchronous operation, the following dict is returned:

    {
        "type": "sync",
        "status": "Success",
        "status_code": 200,
        "metadata": {}              # Extra resource/action specific metadata
    }

HTTP code must be 200.

### Background operation
When a request results in a background operation, the HTTP code is set
to 202 (Accepted) and the Location HTTP header is set to the operation
URL.
The body is a dict with the following structure:

    {
        "type": "async",
        "status": "OK",
        "status_code": 100,
        "operation": "/1.0/containers/<id>",                    # URL to the background operation
        "metadata": {}                                          # Operation metadata (see below)
    }

The operation metadata structure looks like:

    {
        "id": "a40f5541-5e98-454f-b3b6-8a51ef5dbd3c",           # UUID of the operation
        "class": "websocket",                                   # Class of the operation (task, websocket or token)
        "created_at": "2015-11-17T22:32:02.226176091-05:00",    # When the operation was created
        "updated_at": "2015-11-17T22:32:02.226176091-05:00",    # Last time the operation was updated
        "status": "Running",                                    # String version of the operation's status
        "status_code": 103,                                     # Integer version of the operation's status (use this rather than status)
        "resources": {                                          # Dictionary of resource types (container, snapshots, images) and affected resources
            "containers": [
                "/1.0/containers/test"
            ]
        },
        "metadata": {                                           # Metadata specific to the operation in question (in this case, exec)
            "fds": {
                "0": "2a4a97af81529f6608dca31f03a7b7e47acc0b8dc6514496eb25e325f9e4fa6a",
                "control": "5b64c661ef313b423b5317ba9cb6410e40b705806c28255f601c0ef603f079a7"
            }
        },
        "may_cancel": false,                                    # Whether the operation can be canceled (DELETE over REST)
        "err": ""                                               # The error string should the operation have failed
    }

The body is mostly provided as a user friendly way of seeing what's
going on without having to pull the target operation, all information in
the body can also be retrieved from the background operation URL.

### Error

There are various situations in which something may immediately go
wrong, in those cases, the following return value is used:

    {
        "type": "error",
        "error": "Failure",
        "error_code": 400,
        "metadata": {}                          # More details about the error
    }

HTTP code must be one of 400, 401, 403, 404, 409, 412 or 500.

# Status codes

The LXD REST API often has to return status information, be that the
reason for an error, the current state of an operation or the state of
the various resources it exports.
To make it simple to debug, all of those are always doubled. There is a
numeric representation of the state which is guaranteed never to change
and can be relied on by API clients. Then there is a text version meant
to make it easier for people manually using the API to figure out what's
happening.

In most cases, those will be called status and status\_code, the former
being the user-friendly string representation and the latter the fixed
numeric value.

The codes are always 3 digits, with the following ranges:

 * 100 to 199: resource state (started, stopped, ready, ...)
 * 200 to 399: positive action result
 * 400 to 599: negative action result
 * 600 to 999: future use

## List of current status codes

Code  | Meaning
:---  | :------
100   | Operation created
101   | Started
102   | Stopped
103   | Running
104   | Cancelling
105   | Pending
106   | Starting
107   | Stopping
108   | Aborting
109   | Freezing
110   | Frozen
111   | Thawed
200   | Success
400   | Failure
401   | Cancelled

# Recursion

To optimize queries of large lists, recursion is implemented for
collections. A "recursion" argument can be passed to a GET query against
a collection.

The default value is 0 which means that collection member URLs are
returned. Setting it to 1 will have those URLs be replaced by the object
they point to (typically a dict).

Recursion is implemented by simply replacing any pointer to a job (URL)
by the object itself.

# Async operations

Any operation which may take more than a second to be done must be done
in the background, returning a background operation ID to the client.

The client will then be able to either poll for a status update or wait
for a notification using the long-poll API.

# Notifications

A websocket based API is available for notifications, different
notification types exist to limit the traffic going to the client.

It's recommended that the client always subscribes to the operations
notification type before triggering remote operations so that it doesn't
have to then poll for their status.
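A client can rely on those ranges mechanically. A small sketch (the function name is illustrative, not part of any LXD client library):

```python
# Classify an LXD status code by the documented ranges.
def status_range(code):
    if 100 <= code <= 199:
        return "resource state"
    if 200 <= code <= 399:
        return "positive action result"
    if 400 <= code <= 599:
        return "negative action result"
    if 600 <= code <= 999:
        return "future use"
    raise ValueError("status codes are always 3 digits, got %d" % code)
```

For example, 103 (Running) falls in the resource state range while 401 (Cancelled) is a negative action result.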
# API structure

 * /
 * /1.0
 * /1.0/certificates
 * /1.0/certificates/\<fingerprint\>
 * /1.0/containers
 * /1.0/containers/\<name\>
 * /1.0/containers/\<name\>/exec
 * /1.0/containers/\<name\>/files
 * /1.0/containers/\<name\>/snapshots
 * /1.0/containers/\<name\>/snapshots/\<name\>
 * /1.0/containers/\<name\>/state
 * /1.0/containers/\<name\>/logs
 * /1.0/containers/\<name\>/logs/\<logfile\>
 * /1.0/events
 * /1.0/images
 * /1.0/images/\<fingerprint\>
 * /1.0/images/\<fingerprint\>/export
 * /1.0/images/aliases
 * /1.0/images/aliases/\<name\>
 * /1.0/networks
 * /1.0/networks/\<name\>
 * /1.0/operations
 * /1.0/operations/\<uuid\>
 * /1.0/operations/\<uuid\>/wait
 * /1.0/operations/\<uuid\>/websocket
 * /1.0/profiles
 * /1.0/profiles/\<name\>

# API details

## /

### GET
 * Description: List of supported APIs
 * Authentication: guest
 * Operation: sync
 * Return: list of supported API endpoint URLs

Return value:

    [
        "/1.0"
    ]

## /1.0/

### GET
 * Description: Server configuration and environment information
 * Authentication: guest, untrusted or trusted
 * Operation: sync
 * Return: Dict representing server state

Return value (if trusted):

    {
        "api_extensions": [],                   # List of API extensions added after the API was marked stable
        "api_status": "stable",                 # API implementation status (one of, development, stable or deprecated)
        "api_version": "1.0",                   # The API version as a string
        "auth": "trusted",                      # Authentication state, one of "guest", "untrusted" or "trusted"
        "config": {                             # Host configuration
            "core.trust_password": true,
            "core.https_address": "[::]:8443"
        },
        "environment": {                        # Various information about the host (OS, kernel, ...)
            "addresses": [
                "1.2.3.4:8443",
                "[1234::1234]:8443"
            ],
            "architectures": [
                "x86_64",
                "i686"
            ],
            "certificate": "PEM certificate",
            "driver": "lxc",
            "driver_version": "1.0.6",
            "kernel": "Linux",
            "kernel_architecture": "x86_64",
            "kernel_version": "3.16",
            "server": "lxd",
            "server_pid": 10224,
            "server_version": "0.8.1",
            "storage": "btrfs",
            "storage_version": "3.19"
        },
        "public": false,                        # Whether the server should be treated as a public (read-only) remote by the client
    }

Return value (if guest or untrusted):

    {
        "api_extensions": [],                   # List of API extensions added after the API was marked stable
        "api_status": "stable",                 # API implementation status (one of, development, stable or deprecated)
        "api_version": "1.0",                   # The API version as a string
        "auth": "guest",                        # Authentication state, one of "guest", "untrusted" or "trusted"
        "public": false,                        # Whether the server should be treated as a public (read-only) remote by the client
    }

### PUT
 * Description: Updates the server configuration or other properties
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input (replaces any existing config with the provided one):

    {
        "config": {
            "core.trust_password": "my-new-password",
            "storage.zfs_pool_name": "lxd"
        }
    }

## /1.0/certificates

### GET
 * Description: list of trusted certificates
 * Authentication: trusted
 * Operation: sync
 * Return: list of URLs for trusted certificates

Return:

    [
        "/1.0/certificates/3ee64be3c3c7d617a7470e14f2d847081ad467c8c26e1caad841c8f67f7c7b09"
    ]

### POST
 * Description: add a new trusted certificate
 * Authentication: trusted or untrusted
 * Operation: sync
 * Return: standard return value or standard error

Input:

    {
        "type": "client",                       # Certificate type (keyring), currently only client
        "certificate": "PEM certificate",       # If provided, a valid x509 certificate. If not, the client certificate of the connection will be used
        "name": "foo",                          # An optional name for the certificate.
                                                # If nothing is provided, the host in the TLS header for the request is used.
        "password": "server-trust-password"     # The trust password for that server (only required if untrusted)
    }

## /1.0/certificates/\<fingerprint\>

### GET
 * Description: trusted certificate information
 * Authentication: trusted
 * Operation: sync
 * Return: dict representing a trusted certificate

Output:

    {
        "type": "client",
        "certificate": "PEM certificate",
        "fingerprint": "SHA256 Hash of the raw certificate"
    }

### DELETE
 * Description: Remove a trusted certificate
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input (none at present):

    {
    }

HTTP code for this should be 202 (Accepted).

## /1.0/containers

### GET
 * Description: List of containers
 * Authentication: trusted
 * Operation: sync
 * Return: list of URLs for containers this server publishes

Return value:

    [
        "/1.0/containers/blah",
        "/1.0/containers/blah1"
    ]

### POST
 * Description: Create a new container
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input (container based on a local image with the "ubuntu/devel" alias):

    {
        "name": "my-new-container",             # 64 chars max, ASCII, no slash, no colon and no comma
        "architecture": "x86_64",
        "profiles": ["default"],                # List of profiles
        "ephemeral": true,                      # Whether to destroy the container on shutdown
        "config": {"limits.cpu": "2"},          # Config override.
        "devices": {                            # optional list of devices the container should have
            "rootfs": {
                "path": "/dev/kvm",
                "type": "unix-char"
            },
        },
        "source": {"type": "image",             # Can be: "image", "migration", "copy" or "none"
                   "alias": "ubuntu/devel"},    # Name of the alias
    }

Input (container based on a local image identified by its fingerprint):

    {
        "name": "my-new-container",             # 64 chars max, ASCII, no slash, no colon and no comma
        "architecture": "x86_64",
        "profiles": ["default"],                # List of profiles
        "ephemeral": true,                      # Whether to destroy the container on shutdown
        "config": {"limits.cpu": "2"},          # Config override.
        "devices": {                            # optional list of devices the container should have
            "rootfs": {
                "path": "/dev/kvm",
                "type": "unix-char"
            },
        },
        "source": {"type": "image",             # Can be: "image", "migration", "copy" or "none"
                   "fingerprint": "SHA-256"},   # Fingerprint
    }

Input (container based on most recent match based on image properties):

    {
        "name": "my-new-container",             # 64 chars max, ASCII, no slash, no colon and no comma
        "architecture": "x86_64",
        "profiles": ["default"],                # List of profiles
        "ephemeral": true,                      # Whether to destroy the container on shutdown
        "config": {"limits.cpu": "2"},          # Config override.
        "devices": {                            # optional list of devices the container should have
            "rootfs": {
                "path": "/dev/kvm",
                "type": "unix-char"
            },
        },
        "source": {"type": "image",             # Can be: "image", "migration", "copy" or "none"
                   "properties": {              # Properties
                        "os": "ubuntu",
                        "release": "14.04",
                        "architecture": "x86_64"
                    }},
    }

Input (container without a pre-populated rootfs, useful when attaching to an existing one):

    {
        "name": "my-new-container",             # 64 chars max, ASCII, no slash, no colon and no comma
        "architecture": "x86_64",
        "profiles": ["default"],                # List of profiles
        "ephemeral": true,                      # Whether to destroy the container on shutdown
        "config": {"limits.cpu": "2"},          # Config override.
        "devices": {                            # optional list of devices the container should have
            "rootfs": {
                "path": "/dev/kvm",
                "type": "unix-char"
            },
        },
        "source": {"type": "none"},             # Can be: "image", "migration", "copy" or "none"
    }

Input (using a public remote image):

    {
        "name": "my-new-container",             # 64 chars max, ASCII, no slash, no colon and no comma
        "architecture": "x86_64",
        "profiles": ["default"],                # List of profiles
        "ephemeral": true,                      # Whether to destroy the container on shutdown
        "config": {"limits.cpu": "2"},          # Config override.
        "devices": {                            # optional list of devices the container should have
            "rootfs": {
                "path": "/dev/kvm",
                "type": "unix-char"
            },
        },
        "source": {"type": "image",                     # Can be: "image", "migration", "copy" or "none"
                   "mode": "pull",                      # One of "local" (default) or "pull"
                   "server": "https://10.0.2.3:8443",   # Remote server (pull mode only)
                   "protocol": "lxd",                   # Protocol (one of lxd or simplestreams, defaults to lxd)
                   "certificate": "PEM certificate",    # Optional PEM certificate. If not mentioned, system CA is used.
                   "alias": "ubuntu/devel"},            # Name of the alias
    }

Input (using a private remote image after having obtained a secret for that image):

    {
        "name": "my-new-container",             # 64 chars max, ASCII, no slash, no colon and no comma
        "architecture": "x86_64",
        "profiles": ["default"],                # List of profiles
        "ephemeral": true,                      # Whether to destroy the container on shutdown
        "config": {"limits.cpu": "2"},          # Config override.
        "devices": {                            # optional list of devices the container should have
            "rootfs": {
                "path": "/dev/kvm",
                "type": "unix-char"
            },
        },
        "source": {"type": "image",                     # Can be: "image", "migration", "copy" or "none"
                   "mode": "pull",                      # One of "local" (default) or "pull"
                   "server": "https://10.0.2.3:8443",   # Remote server (pull mode only)
                   "secret": "my-secret-string",        # Secret to use to retrieve the image (pull mode only)
                   "certificate": "PEM certificate",    # Optional PEM certificate. If not mentioned, system CA is used.
                   "alias": "ubuntu/devel"},            # Name of the alias
    }

Input (using a remote container, sent over the migration websocket):

    {
        "name": "my-new-container",             # 64 chars max, ASCII, no slash, no colon and no comma
        "architecture": "x86_64",
        "profiles": ["default"],                # List of profiles
        "ephemeral": true,                      # Whether to destroy the container on shutdown
        "config": {"limits.cpu": "2"},          # Config override.
        "devices": {                            # optional list of devices the container should have
            "rootfs": {
                "path": "/dev/kvm",
                "type": "unix-char"
            },
        },
        "source": {"type": "migration",                                         # Can be: "image", "migration", "copy" or "none"
                   "mode": "pull",                                              # Only "pull" is supported for now
                   "operation": "https://10.0.2.3:8443/1.0/operations/<UUID>",  # Full URL to the remote operation (pull mode only)
                   "certificate": "PEM certificate",                            # Optional PEM certificate. If not mentioned, system CA is used.
                   "base-image": "",                                            # Optional, the base image the container was created from
                   "secrets": {"control": "my-secret-string",                   # Secrets to use when talking to the migration source
                               "criu": "my-other-secret",
                               "fs": "my third secret"}},
    }

Input (using a local container):

    {
        "name": "my-new-container",             # 64 chars max, ASCII, no slash, no colon and no comma
        "architecture": "x86_64",
        "profiles": ["default"],                # List of profiles
        "ephemeral": true,                      # Whether to destroy the container on shutdown
        "config": {"limits.cpu": "2"},          # Config override.
        "source": {"type": "copy",                      # Can be: "image", "migration", "copy" or "none"
                   "source": "my-old-container"}        # Name of the source container
    }

## /1.0/containers/\<name\>

### GET
 * Description: Container information
 * Authentication: trusted
 * Operation: sync
 * Return: dict of the container configuration and current state.
Output:

    {
        "architecture": "x86_64",
        "config": {
            "limits.cpu": "3",
            "volatile.base_image": "97d97a3d1d053840ca19c86cdd0596cf1be060c5157d31407f2a4f9f350c78cc",
            "volatile.eth0.hwaddr": "00:16:3e:1c:94:38"
        },
        "created_at": "2016-02-16T01:05:05Z",
        "devices": {
            "rootfs": {
                "path": "/",
                "type": "disk"
            }
        },
        "ephemeral": false,
        "expanded_config": {        # the result of expanding profiles and adding the container's local config
            "limits.cpu": "3",
            "volatile.base_image": "97d97a3d1d053840ca19c86cdd0596cf1be060c5157d31407f2a4f9f350c78cc",
            "volatile.eth0.hwaddr": "00:16:3e:1c:94:38"
        },
        "expanded_devices": {       # the result of expanding profiles and adding the container's local devices
            "eth0": {
                "name": "eth0",
                "nictype": "bridged",
                "parent": "lxdbr0",
                "type": "nic"
            },
            "root": {
                "path": "/",
                "type": "disk"
            }
        },
        "name": "my-container",
        "profiles": [
            "default"
        ],
        "stateful": false,          # If true, indicates that the container has some stored state that can be restored on startup
        "status": "Running",
        "status_code": 103
    }

### PUT
 * Description: update container configuration or restore snapshot
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input (update container configuration):

    {
        "architecture": "x86_64",
        "config": {
            "limits.cpu": "4",
            "volatile.base_image": "97d97a3d1d053840ca19c86cdd0596cf1be060c5157d31407f2a4f9f350c78cc",
            "volatile.eth0.hwaddr": "00:16:3e:1c:94:38"
        },
        "devices": {
            "rootfs": {
                "path": "/",
                "type": "disk"
            }
        },
        "ephemeral": true,
        "profiles": [
            "default"
        ]
    }

Takes the same structure as that returned by GET but doesn't allow name
changes (see POST below) or changes to the status sub-dict (since that's
read-only).

Input (restore snapshot):

    {
        "restore": "snapshot-name"
    }

### POST
 * Description: used to rename/migrate the container
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Renaming to an existing name must return the 409 (Conflict) HTTP code.
Input (simple rename):

    {
        "name": "new-name"
    }

Input (migration across lxd instances):

    {
        "migration": true
    }

The migration does not actually start until someone (i.e. another lxd
instance) connects to all the websockets and begins negotiation with the
source.

Output in metadata section (for migration):

    {
        "control": "secret1",       # Migration control socket
        "criu": "secret2",          # State transfer socket (only if live migrating)
        "fs": "secret3"             # Filesystem transfer socket
    }

These are the secrets that should be passed to the create call.

### DELETE
 * Description: remove the container
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input (none at present):

    {
    }

HTTP code for this should be 202 (Accepted).

## /1.0/containers/\<name\>/state

### GET
 * Description: current state
 * Authentication: trusted
 * Operation: sync
 * Return: dict representing current state

Output:

    {
        "type": "sync",
        "status": "Success",
        "status_code": 200,
        "metadata": {
            "status": "Running",
            "status_code": 103,
            "disk": {
                "root": {
                    "usage": 422330368
                }
            },
            "memory": {
                "usage": 51126272,
                "usage_peak": 70246400,
                "swap_usage": 0,
                "swap_usage_peak": 0
            },
            "network": {
                "eth0": {
                    "addresses": [
                        {
                            "family": "inet",
                            "address": "10.0.3.27",
                            "netmask": "24",
                            "scope": "global"
                        },
                        {
                            "family": "inet6",
                            "address": "fe80::216:3eff:feec:65a8",
                            "netmask": "64",
                            "scope": "link"
                        }
                    ],
                    "counters": {
                        "bytes_received": 33942,
                        "bytes_sent": 30810,
                        "packets_received": 402,
                        "packets_sent": 178
                    },
                    "hwaddr": "00:16:3e:ec:65:a8",
                    "host_name": "vethBWTSU5",
                    "mtu": 1500,
                    "state": "up",
                    "type": "broadcast"
                },
                "lo": {
                    "addresses": [
                        {
                            "family": "inet",
                            "address": "127.0.0.1",
                            "netmask": "8",
                            "scope": "local"
                        },
                        {
                            "family": "inet6",
                            "address": "::1",
                            "netmask": "128",
                            "scope": "local"
                        }
                    ],
                    "counters": {
                        "bytes_received": 86816,
                        "bytes_sent": 86816,
                        "packets_received": 1226,
                        "packets_sent": 1226
                    },
                    "hwaddr": "",
                    "host_name": "",
                    "mtu": 65536,
                    "state": "up",
                    "type": "loopback"
                },
                "lxdbr0": {
                    "addresses": [
                        {
                            "family": "inet",
                            "address": "10.0.3.1",
                            "netmask": "24",
                            "scope": "global"
                        },
                        {
                            "family": "inet6",
                            "address": "fe80::68d4:87ff:fe40:7769",
                            "netmask": "64",
                            "scope": "link"
                        }
                    ],
                    "counters": {
                        "bytes_received": 0,
                        "bytes_sent": 570,
                        "packets_received": 0,
                        "packets_sent": 7
                    },
                    "hwaddr": "6a:d4:87:40:77:69",
                    "host_name": "",
                    "mtu": 1500,
                    "state": "up",
                    "type": "broadcast"
                },
                "zt0": {
                    "addresses": [
                        {
                            "family": "inet",
                            "address": "29.17.181.59",
                            "netmask": "7",
                            "scope": "global"
                        },
                        {
                            "family": "inet6",
                            "address": "fd80:56c2:e21c:0:199:9379:e711:b3e1",
                            "netmask": "88",
                            "scope": "global"
                        },
                        {
                            "family": "inet6",
                            "address": "fe80::79:e7ff:fe0d:5123",
                            "netmask": "64",
                            "scope": "link"
                        }
                    ],
                    "counters": {
                        "bytes_received": 0,
                        "bytes_sent": 806,
                        "packets_received": 0,
                        "packets_sent": 9
                    },
                    "hwaddr": "02:79:e7:0d:51:23",
                    "host_name": "",
                    "mtu": 2800,
                    "state": "up",
                    "type": "broadcast"
                }
            },
            "pid": 13663,
            "processes": 32
        }
    }

### PUT
 * Description: change the container state
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input:

    {
        "action": "stop",       # State change action (stop, start, restart, freeze or unfreeze)
        "timeout": 30,          # A timeout after which the state change is considered as failed
        "force": true,          # Force the state change (currently only valid for stop and restart where it means killing the container)
        "stateful": true        # Whether to store or restore runtime state before stopping or starting (only valid for stop and start, defaults to false)
    }

## /1.0/containers/\<name\>/files

### GET (?path=/path/inside/the/container)
 * Description: download a file from the container
 * Authentication: trusted
 * Operation: sync
 * Return: Raw file or standard error

The following headers will be set (on top of standard size and mimetype headers):

 * X-LXD-uid: 0
 * X-LXD-gid: 0
 * X-LXD-mode: 0700

This is designed to be easily usable from the command line or even a web
browser.
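A downloading client will typically want to restore ownership and permissions from those headers. A small sketch (the helper name is illustrative; the X-LXD-\* header names are the ones documented above):

```python
# Convert the X-LXD-* file-transfer headers into numeric ownership and
# permissions, falling back to the documented example defaults.
def parse_file_headers(headers):
    return {
        "uid": int(headers.get("X-LXD-uid", "0")),
        "gid": int(headers.get("X-LXD-gid", "0")),
        "mode": int(headers.get("X-LXD-mode", "0700"), 8),  # mode is an octal string
    }
```

The resulting dict can be fed straight to `os.chown()` and `os.chmod()` after writing the file locally.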
### POST (?path=/path/inside/the/container)
 * Description: upload a file to the container
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input:
 * Standard http file upload

The following headers may be set by the client:

 * X-LXD-uid: 0
 * X-LXD-gid: 0
 * X-LXD-mode: 0700

This is designed to be easily usable from the command line or even a web
browser.

## /1.0/containers/\<name\>/snapshots

### GET
 * Description: List of snapshots
 * Authentication: trusted
 * Operation: sync
 * Return: list of URLs for snapshots for this container

Return value:

    [
        "/1.0/containers/blah/snapshots/snap0"
    ]

### POST
 * Description: create a new snapshot
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input:

    {
        "name": "my-snapshot",          # Name of the snapshot
        "stateful": true                # Whether to include state too
    }

## /1.0/containers/\<name\>/snapshots/\<name\>

### GET
 * Description: Snapshot information
 * Authentication: trusted
 * Operation: sync
 * Return: dict representing the snapshot

Return:

    {
        "architecture": "x86_64",
        "config": {
            "security.nesting": "true",
            "volatile.base_image": "a49d26ce5808075f5175bf31f5cb90561f5023dcd408da8ac5e834096d46b2d8",
            "volatile.eth0.hwaddr": "00:16:3e:ec:65:a8",
            "volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]",
        },
        "created_at": "2016-03-08T23:55:08Z",
        "devices": {
            "eth0": {
                "name": "eth0",
                "nictype": "bridged",
                "parent": "lxdbr0",
                "type": "nic"
            },
            "root": {
                "path": "/",
                "type": "disk"
            },
        },
        "ephemeral": false,
        "expanded_config": {
            "security.nesting": "true",
            "volatile.base_image": "a49d26ce5808075f5175bf31f5cb90561f5023dcd408da8ac5e834096d46b2d8",
            "volatile.eth0.hwaddr": "00:16:3e:ec:65:a8",
            "volatile.last_state.idmap":
                "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]",
        },
        "expanded_devices": {
            "eth0": {
                "name": "eth0",
                "nictype": "bridged",
                "parent": "lxdbr0",
                "type": "nic"
            },
            "root": {
                "path": "/",
                "type": "disk"
            },
        },
        "name": "zerotier/blah",
        "profiles": [
            "default"
        ],
        "stateful": false
    }

### POST
 * Description: used to rename/migrate the snapshot
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input (rename the snapshot):

    {
        "name": "new-name"
    }

Input (setup the migration source):

    {
        "migration": true,
    }

Return (with migration=true):

    {
        "control": "secret1",       # Migration control socket
        "fs": "secret3"             # Filesystem transfer socket
    }

Renaming to an existing name must return the 409 (Conflict) HTTP code.

### DELETE
 * Description: remove the snapshot
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input (none at present):

    {
    }

HTTP code for this should be 202 (Accepted).

## /1.0/containers/\<name\>/exec

### POST
 * Description: run a remote command
 * Authentication: trusted
 * Operation: async
 * Return: background operation + optional websocket information or standard error

Input (run bash):

    {
        "command": ["/bin/bash"],       # Command and arguments
        "environment": {},              # Optional extra environment variables to set
        "wait-for-websocket": false,    # Whether to wait for a connection before starting the process
        "interactive": true,            # Whether to allocate a pts device instead of PIPEs
        "width": 80,                    # Initial width of the terminal (optional)
        "height": 25,                   # Initial height of the terminal (optional)
    }

`wait-for-websocket` indicates whether the operation should block and
wait for a websocket connection to start (so that users can pass stdin
and read stdout), or simply run to completion with /dev/null as stdin
and stdout.
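Assembling this request body can be sketched as follows; the helper name and defaults are illustrative, and treating width/height as interactive-only is a choice of this sketch (the API simply marks them optional):

```python
# Build the body for the exec POST described above.
def exec_payload(command, interactive=False, wait_for_websocket=False,
                 environment=None, width=80, height=25):
    body = {
        "command": command,
        "environment": environment or {},
        "wait-for-websocket": wait_for_websocket,
        "interactive": interactive,
    }
    if interactive:
        # Terminal geometry is only meaningful when a pts device is allocated.
        body["width"] = width
        body["height"] = height
    return body
```

For example, `exec_payload(["/bin/bash"], interactive=True, wait_for_websocket=True)` produces a body equivalent to the "run bash" input above with `wait-for-websocket` flipped on.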
If interactive is set to true, a single websocket is returned and is
mapped to a pts device for stdin, stdout and stderr of the execed
process.

If interactive is set to false (default), three pipes will be setup, one
for each of stdin, stdout and stderr.

Depending on the state of the interactive flag, one or three different
websocket/secret pairs will be returned, which are valid for connecting
to this operations /websocket endpoint.

Return (with wait-for-websocket=true and interactive=false):

    {
        "fds": {
            "0": "f5b6c760c0aa37a6430dd2a00c456430282d89f6e1661a077a926ed1bf3d1c21",
            "1": "464dcf9f8fdce29d0d6478284523a9f26f4a31ae365d94cd38bac41558b797cf",
            "2": "25b70415b686360e3b03131e33d6d94ee85a7f19b0f8d141d6dca5a1fc7b00eb",
            "control": "20c479d9532ab6d6c3060f6cdca07c1f177647c9d96f0c143ab61874160bd8a5"
        }
    }

Return (with wait-for-websocket=true and interactive=true):

    {
        "fds": {
            "0": "f5b6c760c0aa37a6430dd2a00c456430282d89f6e1661a077a926ed1bf3d1c21",
            "control": "20c479d9532ab6d6c3060f6cdca07c1f177647c9d96f0c143ab61874160bd8a5"
        }
    }

When the exec command finishes, its exit status is available from the
operation's metadata:

    {
        "return": 0
    }

## /1.0/containers/\<name\>/logs

### GET
 * Description: Returns a list of the log files available for this container.
   Note that this works on containers that have been deleted (or were never
   created) to enable people to get logs for failed creations.
 * Authentication: trusted
 * Operation: Sync
 * Return: a list of the available log files

Return:

    [
        "/1.0/containers/blah/logs/forkstart.log",
        "/1.0/containers/blah/logs/lxc.conf",
        "/1.0/containers/blah/logs/lxc.log"
    ]

## /1.0/containers/\<name\>/logs/\<logfile\>

### GET
 * Description: returns the contents of a particular log file.
 * Authentication: trusted
 * Operation: N/A
 * Return: the contents of the log file

### DELETE
 * Description: delete a particular log file.
 * Authentication: trusted
 * Operation: Sync
 * Return: empty response or standard error

## /1.0/events

This URL isn't a real REST API endpoint, instead doing a GET query on it
will upgrade the connection to a websocket on which notifications will
be sent.

### GET (?type=operation,logging)
 * Description: websocket upgrade
 * Authentication: trusted
 * Operation: sync
 * Return: none (never ending flow of events)

Supported arguments are:

 * type: comma separated list of notifications to subscribe to (defaults to all)

The notification types are:

 * operation (notification about creation, updates and termination of all background operations)
 * logging (every log entry from the server)

This never returns. Each notification is sent as a separate JSON dict:

    {
        "timestamp": "2015-06-09T19:07:24.379615253-06:00",     # Current timestamp
        "type": "operation",                                    # Notification type
        "metadata": {}                                          # Extra resource or type specific metadata
    }

    {
        "timestamp": "2016-02-17T11:44:28.572721913-05:00",
        "type": "logging",
        "metadata": {
            "context": {
                "ip": "@",
                "method": "GET",
                "url": "/1.0/containers/xen/snapshots"
            },
            "level": "info",
            "message": "handling"
        }
    }

## /1.0/images

### GET
 * Description: list of images (public or private)
 * Authentication: guest or trusted
 * Operation: sync
 * Return: list of URLs for images this server publishes

Return:

    [
        "/1.0/images/54c8caac1f61901ed86c68f24af5f5d3672bdc62c71d04f06df3a59e95684473",
        "/1.0/images/97d97a3d1d053840ca19c86cdd0596cf1be060c5157d31407f2a4f9f350c78cc",
        "/1.0/images/a49d26ce5808075f5175bf31f5cb90561f5023dcd408da8ac5e834096d46b2d8",
        "/1.0/images/c9b6e738fae75286d52f497415463a8ecc61bbcb046536f220d797b0e500a41f"
    ]

### POST
 * Description: create and publish a new image
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input (one of):

 * Standard http file upload
 * Source image dictionary (transfers a remote image)
 * Source container dictionary (makes an image out of a local container)
 * Remote image URL
   dictionary (downloads a remote image)

In the http file upload case, the following headers may be set by the client:

 * X-LXD-fingerprint: SHA-256 (if set, uploaded file must match)
 * X-LXD-filename: FILENAME (used for export)
 * X-LXD-public: true/false (defaults to false)
 * X-LXD-properties: URL-encoded key value pairs without duplicate keys (optional properties)

In the source image case, the following dict must be used:

    {
        "filename": filename,                   # Used for export (optional)
        "public": true,                         # Whether the image can be downloaded by untrusted users (defaults to false)
        "auto_update": true,                    # Whether the image should be auto-updated (optional; defaults to false)
        "properties": {                         # Image properties (optional, applied on top of source properties)
            "os": "Ubuntu"
        },
        "source": {
            "type": "image",
            "mode": "pull",                     # Only pull is supported for now
            "server": "https://10.0.2.3:8443",  # Remote server (pull mode only)
            "protocol": "lxd",                  # Protocol (one of lxd or simplestreams, defaults to lxd)
            "secret": "my-secret-string",       # Secret (pull mode only, private images only)
            "certificate": "PEM certificate",   # Optional PEM certificate. If not mentioned, system CA is used.
            "fingerprint": "SHA256",            # Fingerprint of the image (must be set if alias isn't)
            "alias": "ubuntu/devel",            # Name of the alias (must be set if fingerprint isn't)
        }
    }

In the source container case, the following dict must be used:

    {
        "filename": filename,       # Used for export (optional)
        "public": true,             # Whether the image can be downloaded by untrusted users (defaults to false)
        "properties": {             # Image properties (optional)
            "os": "Ubuntu"
        },
        "source": {
            "type": "container",    # One of "container" or "snapshot"
            "name": "abc"
        }
    }

In the remote image URL case, the following dict must be used:

    {
        "filename": filename,       # Used for export (optional)
        "public": true,             # Whether the image can be downloaded by untrusted users (defaults to false)
        "properties": {             # Image properties (optional)
            "os": "Ubuntu"
        },
        "source": {
            "type": "url",
            "url": "https://www.some-server.com/image"      # URL for the image
        }
    }

After the input is received by LXD, a background operation is started
which will add the image to the store and possibly do some backend
filesystem-specific optimizations.
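For the file-upload variant, the X-LXD-properties header carries URL-encoded key/value pairs; a sketch of producing it with the standard library (the helper name is illustrative):

```python
import urllib.parse

# Encode optional image properties for the X-LXD-properties upload header.
def properties_header(props):
    # urlencode produces "key=value" pairs joined by "&", with values escaped.
    return urllib.parse.urlencode(props)
```

For example, `properties_header({"os": "Ubuntu", "release": "trusty"})` yields `os=Ubuntu&release=trusty`.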
## /1.0/images/\<fingerprint\>

### GET (optional ?secret=SECRET)
 * Description: Image description and metadata
 * Authentication: guest or trusted
 * Operation: sync
 * Return: dict representing an image's properties

Output:

    {
        "aliases": [
            {
                "name": "trusty",
                "description": "",
            }
        ],
        "architecture": "x86_64",
        "auto_update": true,
        "cached": false,
        "fingerprint": "54c8caac1f61901ed86c68f24af5f5d3672bdc62c71d04f06df3a59e95684473",
        "filename": "ubuntu-trusty-14.04-amd64-server-20160201.tar.xz",
        "properties": {
            "architecture": "x86_64",
            "description": "Ubuntu 14.04 LTS server (20160201)",
            "os": "ubuntu",
            "release": "trusty"
        },
        "update_source": {
            "server": "https://10.1.2.4:8443",
            "protocol": "lxd",
            "certificate": "PEM certificate",
            "alias": "ubuntu/trusty/amd64"
        },
        "public": false,
        "size": 123792592,
        "created_at": "2016-02-01T21:07:41Z",
        "expires_at": "1970-01-01T00:00:00Z",
        "last_used_at": "1970-01-01T00:00:00Z",
        "uploaded_at": "2016-02-16T00:44:47Z"
    }

### DELETE
 * Description: Remove an image
 * Authentication: trusted
 * Operation: async
 * Return: background operation or standard error

Input (none at present):

    {
    }

HTTP code for this should be 202 (Accepted).

### PUT
 * Description: Updates the image properties
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input:

    {
        "auto_update": true,
        "properties": {
            "architecture": "x86_64",
            "description": "Ubuntu 14.04 LTS server (20160201)",
            "os": "ubuntu",
            "release": "trusty"
        },
        "public": true,
    }

## /1.0/images/\<fingerprint\>/export

### GET (optional ?secret=SECRET)
 * Description: Download the image tarball
 * Authentication: guest or trusted
 * Operation: sync
 * Return: Raw file or standard error

The secret string is required when an untrusted LXD is spawning a new
container from a private image stored on a different LXD.

Rather than require a trust relationship between the two LXDs, the
client will POST to /1.0/images/\<fingerprint\>/secret to get a secret
token which it'll then pass to the target LXD.
That target LXD will then GET the image as a guest, passing the secret
token.

## /1.0/images/\<fingerprint\>/secret

### POST
 * Description: Generate a random token and tell LXD to expect it to be used by a guest
 * Authentication: guest or trusted
 * Operation: async
 * Return: background operation or standard error

Input:

    {
    }

Return:

    {
        "secret": "52e9ec5885562aa24d05d7b4846ebb8b5f1f7bf5cd6e285639b569d9eaf54c9b"
    }

Standard background operation with "secret" set to the generated secret
string in metadata.

The secret is automatically invalidated 5s after an image URL using it
has been accessed. This allows both retrieving the image information and
then hitting /export with the same secret.

## /1.0/images/aliases

### GET
 * Description: list of aliases (public or private based on image visibility)
 * Authentication: guest or trusted
 * Operation: sync
 * Return: list of URLs for aliases this server knows about

Return:

    [
        "/1.0/images/aliases/sl6",
        "/1.0/images/aliases/trusty",
        "/1.0/images/aliases/xenial"
    ]

### POST
 * Description: create a new alias
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input:

    {
        "description": "The alias description",
        "target": "SHA-256",
        "name": "alias-name"
    }

## /1.0/images/aliases/\<name\>

### GET
 * Description: Alias description and target
 * Authentication: guest or trusted
 * Operation: sync
 * Return: dict representing an alias description and target

Output:

    {
        "name": "test",
        "description": "my description",
        "target": "c9b6e738fae75286d52f497415463a8ecc61bbcb046536f220d797b0e500a41f"
    }

### PUT
 * Description: Updates the alias target or description
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input:

    {
        "description": "New description",
        "target": "54c8caac1f61901ed86c68f24af5f5d3672bdc62c71d04f06df3a59e95684473"
    }

### POST
 * Description: rename an alias
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input:

    {
        "name": "new-name"
    }

Renaming
to an existing name must return the 409 (Conflict) HTTP code.

### DELETE
 * Description: Remove an alias
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input (none at present):

    {
    }

## /1.0/networks
### GET
 * Description: list of networks
 * Authentication: trusted
 * Operation: sync
 * Return: list of URLs for networks that are currently defined on the host

    [
        "/1.0/networks/eth0",
        "/1.0/networks/lxdbr0"
    ]

## /1.0/networks/\<name\>
### GET
 * Description: information about a network
 * Authentication: trusted
 * Operation: sync
 * Return: dict representing a network

    {
        "name": "lxdbr0",
        "type": "bridge",
        "used_by": [
            "/1.0/containers/blah"
        ]
    }

## /1.0/operations
### GET
 * Description: list of operations
 * Authentication: trusted
 * Operation: sync
 * Return: list of URLs for operations that are currently going on/queued

    [
        "/1.0/operations/c0fc0d0d-a997-462b-842b-f8bd0df82507",
        "/1.0/operations/092a8755-fd90-4ce4-bf91-9f87d03fd5bc"
    ]

## /1.0/operations/\<uuid\>
### GET
 * Description: background operation
 * Authentication: trusted
 * Operation: sync
 * Return: dict representing a background operation

Return:

    {
        "id": "b8d84888-1dc2-44fd-b386-7f679e171ba5",
        "class": "token",                                       # One of "task" (background task), "websocket" (set of websockets and credentials) or "token" (temporary credentials)
        "created_at": "2016-02-17T16:59:27.237628195-05:00",    # Creation timestamp
        "updated_at": "2016-02-17T16:59:27.237628195-05:00",    # Last update timestamp
        "status": "Running",
        "status_code": 103,
        "resources": {                                          # List of affected resources
            "images": [
                "/1.0/images/54c8caac1f61901ed86c68f24af5f5d3672bdc62c71d04f06df3a59e95684473"
            ]
        },
        "metadata": {                                           # Extra information about the operation (action, target, ...)
            "secret": "c9209bee6df99315be1660dd215acde4aec89b8e5336039712fc11008d918b0d"
        },
        "may_cancel": true,                                     # Whether it's possible to cancel the operation (DELETE)
        "err": ""
    }

### DELETE
 * Description: cancel an operation.
   Calling this will change the state to "cancelling" rather than actually removing the entry.
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input (none at present):

    {
    }

HTTP code for this should be 202 (Accepted).

## /1.0/operations/\<uuid\>/wait
### GET (optional ?timeout=30)
 * Description: Wait for an operation to finish
 * Authentication: trusted
 * Operation: sync
 * Return: dict of the operation after it's reached its final state

Input (wait indefinitely for a final state): no argument

Input (similar but times out after 30s): ?timeout=30

## /1.0/operations/\<uuid\>/websocket
### GET (?secret=SECRET)
 * Description: This connection is upgraded into a websocket connection
   speaking the protocol defined by the operation type. For example, in the
   case of an exec operation, the websocket is the bidirectional pipe for
   stdin/stdout/stderr to flow to and from the process inside the container.
   In the case of migration, it will be the primary interface over which the
   migration information is communicated. The secret here is the one that was
   provided when the operation was created. Guests are allowed to connect
   provided they have the right secret.
 * Authentication: guest or trusted
 * Operation: sync
 * Return: websocket stream or standard error

## /1.0/profiles
### GET
 * Description: List of configuration profiles
 * Authentication: trusted
 * Operation: sync
 * Return: list of URLs to defined profiles

Return:

    [
        "/1.0/profiles/default"
    ]

### POST
 * Description: define a new profile
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input:

    {
        "name": "my-profilename",
        "description": "Some description string",
        "config": {
            "limits.memory": "2GB"
        },
        "devices": {
            "kvm": {
                "type": "unix-char",
                "path": "/dev/kvm"
            }
        }
    }

## /1.0/profiles/\<name\>
### GET
 * Description: profile configuration
 * Authentication: trusted
 * Operation: sync
 * Return: dict representing the profile content

Output:

    {
        "name": "test",
        "description": "Some description string",
        "config": {
            "limits.memory": "2GB"
        },
        "devices": {
            "kvm": {
                "path": "/dev/kvm",
                "type": "unix-char"
            }
        }
    }

### PUT
 * Description: update the profile
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input:

    {
        "config": {
            "limits.memory": "4GB"
        },
        "description": "Some description string",
        "devices": {
            "kvm": {
                "path": "/dev/kvm",
                "type": "unix-char"
            }
        }
    }

Same dict as used for initial creation and coming from GET. The name
property can't be changed (see POST below for that).

### POST
 * Description: rename a profile
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input (rename a profile):

    {
        "name": "new-name"
    }

HTTP return value must be 204 (No content) and Location must point to the
renamed resource.

Renaming to an existing name must return the 409 (Conflict) HTTP code.

### DELETE
 * Description: remove a profile
 * Authentication: trusted
 * Operation: sync
 * Return: standard return value or standard error

Input (none at present):

    {
    }

HTTP code for this should be 202 (Accepted).
lxd-2.0.2/doc/storage-backends.md000066400000000000000000000070411272140510300166020ustar00rootroot00000000000000# Storage Backends and supported functions
## Feature comparison

LXD supports using plain dirs, Btrfs, LVM, and ZFS for storage of images and containers.
Where possible, LXD tries to use the advanced features of each system to optimize operations.

Feature                                     | Directory | Btrfs | LVM   | ZFS
:---                                        | :---      | :---  | :---  | :---
Optimized image storage                     | no        | yes   | yes   | yes
Optimized container creation                | no        | yes   | yes   | yes
Optimized snapshot creation                 | no        | yes   | yes   | yes
Optimized image transfer                    | no        | yes   | no    | yes
Optimized container transfer                | no        | yes   | no    | yes
Copy on write                               | no        | yes   | yes   | yes
Block based                                 | no        | no    | yes   | no
Instant cloning                             | no        | yes   | yes   | yes
Nesting support                             | yes       | yes   | no    | no
Restore from older snapshots (not latest)   | yes       | yes   | yes   | no
Storage quotas                              | no        | yes   | no    | yes

## Mixed storage
When switching storage backend after some containers or images already exist, LXD will create any new container using the new backend and convert older images to the new backend as needed.

## Non-optimized container transfer
When the filesystem on the source and target hosts differs, or when there is no faster way, rsync is used to transfer the container content across.

## Notes
### Directory

 - The directory backend is the fallback backend when nothing else is configured or detected.
 - While this backend is fully functional, it's also much slower than all the others due to it having to unpack images or do instant copies of containers, snapshots and images.

### Btrfs

 - The btrfs backend is automatically used if /var/lib/lxd is on a btrfs filesystem.
 - Uses a subvolume per container, image and snapshot, creating btrfs snapshots when creating a new object.

### LVM

 - An LVM VG must be created and then storage.lvm\_vg\_name set to point to it.
 - If a thinpool doesn't already exist, one will be created; the name of the thinpool can be set with storage.lvm\_thinpool\_name.
 - Uses LVs for images, then LV snapshots for containers and container snapshots.
 - The filesystem used for the LVs is ext4 (can be configured to use xfs instead).
 - LVs are created with a default size of 10GiB (the size can be configured).

### ZFS

 - LXD can use any zpool or part of a zpool. storage.zfs\_pool\_name must be set to the path to be used.
 - ZFS doesn't have to be (and shouldn't be) mounted on /var/lib/lxd
 - Uses ZFS filesystems for images, then snapshots and clones to create containers and snapshots.
 - Due to the way copy-on-write works in ZFS, parent filesystems can't be removed until all children are gone. As a result, LXD will automatically rename any removed but still referenced object to a random deleted/ path and keep it until such time as the references are gone and it can safely be removed.
 - ZFS as it is today doesn't support delegating part of a pool to a container user. Upstream is actively working on this.
 - ZFS doesn't support restoring from snapshots other than the latest one. You can however create new containers from older snapshots, which makes it possible to confirm the snapshot is indeed what you want to restore before you remove the newer snapshots.
lxd-2.0.2/doc/userns-idmap.md000066400000000000000000000036441272140510300160020ustar00rootroot00000000000000# Introduction
LXD runs safe containers. This is achieved mostly through the use of
user namespaces which make it possible to run containers unprivileged,
greatly limiting the attack surface.

User namespaces work by mapping a set of uids and gids on the host to a
set of uids and gids in the container.

For example, we can define that the host uids and gids from 100000 to
165535 may be used by LXD and should be mapped to uid/gid 0 through
65535 in the container.

As a result a process running as uid 0 in the container will actually be
running as uid 100000.
Allocations should always be of at least 65536 uids and gids to cover the POSIX range including root (0) and nobody (65534). To simplify things, at this point, we will only deal with identical allocations for uids and gids and only support a single contiguous range per container. # Kernel support User namespaces require a kernel >= 3.12, LXD will start even on older kernels but will refuse to start containers. # Allowed ranges On most hosts, LXD will check /etc/subuid and /etc/subgid for allocations for the "lxd" user and on first start, set the default profile to use the first 65536 uids and gids from that range. If the range is shorter than 65536 (which includes no range at all), then LXD will fail to create or start any container until this is corrected. If some but not all of /etc/subuid, /etc/subgid, newuidmap (path lookup) and newgidmap (path lookup) can't be found on the system, LXD will fail the startup of any container until this is corrected as this shows a broken shadow setup. If none of those 4 files can be found, then LXD will assume it's running on a host using an old version of shadow. In this mode, LXD will assume it can use any uids and gids above 65535 and will take the first 65536 as its default map. # Varying ranges between hosts The source map is sent when moving containers between hosts so that they can be remapped on the receiving host. lxd-2.0.2/fuidshift/000077500000000000000000000000001272140510300142625ustar00rootroot00000000000000lxd-2.0.2/fuidshift/Makefile000066400000000000000000000001511272140510300157170ustar00rootroot00000000000000# we let go build figure out dependency changes .PHONY: fuidmap lxc: go build clean: -rm -f fuidshift lxd-2.0.2/fuidshift/main.go000066400000000000000000000031301272140510300155320ustar00rootroot00000000000000package main import ( "fmt" "os" "github.com/lxc/lxd/shared" ) func help(me string, status int) { fmt.Printf("Usage: %s directory [-t] [-r] [ ...]\n", me) fmt.Printf(" -t implies test mode. 
No file ownerships will be changed.\n") fmt.Printf(" -r means reverse, that is shift the uids out of the container.\n") fmt.Printf("\n") fmt.Printf(" A range is [u|b|g]:.\n") fmt.Printf(" where u means shift uids, g means shift gids, b means shift both.\n") fmt.Printf(" For example: %s directory b:0:100000:65536 u:10000:1000:1\n", me) os.Exit(status) } func main() { if err := run(); err != nil { fmt.Printf("Error: %q\n", err) help(os.Args[0], 1) } } func run() error { if len(os.Args) < 3 { if len(os.Args) > 1 && (os.Args[1] == "-h" || os.Args[1] == "--help" || os.Args[1] == "help") { help(os.Args[0], 0) } else { help(os.Args[0], 1) } } directory := os.Args[1] idmap := shared.IdmapSet{} testmode := false reverse := false for pos := 2; pos < len(os.Args); pos++ { switch os.Args[pos] { case "-r", "--reverse": reverse = true case "t", "-t", "--test", "test": testmode = true default: var err error idmap, err = idmap.Append(os.Args[pos]) if err != nil { return err } } } if idmap.Len() == 0 { fmt.Printf("No idmaps given\n") help(os.Args[0], 1) } if !testmode && os.Geteuid() != 0 { fmt.Printf("This must be run as root\n") os.Exit(1) } if reverse { return idmap.UidshiftFromContainer(directory, testmode) } return idmap.UidshiftIntoContainer(directory, testmode) } lxd-2.0.2/lxc/000077500000000000000000000000001272140510300130635ustar00rootroot00000000000000lxd-2.0.2/lxc/action.go000066400000000000000000000044171272140510300146750ustar00rootroot00000000000000package main import ( "fmt" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type actionCmd struct { action shared.ContainerAction hasTimeout bool visible bool name string timeout int force bool stateful bool stateless bool } func (c *actionCmd) showByDefault() bool { return c.visible } func (c *actionCmd) usage() string { return fmt.Sprintf(i18n.G( `Changes state of one or more containers to %s. 
lxc %s [...]`), c.name, c.name) } func (c *actionCmd) flags() { if c.hasTimeout { gnuflag.IntVar(&c.timeout, "timeout", -1, i18n.G("Time to wait for the container before killing it.")) gnuflag.BoolVar(&c.force, "force", false, i18n.G("Force the container to shutdown.")) } gnuflag.BoolVar(&c.stateful, "stateful", false, i18n.G("Store the container state (only for stop).")) gnuflag.BoolVar(&c.stateless, "stateless", false, i18n.G("Ignore the container state (only for start).")) } func (c *actionCmd) run(config *lxd.Config, args []string) error { if len(args) == 0 { return errArgs } state := false // Only store state if asked to if c.action == "stop" && c.stateful { state = true } for _, nameArg := range args { remote, name := config.ParseRemoteAndContainer(nameArg) d, err := lxd.NewClient(config, remote) if err != nil { return err } if name == "" { return fmt.Errorf(i18n.G("Must supply container name for: ")+"\"%s\"", nameArg) } if c.action == shared.Start || c.action == shared.Stop { current, err := d.ContainerInfo(name) if err != nil { return err } // "start" for a frozen container means "unfreeze" if current.StatusCode == shared.Frozen { c.action = shared.Unfreeze } // Always restore state (if present) unless asked not to if c.action == shared.Start && current.Stateful && !c.stateless { state = true } } resp, err := d.Action(name, c.action, c.timeout, c.force, state) if err != nil { return err } if resp.Type != lxd.Async { return fmt.Errorf(i18n.G("bad result type from action")) } if err := d.WaitForSuccess(resp.Operation); err != nil { return fmt.Errorf("%s\n"+i18n.G("Try `lxc info --show-log %s` for more info"), err, name) } } return nil } lxd-2.0.2/lxc/config.go000066400000000000000000000461101272140510300146610ustar00rootroot00000000000000package main import ( "crypto/x509" "encoding/pem" "fmt" "io/ioutil" "os" "sort" "strings" "syscall" "github.com/olekukonko/tablewriter" "gopkg.in/yaml.v2" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" 
"github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" "github.com/lxc/lxd/shared/termios" ) type configCmd struct { httpAddr string expanded bool } func (c *configCmd) showByDefault() bool { return true } func (c *configCmd) flags() { gnuflag.BoolVar(&c.expanded, "expanded", false, i18n.G("Whether to show the expanded configuration")) } func (c *configCmd) configEditHelp() string { return i18n.G( `### This is a yaml representation of the configuration. ### Any line starting with a '# will be ignored. ### ### A sample configuration looks like: ### name: container1 ### profiles: ### - default ### config: ### volatile.eth0.hwaddr: 00:16:3e:e9:f8:7f ### devices: ### homedir: ### path: /extra ### source: /home/user ### type: disk ### ephemeral: false ### ### Note that the name is shown but cannot be changed`) } func (c *configCmd) usage() string { return i18n.G( `Manage configuration. lxc config device add <[remote:]container> [key=value]... Add a device to a container. lxc config device get <[remote:]container> Get a device property. lxc config device set <[remote:]container> Set a device property. lxc config device unset <[remote:]container> Unset a device property. lxc config device list <[remote:]container> List devices for container. lxc config device show <[remote:]container> Show full device details for container. lxc config device remove <[remote:]container> Remove device from container. lxc config get [remote:][container] Get container or server configuration key. lxc config set [remote:][container] Set container or server configuration key. lxc config unset [remote:][container] Unset container or server configuration key. lxc config show [remote:][container] [--expanded] Show container or server configuration. lxc config edit [remote:][container] Edit container or server configuration in external editor. Edit configuration, either by launching external editor or reading STDIN. 
Example: lxc config edit # launch editor cat config.yml | lxc config edit # read from config.yml lxc config trust list [remote] List all trusted certs. lxc config trust add [remote] Add certfile.crt to trusted hosts. lxc config trust remove [remote] [hostname|fingerprint] Remove the cert from trusted hosts. Examples: To mount host's /share/c1 onto /opt in the container: lxc config device add [remote:]container1 disk source=/share/c1 path=opt To set an lxc config value: lxc config set [remote:] raw.lxc 'lxc.aa_allow_incomplete = 1' To listen on IPv4 and IPv6 port 8443 (you can omit the 8443 its the default): lxc config set core.https_address [::]:8443 To set the server trust password: lxc config set core.trust_password blah`) } func (c *configCmd) doSet(config *lxd.Config, args []string, unset bool) error { if len(args) != 4 { return errArgs } // [[lxc config]] set dakara:c1 limits.memory 200000 remote, container := config.ParseRemoteAndContainer(args[1]) d, err := lxd.NewClient(config, remote) if err != nil { return err } key := args[2] value := args[3] if !termios.IsTerminal(int(syscall.Stdin)) && value == "-" { buf, err := ioutil.ReadAll(os.Stdin) if err != nil { return fmt.Errorf(i18n.G("Can't read from stdin: %s"), err) } value = string(buf[:]) } if unset { st, err := d.ContainerInfo(container) if err != nil { return err } _, ok := st.Config[key] if !ok { return fmt.Errorf(i18n.G("Can't unset key '%s', it's not currently set."), key) } } return d.SetContainerConfig(container, key, value) } func (c *configCmd) run(config *lxd.Config, args []string) error { if len(args) < 1 { return errArgs } switch args[0] { case "unset": if len(args) < 2 { return errArgs } // Deal with local server if len(args) == 2 { c, err := lxd.NewClient(config, config.DefaultRemote) if err != nil { return err } ss, err := c.ServerStatus() if err != nil { return err } _, ok := ss.Config[args[1]] if !ok { return fmt.Errorf(i18n.G("Can't unset key '%s', it's not currently set."), args[1]) } 
_, err = c.SetServerConfig(args[1], "") return err } // Deal with remote server remote, container := config.ParseRemoteAndContainer(args[1]) if container == "" { c, err := lxd.NewClient(config, remote) if err != nil { return err } ss, err := c.ServerStatus() if err != nil { return err } _, ok := ss.Config[args[1]] if !ok { return fmt.Errorf(i18n.G("Can't unset key '%s', it's not currently set."), args[1]) } _, err = c.SetServerConfig(args[2], "") return err } // Deal with container args = append(args, "") return c.doSet(config, args, true) case "set": if len(args) < 3 { return errArgs } // Deal with local server if len(args) == 3 { c, err := lxd.NewClient(config, config.DefaultRemote) if err != nil { return err } _, err = c.SetServerConfig(args[1], args[2]) return err } // Deal with remote server remote, container := config.ParseRemoteAndContainer(args[1]) if container == "" { c, err := lxd.NewClient(config, remote) if err != nil { return err } _, err = c.SetServerConfig(args[2], args[3]) return err } // Deal with container return c.doSet(config, args, false) case "trust": if len(args) < 2 { return errArgs } switch args[1] { case "list": var remote string if len(args) == 3 { remote = config.ParseRemote(args[2]) } else { remote = config.DefaultRemote } d, err := lxd.NewClient(config, remote) if err != nil { return err } trust, err := d.CertificateList() if err != nil { return err } data := [][]string{} for _, cert := range trust { fp := cert.Fingerprint[0:12] certBlock, _ := pem.Decode([]byte(cert.Certificate)) cert, err := x509.ParseCertificate(certBlock.Bytes) if err != nil { return err } const layout = "Jan 2, 2006 at 3:04pm (MST)" issue := cert.NotBefore.Format(layout) expiry := cert.NotAfter.Format(layout) data = append(data, []string{fp, cert.Subject.CommonName, issue, expiry}) } table := tablewriter.NewWriter(os.Stdout) table.SetAutoWrapText(false) table.SetAlignment(tablewriter.ALIGN_LEFT) table.SetRowLine(true) table.SetHeader([]string{ 
i18n.G("FINGERPRINT"), i18n.G("COMMON NAME"), i18n.G("ISSUE DATE"), i18n.G("EXPIRY DATE")}) sort.Sort(SortImage(data)) table.AppendBulk(data) table.Render() return nil case "add": var remote string if len(args) < 3 { return fmt.Errorf(i18n.G("No certificate provided to add")) } else if len(args) == 4 { remote = config.ParseRemote(args[2]) } else { remote = config.DefaultRemote } d, err := lxd.NewClient(config, remote) if err != nil { return err } fname := args[len(args)-1] cert, err := shared.ReadCert(fname) if err != nil { return err } name, _ := shared.SplitExt(fname) return d.CertificateAdd(cert, name) case "remove": var remote string if len(args) < 3 { return fmt.Errorf(i18n.G("No fingerprint specified.")) } else if len(args) == 4 { remote = config.ParseRemote(args[2]) } else { remote = config.DefaultRemote } d, err := lxd.NewClient(config, remote) if err != nil { return err } return d.CertificateRemove(args[len(args)-1]) default: return errArgs } case "show": remote := config.DefaultRemote container := "" if len(args) > 1 { remote, container = config.ParseRemoteAndContainer(args[1]) } d, err := lxd.NewClient(config, remote) if err != nil { return err } var data []byte if len(args) == 1 || container == "" { config, err := d.ServerStatus() if err != nil { return err } brief := config.Brief() data, err = yaml.Marshal(&brief) } else { var brief shared.BriefContainerInfo if shared.IsSnapshot(container) { config, err := d.SnapshotInfo(container) if err != nil { return err } brief = shared.BriefContainerInfo{ Profiles: config.Profiles, Config: config.Config, Devices: config.Devices, Ephemeral: config.Ephemeral, } if c.expanded { brief = shared.BriefContainerInfo{ Profiles: config.Profiles, Config: config.ExpandedConfig, Devices: config.ExpandedDevices, Ephemeral: config.Ephemeral, } } } else { config, err := d.ContainerInfo(container) if err != nil { return err } brief = config.Brief() if c.expanded { brief = config.BriefExpanded() } } data, err = 
yaml.Marshal(&brief) if err != nil { return err } } fmt.Printf("%s", data) return nil case "get": if len(args) > 3 || len(args) < 2 { return errArgs } remote := config.DefaultRemote container := "" key := args[1] if len(args) > 2 { remote, container = config.ParseRemoteAndContainer(args[1]) key = args[2] } d, err := lxd.NewClient(config, remote) if err != nil { return err } if container != "" { resp, err := d.ContainerInfo(container) if err != nil { return err } fmt.Println(resp.Config[key]) } else { resp, err := d.ServerStatus() if err != nil { return err } value := resp.Config[key] if value == nil { value = "" } else if value == true { value = "true" } else if value == false { value = "false" } fmt.Println(value) } return nil case "profile": case "device": if len(args) < 2 { return errArgs } switch args[1] { case "list": return c.deviceList(config, "container", args) case "add": return c.deviceAdd(config, "container", args) case "remove": return c.deviceRm(config, "container", args) case "get": return c.deviceGet(config, "container", args) case "set": return c.deviceSet(config, "container", args) case "unset": return c.deviceUnset(config, "container", args) case "show": return c.deviceShow(config, "container", args) default: return errArgs } case "edit": if len(args) < 1 { return errArgs } remote := config.DefaultRemote container := "" if len(args) > 1 { remote, container = config.ParseRemoteAndContainer(args[1]) } d, err := lxd.NewClient(config, remote) if err != nil { return err } if len(args) == 1 || container == "" { return c.doDaemonConfigEdit(d) } return c.doContainerConfigEdit(d, container) default: return errArgs } return errArgs } func (c *configCmd) doContainerConfigEdit(client *lxd.Client, cont string) error { // If stdin isn't a terminal, read text from it if !termios.IsTerminal(int(syscall.Stdin)) { contents, err := ioutil.ReadAll(os.Stdin) if err != nil { return err } newdata := shared.BriefContainerInfo{} err = yaml.Unmarshal(contents, &newdata) if 
err != nil { return err } return client.UpdateContainerConfig(cont, newdata) } // Extract the current value config, err := client.ContainerInfo(cont) if err != nil { return err } brief := config.Brief() data, err := yaml.Marshal(&brief) if err != nil { return err } // Spawn the editor content, err := shared.TextEditor("", []byte(c.configEditHelp()+"\n\n"+string(data))) if err != nil { return err } for { // Parse the text received from the editor newdata := shared.BriefContainerInfo{} err = yaml.Unmarshal(content, &newdata) if err == nil { err = client.UpdateContainerConfig(cont, newdata) } // Respawn the editor if err != nil { fmt.Fprintf(os.Stderr, i18n.G("Config parsing error: %s")+"\n", err) fmt.Println(i18n.G("Press enter to start the editor again")) _, err := os.Stdin.Read(make([]byte, 1)) if err != nil { return err } content, err = shared.TextEditor("", content) if err != nil { return err } continue } break } return nil } func (c *configCmd) doDaemonConfigEdit(client *lxd.Client) error { // If stdin isn't a terminal, read text from it if !termios.IsTerminal(int(syscall.Stdin)) { contents, err := ioutil.ReadAll(os.Stdin) if err != nil { return err } newdata := shared.BriefServerState{} err = yaml.Unmarshal(contents, &newdata) if err != nil { return err } _, err = client.UpdateServerConfig(newdata) return err } // Extract the current value config, err := client.ServerStatus() if err != nil { return err } brief := config.Brief() data, err := yaml.Marshal(&brief) if err != nil { return err } // Spawn the editor content, err := shared.TextEditor("", []byte(c.configEditHelp()+"\n\n"+string(data))) if err != nil { return err } for { // Parse the text received from the editor newdata := shared.BriefServerState{} err = yaml.Unmarshal(content, &newdata) if err == nil { _, err = client.UpdateServerConfig(newdata) } // Respawn the editor if err != nil { fmt.Fprintf(os.Stderr, i18n.G("Config parsing error: %s")+"\n", err) fmt.Println(i18n.G("Press enter to start the 
editor again")) _, err := os.Stdin.Read(make([]byte, 1)) if err != nil { return err } content, err = shared.TextEditor("", content) if err != nil { return err } continue } break } return nil } func (c *configCmd) deviceAdd(config *lxd.Config, which string, args []string) error { if len(args) < 5 { return errArgs } remote, name := config.ParseRemoteAndContainer(args[2]) client, err := lxd.NewClient(config, remote) if err != nil { return err } devname := args[3] devtype := args[4] var props []string if len(args) > 5 { props = args[5:] } else { props = []string{} } var resp *lxd.Response if which == "profile" { resp, err = client.ProfileDeviceAdd(name, devname, devtype, props) } else { resp, err = client.ContainerDeviceAdd(name, devname, devtype, props) } if err != nil { return err } if which != "profile" { err = client.WaitForSuccess(resp.Operation) } if err == nil { fmt.Printf(i18n.G("Device %s added to %s")+"\n", devname, name) } return err } func (c *configCmd) deviceGet(config *lxd.Config, which string, args []string) error { if len(args) < 5 { return errArgs } remote, name := config.ParseRemoteAndContainer(args[2]) client, err := lxd.NewClient(config, remote) if err != nil { return err } devname := args[3] key := args[4] if which == "profile" { st, err := client.ProfileConfig(name) if err != nil { return err } dev, ok := st.Devices[devname] if !ok { return fmt.Errorf(i18n.G("The device doesn't exist")) } fmt.Println(dev[key]) } else { st, err := client.ContainerInfo(name) if err != nil { return err } dev, ok := st.Devices[devname] if !ok { return fmt.Errorf(i18n.G("The device doesn't exist")) } fmt.Println(dev[key]) } return nil } func (c *configCmd) deviceSet(config *lxd.Config, which string, args []string) error { if len(args) < 6 { return errArgs } remote, name := config.ParseRemoteAndContainer(args[2]) client, err := lxd.NewClient(config, remote) if err != nil { return err } devname := args[3] key := args[4] value := args[5] if which == "profile" { st, err 
:= client.ProfileConfig(name) if err != nil { return err } dev, ok := st.Devices[devname] if !ok { return fmt.Errorf(i18n.G("The device doesn't exist")) } dev[key] = value st.Devices[devname] = dev err = client.PutProfile(name, *st) if err != nil { return err } } else { st, err := client.ContainerInfo(name) if err != nil { return err } dev, ok := st.Devices[devname] if !ok { return fmt.Errorf(i18n.G("The device doesn't exist")) } dev[key] = value st.Devices[devname] = dev err = client.UpdateContainerConfig(name, st.Brief()) if err != nil { return err } } return err } func (c *configCmd) deviceUnset(config *lxd.Config, which string, args []string) error { if len(args) < 5 { return errArgs } remote, name := config.ParseRemoteAndContainer(args[2]) client, err := lxd.NewClient(config, remote) if err != nil { return err } devname := args[3] key := args[4] if which == "profile" { st, err := client.ProfileConfig(name) if err != nil { return err } dev, ok := st.Devices[devname] if !ok { return fmt.Errorf(i18n.G("The device doesn't exist")) } delete(dev, key) st.Devices[devname] = dev err = client.PutProfile(name, *st) if err != nil { return err } } else { st, err := client.ContainerInfo(name) if err != nil { return err } dev, ok := st.Devices[devname] if !ok { return fmt.Errorf(i18n.G("The device doesn't exist")) } delete(dev, key) st.Devices[devname] = dev err = client.UpdateContainerConfig(name, st.Brief()) if err != nil { return err } } return err } func (c *configCmd) deviceRm(config *lxd.Config, which string, args []string) error { if len(args) < 4 { return errArgs } remote, name := config.ParseRemoteAndContainer(args[2]) client, err := lxd.NewClient(config, remote) if err != nil { return err } devname := args[3] var resp *lxd.Response if which == "profile" { resp, err = client.ProfileDeviceDelete(name, devname) } else { resp, err = client.ContainerDeviceDelete(name, devname) } if err != nil { return err } if which != "profile" { err = 
client.WaitForSuccess(resp.Operation) } if err == nil { fmt.Printf(i18n.G("Device %s removed from %s")+"\n", devname, name) } return err } func (c *configCmd) deviceList(config *lxd.Config, which string, args []string) error { if len(args) < 3 { return errArgs } remote, name := config.ParseRemoteAndContainer(args[2]) client, err := lxd.NewClient(config, remote) if err != nil { return err } var resp []string if which == "profile" { resp, err = client.ProfileListDevices(name) } else { resp, err = client.ContainerListDevices(name) } if err != nil { return err } fmt.Printf("%s\n", strings.Join(resp, "\n")) return nil } func (c *configCmd) deviceShow(config *lxd.Config, which string, args []string) error { if len(args) < 3 { return errArgs } remote, name := config.ParseRemoteAndContainer(args[2]) client, err := lxd.NewClient(config, remote) if err != nil { return err } var devices map[string]shared.Device if which == "profile" { resp, err := client.ProfileConfig(name) if err != nil { return err } devices = resp.Devices } else { resp, err := client.ContainerInfo(name) if err != nil { return err } devices = resp.Devices } data, err := yaml.Marshal(&devices) if err != nil { return err } fmt.Printf(string(data)) return nil } lxd-2.0.2/lxc/copy.go000066400000000000000000000110131272140510300143600ustar00rootroot00000000000000package main import ( "fmt" "strings" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type copyCmd struct { ephem bool } func (c *copyCmd) showByDefault() bool { return true } func (c *copyCmd) usage() string { return i18n.G( `Copy containers within or in between lxd instances. 
lxc copy [remote:]<source container> [remote:]<destination container> [--ephemeral|e]`)
}

func (c *copyCmd) flags() {
	gnuflag.BoolVar(&c.ephem, "ephemeral", false, i18n.G("Ephemeral container"))
	gnuflag.BoolVar(&c.ephem, "e", false, i18n.G("Ephemeral container"))
}

func (c *copyCmd) copyContainer(config *lxd.Config, sourceResource string, destResource string, keepVolatile bool, ephemeral int) error {
	sourceRemote, sourceName := config.ParseRemoteAndContainer(sourceResource)
	destRemote, destName := config.ParseRemoteAndContainer(destResource)

	if sourceName == "" {
		return fmt.Errorf(i18n.G("you must specify a source container name"))
	}

	if destName == "" {
		destName = sourceName
	}

	source, err := lxd.NewClient(config, sourceRemote)
	if err != nil {
		return err
	}

	var status struct {
		Architecture string
		Devices      shared.Devices
		Config       map[string]string
		Profiles     []string
	}

	// TODO: presumably we want to do this for copying snapshots too? We
	// need to think a bit more about how we track the baseImage in the
	// face of LVM and snapshots in general; this will probably make more
	// sense once that work is done.
baseImage := "" if !shared.IsSnapshot(sourceName) { result, err := source.ContainerInfo(sourceName) if err != nil { return err } status.Architecture = result.Architecture status.Devices = result.Devices status.Config = result.Config status.Profiles = result.Profiles } else { result, err := source.SnapshotInfo(sourceName) if err != nil { return err } status.Architecture = result.Architecture status.Devices = result.Devices status.Config = result.Config status.Profiles = result.Profiles } baseImage = status.Config["volatile.base_image"] if !keepVolatile { for k := range status.Config { if strings.HasPrefix(k, "volatile") { delete(status.Config, k) } } } // Do a local copy if the remotes are the same, otherwise do a migration if sourceRemote == destRemote { if sourceName == destName { return fmt.Errorf(i18n.G("can't copy to the same container name")) } cp, err := source.LocalCopy(sourceName, destName, status.Config, status.Profiles, ephemeral == 1) if err != nil { return err } return source.WaitForSuccess(cp.Operation) } dest, err := lxd.NewClient(config, destRemote) if err != nil { return err } sourceProfs := shared.NewStringSet(status.Profiles) destProfs, err := dest.ListProfiles() if err != nil { return err } if !sourceProfs.IsSubset(shared.NewStringSet(destProfs)) { return fmt.Errorf(i18n.G("not all the profiles from the source exist on the target")) } if ephemeral == -1 { ct, err := source.ContainerInfo(sourceName) if err != nil { return err } if ct.Ephemeral { ephemeral = 1 } else { ephemeral = 0 } } sourceWSResponse, err := source.GetMigrationSourceWS(sourceName) if err != nil { return err } secrets := map[string]string{} op, err := sourceWSResponse.MetadataAsOperation() if err != nil { return err } for k, v := range *op.Metadata { secrets[k] = v.(string) } addresses, err := source.Addresses() if err != nil { return err } /* Since we're trying a bunch of different network ports that * may be invalid, we can get "bad handshake" errors when the * websocket code 
tries to connect. If the first error is a * real error, but the subsequent errors are only network * errors, we should try to report the first real error. Of * course, if all the errors are websocket errors, let's just * report that. */ for _, addr := range addresses { var migration *lxd.Response sourceWSUrl := "https://" + addr + sourceWSResponse.Operation migration, err = dest.MigrateFrom(destName, sourceWSUrl, source.Certificate, secrets, status.Architecture, status.Config, status.Devices, status.Profiles, baseImage, ephemeral == 1) if err != nil { continue } if err = dest.WaitForSuccess(migration.Operation); err != nil { return err } return nil } return err } func (c *copyCmd) run(config *lxd.Config, args []string) error { if len(args) != 2 { return errArgs } ephem := 0 if c.ephem { ephem = 1 } return c.copyContainer(config, args[0], args[1], false, ephem) } lxd-2.0.2/lxc/delete.go000066400000000000000000000051241272140510300146560ustar00rootroot00000000000000package main import ( "bufio" "fmt" "os" "strings" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type deleteCmd struct { force bool interactive bool } func (c *deleteCmd) showByDefault() bool { return true } func (c *deleteCmd) usage() string { return i18n.G( `Delete containers or container snapshots. lxc delete [remote:][/] [remote:][[/]...] 
Destroy containers or snapshots with any attached data (configuration, snapshots, ...).`)
}

func (c *deleteCmd) flags() {
	gnuflag.BoolVar(&c.force, "f", false, i18n.G("Force the removal of stopped containers."))
	gnuflag.BoolVar(&c.force, "force", false, i18n.G("Force the removal of stopped containers."))
	gnuflag.BoolVar(&c.interactive, "i", false, i18n.G("Require user confirmation."))
	gnuflag.BoolVar(&c.interactive, "interactive", false, i18n.G("Require user confirmation."))
}

func (c *deleteCmd) promptDelete(name string) error {
	reader := bufio.NewReader(os.Stdin)
	fmt.Printf(i18n.G("Remove %s (yes/no): "), name)
	input, _ := reader.ReadString('\n')
	input = strings.TrimSuffix(input, "\n")

	if !shared.StringInSlice(strings.ToLower(input), []string{i18n.G("yes")}) {
		return fmt.Errorf(i18n.G("User aborted delete operation."))
	}

	return nil
}

func (c *deleteCmd) doDelete(d *lxd.Client, name string) error {
	resp, err := d.Delete(name)
	if err != nil {
		return err
	}

	return d.WaitForSuccess(resp.Operation)
}

func (c *deleteCmd) run(config *lxd.Config, args []string) error {
	if len(args) == 0 {
		return errArgs
	}

	for _, nameArg := range args {
		remote, name := config.ParseRemoteAndContainer(nameArg)
		d, err := lxd.NewClient(config, remote)
		if err != nil {
			return err
		}

		if c.interactive {
			err := c.promptDelete(name)
			if err != nil {
				return err
			}
		}

		if shared.IsSnapshot(name) {
			return c.doDelete(d, name)
		}

		ct, err := d.ContainerInfo(name)
		if err != nil {
			return err
		}

		if ct.StatusCode != 0 && ct.StatusCode != shared.Stopped {
			if !c.force {
				return fmt.Errorf(i18n.G("The container is currently running, stop it first or pass --force."))
			}

			resp, err := d.Action(name, shared.Stop, -1, true, false)
			if err != nil {
				return err
			}

			op, err := d.WaitFor(resp.Operation)
			if err != nil {
				return err
			}

			if op.StatusCode == shared.Failure {
				return fmt.Errorf(i18n.G("Stopping container failed!"))
			}

			if ct.Ephemeral == true {
				return nil
			}
		}

		if err := c.doDelete(d, name); err != nil {
			return err
		}
	}
	return nil
}

lxd-2.0.2/lxc/exec.go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"

	"github.com/gorilla/websocket"

	"github.com/lxc/lxd"
	"github.com/lxc/lxd/shared"
	"github.com/lxc/lxd/shared/gnuflag"
	"github.com/lxc/lxd/shared/i18n"
	"github.com/lxc/lxd/shared/termios"
)

type envFlag []string

func (f *envFlag) String() string {
	return fmt.Sprint(*f)
}

func (f *envFlag) Set(value string) error {
	if f == nil {
		*f = make(envFlag, 1)
	} else {
		*f = append(*f, value)
	}
	return nil
}

type execCmd struct {
	modeFlag string
	envArgs  envFlag
}

func (c *execCmd) showByDefault() bool {
	return true
}

func (c *execCmd) usage() string {
	return i18n.G(
		`Execute the specified command in a container.

lxc exec [remote:]container [--mode=auto|interactive|non-interactive] [--env EDITOR=/usr/bin/vim]... <command>

Mode defaults to non-interactive, interactive mode is selected if both stdin AND stdout are terminals (stderr is ignored).`)
}

func (c *execCmd) flags() {
	gnuflag.Var(&c.envArgs, "env", i18n.G("An environment variable of the form HOME=/home/foo"))
	gnuflag.StringVar(&c.modeFlag, "mode", "auto", i18n.G("Override the terminal mode (auto, interactive or non-interactive)"))
}

func (c *execCmd) sendTermSize(control *websocket.Conn) error {
	width, height, err := termios.GetSize(int(syscall.Stdout))
	if err != nil {
		return err
	}

	shared.Debugf("Window size is now: %dx%d", width, height)

	w, err := control.NextWriter(websocket.TextMessage)
	if err != nil {
		return err
	}

	msg := shared.ContainerExecControl{}
	msg.Command = "window-resize"
	msg.Args = make(map[string]string)
	msg.Args["width"] = strconv.Itoa(width)
	msg.Args["height"] = strconv.Itoa(height)

	buf, err := json.Marshal(msg)
	if err != nil {
		return err
	}
	_, err = w.Write(buf)

	w.Close()
	return err
}

func (c *execCmd) run(config *lxd.Config, args []string) error {
	if len(args) < 2 {
		return errArgs
	}

	remote, name := config.ParseRemoteAndContainer(args[0])
	d,
err := lxd.NewClient(config, remote) if err != nil { return err } env := map[string]string{"HOME": "/root", "USER": "root"} myEnv := os.Environ() for _, ent := range myEnv { if strings.HasPrefix(ent, "TERM=") { env["TERM"] = ent[len("TERM="):] } } for _, arg := range c.envArgs { pieces := strings.SplitN(arg, "=", 2) value := "" if len(pieces) > 1 { value = pieces[1] } env[pieces[0]] = value } cfd := int(syscall.Stdin) var interactive bool if c.modeFlag == "interactive" { interactive = true } else if c.modeFlag == "non-interactive" { interactive = false } else { interactive = termios.IsTerminal(cfd) && termios.IsTerminal(int(syscall.Stdout)) } var oldttystate *termios.State if interactive { oldttystate, err = termios.MakeRaw(cfd) if err != nil { return err } defer termios.Restore(cfd, oldttystate) } handler := c.controlSocketHandler if !interactive { handler = nil } var width, height int if interactive { width, height, err = termios.GetSize(int(syscall.Stdout)) if err != nil { return err } } stdout := c.getStdout() ret, err := d.Exec(name, args[1:], env, os.Stdin, stdout, os.Stderr, handler, width, height) if err != nil { return err } if oldttystate != nil { /* A bit of a special case here: we want to exit with the same code as * the process inside the container, so we explicitly exit here * instead of returning an error. * * Additionally, since os.Exit() exits without running deferred * functions, we restore the terminal explicitly. 
*/ termios.Restore(cfd, oldttystate) } os.Exit(ret) return fmt.Errorf(i18n.G("unreachable return reached")) } lxd-2.0.2/lxc/exec_unix.go000066400000000000000000000013331272140510300154010ustar00rootroot00000000000000// +build !windows package main import ( "io" "os" "os/signal" "syscall" "github.com/gorilla/websocket" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" ) func (c *execCmd) getStdout() io.WriteCloser { return os.Stdout } func (c *execCmd) controlSocketHandler(d *lxd.Client, control *websocket.Conn) { ch := make(chan os.Signal) signal.Notify(ch, syscall.SIGWINCH) for { sig := <-ch shared.Debugf("Received '%s signal', updating window geometry.", sig) err := c.sendTermSize(control) if err != nil { shared.Debugf("error setting term size %s", err) break } } closeMsg := websocket.FormatCloseMessage(websocket.CloseNormalClosure, "") control.WriteMessage(websocket.CloseMessage, closeMsg) } lxd-2.0.2/lxc/exec_windows.go000066400000000000000000000016461272140510300161170ustar00rootroot00000000000000// +build windows package main import ( "io" "os" "github.com/gorilla/websocket" "github.com/mattn/go-colorable" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" ) // Windows doesn't process ANSI sequences natively, so we wrap // os.Stdout for improved user experience for Windows client type WrappedWriteCloser struct { io.Closer wrapper io.Writer } func (wwc *WrappedWriteCloser) Write(p []byte) (int, error) { return wwc.wrapper.Write(p) } func (c *execCmd) getStdout() io.WriteCloser { return &WrappedWriteCloser{os.Stdout, colorable.NewColorableStdout()} } func (c *execCmd) controlSocketHandler(d *lxd.Client, control *websocket.Conn) { // TODO: figure out what the equivalent of signal.SIGWINCH is on // windows and use that; for now if you resize your terminal it just // won't work quite correctly. 
	err := c.sendTermSize(control)
	if err != nil {
		shared.Debugf("error setting term size %s", err)
	}
}

lxd-2.0.2/lxc/file.go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"path"
	"path/filepath"
	"strconv"
	"strings"
	"syscall"

	"github.com/lxc/lxd"
	"github.com/lxc/lxd/shared"
	"github.com/lxc/lxd/shared/gnuflag"
	"github.com/lxc/lxd/shared/i18n"
	"github.com/lxc/lxd/shared/termios"
)

type fileCmd struct {
	uid  int
	gid  int
	mode string
}

func (c *fileCmd) showByDefault() bool {
	return true
}

func (c *fileCmd) usage() string {
	return i18n.G(
		`Manage files on a container.

lxc file pull <source> [<source>...] <target>
lxc file push [--uid=UID] [--gid=GID] [--mode=MODE] <source> [<source>...] <target>
lxc file edit <file>

<source> in the case of pull, <target> in the case of push and <file> in the case of edit are <container name>/<path>`)
}

func (c *fileCmd) flags() {
	gnuflag.IntVar(&c.uid, "uid", -1, i18n.G("Set the file's uid on push"))
	gnuflag.IntVar(&c.gid, "gid", -1, i18n.G("Set the file's gid on push"))
	gnuflag.StringVar(&c.mode, "mode", "", i18n.G("Set the file's perms on push"))
}

func (c *fileCmd) push(config *lxd.Config, args []string) error {
	if len(args) < 2 {
		return errArgs
	}

	target := args[len(args)-1]
	pathSpec := strings.SplitN(target, "/", 2)

	if len(pathSpec) != 2 {
		return fmt.Errorf(i18n.G("Invalid target %s"), target)
	}

	targetPath := pathSpec[1]
	remote, container := config.ParseRemoteAndContainer(pathSpec[0])

	d, err := lxd.NewClient(config, remote)
	if err != nil {
		return err
	}

	mode := os.FileMode(0755)
	if c.mode != "" {
		if len(c.mode) == 3 {
			c.mode = "0" + c.mode
		}

		m, err := strconv.ParseInt(c.mode, 0, 0)
		if err != nil {
			return err
		}
		mode = os.FileMode(m)
	}

	uid := 0
	if c.uid >= 0 {
		uid = c.uid
	}

	gid := 0
	if c.gid >= 0 {
		gid = c.gid
	}

	_, targetfilename := filepath.Split(targetPath)

	var sourcefilenames []string
	for _, fname := range args[:len(args)-1] {
		if !strings.HasPrefix(fname, "--") {
			sourcefilenames = append(sourcefilenames, fname)
		}
	}

	if (targetfilename != "") &&
(len(sourcefilenames) > 1) { return errArgs } /* Make sure all of the files are accessible by us before trying to * push any of them. */ var files []*os.File for _, f := range sourcefilenames { var file *os.File if f == "-" { file = os.Stdin } else { file, err = os.Open(f) if err != nil { return err } } defer file.Close() files = append(files, file) } for _, f := range files { fpath := targetPath if targetfilename == "" { fpath = path.Join(fpath, path.Base(f.Name())) } if c.mode == "" || c.uid == -1 || c.gid == -1 { fMode, fUid, fGid, err := c.getOwner(f) if err != nil { return err } if c.mode == "" { mode = fMode } if c.uid == -1 { uid = fUid } if c.gid == -1 { gid = fGid } } err = d.PushFile(container, fpath, gid, uid, mode, f) if err != nil { return err } } return nil } func (c *fileCmd) pull(config *lxd.Config, args []string) error { if len(args) < 2 { return errArgs } target := args[len(args)-1] targetIsDir := false sb, err := os.Stat(target) if err != nil && !os.IsNotExist(err) { return err } /* * If the path exists, just use it. If it doesn't exist, it might be a * directory in one of two cases: * 1. Someone explicitly put "/" at the end * 2. Someone provided more than one source. In this case the target * should be a directory so we can save all the files into it. 
*/ if err == nil { targetIsDir = sb.IsDir() if !targetIsDir && len(args)-1 > 1 { return fmt.Errorf(i18n.G("More than one file to download, but target is not a directory")) } } else if strings.HasSuffix(target, string(os.PathSeparator)) || len(args)-1 > 1 { if err := os.MkdirAll(target, 0755); err != nil { return err } targetIsDir = true } for _, f := range args[:len(args)-1] { pathSpec := strings.SplitN(f, "/", 2) if len(pathSpec) != 2 { return fmt.Errorf(i18n.G("Invalid source %s"), f) } remote, container := config.ParseRemoteAndContainer(pathSpec[0]) d, err := lxd.NewClient(config, remote) if err != nil { return err } _, _, _, buf, err := d.PullFile(container, pathSpec[1]) if err != nil { return err } var targetPath string if targetIsDir { targetPath = path.Join(target, path.Base(pathSpec[1])) } else { targetPath = target } var f *os.File if targetPath == "-" { f = os.Stdout } else { f, err = os.Create(targetPath) if err != nil { return err } defer f.Close() } _, err = io.Copy(f, buf) if err != nil { return err } } return nil } func (c *fileCmd) edit(config *lxd.Config, args []string) error { if len(args) != 1 { return errArgs } // If stdin isn't a terminal, read text from it if !termios.IsTerminal(int(syscall.Stdin)) { return c.push(config, append([]string{os.Stdin.Name()}, args[0])) } // Create temp file f, err := ioutil.TempFile("", "lxd_file_edit_") fname := f.Name() f.Close() os.Remove(fname) defer os.Remove(fname) // Extract current value err = c.pull(config, append([]string{args[0]}, fname)) if err != nil { return err } _, err = shared.TextEditor(fname, []byte{}) if err != nil { return err } err = c.push(config, append([]string{fname}, args[0])) if err != nil { return err } return nil } func (c *fileCmd) run(config *lxd.Config, args []string) error { if len(args) < 1 { return errArgs } switch args[0] { case "push": return c.push(config, args[1:]) case "pull": return c.pull(config, args[1:]) case "edit": return c.edit(config, args[1:]) default: return 
errArgs
	}
}

lxd-2.0.2/lxc/file_unix.go
// +build !windows

package main

import (
	"os"
	"syscall"
)

func (c *fileCmd) getOwner(f *os.File) (os.FileMode, int, int, error) {
	fInfo, err := f.Stat()
	if err != nil {
		return os.FileMode(0), -1, -1, err
	}

	mode := fInfo.Mode()
	uid := int(fInfo.Sys().(*syscall.Stat_t).Uid)
	gid := int(fInfo.Sys().(*syscall.Stat_t).Gid)
	return mode, uid, gid, nil
}

lxd-2.0.2/lxc/file_windows.go
// +build windows

package main

import (
	"os"
)

func (c *fileCmd) getOwner(f *os.File) (os.FileMode, int, int, error) {
	return os.FileMode(0), -1, -1, nil
}

lxd-2.0.2/lxc/finger.go
package main

import (
	"github.com/lxc/lxd"
	"github.com/lxc/lxd/shared/i18n"
)

type fingerCmd struct {
	httpAddr string
}

func (c *fingerCmd) showByDefault() bool {
	return false
}

func (c *fingerCmd) usage() string {
	return i18n.G(
		`Fingers the LXD instance to check if it is up and working.

lxc finger [<remote>]`)
}

func (c *fingerCmd) flags() {}

func (c *fingerCmd) run(config *lxd.Config, args []string) error {
	if len(args) > 1 {
		return errArgs
	}

	var remote string
	if len(args) == 1 {
		remote = config.ParseRemote(args[0])
	} else {
		remote = config.DefaultRemote
	}

	// New client may or may not need to connect to the remote host, but
	// client.ServerStatus will at least request the basic information from
	// the server.
client, err := lxd.NewClient(config, remote) if err != nil { return err } _, err = client.ServerStatus() return err } lxd-2.0.2/lxc/help.go000066400000000000000000000041711272140510300143450ustar00rootroot00000000000000package main import ( "bufio" "bytes" "fmt" "os" "sort" "strings" "github.com/lxc/lxd" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type helpCmd struct { showAll bool } func (c *helpCmd) showByDefault() bool { return true } func (c *helpCmd) usage() string { return i18n.G( `Presents details on how to use LXD. lxd help [--all]`) } func (c *helpCmd) flags() { gnuflag.BoolVar(&c.showAll, "all", false, i18n.G("Show all commands (not just interesting ones)")) } func (c *helpCmd) run(_ *lxd.Config, args []string) error { if len(args) > 0 { for _, name := range args { cmd, ok := commands[name] if !ok { fmt.Fprintf(os.Stderr, i18n.G("error: unknown command: %s")+"\n", name) } else { fmt.Fprintf(os.Stderr, cmd.usage()+"\n") } } return nil } fmt.Println(i18n.G("Usage: lxc [subcommand] [options]")) fmt.Println(i18n.G("Available commands:")) var names []string for name := range commands { names = append(names, name) } sort.Strings(names) for _, name := range names { cmd := commands[name] if c.showAll || cmd.showByDefault() { fmt.Printf("\t%-10s - %s\n", name, c.summaryLine(cmd.usage())) } } if !c.showAll { fmt.Println() fmt.Println(i18n.G("Options:")) fmt.Println(" --all " + i18n.G("Print less common commands.")) fmt.Println(" --debug " + i18n.G("Print debug information.")) fmt.Println(" --verbose " + i18n.G("Print verbose information.")) fmt.Println() fmt.Println(i18n.G("Environment:")) fmt.Println(" LXD_CONF " + i18n.G("Path to an alternate client configuration directory.")) fmt.Println(" LXD_DIR " + i18n.G("Path to an alternate server directory.")) } return nil } // summaryLine returns the first line of the help text. Conventionally, this // should be a one-line command summary, potentially followed by a longer // explanation. 
func (c *helpCmd) summaryLine(usage string) string { usage = strings.TrimSpace(usage) s := bufio.NewScanner(bytes.NewBufferString(usage)) if s.Scan() { if len(s.Text()) > 1 { return s.Text() } } return i18n.G("Missing summary.") } lxd-2.0.2/lxc/image.go000066400000000000000000000421471272140510300145040ustar00rootroot00000000000000package main import ( "fmt" "io/ioutil" "os" "regexp" "sort" "strings" "syscall" "github.com/olekukonko/tablewriter" "gopkg.in/yaml.v2" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" "github.com/lxc/lxd/shared/termios" ) type SortImage [][]string func (a SortImage) Len() int { return len(a) } func (a SortImage) Swap(i, j int) { a[i], a[j] = a[j], a[i] } func (a SortImage) Less(i, j int) bool { if a[i][0] == a[j][0] { if a[i][3] == "" { return false } if a[j][3] == "" { return true } return a[i][3] < a[j][3] } if a[i][0] == "" { return false } if a[j][0] == "" { return true } return a[i][0] < a[j][0] } type aliasList []string func (f *aliasList) String() string { return fmt.Sprint(*f) } func (f *aliasList) Set(value string) error { if f == nil { *f = make(aliasList, 1) } else { *f = append(*f, value) } return nil } type imageCmd struct { addAliases aliasList publicImage bool copyAliases bool autoUpdate bool } func (c *imageCmd) showByDefault() bool { return true } func (c *imageCmd) imageEditHelp() string { return i18n.G( `### This is a yaml representation of the image properties. ### Any line starting with a '# will be ignored. ### ### Each property is represented by a single line: ### An example would be: ### description: My custom image`) } func (c *imageCmd) usage() string { return i18n.G( `Manipulate container images. In LXD containers are created from images. Those images were themselves either generated from an existing container or downloaded from an image server. 
When using remote images, LXD will automatically cache images for you
and remove them upon expiration.

The image unique identifier is the hash (sha-256) of its representation
as a compressed tarball (or for split images, the concatenation of the
metadata and rootfs tarballs).

Images can be referenced by their full hash, shortest unique partial
hash or alias name (if one is set).

lxc image import <tarball> [rootfs tarball|URL] [remote:] [--public] [--created-at=ISO-8601] [--expires-at=ISO-8601] [--fingerprint=FINGERPRINT] [--alias=ALIAS].. [prop=value]
    Import an image tarball (or tarballs) into the LXD image store.

lxc image copy [remote:]<image> <remote>: [--alias=ALIAS].. [--copy-aliases] [--public] [--auto-update]
    Copy an image from one LXD daemon to another over the network.

    The auto-update flag instructs the server to keep this image up to
    date. It requires the source to be an alias and for it to be public.

lxc image delete [remote:]<image>
    Delete an image from the LXD image store.

lxc image export [remote:]<image>
    Export an image from the LXD image store into a distributable tarball.

lxc image info [remote:]<image>
    Print everything LXD knows about a given image.

lxc image list [remote:] [filter]
    List images in the LXD image store. Filters may be of the
    <key>=<value> form for property based filtering, or part of the image
    hash or part of the image alias name.

lxc image show [remote:]<image>
    Yaml output of the user modifiable properties of an image.

lxc image edit [remote:]<image>
    Edit image, either by launching external editor or reading STDIN.
    Example: lxc image edit <image> # launch editor
             cat image.yml | lxc image edit <image> # read from image.yml

lxc image alias create [remote:]<alias> <fingerprint>
    Create a new alias for an existing image.

lxc image alias delete [remote:]<alias>
    Delete an alias.

lxc image alias list [remote:] [filter]
    List the aliases. Filters may be part of the image hash or part of
    the image alias name.
`)
}

func (c *imageCmd) flags() {
	gnuflag.BoolVar(&c.publicImage, "public", false, i18n.G("Make image public"))
	gnuflag.BoolVar(&c.copyAliases, "copy-aliases", false, i18n.G("Copy aliases from source"))
	gnuflag.BoolVar(&c.autoUpdate, "auto-update", false, i18n.G("Keep the image up to date after initial copy"))
	gnuflag.Var(&c.addAliases, "alias", i18n.G("New alias to define at target"))
}

func (c *imageCmd) doImageAlias(config *lxd.Config, args []string) error {
	var remote string
	switch args[1] {
	case "list":
		filters := []string{}

		if len(args) > 2 {
			result := strings.SplitN(args[2], ":", 2)
			if len(result) == 1 {
				filters = append(filters, args[2])
				remote, _ = config.ParseRemoteAndContainer("")
			} else {
				remote, _ = config.ParseRemoteAndContainer(args[2])
			}
		} else {
			remote, _ = config.ParseRemoteAndContainer("")
		}

		if len(args) > 3 {
			for _, filter := range args[3:] {
				filters = append(filters, filter)
			}
		}

		d, err := lxd.NewClient(config, remote)
		if err != nil {
			return err
		}

		resp, err := d.ListAliases()
		if err != nil {
			return err
		}

		c.showAliases(resp, filters)

		return nil
	case "create":
		/* alias create [<remote>:]<alias> <target> */
		if len(args) < 4 {
			return errArgs
		}

		remote, alias := config.ParseRemoteAndContainer(args[2])
		target := args[3]

		d, err := lxd.NewClient(config, remote)
		if err != nil {
			return err
		}

		/* TODO - what about description?
*/ err = d.PostAlias(alias, alias, target) return err case "delete": /* alias delete [:] */ if len(args) < 3 { return errArgs } remote, alias := config.ParseRemoteAndContainer(args[2]) d, err := lxd.NewClient(config, remote) if err != nil { return err } err = d.DeleteAlias(alias) return err } return errArgs } func (c *imageCmd) run(config *lxd.Config, args []string) error { var remote string if len(args) < 1 { return errArgs } switch args[0] { case "alias": if len(args) < 2 { return errArgs } return c.doImageAlias(config, args) case "copy": /* copy [:] [:] */ if len(args) != 3 { return errArgs } remote, inName := config.ParseRemoteAndContainer(args[1]) if inName == "" { inName = "default" } destRemote, outName := config.ParseRemoteAndContainer(args[2]) if outName != "" { return errArgs } d, err := lxd.NewClient(config, remote) if err != nil { return err } dest, err := lxd.NewClient(config, destRemote) if err != nil { return err } progressHandler := func(progress string) { fmt.Printf(i18n.G("Copying the image: %s")+"\r", progress) } err = d.CopyImage(inName, dest, c.copyAliases, c.addAliases, c.publicImage, c.autoUpdate, progressHandler) if err == nil { fmt.Println(i18n.G("Image copied successfully!")) } return err case "delete": /* delete [:] */ if len(args) < 2 { return errArgs } remote, inName := config.ParseRemoteAndContainer(args[1]) if inName == "" { inName = "default" } d, err := lxd.NewClient(config, remote) if err != nil { return err } image := c.dereferenceAlias(d, inName) err = d.DeleteImage(image) return err case "info": if len(args) < 2 { return errArgs } remote, inName := config.ParseRemoteAndContainer(args[1]) if inName == "" { inName = "default" } d, err := lxd.NewClient(config, remote) if err != nil { return err } image := c.dereferenceAlias(d, inName) info, err := d.GetImageInfo(image) if err != nil { return err } public := i18n.G("no") if info.Public { public = i18n.G("yes") } autoUpdate := i18n.G("disabled") if info.AutoUpdate { autoUpdate = 
i18n.G("enabled") } fmt.Printf(i18n.G("Fingerprint: %s")+"\n", info.Fingerprint) fmt.Printf(i18n.G("Size: %.2fMB")+"\n", float64(info.Size)/1024.0/1024.0) fmt.Printf(i18n.G("Architecture: %s")+"\n", info.Architecture) fmt.Printf(i18n.G("Public: %s")+"\n", public) fmt.Printf(i18n.G("Timestamps:") + "\n") const layout = "2006/01/02 15:04 UTC" if info.CreationDate.UTC().Unix() != 0 { fmt.Printf(" "+i18n.G("Created: %s")+"\n", info.CreationDate.UTC().Format(layout)) } fmt.Printf(" "+i18n.G("Uploaded: %s")+"\n", info.UploadDate.UTC().Format(layout)) if info.ExpiryDate.UTC().Unix() != 0 { fmt.Printf(" "+i18n.G("Expires: %s")+"\n", info.ExpiryDate.UTC().Format(layout)) } else { fmt.Printf(" " + i18n.G("Expires: never") + "\n") } fmt.Println(i18n.G("Properties:")) for key, value := range info.Properties { fmt.Printf(" %s: %s\n", key, value) } fmt.Println(i18n.G("Aliases:")) for _, alias := range info.Aliases { fmt.Printf(" - %s\n", alias.Name) } fmt.Printf(i18n.G("Auto update: %s")+"\n", autoUpdate) if info.Source != nil { fmt.Println(i18n.G("Source:")) fmt.Printf(" Server: %s\n", info.Source.Server) fmt.Printf(" Protocol: %s\n", info.Source.Protocol) fmt.Printf(" Alias: %s\n", info.Source.Alias) } return nil case "import": if len(args) < 2 { return errArgs } var fingerprint string var imageFile string var rootfsFile string var properties []string var remote string for _, arg := range args[1:] { split := strings.Split(arg, "=") if len(split) == 1 || shared.PathExists(arg) { if strings.HasSuffix(arg, ":") { remote = config.ParseRemote(arg) } else { if imageFile == "" { imageFile = args[1] } else { rootfsFile = arg } } } else { properties = append(properties, arg) } } if remote == "" { remote = config.DefaultRemote } if imageFile == "" { return errArgs } d, err := lxd.NewClient(config, remote) if err != nil { return err } handler := func(percent int) { fmt.Printf(i18n.G("Transferring image: %d%%")+"\r", percent) if percent == 100 { fmt.Printf("\n") } } if 
strings.HasPrefix(imageFile, "https://") { fingerprint, err = d.PostImageURL(imageFile, c.publicImage, c.addAliases) } else if strings.HasPrefix(imageFile, "http://") { return fmt.Errorf(i18n.G("Only https:// is supported for remote image import.")) } else { fingerprint, err = d.PostImage(imageFile, rootfsFile, properties, c.publicImage, c.addAliases, handler) } if err != nil { return err } fmt.Printf(i18n.G("Image imported with fingerprint: %s")+"\n", fingerprint) return nil case "list": filters := []string{} if len(args) > 1 { result := strings.SplitN(args[1], ":", 2) if len(result) == 1 { filters = append(filters, args[1]) remote, _ = config.ParseRemoteAndContainer("") } else { remote, _ = config.ParseRemoteAndContainer(args[1]) } } else { remote, _ = config.ParseRemoteAndContainer("") } if len(args) > 2 { for _, filter := range args[2:] { filters = append(filters, filter) } } d, err := lxd.NewClient(config, remote) if err != nil { return err } images, err := d.ListImages() if err != nil { return err } return c.showImages(images, filters) case "edit": if len(args) < 2 { return errArgs } remote, inName := config.ParseRemoteAndContainer(args[1]) if inName == "" { inName = "default" } d, err := lxd.NewClient(config, remote) if err != nil { return err } image := c.dereferenceAlias(d, inName) if image == "" { image = inName } return c.doImageEdit(d, image) case "export": if len(args) < 2 { return errArgs } remote, inName := config.ParseRemoteAndContainer(args[1]) if inName == "" { inName = "default" } d, err := lxd.NewClient(config, remote) if err != nil { return err } image := c.dereferenceAlias(d, inName) target := "." 
if len(args) > 2 { target = args[2] } outfile, err := d.ExportImage(image, target) if err != nil { return err } if target != "-" { fmt.Printf(i18n.G("Output is in %s")+"\n", outfile) } return nil case "show": if len(args) < 2 { return errArgs } remote, inName := config.ParseRemoteAndContainer(args[1]) if inName == "" { inName = "default" } d, err := lxd.NewClient(config, remote) if err != nil { return err } image := c.dereferenceAlias(d, inName) info, err := d.GetImageInfo(image) if err != nil { return err } properties := info.Brief() data, err := yaml.Marshal(&properties) fmt.Printf("%s", data) return err default: return errArgs } } func (c *imageCmd) dereferenceAlias(d *lxd.Client, inName string) string { result := d.GetAlias(inName) if result == "" { return inName } return result } func (c *imageCmd) shortestAlias(list []shared.ImageAlias) string { shortest := "" for _, l := range list { if shortest == "" { shortest = l.Name continue } if len(l.Name) != 0 && len(l.Name) < len(shortest) { shortest = l.Name } } return shortest } func (c *imageCmd) findDescription(props map[string]string) string { for k, v := range props { if k == "description" { return v } } return "" } func (c *imageCmd) showImages(images []shared.ImageInfo, filters []string) error { data := [][]string{} for _, image := range images { if !c.imageShouldShow(filters, &image) { continue } shortest := c.shortestAlias(image.Aliases) if len(image.Aliases) > 1 { shortest = fmt.Sprintf(i18n.G("%s (%d more)"), shortest, len(image.Aliases)-1) } fp := image.Fingerprint[0:12] public := i18n.G("no") description := c.findDescription(image.Properties) if image.Public { public = i18n.G("yes") } const layout = "Jan 2, 2006 at 3:04pm (MST)" uploaded := image.UploadDate.UTC().Format(layout) size := fmt.Sprintf("%.2fMB", float64(image.Size)/1024.0/1024.0) data = append(data, []string{shortest, fp, public, description, image.Architecture, size, uploaded}) } table := tablewriter.NewWriter(os.Stdout) 
table.SetAutoWrapText(false) table.SetAlignment(tablewriter.ALIGN_LEFT) table.SetRowLine(true) table.SetHeader([]string{ i18n.G("ALIAS"), i18n.G("FINGERPRINT"), i18n.G("PUBLIC"), i18n.G("DESCRIPTION"), i18n.G("ARCH"), i18n.G("SIZE"), i18n.G("UPLOAD DATE")}) sort.Sort(SortImage(data)) table.AppendBulk(data) table.Render() return nil } func (c *imageCmd) showAliases(aliases shared.ImageAliases, filters []string) error { data := [][]string{} for _, alias := range aliases { if !c.aliasShouldShow(filters, &alias) { continue } data = append(data, []string{alias.Name, alias.Target[0:12], alias.Description}) } table := tablewriter.NewWriter(os.Stdout) table.SetAutoWrapText(false) table.SetAlignment(tablewriter.ALIGN_LEFT) table.SetRowLine(true) table.SetHeader([]string{ i18n.G("ALIAS"), i18n.G("FINGERPRINT"), i18n.G("DESCRIPTION")}) sort.Sort(SortImage(data)) table.AppendBulk(data) table.Render() return nil } func (c *imageCmd) doImageEdit(client *lxd.Client, image string) error { // If stdin isn't a terminal, read text from it if !termios.IsTerminal(int(syscall.Stdin)) { contents, err := ioutil.ReadAll(os.Stdin) if err != nil { return err } newdata := shared.BriefImageInfo{} err = yaml.Unmarshal(contents, &newdata) if err != nil { return err } return client.PutImageInfo(image, newdata) } // Extract the current value config, err := client.GetImageInfo(image) if err != nil { return err } brief := config.Brief() data, err := yaml.Marshal(&brief) if err != nil { return err } // Spawn the editor content, err := shared.TextEditor("", []byte(c.imageEditHelp()+"\n\n"+string(data))) if err != nil { return err } for { // Parse the text received from the editor newdata := shared.BriefImageInfo{} err = yaml.Unmarshal(content, &newdata) if err == nil { err = client.PutImageInfo(image, newdata) } // Respawn the editor if err != nil { fmt.Fprintf(os.Stderr, i18n.G("Config parsing error: %s")+"\n", err) fmt.Println(i18n.G("Press enter to start the editor again")) _, err := 
os.Stdin.Read(make([]byte, 1)) if err != nil { return err } content, err = shared.TextEditor("", content) if err != nil { return err } continue } break } return nil } func (c *imageCmd) imageShouldShow(filters []string, state *shared.ImageInfo) bool { if len(filters) == 0 { return true } for _, filter := range filters { found := false if strings.Contains(filter, "=") { membs := strings.SplitN(filter, "=", 2) key := membs[0] var value string if len(membs) < 2 { value = "" } else { value = membs[1] } for configKey, configValue := range state.Properties { list := listCmd{} if list.dotPrefixMatch(key, configKey) { //try to test filter value as a regexp regexpValue := value if !(strings.Contains(value, "^") || strings.Contains(value, "$")) { regexpValue = "^" + regexpValue + "$" } r, err := regexp.Compile(regexpValue) //if not regexp compatible use original value if err != nil { if value == configValue { found = true break } } else if r.MatchString(configValue) == true { found = true break } } } } else { for _, alias := range state.Aliases { if strings.Contains(alias.Name, filter) { found = true break } } if strings.Contains(state.Fingerprint, filter) { found = true } } if !found { return false } } return true } func (c *imageCmd) aliasShouldShow(filters []string, state *shared.ImageAliasesEntry) bool { if len(filters) == 0 { return true } for _, filter := range filters { if strings.Contains(state.Name, filter) || strings.Contains(state.Target, filter) { return true } } return false } lxd-2.0.2/lxc/info.go000066400000000000000000000123071272140510300143500ustar00rootroot00000000000000package main import ( "fmt" "io/ioutil" "strings" "gopkg.in/yaml.v2" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type infoCmd struct { showLog bool } func (c *infoCmd) showByDefault() bool { return true } func (c *infoCmd) usage() string { return i18n.G( `List information on LXD servers and containers. 
For a container: lxc info [<remote>:]container [--show-log] For a server: lxc info [<remote>:]`) } func (c *infoCmd) flags() { gnuflag.BoolVar(&c.showLog, "show-log", false, i18n.G("Show the container's last 100 log lines?")) } func (c *infoCmd) run(config *lxd.Config, args []string) error { var remote string var cName string if len(args) == 1 { remote, cName = config.ParseRemoteAndContainer(args[0]) } else { remote, cName = config.ParseRemoteAndContainer("") } d, err := lxd.NewClient(config, remote) if err != nil { return err } if cName == "" { return c.remoteInfo(d) } else { return c.containerInfo(d, cName, c.showLog) } } func (c *infoCmd) remoteInfo(d *lxd.Client) error { serverStatus, err := d.ServerStatus() if err != nil { return err } data, err := yaml.Marshal(&serverStatus) if err != nil { return err } fmt.Printf("%s", data) return nil } func (c *infoCmd) containerInfo(d *lxd.Client, name string, showLog bool) error { ct, err := d.ContainerInfo(name) if err != nil { return err } cs, err := d.ContainerState(name) if err != nil { return err } const layout = "2006/01/02 15:04 UTC" fmt.Printf(i18n.G("Name: %s")+"\n", ct.Name) fmt.Printf(i18n.G("Architecture: %s")+"\n", ct.Architecture) if ct.CreationDate.UTC().Unix() != 0 { fmt.Printf(i18n.G("Created: %s")+"\n", ct.CreationDate.UTC().Format(layout)) } fmt.Printf(i18n.G("Status: %s")+"\n", ct.Status) if ct.Ephemeral { fmt.Printf(i18n.G("Type: ephemeral") + "\n") } else { fmt.Printf(i18n.G("Type: persistent") + "\n") } fmt.Printf(i18n.G("Profiles: %s")+"\n", strings.Join(ct.Profiles, ", ")) if cs.Pid != 0 { fmt.Printf(i18n.G("Pid: %d")+"\n", cs.Pid) // IP addresses ipInfo := "" if cs.Network != nil { for netName, net := range cs.Network { vethStr := "" if net.HostName != "" { vethStr = fmt.Sprintf("\t%s", net.HostName) } for _, addr := range net.Addresses { ipInfo += fmt.Sprintf(" %s:\t%s\t%s%s\n", netName, addr.Family, addr.Address, vethStr) } } } if ipInfo != "" { fmt.Println(i18n.G("IPs:")) fmt.Printf(ipInfo) }
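The resource-usage section that follows renders byte counters through shared.GetByteSizeString. As a rough stand-in (this helper is my own sketch under assumed semantics, not the shared package's implementation), a byte count can be scaled like so:

```go
package main

import "fmt"

// humanSize scales a byte count into a short human-readable string.
// Sketch only; the real formatting lives in shared.GetByteSizeString.
func humanSize(b int64) string {
	units := []string{"B", "kB", "MB", "GB", "TB"}
	f := float64(b)
	i := 0
	for f >= 1024 && i < len(units)-1 {
		f /= 1024
		i++
	}
	if i == 0 {
		return fmt.Sprintf("%d%s", b, units[i])
	}
	return fmt.Sprintf("%.2f%s", f, units[i])
}

func main() {
	fmt.Println(humanSize(512))             // 512B
	fmt.Println(humanSize(3 * 1024 * 1024)) // 3.00MB
}
```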
fmt.Println(i18n.G("Resources:")) // Processes fmt.Printf(" "+i18n.G("Processes: %d")+"\n", cs.Processes) // Disk usage diskInfo := "" if cs.Disk != nil { for entry, disk := range cs.Disk { if disk.Usage != 0 { diskInfo += fmt.Sprintf(" %s: %s\n", entry, shared.GetByteSizeString(disk.Usage)) } } } if diskInfo != "" { fmt.Println(i18n.G(" Disk usage:")) fmt.Printf(diskInfo) } // Memory usage memoryInfo := "" if cs.Memory.Usage != 0 { memoryInfo += fmt.Sprintf(" %s: %s\n", i18n.G("Memory (current)"), shared.GetByteSizeString(cs.Memory.Usage)) } if cs.Memory.UsagePeak != 0 { memoryInfo += fmt.Sprintf(" %s: %s\n", i18n.G("Memory (peak)"), shared.GetByteSizeString(cs.Memory.UsagePeak)) } if cs.Memory.SwapUsage != 0 { memoryInfo += fmt.Sprintf(" %s: %s\n", i18n.G("Swap (current)"), shared.GetByteSizeString(cs.Memory.SwapUsage)) } if cs.Memory.SwapUsagePeak != 0 { memoryInfo += fmt.Sprintf(" %s: %s\n", i18n.G("Swap (peak)"), shared.GetByteSizeString(cs.Memory.SwapUsagePeak)) } if memoryInfo != "" { fmt.Println(i18n.G(" Memory usage:")) fmt.Printf(memoryInfo) } // Network usage networkInfo := "" if cs.Network != nil { for netName, net := range cs.Network { networkInfo += fmt.Sprintf(" %s:\n", netName) networkInfo += fmt.Sprintf(" %s: %s\n", i18n.G("Bytes received"), shared.GetByteSizeString(net.Counters.BytesReceived)) networkInfo += fmt.Sprintf(" %s: %s\n", i18n.G("Bytes sent"), shared.GetByteSizeString(net.Counters.BytesSent)) networkInfo += fmt.Sprintf(" %s: %d\n", i18n.G("Packets received"), net.Counters.PacketsReceived) networkInfo += fmt.Sprintf(" %s: %d\n", i18n.G("Packets sent"), net.Counters.PacketsSent) } } if networkInfo != "" { fmt.Println(i18n.G(" Network usage:")) fmt.Printf(networkInfo) } } // List snapshots first_snapshot := true snaps, err := d.ListSnapshots(name) if err != nil { return nil } for _, snap := range snaps { if first_snapshot { fmt.Println(i18n.G("Snapshots:")) } fields := strings.Split(snap.Name, shared.SnapshotDelimiter) fmt.Printf(" %s", 
fields[len(fields)-1]) if snap.CreationDate.UTC().Unix() != 0 { fmt.Printf(" ("+i18n.G("taken at %s")+")", snap.CreationDate.UTC().Format(layout)) } if snap.Stateful { fmt.Printf(" (" + i18n.G("stateful") + ")") } else { fmt.Printf(" (" + i18n.G("stateless") + ")") } fmt.Printf("\n") first_snapshot = false } if showLog { log, err := d.GetLog(name, "lxc.log") if err != nil { return err } stuff, err := ioutil.ReadAll(log) if err != nil { return err } fmt.Printf("\n"+i18n.G("Log:")+"\n\n%s\n", string(stuff)) } return nil } lxd-2.0.2/lxc/init.go000066400000000000000000000137301272140510300143610ustar00rootroot00000000000000package main import ( "fmt" "os" "strings" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type profileList []string var configMap map[string]string func (f *profileList) String() string { return fmt.Sprint(*f) } type configList []string func (f *configList) String() string { return fmt.Sprint(configMap) } func (f *configList) Set(value string) error { if value == "" { return fmt.Errorf(i18n.G("Invalid configuration key")) } items := strings.SplitN(value, "=", 2) if len(items) < 2 { return fmt.Errorf(i18n.G("Invalid configuration key")) } if configMap == nil { configMap = map[string]string{} } configMap[items[0]] = items[1] return nil } func (f *profileList) Set(value string) error { if value == "" { initRequestedEmptyProfiles = true return nil } if f == nil { *f = make(profileList, 1) } else { *f = append(*f, value) } return nil } var initRequestedEmptyProfiles bool type initCmd struct { profArgs profileList confArgs configList ephem bool } func (c *initCmd) showByDefault() bool { return false } func (c *initCmd) usage() string { return i18n.G( `Initialize a container from a particular image. lxc init [remote:] [remote:][] [--ephemeral|-e] [--profile|-p ...] [--config|-c ...] Initializes a container using the specified image and name. 
Not specifying -p will result in the default profile. Specifying "-p" with no argument will result in no profile. Example: lxc init ubuntu u1`) } func (c *initCmd) is_ephem(s string) bool { switch s { case "-e": return true case "--ephemeral": return true } return false } func (c *initCmd) is_profile(s string) bool { switch s { case "-p": return true case "--profile": return true } return false } func (c *initCmd) massage_args() { l := len(os.Args) if l < 2 { return } if c.is_profile(os.Args[l-1]) { initRequestedEmptyProfiles = true os.Args = os.Args[0 : l-1] return } if l < 3 { return } /* catch "lxc init ubuntu -p -e" */ if c.is_ephem(os.Args[l-1]) && c.is_profile(os.Args[l-2]) { initRequestedEmptyProfiles = true newargs := os.Args[0 : l-2] newargs = append(newargs, os.Args[l-1]) os.Args = newargs return } } func (c *initCmd) flags() { c.massage_args() gnuflag.Var(&c.confArgs, "config", i18n.G("Config key/value to apply to the new container")) gnuflag.Var(&c.confArgs, "c", i18n.G("Config key/value to apply to the new container")) gnuflag.Var(&c.profArgs, "profile", i18n.G("Profile to apply to the new container")) gnuflag.Var(&c.profArgs, "p", i18n.G("Profile to apply to the new container")) gnuflag.BoolVar(&c.ephem, "ephemeral", false, i18n.G("Ephemeral container")) gnuflag.BoolVar(&c.ephem, "e", false, i18n.G("Ephemeral container")) } func (c *initCmd) run(config *lxd.Config, args []string) error { if len(args) > 2 || len(args) < 1 { return errArgs } iremote, image := config.ParseRemoteAndContainer(args[0]) var name string var remote string if len(args) == 2 { remote, name = config.ParseRemoteAndContainer(args[1]) } else { remote, name = config.ParseRemoteAndContainer("") } d, err := lxd.NewClient(config, remote) if err != nil { return err } // TODO: implement the syntax for supporting other image types/remotes /* * initRequestedEmptyProfiles means user requested empty * !initRequestedEmptyProfiles but len(profArgs) == 0 means use profile default */ profiles := []string{} for _, p
:= range c.profArgs { profiles = append(profiles, p) } var resp *lxd.Response if name == "" { fmt.Printf(i18n.G("Creating the container") + "\n") } else { fmt.Printf(i18n.G("Creating %s")+"\n", name) } iremote, image = c.guessImage(config, d, remote, iremote, image) if !initRequestedEmptyProfiles && len(profiles) == 0 { resp, err = d.Init(name, iremote, image, nil, configMap, c.ephem) } else { resp, err = d.Init(name, iremote, image, &profiles, configMap, c.ephem) } if err != nil { return err } c.initProgressTracker(d, resp.Operation) err = d.WaitForSuccess(resp.Operation) if err != nil { return err } else { op, err := resp.MetadataAsOperation() if err != nil { return fmt.Errorf(i18n.G("didn't get any affected image, container or snapshot from server")) } containers, ok := op.Resources["containers"] if !ok || len(containers) == 0 { return fmt.Errorf(i18n.G("didn't get any affected image, container or snapshot from server")) } if len(containers) == 1 && name == "" { fields := strings.Split(containers[0], "/") fmt.Printf(i18n.G("Container name is: %s")+"\n", fields[len(fields)-1]) } } return nil } func (c *initCmd) initProgressTracker(d *lxd.Client, operation string) { handler := func(msg interface{}) { if msg == nil { return } event := msg.(map[string]interface{}) if event["type"].(string) != "operation" { return } if event["metadata"] == nil { return } md := event["metadata"].(map[string]interface{}) if !strings.HasSuffix(operation, md["id"].(string)) { return } if md["metadata"] == nil { return } if shared.StatusCode(md["status_code"].(float64)).IsFinal() { return } opMd := md["metadata"].(map[string]interface{}) progress, ok := opMd["download_progress"] if ok { fmt.Printf(i18n.G("Retrieving image: %s")+"\r", progress.(string)) if progress.(string) == "100%" { fmt.Printf("\n") } } } go d.Monitor([]string{"operation"}, handler) } func (c *initCmd) guessImage(config *lxd.Config, d *lxd.Client, remote string, iremote string, image string)
(string, string) { if remote != iremote { return iremote, image } _, ok := config.Remotes[image] if !ok { return iremote, image } target := d.GetAlias(image) if target != "" { return iremote, image } _, err := d.GetImageInfo(image) if err == nil { return iremote, image } fmt.Fprintf(os.Stderr, i18n.G("The local image '%s' couldn't be found, trying '%s:' instead.")+"\n", image, image) return image, "default" } lxd-2.0.2/lxc/launch.go000066400000000000000000000067731272140510300147010ustar00rootroot00000000000000package main import ( "fmt" "strings" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type launchCmd struct { init initCmd } func (c *launchCmd) showByDefault() bool { return true } func (c *launchCmd) usage() string { return i18n.G( `Launch a container from a particular image. lxc launch [remote:] [remote:][] [--ephemeral|-e] [--profile|-p ...] [--config|-c ...] Launches a container using the specified image and name. Not specifying -p will result in the default profile. Specifying "-p" with no argument will result in no profile. 
Example: lxc launch ubuntu:16.04 u1`) } func (c *launchCmd) flags() { c.init = initCmd{} c.init.massage_args() gnuflag.Var(&c.init.confArgs, "config", i18n.G("Config key/value to apply to the new container")) gnuflag.Var(&c.init.confArgs, "c", i18n.G("Config key/value to apply to the new container")) gnuflag.Var(&c.init.profArgs, "profile", i18n.G("Profile to apply to the new container")) gnuflag.Var(&c.init.profArgs, "p", i18n.G("Profile to apply to the new container")) gnuflag.BoolVar(&c.init.ephem, "ephemeral", false, i18n.G("Ephemeral container")) gnuflag.BoolVar(&c.init.ephem, "e", false, i18n.G("Ephemeral container")) } func (c *launchCmd) run(config *lxd.Config, args []string) error { if len(args) > 2 || len(args) < 1 { return errArgs } iremote, image := config.ParseRemoteAndContainer(args[0]) var name string var remote string if len(args) == 2 { remote, name = config.ParseRemoteAndContainer(args[1]) } else { remote, name = config.ParseRemoteAndContainer("") } d, err := lxd.NewClient(config, remote) if err != nil { return err } /* * initRequestedEmptyProfiles means user requested empty * !initRequestedEmptyProfiles but len(profArgs) == 0 means use profile default */ var resp *lxd.Response profiles := []string{} for _, p := range c.init.profArgs { profiles = append(profiles, p) } iremote, image = c.init.guessImage(config, d, remote, iremote, image) if !initRequestedEmptyProfiles && len(profiles) == 0 { resp, err = d.Init(name, iremote, image, nil, configMap, c.init.ephem) } else { resp, err = d.Init(name, iremote, image, &profiles, configMap, c.init.ephem) } if err != nil { return err } c.init.initProgressTracker(d, resp.Operation) if name == "" { op, err := resp.MetadataAsOperation() if err != nil { return fmt.Errorf(i18n.G("didn't get any affected image, container or snapshot from server")) } containers, ok := op.Resources["containers"] if !ok || len(containers) == 0 { return fmt.Errorf(i18n.G("didn't get any affected image, container or snapshot from 
server")) } var version string toScan := strings.Replace(containers[0], "/", " ", -1) count, err := fmt.Sscanf(toScan, " %s containers %s", &version, &name) if err != nil { return err } if count != 2 { return fmt.Errorf(i18n.G("bad number of things scanned from image, container or snapshot")) } if version != shared.APIVersion { return fmt.Errorf(i18n.G("got bad version")) } } fmt.Printf(i18n.G("Creating %s")+"\n", name) if err = d.WaitForSuccess(resp.Operation); err != nil { return err } fmt.Printf(i18n.G("Starting %s")+"\n", name) resp, err = d.Action(name, shared.Start, -1, false, false) if err != nil { return err } err = d.WaitForSuccess(resp.Operation) if err != nil { return fmt.Errorf("%s\n"+i18n.G("Try `lxc info --show-log %s` for more info"), err, name) } return nil } lxd-2.0.2/lxc/list.go000066400000000000000000000274171272140510300144000ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "os" "regexp" "sort" "strings" "sync" "github.com/olekukonko/tablewriter" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type column struct { Name string Data columnData NeedsState bool NeedsSnapshots bool } type columnData func(shared.ContainerInfo, *shared.ContainerState, []shared.SnapshotInfo) string type byName [][]string func (a byName) Len() int { return len(a) } func (a byName) Swap(i, j int) { a[i], a[j] = a[j], a[i] } func (a byName) Less(i, j int) bool { if a[i][0] == "" { return false } if a[j][0] == "" { return true } return a[i][0] < a[j][0] } const ( listFormatTable = "table" listFormatJSON = "json" ) type listCmd struct { chosenColumnRunes string fast bool format string } func (c *listCmd) showByDefault() bool { return true } func (c *listCmd) usage() string { return i18n.G( `Lists the available resources. 
lxc list [resource] [filters] [--format table|json] [-c columns] [--fast] The filters are: * A single keyword like "web" which will list any container with a name starting with "web". * A regular expression on the container name. (e.g. .*web.*01$) * A key/value pair referring to a configuration item. For those, the namespace can be abbreviated to the smallest unambiguous identifier: * "user.blah=abc" will list all containers with the "blah" user property set to "abc". * "u.blah=abc" will do the same * "security.privileged=1" will list all privileged containers * "s.privileged=1" will do the same * A regular expression matching a configuration item or its value. (e.g. volatile.eth0.hwaddr=00:16:3e:.*) Columns for table format are: * 4 - IPv4 address * 6 - IPv6 address * a - architecture * c - creation date * n - name * p - pid of container init process * P - profiles * s - state * S - number of snapshots * t - type (persistent or ephemeral) Default column layout: ns46tS Fast column layout: nsacPt`) } func (c *listCmd) flags() { gnuflag.StringVar(&c.chosenColumnRunes, "c", "ns46tS", i18n.G("Columns")) gnuflag.StringVar(&c.chosenColumnRunes, "columns", "ns46tS", i18n.G("Columns")) gnuflag.StringVar(&c.format, "format", "table", i18n.G("Format")) gnuflag.BoolVar(&c.fast, "fast", false, i18n.G("Fast mode (same as --columns=nsacPt)")) } // This seems a little excessive.
func (c *listCmd) dotPrefixMatch(short string, full string) bool { fullMembs := strings.Split(full, ".") shortMembs := strings.Split(short, ".") if len(fullMembs) != len(shortMembs) { return false } for i, _ := range fullMembs { if !strings.HasPrefix(fullMembs[i], shortMembs[i]) { return false } } return true } func (c *listCmd) shouldShow(filters []string, state *shared.ContainerInfo) bool { for _, filter := range filters { if strings.Contains(filter, "=") { membs := strings.SplitN(filter, "=", 2) key := membs[0] var value string if len(membs) < 2 { value = "" } else { value = membs[1] } found := false for configKey, configValue := range state.ExpandedConfig { if c.dotPrefixMatch(key, configKey) { //try to test filter value as a regexp regexpValue := value if !(strings.Contains(value, "^") || strings.Contains(value, "$")) { regexpValue = "^" + regexpValue + "$" } r, err := regexp.Compile(regexpValue) //if not regexp compatible use original value if err != nil { if value == configValue { found = true break } else { // the property was found but didn't match return false } } else if r.MatchString(configValue) == true { found = true break } } } if state.ExpandedConfig[key] == value { return true } if !found { return false } } else { regexpValue := filter if !(strings.Contains(filter, "^") || strings.Contains(filter, "$")) { regexpValue = "^" + regexpValue + "$" } r, err := regexp.Compile(regexpValue) if err == nil && r.MatchString(state.Name) == true { return true } if !strings.HasPrefix(state.Name, filter) { return false } } } return true } func (c *listCmd) listContainers(d *lxd.Client, cinfos []shared.ContainerInfo, filters []string, columns []column) error { headers := []string{} for _, column := range columns { headers = append(headers, column.Name) } threads := 10 if len(cinfos) < threads { threads = len(cinfos) } cStates := map[string]*shared.ContainerState{} cStatesLock := sync.Mutex{} cStatesQueue := make(chan string, threads) cStatesWg := sync.WaitGroup{} 
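The filter semantics implemented above (dot-prefix abbreviation of config keys, and implicit `^…$` anchoring of filter values that carry no regex anchors) can be exercised in isolation. This sketch reimplements both rules from the logic above as a standalone program, without importing the lxc packages:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// dotPrefixMatch reports whether each dotted component of short is a
// prefix of the corresponding component of full, e.g. "s.privileged"
// matches "security.privileged". Mirrors the lxc list helper.
func dotPrefixMatch(short, full string) bool {
	shortMembs := strings.Split(short, ".")
	fullMembs := strings.Split(full, ".")
	if len(shortMembs) != len(fullMembs) {
		return false
	}
	for i := range fullMembs {
		if !strings.HasPrefix(fullMembs[i], shortMembs[i]) {
			return false
		}
	}
	return true
}

// valueMatches applies the same anchoring rule as shouldShow: a filter
// value without ^ or $ is wrapped so it must match the whole string.
func valueMatches(value, configValue string) bool {
	regexpValue := value
	if !strings.Contains(value, "^") && !strings.Contains(value, "$") {
		regexpValue = "^" + regexpValue + "$"
	}
	r, err := regexp.Compile(regexpValue)
	if err != nil {
		return value == configValue // not regexp-compatible: literal comparison
	}
	return r.MatchString(configValue)
}

func main() {
	fmt.Println(dotPrefixMatch("s.privileged", "security.privileged")) // true
	fmt.Println(valueMatches("abc", "abcdef"))                         // false: whole-string match
	fmt.Println(valueMatches("00:16:3e:.*", "00:16:3e:aa:bb:cc"))      // true
}
```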
cSnapshots := map[string][]shared.SnapshotInfo{} cSnapshotsLock := sync.Mutex{} cSnapshotsQueue := make(chan string, threads) cSnapshotsWg := sync.WaitGroup{} for i := 0; i < threads; i++ { cStatesWg.Add(1) go func() { for { cName, more := <-cStatesQueue if !more { break } state, err := d.ContainerState(cName) if err != nil { continue } cStatesLock.Lock() cStates[cName] = state cStatesLock.Unlock() } cStatesWg.Done() }() cSnapshotsWg.Add(1) go func() { for { cName, more := <-cSnapshotsQueue if !more { break } snaps, err := d.ListSnapshots(cName) if err != nil { continue } cSnapshotsLock.Lock() cSnapshots[cName] = snaps cSnapshotsLock.Unlock() } cSnapshotsWg.Done() }() } for _, cInfo := range cinfos { for _, column := range columns { if column.NeedsState && cInfo.IsActive() { _, ok := cStates[cInfo.Name] if ok { continue } cStatesLock.Lock() cStates[cInfo.Name] = nil cStatesLock.Unlock() cStatesQueue <- cInfo.Name } if column.NeedsSnapshots { _, ok := cSnapshots[cInfo.Name] if ok { continue } cSnapshotsLock.Lock() cSnapshots[cInfo.Name] = nil cSnapshotsLock.Unlock() cSnapshotsQueue <- cInfo.Name } } } close(cStatesQueue) close(cSnapshotsQueue) cStatesWg.Wait() cSnapshotsWg.Wait() switch c.format { case listFormatTable: data := [][]string{} for _, cInfo := range cinfos { if !c.shouldShow(filters, &cInfo) { continue } col := []string{} for _, column := range columns { col = append(col, column.Data(cInfo, cStates[cInfo.Name], cSnapshots[cInfo.Name])) } data = append(data, col) } table := tablewriter.NewWriter(os.Stdout) table.SetAutoWrapText(false) table.SetAlignment(tablewriter.ALIGN_LEFT) table.SetRowLine(true) table.SetHeader(headers) sort.Sort(byName(data)) table.AppendBulk(data) table.Render() case listFormatJSON: data := make([]listContainerItem, len(cinfos)) for i := range cinfos { data[i].ContainerInfo = &cinfos[i] data[i].State = cStates[cinfos[i].Name] data[i].Snapshots = cSnapshots[cinfos[i].Name] } enc := json.NewEncoder(os.Stdout) err := enc.Encode(data) 
if err != nil { return err } default: return fmt.Errorf("invalid format %q", c.format) } return nil } type listContainerItem struct { *shared.ContainerInfo State *shared.ContainerState `json:"state"` Snapshots []shared.SnapshotInfo `json:"snapshots"` } func (c *listCmd) run(config *lxd.Config, args []string) error { var remote string name := "" filters := []string{} if len(args) != 0 { filters = args if strings.Contains(args[0], ":") && !strings.Contains(args[0], "=") { remote, name = config.ParseRemoteAndContainer(args[0]) filters = args[1:] } else if !strings.Contains(args[0], "=") { remote = config.DefaultRemote name = args[0] } } filters = append(filters, name) if remote == "" { remote = config.DefaultRemote } d, err := lxd.NewClient(config, remote) if err != nil { return err } var cts []shared.ContainerInfo ctslist, err := d.ListContainers() if err != nil { return err } for _, cinfo := range ctslist { if !c.shouldShow(filters, &cinfo) { continue } cts = append(cts, cinfo) } columns_map := map[rune]column{ '4': column{i18n.G("IPV4"), c.IP4ColumnData, true, false}, '6': column{i18n.G("IPV6"), c.IP6ColumnData, true, false}, 'a': column{i18n.G("ARCHITECTURE"), c.ArchitectureColumnData, false, false}, 'c': column{i18n.G("CREATED AT"), c.CreatedColumnData, false, false}, 'n': column{i18n.G("NAME"), c.nameColumnData, false, false}, 'p': column{i18n.G("PID"), c.PIDColumnData, true, false}, 'P': column{i18n.G("PROFILES"), c.ProfilesColumnData, false, false}, 'S': column{i18n.G("SNAPSHOTS"), c.numberSnapshotsColumnData, false, true}, 's': column{i18n.G("STATE"), c.statusColumnData, false, false}, 't': column{i18n.G("TYPE"), c.typeColumnData, false, false}, } if c.fast { c.chosenColumnRunes = "nsacPt" } columns := []column{} for _, columnRune := range c.chosenColumnRunes { if column, ok := columns_map[columnRune]; ok { columns = append(columns, column) } else { return fmt.Errorf("%s contains invalid column characters\n", c.chosenColumnRunes) } } return
c.listContainers(d, cts, filters, columns) } func (c *listCmd) nameColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { return cInfo.Name } func (c *listCmd) statusColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { return strings.ToUpper(cInfo.Status) } func (c *listCmd) IP4ColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { if cInfo.IsActive() && cState != nil && cState.Network != nil { ipv4s := []string{} for netName, net := range cState.Network { if net.Type == "loopback" { continue } for _, addr := range net.Addresses { if shared.StringInSlice(addr.Scope, []string{"link", "local"}) { continue } if addr.Family == "inet" { ipv4s = append(ipv4s, fmt.Sprintf("%s (%s)", addr.Address, netName)) } } } return strings.Join(ipv4s, "\n") } else { return "" } } func (c *listCmd) IP6ColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { if cInfo.IsActive() && cState != nil && cState.Network != nil { ipv6s := []string{} for netName, net := range cState.Network { if net.Type == "loopback" { continue } for _, addr := range net.Addresses { if shared.StringInSlice(addr.Scope, []string{"link", "local"}) { continue } if addr.Family == "inet6" { ipv6s = append(ipv6s, fmt.Sprintf("%s (%s)", addr.Address, netName)) } } } return strings.Join(ipv6s, "\n") } else { return "" } } func (c *listCmd) typeColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { if cInfo.Ephemeral { return i18n.G("EPHEMERAL") } else { return i18n.G("PERSISTENT") } } func (c *listCmd) numberSnapshotsColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { if cSnaps != nil { return fmt.Sprintf("%d", len(cSnaps)) } return "" } func (c *listCmd) PIDColumnData(cInfo shared.ContainerInfo, 
cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { if cInfo.IsActive() && cState != nil { return fmt.Sprintf("%d", cState.Pid) } return "" } func (c *listCmd) ArchitectureColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { return cInfo.Architecture } func (c *listCmd) ProfilesColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { return strings.Join(cInfo.Profiles, "\n") } func (c *listCmd) CreatedColumnData(cInfo shared.ContainerInfo, cState *shared.ContainerState, cSnaps []shared.SnapshotInfo) string { layout := "2006/01/02 15:04 UTC" if cInfo.CreationDate.UTC().Unix() != 0 { return cInfo.CreationDate.UTC().Format(layout) } return "" } lxd-2.0.2/lxc/list_test.go000066400000000000000000000016641272140510300154330ustar00rootroot00000000000000package main import ( "testing" "github.com/lxc/lxd/shared" ) func TestDotPrefixMatch(t *testing.T) { list := listCmd{} pass := true pass = pass && list.dotPrefixMatch("s.privileged", "security.privileged") pass = pass && list.dotPrefixMatch("u.blah", "user.blah") if !pass { t.Error("failed prefix matching") } } func TestShouldShow(t *testing.T) { list := listCmd{} state := &shared.ContainerInfo{ Name: "foo", ExpandedConfig: map[string]string{ "security.privileged": "1", "user.blah": "abc", }, } if !list.shouldShow([]string{"u.blah=abc"}, state) { t.Error("u.blah=abc didn't match") } if !list.shouldShow([]string{"user.blah=abc"}, state) { t.Error("user.blah=abc didn't match") } if list.shouldShow([]string{"bar", "u.blah=abc"}, state) { t.Errorf("name filter didn't work") } if list.shouldShow([]string{"bar", "u.blah=other"}, state) { t.Errorf("value filter didn't work") } } lxd-2.0.2/lxc/main.go000066400000000000000000000160461272140510300143450ustar00rootroot00000000000000package main import ( "fmt" "net" "net/url" "os" "os/exec" "path" "strings" "syscall" "github.com/lxc/lxd" 
"github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" "github.com/lxc/lxd/shared/logging" ) var configPath string func main() { if err := run(); err != nil { // The action we take depends on the error we get. msg := fmt.Sprintf(i18n.G("error: %v"), err) switch t := err.(type) { case *url.Error: switch u := t.Err.(type) { case *net.OpError: if u.Op == "dial" && u.Net == "unix" { switch errno := u.Err.(type) { case syscall.Errno: switch errno { case syscall.ENOENT: msg = i18n.G("LXD socket not found; is LXD running?") case syscall.ECONNREFUSED: msg = i18n.G("Connection refused; is LXD running?") case syscall.EACCES: msg = i18n.G("Permission denied; are you in the lxd group?") default: msg = fmt.Sprintf("%d %s", uintptr(errno), errno.Error()) } } } } } fmt.Fprintln(os.Stderr, fmt.Sprintf("%s", msg)) os.Exit(1) } } func run() error { verbose := gnuflag.Bool("verbose", false, i18n.G("Enables verbose mode.")) debug := gnuflag.Bool("debug", false, i18n.G("Enables debug mode.")) forceLocal := gnuflag.Bool("force-local", false, i18n.G("Force using the local unix socket.")) noAlias := gnuflag.Bool("no-alias", false, i18n.G("Ignore aliases when determining what command to run.")) configDir := "$HOME/.config/lxc" if os.Getenv("LXD_CONF") != "" { configDir = os.Getenv("LXD_CONF") } configPath = os.ExpandEnv(path.Join(configDir, "config.yml")) if len(os.Args) >= 3 && os.Args[1] == "config" && os.Args[2] == "profile" { fmt.Fprintf(os.Stderr, i18n.G("`lxc config profile` is deprecated, please use `lxc profile`")+"\n") os.Args = append(os.Args[:1], os.Args[2:]...)
} if len(os.Args) >= 2 && (os.Args[1] == "-h" || os.Args[1] == "--help") { os.Args[1] = "help" } if len(os.Args) >= 2 && (os.Args[1] == "--all") { os.Args[1] = "help" os.Args = append(os.Args, "--all") } if len(os.Args) == 2 && os.Args[1] == "--version" { os.Args[1] = "version" } if len(os.Args) < 2 { commands["help"].run(nil, nil) os.Exit(1) } var config *lxd.Config var err error if *forceLocal { config = &lxd.DefaultConfig } else { config, err = lxd.LoadConfig(configPath) if err != nil { return err } } // This is quite impolite, but it seems gnuflag needs us to shift our // own exename out of the arguments before parsing them. However, this // is useful for execIfAlias, which wants to know exactly the command // line we received, and in some cases is called before this shift, and // in others after. So, let's save the original args. origArgs := os.Args name := os.Args[1] /* at this point we haven't parsed the args, so we have to look for * --no-alias by hand. */ if !shared.StringInSlice("--no-alias", origArgs) { execIfAliases(config, origArgs) } cmd, ok := commands[name] if !ok { commands["help"].run(nil, nil) fmt.Fprintf(os.Stderr, "\n"+i18n.G("error: unknown command: %s")+"\n", name) os.Exit(1) } cmd.flags() gnuflag.Usage = func() { fmt.Fprintf(os.Stderr, i18n.G("Usage: %s")+"\n\n"+i18n.G("Options:")+"\n\n", strings.TrimSpace(cmd.usage())) gnuflag.PrintDefaults() } os.Args = os.Args[1:] gnuflag.Parse(true) shared.Log, err = logging.GetLogger("", "", *verbose, *debug, nil) if err != nil { return err } certf := config.ConfigPath("client.crt") keyf := config.ConfigPath("client.key") if !*forceLocal && os.Args[0] != "help" && os.Args[0] != "version" && (!shared.PathExists(certf) || !shared.PathExists(keyf)) { fmt.Fprintf(os.Stderr, i18n.G("Generating a client certificate. 
This may take a minute...")+"\n") err = shared.FindOrGenCert(certf, keyf) if err != nil { return err } if shared.PathExists("/var/lib/lxd/") { fmt.Fprintf(os.Stderr, i18n.G("If this is your first time using LXD, you should also run: sudo lxd init")+"\n") fmt.Fprintf(os.Stderr, i18n.G("To start your first container, try: lxc launch ubuntu:16.04")+"\n\n") } } err = cmd.run(config, gnuflag.Args()) if err == errArgs { /* If we got an error about invalid arguments, let's try to * expand this as an alias */ if !*noAlias { execIfAliases(config, origArgs) } fmt.Fprintf(os.Stderr, "%s\n\n"+i18n.G("error: %v")+"\n", cmd.usage(), err) os.Exit(1) } return err } type command interface { usage() string flags() showByDefault() bool run(config *lxd.Config, args []string) error } var commands = map[string]command{ "config": &configCmd{}, "copy": ©Cmd{}, "delete": &deleteCmd{}, "exec": &execCmd{}, "file": &fileCmd{}, "finger": &fingerCmd{}, "help": &helpCmd{}, "image": &imageCmd{}, "info": &infoCmd{}, "init": &initCmd{}, "launch": &launchCmd{}, "list": &listCmd{}, "monitor": &monitorCmd{}, "move": &moveCmd{}, "pause": &actionCmd{shared.Freeze, false, false, "pause", -1, false, false, false}, "profile": &profileCmd{}, "publish": &publishCmd{}, "remote": &remoteCmd{}, "restart": &actionCmd{shared.Restart, true, true, "restart", -1, false, false, false}, "restore": &restoreCmd{}, "snapshot": &snapshotCmd{}, "start": &actionCmd{shared.Start, false, true, "start", -1, false, false, false}, "stop": &actionCmd{shared.Stop, true, true, "stop", -1, false, false, false}, "version": &versionCmd{}, } var errArgs = fmt.Errorf(i18n.G("wrong number of subcommand arguments")) func expandAlias(config *lxd.Config, origArgs []string) ([]string, bool) { foundAlias := false aliasKey := []string{} aliasValue := []string{} for k, v := range config.Aliases { matches := false for i, key := range strings.Split(k, " ") { if len(origArgs) <= i+1 { break } if origArgs[i+1] == key { matches = true aliasKey = 
strings.Split(k, " ") aliasValue = strings.Split(v, " ") break } } if !matches { continue } foundAlias = true break } if !foundAlias { return []string{}, false } newArgs := []string{origArgs[0]} hasReplacedArgsVar := false for i, aliasArg := range aliasValue { if aliasArg == "@ARGS@" && len(origArgs) > i { newArgs = append(newArgs, origArgs[i+1:]...) hasReplacedArgsVar = true } else { newArgs = append(newArgs, aliasArg) } } if !hasReplacedArgsVar { /* add the rest of the arguments */ newArgs = append(newArgs, origArgs[len(aliasKey)+1:]...) } /* don't re-do aliases the next time; this allows us to have recursive * aliases, e.g. `lxc list` to `lxc list -c n` */ newArgs = append(newArgs[:2], append([]string{"--no-alias"}, newArgs[2:]...)...) return newArgs, true } func execIfAliases(config *lxd.Config, origArgs []string) { newArgs, expanded := expandAlias(config, origArgs) if !expanded { return } path, err := exec.LookPath(origArgs[0]) if err != nil { fmt.Fprintf(os.Stderr, i18n.G("processing aliases failed %s\n"), err) os.Exit(5) } ret := syscall.Exec(path, newArgs, syscall.Environ()) fmt.Fprintf(os.Stderr, i18n.G("processing aliases failed %s\n"), ret) os.Exit(5) } lxd-2.0.2/lxc/main_test.go000066400000000000000000000023731272140510300154020ustar00rootroot00000000000000package main import ( "testing" "github.com/lxc/lxd" ) type aliasTestcase struct { input []string expected []string } func slicesEqual(a, b []string) bool { if a == nil && b == nil { return true } if a == nil || b == nil { return false } if len(a) != len(b) { return false } for i := range a { if a[i] != b[i] { return false } } return true } func TestExpandAliases(t *testing.T) { aliases := map[string]string{ "tester 12": "list", "foo": "list @ARGS@ -c n", } testcases := []aliasTestcase{ aliasTestcase{ input: []string{"lxc", "list"}, expected: []string{"lxc", "list"}, }, aliasTestcase{ input: []string{"lxc", "tester", "12"}, expected: []string{"lxc", "list", "--no-alias"}, }, aliasTestcase{ input: 
[]string{"lxc", "foo", "asdf"}, expected: []string{"lxc", "list", "--no-alias", "asdf", "-c", "n"}, }, } conf := &lxd.Config{Aliases: aliases} for _, tc := range testcases { result, expanded := expandAlias(conf, tc.input) if !expanded { if !slicesEqual(tc.input, tc.expected) { t.Errorf("didn't expand when expected to: %s", tc.input) } continue } if !slicesEqual(result, tc.expected) { t.Errorf("%s didn't match %s", result, tc.expected) } } } lxd-2.0.2/lxc/monitor.go000066400000000000000000000030351272140510300151020ustar00rootroot00000000000000package main import ( "fmt" "gopkg.in/yaml.v2" "github.com/lxc/lxd" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type typeList []string func (f *typeList) String() string { return fmt.Sprint(*f) } func (f *typeList) Set(value string) error { if value == "" { return fmt.Errorf("Invalid type: %s", value) } if f == nil { *f = make(typeList, 1) } else { *f = append(*f, value) } return nil } type monitorCmd struct { typeArgs typeList } func (c *monitorCmd) showByDefault() bool { return false } func (c *monitorCmd) usage() string { return i18n.G( `Monitor activity on the LXD server. lxc monitor [remote:] [--type=TYPE...] Connects to the monitoring interface of the specified LXD server. By default will listen to all message types. Specific types to listen to can be specified with --type. 
Example:
lxc monitor --type=logging`)
}

func (c *monitorCmd) flags() {
	gnuflag.Var(&c.typeArgs, "type", i18n.G("Event type to listen for"))
}

func (c *monitorCmd) run(config *lxd.Config, args []string) error {
	var remote string

	if len(args) > 1 {
		return errArgs
	}

	if len(args) == 0 {
		remote, _ = config.ParseRemoteAndContainer("")
	} else {
		remote, _ = config.ParseRemoteAndContainer(args[0])
	}

	d, err := lxd.NewClient(config, remote)
	if err != nil {
		return err
	}

	handler := func(message interface{}) {
		render, err := yaml.Marshal(&message)
		if err != nil {
			fmt.Printf("error: %s\n", err)
			return
		}

		fmt.Printf("%s\n\n", render)
	}

	return d.Monitor(c.typeArgs, handler)
}

lxd-2.0.2/lxc/move.go

package main

import (
	"github.com/lxc/lxd"
	"github.com/lxc/lxd/shared/i18n"
)

type moveCmd struct {
	httpAddr string
}

func (c *moveCmd) showByDefault() bool {
	return true
}

func (c *moveCmd) usage() string {
	return i18n.G(
		`Move containers within or in between LXD instances.

lxc move [remote:]<source container> [remote:]<destination container>
    Move a container between two hosts, renaming it if destination name differs.

lxc move <old name> <new name>
    Rename a local container.
`)
}

func (c *moveCmd) flags() {}

func (c *moveCmd) run(config *lxd.Config, args []string) error {
	if len(args) != 2 {
		return errArgs
	}

	sourceRemote, sourceName := config.ParseRemoteAndContainer(args[0])
	destRemote, destName := config.ParseRemoteAndContainer(args[1])

	// As an optimization, if the source and destination are the same, do
	// this via a simple rename. This only works for containers that aren't
	// running; containers that are running should be live migrated (of
	// course, this changing of hostname isn't supported right now, so this
	// simply won't work).
	if sourceRemote == destRemote {
		source, err := lxd.NewClient(config, sourceRemote)
		if err != nil {
			return err
		}

		rename, err := source.Rename(sourceName, destName)
		if err != nil {
			return err
		}

		return source.WaitForSuccess(rename.Operation)
	}

	cpy := copyCmd{}

	// A move is just a copy followed by a delete; however, we want to
	// keep the volatile entries around since we are moving the container.
	if err := cpy.copyContainer(config, args[0], args[1], true, -1); err != nil {
		return err
	}

	return commands["delete"].run(config, args[:1])
}

lxd-2.0.2/lxc/profile.go

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strings"
	"syscall"

	"gopkg.in/yaml.v2"

	"github.com/lxc/lxd"
	"github.com/lxc/lxd/shared"
	"github.com/lxc/lxd/shared/i18n"
	"github.com/lxc/lxd/shared/termios"
)

type profileCmd struct {
	httpAddr string
}

func (c *profileCmd) showByDefault() bool {
	return true
}

func (c *profileCmd) profileEditHelp() string {
	return i18n.G(
		`### This is a yaml representation of the profile.
### Any line starting with a '#' will be ignored.
###
### A profile consists of a set of configuration items followed by a set of
### devices.
###
### An example would look like:
### name: onenic
### config:
###   raw.lxc: lxc.aa_profile=unconfined
### devices:
###   eth0:
###     nictype: bridged
###     parent: lxdbr0
###     type: nic
###
### Note that the name is shown but cannot be changed`)
}

func (c *profileCmd) usage() string {
	return i18n.G(
		`Manage configuration profiles.

lxc profile list [filters]
    List available profiles.

lxc profile show <profile>
    Show details of a profile.

lxc profile create <profile>
    Create a profile.

lxc profile copy <profile> <remote>
    Copy the profile to the specified remote.

lxc profile get <profile> <key>
    Get profile configuration.

lxc profile set <profile> <key> <value>
    Set profile configuration.

lxc profile delete <profile>
    Delete a profile.

lxc profile edit <profile>
    Edit profile, either by launching external editor or reading STDIN.
Example: lxc profile edit # launch editor cat profile.yml | lxc profile edit # read from profile.yml lxc profile apply Apply a comma-separated list of profiles to a container, in order. All profiles passed in this call (and only those) will be applied to the specified container. Example: lxc profile apply foo default,bar # Apply default and bar lxc profile apply foo default # Only default is active lxc profile apply '' # no profiles are applied anymore lxc profile apply bar,default # Apply default second now Devices: lxc profile device list List devices in the given profile. lxc profile device show Show full device details in the given profile. lxc profile device remove Remove a device from a profile. lxc profile device get <[remote:]profile> Get a device property. lxc profile device set <[remote:]profile> Set a device property. lxc profile device unset <[remote:]profile> Unset a device property. lxc profile device add [key=value]... Add a profile device, such as a disk or a nic, to the containers using the specified profile.`) } func (c *profileCmd) flags() {} func (c *profileCmd) run(config *lxd.Config, args []string) error { if len(args) < 1 { return errArgs } if args[0] == "list" { return c.doProfileList(config, args) } if len(args) < 2 { return errArgs } remote, profile := config.ParseRemoteAndContainer(args[1]) client, err := lxd.NewClient(config, remote) if err != nil { return err } switch args[0] { case "create": return c.doProfileCreate(client, profile) case "delete": return c.doProfileDelete(client, profile) case "device": return c.doProfileDevice(config, args) case "edit": return c.doProfileEdit(client, profile) case "apply": container := profile switch len(args) { case 2: profile = "" case 3: profile = args[2] default: return errArgs } return c.doProfileApply(client, container, profile) case "get": return c.doProfileGet(client, profile, args[2:]) case "set": return c.doProfileSet(client, profile, args[2:]) case "unset": return c.doProfileSet(client, 
profile, args[2:]) case "copy": return c.doProfileCopy(config, client, profile, args[2:]) case "show": return c.doProfileShow(client, profile) default: return errArgs } } func (c *profileCmd) doProfileCreate(client *lxd.Client, p string) error { err := client.ProfileCreate(p) if err == nil { fmt.Printf(i18n.G("Profile %s created")+"\n", p) } return err } func (c *profileCmd) doProfileEdit(client *lxd.Client, p string) error { // If stdin isn't a terminal, read text from it if !termios.IsTerminal(int(syscall.Stdin)) { contents, err := ioutil.ReadAll(os.Stdin) if err != nil { return err } newdata := shared.ProfileConfig{} err = yaml.Unmarshal(contents, &newdata) if err != nil { return err } return client.PutProfile(p, newdata) } // Extract the current value profile, err := client.ProfileConfig(p) if err != nil { return err } data, err := yaml.Marshal(&profile) if err != nil { return err } // Spawn the editor content, err := shared.TextEditor("", []byte(c.profileEditHelp()+"\n\n"+string(data))) if err != nil { return err } for { // Parse the text received from the editor newdata := shared.ProfileConfig{} err = yaml.Unmarshal(content, &newdata) if err == nil { err = client.PutProfile(p, newdata) } // Respawn the editor if err != nil { fmt.Fprintf(os.Stderr, i18n.G("Config parsing error: %s")+"\n", err) fmt.Println(i18n.G("Press enter to open the editor again")) _, err := os.Stdin.Read(make([]byte, 1)) if err != nil { return err } content, err = shared.TextEditor("", content) if err != nil { return err } continue } break } return nil } func (c *profileCmd) doProfileDelete(client *lxd.Client, p string) error { err := client.ProfileDelete(p) if err == nil { fmt.Printf(i18n.G("Profile %s deleted")+"\n", p) } return err } func (c *profileCmd) doProfileApply(client *lxd.Client, d string, p string) error { resp, err := client.ApplyProfile(d, p) if err != nil { return err } err = client.WaitForSuccess(resp.Operation) if err == nil { if p == "" { p = i18n.G("(none)") } 
		fmt.Printf(i18n.G("Profile %s applied to %s")+"\n", p, d)
	}

	return err
}

func (c *profileCmd) doProfileShow(client *lxd.Client, p string) error {
	profile, err := client.ProfileConfig(p)
	if err != nil {
		return err
	}

	data, err := yaml.Marshal(&profile)
	if err != nil {
		return err
	}

	fmt.Printf("%s", data)

	return nil
}

func (c *profileCmd) doProfileCopy(config *lxd.Config, client *lxd.Client, p string, args []string) error {
	if len(args) != 1 {
		return errArgs
	}

	remote, newname := config.ParseRemoteAndContainer(args[0])
	if newname == "" {
		newname = p
	}

	dest, err := lxd.NewClient(config, remote)
	if err != nil {
		return err
	}

	return client.ProfileCopy(p, newname, dest)
}

func (c *profileCmd) doProfileDevice(config *lxd.Config, args []string) error {
	// device add b1 eth0 nic type=bridged
	// device list b1
	// device remove b1 eth0
	if len(args) < 3 {
		return errArgs
	}

	cfg := configCmd{}

	switch args[1] {
	case "add":
		return cfg.deviceAdd(config, "profile", args)
	case "remove":
		return cfg.deviceRm(config, "profile", args)
	case "list":
		return cfg.deviceList(config, "profile", args)
	case "show":
		return cfg.deviceShow(config, "profile", args)
	case "get":
		return cfg.deviceGet(config, "profile", args)
	case "set":
		return cfg.deviceSet(config, "profile", args)
	case "unset":
		return cfg.deviceUnset(config, "profile", args)
	default:
		return errArgs
	}
}

func (c *profileCmd) doProfileGet(client *lxd.Client, p string, args []string) error {
	// we shifted @args so it should read "<key>"
	if len(args) != 1 {
		return errArgs
	}

	resp, err := client.GetProfileConfig(p)
	if err != nil {
		return err
	}

	for k, v := range resp {
		if k == args[0] {
			fmt.Printf("%s\n", v)
		}
	}

	return nil
}

func (c *profileCmd) doProfileSet(client *lxd.Client, p string, args []string) error {
	// we shifted @args so it should read "<key> [<value>]"
	if len(args) < 1 {
		return errArgs
	}

	key := args[0]

	var value string
	if len(args) < 2 {
		value = ""
	} else {
		value = args[1]
	}

	if !termios.IsTerminal(int(syscall.Stdin)) && value == "-" {
		buf, err := ioutil.ReadAll(os.Stdin)
		if
err != nil { return fmt.Errorf("Can't read from stdin: %s", err) } value = string(buf[:]) } err := client.SetProfileConfigItem(p, key, value) return err } func (c *profileCmd) doProfileList(config *lxd.Config, args []string) error { var remote string if len(args) > 1 { var name string remote, name = config.ParseRemoteAndContainer(args[1]) if name != "" { return fmt.Errorf(i18n.G("Cannot provide container name to list")) } } else { remote = config.DefaultRemote } client, err := lxd.NewClient(config, remote) if err != nil { return err } profiles, err := client.ListProfiles() if err != nil { return err } fmt.Printf("%s\n", strings.Join(profiles, "\n")) return nil } lxd-2.0.2/lxc/publish.go000066400000000000000000000070131272140510300150610ustar00rootroot00000000000000package main import ( "fmt" "strings" "github.com/lxc/lxd" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" "github.com/lxc/lxd/shared" ) type publishCmd struct { pAliases aliasList // aliasList defined in lxc/image.go makePublic bool Force bool } func (c *publishCmd) showByDefault() bool { return true } func (c *publishCmd) usage() string { return i18n.G( `Publish containers as images. lxc publish [remote:]container [remote:] [--alias=ALIAS]... 
[prop-key=prop-value]...`) } func (c *publishCmd) flags() { gnuflag.BoolVar(&c.makePublic, "public", false, i18n.G("Make the image public")) gnuflag.Var(&c.pAliases, "alias", i18n.G("New alias to define at target")) gnuflag.BoolVar(&c.Force, "force", false, i18n.G("Stop the container if currently running")) gnuflag.BoolVar(&c.Force, "f", false, i18n.G("Stop the container if currently running")) } func (c *publishCmd) run(config *lxd.Config, args []string) error { var cRemote string var cName string iName := "" iRemote := "" properties := map[string]string{} firstprop := 1 // first property is arg[2] if arg[1] is image remote, else arg[1] if len(args) < 1 { return errArgs } cRemote, cName = config.ParseRemoteAndContainer(args[0]) if len(args) >= 2 && !strings.Contains(args[1], "=") { firstprop = 2 iRemote, iName = config.ParseRemoteAndContainer(args[1]) } else { iRemote, iName = config.ParseRemoteAndContainer("") } if cName == "" { return fmt.Errorf(i18n.G("Container name is mandatory")) } if iName != "" { return fmt.Errorf(i18n.G("There is no \"image name\". Did you want an alias?")) } d, err := lxd.NewClient(config, iRemote) if err != nil { return err } s := d if cRemote != iRemote { s, err = lxd.NewClient(config, cRemote) if err != nil { return err } } if !shared.IsSnapshot(cName) { ct, err := s.ContainerInfo(cName) if err != nil { return err } wasRunning := ct.StatusCode != 0 && ct.StatusCode != shared.Stopped wasEphemeral := ct.Ephemeral if wasRunning { if !c.Force { return fmt.Errorf(i18n.G("The container is currently running. 
Use --force to have it stopped and restarted.")) } if ct.Ephemeral { ct.Ephemeral = false err := s.UpdateContainerConfig(cName, ct.Brief()) if err != nil { return err } } resp, err := s.Action(cName, shared.Stop, -1, true, false) if err != nil { return err } op, err := s.WaitFor(resp.Operation) if err != nil { return err } if op.StatusCode == shared.Failure { return fmt.Errorf(i18n.G("Stopping container failed!")) } defer s.Action(cName, shared.Start, -1, true, false) if wasEphemeral { ct.Ephemeral = true err := s.UpdateContainerConfig(cName, ct.Brief()) if err != nil { return err } } } } for i := firstprop; i < len(args); i++ { entry := strings.SplitN(args[i], "=", 2) if len(entry) < 2 { return errArgs } properties[entry[0]] = entry[1] } var fp string // Optimized local publish if cRemote == iRemote { fp, err = d.ImageFromContainer(cName, c.makePublic, c.pAliases, properties) if err != nil { return err } fmt.Printf(i18n.G("Container published with fingerprint: %s")+"\n", fp) return nil } fp, err = s.ImageFromContainer(cName, false, nil, properties) if err != nil { return err } defer s.DeleteImage(fp) err = s.CopyImage(fp, d, false, c.pAliases, c.makePublic, false, nil) if err != nil { return err } fmt.Printf(i18n.G("Container published with fingerprint: %s")+"\n", fp) return nil } lxd-2.0.2/lxc/remote.go000066400000000000000000000245571272140510300147220ustar00rootroot00000000000000package main import ( "crypto/sha256" "crypto/x509" "encoding/pem" "fmt" "net" "net/http" "net/url" "os" "path/filepath" "sort" "strings" "github.com/olekukonko/tablewriter" "golang.org/x/crypto/ssh/terminal" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type remoteCmd struct { httpAddr string acceptCert bool password string public bool protocol string } func (c *remoteCmd) showByDefault() bool { return true } func (c *remoteCmd) usage() string { return i18n.G( `Manage remote LXD servers. 
lxc remote add [--accept-certificate] [--password=PASSWORD] [--public] [--protocol=PROTOCOL] Add the remote at . lxc remote remove Remove the remote . lxc remote list List all remotes. lxc remote rename Rename remote to . lxc remote set-url Update 's url to . lxc remote set-default Set the default remote. lxc remote get-default Print the default remote.`) } func (c *remoteCmd) flags() { gnuflag.BoolVar(&c.acceptCert, "accept-certificate", false, i18n.G("Accept certificate")) gnuflag.StringVar(&c.password, "password", "", i18n.G("Remote admin password")) gnuflag.StringVar(&c.protocol, "protocol", "", i18n.G("Server protocol (lxd or simplestreams)")) gnuflag.BoolVar(&c.public, "public", false, i18n.G("Public image server")) } func getRemoteCertificate(address string) (*x509.Certificate, error) { // Setup a permissive TLS config tlsConfig, err := shared.GetTLSConfig("", "", nil) if err != nil { return nil, err } tlsConfig.InsecureSkipVerify = true tr := &http.Transport{ TLSClientConfig: tlsConfig, Dial: shared.RFC3493Dialer, Proxy: shared.ProxyFromEnvironment, } // Connect client := &http.Client{Transport: tr} resp, err := client.Get(address) if err != nil { return nil, err } // Retrieve the certificate if resp.TLS == nil || len(resp.TLS.PeerCertificates) == 0 { return nil, fmt.Errorf(i18n.G("Unable to read remote TLS certificate")) } return resp.TLS.PeerCertificates[0], nil } func (c *remoteCmd) addServer(config *lxd.Config, server string, addr string, acceptCert bool, password string, public bool, protocol string) error { var rScheme string var rHost string var rPort string // Setup the remotes list if config.Remotes == nil { config.Remotes = make(map[string]lxd.RemoteConfig) } /* Complex remote URL parsing */ remoteURL, err := url.Parse(addr) if err != nil { return err } // Fast track simplestreams if protocol == "simplestreams" { if remoteURL.Scheme != "https" { return fmt.Errorf(i18n.G("Only https URLs are supported for simplestreams")) } config.Remotes[server] = 
lxd.RemoteConfig{Addr: addr, Public: true, Protocol: protocol} return nil } // Fix broken URL parser if !strings.Contains(addr, "://") && remoteURL.Scheme != "" && remoteURL.Scheme != "unix" && remoteURL.Host == "" { remoteURL.Host = remoteURL.Scheme remoteURL.Scheme = "" } if remoteURL.Scheme != "" { if remoteURL.Scheme != "unix" && remoteURL.Scheme != "https" { return fmt.Errorf(i18n.G("Invalid URL scheme \"%s\" in \"%s\""), remoteURL.Scheme, addr) } rScheme = remoteURL.Scheme } else if addr[0] == '/' { rScheme = "unix" } else { if !shared.PathExists(addr) { rScheme = "https" } else { rScheme = "unix" } } if remoteURL.Host != "" { rHost = remoteURL.Host } else { rHost = addr } host, port, err := net.SplitHostPort(rHost) if err == nil { rHost = host rPort = port } else { rPort = shared.DefaultPort } if rScheme == "unix" { if addr[0:5] == "unix:" { if addr[0:7] == "unix://" { if len(addr) > 8 { rHost = addr[8:] } else { rHost = "" } } else { rHost = addr[6:] } } rPort = "" } if strings.Contains(rHost, ":") && !strings.HasPrefix(rHost, "[") { rHost = fmt.Sprintf("[%s]", rHost) } if rPort != "" { addr = rScheme + "://" + rHost + ":" + rPort } else { addr = rScheme + "://" + rHost } /* Actually add the remote */ config.Remotes[server] = lxd.RemoteConfig{Addr: addr, Protocol: protocol} remote := config.ParseRemote(server) d, err := lxd.NewClient(config, remote) if err != nil { return err } if len(addr) > 5 && addr[0:5] == "unix:" { // NewClient succeeded so there was a lxd there (we fingered // it) so just accept it return nil } var certificate *x509.Certificate /* Attempt to connect using the system root CA */ _, err = d.GetServerConfig() if err != nil { // Failed to connect using the system CA, so retrieve the remote certificate certificate, err = getRemoteCertificate(addr) if err != nil { return err } } if certificate != nil { if !acceptCert { digest := sha256.Sum256(certificate.Raw) fmt.Printf(i18n.G("Certificate fingerprint: %x")+"\n", digest) 
fmt.Printf(i18n.G("ok (y/n)?") + " ") line, err := shared.ReadStdin() if err != nil { return err } if len(line) < 1 || line[0] != 'y' && line[0] != 'Y' { return fmt.Errorf(i18n.G("Server certificate NACKed by user")) } } dnam := d.Config.ConfigPath("servercerts") err := os.MkdirAll(dnam, 0750) if err != nil { return fmt.Errorf(i18n.G("Could not create server cert dir")) } certf := fmt.Sprintf("%s/%s.crt", dnam, d.Name) certOut, err := os.Create(certf) if err != nil { return err } pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: certificate.Raw}) certOut.Close() // Setup a new connection, this time with the remote certificate d, err = lxd.NewClient(config, remote) if err != nil { return err } } if d.IsPublic() || public { config.Remotes[server] = lxd.RemoteConfig{Addr: addr, Public: true} if _, err := d.GetServerConfig(); err != nil { return err } return nil } if d.AmTrusted() { // server already has our cert, so we're done return nil } if password == "" { fmt.Printf(i18n.G("Admin password for %s: "), server) pwd, err := terminal.ReadPassword(0) if err != nil { /* We got an error, maybe this isn't a terminal, let's try to * read it as a file */ pwd, err = shared.ReadStdin() if err != nil { return err } } fmt.Println("") password = string(pwd) } err = d.AddMyCertToServer(password) if err != nil { return err } if !d.AmTrusted() { return fmt.Errorf(i18n.G("Server doesn't trust us after adding our cert")) } fmt.Println(i18n.G("Client certificate stored at server: "), server) return nil } func (c *remoteCmd) removeCertificate(config *lxd.Config, remote string) { certf := config.ServerCertPath(remote) shared.Debugf("Trying to remove %s", certf) os.Remove(certf) } func (c *remoteCmd) run(config *lxd.Config, args []string) error { if len(args) < 1 { return errArgs } switch args[0] { case "add": if len(args) < 3 { return errArgs } if rc, ok := config.Remotes[args[1]]; ok { return fmt.Errorf(i18n.G("remote %s exists as <%s>"), args[1], rc.Addr) } err := 
c.addServer(config, args[1], args[2], c.acceptCert, c.password, c.public, c.protocol) if err != nil { delete(config.Remotes, args[1]) c.removeCertificate(config, args[1]) return err } case "remove": if len(args) != 2 { return errArgs } rc, ok := config.Remotes[args[1]] if !ok { return fmt.Errorf(i18n.G("remote %s doesn't exist"), args[1]) } if rc.Static { return fmt.Errorf(i18n.G("remote %s is static and cannot be modified"), args[1]) } if config.DefaultRemote == args[1] { return fmt.Errorf(i18n.G("can't remove the default remote")) } delete(config.Remotes, args[1]) c.removeCertificate(config, args[1]) case "list": data := [][]string{} for name, rc := range config.Remotes { strPublic := i18n.G("NO") if rc.Public { strPublic = i18n.G("YES") } strStatic := i18n.G("NO") if rc.Static { strStatic = i18n.G("YES") } if rc.Protocol == "" { rc.Protocol = "lxd" } strName := name if name == config.DefaultRemote { strName = fmt.Sprintf("%s (%s)", name, i18n.G("default")) } data = append(data, []string{strName, rc.Addr, rc.Protocol, strPublic, strStatic}) } table := tablewriter.NewWriter(os.Stdout) table.SetAutoWrapText(false) table.SetAlignment(tablewriter.ALIGN_LEFT) table.SetRowLine(true) table.SetHeader([]string{ i18n.G("NAME"), i18n.G("URL"), i18n.G("PROTOCOL"), i18n.G("PUBLIC"), i18n.G("STATIC")}) sort.Sort(byName(data)) table.AppendBulk(data) table.Render() return nil case "rename": if len(args) != 3 { return errArgs } rc, ok := config.Remotes[args[1]] if !ok { return fmt.Errorf(i18n.G("remote %s doesn't exist"), args[1]) } if rc.Static { return fmt.Errorf(i18n.G("remote %s is static and cannot be modified"), args[1]) } if _, ok := config.Remotes[args[2]]; ok { return fmt.Errorf(i18n.G("remote %s already exists"), args[2]) } // Rename the certificate file oldPath := filepath.Join(config.ConfigPath("servercerts"), fmt.Sprintf("%s.crt", args[1])) newPath := filepath.Join(config.ConfigPath("servercerts"), fmt.Sprintf("%s.crt", args[2])) if shared.PathExists(oldPath) { err 
:= os.Rename(oldPath, newPath) if err != nil { return err } } config.Remotes[args[2]] = rc delete(config.Remotes, args[1]) if config.DefaultRemote == args[1] { config.DefaultRemote = args[2] } case "set-url": if len(args) != 3 { return errArgs } rc, ok := config.Remotes[args[1]] if !ok { return fmt.Errorf(i18n.G("remote %s doesn't exist"), args[1]) } if rc.Static { return fmt.Errorf(i18n.G("remote %s is static and cannot be modified"), args[1]) } config.Remotes[args[1]] = lxd.RemoteConfig{Addr: args[2]} case "set-default": if len(args) != 2 { return errArgs } _, ok := config.Remotes[args[1]] if !ok { return fmt.Errorf(i18n.G("remote %s doesn't exist"), args[1]) } config.DefaultRemote = args[1] case "get-default": if len(args) != 1 { return errArgs } fmt.Println(config.DefaultRemote) return nil default: return errArgs } return lxd.SaveConfig(config, configPath) } lxd-2.0.2/lxc/restore.go000066400000000000000000000024431272140510300151000ustar00rootroot00000000000000package main import ( "fmt" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type restoreCmd struct { stateful bool } func (c *restoreCmd) showByDefault() bool { return true } func (c *restoreCmd) usage() string { return i18n.G( `Set the current state of a resource back to a snapshot. lxc restore [remote:] [--stateful] Restores a container from a snapshot (optionally with running state, see snapshot help for details). 
For example: lxc snapshot u1 snap0 # create the snapshot lxc restore u1 snap0 # restore the snapshot`) } func (c *restoreCmd) flags() { gnuflag.BoolVar(&c.stateful, "stateful", false, i18n.G("Whether or not to restore the container's running state from snapshot (if available)")) } func (c *restoreCmd) run(config *lxd.Config, args []string) error { if len(args) < 2 { return errArgs } var snapname = args[1] remote, name := config.ParseRemoteAndContainer(args[0]) d, err := lxd.NewClient(config, remote) if err != nil { return err } if !shared.IsSnapshot(snapname) { snapname = fmt.Sprintf("%s/%s", name, snapname) } resp, err := d.RestoreSnapshot(name, snapname, c.stateful) if err != nil { return err } return d.WaitForSuccess(resp.Operation) } lxd-2.0.2/lxc/snapshot.go000066400000000000000000000031301272140510300152460ustar00rootroot00000000000000package main import ( "fmt" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" "github.com/lxc/lxd/shared/i18n" ) type snapshotCmd struct { stateful bool } func (c *snapshotCmd) showByDefault() bool { return true } func (c *snapshotCmd) usage() string { return i18n.G( `Create a read-only snapshot of a container. lxc snapshot [remote:] [--stateful] Creates a snapshot of the container (optionally with the container's memory state). When --stateful is used, LXD attempts to checkpoint the container's running state, including process memory state, TCP connections, etc. so that it can be restored (via lxc restore) at a later time (although some things, e.g. TCP connections after the TCP timeout window has expired, may not be restored successfully). 
Example: lxc snapshot u1 snap0`) } func (c *snapshotCmd) flags() { gnuflag.BoolVar(&c.stateful, "stateful", false, i18n.G("Whether or not to snapshot the container's running state")) } func (c *snapshotCmd) run(config *lxd.Config, args []string) error { if len(args) < 1 { return errArgs } var snapname string if len(args) < 2 { snapname = "" } else { snapname = args[1] } remote, name := config.ParseRemoteAndContainer(args[0]) d, err := lxd.NewClient(config, remote) if err != nil { return err } // we don't allow '/' in snapshot names if shared.IsSnapshot(snapname) { return fmt.Errorf(i18n.G("'/' not allowed in snapshot name")) } resp, err := d.Snapshot(name, snapname, c.stateful) if err != nil { return err } return d.WaitForSuccess(resp.Operation) } lxd-2.0.2/lxc/version.go000066400000000000000000000007731272140510300151060ustar00rootroot00000000000000package main import ( "fmt" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/i18n" ) type versionCmd struct{} func (c *versionCmd) showByDefault() bool { return true } func (c *versionCmd) usage() string { return i18n.G( `Prints the version number of this client tool. 
lxc version`)
}

func (c *versionCmd) flags() {
}

func (c *versionCmd) run(_ *lxd.Config, args []string) error {
    if len(args) > 0 {
        return errArgs
    }

    fmt.Println(shared.Version)
    return nil
}

# lxd-2.0.2/lxd-bridge/lxd-bridge
#!/bin/sh
config="/etc/default/lxd-bridge"
varrun="/run/lxd-bridge/"
varlib="/var/lib/lxd-bridge/"

# lxdbr0 defaults to only setting up the standard IPv6 link-local network
# to enable routable IPv4 and/or IPv6, please edit /etc/default/lxd
# The values below are defaults
USE_LXD_BRIDGE="true"
LXD_BRIDGE="lxdbr0"
LXD_CONFILE=""
LXD_DOMAIN=""

# IPv4
LXD_IPV4_ADDR=""
LXD_IPV4_NETMASK=""
LXD_IPV4_NETWORK=""
LXD_IPV4_DHCP_RANGE=""
LXD_IPV4_DHCP_MAX=""
LXD_IPV4_NAT="false"

# IPv6
LXD_IPV6_ADDR=""
LXD_IPV6_MASK=""
LXD_IPV6_NETWORK=""
LXD_IPV6_NAT="false"
LXD_IPV6_PROXY="true"

[ ! -f "${config}" ] || . "${config}"

use_iptables_lock="-w"
iptables -w -L -n > /dev/null 2>&1 || use_iptables_lock=""

HAS_IPV6=false
[ -e "/proc/sys/net/ipv6/conf/default/disable_ipv6" ] && \
    [ "$(cat /proc/sys/net/ipv6/conf/default/disable_ipv6)" = "0" ] && HAS_IPV6=true

_netmask2cidr () {
    # Assumes there's no "255." after a non-255 byte in the mask
    local x=${1##*255.}
    set -- "0^^^128^192^224^240^248^252^254^" "$(( (${#1} - ${#x})*2 ))" "${x%%.*}"
    x=${1%%${3}*}
    echo $(( ${2} + (${#x}/4) ))
}

ifdown() {
    ip addr flush dev "${1}"
    ip link set dev "${1}" down
}

ifup() {
    [ "${HAS_IPV6}" = "true" ] && [ "${LXD_IPV6_PROXY}" = "true" ] && ip addr add fe80::1/64 dev "${1}"
    if [ -n "${LXD_IPV4_NETMASK}" ] && [ -n "${LXD_IPV4_ADDR}" ]; then
        MASK=$(_netmask2cidr ${LXD_IPV4_NETMASK})
        CIDR_ADDR="${LXD_IPV4_ADDR}/${MASK}"
        ip addr add "${CIDR_ADDR}" dev "${1}"
    fi
    ip link set dev "${1}" up
}

start() {
    [ "x${USE_LXD_BRIDGE}" = "xtrue" ] || { exit 0; }
    [ ! -f "${varrun}/network_up" ] || { echo "lxd-bridge is already running"; exit 1; }

    if [ -d /sys/class/net/${LXD_BRIDGE} ]; then
        stop force 2>/dev/null || true
    fi

    FAILED=1

    cleanup() {
        set +e
        if [ "${FAILED}" = "1" ]; then
            echo "Failed to setup lxd-bridge." >&2
            stop force
        fi
    }

    trap cleanup EXIT HUP INT TERM
    set -e

    # set up the lxd network
    [ ! -d "/sys/class/net/${LXD_BRIDGE}" ] && ip link add dev "${LXD_BRIDGE}" type bridge

    if [ "${HAS_IPV6}" = "true" ]; then
        echo 0 > "/proc/sys/net/ipv6/conf/${LXD_BRIDGE}/autoconf" || true
        echo 0 > "/proc/sys/net/ipv6/conf/${LXD_BRIDGE}/accept_dad" || true
    fi

    # if we are run from systemd on a system with selinux enabled,
    # the mkdir will create /run/lxd as init_var_run_t which dnsmasq
    # can't write its pid into, so we restorecon it (to var_run_t)
    if [ ! -d "${varrun}" ]; then
        mkdir -p "${varrun}"
        if which restorecon >/dev/null 2>&1; then
            restorecon "${varrun}"
        fi
    fi

    if [ ! -d "${varlib}" ]; then
        mkdir -p "${varlib}"
        if which restorecon >/dev/null 2>&1; then
            restorecon "${varlib}"
        fi
    fi

    ifup "${LXD_BRIDGE}" "${LXD_IPV4_ADDR}" "${LXD_IPV4_NETMASK}"

    LXD_IPV4_ARG=""
    if [ -n "${LXD_IPV4_ADDR}" ] && [ -n "${LXD_IPV4_NETMASK}" ] && [ -n "${LXD_IPV4_NETWORK}" ]; then
        echo 1 > /proc/sys/net/ipv4/ip_forward
        if [ "${LXD_IPV4_NAT}" = "true" ]; then
            iptables "${use_iptables_lock}" -t nat -A POSTROUTING -s "${LXD_IPV4_NETWORK}" ! -d "${LXD_IPV4_NETWORK}" -j MASQUERADE
        fi
        LXD_IPV4_ARG="--listen-address ${LXD_IPV4_ADDR} --dhcp-range ${LXD_IPV4_DHCP_RANGE} --dhcp-lease-max=${LXD_IPV4_DHCP_MAX}"
    fi

    LXD_IPV6_ARG=""
    if [ "${HAS_IPV6}" = "true" ] && [ -n "${LXD_IPV6_ADDR}" ] && [ -n "${LXD_IPV6_MASK}" ] && [ -n "${LXD_IPV6_NETWORK}" ]; then
        # IPv6 sysctls don't respect the "all" path...
        for interface in /proc/sys/net/ipv6/conf/*; do
            echo 2 > "${interface}/accept_ra"
        done

        for interface in /proc/sys/net/ipv6/conf/*; do
            echo 1 > "${interface}/forwarding"
        done

        ip -6 addr add dev "${LXD_BRIDGE}" "${LXD_IPV6_ADDR}/${LXD_IPV6_MASK}"
        if [ "${LXD_IPV6_NAT}" = "true" ]; then
            ip6tables "${use_iptables_lock}" -t nat -A POSTROUTING -s "${LXD_IPV6_NETWORK}" ! -d "${LXD_IPV6_NETWORK}" -j MASQUERADE
        fi
        LXD_IPV6_ARG="--dhcp-range=${LXD_IPV6_ADDR},ra-only --listen-address ${LXD_IPV6_ADDR}"
    fi

    iptables "${use_iptables_lock}" -I INPUT -i "${LXD_BRIDGE}" -p udp --dport 67 -j ACCEPT
    iptables "${use_iptables_lock}" -I INPUT -i "${LXD_BRIDGE}" -p tcp --dport 67 -j ACCEPT
    iptables "${use_iptables_lock}" -I INPUT -i "${LXD_BRIDGE}" -p udp --dport 53 -j ACCEPT
    iptables "${use_iptables_lock}" -I INPUT -i "${LXD_BRIDGE}" -p tcp --dport 53 -j ACCEPT
    iptables "${use_iptables_lock}" -I FORWARD -i "${LXD_BRIDGE}" -j ACCEPT
    iptables "${use_iptables_lock}" -I FORWARD -o "${LXD_BRIDGE}" -j ACCEPT
    iptables "${use_iptables_lock}" -t mangle -A POSTROUTING -o "${LXD_BRIDGE}" -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill

    LXD_DOMAIN_ARG=""
    if [ -n "${LXD_DOMAIN}" ]; then
        LXD_DOMAIN_ARG="-s ${LXD_DOMAIN} -S /${LXD_DOMAIN}/"
    fi

    LXD_CONFILE_ARG=""
    if [ -n "${LXD_CONFILE}" ]; then
        LXD_CONFILE_ARG="--conf-file=${LXD_CONFILE}"
    fi

    # https://lists.linuxcontainers.org/pipermail/lxc-devel/2014-October/010561.html
    for DNSMASQ_USER in lxd dnsmasq nobody
    do
        if getent passwd "${DNSMASQ_USER}" >/dev/null; then
            break
        fi
    done

    if [ -n "${LXD_IPV4_ADDR}" ] || [ -n "${LXD_IPV6_ADDR}" ]; then
        # shellcheck disable=SC2086
        dnsmasq ${LXD_CONFILE_ARG} ${LXD_DOMAIN_ARG} -u "${DNSMASQ_USER}" --strict-order --bind-interfaces --pid-file="${varrun}/dnsmasq.pid" --dhcp-no-override --except-interface=lo --interface="${LXD_BRIDGE}" --dhcp-leasefile="${varlib}/dnsmasq.${LXD_BRIDGE}.leases" --dhcp-authoritative ${LXD_IPV4_ARG} ${LXD_IPV6_ARG} || cleanup
    fi

    if [ "${HAS_IPV6}" = "true" ] && [ "${LXD_IPV6_PROXY}" = "true" ]; then
        PATH="${PATH}:$(dirname "${0}")" lxd-bridge-proxy --addr="[fe80::1%${LXD_BRIDGE}]:13128" &
        PID=$!
        echo "${PID}" > "${varrun}/proxy.pid"
    fi

    touch "${varrun}/network_up"
    FAILED=0
}

stop() {
    [ -f "${varrun}/network_up" ] || [ "${1}" = "force" ] || { echo "lxd-bridge isn't running"; exit 1; }

    if [ -d /sys/class/net/${LXD_BRIDGE} ]; then
        ifdown ${LXD_BRIDGE}
        iptables ${use_iptables_lock} -D INPUT -i ${LXD_BRIDGE} -p udp --dport 67 -j ACCEPT
        iptables ${use_iptables_lock} -D INPUT -i ${LXD_BRIDGE} -p tcp --dport 67 -j ACCEPT
        iptables ${use_iptables_lock} -D INPUT -i ${LXD_BRIDGE} -p udp --dport 53 -j ACCEPT
        iptables ${use_iptables_lock} -D INPUT -i ${LXD_BRIDGE} -p tcp --dport 53 -j ACCEPT
        iptables ${use_iptables_lock} -D FORWARD -i ${LXD_BRIDGE} -j ACCEPT
        iptables ${use_iptables_lock} -D FORWARD -o ${LXD_BRIDGE} -j ACCEPT
        iptables ${use_iptables_lock} -t mangle -D POSTROUTING -o ${LXD_BRIDGE} -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill

        if [ -n "${LXD_IPV4_NETWORK}" ] && [ "${LXD_IPV4_NAT}" = "true" ]; then
            iptables ${use_iptables_lock} -t nat -D POSTROUTING -s ${LXD_IPV4_NETWORK} ! -d ${LXD_IPV4_NETWORK} -j MASQUERADE
        fi

        if [ "${HAS_IPV6}" = "true" ] && [ -n "${LXD_IPV6_NETWORK}" ] && [ "${LXD_IPV6_NAT}" = "true" ]; then
            ip6tables ${use_iptables_lock} -t nat -D POSTROUTING -s ${LXD_IPV6_NETWORK} ! -d ${LXD_IPV6_NETWORK} -j MASQUERADE
        fi

        if [ -e "${varrun}/dnsmasq.pid" ]; then
            pid=$(cat "${varrun}/dnsmasq.pid" 2>/dev/null) && kill -9 "${pid}"
            rm -f "${varrun}/dnsmasq.pid"
        fi

        if [ -e "${varrun}/proxy.pid" ]; then
            pid=$(cat "${varrun}/proxy.pid" 2>/dev/null) && kill -9 "${pid}"
            rm -f "${varrun}/proxy.pid"
        fi

        # if ${LXD_BRIDGE} has attached interfaces, don't destroy the bridge
        ls /sys/class/net/${LXD_BRIDGE}/brif/* > /dev/null 2>&1 || ip link delete "${LXD_BRIDGE}"
    fi

    rm -f "${varrun}/network_up"
}

# See how we were called.
case "${1}" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart|reload|force-reload)
        ${0} stop
        ${0} start
        ;;
    *)
        echo "Usage: ${0} {start|stop|restart|reload|force-reload}"
        exit 2
esac

exit $?

// lxd-2.0.2/lxd-bridge/lxd-bridge-proxy/main.go
package main

import (
    "flag"
    "fmt"
    "log"
    "net/http"
    "net/http/httputil"
)

func NewProxy() *httputil.ReverseProxy {
    director := func(req *http.Request) {
        if req.Method == "CONNECT" {
            fmt.Printf("CONNECT: %s\n", req.Host)
        }
    }

    return &httputil.ReverseProxy{Director: director}
}

func main() {
    addr := flag.String("addr", "[fe80::1%lxdbr0]:13128", "proxy listen address")
    flag.Parse()

    log.Fatal(http.ListenAndServe(*addr, NewProxy()))
}

// lxd-2.0.2/lxd/api_1.0.go
package main

import (
    "encoding/pem"
    "fmt"
    "net/http"
    "os"
    "reflect"
    "syscall"

    "gopkg.in/lxc/go-lxc.v2"

    "github.com/lxc/lxd/shared"
)

var api10 = []Command{
    containersCmd,
    containerCmd,
    containerStateCmd,
    containerFileCmd,
    containerLogsCmd,
    containerLogCmd,
    containerSnapshotsCmd,
    containerSnapshotCmd,
    containerExecCmd,
    aliasCmd,
    aliasesCmd,
    eventsCmd,
    imageCmd,
    imagesCmd,
    imagesExportCmd,
    imagesSecretCmd,
    operationsCmd,
    operationCmd,
    operationWait,
    operationWebsocket,
    networksCmd,
    networkCmd,
    api10Cmd,
    certificatesCmd,
    certificateFingerprintCmd,
    profilesCmd,
    profileCmd,
}

func api10Get(d *Daemon, r *http.Request) Response {
    body := shared.Jmap{
        "api_extensions": []string{},
        "api_status":     "stable",
        "api_version":    shared.APIVersion,
    }

    if d.isTrustedClient(r) {
        body["auth"] = "trusted"

        /*
         * Based on: https://groups.google.com/forum/#!topic/golang-nuts/Jel8Bb-YwX8
         * there is really no better way to do this, which is
         * unfortunate. Also, we ditch the more accepted CharsToString
         * version in that thread, since it doesn't seem as portable,
         * viz. github issue #206.
         */
        uname := syscall.Utsname{}
        if err := syscall.Uname(&uname); err != nil {
            return InternalError(err)
        }

        kernel := ""
        for _, c := range uname.Sysname {
            if c == 0 {
                break
            }
            kernel += string(byte(c))
        }

        kernelVersion := ""
        for _, c := range uname.Release {
            if c == 0 {
                break
            }
            kernelVersion += string(byte(c))
        }

        kernelArchitecture := ""
        for _, c := range uname.Machine {
            if c == 0 {
                break
            }
            kernelArchitecture += string(byte(c))
        }

        addresses, err := d.ListenAddresses()
        if err != nil {
            return InternalError(err)
        }

        var certificate string
        if len(d.tlsConfig.Certificates) != 0 {
            certificate = string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: d.tlsConfig.Certificates[0].Certificate[0]}))
        }

        architectures := []string{}
        for _, architecture := range d.architectures {
            architectureName, err := shared.ArchitectureName(architecture)
            if err != nil {
                return InternalError(err)
            }
            architectures = append(architectures, architectureName)
        }

        env := shared.Jmap{
            "addresses":           addresses,
            "architectures":       architectures,
            "certificate":         certificate,
            "driver":              "lxc",
            "driver_version":      lxc.Version(),
            "kernel":              kernel,
            "kernel_architecture": kernelArchitecture,
            "kernel_version":      kernelVersion,
            "storage":             d.Storage.GetStorageTypeName(),
            "storage_version":     d.Storage.GetStorageTypeVersion(),
            "server":              "lxd",
            "server_pid":          os.Getpid(),
            "server_version":      shared.Version}

        body["environment"] = env
        body["public"] = false
        body["config"] = daemonConfigRender()
    } else {
        body["auth"] = "untrusted"
        body["public"] = false
    }

    return SyncResponse(true, body)
}

type apiPut struct {
    Config shared.Jmap `json:"config"`
}

func api10Put(d *Daemon, r *http.Request) Response {
    oldConfig, err := dbConfigValuesGet(d.db)
    if err != nil {
        return InternalError(err)
    }

    req := apiPut{}
    if err := shared.ReadToJSON(r.Body, &req); err != nil {
        return BadRequest(err)
    }

    // Deal with special keys
    for k, v := range req.Config {
        config := daemonConfig[k]
        if config != nil && config.hiddenValue && v == true {
            req.Config[k] = oldConfig[k]
        }
    }

    // Diff the configs
    changedConfig := map[string]interface{}{}
    for key, value := range oldConfig {
        if req.Config[key] != value {
            changedConfig[key] = req.Config[key]
        }
    }

    for key, value := range req.Config {
        if oldConfig[key] != value {
            changedConfig[key] = req.Config[key]
        }
    }

    for key, valueRaw := range changedConfig {
        if valueRaw == nil {
            valueRaw = ""
        }

        s := reflect.ValueOf(valueRaw)
        if !s.IsValid() || s.Kind() != reflect.String {
            return BadRequest(fmt.Errorf("Invalid value type for '%s'", key))
        }

        value := valueRaw.(string)

        confKey, ok := daemonConfig[key]
        if !ok {
            return BadRequest(fmt.Errorf("Bad server config key: '%s'", key))
        }

        err := confKey.Set(d, value)
        if err != nil {
            return BadRequest(err)
        }
    }

    return EmptySyncResponse
}

var api10Cmd = Command{name: "", untrustedGet: true, get: api10Get, put: api10Put}

// lxd-2.0.2/lxd/api_internal.go
package main

import (
    "fmt"
    "net/http"
    "strconv"

    "github.com/gorilla/mux"
)

var apiInternal = []Command{
    internalReadyCmd,
    internalShutdownCmd,
    internalContainerOnStartCmd,
    internalContainerOnStopCmd,
}

func internalReady(d *Daemon, r *http.Request) Response {
    if !d.SetupMode {
        return InternalError(fmt.Errorf("The server isn't currently in setup mode"))
    }

    err := d.Ready()
    if err != nil {
        return InternalError(err)
    }

    d.SetupMode = false

    return EmptySyncResponse
}

func internalWaitReady(d *Daemon, r *http.Request) Response {
    <-d.readyChan

    return EmptySyncResponse
}

func internalShutdown(d *Daemon, r *http.Request) Response {
    d.shutdownChan <- true

    return EmptySyncResponse
}

func internalContainerOnStart(d *Daemon, r *http.Request) Response {
    id, err := strconv.Atoi(mux.Vars(r)["id"])
    if err != nil {
        return SmartError(err)
    }

    c, err := containerLoadById(d, id)
    if err != nil {
        return SmartError(err)
    }

    err =
c.OnStart()
    if err != nil {
        return SmartError(err)
    }

    return EmptySyncResponse
}

func internalContainerOnStop(d *Daemon, r *http.Request) Response {
    id, err := strconv.Atoi(mux.Vars(r)["id"])
    if err != nil {
        return SmartError(err)
    }

    target := r.FormValue("target")
    if target == "" {
        target = "unknown"
    }

    c, err := containerLoadById(d, id)
    if err != nil {
        return SmartError(err)
    }

    err = c.OnStop(target)
    if err != nil {
        return SmartError(err)
    }

    return EmptySyncResponse
}

var internalShutdownCmd = Command{name: "shutdown", put: internalShutdown}
var internalReadyCmd = Command{name: "ready", put: internalReady, get: internalWaitReady}
var internalContainerOnStartCmd = Command{name: "containers/{id}/onstart", get: internalContainerOnStart}
var internalContainerOnStopCmd = Command{name: "containers/{id}/onstop", get: internalContainerOnStop}

// lxd-2.0.2/lxd/apparmor.go
package main

import (
    "crypto/sha256"
    "fmt"
    "io"
    "io/ioutil"
    "os"
    "os/exec"
    "path"
    "strings"

    "github.com/lxc/lxd/shared"

    log "gopkg.in/inconshreveable/log15.v2"
)

const (
    APPARMOR_CMD_LOAD   = "r"
    APPARMOR_CMD_UNLOAD = "R"
    APPARMOR_CMD_PARSE  = "Q"
)

var aaPath = shared.VarPath("security", "apparmor")

const NESTING_AA_PROFILE = `
 pivot_root,
 mount /var/lib/lxd/shmounts/ -> /var/lib/lxd/shmounts/,
 mount none -> /var/lib/lxd/shmounts/,
 mount fstype=proc -> /usr/lib/*/lxc/**,
 mount fstype=sysfs -> /usr/lib/*/lxc/**,
 mount options=(rw,bind),
 mount options=(rw,rbind),
 deny /dev/.lxd/proc/** rw,
 deny /dev/.lxd/sys/** rw,
 mount options=(rw,make-rshared),

 # there doesn't seem to be a way to ask for:
 # mount options=(ro,nosuid,nodev,noexec,remount,bind),
 # as we always get mount to $cdir/proc/sys with those flags denied
 # So allow all mounts until that is straightened out:
 mount,
 mount options=bind /var/lib/lxd/shmounts/** -> /var/lib/lxd/**,
 # lxc-container-default-with-nesting also inherited these
 # from start-container, and seems to need them.
 ptrace,
 signal,
`

const DEFAULT_AA_PROFILE = `
#include <tunables/global>
profile "%s" flags=(attach_disconnected,mediate_deleted) {
    #include <abstractions/lxc/container-base>

    # Special exception for cgroup namespaces
    %s

    # user input raw.apparmor below here
    %s

    # nesting support goes here if needed
    %s
    change_profile -> "%s",
}`

func AAProfileFull(c container) string {
    lxddir := shared.VarPath("")
    if len(c.Name())+len(lxddir)+7 >= 253 {
        hash := sha256.New()
        io.WriteString(hash, lxddir)
        lxddir = fmt.Sprintf("%x", hash.Sum(nil))
    }

    return fmt.Sprintf("lxd-%s_<%s>", c.Name(), lxddir)
}

func AAProfileShort(c container) string {
    return fmt.Sprintf("lxd-%s", c.Name())
}

func AAProfileCgns() string {
    if shared.PathExists("/proc/self/ns/cgroup") {
        return " mount fstype=cgroup -> /sys/fs/cgroup/**,"
    }
    return ""
}

// getAAProfileContent generates the apparmor profile template from the given
// container. This includes the stock lxc includes as well as stuff from
// raw.apparmor.
func getAAProfileContent(c container) string {
    rawApparmor, ok := c.ExpandedConfig()["raw.apparmor"]
    if !ok {
        rawApparmor = ""
    }

    nesting := ""
    if c.IsNesting() {
        nesting = NESTING_AA_PROFILE
    }

    return fmt.Sprintf(DEFAULT_AA_PROFILE, AAProfileFull(c), AAProfileCgns(), rawApparmor, nesting, AAProfileFull(c))
}

func runApparmor(command string, c container) error {
    if !aaAvailable {
        return nil
    }

    cmd := exec.Command("apparmor_parser", []string{
        fmt.Sprintf("-%sWL", command),
        path.Join(aaPath, "cache"),
        path.Join(aaPath, "profiles", AAProfileShort(c)),
    }...)

    output, err := cmd.CombinedOutput()
    if err != nil {
        shared.Log.Error("Running apparmor",
            log.Ctx{"action": command, "output": string(output), "err": err})
    }

    return err
}

// Ensure that the container's policy is loaded into the kernel so the
// container can boot.
func AALoadProfile(c container) error {
    if !aaAdmin {
        return nil
    }

    /* In order to avoid forcing a profile parse (potentially slow) on
     * every container start, let's use apparmor's binary policy cache,
     * which checks mtime of the files to figure out if the policy needs to
     * be regenerated.
     *
     * Since it uses mtimes, we shouldn't just always write out our local
     * apparmor template; instead we should check to see whether the
     * template is the same as ours. If it isn't we should write our
     * version out so that the new changes are reflected and we definitely
     * force a recompile.
     */
    profile := path.Join(aaPath, "profiles", AAProfileShort(c))
    content, err := ioutil.ReadFile(profile)
    if err != nil && !os.IsNotExist(err) {
        return err
    }

    updated := getAAProfileContent(c)

    if string(content) != string(updated) {
        if err := os.MkdirAll(path.Join(aaPath, "cache"), 0700); err != nil {
            return err
        }

        if err := os.MkdirAll(path.Join(aaPath, "profiles"), 0700); err != nil {
            return err
        }

        if err := ioutil.WriteFile(profile, []byte(updated), 0600); err != nil {
            return err
        }
    }

    return runApparmor(APPARMOR_CMD_LOAD, c)
}

// Ensure that the container's policy is unloaded to free kernel memory. This
// does not delete the policy from disk or cache.
func AAUnloadProfile(c container) error {
    if !aaAdmin {
        return nil
    }

    return runApparmor(APPARMOR_CMD_UNLOAD, c)
}

// Parse the profile without loading it into the kernel.
func AAParseProfile(c container) error {
    if !aaAvailable {
        return nil
    }

    return runApparmor(APPARMOR_CMD_PARSE, c)
}

// Delete the policy from cache/disk.
func AADeleteProfile(c container) {
    if !aaAdmin {
        return
    }

    /* It's ok if these deletes fail: if the container was never started,
     * we'll have never written a profile or cached it.
     */
    os.Remove(path.Join(aaPath, "cache", AAProfileShort(c)))
    os.Remove(path.Join(aaPath, "profiles", AAProfileShort(c)))
}

// What's the current apparmor profile
func aaProfile() string {
    contents, err := ioutil.ReadFile("/proc/self/attr/current")
    if err == nil {
        return strings.TrimSpace(string(contents))
    }
    return ""
}

// lxd-2.0.2/lxd/certificates.go
package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/base64"
    "encoding/pem"
    "fmt"
    "net"
    "net/http"

    "github.com/gorilla/mux"

    "github.com/lxc/lxd/shared"
)

func certGenerateFingerprint(cert *x509.Certificate) string {
    return fmt.Sprintf("%x", sha256.Sum256(cert.Raw))
}

func certificatesGet(d *Daemon, r *http.Request) Response {
    recursion := d.isRecursionRequest(r)

    if recursion {
        certResponses := []shared.CertInfo{}

        baseCerts, err := dbCertsGet(d.db)
        if err != nil {
            return SmartError(err)
        }

        for _, baseCert := range baseCerts {
            resp := shared.CertInfo{}
            resp.Fingerprint = baseCert.Fingerprint
            resp.Certificate = baseCert.Certificate
            if baseCert.Type == 1 {
                resp.Type = "client"
            } else {
                resp.Type = "unknown"
            }
            certResponses = append(certResponses, resp)
        }
        return SyncResponse(true, certResponses)
    }

    body := []string{}
    for _, cert := range d.clientCerts {
        fingerprint := fmt.Sprintf("/%s/certificates/%s", shared.APIVersion, certGenerateFingerprint(&cert))
        body = append(body, fingerprint)
    }

    return SyncResponse(true, body)
}

type certificatesPostBody struct {
    Type        string `json:"type"`
    Certificate string `json:"certificate"`
    Name        string `json:"name"`
    Password    string `json:"password"`
}

func readSavedClientCAList(d *Daemon) {
    d.clientCerts = []x509.Certificate{}

    dbCerts, err := dbCertsGet(d.db)
    if err != nil {
        shared.Logf("Error reading certificates from database: %s", err)
        return
    }

    for _, dbCert := range dbCerts {
        certBlock, _ := pem.Decode([]byte(dbCert.Certificate))
        cert, err := x509.ParseCertificate(certBlock.Bytes)
        if err != nil {
            shared.Logf("Error reading certificate for %s: %s", dbCert.Name, err)
            continue
        }
        d.clientCerts = append(d.clientCerts, *cert)
    }
}

func saveCert(d *Daemon, host string, cert *x509.Certificate) error {
    baseCert := new(dbCertInfo)
    baseCert.Fingerprint = certGenerateFingerprint(cert)
    baseCert.Type = 1
    baseCert.Name = host
    baseCert.Certificate = string(
        pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: cert.Raw}),
    )

    return dbCertSave(d.db, baseCert)
}

func certificatesPost(d *Daemon, r *http.Request) Response {
    req := certificatesPostBody{}
    if err := shared.ReadToJSON(r.Body, &req); err != nil {
        return BadRequest(err)
    }

    if req.Type != "client" {
        return BadRequest(fmt.Errorf("Unknown request type %s", req.Type))
    }

    var cert *x509.Certificate
    var name string
    if req.Certificate != "" {
        data, err := base64.StdEncoding.DecodeString(req.Certificate)
        if err != nil {
            return BadRequest(err)
        }

        cert, err = x509.ParseCertificate(data)
        if err != nil {
            return BadRequest(err)
        }
        name = req.Name
    } else if r.TLS != nil {
        if len(r.TLS.PeerCertificates) < 1 {
            return BadRequest(fmt.Errorf("No client certificate provided"))
        }
        cert = r.TLS.PeerCertificates[len(r.TLS.PeerCertificates)-1]

        remoteHost, _, err := net.SplitHostPort(r.RemoteAddr)
        if err != nil {
            return InternalError(err)
        }

        name = remoteHost
    } else {
        return BadRequest(fmt.Errorf("Can't use TLS data on non-TLS link"))
    }

    fingerprint := certGenerateFingerprint(cert)
    for _, existingCert := range d.clientCerts {
        if fingerprint == certGenerateFingerprint(&existingCert) {
            return EmptySyncResponse
        }
    }

    if !d.isTrustedClient(r) && d.PasswordCheck(req.Password) != nil {
        return Forbidden
    }

    err := saveCert(d, name, cert)
    if err != nil {
        return SmartError(err)
    }

    d.clientCerts = append(d.clientCerts, *cert)

    return EmptySyncResponse
}

var certificatesCmd = Command{
    "certificates",
    false,
    true,
    certificatesGet,
    nil,
    certificatesPost,
    nil,
}

func certificateFingerprintGet(d *Daemon, r *http.Request) Response {
    fingerprint := mux.Vars(r)["fingerprint"]

    cert, err := doCertificateGet(d, fingerprint)
    if err != nil {
        return SmartError(err)
    }

    return SyncResponse(true, cert)
}

func doCertificateGet(d *Daemon, fingerprint string) (shared.CertInfo, error) {
    resp := shared.CertInfo{}

    dbCertInfo, err := dbCertGet(d.db, fingerprint)
    if err != nil {
        return resp, err
    }

    resp.Fingerprint = dbCertInfo.Fingerprint
    resp.Certificate = dbCertInfo.Certificate
    if dbCertInfo.Type == 1 {
        resp.Type = "client"
    } else {
        resp.Type = "unknown"
    }

    return resp, nil
}

func certificateFingerprintDelete(d *Daemon, r *http.Request) Response {
    fingerprint := mux.Vars(r)["fingerprint"]

    certInfo, err := dbCertGet(d.db, fingerprint)
    if err != nil {
        return NotFound
    }

    err = dbCertDelete(d.db, certInfo.Fingerprint)
    if err != nil {
        return SmartError(err)
    }
    readSavedClientCAList(d)

    return EmptySyncResponse
}

var certificateFingerprintCmd = Command{
    "certificates/{fingerprint}",
    false,
    false,
    certificateFingerprintGet,
    nil,
    nil,
    certificateFingerprintDelete,
}

// lxd-2.0.2/lxd/cgroup.go
package main

import (
    "bufio"
    "io/ioutil"
    "os"
    "path"
    "strings"
)

func getInitCgroupPath(controller string) string {
    f, err := os.Open("/proc/1/cgroup")
    if err != nil {
        return "/"
    }
    defer f.Close()

    scan := bufio.NewScanner(f)
    for scan.Scan() {
        line := scan.Text()

        // each line is "hierarchy-ID:controller-list:cgroup-path"
        fields := strings.Split(line, ":")
        if len(fields) != 3 {
            return "/"
        }

        if fields[1] != controller {
            continue
        }

        initPath := string(fields[2])

        // ignore trailing /init.scope if it is there
        dir, file := path.Split(initPath)
        if file == "init.scope" {
            return dir
        } else {
            return initPath
        }
    }

    return "/"
}

func cGroupGet(controller, cgroup, file string) (string, error) {
    initPath := getInitCgroupPath(controller)
    path := path.Join("/sys/fs/cgroup", controller, initPath, cgroup, file)

    contents, err := ioutil.ReadFile(path)
    if err != nil {
        return "", err
    }

    return strings.Trim(string(contents), "\n"), nil
}

func cGroupSet(controller, cgroup, file string, value string) error {
    initPath := getInitCgroupPath(controller)
    path := path.Join("/sys/fs/cgroup", controller, initPath, cgroup, file)

    return ioutil.WriteFile(path, []byte(value), 0755)
}

// lxd-2.0.2/lxd/container.go
package main

import (
    "fmt"
    "io"
    "os"
    "strconv"
    "strings"
    "time"

    "gopkg.in/lxc/go-lxc.v2"

    "github.com/lxc/lxd/shared"

    log "gopkg.in/inconshreveable/log15.v2"
)

// Helper functions
func containerPath(name string, isSnapshot bool) string {
    if isSnapshot {
        return shared.VarPath("snapshots", name)
    }

    return shared.VarPath("containers", name)
}

func containerValidName(name string) error {
    if strings.Contains(name, shared.SnapshotDelimiter) {
        return fmt.Errorf(
            "The character '%s' is reserved for snapshots.",
            shared.SnapshotDelimiter)
    }

    if !shared.ValidHostname(name) {
        return fmt.Errorf("Container name isn't a valid hostname.")
    }

    return nil
}

func containerValidConfigKey(key string, value string) error {
    isInt64 := func(key string, value string) error {
        if value == "" {
            return nil
        }

        _, err := strconv.ParseInt(value, 10, 64)
        if err != nil {
            return fmt.Errorf("Invalid value for an integer: %s", value)
        }

        return nil
    }

    isBool := func(key string, value string) error {
        if value == "" {
            return nil
        }

        if !shared.StringInSlice(strings.ToLower(value), []string{"true", "false", "yes", "no", "1", "0", "on", "off"}) {
            return fmt.Errorf("Invalid value for a boolean: %s", value)
        }

        return nil
    }

    isOneOf := func(key string, value string, valid []string) error {
        if value == "" {
            return nil
        }

        if !shared.StringInSlice(value, valid) {
            return fmt.Errorf("Invalid value: %s (not one of %s)", value, valid)
        }

        return nil
    }

    switch key {
    case "boot.autostart":
        return isBool(key, value)
    case "boot.autostart.delay":
        return isInt64(key, value)
    case "boot.autostart.priority":
        return isInt64(key, value)
    case "limits.cpu":
        return nil
    case "limits.cpu.allowance":
        return nil
    case
"limits.cpu.priority": return isInt64(key, value) case "limits.disk.priority": return isInt64(key, value) case "limits.memory": return nil case "limits.memory.enforce": return isOneOf(key, value, []string{"soft", "hard"}) case "limits.memory.swap": return isBool(key, value) case "limits.memory.swap.priority": return isInt64(key, value) case "limits.network.priority": return isInt64(key, value) case "limits.processes": return isInt64(key, value) case "linux.kernel_modules": return nil case "security.privileged": return isBool(key, value) case "security.nesting": return isBool(key, value) case "raw.apparmor": return nil case "raw.lxc": return lxcValidConfig(value) case "volatile.apply_template": return nil case "volatile.base_image": return nil case "volatile.last_state.idmap": return nil case "volatile.last_state.power": return nil } if strings.HasPrefix(key, "volatile.") { if strings.HasSuffix(key, ".hwaddr") { return nil } if strings.HasSuffix(key, ".name") { return nil } } if strings.HasPrefix(key, "environment.") { return nil } if strings.HasPrefix(key, "user.") { return nil } return fmt.Errorf("Bad key: %s", key) } func containerValidDeviceConfigKey(t, k string) bool { if k == "type" { return true } switch t { case "unix-char", "unix-block": switch k { case "gid": return true case "major": return true case "minor": return true case "mode": return true case "path": return true case "uid": return true default: return false } case "nic": switch k { case "limits.max": return true case "limits.ingress": return true case "limits.egress": return true case "host_name": return true case "hwaddr": return true case "mtu": return true case "name": return true case "nictype": return true case "parent": return true default: return false } case "disk": switch k { case "limits.max": return true case "limits.read": return true case "limits.write": return true case "optional": return true case "path": return true case "readonly": return true case "size": return true case 
"source": return true case "recursive": return true default: return false } case "none": return false default: return false } } func containerValidConfig(config map[string]string, profile bool, expanded bool) error { if config == nil { return nil } for k, v := range config { if profile && strings.HasPrefix(k, "volatile.") { return fmt.Errorf("Volatile keys can only be set on containers.") } err := containerValidConfigKey(k, v) if err != nil { return err } } return nil } func containerValidDevices(devices shared.Devices, profile bool, expanded bool) error { // Empty device list if devices == nil { return nil } // Check each device individually for _, m := range devices { for k, _ := range m { if !containerValidDeviceConfigKey(m["type"], k) { return fmt.Errorf("Invalid device configuration key for %s: %s", m["type"], k) } } if m["type"] == "nic" { if m["nictype"] == "" { return fmt.Errorf("Missing nic type") } if !shared.StringInSlice(m["nictype"], []string{"bridged", "physical", "p2p", "macvlan"}) { return fmt.Errorf("Bad nic type: %s", m["nictype"]) } if shared.StringInSlice(m["nictype"], []string{"bridged", "physical", "macvlan"}) && m["parent"] == "" { return fmt.Errorf("Missing parent for %s type nic.", m["nictype"]) } } else if m["type"] == "disk" { if m["path"] == "" { return fmt.Errorf("Disk entry is missing the required \"path\" property.") } if m["source"] == "" && m["path"] != "/" { return fmt.Errorf("Disk entry is missing the required \"source\" property.") } if m["path"] == "/" && m["source"] != "" { return fmt.Errorf("Root disk entry may not have a \"source\" property set.") } if m["size"] != "" && m["path"] != "/" { return fmt.Errorf("Only the root disk may have a size quota.") } if (m["path"] == "/" || !shared.IsDir(m["source"])) && m["recursive"] != "" { return fmt.Errorf("The recursive option is only supported for additional bind-mounted paths.") } } else if shared.StringInSlice(m["type"], []string{"unix-char", "unix-block"}) { if m["path"] == "" { 
return fmt.Errorf("Unix device entry is missing the required \"path\" property.") } } else if m["type"] == "none" { continue } else { return fmt.Errorf("Invalid device type: %s", m["type"]) } } // Checks on the expanded config if expanded { foundRootfs := false for _, m := range devices { if m["type"] == "disk" && m["path"] == "/" { foundRootfs = true } } if !foundRootfs { return fmt.Errorf("Container is lacking rootfs entry") } } return nil } // The container arguments type containerArgs struct { // Don't set manually Id int Architecture int BaseImage string Config map[string]string CreationDate time.Time Ctype containerType Devices shared.Devices Ephemeral bool Name string Profiles []string Stateful bool } // The container interface type container interface { // Container actions Freeze() error Shutdown(timeout time.Duration) error Start(stateful bool) error Stop(stateful bool) error Unfreeze() error // Snapshots & migration Restore(sourceContainer container) error Checkpoint(opts lxc.CheckpointOptions) error StartFromMigration(imagesDir string) error Snapshots() ([]container, error) // Config handling Rename(newName string) error Update(newConfig containerArgs, userRequested bool) error Delete() error Export(w io.Writer) error // Live configuration CGroupGet(key string) (string, error) CGroupSet(key string, value string) error ConfigKeySet(key string, value string) error // File handling FilePull(srcpath string, dstpath string) (int, int, os.FileMode, error) FilePush(srcpath string, dstpath string, uid int, gid int, mode int) error // Command execution Exec(command []string, env map[string]string, stdin *os.File, stdout *os.File, stderr *os.File) (int, error) // Status Render() (interface{}, error) RenderState() (*shared.ContainerState, error) IsPrivileged() bool IsRunning() bool IsFrozen() bool IsEphemeral() bool IsSnapshot() bool IsStateful() bool IsNesting() bool // Hooks OnStart() error OnStop(target string) error // Properties Id() int Name() string 
Architecture() int CreationDate() time.Time ExpandedConfig() map[string]string ExpandedDevices() shared.Devices LocalConfig() map[string]string LocalDevices() shared.Devices Profiles() []string InitPID() int State() string // Paths Path() string RootfsPath() string TemplatesPath() string StatePath() string LogFilePath() string LogPath() string // FIXME: Those should be internal functions StorageStart() error StorageStop() error Storage() storage IdmapSet() *shared.IdmapSet LastIdmapSet() (*shared.IdmapSet, error) TemplateApply(trigger string) error Daemon() *Daemon } // Loader functions func containerCreateAsEmpty(d *Daemon, args containerArgs) (container, error) { // Create the container c, err := containerCreateInternal(d, args) if err != nil { return nil, err } // Now create the empty storage if err := c.Storage().ContainerCreate(c); err != nil { c.Delete() return nil, err } // Apply any post-storage configuration err = containerConfigureInternal(c) if err != nil { c.Delete() return nil, err } return c, nil } func containerCreateEmptySnapshot(d *Daemon, args containerArgs) (container, error) { // Create the snapshot c, err := containerCreateInternal(d, args) if err != nil { return nil, err } // Now create the empty snapshot if err := c.Storage().ContainerSnapshotCreateEmpty(c); err != nil { c.Delete() return nil, err } return c, nil } func containerCreateFromImage(d *Daemon, args containerArgs, hash string) (container, error) { // Create the container c, err := containerCreateInternal(d, args) if err != nil { return nil, err } if err := dbImageLastAccessUpdate(d.db, hash, time.Now().UTC()); err != nil { return nil, fmt.Errorf("Error updating image last use date: %s", err) } // Now create the storage from an image if err := c.Storage().ContainerCreateFromImage(c, hash); err != nil { c.Delete() return nil, err } // Apply any post-storage configuration err = containerConfigureInternal(c) if err != nil { c.Delete() return nil, err } return c, nil } func 
containerCreateAsCopy(d *Daemon, args containerArgs, sourceContainer container) (container, error) {
	// Create the container
	c, err := containerCreateInternal(d, args)
	if err != nil {
		return nil, err
	}

	// Now clone the storage
	if err := c.Storage().ContainerCopy(c, sourceContainer); err != nil {
		c.Delete()
		return nil, err
	}

	// Apply any post-storage configuration
	err = containerConfigureInternal(c)
	if err != nil {
		c.Delete()
		return nil, err
	}

	return c, nil
}

func containerCreateAsSnapshot(d *Daemon, args containerArgs, sourceContainer container) (container, error) {
	// Deal with state
	if args.Stateful {
		if !sourceContainer.IsRunning() {
			return nil, fmt.Errorf("Container not running, cannot do stateful snapshot")
		}

		if err := findCriu("snapshot"); err != nil {
			return nil, err
		}

		stateDir := sourceContainer.StatePath()
		err := os.MkdirAll(stateDir, 0700)
		if err != nil {
			return nil, err
		}

		/* TODO: ideally we would freeze here and unfreeze below after
		 * we've copied the filesystem, to make sure there are no
		 * changes by the container while snapshotting. Unfortunately
		 * there is a bug in CRIU where it doesn't leave the container
		 * in the same state it found it w.r.t. freezing, i.e. CRIU
		 * freezes too, and then /always/ thaws, even if the container
		 * was frozen. Until that's fixed, all calls to Unfreeze()
		 * after snapshotting will fail.
*/ opts := lxc.CheckpointOptions{Directory: stateDir, Stop: false, Verbose: true} err = sourceContainer.Checkpoint(opts) err2 := CollectCRIULogFile(sourceContainer, stateDir, "snapshot", "dump") if err2 != nil { shared.Log.Warn("failed to collect criu log file", log.Ctx{"error": err2}) } if err != nil { os.RemoveAll(sourceContainer.StatePath()) return nil, err } } // Create the snapshot c, err := containerCreateInternal(d, args) if err != nil { return nil, err } // Clone the container if err := sourceContainer.Storage().ContainerSnapshotCreate(c, sourceContainer); err != nil { c.Delete() return nil, err } // Once we're done, remove the state directory if args.Stateful { os.RemoveAll(sourceContainer.StatePath()) } return c, nil } func containerCreateInternal(d *Daemon, args containerArgs) (container, error) { // Set default values if args.Profiles == nil { args.Profiles = []string{"default"} } if args.Config == nil { args.Config = map[string]string{} } if args.BaseImage != "" { args.Config["volatile.base_image"] = args.BaseImage } if args.Devices == nil { args.Devices = shared.Devices{} } if args.Architecture == 0 { args.Architecture = d.architectures[0] } // Validate container name if args.Ctype == cTypeRegular { err := containerValidName(args.Name) if err != nil { return nil, err } } // Validate container config err := containerValidConfig(args.Config, false, false) if err != nil { return nil, err } // Validate container devices err = containerValidDevices(args.Devices, false, false) if err != nil { return nil, err } // Validate architecture _, err = shared.ArchitectureName(args.Architecture) if err != nil { return nil, err } // Validate profiles profiles, err := dbProfiles(d.db) if err != nil { return nil, err } for _, profile := range args.Profiles { if !shared.StringInSlice(profile, profiles) { return nil, fmt.Errorf("Requested profile '%s' doesn't exist", profile) } } path := containerPath(args.Name, args.Ctype == cTypeSnapshot) if shared.PathExists(path) { if 
shared.IsSnapshot(args.Name) {
			return nil, fmt.Errorf("Snapshot '%s' already exists", args.Name)
		}

		return nil, fmt.Errorf("The container already exists")
	}

	// Wipe any existing log for this container name
	os.RemoveAll(shared.LogPath(args.Name))

	// Create the container entry
	id, err := dbContainerCreate(d.db, args)
	if err != nil {
		return nil, err
	}
	args.Id = id

	// Read the timestamp from the database
	dbArgs, err := dbContainerGet(d.db, args.Name)
	if err != nil {
		return nil, err
	}
	args.CreationDate = dbArgs.CreationDate

	return containerLXCCreate(d, args)
}

func containerConfigureInternal(c container) error {
	// Find the root device
	for _, m := range c.ExpandedDevices() {
		if m["type"] != "disk" || m["path"] != "/" || m["size"] == "" {
			continue
		}

		size, err := shared.ParseByteSizeString(m["size"])
		if err != nil {
			return err
		}

		err = c.Storage().ContainerSetQuota(c, size)
		if err != nil {
			return err
		}

		break
	}

	return nil
}

func containerLoadById(d *Daemon, id int) (container, error) {
	// Get the DB record
	name, err := dbContainerName(d.db, id)
	if err != nil {
		return nil, err
	}

	return containerLoadByName(d, name)
}

func containerLoadByName(d *Daemon, name string) (container, error) {
	// Get the DB record
	args, err := dbContainerGet(d.db, name)
	if err != nil {
		return nil, err
	}

	return containerLXCLoad(d, args)
}

lxd-2.0.2/lxd/container_delete.go

package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/mux"
)

func containerDelete(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]
	c, err := containerLoadByName(d, name)
	if err != nil {
		return SmartError(err)
	}

	if c.IsRunning() {
		return BadRequest(fmt.Errorf("container is running"))
	}

	rmct := func(op *operation) error {
		return c.Delete()
	}

	resources := map[string][]string{}
	resources["containers"] = []string{name}

	op, err := operationCreate(operationClassTask, resources, nil, rmct, nil, nil)
	if err != nil {
		return InternalError(err)
	}
	return OperationResponse(op)
}

lxd-2.0.2/lxd/container_exec.go

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"strconv"
	"strings"
	"sync"

	"github.com/gorilla/mux"
	"github.com/gorilla/websocket"

	"github.com/lxc/lxd/shared"
)

type commandPostContent struct {
	Command     []string          `json:"command"`
	WaitForWS   bool              `json:"wait-for-websocket"`
	Interactive bool              `json:"interactive"`
	Environment map[string]string `json:"environment"`
	Width       int               `json:"width"`
	Height      int               `json:"height"`
}

type execWs struct {
	command          []string
	container        container
	env              map[string]string
	rootUid          int
	rootGid          int
	conns            map[int]*websocket.Conn
	connsLock        sync.Mutex
	allConnected     chan bool
	controlConnected chan bool
	interactive      bool
	fds              map[int]string
	width            int
	height           int
}

func (s *execWs) Metadata() interface{} {
	fds := shared.Jmap{}
	for fd, secret := range s.fds {
		if fd == -1 {
			fds["control"] = secret
		} else {
			fds[strconv.Itoa(fd)] = secret
		}
	}

	return shared.Jmap{"fds": fds}
}

func (s *execWs) Connect(op *operation, r *http.Request, w http.ResponseWriter) error {
	secret := r.FormValue("secret")
	if secret == "" {
		return fmt.Errorf("missing secret")
	}

	for fd, fdSecret := range s.fds {
		if secret == fdSecret {
			conn, err := shared.WebsocketUpgrader.Upgrade(w, r, nil)
			if err != nil {
				return err
			}

			s.connsLock.Lock()
			s.conns[fd] = conn
			s.connsLock.Unlock()

			if fd == -1 {
				s.controlConnected <- true
				return nil
			}

			for i, c := range s.conns {
				if i != -1 && c == nil {
					return nil
				}
			}
			s.allConnected <- true
			return nil
		}
	}

	/* If we didn't find the right secret, the user provided a bad one,
	 * which is a 403, not a 404, since this operation actually exists */
	return os.ErrPermission
}

func (s *execWs) Do(op *operation) error {
	<-s.allConnected

	var err error
	var ttys []*os.File
	var ptys []*os.File

	var stdin *os.File
	var stdout *os.File
	var stderr *os.File

	if s.interactive {
		ttys = make([]*os.File, 1)
		ptys = make([]*os.File, 1)
ptys[0], ttys[0], err = shared.OpenPty(s.rootUid, s.rootGid) stdin = ttys[0] stdout = ttys[0] stderr = ttys[0] if s.width > 0 && s.height > 0 { shared.SetSize(int(ptys[0].Fd()), s.width, s.height) } } else { ttys = make([]*os.File, 3) ptys = make([]*os.File, 3) for i := 0; i < len(ttys); i++ { ptys[i], ttys[i], err = shared.Pipe() if err != nil { return err } } stdin = ptys[0] stdout = ttys[1] stderr = ttys[2] } controlExit := make(chan bool) var wgEOF sync.WaitGroup if s.interactive { wgEOF.Add(1) go func() { select { case <-s.controlConnected: break case <-controlExit: return } for { mt, r, err := s.conns[-1].NextReader() if mt == websocket.CloseMessage { break } if err != nil { shared.Debugf("Got error getting next reader %s", err) break } buf, err := ioutil.ReadAll(r) if err != nil { shared.Debugf("Failed to read message %s", err) break } command := shared.ContainerExecControl{} if err := json.Unmarshal(buf, &command); err != nil { shared.Debugf("Failed to unmarshal control socket command: %s", err) continue } if command.Command == "window-resize" { winchWidth, err := strconv.Atoi(command.Args["width"]) if err != nil { shared.Debugf("Unable to extract window width: %s", err) continue } winchHeight, err := strconv.Atoi(command.Args["height"]) if err != nil { shared.Debugf("Unable to extract window height: %s", err) continue } err = shared.SetSize(int(ptys[0].Fd()), winchWidth, winchHeight) if err != nil { shared.Debugf("Failed to set window size to: %dx%d", winchWidth, winchHeight) continue } } } }() go func() { readDone, writeDone := shared.WebsocketMirror(s.conns[0], ptys[0], ptys[0]) <-readDone <-writeDone s.conns[0].Close() wgEOF.Done() }() } else { wgEOF.Add(len(ttys) - 1) for i := 0; i < len(ttys); i++ { go func(i int) { if i == 0 { <-shared.WebsocketRecvStream(ttys[i], s.conns[i]) ttys[i].Close() } else { <-shared.WebsocketSendStream(s.conns[i], ptys[i]) ptys[i].Close() wgEOF.Done() } }(i) } } cmdResult, cmdErr := s.container.Exec(s.command, s.env, stdin, 
stdout, stderr) for _, tty := range ttys { tty.Close() } if s.conns[-1] == nil { if s.interactive { controlExit <- true } } else { s.conns[-1].Close() } wgEOF.Wait() for _, pty := range ptys { pty.Close() } metadata := shared.Jmap{"return": cmdResult} err = op.UpdateMetadata(metadata) if err != nil { return err } return cmdErr } func containerExecPost(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] c, err := containerLoadByName(d, name) if err != nil { return SmartError(err) } if !c.IsRunning() { return BadRequest(fmt.Errorf("Container is not running.")) } if c.IsFrozen() { return BadRequest(fmt.Errorf("Container is frozen.")) } post := commandPostContent{} buf, err := ioutil.ReadAll(r.Body) if err != nil { return BadRequest(err) } if err := json.Unmarshal(buf, &post); err != nil { return BadRequest(err) } env := map[string]string{} for k, v := range c.ExpandedConfig() { if strings.HasPrefix(k, "environment.") { env[strings.TrimPrefix(k, "environment.")] = v } } if post.Environment != nil { for k, v := range post.Environment { env[k] = v } } if post.WaitForWS { ws := &execWs{} ws.fds = map[int]string{} idmapset := c.IdmapSet() if idmapset != nil { ws.rootUid, ws.rootGid = idmapset.ShiftIntoNs(0, 0) } ws.conns = map[int]*websocket.Conn{} ws.conns[-1] = nil ws.conns[0] = nil if !post.Interactive { ws.conns[1] = nil ws.conns[2] = nil } ws.allConnected = make(chan bool, 1) ws.controlConnected = make(chan bool, 1) ws.interactive = post.Interactive for i := -1; i < len(ws.conns)-1; i++ { ws.fds[i], err = shared.RandomCryptoString() if err != nil { return InternalError(err) } } ws.command = post.Command ws.container = c ws.env = env ws.width = post.Width ws.height = post.Height resources := map[string][]string{} resources["containers"] = []string{ws.container.Name()} op, err := operationCreate(operationClassWebsocket, resources, ws.Metadata(), ws.Do, nil, ws.Connect) if err != nil { return InternalError(err) } return OperationResponse(op) } run := 
func(op *operation) error {
		nullDev, err := os.OpenFile(os.DevNull, os.O_RDWR, 0666)
		if err != nil {
			return err
		}
		defer nullDev.Close()

		_, cmdErr := c.Exec(post.Command, env, nil, nil, nil)
		return cmdErr
	}

	resources := map[string][]string{}
	resources["containers"] = []string{name}

	op, err := operationCreate(operationClassTask, resources, nil, run, nil, nil)
	if err != nil {
		return InternalError(err)
	}

	return OperationResponse(op)
}

lxd-2.0.2/lxd/container_file.go

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"os"
	"path/filepath"

	"github.com/gorilla/mux"

	"github.com/lxc/lxd/shared"
)

func containerFileHandler(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]
	c, err := containerLoadByName(d, name)
	if err != nil {
		return SmartError(err)
	}

	path := r.FormValue("path")
	if path == "" {
		return BadRequest(fmt.Errorf("missing path argument"))
	}

	switch r.Method {
	case "GET":
		return containerFileGet(c, path, r)
	case "POST":
		return containerFilePut(c, path, r)
	default:
		return NotFound
	}
}

func containerFileGet(c container, path string, r *http.Request) Response {
	/*
	 * Copy out of the ns to a temporary file, and then use that to serve
	 * the request from. This prevents us from having to worry about stuff
	 * like people breaking out of the container root by symlinks or
	 * ../../../s etc. in the path, since we can just rely on the kernel
	 * for correctness.
	 */
	temp, err := ioutil.TempFile("", "lxd_forkgetfile_")
	if err != nil {
		return InternalError(err)
	}
	defer temp.Close()

	// Pull the file from the container
	uid, gid, mode, err := c.FilePull(path, temp.Name())
	if err != nil {
		return InternalError(err)
	}

	headers := map[string]string{
		"X-LXD-uid":  fmt.Sprintf("%d", uid),
		"X-LXD-gid":  fmt.Sprintf("%d", gid),
		"X-LXD-mode": fmt.Sprintf("%04o", mode),
	}

	// Make a file response struct
	files := make([]fileResponseEntry, 1)
	files[0].identifier = filepath.Base(path)
	files[0].path = temp.Name()
	files[0].filename = filepath.Base(path)

	return FileResponse(r, files, headers, true)
}

func containerFilePut(c container, path string, r *http.Request) Response {
	// Extract file ownership and mode from headers
	uid, gid, mode := shared.ParseLXDFileHeaders(r.Header)

	// Write file content to a tempfile
	temp, err := ioutil.TempFile("", "lxd_forkputfile_")
	if err != nil {
		return InternalError(err)
	}
	defer func() {
		temp.Close()
		os.Remove(temp.Name())
	}()

	_, err = io.Copy(temp, r.Body)
	if err != nil {
		return InternalError(err)
	}

	// Transfer the file into the container
	err = c.FilePush(temp.Name(), path, uid, gid, mode)
	if err != nil {
		return InternalError(err)
	}

	return EmptySyncResponse
}

lxd-2.0.2/lxd/container_get.go

package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

func containerGet(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]
	c, err := containerLoadByName(d, name)
	if err != nil {
		return SmartError(err)
	}

	state, err := c.Render()
	if err != nil {
		return InternalError(err)
	}

	return SyncResponse(true, state)
}

lxd-2.0.2/lxd/container_logs.go

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"strings"

	"github.com/gorilla/mux"

	"github.com/lxc/lxd/shared"
)

func containerLogsGet(d *Daemon, r *http.Request) Response {
	/* Let's explicitly *not* try to do a
containerLoadByName here. In some
	 * cases (e.g. when container creation failed), the container won't
	 * exist in the DB but it does have some log files on disk.
	 *
	 * However, we should check this name and ensure it's a valid container
	 * name just so that people can't list arbitrary directories.
	 */
	name := mux.Vars(r)["name"]
	if err := containerValidName(name); err != nil {
		return BadRequest(err)
	}

	result := []string{}

	dents, err := ioutil.ReadDir(shared.LogPath(name))
	if err != nil {
		return SmartError(err)
	}

	for _, f := range dents {
		result = append(result, fmt.Sprintf("/%s/containers/%s/logs/%s", shared.APIVersion, name, f.Name()))
	}

	return SyncResponse(true, result)
}

var containerLogsCmd = Command{
	name: "containers/{name}/logs",
	get:  containerLogsGet,
}

func validLogFileName(fname string) bool {
	/* Let's just require that the paths be relative, so that we don't have
	 * to deal with any escaping or whatever.
	 */
	return fname == "lxc.log" ||
		fname == "lxc.conf" ||
		strings.HasPrefix(fname, "migration_") ||
		strings.HasPrefix(fname, "snapshot_")
}

func containerLogGet(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]
	file := mux.Vars(r)["file"]

	if err := containerValidName(name); err != nil {
		return BadRequest(err)
	}

	if !validLogFileName(file) {
		return BadRequest(fmt.Errorf("log file name %s not valid", file))
	}

	ent := fileResponseEntry{
		path:     shared.LogPath(name, file),
		filename: file,
	}

	return FileResponse(r, []fileResponseEntry{ent}, nil, false)
}

func containerLogDelete(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]
	file := mux.Vars(r)["file"]

	if err := containerValidName(name); err != nil {
		return BadRequest(err)
	}

	if !validLogFileName(file) {
		return BadRequest(fmt.Errorf("log file name %s not valid", file))
	}

	return SmartError(os.Remove(shared.LogPath(name, file)))
}

var containerLogCmd = Command{
	name:   "containers/{name}/logs/{file}",
	get:    containerLogGet,
	delete: containerLogDelete,
}
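The log endpoints in container_logs.go guard every request with a fixed whitelist of file names instead of trying to sanitize user-supplied paths. A minimal standalone sketch of that check (the same logic as validLogFileName above, wrapped in a runnable program for illustration; the main function is not part of LXD):

```go
package main

import (
	"fmt"
	"strings"
)

// validLogFileName mirrors the whitelist used by the log endpoints:
// only the fixed names lxc.log/lxc.conf and the migration_*/snapshot_*
// prefixes are accepted, so a client can never request an arbitrary path.
func validLogFileName(fname string) bool {
	return fname == "lxc.log" ||
		fname == "lxc.conf" ||
		strings.HasPrefix(fname, "migration_") ||
		strings.HasPrefix(fname, "snapshot_")
}

func main() {
	for _, name := range []string{"lxc.log", "migration_xyz", "../../etc/passwd"} {
		fmt.Printf("%s: %v\n", name, validLogFileName(name))
	}
}
```

Whitelisting a handful of relative names sidesteps escaping and path canonicalization entirely, which is why the comment in the source calls it the simpler option.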
lxd-2.0.2/lxd/container_lxc.go000066400000000000000000003122311272140510300162450ustar00rootroot00000000000000package main import ( "archive/tar" "encoding/json" "fmt" "io" "io/ioutil" "net" "os" "os/exec" "path" "path/filepath" "reflect" "strconv" "strings" "sync" "syscall" "time" "gopkg.in/flosch/pongo2.v3" "gopkg.in/lxc/go-lxc.v2" "gopkg.in/yaml.v2" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) // Global variables var lxcStoppingContainersLock sync.Mutex var lxcStoppingContainers map[int]*sync.WaitGroup = make(map[int]*sync.WaitGroup) // Helper functions func lxcSetConfigItem(c *lxc.Container, key string, value string) error { if c == nil { return fmt.Errorf("Uninitialized go-lxc struct") } err := c.SetConfigItem(key, value) if err != nil { return fmt.Errorf("Failed to set LXC config: %s=%s", key, value) } return nil } func lxcValidConfig(rawLxc string) error { for _, line := range strings.Split(rawLxc, "\n") { // Ignore empty lines if len(line) == 0 { continue } // Ignore comments if strings.HasPrefix(line, "#") { continue } // Ensure the format is valid membs := strings.SplitN(line, "=", 2) if len(membs) != 2 { return fmt.Errorf("Invalid raw.lxc line: %s", line) } key := strings.ToLower(strings.Trim(membs[0], " \t")) // Blacklist some keys if key == "lxc.logfile" { return fmt.Errorf("Setting lxc.logfile is not allowed") } if strings.HasPrefix(key, "lxc.network.") { fields := strings.Split(key, ".") if len(fields) == 4 && shared.StringInSlice(fields[3], []string{"ipv4", "ipv6"}) { continue } if len(fields) == 5 && shared.StringInSlice(fields[3], []string{"ipv4", "ipv6"}) && fields[4] == "gateway" { continue } return fmt.Errorf("Only interface-specific ipv4/ipv6 lxc.network keys are allowed") } } return nil } // Loader functions func containerLXCCreate(d *Daemon, args containerArgs) (container, error) { // Create the container struct c := &containerLXC{ daemon: d, id: args.Id, name: args.Name, ephemeral: args.Ephemeral, architecture: 
args.Architecture,
		cType:        args.Ctype,
		stateful:     args.Stateful,
		creationDate: args.CreationDate,
		profiles:     args.Profiles,
		localConfig:  args.Config,
		localDevices: args.Devices,
	}

	// No need to detect storage here, it's a new container.
	c.storage = d.Storage

	// Load the config
	err := c.init()
	if err != nil {
		c.Delete()
		return nil, err
	}

	// Look for a rootfs entry
	rootfs := false
	for _, m := range c.expandedDevices {
		if m["type"] == "disk" && m["path"] == "/" {
			rootfs = true
			break
		}
	}

	if !rootfs {
		deviceName := "root"
		for {
			if c.expandedDevices[deviceName] == nil {
				break
			}

			deviceName += "_"
		}

		c.localDevices[deviceName] = shared.Device{"type": "disk", "path": "/"}

		updateArgs := containerArgs{
			Architecture: c.architecture,
			Config:       c.localConfig,
			Devices:      c.localDevices,
			Ephemeral:    c.ephemeral,
			Profiles:     c.profiles,
		}

		err = c.Update(updateArgs, false)
		if err != nil {
			c.Delete()
			return nil, err
		}
	}

	// Validate expanded config
	err = containerValidConfig(c.expandedConfig, false, true)
	if err != nil {
		c.Delete()
		return nil, err
	}

	err = containerValidDevices(c.expandedDevices, false, true)
	if err != nil {
		c.Delete()
		return nil, err
	}

	// Setup initial idmap config
	idmap := c.IdmapSet()

	var jsonIdmap string
	if idmap != nil {
		idmapBytes, err := json.Marshal(idmap.Idmap)
		if err != nil {
			c.Delete()
			return nil, err
		}
		jsonIdmap = string(idmapBytes)
	} else {
		jsonIdmap = "[]"
	}

	err = c.ConfigKeySet("volatile.last_state.idmap", jsonIdmap)
	if err != nil {
		c.Delete()
		return nil, err
	}

	return c, nil
}

func containerLXCLoad(d *Daemon, args containerArgs) (container, error) {
	// Create the container struct
	c := &containerLXC{
		daemon:       d,
		id:           args.Id,
		name:         args.Name,
		ephemeral:    args.Ephemeral,
		architecture: args.Architecture,
		cType:        args.Ctype,
		creationDate: args.CreationDate,
		profiles:     args.Profiles,
		localConfig:  args.Config,
		localDevices: args.Devices,
		stateful:     args.Stateful}

	// Detect the storage backend
	s, err := storageForFilename(d, shared.VarPath("containers", strings.Split(c.name,
"/")[0])) if err != nil { return nil, err } c.storage = s // Load the config err = c.init() if err != nil { return nil, err } return c, nil } // The LXC container driver type containerLXC struct { // Properties architecture int cType containerType creationDate time.Time ephemeral bool id int name string stateful bool // Config expandedConfig map[string]string expandedDevices shared.Devices fromHook bool localConfig map[string]string localDevices shared.Devices profiles []string // Cache c *lxc.Container daemon *Daemon idmapset *shared.IdmapSet storage storage } func (c *containerLXC) init() error { // Compute the expanded config and device list err := c.expandConfig() if err != nil { return err } err = c.expandDevices() if err != nil { return err } // Setup the Idmap if !c.IsPrivileged() { if c.daemon.IdmapSet == nil { return fmt.Errorf("LXD doesn't have a uid/gid allocation. In this mode, only privileged containers are supported.") } c.idmapset = c.daemon.IdmapSet } return nil } func (c *containerLXC) initLXC() error { // No need to go through all that for snapshots if c.IsSnapshot() { return nil } // Check if being called from a hook if c.fromHook { return fmt.Errorf("You can't use go-lxc from inside a LXC hook.") } // Check if already initialized if c.c != nil { return nil } // Load the go-lxc struct cc, err := lxc.NewContainer(c.Name(), c.daemon.lxcpath) if err != nil { return err } // Base config err = lxcSetConfigItem(cc, "lxc.cap.drop", "mac_admin mac_override sys_time sys_module sys_rawio") if err != nil { return err } // Set an appropriate /proc, /sys/ and /sys/fs/cgroup mounts := []string{} if c.IsPrivileged() && !runningInUserns { mounts = append(mounts, "proc:mixed") mounts = append(mounts, "sys:mixed") } else { mounts = append(mounts, "proc:rw") mounts = append(mounts, "sys:rw") } if !shared.PathExists("/proc/self/ns/cgroup") { mounts = append(mounts, "cgroup:mixed") } err = lxcSetConfigItem(cc, "lxc.mount.auto", strings.Join(mounts, " ")) if err != 
nil { return err } err = lxcSetConfigItem(cc, "lxc.autodev", "1") if err != nil { return err } err = lxcSetConfigItem(cc, "lxc.pts", "1024") if err != nil { return err } bindMounts := []string{ "/proc/sys/fs/binfmt_misc", "/sys/firmware/efi/efivars", "/sys/fs/fuse/connections", "/sys/fs/pstore", "/sys/kernel/debug", "/sys/kernel/security"} if c.IsPrivileged() && !runningInUserns { err = lxcSetConfigItem(cc, "lxc.mount.entry", "mqueue dev/mqueue mqueue rw,relatime,create=dir,optional") if err != nil { return err } } else { bindMounts = append(bindMounts, "/dev/mqueue") } for _, mnt := range bindMounts { err = lxcSetConfigItem(cc, "lxc.mount.entry", fmt.Sprintf("%s %s none rbind,create=dir,optional", mnt, strings.TrimPrefix(mnt, "/"))) if err != nil { return err } } // For lxcfs templateConfDir := os.Getenv("LXD_LXC_TEMPLATE_CONFIG") if templateConfDir == "" { templateConfDir = "/usr/share/lxc/config" } if shared.PathExists(fmt.Sprintf("%s/common.conf.d/", templateConfDir)) { err = lxcSetConfigItem(cc, "lxc.include", fmt.Sprintf("%s/common.conf.d/", templateConfDir)) if err != nil { return err } } // Configure devices cgroup if c.IsPrivileged() && !runningInUserns && cgDevicesController { err = lxcSetConfigItem(cc, "lxc.cgroup.devices.deny", "a") if err != nil { return err } for _, dev := range []string{"c *:* m", "b *:* m", "c 5:0 rwm", "c 5:1 rwm", "c 1:5 rwm", "c 1:7 rwm", "c 1:3 rwm", "c 1:8 rwm", "c 1:9 rwm", "c 5:2 rwm", "c 136:* rwm"} { err = lxcSetConfigItem(cc, "lxc.cgroup.devices.allow", dev) if err != nil { return err } } } if c.IsNesting() { /* * mount extra /proc and /sys to work around kernel * restrictions on remounting them when covered */ err = lxcSetConfigItem(cc, "lxc.mount.entry", "proc dev/.lxc/proc proc create=dir,optional") if err != nil { return err } err = lxcSetConfigItem(cc, "lxc.mount.entry", "sys dev/.lxc/sys sysfs create=dir,optional") if err != nil { return err } } // Setup logging logfile := c.LogFilePath() err = cc.SetLogFile(logfile) 
if err != nil { return err } err = lxcSetConfigItem(cc, "lxc.loglevel", "0") if err != nil { return err } // Setup architecture personality, err := shared.ArchitecturePersonality(c.architecture) if err != nil { personality, err = shared.ArchitecturePersonality(c.daemon.architectures[0]) if err != nil { return err } } err = lxcSetConfigItem(cc, "lxc.arch", personality) if err != nil { return err } // Setup the hooks err = lxcSetConfigItem(cc, "lxc.hook.pre-start", fmt.Sprintf("%s callhook %s %d start", execPath, shared.VarPath(""), c.id)) if err != nil { return err } err = lxcSetConfigItem(cc, "lxc.hook.post-stop", fmt.Sprintf("%s callhook %s %d stop", execPath, shared.VarPath(""), c.id)) if err != nil { return err } // Setup the console err = lxcSetConfigItem(cc, "lxc.tty", "0") if err != nil { return err } // Setup the hostname err = lxcSetConfigItem(cc, "lxc.utsname", c.Name()) if err != nil { return err } // Setup devlxd err = lxcSetConfigItem(cc, "lxc.mount.entry", fmt.Sprintf("%s dev/lxd none bind,create=dir 0 0", shared.VarPath("devlxd"))) if err != nil { return err } // Setup AppArmor if aaAvailable { if aaConfined || !aaAdmin { // If confined but otherwise able to use AppArmor, use our own profile curProfile := aaProfile() curProfile = strings.TrimSuffix(curProfile, " (enforce)") err = lxcSetConfigItem(cc, "lxc.aa_profile", curProfile) if err != nil { return err } } else { // If not currently confined, use the container's profile err := lxcSetConfigItem(cc, "lxc.aa_profile", AAProfileFull(c)) if err != nil { return err } } } // Setup Seccomp err = lxcSetConfigItem(cc, "lxc.seccomp", SeccompProfilePath(c)) if err != nil { return err } // Setup idmap if c.idmapset != nil { lines := c.idmapset.ToLxcString() for _, line := range lines { err := lxcSetConfigItem(cc, "lxc.id_map", strings.TrimSuffix(line, "\n")) if err != nil { return err } } } // Setup environment for k, v := range c.expandedConfig { if strings.HasPrefix(k, "environment.") { err = 
lxcSetConfigItem(cc, "lxc.environment", fmt.Sprintf("%s=%s", strings.TrimPrefix(k, "environment."), v)) if err != nil { return err } } } // Memory limits if cgMemoryController { memory := c.expandedConfig["limits.memory"] memoryEnforce := c.expandedConfig["limits.memory.enforce"] memorySwap := c.expandedConfig["limits.memory.swap"] memorySwapPriority := c.expandedConfig["limits.memory.swap.priority"] // Configure the memory limits if memory != "" { var valueInt int64 if strings.HasSuffix(memory, "%") { percent, err := strconv.ParseInt(strings.TrimSuffix(memory, "%"), 10, 64) if err != nil { return err } memoryTotal, err := deviceTotalMemory() if err != nil { return err } valueInt = int64((memoryTotal / 100) * percent) } else { valueInt, err = shared.ParseByteSizeString(memory) if err != nil { return err } } if memoryEnforce == "soft" { err = lxcSetConfigItem(cc, "lxc.cgroup.memory.soft_limit_in_bytes", fmt.Sprintf("%d", valueInt)) if err != nil { return err } } else { if cgSwapAccounting && (memorySwap == "" || shared.IsTrue(memorySwap)) { err = lxcSetConfigItem(cc, "lxc.cgroup.memory.limit_in_bytes", fmt.Sprintf("%d", valueInt)) if err != nil { return err } err = lxcSetConfigItem(cc, "lxc.cgroup.memory.memsw.limit_in_bytes", fmt.Sprintf("%d", valueInt)) if err != nil { return err } } else { err = lxcSetConfigItem(cc, "lxc.cgroup.memory.limit_in_bytes", fmt.Sprintf("%d", valueInt)) if err != nil { return err } } } } // Configure the swappiness if memorySwap != "" && !shared.IsTrue(memorySwap) { err = lxcSetConfigItem(cc, "lxc.cgroup.memory.swappiness", "0") if err != nil { return err } } else if memorySwapPriority != "" { priority, err := strconv.Atoi(memorySwapPriority) if err != nil { return err } err = lxcSetConfigItem(cc, "lxc.cgroup.memory.swappiness", fmt.Sprintf("%d", 60-10+priority)) if err != nil { return err } } } // CPU limits cpuPriority := c.expandedConfig["limits.cpu.priority"] cpuAllowance := c.expandedConfig["limits.cpu.allowance"] if (cpuPriority 
!= "" || cpuAllowance != "") && cgCpuController { cpuShares, cpuCfsQuota, cpuCfsPeriod, err := deviceParseCPU(cpuAllowance, cpuPriority) if err != nil { return err } if cpuShares != "1024" { err = lxcSetConfigItem(cc, "lxc.cgroup.cpu.shares", cpuShares) if err != nil { return err } } if cpuCfsPeriod != "-1" { err = lxcSetConfigItem(cc, "lxc.cgroup.cpu.cfs_period_us", cpuCfsPeriod) if err != nil { return err } } if cpuCfsQuota != "-1" { err = lxcSetConfigItem(cc, "lxc.cgroup.cpu.cfs_quota_us", cpuCfsQuota) if err != nil { return err } } } // Disk limits if cgBlkioController { diskPriority := c.expandedConfig["limits.disk.priority"] if diskPriority != "" { priorityInt, err := strconv.Atoi(diskPriority) if err != nil { return err } err = lxcSetConfigItem(cc, "lxc.cgroup.blkio.weight", fmt.Sprintf("%d", priorityInt*100)) if err != nil { return err } } hasDiskLimits := false for _, m := range c.expandedDevices { if m["type"] != "disk" { continue } if m["limits.read"] != "" || m["limits.write"] != "" || m["limits.max"] != "" { hasDiskLimits = true break } } if hasDiskLimits { diskLimits, err := c.getDiskLimits() if err != nil { return err } for block, limit := range diskLimits { if limit.readBps > 0 { err = lxcSetConfigItem(cc, "lxc.cgroup.blkio.throttle.read_bps_device", fmt.Sprintf("%s %d", block, limit.readBps)) if err != nil { return err } } if limit.readIops > 0 { err = lxcSetConfigItem(cc, "lxc.cgroup.blkio.throttle.read_iops_device", fmt.Sprintf("%s %d", block, limit.readIops)) if err != nil { return err } } if limit.writeBps > 0 { err = lxcSetConfigItem(cc, "lxc.cgroup.blkio.throttle.write_bps_device", fmt.Sprintf("%s %d", block, limit.writeBps)) if err != nil { return err } } if limit.writeIops > 0 { err = lxcSetConfigItem(cc, "lxc.cgroup.blkio.throttle.write_iops_device", fmt.Sprintf("%s %d", block, limit.writeIops)) if err != nil { return err } } } } } // Processes if cgPidsController { processes := c.expandedConfig["limits.processes"] if processes != "" { 
			valueInt, err := strconv.ParseInt(processes, 10, 64)
			if err != nil { return err }

			err = lxcSetConfigItem(cc, "lxc.cgroup.pids.max", fmt.Sprintf("%d", valueInt))
			if err != nil { return err }
		}
	}

	// Setup devices
	for k, m := range c.expandedDevices {
		if shared.StringInSlice(m["type"], []string{"unix-char", "unix-block"}) {
			// Prepare all the paths
			srcPath := m["path"]
			tgtPath := strings.TrimPrefix(srcPath, "/")
			devName := fmt.Sprintf("unix.%s", strings.Replace(tgtPath, "/", "-", -1))
			devPath := filepath.Join(c.DevicesPath(), devName)

			// Set the bind-mount entry
			err = lxcSetConfigItem(cc, "lxc.mount.entry", fmt.Sprintf("%s %s none bind,create=file", devPath, tgtPath))
			if err != nil { return err }
		} else if m["type"] == "nic" {
			// Fill in some fields from volatile
			m, err = c.fillNetworkDevice(k, m)
			if err != nil { return err }

			// Interface type specific configuration
			if shared.StringInSlice(m["nictype"], []string{"bridged", "p2p"}) {
				err = lxcSetConfigItem(cc, "lxc.network.type", "veth")
				if err != nil { return err }
			} else if m["nictype"] == "physical" {
				err = lxcSetConfigItem(cc, "lxc.network.type", "phys")
				if err != nil { return err }
			} else if m["nictype"] == "macvlan" {
				err = lxcSetConfigItem(cc, "lxc.network.type", "macvlan")
				if err != nil { return err }

				err = lxcSetConfigItem(cc, "lxc.network.macvlan.mode", "bridge")
				if err != nil { return err }
			}

			err = lxcSetConfigItem(cc, "lxc.network.flags", "up")
			if err != nil { return err }

			if shared.StringInSlice(m["nictype"], []string{"bridged", "physical", "macvlan"}) {
				err = lxcSetConfigItem(cc, "lxc.network.link", m["parent"])
				if err != nil { return err }
			}

			// Host Virtual NIC name
			if m["host_name"] != "" {
				err = lxcSetConfigItem(cc, "lxc.network.veth.pair", m["host_name"])
				if err != nil { return err }
			}

			// MAC address
			if m["hwaddr"] != "" {
				err = lxcSetConfigItem(cc, "lxc.network.hwaddr", m["hwaddr"])
				if err != nil { return err }
			}

			// MTU
			if m["mtu"] != "" {
				err = lxcSetConfigItem(cc, "lxc.network.mtu", m["mtu"])
				if err != nil { return err }
			}

			// Name
			if m["name"] != "" {
				err = lxcSetConfigItem(cc, "lxc.network.name", m["name"])
				if err != nil { return err }
			}
		} else if m["type"] == "disk" {
			// Prepare all the paths
			srcPath := m["source"]
			tgtPath := strings.TrimPrefix(m["path"], "/")
			devName := fmt.Sprintf("disk.%s", strings.Replace(tgtPath, "/", "-", -1))
			devPath := filepath.Join(c.DevicesPath(), devName)

			// Various option checks
			isOptional := shared.IsTrue(m["optional"])
			isReadOnly := shared.IsTrue(m["readonly"])
			isRecursive := shared.IsTrue(m["recursive"])
			isFile := !shared.IsDir(srcPath) && !deviceIsBlockdev(srcPath)

			// Deal with a rootfs
			if tgtPath == "" {
				// Set the rootfs backend type if supported (must happen before any other lxc.rootfs)
				err := lxcSetConfigItem(cc, "lxc.rootfs.backend", "dir")
				if err == nil {
					value := cc.ConfigItem("lxc.rootfs.backend")
					if len(value) == 0 || value[0] != "dir" {
						lxcSetConfigItem(cc, "lxc.rootfs.backend", "")
					}
				}

				// Set the rootfs path
				err = lxcSetConfigItem(cc, "lxc.rootfs", c.RootfsPath())
				if err != nil { return err }

				// Read-only rootfs (unlikely to work very well)
				if isReadOnly {
					err = lxcSetConfigItem(cc, "lxc.rootfs.options", "ro")
					if err != nil { return err }
				}
			} else {
				rbind := ""
				options := []string{}

				if isReadOnly {
					options = append(options, "ro")
				}

				if isOptional {
					options = append(options, "optional")
				}

				if isRecursive {
					rbind = "r"
				}

				if isFile {
					options = append(options, "create=file")
				} else {
					options = append(options, "create=dir")
				}

				err = lxcSetConfigItem(cc, "lxc.mount.entry", fmt.Sprintf("%s %s none %sbind,%s", devPath, tgtPath, rbind, strings.Join(options, ",")))
				if err != nil { return err }
			}
		}
	}

	// Setup shmounts
	err = lxcSetConfigItem(cc, "lxc.mount.entry", fmt.Sprintf("%s dev/.lxd-mounts none bind,create=dir 0 0", shared.VarPath("shmounts", c.Name())))
	if err != nil { return err }

	// Apply raw.lxc
	if lxcConfig, ok := c.expandedConfig["raw.lxc"]; ok {
		f, err := ioutil.TempFile("", "lxd_config_")
		if err != nil { return err }

		err = shared.WriteAll(f, []byte(lxcConfig))
		f.Close()
		defer os.Remove(f.Name())
		if err != nil { return err }

		if err := cc.LoadConfigFile(f.Name()); err != nil {
			return fmt.Errorf("Failed to load raw.lxc")
		}
	}

	c.c = cc

	return nil
}

// Config handling
func (c *containerLXC) expandConfig() error {
	config := map[string]string{}

	// Apply all the profiles
	for _, name := range c.profiles {
		profileConfig, err := dbProfileConfig(c.daemon.db, name)
		if err != nil { return err }

		for k, v := range profileConfig {
			config[k] = v
		}
	}

	// Stick the local config on top
	for k, v := range c.localConfig {
		config[k] = v
	}

	c.expandedConfig = config
	return nil
}

func (c *containerLXC) expandDevices() error {
	devices := shared.Devices{}

	// Apply all the profiles
	for _, p := range c.profiles {
		profileDevices, err := dbDevices(c.daemon.db, p, true)
		if err != nil { return err }

		for k, v := range profileDevices {
			devices[k] = v
		}
	}

	// Stick local devices on top
	for k, v := range c.localDevices {
		devices[k] = v
	}

	c.expandedDevices = devices
	return nil
}

// Start functions
func (c *containerLXC) startCommon() (string, error) {
	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return "", err }

	// Check that we're not already running
	if c.IsRunning() {
		return "", fmt.Errorf("The container is already running")
	}

	// Load any required kernel modules
	kernelModules := c.expandedConfig["linux.kernel_modules"]
	if kernelModules != "" {
		for _, module := range strings.Split(kernelModules, ",") {
			module = strings.TrimPrefix(module, " ")
			out, err := exec.Command("modprobe", module).CombinedOutput()
			if err != nil {
				return "", fmt.Errorf("Failed to load kernel module '%s': %s", module, out)
			}
		}
	}

	/* Deal with idmap changes */
	idmap := c.IdmapSet()

	lastIdmap, err := c.LastIdmapSet()
	if err != nil { return "", err }

	var jsonIdmap string
	if idmap != nil {
		idmapBytes, err := json.Marshal(idmap.Idmap)
		if err != nil { return "", err }
		jsonIdmap = string(idmapBytes)
	} else {
		jsonIdmap = "[]"
	}

	if !reflect.DeepEqual(idmap, lastIdmap) {
		shared.Debugf("Container idmap changed, remapping")

		err := c.StorageStart()
		if err != nil { return "", err }

		if lastIdmap != nil {
			err = lastIdmap.UnshiftRootfs(c.RootfsPath())
			if err != nil { c.StorageStop(); return "", err }
		}

		if idmap != nil {
			err = idmap.ShiftRootfs(c.RootfsPath())
			if err != nil { c.StorageStop(); return "", err }
		}

		var mode os.FileMode
		var uid int
		var gid int

		if c.IsPrivileged() {
			mode = 0700
		} else {
			mode = 0755
			if idmap != nil {
				uid, gid = idmap.ShiftIntoNs(0, 0)
			}
		}

		err = os.Chmod(c.Path(), mode)
		if err != nil { return "", err }

		err = os.Chown(c.Path(), uid, gid)
		if err != nil { return "", err }

		err = c.StorageStop()
		if err != nil { return "", err }
	}

	err = c.ConfigKeySet("volatile.last_state.idmap", jsonIdmap)
	if err != nil { return "", err }

	// Generate the Seccomp profile
	if err := SeccompCreateProfile(c); err != nil {
		return "", err
	}

	// Cleanup any existing leftover devices
	c.removeUnixDevices()
	c.removeDiskDevices()

	// Create the devices
	for k, m := range c.expandedDevices {
		if shared.StringInSlice(m["type"], []string{"unix-char", "unix-block"}) {
			// Unix device
			devPath, err := c.createUnixDevice(k, m)
			if err != nil { return "", err }

			// Add the new device cgroup rule
			dType, dMajor, dMinor, err := deviceGetAttributes(devPath)
			if err != nil { return "", err }

			err = lxcSetConfigItem(c.c, "lxc.cgroup.devices.allow", fmt.Sprintf("%s %d:%d rwm", dType, dMajor, dMinor))
			if err != nil {
				return "", fmt.Errorf("Failed to add cgroup rule for device")
			}
		} else if m["type"] == "disk" {
			// Disk device
			if m["path"] != "/" {
				_, err := c.createDiskDevice(k, m)
				if err != nil { return "", err }
			}
		}
	}

	// Create any missing directories
	err = os.MkdirAll(c.LogPath(), 0700)
	if err != nil { return "", err }

	err = os.MkdirAll(shared.VarPath("devices", c.Name()), 0711)
	if err != nil { return "", err }

	err = os.MkdirAll(shared.VarPath("shmounts", c.Name()), 0711)
	if err != nil { return "", err }

	// Cleanup any leftover volatile entries
	netNames := []string{}
	for k, v := range c.expandedDevices {
		if v["type"] == "nic" {
			netNames = append(netNames, k)
		}
	}

	for k, _ := range c.localConfig {
		// We only care about volatile
		if !strings.HasPrefix(k, "volatile.") {
			continue
		}

		// Confirm it's a key of format volatile.<device>.<key>
		fields := strings.SplitN(k, ".", 3)
		if len(fields) != 3 {
			continue
		}

		// The only device keys we care about are name and hwaddr
		if !shared.StringInSlice(fields[2], []string{"name", "hwaddr"}) {
			continue
		}

		// Check if the device still exists
		if shared.StringInSlice(fields[1], netNames) {
			// Don't remove the volatile entry if the device doesn't have the matching field set
			if c.expandedDevices[fields[1]][fields[2]] == "" {
				continue
			}
		}

		// Remove the volatile key from the DB
		err := dbContainerConfigRemove(c.daemon.db, c.id, k)
		if err != nil { return "", err }

		// Remove the volatile key from the in-memory configs
		delete(c.localConfig, k)
		delete(c.expandedConfig, k)
	}

	// Rotate the log file
	logfile := c.LogFilePath()
	if shared.PathExists(logfile) {
		os.Remove(logfile + ".old")
		err := os.Rename(logfile, logfile+".old")
		if err != nil { return "", err }
	}

	// Generate the LXC config
	configPath := filepath.Join(c.LogPath(), "lxc.conf")
	err = c.c.SaveConfigFile(configPath)
	if err != nil {
		os.Remove(configPath)
		return "", err
	}

	return configPath, nil
}

func (c *containerLXC) Start(stateful bool) error {
	// Wait for container tear down to finish
	lxcStoppingContainersLock.Lock()
	wgStopping, stopping := lxcStoppingContainers[c.id]
	lxcStoppingContainersLock.Unlock()
	if stopping {
		wgStopping.Wait()
	}

	if err := setupSharedMounts(); err != nil {
		return fmt.Errorf("Daemon failed to setup shared mounts base: %s.\nDoes security.nesting need to be turned on?", err)
	}

	// Run the shared start code
	configPath, err := c.startCommon()
	if err != nil { return err }

	// If stateful, restore now
	if stateful {
		if !c.stateful {
			return fmt.Errorf("Container has no existing state to restore.")
		}

		if err := findCriu("snapshot"); err != nil {
			return err
		}

		if !c.IsPrivileged() {
			if err := c.IdmapSet().ShiftRootfs(c.StatePath()); err != nil {
				return err
			}
		}

		out, err := exec.Command(
			execPath,
			"forkmigrate",
			c.name,
			c.daemon.lxcpath,
			configPath,
			c.StatePath()).CombinedOutput()

		if string(out) != "" {
			for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
				shared.Debugf("forkmigrate: %s", line)
			}
		}

		CollectCRIULogFile(c, c.StatePath(), "snapshot", "restore")
		if err != nil && !c.IsRunning() {
			return err
		}

		os.RemoveAll(c.StatePath())
		c.stateful = false

		return dbContainerSetStateful(c.daemon.db, c.id, false)
	} else if c.stateful {
		/* stateless start required when we have state, let's delete it */
		err := os.RemoveAll(c.StatePath())
		if err != nil { return err }

		c.stateful = false
		err = dbContainerSetStateful(c.daemon.db, c.id, false)
		if err != nil { return err }
	}

	// Start the LXC container
	out, err := exec.Command(
		execPath,
		"forkstart",
		c.name,
		c.daemon.lxcpath,
		configPath).CombinedOutput()

	if string(out) != "" {
		for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
			shared.Debugf("forkstart: %s", line)
		}
	}

	if err != nil && !c.IsRunning() {
		return fmt.Errorf(
			"Error calling 'lxd forkstart %s %s %s': err='%v'",
			c.name,
			c.daemon.lxcpath,
			filepath.Join(c.LogPath(), "lxc.conf"),
			err)
	}

	return nil
}

func (c *containerLXC) StartFromMigration(imagesDir string) error {
	// Run the shared start code
	configPath, err := c.startCommon()
	if err != nil { return err }

	// Start the LXC container
	out, err := exec.Command(
		execPath,
		"forkmigrate",
		c.name,
		c.daemon.lxcpath,
		configPath,
		imagesDir).CombinedOutput()

	if string(out) != "" {
		for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
			shared.Debugf("forkmigrate: %s", line)
		}
	}

	if err != nil && !c.IsRunning() {
		return fmt.Errorf(
			"Error calling 'lxd forkmigrate %s %s %s %s': err='%v'",
			c.name,
			c.daemon.lxcpath,
			filepath.Join(c.LogPath(), "lxc.conf"),
			imagesDir,
			err)
	}
	return nil
}

func (c *containerLXC) OnStart() error {
	// Make sure we can't call go-lxc functions by mistake
	c.fromHook = true

	// Start the storage for this container
	err := c.StorageStart()
	if err != nil { return err }

	// Load the container AppArmor profile
	err = AALoadProfile(c)
	if err != nil { c.StorageStop(); return err }

	// Template anything that needs templating
	key := "volatile.apply_template"
	if c.localConfig[key] != "" {
		// Run any template that needs running
		err = c.templateApplyNow(c.localConfig[key])
		if err != nil { c.StorageStop(); return err }

		// Remove the volatile key from the DB
		err := dbContainerConfigRemove(c.daemon.db, c.id, key)
		if err != nil { c.StorageStop(); return err }
	}

	err = c.templateApplyNow("start")
	if err != nil { c.StorageStop(); return err }

	// Trigger a rebalance
	deviceTaskSchedulerTrigger("container", c.name, "started")

	// Apply network priority
	if c.expandedConfig["limits.network.priority"] != "" {
		go func(c *containerLXC) {
			c.fromHook = false
			err := c.setNetworkPriority()
			if err != nil {
				shared.Log.Error("Failed to apply network priority", log.Ctx{"container": c.name, "err": err})
			}
		}(c)
	}

	// Apply network limits
	for name, m := range c.expandedDevices {
		if m["type"] != "nic" {
			continue
		}

		if m["limits.max"] == "" && m["limits.ingress"] == "" && m["limits.egress"] == "" {
			continue
		}

		go func(c *containerLXC, name string, m shared.Device) {
			c.fromHook = false
			err = c.setNetworkLimits(name, m)
			if err != nil {
				shared.Log.Error("Failed to apply network limits", log.Ctx{"container": c.name, "err": err})
			}
		}(c, name, m)
	}

	return nil
}

// Container shutdown locking
func (c *containerLXC) setupStopping() *sync.WaitGroup {
	// Handle locking
	lxcStoppingContainersLock.Lock()
	defer lxcStoppingContainersLock.Unlock()

	// Existing entry
	wg, stopping := lxcStoppingContainers[c.id]
	if stopping {
		return wg
	}

	// Setup new entry
	lxcStoppingContainers[c.id] = &sync.WaitGroup{}

	go func(wg *sync.WaitGroup, id int) {
		wg.Wait()
		lxcStoppingContainersLock.Lock()
		defer lxcStoppingContainersLock.Unlock()
		delete(lxcStoppingContainers, id)
	}(lxcStoppingContainers[c.id], c.id)

	return lxcStoppingContainers[c.id]
}

// Stop functions
func (c *containerLXC) Stop(stateful bool) error {
	// Handle stateful stop
	if stateful {
		if err := findCriu("snapshot"); err != nil {
			return err
		}

		// Cleanup any existing state
		stateDir := c.StatePath()
		os.RemoveAll(stateDir)

		err := os.MkdirAll(stateDir, 0700)
		if err != nil { return err }

		// Checkpoint
		opts := lxc.CheckpointOptions{Directory: stateDir, Stop: true, Verbose: true}
		err = c.Checkpoint(opts)
		err2 := CollectCRIULogFile(c, stateDir, "snapshot", "dump")
		if err2 != nil {
			shared.Log.Warn("failed to collect criu log file", log.Ctx{"error": err2})
		}

		if err != nil { return err }

		c.stateful = true
		err = dbContainerSetStateful(c.daemon.db, c.id, true)
		if err != nil { return err }

		return nil
	}

	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return err }

	// Attempt to freeze the container first, helps massively with fork bombs
	c.Freeze()

	// Handle locking
	wg := c.setupStopping()

	// Stop the container
	wg.Add(1)
	if err := c.c.Stop(); err != nil {
		wg.Done()
		return err
	}

	// Mark ourselves as done
	wg.Done()

	// Wait for any other teardown routines to finish
	wg.Wait()

	return nil
}

func (c *containerLXC) Shutdown(timeout time.Duration) error {
	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return err }

	// Handle locking
	wg := c.setupStopping()

	// Shutdown the container
	wg.Add(1)
	if err := c.c.Shutdown(timeout); err != nil {
		wg.Done()
		return err
	}

	// Mark ourselves as done
	wg.Done()

	// Wait for any other teardown routines to finish
	wg.Wait()

	return nil
}

func (c *containerLXC) OnStop(target string) error {
	// Get locking
	lxcStoppingContainersLock.Lock()
	wg, stopping := lxcStoppingContainers[c.id]
	lxcStoppingContainersLock.Unlock()
	if wg != nil {
		wg.Add(1)
	}

	// Make sure we can't call go-lxc functions by mistake
	c.fromHook = true

	// Stop the storage for this container
	err := c.StorageStop()
	if err != nil {
		wg.Done()
		return err
	}

	// Unload the apparmor profile
	AAUnloadProfile(c)

	// FIXME: The go routine can go away once we can rely on LXC_TARGET
	go func(c *containerLXC, target string, wg *sync.WaitGroup) {
		c.fromHook = false

		// Unlock on return
		if wg != nil {
			defer wg.Done()
		}

		if target == "unknown" && stopping {
			target = "stop"
		}

		if target == "unknown" {
			time.Sleep(5 * time.Second)

			newContainer, err := containerLoadByName(c.daemon, c.Name())
			if err != nil {
				return
			}

			if newContainer.Id() != c.id {
				return
			}

			if newContainer.IsRunning() {
				return
			}
		}

		// Clean all the unix devices
		err = c.removeUnixDevices()
		if err != nil {
			shared.Log.Error("Unable to remove unix devices", log.Ctx{"err": err})
		}

		// Clean all the disk devices
		err = c.removeDiskDevices()
		if err != nil {
			shared.Log.Error("Unable to remove disk devices", log.Ctx{"err": err})
		}

		// Reboot the container
		if target == "reboot" {
			/* This part is a hack to work around a LXC bug where a
			   failure from a post-stop script doesn't prevent the
			   container from restarting. */
			ephemeral := c.ephemeral
			args := containerArgs{
				Architecture: c.Architecture(),
				Config:       c.LocalConfig(),
				Devices:      c.LocalDevices(),
				Ephemeral:    false,
				Profiles:     c.Profiles(),
			}
			c.Update(args, false)
			c.Stop(false)
			args.Ephemeral = ephemeral
			c.Update(args, true)

			// Start the container again
			c.Start(false)
			return
		}

		// Trigger a rebalance
		deviceTaskSchedulerTrigger("container", c.name, "stopped")

		// Destroy ephemeral containers
		if c.ephemeral {
			c.Delete()
		}
	}(c, target, wg)

	return nil
}

// Freezer functions
func (c *containerLXC) Freeze() error {
	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return err }

	return c.c.Freeze()
}

func (c *containerLXC) Unfreeze() error {
	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return err }

	return c.c.Unfreeze()
}

var LxcMonitorStateError = fmt.Errorf("Monitor is hung")

// Get lxc container state, with 1 second timeout
// If we don't get a reply, assume the lxc monitor is hung
func (c *containerLXC) getLxcState() (lxc.State, error) {
	if c.IsSnapshot() {
		return lxc.StateMap["STOPPED"], nil
	}

	monitor := make(chan lxc.State, 1)

	go func(c *lxc.Container) {
		monitor <- c.State()
	}(c.c)

	select {
	case state := <-monitor:
		return state, nil
	case <-time.After(5 * time.Second):
		return lxc.StateMap["FROZEN"], LxcMonitorStateError
	}
}

func (c *containerLXC) Render() (interface{}, error) {
	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return nil, err }

	// Ignore err as the arch string on error is correct (unknown)
	architectureName, _ := shared.ArchitectureName(c.architecture)

	if c.IsSnapshot() {
		return &shared.SnapshotInfo{
			Architecture:    architectureName,
			Config:          c.localConfig,
			CreationDate:    c.creationDate,
			Devices:         c.localDevices,
			Ephemeral:       c.ephemeral,
			ExpandedConfig:  c.expandedConfig,
			ExpandedDevices: c.expandedDevices,
			Name:            c.name,
			Profiles:        c.profiles,
			Stateful:        c.stateful,
		}, nil
	} else {
		// FIXME: Render shouldn't directly access the go-lxc struct
		cState, err := c.getLxcState()
		if err != nil {
			return nil, err
		}

		statusCode := shared.FromLXCState(int(cState))

		return &shared.ContainerInfo{
			Architecture:    architectureName,
			Config:          c.localConfig,
			CreationDate:    c.creationDate,
			Devices:         c.localDevices,
			Ephemeral:       c.ephemeral,
			ExpandedConfig:  c.expandedConfig,
			ExpandedDevices: c.expandedDevices,
			Name:            c.name,
			Profiles:        c.profiles,
			Status:          statusCode.String(),
			StatusCode:      statusCode,
			Stateful:        c.stateful,
		}, nil
	}
}

func (c *containerLXC) RenderState() (*shared.ContainerState, error) {
	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return nil, err }

	cState, err := c.getLxcState()
	if err != nil { return nil, err }

	statusCode := shared.FromLXCState(int(cState))
	status := shared.ContainerState{
		Status:     statusCode.String(),
		StatusCode: statusCode,
	}

	if c.IsRunning() {
		pid := c.InitPID()
		status.Disk = c.diskState()
		status.Memory = c.memoryState()
		status.Network = c.networkState()
		status.Pid = int64(pid)
		status.Processes = c.processesState()
	}

	return &status, nil
}

func (c *containerLXC) Snapshots() ([]container, error) {
	// Get all the snapshots
	snaps, err := dbContainerGetSnapshots(c.daemon.db, c.name)
	if err != nil { return nil, err }

	// Build the snapshot list
	containers := []container{}
	for _, snapName := range snaps {
		snap, err := containerLoadByName(c.daemon, snapName)
		if err != nil { return nil, err }

		containers = append(containers, snap)
	}

	return containers, nil
}

func (c *containerLXC) Restore(sourceContainer container) error {
	// Check if we can restore the container
	err := c.storage.ContainerCanRestore(c, sourceContainer)
	if err != nil { return err }

	/* let's also check for CRIU if necessary, before doing a bunch of
	 * filesystem manipulations */
	if shared.PathExists(c.StatePath()) {
		if err := findCriu("snapshot"); err != nil {
			return err
		}
	}

	// Stop the container
	wasRunning := false
	if c.IsRunning() {
		wasRunning = true
		if err := c.Stop(false); err != nil {
			shared.Log.Error(
				"Could not stop container",
				log.Ctx{
					"container": c.Name(),
					"err":       err})
			return err
		}
	}

	// Restore the rootfs
	err = c.storage.ContainerRestore(c, sourceContainer)
	if err != nil {
		shared.Log.Error("Restoring the filesystem failed",
			log.Ctx{
				"source":      sourceContainer.Name(),
				"destination": c.Name()})
		return err
	}

	// Restore the configuration
	args := containerArgs{
		Architecture: sourceContainer.Architecture(),
		Config:       sourceContainer.LocalConfig(),
		Devices:      sourceContainer.LocalDevices(),
		Ephemeral:    sourceContainer.IsEphemeral(),
		Profiles:     sourceContainer.Profiles(),
	}

	err = c.Update(args, false)
	if err != nil {
		shared.Log.Error("Restoring the configuration failed",
			log.Ctx{
				"source":      sourceContainer.Name(),
				"destination": c.Name()})
		return err
	}

	// If the container wasn't running but was stateful, should we restore
	// it as running?
	if shared.PathExists(c.StatePath()) {
		configPath, err := c.startCommon()
		if err != nil { return err }

		if !c.IsPrivileged() {
			if err := c.IdmapSet().ShiftRootfs(c.StatePath()); err != nil {
				return err
			}
		}

		out, err := exec.Command(
			execPath,
			"forkmigrate",
			c.name,
			c.daemon.lxcpath,
			configPath,
			c.StatePath()).CombinedOutput()

		if string(out) != "" {
			for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
				shared.Debugf("forkmigrate: %s", line)
			}
		}

		CollectCRIULogFile(c, c.StatePath(), "snapshot", "restore")
		if err != nil { return err }

		// Remove the state from the parent container; we only keep
		// this in snapshots.
		err2 := os.RemoveAll(c.StatePath())
		if err2 != nil {
			shared.Log.Error("failed to delete snapshot state", "path", c.StatePath(), "err", err2)
		}

		if err != nil { return err }

		return nil
	}

	// Restart the container
	if wasRunning {
		return c.Start(false)
	}

	return nil
}

func (c *containerLXC) cleanup() {
	// Unmount any leftovers
	c.removeUnixDevices()
	c.removeDiskDevices()

	// Remove the security profiles
	AADeleteProfile(c)
	SeccompDeleteProfile(c)

	// Remove the devices path
	os.Remove(c.DevicesPath())

	// Remove the shmounts path
	os.RemoveAll(shared.VarPath("shmounts", c.Name()))
}

func (c *containerLXC) Delete() error {
	if c.IsSnapshot() {
		// Remove the snapshot
		if err := c.storage.ContainerSnapshotDelete(c); err != nil {
			shared.Log.Warn("failed to delete snapshot", "name", c.Name(), "err", err)
		}
	} else {
		// Remove all snapshots
		if err := containerDeleteSnapshots(c.daemon, c.Name()); err != nil {
			shared.Log.Warn("failed to delete snapshots", "name", c.Name(), "err", err)
		}

		// Clean things up
		c.cleanup()

		// Delete the container from disk
		if shared.PathExists(c.Path()) {
			if err := c.storage.ContainerDelete(c); err != nil {
				return err
			}
		}
	}

	// Remove the database record
	if err := dbContainerRemove(c.daemon.db, c.Name()); err != nil {
		return err
	}

	return nil
}

func (c *containerLXC) Rename(newName string) error {
	oldName := c.Name()

	// Sanity checks
	if !c.IsSnapshot() && !shared.ValidHostname(newName) {
		return fmt.Errorf("Invalid container name")
	}

	if c.IsRunning() {
		return fmt.Errorf("renaming of running container not allowed")
	}

	// Clean things up
	c.cleanup()

	// Rename the logging path
	os.RemoveAll(shared.LogPath(newName))
	if shared.PathExists(c.LogPath()) {
		err := os.Rename(c.LogPath(), shared.LogPath(newName))
		if err != nil { return err }
	}

	// Rename the storage entry
	if c.IsSnapshot() {
		if err := c.storage.ContainerSnapshotRename(c, newName); err != nil {
			return err
		}
	} else {
		if err := c.storage.ContainerRename(c, newName); err != nil {
			return err
		}
	}

	// Rename the database entry
	if err := dbContainerRename(c.daemon.db, oldName, newName); err != nil {
		return err
	}

	if !c.IsSnapshot() {
		// Rename all the snapshots
		results, err := dbContainerGetSnapshots(c.daemon.db, oldName)
		if err != nil { return err }

		for _, sname := range results {
			// Rename the snapshot
			baseSnapName := filepath.Base(sname)
			newSnapshotName := newName + shared.SnapshotDelimiter + baseSnapName
			if err := dbContainerRename(c.daemon.db, sname, newSnapshotName); err != nil {
				return err
			}
		}
	}

	// Set the new name in the struct
	c.name = newName

	// Invalidate the go-lxc cache
	c.c = nil

	return nil
}

func (c *containerLXC) CGroupGet(key string) (string, error) {
	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return "", err }

	// Make sure the container is running
	if !c.IsRunning() {
		return "", fmt.Errorf("Can't get cgroups on a stopped container")
	}

	value := c.c.CgroupItem(key)
	return strings.Join(value, "\n"), nil
}

func (c *containerLXC) CGroupSet(key string, value string) error {
	// Load the go-lxc struct
	err := c.initLXC()
	if err != nil { return err }

	// Make sure the container is running
	if !c.IsRunning() {
		return fmt.Errorf("Can't set cgroups on a stopped container")
	}

	err = c.c.SetCgroupItem(key, value)
	if err != nil {
		return fmt.Errorf("Failed to set cgroup %s=\"%s\": %s", key, value, err)
	}

	return nil
}

func (c *containerLXC) ConfigKeySet(key string, value string) error {
	c.localConfig[key] = value

	args := containerArgs{
		Architecture: c.architecture,
		Config:       c.localConfig,
		Devices:      c.localDevices,
		Ephemeral:    c.ephemeral,
		Profiles:     c.profiles,
	}

	return c.Update(args, false)
}

func (c *containerLXC) Update(args containerArgs, userRequested bool) error {
	// Set sane defaults for unset keys
	if args.Architecture == 0 {
		args.Architecture = c.architecture
	}

	if args.Config == nil {
		args.Config = map[string]string{}
	}

	if args.Devices == nil {
		args.Devices = shared.Devices{}
	}

	if args.Profiles == nil {
		args.Profiles = []string{}
	}

	// Validate the new config
	err := containerValidConfig(args.Config, false, false)
	if err != nil { return err }

	// Validate the new devices
	err = containerValidDevices(args.Devices, false, false)
	if err != nil { return err }

	// Validate the new profiles
	profiles, err := dbProfiles(c.daemon.db)
	if err != nil { return err }

	for _, name := range args.Profiles {
		if !shared.StringInSlice(name, profiles) {
			return fmt.Errorf("Profile doesn't exist: %s", name)
		}
	}

	// Validate the new architecture
	if args.Architecture != 0 {
		_, err = shared.ArchitectureName(args.Architecture)
		if err != nil {
			return fmt.Errorf("Invalid architecture id: %s", err)
		}
	}

	// Check that volatile wasn't modified
	if userRequested {
		for k, v := range args.Config {
			if strings.HasPrefix(k, "volatile.") && c.localConfig[k] != v {
				return fmt.Errorf("Volatile keys are read-only.")
			}
		}

		for k, v := range c.localConfig {
			if strings.HasPrefix(k, "volatile.") && args.Config[k] != v {
				return fmt.Errorf("Volatile keys are read-only.")
			}
		}
	}

	// Get a copy of the old configuration
	oldArchitecture := 0
	err = shared.DeepCopy(&c.architecture, &oldArchitecture)
	if err != nil { return err }

	oldEphemeral := false
	err = shared.DeepCopy(&c.ephemeral, &oldEphemeral)
	if err != nil { return err }

	oldExpandedDevices := shared.Devices{}
	err = shared.DeepCopy(&c.expandedDevices, &oldExpandedDevices)
	if err != nil { return err }

	oldExpandedConfig := map[string]string{}
	err = shared.DeepCopy(&c.expandedConfig, &oldExpandedConfig)
	if err != nil { return err }

	oldLocalDevices := shared.Devices{}
	err = shared.DeepCopy(&c.localDevices, &oldLocalDevices)
	if err != nil { return err }

	oldLocalConfig := map[string]string{}
	err = shared.DeepCopy(&c.localConfig, &oldLocalConfig)
	if err != nil { return err }

	oldProfiles := []string{}
	err = shared.DeepCopy(&c.profiles, &oldProfiles)
	if err != nil { return err }

	// Define a function which reverts everything
	undoChanges := func() {
		c.architecture = oldArchitecture
		c.ephemeral = oldEphemeral
		c.expandedConfig = oldExpandedConfig
		c.expandedDevices = oldExpandedDevices
		c.localConfig = oldLocalConfig
		c.localDevices = oldLocalDevices
		c.profiles = oldProfiles
		c.initLXC()
		deviceTaskSchedulerTrigger("container", c.name, "changed")
	}

	// Apply the various changes
	c.architecture = args.Architecture
	c.ephemeral = args.Ephemeral
	c.localConfig = args.Config
	c.localDevices = args.Devices
	c.profiles = args.Profiles

	// Expand the config and refresh the LXC config
	err = c.expandConfig()
	if err != nil { undoChanges(); return err }

	err = c.expandDevices()
	if err != nil { undoChanges(); return err }

	err = c.initLXC()
	if err != nil { undoChanges(); return err }

	// Diff the configurations
	changedConfig := []string{}
	for key, _ := range oldExpandedConfig {
		if oldExpandedConfig[key] != c.expandedConfig[key] {
			if !shared.StringInSlice(key, changedConfig) {
				changedConfig = append(changedConfig, key)
			}
		}
	}

	for key, _ := range c.expandedConfig {
		if oldExpandedConfig[key] != c.expandedConfig[key] {
			if !shared.StringInSlice(key, changedConfig) {
				changedConfig = append(changedConfig, key)
			}
		}
	}

	// Diff the devices
	removeDevices, addDevices, updateDevices := oldExpandedDevices.Update(c.expandedDevices)

	// Do some validation of the config diff
	err = containerValidConfig(c.expandedConfig, false, true)
	if err != nil { undoChanges(); return err }

	// Do some validation of the devices diff
	err = containerValidDevices(c.expandedDevices, false, true)
	if err != nil { undoChanges(); return err }

	// If apparmor changed, re-validate the apparmor profile
	for _, key := range changedConfig {
		if key == "raw.apparmor" || key == "security.nesting" {
			err = AAParseProfile(c)
			if err != nil { undoChanges(); return err }
		}
	}

	// Apply disk quota changes
	for _, m := range addDevices {
		var oldRootfsSize string
		for _, m := range oldExpandedDevices {
			if m["type"] == "disk" && m["path"] == "/" {
				oldRootfsSize = m["size"]
				break
			}
		}

		if m["size"] != oldRootfsSize {
			size, err := shared.ParseByteSizeString(m["size"])
			if err != nil { undoChanges(); return err }
			err = c.storage.ContainerSetQuota(c, size)
			if err != nil { undoChanges(); return err }
		}
	}

	// Apply the live changes
	if c.IsRunning() {
		// Confirm that the rootfs source didn't change
		var oldRootfs shared.Device
		for _, m := range oldExpandedDevices {
			if m["type"] == "disk" && m["path"] == "/" {
				oldRootfs = m
				break
			}
		}

		var newRootfs shared.Device
		for _, m := range c.expandedDevices {
			if m["type"] == "disk" && m["path"] == "/" {
				newRootfs = m
				break
			}
		}

		if oldRootfs["source"] != newRootfs["source"] {
			undoChanges()
			return fmt.Errorf("Cannot change the rootfs path of a running container")
		}

		// Live update the container config
		for _, key := range changedConfig {
			value := c.expandedConfig[key]

			if key == "raw.apparmor" || key == "security.nesting" {
				// Update the AppArmor profile
				err = AALoadProfile(c)
				if err != nil { undoChanges(); return err }
			} else if key == "linux.kernel_modules" && value != "" {
				for _, module := range strings.Split(value, ",") {
					module = strings.TrimPrefix(module, " ")
					out, err := exec.Command("modprobe", module).CombinedOutput()
					if err != nil {
						undoChanges()
						return fmt.Errorf("Failed to load kernel module '%s': %s", module, out)
					}
				}
			} else if key == "limits.disk.priority" {
				if !cgBlkioController {
					continue
				}

				priorityInt := 5
				diskPriority := c.expandedConfig["limits.disk.priority"]
				if diskPriority != "" {
					priorityInt, err = strconv.Atoi(diskPriority)
					if err != nil { return err }
				}

				err = c.CGroupSet("blkio.weight", fmt.Sprintf("%d", priorityInt*100))
				if err != nil { return err }
			} else if key == "limits.memory" || strings.HasPrefix(key, "limits.memory.") {
				// Skip if no memory CGroup
				if !cgMemoryController {
					continue
				}

				// Set the new memory limit
				memory := c.expandedConfig["limits.memory"]
				memoryEnforce := c.expandedConfig["limits.memory.enforce"]
				memorySwap := c.expandedConfig["limits.memory.swap"]

				// Parse memory
				if memory == "" {
					memory = "-1"
				} else if strings.HasSuffix(memory, "%") {
					percent, err := strconv.ParseInt(strings.TrimSuffix(memory, "%"), 10, 64)
					if err != nil { return err }

					memoryTotal, err := deviceTotalMemory()
					if err != nil { return err }

					memory = fmt.Sprintf("%d", int64((memoryTotal/100)*percent))
				} else {
					valueInt, err := shared.ParseByteSizeString(memory)
					if err != nil { undoChanges(); return err }

					memory = fmt.Sprintf("%d", valueInt)
				}

				// Reset everything
				if cgSwapAccounting {
					err = c.CGroupSet("memory.memsw.limit_in_bytes", "-1")
					if err != nil { undoChanges(); return err }
				}

				err = c.CGroupSet("memory.limit_in_bytes", "-1")
				if err != nil { undoChanges(); return err }

				err = c.CGroupSet("memory.soft_limit_in_bytes", "-1")
				if err != nil { undoChanges(); return err }

				// Set the new values
				if memoryEnforce == "soft" {
					// Set new limit
					err = c.CGroupSet("memory.soft_limit_in_bytes", memory)
					if err != nil { undoChanges(); return err }
				} else {
					if cgSwapAccounting && (memorySwap == "" || shared.IsTrue(memorySwap)) {
						err = c.CGroupSet("memory.limit_in_bytes", memory)
						if err != nil { undoChanges(); return err }

						err = c.CGroupSet("memory.memsw.limit_in_bytes", memory)
						if err != nil { undoChanges(); return err }
					} else {
						err = c.CGroupSet("memory.limit_in_bytes", memory)
						if err != nil { undoChanges(); return err }
					}
				}

				// Configure the swappiness
				if key == "limits.memory.swap" || key == "limits.memory.swap.priority" {
					memorySwap := c.expandedConfig["limits.memory.swap"]
					memorySwapPriority := c.expandedConfig["limits.memory.swap.priority"]
					if memorySwap != "" && !shared.IsTrue(memorySwap) {
						err = c.CGroupSet("memory.swappiness", "0")
						if err != nil { undoChanges(); return err }
					} else {
						priority := 0
						if memorySwapPriority != "" {
							priority, err = strconv.Atoi(memorySwapPriority)
							if err != nil { undoChanges(); return err }
						}

						err = c.CGroupSet("memory.swappiness", fmt.Sprintf("%d", 60-10+priority))
						if err != nil { undoChanges(); return err }
					}
				}
			} else if key == "limits.network.priority" {
				err := c.setNetworkPriority()
				if err != nil { return err }
			} else if key == "limits.cpu" {
				// Trigger a scheduler re-run
deviceTaskSchedulerTrigger("container", c.name, "changed") } else if key == "limits.cpu.priority" || key == "limits.cpu.allowance" { // Skip if no cpu CGroup if !cgCpuController { continue } // Apply new CPU limits cpuShares, cpuCfsQuota, cpuCfsPeriod, err := deviceParseCPU(c.expandedConfig["limits.cpu.allowance"], c.expandedConfig["limits.cpu.priority"]) if err != nil { undoChanges() return err } err = c.CGroupSet("cpu.shares", cpuShares) if err != nil { undoChanges() return err } err = c.CGroupSet("cpu.cfs_period_us", cpuCfsPeriod) if err != nil { undoChanges() return err } err = c.CGroupSet("cpu.cfs_quota_us", cpuCfsQuota) if err != nil { undoChanges() return err } } else if key == "limits.processes" { if !cgPidsController { continue } if value == "" { err = c.CGroupSet("pids.max", "max") if err != nil { undoChanges() return err } } else { valueInt, err := strconv.ParseInt(value, 10, 64) if err != nil { undoChanges() return err } err = c.CGroupSet("pids.max", fmt.Sprintf("%d", valueInt)) if err != nil { undoChanges() return err } } } } // Live update the devices for k, m := range removeDevices { if shared.StringInSlice(m["type"], []string{"unix-char", "unix-block"}) { err = c.removeUnixDevice(k, m) if err != nil { undoChanges() return err } } else if m["type"] == "disk" && m["path"] != "/" { err = c.removeDiskDevice(k, m) if err != nil { undoChanges() return err } } else if m["type"] == "nic" { err = c.removeNetworkDevice(k, m) if err != nil { undoChanges() return err } } } for k, m := range addDevices { if shared.StringInSlice(m["type"], []string{"unix-char", "unix-block"}) { err = c.insertUnixDevice(k, m) if err != nil { undoChanges() return err } } else if m["type"] == "disk" && m["path"] != "/" { err = c.insertDiskDevice(k, m) if err != nil { undoChanges() return err } } else if m["type"] == "nic" { err = c.insertNetworkDevice(k, m) if err != nil { undoChanges() return err } } } updateDiskLimit := false for k, m := range updateDevices { if m["type"] == 
"disk" { updateDiskLimit = true } else if m["type"] == "nic" { err = c.setNetworkLimits(k, m) if err != nil { undoChanges() return err } } } // Disk limits parse all devices, so just apply them once if updateDiskLimit && cgBlkioController { diskLimits, err := c.getDiskLimits() if err != nil { undoChanges() return err } for block, limit := range diskLimits { err = c.CGroupSet("blkio.throttle.read_bps_device", fmt.Sprintf("%s %d", block, limit.readBps)) if err != nil { undoChanges() return err } err = c.CGroupSet("blkio.throttle.read_iops_device", fmt.Sprintf("%s %d", block, limit.readIops)) if err != nil { undoChanges() return err } err = c.CGroupSet("blkio.throttle.write_bps_device", fmt.Sprintf("%s %d", block, limit.writeBps)) if err != nil { undoChanges() return err } err = c.CGroupSet("blkio.throttle.write_iops_device", fmt.Sprintf("%s %d", block, limit.writeIops)) if err != nil { undoChanges() return err } } } } // Finally, apply the changes to the database tx, err := dbBegin(c.daemon.db) if err != nil { undoChanges() return err } err = dbContainerConfigClear(tx, c.id) if err != nil { tx.Rollback() undoChanges() return err } err = dbContainerConfigInsert(tx, c.id, args.Config) if err != nil { tx.Rollback() undoChanges() return err } err = dbContainerProfilesInsert(tx, c.id, args.Profiles) if err != nil { tx.Rollback() undoChanges() return err } err = dbDevicesAdd(tx, "container", int64(c.id), args.Devices) if err != nil { tx.Rollback() undoChanges() return err } err = dbContainerUpdate(tx, c.id, c.architecture, c.ephemeral) if err != nil { tx.Rollback() undoChanges() return err } if err := txCommit(tx); err != nil { undoChanges() return err } return nil } func (c *containerLXC) Export(w io.Writer) error { if c.IsRunning() { return fmt.Errorf("Cannot export a running container as an image") } // Start the storage err := c.StorageStart() if err != nil { return err } defer c.StorageStop() // Unshift the container idmap, err := c.LastIdmapSet() if err != nil { 
return err } if idmap != nil { if err := idmap.UnshiftRootfs(c.RootfsPath()); err != nil { return err } defer idmap.ShiftRootfs(c.RootfsPath()) } // Create the tarball tw := tar.NewWriter(w) // Keep track of the first path we saw for each path with nlink>1 linkmap := map[uint64]string{} cDir := c.Path() // Path inside the tar image is the pathname starting after cDir offset := len(cDir) + 1 writeToTar := func(path string, fi os.FileInfo, err error) error { if err := c.tarStoreFile(linkmap, offset, tw, path, fi); err != nil { shared.Debugf("Error tarring up %s: %s", path, err) return err } return nil } // Look for metadata.yaml fnam := filepath.Join(cDir, "metadata.yaml") if !shared.PathExists(fnam) { // Generate a new metadata.yaml f, err := ioutil.TempFile("", "lxd_lxd_metadata_") if err != nil { tw.Close() return err } defer os.Remove(f.Name()) // Get the container's architecture var arch string if c.IsSnapshot() { parentName := strings.SplitN(c.name, shared.SnapshotDelimiter, 2)[0] parent, err := containerLoadByName(c.daemon, parentName) if err != nil { tw.Close() return err } arch, _ = shared.ArchitectureName(parent.Architecture()) } else { arch, _ = shared.ArchitectureName(c.architecture) } if arch == "" { arch, err = shared.ArchitectureName(c.daemon.architectures[0]) if err != nil { return err } } // Fill in the metadata meta := imageMetadata{} meta.Architecture = arch meta.CreationDate = time.Now().UTC().Unix() data, err := yaml.Marshal(&meta) if err != nil { tw.Close() return err } // Write the actual file f.Write(data) f.Close() fi, err := os.Lstat(f.Name()) if err != nil { tw.Close() return err } tmpOffset := len(path.Dir(f.Name())) + 1 if err := c.tarStoreFile(linkmap, tmpOffset, tw, f.Name(), fi); err != nil { shared.Debugf("Error writing to tarfile: %s", err) tw.Close() return err } fnam = f.Name() } else { // Include metadata.yaml in the tarball fi, err := os.Lstat(fnam) if err != nil { shared.Debugf("Error statting %s during export", fnam) tw.Close() 
return err } if err := c.tarStoreFile(linkmap, offset, tw, fnam, fi); err != nil { shared.Debugf("Error writing to tarfile: %s", err) tw.Close() return err } } // Include all the rootfs files fnam = c.RootfsPath() filepath.Walk(fnam, writeToTar) // Include all the templates fnam = c.TemplatesPath() if shared.PathExists(fnam) { filepath.Walk(fnam, writeToTar) } return tw.Close() } func (c *containerLXC) Checkpoint(opts lxc.CheckpointOptions) error { // Load the go-lxc struct err := c.initLXC() if err != nil { return err } return c.c.Checkpoint(opts) } func (c *containerLXC) TemplateApply(trigger string) error { // "create" and "copy" are deferred until next start if shared.StringInSlice(trigger, []string{"create", "copy"}) { // The two events are mutually exclusive so only keep the last one err := c.ConfigKeySet("volatile.apply_template", trigger) if err != nil { return err } } return c.templateApplyNow(trigger) } func (c *containerLXC) templateApplyNow(trigger string) error { // If there's no metadata, just return fname := filepath.Join(c.Path(), "metadata.yaml") if !shared.PathExists(fname) { return nil } // Parse the metadata content, err := ioutil.ReadFile(fname) if err != nil { return err } metadata := new(imageMetadata) err = yaml.Unmarshal(content, &metadata) if err != nil { return fmt.Errorf("Could not parse %s: %v", fname, err) } // Go through the templates for templatePath, template := range metadata.Templates { var w *os.File // Check if the template should be applied now found := false for _, tplTrigger := range template.When { if tplTrigger == trigger { found = true break } } if !found { continue } // Open the file to template, create if needed fullpath := filepath.Join(c.RootfsPath(), strings.TrimLeft(templatePath, "/")) if shared.PathExists(fullpath) { if template.CreateOnly { continue } // Open the existing file w, err = os.Create(fullpath) if err != nil { return err } } else { // Create a new one uid := 0 gid := 0 // Get the right uid and gid for 
the container if !c.IsPrivileged() { uid, gid = c.idmapset.ShiftIntoNs(0, 0) } // Create the directories leading to the file shared.MkdirAllOwner(path.Dir(fullpath), 0755, uid, gid) // Create the file itself w, err = os.Create(fullpath) if err != nil { return err } // Fix ownership and mode if !c.IsPrivileged() { w.Chown(uid, gid) } w.Chmod(0644) } defer w.Close() // Read the template tplString, err := ioutil.ReadFile(filepath.Join(c.TemplatesPath(), template.Template)) if err != nil { return err } tpl, err := pongo2.FromString("{% autoescape off %}" + string(tplString) + "{% endautoescape %}") if err != nil { return err } // Figure out the architecture arch, err := shared.ArchitectureName(c.architecture) if err != nil { arch, err = shared.ArchitectureName(c.daemon.architectures[0]) if err != nil { return err } } // Generate the metadata containerMeta := make(map[string]string) containerMeta["name"] = c.name containerMeta["architecture"] = arch if c.ephemeral { containerMeta["ephemeral"] = "true" } else { containerMeta["ephemeral"] = "false" } if c.IsPrivileged() { containerMeta["privileged"] = "true" } else { containerMeta["privileged"] = "false" } configGet := func(confKey, confDefault *pongo2.Value) *pongo2.Value { val, ok := c.expandedConfig[confKey.String()] if !ok { return confDefault } return pongo2.AsValue(strings.TrimRight(val, "\r\n")) } // Render the template tpl.ExecuteWriter(pongo2.Context{"trigger": trigger, "path": templatePath, "container": containerMeta, "config": c.expandedConfig, "devices": c.expandedDevices, "properties": template.Properties, "config_get": configGet}, w) } return nil } func (c *containerLXC) FilePull(srcpath string, dstpath string) (int, int, os.FileMode, error) { // Setup container storage if needed if !c.IsRunning() { err := c.StorageStart() if err != nil { return -1, -1, 0, err } } // Get the file from the container out, err := exec.Command( execPath, "forkgetfile", c.RootfsPath(), fmt.Sprintf("%d", c.InitPID()), dstpath, 
srcpath, ).CombinedOutput() // Tear down container storage if needed if !c.IsRunning() { err := c.StorageStop() if err != nil { return -1, -1, 0, err } } uid := -1 gid := -1 mode := -1 // Process forkgetfile response for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") { if line == "" { continue } // Extract errors if strings.HasPrefix(line, "error: ") { return -1, -1, 0, fmt.Errorf(strings.TrimPrefix(line, "error: ")) } // Extract the uid if strings.HasPrefix(line, "uid: ") { uid, err = strconv.Atoi(strings.TrimPrefix(line, "uid: ")) if err != nil { return -1, -1, 0, err } continue } // Extract the gid if strings.HasPrefix(line, "gid: ") { gid, err = strconv.Atoi(strings.TrimPrefix(line, "gid: ")) if err != nil { return -1, -1, 0, err } continue } // Extract the mode if strings.HasPrefix(line, "mode: ") { mode, err = strconv.Atoi(strings.TrimPrefix(line, "mode: ")) if err != nil { return -1, -1, 0, err } continue } shared.Debugf("forkgetfile: %s", line) } if err != nil { return -1, -1, 0, fmt.Errorf( "Error calling 'lxd forkgetfile %s %d %s': err='%v'", dstpath, c.InitPID(), srcpath, err) } // Unmap uid and gid if needed idmapset, err := c.LastIdmapSet() if err != nil { return -1, -1, 0, err } if idmapset != nil { uid, gid = idmapset.ShiftFromNs(uid, gid) } return uid, gid, os.FileMode(mode), nil } func (c *containerLXC) FilePush(srcpath string, dstpath string, uid int, gid int, mode int) error { var rootUid = 0 var rootGid = 0 // Map uid and gid if needed idmapset, err := c.LastIdmapSet() if err != nil { return err } if idmapset != nil { uid, gid = idmapset.ShiftIntoNs(uid, gid) rootUid, rootGid = idmapset.ShiftIntoNs(0, 0) } // Setup container storage if needed if !c.IsRunning() { err := c.StorageStart() if err != nil { return err } } // Push the file to the container out, err := exec.Command( execPath, "forkputfile", c.RootfsPath(), fmt.Sprintf("%d", c.InitPID()), srcpath, dstpath, fmt.Sprintf("%d", uid), fmt.Sprintf("%d", gid), 
fmt.Sprintf("%d", mode), fmt.Sprintf("%d", rootUid), fmt.Sprintf("%d", rootGid), fmt.Sprintf("%d", int(os.FileMode(0640)&os.ModePerm)), ).CombinedOutput() // Tear down container storage if needed if !c.IsRunning() { err := c.StorageStop() if err != nil { return err } } // Process forkputfile response if string(out) != "" { if strings.HasPrefix(string(out), "error:") { return fmt.Errorf(strings.TrimPrefix(strings.TrimSuffix(string(out), "\n"), "error: ")) } for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") { shared.Debugf("forkgetfile: %s", line) } } if err != nil { return fmt.Errorf( "Error calling 'lxd forkputfile %s %d %s %d %d %d': err='%v'", srcpath, c.InitPID(), dstpath, uid, gid, mode, err) } return nil } func (c *containerLXC) Exec(command []string, env map[string]string, stdin *os.File, stdout *os.File, stderr *os.File) (int, error) { envSlice := []string{} for k, v := range env { envSlice = append(envSlice, fmt.Sprintf("%s=%s", k, v)) } args := []string{execPath, "forkexec", c.name, c.daemon.lxcpath, filepath.Join(c.LogPath(), "lxc.conf")} args = append(args, "--") args = append(args, "env") args = append(args, envSlice...) args = append(args, "--") args = append(args, "cmd") args = append(args, command...) 
	cmd := exec.Cmd{}
	cmd.Path = execPath
	cmd.Args = args
	cmd.Stdin = stdin
	cmd.Stdout = stdout
	cmd.Stderr = stderr

	err := cmd.Run()
	if err != nil {
		exitErr, ok := err.(*exec.ExitError)
		if ok {
			status, ok := exitErr.Sys().(syscall.WaitStatus)
			if ok {
				return status.ExitStatus(), nil
			}
		}

		return -1, err
	}

	return 0, nil
}

func (c *containerLXC) diskState() map[string]shared.ContainerStateDisk {
	disk := map[string]shared.ContainerStateDisk{}

	for name, d := range c.expandedDevices {
		if d["type"] != "disk" {
			continue
		}

		if d["path"] != "/" {
			continue
		}

		usage, err := c.storage.ContainerGetUsage(c)
		if err != nil {
			continue
		}

		disk[name] = shared.ContainerStateDisk{Usage: usage}
	}

	return disk
}

func (c *containerLXC) memoryState() shared.ContainerStateMemory {
	memory := shared.ContainerStateMemory{}

	if !cgMemoryController {
		return memory
	}

	// Memory in bytes
	value, err := c.CGroupGet("memory.usage_in_bytes")
	valueInt, err := strconv.ParseInt(value, 10, 64)
	if err != nil {
		valueInt = -1
	}
	memory.Usage = valueInt

	// Memory peak in bytes
	value, err = c.CGroupGet("memory.max_usage_in_bytes")
	valueInt, err = strconv.ParseInt(value, 10, 64)
	if err != nil {
		valueInt = -1
	}
	memory.UsagePeak = valueInt

	if cgSwapAccounting {
		// Swap in bytes
		value, err := c.CGroupGet("memory.memsw.usage_in_bytes")
		valueInt, err := strconv.ParseInt(value, 10, 64)
		if err != nil {
			valueInt = -1
		}
		memory.SwapUsage = valueInt - memory.Usage

		// Swap peak in bytes
		value, err = c.CGroupGet("memory.memsw.max_usage_in_bytes")
		valueInt, err = strconv.ParseInt(value, 10, 64)
		if err != nil {
			valueInt = -1
		}
		memory.SwapUsagePeak = valueInt - memory.UsagePeak
	}

	return memory
}

func (c *containerLXC) networkState() map[string]shared.ContainerStateNetwork {
	result := map[string]shared.ContainerStateNetwork{}

	pid := c.InitPID()
	if pid < 1 {
		return result
	}

	// Get the network state from the container
	out, err := exec.Command(
		execPath,
		"forkgetnet",
		fmt.Sprintf("%d", pid)).CombinedOutput()

	// Process forkgetnet response
	if err != nil {
		shared.Log.Error("Error calling 'lxd forkgetnet'", log.Ctx{"container": c.name, "output": string(out), "pid": pid})
		return result
	}

	networks := map[string]shared.ContainerStateNetwork{}

	err = json.Unmarshal(out, &networks)
	if err != nil {
		shared.Log.Error("Failure to read forkgetnet json", log.Ctx{"container": c.name, "err": err})
		return result
	}

	// Add HostName field
	for netName, net := range networks {
		net.HostName = c.getHostInterface(netName)
		result[netName] = net
	}

	return result
}

func (c *containerLXC) processesState() int64 {
	// Return 0 if not running
	pid := c.InitPID()
	if pid == -1 {
		return 0
	}

	if cgPidsController {
		value, err := c.CGroupGet("pids.current")
		valueInt, err := strconv.ParseInt(value, 10, 64)
		if err != nil {
			return -1
		}

		return valueInt
	}

	pids := []int64{int64(pid)}

	// Go through the pid list, adding new pids at the end so we go through them all
	for i := 0; i < len(pids); i++ {
		fname := fmt.Sprintf("/proc/%d/task/%d/children", pids[i], pids[i])
		fcont, err := ioutil.ReadFile(fname)
		if err != nil {
			// the process terminated during execution of this loop
			continue
		}

		content := strings.Split(string(fcont), " ")
		for j := 0; j < len(content); j++ {
			pid, err := strconv.ParseInt(content[j], 10, 64)
			if err == nil {
				pids = append(pids, pid)
			}
		}
	}

	return int64(len(pids))
}

func (c *containerLXC) tarStoreFile(linkmap map[uint64]string, offset int, tw *tar.Writer, path string, fi os.FileInfo) error {
	var err error
	var major, minor, nlink int
	var ino uint64

	link := ""
	if fi.Mode()&os.ModeSymlink == os.ModeSymlink {
		link, err = os.Readlink(path)
		if err != nil {
			return err
		}
	}

	hdr, err := tar.FileInfoHeader(fi, link)
	if err != nil {
		return err
	}

	hdr.Name = path[offset:]
	if fi.IsDir() || fi.Mode()&os.ModeSymlink == os.ModeSymlink {
		hdr.Size = 0
	} else {
		hdr.Size = fi.Size()
	}

	hdr.Uid, hdr.Gid, major, minor, ino, nlink, err = shared.GetFileStat(path)
	if err != nil {
		return fmt.Errorf("error getting file info: %s", err)
	}

	// Unshift the id under /rootfs/ for unpriv containers
	if !c.IsPrivileged() && strings.HasPrefix(hdr.Name, "/rootfs") {
		hdr.Uid, hdr.Gid = c.idmapset.ShiftFromNs(hdr.Uid, hdr.Gid)
		if hdr.Uid == -1 || hdr.Gid == -1 {
			return nil
		}
	}

	if major != -1 {
		hdr.Devmajor = int64(major)
		hdr.Devminor = int64(minor)
	}

	// If it's a hardlink we've already seen use the old name
	if fi.Mode().IsRegular() && nlink > 1 {
		if firstpath, found := linkmap[ino]; found {
			hdr.Typeflag = tar.TypeLink
			hdr.Linkname = firstpath
			hdr.Size = 0
		} else {
			linkmap[ino] = hdr.Name
		}
	}

	// TODO: handle xattrs

	if err := tw.WriteHeader(hdr); err != nil {
		return fmt.Errorf("error writing header: %s", err)
	}

	if hdr.Typeflag == tar.TypeReg {
		f, err := os.Open(path)
		if err != nil {
			return fmt.Errorf("tarStoreFile: error opening file: %s", err)
		}
		defer f.Close()

		if _, err := io.Copy(tw, f); err != nil {
			return fmt.Errorf("error copying file %s", err)
		}
	}

	return nil
}

// Storage functions
func (c *containerLXC) Storage() storage {
	return c.storage
}

func (c *containerLXC) StorageStart() error {
	if c.IsSnapshot() {
		return c.storage.ContainerSnapshotStart(c)
	}

	return c.storage.ContainerStart(c)
}

func (c *containerLXC) StorageStop() error {
	if c.IsSnapshot() {
		return c.storage.ContainerSnapshotStop(c)
	}

	return c.storage.ContainerStop(c)
}

// Mount handling
func (c *containerLXC) insertMount(source, target, fstype string, flags int) error {
	var err error

	// Get the init PID
	pid := c.InitPID()
	if pid == -1 {
		// Container isn't running
		return fmt.Errorf("Can't insert mount into stopped container")
	}

	// Create the temporary mount target
	var tmpMount string
	if shared.IsDir(source) {
		tmpMount, err = ioutil.TempDir(shared.VarPath("shmounts", c.name), "lxdmount_")
		if err != nil {
			return fmt.Errorf("Failed to create shmounts path: %s", err)
		}
	} else {
		f, err := ioutil.TempFile(shared.VarPath("shmounts", c.name), "lxdmount_")
		if err != nil {
			return fmt.Errorf("Failed to create shmounts path: %s", err)
		}

		tmpMount = f.Name()
		f.Close()
	}
	defer os.Remove(tmpMount)

	// Mount the filesystem
	err = syscall.Mount(source, tmpMount, fstype, uintptr(flags), "")
	if err != nil {
		return fmt.Errorf("Failed to setup temporary mount: %s", err)
	}
	defer syscall.Unmount(tmpMount, syscall.MNT_DETACH)

	// Move the mount inside the container
	mntsrc := filepath.Join("/dev/.lxd-mounts", filepath.Base(tmpMount))
	pidStr := fmt.Sprintf("%d", pid)

	out, err := exec.Command(execPath, "forkmount", pidStr, mntsrc, target).CombinedOutput()

	if string(out) != "" {
		for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
			shared.Debugf("forkmount: %s", line)
		}
	}

	if err != nil {
		return fmt.Errorf(
			"Error calling 'lxd forkmount %s %s %s': err='%v'",
			pidStr,
			mntsrc,
			target,
			err)
	}

	return nil
}

func (c *containerLXC) removeMount(mount string) error {
	// Get the init PID
	pid := c.InitPID()
	if pid == -1 {
		// Container isn't running
		return fmt.Errorf("Can't remove mount from stopped container")
	}

	// Remove the mount from the container
	pidStr := fmt.Sprintf("%d", pid)
	out, err := exec.Command(execPath, "forkumount", pidStr, mount).CombinedOutput()

	if string(out) != "" {
		for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
			shared.Debugf("forkumount: %s", line)
		}
	}

	if err != nil {
		return fmt.Errorf(
			"Error calling 'lxd forkumount %s %s': err='%v'",
			pidStr,
			mount,
			err)
	}

	return nil
}

// Unix devices handling
func (c *containerLXC) createUnixDevice(name string, m shared.Device) (string, error) {
	var err error
	var major, minor int

	// Our device paths
	srcPath := m["path"]
	tgtPath := strings.TrimPrefix(srcPath, "/")
	devName := fmt.Sprintf("unix.%s", strings.Replace(tgtPath, "/", "-", -1))
	devPath := filepath.Join(c.DevicesPath(), devName)

	// Get the major/minor of the device we want to create
	if m["major"] == "" && m["minor"] == "" {
		// If no major and minor are set, use those from the device on the host
		_, major, minor, err = deviceGetAttributes(srcPath)
		if err != nil {
			return "", fmt.Errorf("Failed to get device attributes: %s", err)
		}
	} else if m["major"] == "" || m["minor"] == "" {
		return "", fmt.Errorf("Both major and minor must be supplied for devices")
	} else {
		major, err = strconv.Atoi(m["major"])
		if err != nil {
			return "", fmt.Errorf("Bad major %s in device %s", m["major"], m["path"])
		}

		minor, err = strconv.Atoi(m["minor"])
		if err != nil {
			return "", fmt.Errorf("Bad minor %s in device %s", m["minor"], m["path"])
		}
	}

	// Get the device mode
	mode := os.FileMode(0660)
	if m["mode"] != "" {
		tmp, err := deviceModeOct(m["mode"])
		if err != nil {
			return "", fmt.Errorf("Bad mode %s in device %s", m["mode"], m["path"])
		}
		mode = os.FileMode(tmp)
	}

	if m["type"] == "unix-block" {
		mode |= syscall.S_IFBLK
	} else {
		mode |= syscall.S_IFCHR
	}

	// Get the device owner
	uid := 0
	gid := 0

	if m["uid"] != "" {
		uid, err = strconv.Atoi(m["uid"])
		if err != nil {
			return "", fmt.Errorf("Invalid uid %s in device %s", m["uid"], m["path"])
		}
	}

	if m["gid"] != "" {
		gid, err = strconv.Atoi(m["gid"])
		if err != nil {
			return "", fmt.Errorf("Invalid gid %s in device %s", m["gid"], m["path"])
		}
	}

	// Create the devices directory if missing
	if !shared.PathExists(c.DevicesPath()) {
		// Capture the error (it was previously discarded)
		err = os.Mkdir(c.DevicesPath(), 0711)
		if err != nil {
			return "", fmt.Errorf("Failed to create devices path: %s", err)
		}
	}

	// Clean any existing entry
	if shared.PathExists(devPath) {
		err = os.Remove(devPath)
		if err != nil {
			return "", fmt.Errorf("Failed to remove existing entry: %s", err)
		}
	}

	// Create the new entry
	if err := syscall.Mknod(devPath, uint32(mode), minor|(major<<8)); err != nil {
		return "", fmt.Errorf("Failed to create device %s for %s: %s", devPath, m["path"], err)
	}

	if err := os.Chown(devPath, uid, gid); err != nil {
		return "", fmt.Errorf("Failed to chown device %s: %s", devPath, err)
	}

	// Needed as mknod respects the umask
	if err := os.Chmod(devPath, mode); err != nil {
		return "", fmt.Errorf("Failed to chmod device %s: %s", devPath, err)
	}

	if c.idmapset != nil {
		if err := c.idmapset.ShiftFile(devPath); err != nil {
			// uidshift failing is weird, but not a big problem. Log and proceed
			shared.Debugf("Failed to uidshift device %s: %s\n", m["path"], err)
		}
	}

	return devPath, nil
}

func (c *containerLXC) insertUnixDevice(name string, m shared.Device) error {
	// Check that the container is running
	if !c.IsRunning() {
		return fmt.Errorf("Can't insert device into stopped container")
	}

	// Create the device on the host
	devPath, err := c.createUnixDevice(name, m)
	if err != nil {
		return fmt.Errorf("Failed to setup device: %s", err)
	}

	// Bind-mount it into the container
	tgtPath := strings.TrimSuffix(m["path"], "/")
	err = c.insertMount(devPath, tgtPath, "none", syscall.MS_BIND)
	if err != nil {
		return fmt.Errorf("Failed to add mount for device: %s", err)
	}

	// Add the new device cgroup rule
	dType, dMajor, dMinor, err := deviceGetAttributes(devPath)
	if err != nil {
		return fmt.Errorf("Failed to get device attributes: %s", err)
	}

	if err := c.CGroupSet("devices.allow", fmt.Sprintf("%s %d:%d rwm", dType, dMajor, dMinor)); err != nil {
		return fmt.Errorf("Failed to add cgroup rule for device")
	}

	return nil
}

func (c *containerLXC) removeUnixDevice(name string, m shared.Device) error {
	// Check that the container is running
	pid := c.InitPID()
	if pid == -1 {
		return fmt.Errorf("Can't remove device from stopped container")
	}

	// Figure out the paths
	srcPath := m["path"]
	tgtPath := strings.TrimPrefix(srcPath, "/")
	devName := fmt.Sprintf("unix.%s", strings.Replace(tgtPath, "/", "-", -1))
	devPath := filepath.Join(c.DevicesPath(), devName)

	// Remove the device cgroup rule
	dType, dMajor, dMinor, err := deviceGetAttributes(devPath)
	if err != nil {
		return err
	}

	err = c.CGroupSet("devices.deny", fmt.Sprintf("%s %d:%d rwm", dType, dMajor, dMinor))
	if err != nil {
		return err
	}

	// Remove the bind-mount from the container
	ctnPath := fmt.Sprintf("/proc/%d/root/%s", pid, tgtPath)
	if shared.PathExists(ctnPath) {
		err = c.removeMount(m["path"])
		if err != nil {
			return fmt.Errorf("Error unmounting the device: %s", err)
		}
		err = os.Remove(ctnPath)
		if err != nil {
			return fmt.Errorf("Error removing the device: %s", err)
		}
	}

	// Remove the host side
	err = os.Remove(devPath)
	if err != nil {
		return err
	}

	return nil
}

func (c *containerLXC) removeUnixDevices() error {
	// Check that we indeed have devices to remove
	if !shared.PathExists(c.DevicesPath()) {
		return nil
	}

	// Load the directory listing
	dents, err := ioutil.ReadDir(c.DevicesPath())
	if err != nil {
		return err
	}

	// Go through all the unix devices
	for _, f := range dents {
		// Skip non-Unix devices
		if !strings.HasPrefix(f.Name(), "unix.") {
			continue
		}

		// Remove the entry
		devicePath := filepath.Join(c.DevicesPath(), f.Name())
		err := os.Remove(devicePath)
		if err != nil {
			shared.Log.Error("failed removing unix device", log.Ctx{"err": err, "path": devicePath})
		}
	}

	return nil
}

// Network device handling
func (c *containerLXC) createNetworkDevice(name string, m shared.Device) (string, error) {
	var dev, n1 string

	if shared.StringInSlice(m["nictype"], []string{"bridged", "p2p", "macvlan"}) {
		// Host Virtual NIC name
		if m["host_name"] != "" {
			n1 = m["host_name"]
		} else {
			n1 = deviceNextVeth()
		}
	}

	// Handle bridged and p2p
	if shared.StringInSlice(m["nictype"], []string{"bridged", "p2p"}) {
		n2 := deviceNextVeth()

		err := exec.Command("ip", "link", "add", n1, "type", "veth", "peer", "name", n2).Run()
		if err != nil {
			return "", fmt.Errorf("Failed to create the veth interface: %s", err)
		}

		err = exec.Command("ip", "link", "set", n1, "up").Run()
		if err != nil {
			return "", fmt.Errorf("Failed to bring up the veth interface %s: %s", n1, err)
		}

		if m["nictype"] == "bridged" {
			err = exec.Command("ip", "link", "set", n1, "master", m["parent"]).Run()
			if err != nil {
				deviceRemoveInterface(n2)
				return "", fmt.Errorf("Failed to add interface to bridge: %s", err)
			}
		}

		dev = n2
	}

	// Handle physical
	if m["nictype"] == "physical" {
		dev = m["parent"]
	}

	// Handle macvlan
	if m["nictype"] == "macvlan" {
		err := exec.Command("ip", "link", "add", n1, "link", m["parent"], "type", "macvlan", "mode", "bridge").Run()
		if err != nil {
			return "", fmt.Errorf("Failed to create the new macvlan interface: %s", err)
		}

		dev = n1
	}

	// Set the MAC address
	if m["hwaddr"] != "" {
		err := exec.Command("ip", "link", "set", "dev", dev, "address", m["hwaddr"]).Run()
		if err != nil {
			deviceRemoveInterface(dev)
			return "", fmt.Errorf("Failed to set the MAC address: %s", err)
		}
	}

	// Bring the interface up
	err := exec.Command("ip", "link", "set", "dev", dev, "up").Run()
	if err != nil {
		deviceRemoveInterface(dev)
		return "", fmt.Errorf("Failed to bring up the interface: %s", err)
	}

	return dev, nil
}

func (c *containerLXC) fillNetworkDevice(name string, m shared.Device) (shared.Device, error) {
	newDevice := shared.Device{}
	err := shared.DeepCopy(&m, &newDevice)
	if err != nil {
		return nil, err
	}

	// Function to try and guess an available name
	nextInterfaceName := func() (string, error) {
		devNames := []string{}

		// Include all static interface names
		for _, v := range c.expandedDevices {
			if v["name"] != "" && !shared.StringInSlice(v["name"], devNames) {
				devNames = append(devNames, v["name"])
			}
		}

		// Include all currently allocated interface names
		for k, v := range c.expandedConfig {
			if !strings.HasPrefix(k, "volatile.") {
				continue
			}

			fields := strings.SplitN(k, ".", 3)
			if len(fields) != 3 {
				continue
			}

			if fields[2] != "name" || shared.StringInSlice(v, devNames) {
				continue
			}

			devNames = append(devNames, v)
		}

		// Attempt to include all existing interfaces
		cc, err := lxc.NewContainer(c.Name(), c.daemon.lxcpath)
		if err == nil {
			interfaces, err := cc.Interfaces()
			if err == nil {
				for _, name := range interfaces {
					if shared.StringInSlice(name, devNames) {
						continue
					}

					devNames = append(devNames, name)
				}
			}
		}

		// Find a free ethX device
		i := 0
		for {
			name := fmt.Sprintf("eth%d", i)
			if !shared.StringInSlice(name, devNames) {
				return name, nil
			}

			i += 1
		}
	}

	// Fill in the MAC address
	if m["nictype"] != "physical" && m["hwaddr"] == "" {
		configKey := fmt.Sprintf("volatile.%s.hwaddr", name)
volatileHwaddr := c.localConfig[configKey] if volatileHwaddr == "" { // Generate a new MAC address volatileHwaddr, err = deviceNextInterfaceHWAddr() if err != nil { return nil, err } c.localConfig[configKey] = volatileHwaddr c.expandedConfig[configKey] = volatileHwaddr // Update the database tx, err := dbBegin(c.daemon.db) if err != nil { return nil, err } err = dbContainerConfigInsert(tx, c.id, map[string]string{configKey: volatileHwaddr}) if err != nil { tx.Rollback() return nil, err } err = txCommit(tx) if err != nil { return nil, err } } newDevice["hwaddr"] = volatileHwaddr } // Fill in the name if m["name"] == "" { configKey := fmt.Sprintf("volatile.%s.name", name) volatileName := c.localConfig[configKey] if volatileName == "" { // Generate a new interface name volatileName, err = nextInterfaceName() if err != nil { return nil, err } c.localConfig[configKey] = volatileName c.expandedConfig[configKey] = volatileName // Update the database tx, err := dbBegin(c.daemon.db) if err != nil { return nil, err } err = dbContainerConfigInsert(tx, c.id, map[string]string{configKey: volatileName}) if err != nil { tx.Rollback() return nil, err } err = txCommit(tx) if err != nil { return nil, err } } newDevice["name"] = volatileName } return newDevice, nil } func (c *containerLXC) insertNetworkDevice(name string, m shared.Device) error { // Load the go-lxc struct err := c.initLXC() if err != nil { return err } // Fill in some fields from volatile m, err = c.fillNetworkDevice(name, m) if err != nil { return err } if m["hwaddr"] == "" || m["name"] == "" { return fmt.Errorf("Device is missing required fields: hwaddr=%s name=%s", m["hwaddr"], m["name"]) } // Check that the container is running if !c.IsRunning() { return fmt.Errorf("Can't insert device into stopped container") } // Create the interface devName, err := c.createNetworkDevice(name, m) if err != nil { return err } // Add the interface to the container err = c.c.AttachInterface(devName, m["name"]) if err != nil { return fmt.Errorf("Failed to attach interface: %s: %s", devName, err) } return nil } func (c *containerLXC) removeNetworkDevice(name string, m shared.Device) error { // Load the go-lxc struct err := c.initLXC() if err != nil { return err } // Fill in some fields from volatile m, err = c.fillNetworkDevice(name, m) if err != nil { return err } // Check that the container is running if !c.IsRunning() { return fmt.Errorf("Can't remove device from stopped container") } // Get a temporary device name var hostName string if m["nictype"] == "physical" { hostName = m["parent"] } else { hostName = deviceNextVeth() } // For some reason, having network config confuses detach, so get our own go-lxc struct cc, err := lxc.NewContainer(c.Name(), c.daemon.lxcpath) if err != nil { return err } // Remove the interface from the container err = cc.DetachInterfaceRename(m["name"], hostName) if err != nil { return fmt.Errorf("Failed to detach interface: %s: %s", m["name"], err) } // If a veth, destroy it if m["nictype"] != "physical" { deviceRemoveInterface(hostName) } return nil } // Disk device handling func (c *containerLXC) createDiskDevice(name string, m shared.Device) (string, error) { // Prepare all the paths srcPath := m["source"] tgtPath := strings.TrimPrefix(m["path"], "/") devName := fmt.Sprintf("disk.%s", strings.Replace(tgtPath, "/", "-", -1)) devPath := filepath.Join(c.DevicesPath(), devName) // Check if read-only isOptional := shared.IsTrue(m["optional"]) isReadOnly := shared.IsTrue(m["readonly"]) isRecursive := shared.IsTrue(m["recursive"]) isFile := !shared.IsDir(srcPath) && !deviceIsBlockdev(srcPath) // Check if
the source exists if !shared.PathExists(srcPath) { if isOptional { return "", nil } return "", fmt.Errorf("Source path doesn't exist") } // Create the devices directory if missing if !shared.PathExists(c.DevicesPath()) { err := os.Mkdir(c.DevicesPath(), 0711) if err != nil { return "", err } } // Clean any existing entry if shared.PathExists(devPath) { err := os.Remove(devPath) if err != nil { return "", err } } // Create the mount point if isFile { f, err := os.Create(devPath) if err != nil { return "", err } f.Close() } else { err := os.Mkdir(devPath, 0700) if err != nil { return "", err } } // Mount the fs err := deviceMountDisk(srcPath, devPath, isReadOnly, isRecursive) if err != nil { return "", err } return devPath, nil } func (c *containerLXC) insertDiskDevice(name string, m shared.Device) error { // Check that the container is running if !c.IsRunning() { return fmt.Errorf("Can't insert device into stopped container") } isRecursive := shared.IsTrue(m["recursive"]) // Create the device on the host devPath, err := c.createDiskDevice(name, m) if err != nil { return fmt.Errorf("Failed to setup device: %s", err) } flags := syscall.MS_BIND if isRecursive { flags |= syscall.MS_REC } // Bind-mount it into the container tgtPath := strings.TrimSuffix(m["path"], "/") err = c.insertMount(devPath, tgtPath, "none", flags) if err != nil { return fmt.Errorf("Failed to add mount for device: %s", err) } return nil } func (c *containerLXC) removeDiskDevice(name string, m shared.Device) error { // Check that the container is running pid := c.InitPID() if pid == -1 { return fmt.Errorf("Can't remove device from stopped container") } // Figure out the paths tgtPath := strings.TrimPrefix(m["path"], "/") devName := fmt.Sprintf("disk.%s", strings.Replace(tgtPath, "/", "-", -1)) devPath := filepath.Join(c.DevicesPath(), devName) // Remove the bind-mount from the container ctnPath := fmt.Sprintf("/proc/%d/root/%s", pid, tgtPath) if shared.PathExists(ctnPath) { err := 
c.removeMount(m["path"]) if err != nil { return fmt.Errorf("Error unmounting the device: %s", err) } } // Unmount the host side err := syscall.Unmount(devPath, syscall.MNT_DETACH) if err != nil { return err } // Remove the host side err = os.Remove(devPath) if err != nil { return err } return nil } func (c *containerLXC) removeDiskDevices() error { // Check that we indeed have devices to remove if !shared.PathExists(c.DevicesPath()) { return nil } // Load the directory listing dents, err := ioutil.ReadDir(c.DevicesPath()) if err != nil { return err } // Go through all the disk devices for _, f := range dents { // Skip anything that isn't a disk mount entry if !strings.HasPrefix(f.Name(), "disk.") { continue } // Always try to unmount the host side _ = syscall.Unmount(filepath.Join(c.DevicesPath(), f.Name()), syscall.MNT_DETACH) // Remove the entry diskPath := filepath.Join(c.DevicesPath(), f.Name()) err := os.Remove(diskPath) if err != nil { shared.Log.Error("Failed to remove disk device path", log.Ctx{"err": err, "path": diskPath}) } } return nil } // Block I/O limits func (c *containerLXC) getDiskLimits() (map[string]deviceBlockLimit, error) { result := map[string]deviceBlockLimit{} // Build a list of all valid block devices validBlocks := []string{} dents, err := ioutil.ReadDir("/sys/class/block/") if err != nil { return nil, err } for _, f := range dents { fPath := filepath.Join("/sys/class/block/", f.Name()) if shared.PathExists(fmt.Sprintf("%s/partition", fPath)) { continue } if !shared.PathExists(fmt.Sprintf("%s/dev", fPath)) { continue } block, err := ioutil.ReadFile(fmt.Sprintf("%s/dev", fPath)) if err != nil { return nil, err } validBlocks = append(validBlocks, strings.TrimSuffix(string(block), "\n")) } // Process all the limits blockLimits := map[string][]deviceBlockLimit{} for _, m := range c.expandedDevices { if m["type"] != "disk" { continue } // Apply max limit if m["limits.max"] != "" { m["limits.read"] = m["limits.max"] m["limits.write"] = m["limits.max"] } // Parse the
user input readBps, readIops, writeBps, writeIops, err := deviceParseDiskLimit(m["limits.read"], m["limits.write"]) if err != nil { return nil, err } // Set the source path source := m["source"] if source == "" { source = c.RootfsPath() } // Get the backing block devices (major:minor) blocks, err := deviceGetParentBlocks(source) if err != nil { if readBps == 0 && readIops == 0 && writeBps == 0 && writeIops == 0 { // If the device doesn't exist, there is no limit to clear so ignore the failure continue } else { return nil, err } } device := deviceBlockLimit{readBps: readBps, readIops: readIops, writeBps: writeBps, writeIops: writeIops} for _, block := range blocks { blockStr := "" if shared.StringInSlice(block, validBlocks) { // Straightforward entry (full block device) blockStr = block } else { // Attempt to deal with a partition (guess its parent) fields := strings.SplitN(block, ":", 2) fields[1] = "0" if shared.StringInSlice(fmt.Sprintf("%s:%s", fields[0], fields[1]), validBlocks) { blockStr = fmt.Sprintf("%s:%s", fields[0], fields[1]) } } if blockStr == "" { return nil, fmt.Errorf("Block device doesn't support quotas: %s", block) } if blockLimits[blockStr] == nil { blockLimits[blockStr] = []deviceBlockLimit{} } blockLimits[blockStr] = append(blockLimits[blockStr], device) } } // Average duplicate limits for block, limits := range blockLimits { var readBpsCount, readBpsTotal, readIopsCount, readIopsTotal, writeBpsCount, writeBpsTotal, writeIopsCount, writeIopsTotal int64 for _, limit := range limits { if limit.readBps > 0 { readBpsCount += 1 readBpsTotal += limit.readBps } if limit.readIops > 0 { readIopsCount += 1 readIopsTotal += limit.readIops } if limit.writeBps > 0 { writeBpsCount += 1 writeBpsTotal += limit.writeBps } if limit.writeIops > 0 { writeIopsCount += 1 writeIopsTotal += limit.writeIops } } device := deviceBlockLimit{} if readBpsCount > 0 { device.readBps = readBpsTotal / readBpsCount } if readIopsCount > 0 { device.readIops = readIopsTotal / 
readIopsCount } if writeBpsCount > 0 { device.writeBps = writeBpsTotal / writeBpsCount } if writeIopsCount > 0 { device.writeIops = writeIopsTotal / writeIopsCount } result[block] = device } return result, nil } // Network I/O limits func (c *containerLXC) setNetworkPriority() error { // Check that the container is running if !c.IsRunning() { return fmt.Errorf("Can't set network priority on stopped container") } // Don't bother if the cgroup controller doesn't exist if !cgNetPrioController { return nil } // Extract the current priority networkPriority := c.expandedConfig["limits.network.priority"] if networkPriority == "" { networkPriority = "0" } networkInt, err := strconv.Atoi(networkPriority) if err != nil { return err } // Get all the interfaces netifs, err := net.Interfaces() if err != nil { return err } // Check that we at least succeeded to set an entry success := false var last_error error for _, netif := range netifs { err = c.CGroupSet("net_prio.ifpriomap", fmt.Sprintf("%s %d", netif.Name, networkInt)) if err == nil { success = true } else { last_error = err } } if !success { return fmt.Errorf("Failed to set network device priority: %s", last_error) } return nil } func (c *containerLXC) getHostInterface(name string) string { if c.IsRunning() { for i := 0; i < len(c.c.ConfigItem("lxc.network")); i++ { nicName := c.c.RunningConfigItem(fmt.Sprintf("lxc.network.%d.name", i))[0] if nicName != name { continue } veth := c.c.RunningConfigItem(fmt.Sprintf("lxc.network.%d.veth.pair", i))[0] if veth != "" { return veth } } } for k, dev := range c.expandedDevices { if dev["type"] != "nic" { continue } m, err := c.fillNetworkDevice(k, dev) if err != nil { m = dev } if m["name"] != name { continue } return m["host_name"] } return "" } func (c *containerLXC) setNetworkLimits(name string, m shared.Device) error { // We can only do limits on some network type if m["nictype"] != "bridged" && m["nictype"] != "p2p" { return fmt.Errorf("Network limits are only supported on 
bridged and p2p interfaces") } // Load the go-lxc struct err := c.initLXC() if err != nil { return err } // Check that the container is running if !c.IsRunning() { return fmt.Errorf("Can't set network limits on stopped container") } // Fill in some fields from volatile m, err = c.fillNetworkDevice(name, m) if err != nil { return err } // Look for the host side interface name veth := c.getHostInterface(m["name"]) if veth == "" { return fmt.Errorf("LXC doesn't know about this device and the host_name property isn't set, can't find host side veth name") } // Apply max limit if m["limits.max"] != "" { m["limits.ingress"] = m["limits.max"] m["limits.egress"] = m["limits.max"] } // Parse the values var ingressInt int64 if m["limits.ingress"] != "" { ingressInt, err = shared.ParseBitSizeString(m["limits.ingress"]) if err != nil { return err } } var egressInt int64 if m["limits.egress"] != "" { egressInt, err = shared.ParseBitSizeString(m["limits.egress"]) if err != nil { return err } } // Clean any existing entry _ = exec.Command("tc", "qdisc", "del", "dev", veth, "root").Run() _ = exec.Command("tc", "qdisc", "del", "dev", veth, "ingress").Run() // Apply new limits if m["limits.ingress"] != "" { out, err := exec.Command("tc", "qdisc", "add", "dev", veth, "root", "handle", "1:0", "htb", "default", "10").CombinedOutput() if err != nil { return fmt.Errorf("Failed to create root tc qdisc: %s", out) } out, err = exec.Command("tc", "class", "add", "dev", veth, "parent", "1:0", "classid", "1:10", "htb", "rate", fmt.Sprintf("%dbit", ingressInt)).CombinedOutput() if err != nil { return fmt.Errorf("Failed to create limit tc class: %s", out) } out, err = exec.Command("tc", "filter", "add", "dev", veth, "parent", "1:0", "protocol", "all", "u32", "match", "u32", "0", "0", "flowid", "1:1").CombinedOutput() if err != nil { return fmt.Errorf("Failed to create tc filter: %s", out) } } if m["limits.egress"] != "" { out, err := exec.Command("tc", "qdisc", "add", "dev", veth, "handle",
"ffff:0", "ingress").CombinedOutput() if err != nil { return fmt.Errorf("Failed to create ingress tc qdisc: %s", out) } out, err = exec.Command("tc", "filter", "add", "dev", veth, "parent", "ffff:0", "protocol", "all", "u32", "match", "u32", "0", "0", "police", "rate", fmt.Sprintf("%dbit", egressInt), "burst", "1024k", "mtu", "64kb", "drop", "flowid", ":1").CombinedOutput() if err != nil { return fmt.Errorf("Failed to create ingress tc filter: %s", out) } } return nil } // Various state query functions func (c *containerLXC) IsStateful() bool { return c.stateful } func (c *containerLXC) IsEphemeral() bool { return c.ephemeral } func (c *containerLXC) IsFrozen() bool { return c.State() == "FROZEN" } func (c *containerLXC) IsNesting() bool { return shared.IsTrue(c.expandedConfig["security.nesting"]) } func (c *containerLXC) IsPrivileged() bool { return shared.IsTrue(c.expandedConfig["security.privileged"]) } func (c *containerLXC) IsRunning() bool { state := c.State() return state != "BROKEN" && state != "STOPPED" } func (c *containerLXC) IsSnapshot() bool { return c.cType == cTypeSnapshot } // Various property query functions func (c *containerLXC) Architecture() int { return c.architecture } func (c *containerLXC) CreationDate() time.Time { return c.creationDate } func (c *containerLXC) ExpandedConfig() map[string]string { return c.expandedConfig } func (c *containerLXC) ExpandedDevices() shared.Devices { return c.expandedDevices } func (c *containerLXC) Id() int { return c.id } func (c *containerLXC) IdmapSet() *shared.IdmapSet { return c.idmapset } func (c *containerLXC) InitPID() int { // Load the go-lxc struct err := c.initLXC() if err != nil { return -1 } return c.c.InitPid() } func (c *containerLXC) LocalConfig() map[string]string { return c.localConfig } func (c *containerLXC) LocalDevices() shared.Devices { return c.localDevices } func (c *containerLXC) LastIdmapSet() (*shared.IdmapSet, error) { lastJsonIdmap := c.LocalConfig()["volatile.last_state.idmap"]
if lastJsonIdmap == "" { return c.IdmapSet(), nil } lastIdmap := new(shared.IdmapSet) err := json.Unmarshal([]byte(lastJsonIdmap), &lastIdmap.Idmap) if err != nil { return nil, err } if len(lastIdmap.Idmap) == 0 { return nil, nil } return lastIdmap, nil } func (c *containerLXC) Daemon() *Daemon { // FIXME: This function should go away return c.daemon } func (c *containerLXC) Name() string { return c.name } func (c *containerLXC) Profiles() []string { return c.profiles } func (c *containerLXC) State() string { // Load the go-lxc struct err := c.initLXC() if err != nil { return "BROKEN" } state, err := c.getLxcState() if err != nil { return shared.Error.String() } return state.String() } // Various container paths func (c *containerLXC) Path() string { return containerPath(c.Name(), c.IsSnapshot()) } func (c *containerLXC) DevicesPath() string { return shared.VarPath("devices", c.Name()) } func (c *containerLXC) LogPath() string { return shared.LogPath(c.Name()) } func (c *containerLXC) LogFilePath() string { return filepath.Join(c.LogPath(), "lxc.log") } func (c *containerLXC) RootfsPath() string { return filepath.Join(c.Path(), "rootfs") } func (c *containerLXC) TemplatesPath() string { return filepath.Join(c.Path(), "templates") } func (c *containerLXC) StatePath() string { /* FIXME: backwards compatibility: we used to use Join(RootfsPath(), * "state"), which was bad. Let's just check to see if that directory * exists. 
*/ oldStatePath := filepath.Join(c.RootfsPath(), "state") if shared.IsDir(oldStatePath) { return oldStatePath } return filepath.Join(c.Path(), "state") } lxd-2.0.2/lxd/container_post.go000066400000000000000000000023441272140510300164450ustar00rootroot00000000000000package main import ( "encoding/json" "io/ioutil" "net/http" "github.com/gorilla/mux" ) type containerPostBody struct { Migration bool `json:"migration"` Name string `json:"name"` } func containerPost(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] c, err := containerLoadByName(d, name) if err != nil { return SmartError(err) } buf, err := ioutil.ReadAll(r.Body) if err != nil { return InternalError(err) } body := containerPostBody{} if err := json.Unmarshal(buf, &body); err != nil { return BadRequest(err) } if body.Migration { ws, err := NewMigrationSource(c) if err != nil { return InternalError(err) } resources := map[string][]string{} resources["containers"] = []string{name} op, err := operationCreate(operationClassWebsocket, resources, ws.Metadata(), ws.Do, nil, ws.Connect) if err != nil { return InternalError(err) } return OperationResponse(op) } run := func(*operation) error { return c.Rename(body.Name) } resources := map[string][]string{} resources["containers"] = []string{name} op, err := operationCreate(operationClassTask, resources, nil, run, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } lxd-2.0.2/lxd/container_put.go000066400000000000000000000051171272140510300162710ustar00rootroot00000000000000package main import ( "database/sql" "encoding/json" "fmt" "net/http" "github.com/gorilla/mux" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) type containerPutReq struct { Architecture string `json:"architecture"` Config map[string]string `json:"config"` Devices shared.Devices `json:"devices"` Ephemeral bool `json:"ephemeral"` Profiles []string `json:"profiles"` Restore string `json:"restore"` } /* * Update configuration, or, 
if 'restore:snapshot-name' is present, restore * the named snapshot */ func containerPut(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] c, err := containerLoadByName(d, name) if err != nil { return NotFound } configRaw := containerPutReq{} if err := json.NewDecoder(r.Body).Decode(&configRaw); err != nil { return BadRequest(err) } architecture, err := shared.ArchitectureId(configRaw.Architecture) if err != nil { architecture = 0 } var do = func(*operation) error { return nil } if configRaw.Restore == "" { // Update container configuration do = func(op *operation) error { args := containerArgs{ Architecture: architecture, Config: configRaw.Config, Devices: configRaw.Devices, Ephemeral: configRaw.Ephemeral, Profiles: configRaw.Profiles} // FIXME: should set to true when not migrating err = c.Update(args, false) if err != nil { return err } return nil } } else { // Snapshot Restore do = func(op *operation) error { return containerSnapRestore(d, name, configRaw.Restore) } } resources := map[string][]string{} resources["containers"] = []string{name} op, err := operationCreate(operationClassTask, resources, nil, do, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } func containerSnapRestore(d *Daemon, name string, snap string) error { // normalize snapshot name if !shared.IsSnapshot(snap) { snap = name + shared.SnapshotDelimiter + snap } shared.Log.Info( "RESTORE => Restoring snapshot", log.Ctx{ "snapshot": snap, "container": name}) c, err := containerLoadByName(d, name) if err != nil { shared.Log.Error( "RESTORE => loadcontainerLXD() failed", log.Ctx{ "container": name, "err": err}) return err } source, err := containerLoadByName(d, snap) if err != nil { switch err { case sql.ErrNoRows: return fmt.Errorf("snapshot %s does not exist", snap) default: return err } } if err := c.Restore(source); err != nil { return err } return nil } 
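containerSnapRestore above first expands a bare snapshot name before looking it up. Below is a minimal standalone sketch of that normalization step, assuming shared.SnapshotDelimiter is "/" and that shared.IsSnapshot simply checks for the delimiter; the lowercase helpers here are illustrative stand-ins, not the real shared package.

```go
package main

import (
	"fmt"
	"strings"
)

// snapshotDelimiter stands in for shared.SnapshotDelimiter (assumed "/").
const snapshotDelimiter = "/"

// isSnapshot mirrors the assumed behaviour of shared.IsSnapshot: a name
// already refers to a snapshot if it contains the delimiter.
func isSnapshot(name string) bool {
	return strings.Contains(name, snapshotDelimiter)
}

// normalizeSnapshotName reproduces the normalization in containerSnapRestore:
// a bare name like "snap0" is qualified with its container name, while an
// already-qualified name is left untouched.
func normalizeSnapshotName(container, snap string) string {
	if !isSnapshot(snap) {
		return container + snapshotDelimiter + snap
	}
	return snap
}

func main() {
	fmt.Println(normalizeSnapshotName("c1", "snap0"))    // c1/snap0
	fmt.Println(normalizeSnapshotName("c1", "c1/snap0")) // c1/snap0
}
```

Either spelling of the snapshot therefore resolves to the same database key before containerLoadByName is called.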
lxd-2.0.2/lxd/container_snapshot.go000066400000000000000000000127231272140510300173210ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "net/http" "strconv" "strings" "github.com/gorilla/mux" "github.com/lxc/lxd/shared" ) type containerSnapshotPostReq struct { Name string `json:"name"` Stateful bool `json:"stateful"` } func containerSnapshotsGet(d *Daemon, r *http.Request) Response { recursionStr := r.FormValue("recursion") recursion, err := strconv.Atoi(recursionStr) if err != nil { recursion = 0 } cname := mux.Vars(r)["name"] c, err := containerLoadByName(d, cname) if err != nil { return SmartError(err) } snaps, err := c.Snapshots() if err != nil { return SmartError(err) } resultString := []string{} resultMap := []*shared.SnapshotInfo{} for _, snap := range snaps { snapName := strings.SplitN(snap.Name(), shared.SnapshotDelimiter, 2)[1] if recursion == 0 { url := fmt.Sprintf("/%s/containers/%s/snapshots/%s", shared.APIVersion, cname, snapName) resultString = append(resultString, url) } else { render, err := snap.Render() if err != nil { continue } resultMap = append(resultMap, render.(*shared.SnapshotInfo)) } } if recursion == 0 { return SyncResponse(true, resultString) } return SyncResponse(true, resultMap) } /* * Note, the code below doesn't deal with snapshots of snapshots. * To do that, we'll need to weed out based on # slashes in names */ func nextSnapshot(d *Daemon, name string) int { base := name + shared.SnapshotDelimiter + "snap" length := len(base) q := fmt.Sprintf("SELECT MAX(name) FROM containers WHERE type=? 
AND SUBSTR(name,1,?)=?") var numstr string inargs := []interface{}{cTypeSnapshot, length, base} outfmt := []interface{}{numstr} results, err := dbQueryScan(d.db, q, inargs, outfmt) if err != nil { return 0 } max := 0 for _, r := range results { numstr = r[0].(string) if len(numstr) <= length { continue } substr := numstr[length:] var num int count, err := fmt.Sscanf(substr, "%d", &num) if err != nil || count != 1 { continue } if num >= max { max = num + 1 } } return max } func containerSnapshotsPost(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] /* * snapshot is a three step operation: * 1. choose a new name * 2. copy the database info over * 3. copy over the rootfs */ c, err := containerLoadByName(d, name) if err != nil { return SmartError(err) } req := containerSnapshotPostReq{} if err := json.NewDecoder(r.Body).Decode(&req); err != nil { return BadRequest(err) } if req.Name == "" { // come up with a name i := nextSnapshot(d, name) req.Name = fmt.Sprintf("snap%d", i) } fullName := name + shared.SnapshotDelimiter + req.Name snapshot := func(op *operation) error { args := containerArgs{ Name: fullName, Ctype: cTypeSnapshot, Config: c.LocalConfig(), Profiles: c.Profiles(), Ephemeral: c.IsEphemeral(), BaseImage: c.ExpandedConfig()["volatile.base_image"], Architecture: c.Architecture(), Devices: c.LocalDevices(), Stateful: req.Stateful, } _, err := containerCreateAsSnapshot(d, args, c) if err != nil { return err } return nil } resources := map[string][]string{} resources["containers"] = []string{name} op, err := operationCreate(operationClassTask, resources, nil, snapshot, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } func snapshotHandler(d *Daemon, r *http.Request) Response { containerName := mux.Vars(r)["name"] snapshotName := mux.Vars(r)["snapshotName"] sc, err := containerLoadByName( d, containerName+ shared.SnapshotDelimiter+ snapshotName) if err != nil { return SmartError(err) } switch r.Method { case 
"GET": return snapshotGet(sc, snapshotName) case "POST": return snapshotPost(r, sc, containerName) case "DELETE": return snapshotDelete(sc, snapshotName) default: return NotFound } } func snapshotGet(sc container, name string) Response { render, err := sc.Render() if err != nil { return SmartError(err) } return SyncResponse(true, render.(*shared.SnapshotInfo)) } func snapshotPost(r *http.Request, sc container, containerName string) Response { raw := shared.Jmap{} if err := json.NewDecoder(r.Body).Decode(&raw); err != nil { return BadRequest(err) } migration, err := raw.GetBool("migration") if err == nil && migration { ws, err := NewMigrationSource(sc) if err != nil { return SmartError(err) } resources := map[string][]string{} resources["containers"] = []string{containerName} op, err := operationCreate(operationClassWebsocket, resources, ws.Metadata(), ws.Do, nil, ws.Connect) if err != nil { return InternalError(err) } return OperationResponse(op) } newName, err := raw.GetString("name") if err != nil { return BadRequest(err) } rename := func(op *operation) error { return sc.Rename(containerName + shared.SnapshotDelimiter + newName) } resources := map[string][]string{} resources["containers"] = []string{containerName} op, err := operationCreate(operationClassTask, resources, nil, rename, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } func snapshotDelete(sc container, name string) Response { remove := func(op *operation) error { return sc.Delete() } resources := map[string][]string{} resources["containers"] = []string{sc.Name()} op, err := operationCreate(operationClassTask, resources, nil, remove, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } lxd-2.0.2/lxd/container_state.go000066400000000000000000000052541272140510300166030ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "net/http" "time" "github.com/gorilla/mux" "github.com/lxc/lxd/shared" ) type 
containerStatePutReq struct { Action string `json:"action"` Timeout int `json:"timeout"` Force bool `json:"force"` Stateful bool `json:"stateful"` } func containerState(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] c, err := containerLoadByName(d, name) if err != nil { return SmartError(err) } state, err := c.RenderState() if err != nil { return InternalError(err) } return SyncResponse(true, state) } func containerStatePut(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] raw := containerStatePutReq{} // We default to -1 (i.e. no timeout) here instead of 0 (instant // timeout). raw.Timeout = -1 if err := json.NewDecoder(r.Body).Decode(&raw); err != nil { return BadRequest(err) } // Don't mess with containers while in setup mode <-d.readyChan c, err := containerLoadByName(d, name) if err != nil { return SmartError(err) } var do func(*operation) error switch shared.ContainerAction(raw.Action) { case shared.Start: do = func(op *operation) error { if err = c.Start(raw.Stateful); err != nil { return err } return nil } case shared.Stop: if raw.Stateful { do = func(op *operation) error { err := c.Stop(raw.Stateful) if err != nil { return err } return nil } } else if raw.Timeout == 0 || raw.Force { do = func(op *operation) error { err = c.Stop(false) if err != nil { return err } if c.IsEphemeral() { c.Delete() } return nil } } else { do = func(op *operation) error { err = c.Shutdown(time.Duration(raw.Timeout) * time.Second) if err != nil { return err } if c.IsEphemeral() { c.Delete() } return nil } } case shared.Restart: do = func(op *operation) error { if raw.Timeout == 0 || raw.Force { err = c.Stop(false) if err != nil { return err } } else { err = c.Shutdown(time.Duration(raw.Timeout) * time.Second) if err != nil { return err } } err = c.Start(false) if err != nil { return err } return nil } case shared.Freeze: do = func(op *operation) error { return c.Freeze() } case shared.Unfreeze: do = func(op *operation) error { return 
c.Unfreeze() } default: return BadRequest(fmt.Errorf("unknown action %s", raw.Action)) } resources := map[string][]string{} resources["containers"] = []string{name} op, err := operationCreate(operationClassTask, resources, nil, do, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } lxd-2.0.2/lxd/container_test.go000066400000000000000000000124141272140510300164360ustar00rootroot00000000000000package main import ( "github.com/lxc/lxd/shared" ) func (suite *lxdTestSuite) TestContainer_ProfilesDefault() { args := containerArgs{ Ctype: cTypeRegular, Ephemeral: false, Name: "testFoo", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() profiles := c.Profiles() suite.Len( profiles, 1, "No default profile created on containerCreateInternal.") suite.Equal( "default", profiles[0], "First profile should be the default profile.") } func (suite *lxdTestSuite) TestContainer_ProfilesMulti() { // Create an unprivileged profile _, err := dbProfileCreate( suite.d.db, "unprivileged", "unprivileged", map[string]string{"security.privileged": "true"}, shared.Devices{}) suite.Req.Nil(err, "Failed to create the unprivileged profile.") defer func() { dbProfileDelete(suite.d.db, "unprivileged") }() args := containerArgs{ Ctype: cTypeRegular, Ephemeral: false, Profiles: []string{"default", "unprivileged"}, Name: "testFoo", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() profiles := c.Profiles() suite.Len( profiles, 2, "Didn't get both profiles in containerCreateInternal.") suite.True( c.IsPrivileged(), "The container is not privileged (didn't apply the unprivileged profile?).") } func (suite *lxdTestSuite) TestContainer_ProfilesOverwriteDefaultNic() { args := containerArgs{ Ctype: cTypeRegular, Ephemeral: false, Config: map[string]string{"security.privileged": "true"}, Devices: shared.Devices{ "eth0": shared.Device{ "type": "nic", "nictype": "bridged", "parent": "unknownbr0"}}, Name: 
"testFoo", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) suite.True(c.IsPrivileged(), "This container should be privileged.") out, err := c.Render() suite.Req.Nil(err) state := out.(*shared.ContainerInfo) defer c.Delete() suite.Equal( "unknownbr0", state.Devices["eth0"]["parent"], "Container config doesn't overwrite profile config.") } func (suite *lxdTestSuite) TestContainer_LoadFromDB() { args := containerArgs{ Ctype: cTypeRegular, Ephemeral: false, Config: map[string]string{"security.privileged": "true"}, Devices: shared.Devices{ "eth0": shared.Device{ "type": "nic", "nictype": "bridged", "parent": "unknownbr0"}}, Name: "testFoo", } // Create the container c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() // Load the container and trigger initLXC() c2, err := containerLoadByName(suite.d, "testFoo") suite.Req.Nil(err) c2.IsRunning() suite.Exactly( c, c2, "The loaded container isn't exactly the same as the created one.") } func (suite *lxdTestSuite) TestContainer_Path_Regular() { // Regular args := containerArgs{ Ctype: cTypeRegular, Ephemeral: false, Name: "testFoo", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() suite.Req.False(c.IsSnapshot(), "Shouldn't be a snapshot.") suite.Req.Equal(shared.VarPath("containers", "testFoo"), c.Path()) suite.Req.Equal(shared.VarPath("containers", "testFoo2"), containerPath("testFoo2", false)) } func (suite *lxdTestSuite) TestContainer_Path_Snapshot() { // Snapshot args := containerArgs{ Ctype: cTypeSnapshot, Ephemeral: false, Name: "test/snap0", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() suite.Req.True(c.IsSnapshot(), "Should be a snapshot.") suite.Req.Equal( shared.VarPath("snapshots", "test", "snap0"), c.Path()) suite.Req.Equal( shared.VarPath("snapshots", "test", "snap1"), containerPath("test/snap1", true)) } func (suite *lxdTestSuite) TestContainer_LogPath() { args := containerArgs{
Ctype: cTypeRegular, Ephemeral: false, Name: "testFoo", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() suite.Req.Equal(shared.VarPath("logs", "testFoo"), c.LogPath()) } func (suite *lxdTestSuite) TestContainer_IsPrivileged_Privileged() { args := containerArgs{ Ctype: cTypeRegular, Ephemeral: false, Config: map[string]string{"security.privileged": "true"}, Name: "testFoo", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() suite.Req.True(c.IsPrivileged(), "This container should be privileged.") suite.Req.Nil(c.Delete(), "Failed to delete the container.") } func (suite *lxdTestSuite) TestContainer_IsPrivileged_Unprivileged() { args := containerArgs{ Ctype: cTypeRegular, Ephemeral: false, Config: map[string]string{"security.privileged": "false"}, Name: "testFoo", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() suite.Req.False(c.IsPrivileged(), "This container should be unprivileged.") suite.Req.Nil(c.Delete(), "Failed to delete the container.") } func (suite *lxdTestSuite) TestContainer_Rename() { args := containerArgs{ Ctype: cTypeRegular, Ephemeral: false, Name: "testFoo", } c, err := containerCreateInternal(suite.d, args) suite.Req.Nil(err) defer c.Delete() suite.Req.Nil(c.Rename("testFoo2"), "Failed to rename the container.") suite.Req.Equal(shared.VarPath("containers", "testFoo2"), c.Path()) } lxd-2.0.2/lxd/containers.go000066400000000000000000000156771272140510300156000ustar00rootroot00000000000000package main import ( "fmt" "os" "sort" "strconv" "strings" "sync" "syscall" "time" "gopkg.in/lxc/go-lxc.v2" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) var containersCmd = Command{ name: "containers", get: containersGet, post: containersPost, } var containerCmd = Command{ name: "containers/{name}", get: containerGet, put: containerPut, delete: containerDelete, post: containerPost, } var containerStateCmd = Command{ name: 
"containers/{name}/state", get: containerState, put: containerStatePut, } var containerFileCmd = Command{ name: "containers/{name}/files", get: containerFileHandler, post: containerFileHandler, } var containerSnapshotsCmd = Command{ name: "containers/{name}/snapshots", get: containerSnapshotsGet, post: containerSnapshotsPost, } var containerSnapshotCmd = Command{ name: "containers/{name}/snapshots/{snapshotName}", get: snapshotHandler, post: snapshotHandler, delete: snapshotHandler, } var containerExecCmd = Command{ name: "containers/{name}/exec", post: containerExecPost, } type containerAutostartList []container func (slice containerAutostartList) Len() int { return len(slice) } func (slice containerAutostartList) Less(i, j int) bool { iOrder := slice[i].ExpandedConfig()["boot.autostart.priority"] jOrder := slice[j].ExpandedConfig()["boot.autostart.priority"] if iOrder != jOrder { iOrderInt, _ := strconv.Atoi(iOrder) jOrderInt, _ := strconv.Atoi(jOrder) return iOrderInt > jOrderInt } return slice[i].Name() < slice[j].Name() } func (slice containerAutostartList) Swap(i, j int) { slice[i], slice[j] = slice[j], slice[i] } func containersRestart(d *Daemon) error { result, err := dbContainersList(d.db, cTypeRegular) if err != nil { return err } containers := []container{} for _, name := range result { c, err := containerLoadByName(d, name) if err != nil { return err } containers = append(containers, c) } sort.Sort(containerAutostartList(containers)) for _, c := range containers { config := c.ExpandedConfig() lastState := config["volatile.last_state.power"] autoStart := config["boot.autostart"] autoStartDelay := config["boot.autostart.delay"] if lastState == "RUNNING" || shared.IsTrue(autoStart) { if c.IsRunning() { continue } c.Start(false) autoStartDelayInt, err := strconv.Atoi(autoStartDelay) if err == nil { time.Sleep(time.Duration(autoStartDelayInt) * time.Second) } } } _, err = dbExec(d.db, "DELETE FROM containers_config WHERE key='volatile.last_state.power'") if 
err != nil { return err } return nil } func containersShutdown(d *Daemon) error { results, err := dbContainersList(d.db, cTypeRegular) if err != nil { return err } var wg sync.WaitGroup for _, r := range results { c, err := containerLoadByName(d, r) if err != nil { return err } err = c.ConfigKeySet("volatile.last_state.power", c.State()) if err != nil { return err } if c.IsRunning() { wg.Add(1) go func() { c.Shutdown(time.Second * 30) c.Stop(false) wg.Done() }() } } wg.Wait() return nil } func containerDeleteSnapshots(d *Daemon, cname string) error { shared.Log.Debug("containerDeleteSnapshots", log.Ctx{"container": cname}) results, err := dbContainerGetSnapshots(d.db, cname) if err != nil { return err } for _, sname := range results { sc, err := containerLoadByName(d, sname) if err != nil { shared.Log.Error( "containerDeleteSnapshots: Failed to load the snapshot container", log.Ctx{"container": cname, "snapshot": sname}) continue } if err := sc.Delete(); err != nil { shared.Log.Error( "containerDeleteSnapshots: Failed to delete a snapshot container", log.Ctx{"container": cname, "snapshot": sname, "err": err}) } } return nil } /* * This is called by lxd when called as "lxd forkstart " * 'forkstart' is used instead of just 'start' in the hopes that people * do not accidentally type 'lxd start' instead of 'lxc start' */ func startContainer(args []string) error { if len(args) != 4 { return fmt.Errorf("Bad arguments: %q", args) } name := args[1] lxcpath := args[2] configPath := args[3] c, err := lxc.NewContainer(name, lxcpath) if err != nil { return fmt.Errorf("Error initializing container for start: %q", err) } err = c.LoadConfigFile(configPath) if err != nil { return fmt.Errorf("Error opening startup config file: %q", err) } /* due to https://github.com/golang/go/issues/13155 and the * CollectOutput call we make for the forkstart process, we need to * close our stdin/stdout/stderr here. Collecting some of the logs is * better than collecting no logs, though.
*/ os.Stdin.Close() os.Stderr.Close() os.Stdout.Close() // Redirect stdout and stderr to a log file logPath := shared.LogPath(name, "forkstart.log") if shared.PathExists(logPath) { os.Remove(logPath) } logFile, err := os.OpenFile(logPath, os.O_WRONLY|os.O_CREATE|os.O_SYNC, 0644) if err == nil { syscall.Dup3(int(logFile.Fd()), 1, 0) syscall.Dup3(int(logFile.Fd()), 2, 0) } return c.Start() } /* * This is called by lxd when called as "lxd forkexec " */ func execContainer(args []string) (int, error) { if len(args) < 6 { return -1, fmt.Errorf("Bad arguments: %q", args) } name := args[1] lxcpath := args[2] configPath := args[3] c, err := lxc.NewContainer(name, lxcpath) if err != nil { return -1, fmt.Errorf("Error initializing container for start: %q", err) } err = c.LoadConfigFile(configPath) if err != nil { return -1, fmt.Errorf("Error opening startup config file: %q", err) } syscall.Dup3(int(os.Stdin.Fd()), 200, 0) syscall.Dup3(int(os.Stdout.Fd()), 201, 0) syscall.Dup3(int(os.Stderr.Fd()), 202, 0) syscall.Close(int(os.Stdin.Fd())) syscall.Close(int(os.Stdout.Fd())) syscall.Close(int(os.Stderr.Fd())) opts := lxc.DefaultAttachOptions opts.ClearEnv = true opts.StdinFd = 200 opts.StdoutFd = 201 opts.StderrFd = 202 logPath := shared.LogPath(name, "forkexec.log") if shared.PathExists(logPath) { os.Remove(logPath) } logFile, err := os.OpenFile(logPath, os.O_WRONLY|os.O_CREATE|os.O_SYNC, 0644) if err == nil { syscall.Dup3(int(logFile.Fd()), 1, 0) syscall.Dup3(int(logFile.Fd()), 2, 0) } env := []string{} cmd := []string{} section := "" for _, arg := range args[5:len(args)] { // The "cmd" section must come last as it may contain a -- if arg == "--" && section != "cmd" { section = "" continue } if section == "" { section = arg continue } if section == "env" { fields := strings.SplitN(arg, "=", 2) if len(fields) == 2 && fields[0] == "HOME" { opts.Cwd = fields[1] } env = append(env, arg) } else if section == "cmd" { cmd = append(cmd, arg) } else { return -1, fmt.Errorf("Invalid 
exec section: %s", section) } } opts.Env = env status, err := c.RunCommandStatus(cmd, opts) if err != nil { return -1, fmt.Errorf("Failed running command: %q", err) } return status >> 8, nil } lxd-2.0.2/lxd/containers_get.go000066400000000000000000000032711272140510300164220ustar00rootroot00000000000000package main import ( "fmt" "net/http" "time" "github.com/lxc/lxd/shared" ) func containersGet(d *Daemon, r *http.Request) Response { for i := 0; i < 100; i++ { result, err := doContainersGet(d, d.isRecursionRequest(r)) if err == nil { return SyncResponse(true, result) } if !isDbLockedError(err) { shared.Debugf("DBERR: containersGet: error %q", err) return InternalError(err) } // 100 ms may seem drastic, but we really don't want to thrash // perhaps we should use a random amount of time time.Sleep(100 * time.Millisecond) } shared.Debugf("DBERR: containersGet, db is locked") shared.PrintStack() return InternalError(fmt.Errorf("DB is locked")) } func doContainersGet(d *Daemon, recursion bool) (interface{}, error) { result, err := dbContainersList(d.db, cTypeRegular) if err != nil { return nil, err } resultString := []string{} resultList := []*shared.ContainerInfo{} for _, container := range result { if !recursion { url := fmt.Sprintf("/%s/containers/%s", shared.APIVersion, container) resultString = append(resultString, url) } else { c, err := doContainerGet(d, container) if err != nil { c = &shared.ContainerInfo{ Name: container, Status: shared.Error.String(), StatusCode: shared.Error} } resultList = append(resultList, c) } } if !recursion { return resultString, nil } return resultList, nil } func doContainerGet(d *Daemon, cname string) (*shared.ContainerInfo, error) { c, err := containerLoadByName(d, cname) if err != nil { return nil, err } cts, err := c.Render() if err != nil { return nil, err } return cts.(*shared.ContainerInfo), nil }
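containersGet above retries doContainersGet up to 100 times, sleeping 100 ms between attempts whenever the failure is a SQLite lock error, and only returns "DB is locked" once the loop is exhausted. The retry shape can be sketched in isolation; note that errLocked and withDbRetry below are illustrative stand-ins, not LXD APIs:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errLocked stands in for the SQLite "database is locked" error that
// isDbLockedError recognises in the real daemon.
var errLocked = errors.New("database is locked")

// withDbRetry calls fn until it stops returning the lock error, sleeping
// between attempts, and reports how many attempts were made.
func withDbRetry(attempts int, sleep time.Duration, fn func() error) (int, error) {
	var err error
	for i := 1; i <= attempts; i++ {
		err = fn()
		if !errors.Is(err, errLocked) {
			return i, err // success, or a non-lock error worth surfacing as-is
		}
		time.Sleep(sleep)
	}
	return attempts, fmt.Errorf("DB is locked after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	tries, err := withDbRetry(100, time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errLocked // simulate two transient lock failures
		}
		return nil
	})
	fmt.Println(tries, err)
}
```

As the in-code comment in containersGet notes, a fixed 100 ms delay can cause contending callers to retry in lockstep; adding a small random jitter to the sleep would spread them out.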
lxd-2.0.2/lxd/containers_post.go000066400000000000000000000241041272140510300166260ustar00rootroot00000000000000package main import ( "crypto/x509" "encoding/json" "encoding/pem" "fmt" "net/http" "strings" "github.com/dustinkirkland/golang-petname" "github.com/gorilla/websocket" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) type containerImageSource struct { Type string `json:"type"` Certificate string `json:"certificate"` /* for "image" type */ Alias string `json:"alias"` Fingerprint string `json:"fingerprint"` Properties map[string]string `json:"properties"` Server string `json:"server"` Secret string `json:"secret"` Protocol string `json:"protocol"` /* * for "migration" and "copy" types, as an optimization users can * provide an image hash to extract before the filesystem is rsync'd, * potentially cutting down filesystem transfer time. LXD will not go * and fetch this image, it will simply use it if it exists in the * image store. */ BaseImage string `json:"base-image"` /* for "migration" type */ Mode string `json:"mode"` Operation string `json:"operation"` Websockets map[string]string `json:"secrets"` /* for "copy" type */ Source string `json:"source"` } type containerPostReq struct { Architecture string `json:"architecture"` Config map[string]string `json:"config"` Devices shared.Devices `json:"devices"` Ephemeral bool `json:"ephemeral"` Name string `json:"name"` Profiles []string `json:"profiles"` Source containerImageSource `json:"source"` } func createFromImage(d *Daemon, req *containerPostReq) Response { var hash string var err error if req.Source.Fingerprint != "" { hash = req.Source.Fingerprint } else if req.Source.Alias != "" { if req.Source.Server != "" { hash = req.Source.Alias } else { _, alias, err := dbImageAliasGet(d.db, req.Source.Alias, true) if err != nil { return InternalError(err) } hash = alias.Target } } else if req.Source.Properties != nil { if
req.Source.Server != "" { return BadRequest(fmt.Errorf("Property match is only supported for local images")) } hashes, err := dbImagesGet(d.db, false) if err != nil { return InternalError(err) } var image *shared.ImageInfo for _, hash := range hashes { _, img, err := dbImageGet(d.db, hash, false, true) if err != nil { continue } if image != nil && img.CreationDate.Before(image.CreationDate) { continue } match := true for key, value := range req.Source.Properties { if img.Properties[key] != value { match = false break } } if !match { continue } image = img } if image == nil { return BadRequest(fmt.Errorf("No matching image could be found")) } hash = image.Fingerprint } else { return BadRequest(fmt.Errorf("Must specify one of alias, fingerprint or properties for init from image")) } run := func(op *operation) error { if req.Source.Server != "" { hash, err = d.ImageDownload( op, req.Source.Server, req.Source.Protocol, req.Source.Certificate, req.Source.Secret, hash, true, daemonConfig["images.auto_update_cached"].GetBool()) if err != nil { return err } } _, imgInfo, err := dbImageGet(d.db, hash, false, false) if err != nil { return err } hash = imgInfo.Fingerprint architecture, err := shared.ArchitectureId(imgInfo.Architecture) if err != nil { architecture = 0 } args := containerArgs{ Architecture: architecture, BaseImage: hash, Config: req.Config, Ctype: cTypeRegular, Devices: req.Devices, Ephemeral: req.Ephemeral, Name: req.Name, Profiles: req.Profiles, } _, err = containerCreateFromImage(d, args, hash) return err } resources := map[string][]string{} resources["containers"] = []string{req.Name} op, err := operationCreate(operationClassTask, resources, nil, run, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } func createFromNone(d *Daemon, req *containerPostReq) Response { architecture, err := shared.ArchitectureId(req.Architecture) if err != nil { architecture = 0 } args := containerArgs{ Architecture: architecture, Config: 
req.Config, Ctype: cTypeRegular, Devices: req.Devices, Ephemeral: req.Ephemeral, Name: req.Name, Profiles: req.Profiles, } run := func(op *operation) error { _, err := containerCreateAsEmpty(d, args) return err } resources := map[string][]string{} resources["containers"] = []string{req.Name} op, err := operationCreate(operationClassTask, resources, nil, run, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } func createFromMigration(d *Daemon, req *containerPostReq) Response { if req.Source.Mode != "pull" { return NotImplemented } architecture, err := shared.ArchitectureId(req.Architecture) if err != nil { architecture = 0 } run := func(op *operation) error { args := containerArgs{ Architecture: architecture, BaseImage: req.Source.BaseImage, Config: req.Config, Ctype: cTypeRegular, Devices: req.Devices, Ephemeral: req.Ephemeral, Name: req.Name, Profiles: req.Profiles, } var c container _, _, err := dbImageGet(d.db, req.Source.BaseImage, false, true) /* Only create a container from an image if we're going to * rsync over the top of it. In the case of a better file * transfer mechanism, let's just use that. * * TODO: we could invent some negotiation here, where if the * source and sink both have the same image, we can clone from * it, but we have to know before sending the snapshot that * we're sending the whole thing or just a delta from the * image, so one extra negotiation round trip is needed. An * alternative is to move actual container object to a later * point and just negotiate it over the migration control * socket. 
Anyway, it'll happen later :) */ if err == nil && d.Storage.MigrationType() == MigrationFSType_RSYNC { c, err = containerCreateFromImage(d, args, req.Source.BaseImage) if err != nil { return err } } else { c, err = containerCreateAsEmpty(d, args) if err != nil { return err } } var cert *x509.Certificate if req.Source.Certificate != "" { certBlock, _ := pem.Decode([]byte(req.Source.Certificate)) cert, err = x509.ParseCertificate(certBlock.Bytes) if err != nil { return err } } config, err := shared.GetTLSConfig("", "", cert) if err != nil { c.Delete() return err } migrationArgs := MigrationSinkArgs{ Url: req.Source.Operation, Dialer: websocket.Dialer{ TLSClientConfig: config, NetDial: shared.RFC3493Dialer}, Container: c, Secrets: req.Source.Websockets, } sink, err := NewMigrationSink(&migrationArgs) if err != nil { c.Delete() return err } // Start the storage for this container (LVM mount/umount) c.StorageStart() // And finally run the migration. err = sink() if err != nil { c.StorageStop() shared.Log.Error("Error during migration sink", "err", err) c.Delete() return fmt.Errorf("Error transferring container data: %s", err) } defer c.StorageStop() err = c.TemplateApply("copy") if err != nil { return err } return nil } resources := map[string][]string{} resources["containers"] = []string{req.Name} op, err := operationCreate(operationClassTask, resources, nil, run, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } func createFromCopy(d *Daemon, req *containerPostReq) Response { if req.Source.Source == "" { return BadRequest(fmt.Errorf("must specify a source container")) } source, err := containerLoadByName(d, req.Source.Source) if err != nil { return SmartError(err) } // Config override sourceConfig := source.LocalConfig() if req.Config == nil { req.Config = make(map[string]string) } for key, value := range sourceConfig { if len(key) > 8 && key[0:8] == "volatile" && key[9:] != "base_image" { shared.Log.Debug("Skipping volatile key from
copy source", log.Ctx{"key": key}) continue } _, exists := req.Config[key] if exists { continue } req.Config[key] = value } // Profiles override if req.Profiles == nil { req.Profiles = source.Profiles() } args := containerArgs{ Architecture: source.Architecture(), BaseImage: req.Source.BaseImage, Config: req.Config, Ctype: cTypeRegular, Devices: source.LocalDevices(), Ephemeral: req.Ephemeral, Name: req.Name, Profiles: req.Profiles, } run := func(op *operation) error { _, err := containerCreateAsCopy(d, args, source) if err != nil { return err } return nil } resources := map[string][]string{} resources["containers"] = []string{req.Name, req.Source.Source} op, err := operationCreate(operationClassTask, resources, nil, run, nil, nil) if err != nil { return InternalError(err) } return OperationResponse(op) } func containersPost(d *Daemon, r *http.Request) Response { shared.Debugf("Responding to container create") req := containerPostReq{} if err := json.NewDecoder(r.Body).Decode(&req); err != nil { return BadRequest(err) } if req.Name == "" { req.Name = strings.ToLower(petname.Generate(2, "-")) shared.Debugf("No name provided, creating %s", req.Name) } if req.Devices == nil { req.Devices = shared.Devices{} } if req.Config == nil { req.Config = map[string]string{} } if strings.Contains(req.Name, shared.SnapshotDelimiter) { return BadRequest(fmt.Errorf("Invalid container name: '%s' is reserved for snapshots", shared.SnapshotDelimiter)) } switch req.Source.Type { case "image": return createFromImage(d, &req) case "none": return createFromNone(d, &req) case "migration": return createFromMigration(d, &req) case "copy": return createFromCopy(d, &req) default: return BadRequest(fmt.Errorf("unknown source type %s", req.Source.Type)) } } lxd-2.0.2/lxd/daemon.go000066400000000000000000000676551272140510300147010ustar00rootroot00000000000000package main import ( "bytes" "crypto/tls" "crypto/x509" "database/sql" "encoding/hex" "encoding/pem" "fmt" "io" "io/ioutil" "net" 
"net/http" "net/url" "os" "os/exec" "runtime" "strconv" "strings" "sync" "syscall" "time" "golang.org/x/crypto/scrypt" "github.com/coreos/go-systemd/activation" "github.com/gorilla/mux" _ "github.com/mattn/go-sqlite3" "github.com/syndtr/gocapability/capability" "gopkg.in/tomb.v2" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/logging" log "gopkg.in/inconshreveable/log15.v2" ) // AppArmor var aaAdmin = true var aaAvailable = true var aaConfined = false // CGroup var cgBlkioController = false var cgCpuController = false var cgCpusetController = false var cgDevicesController = false var cgMemoryController = false var cgNetPrioController = false var cgPidsController = false var cgSwapAccounting = false // UserNS var runningInUserns = false type Socket struct { Socket net.Listener CloseOnExit bool } // A Daemon can respond to requests from a shared client. type Daemon struct { architectures []int BackingFs string clientCerts []x509.Certificate db *sql.DB group string IdmapSet *shared.IdmapSet lxcpath string mux *mux.Router tomb tomb.Tomb readyChan chan bool pruneChan chan bool shutdownChan chan bool resetAutoUpdateChan chan bool Storage storage TCPSocket *Socket UnixSocket *Socket devlxd *net.UnixListener MockMode bool SetupMode bool imagesDownloading map[string]chan bool imagesDownloadingLock sync.RWMutex tlsConfig *tls.Config proxy func(req *http.Request) (*url.URL, error) } // Command is the basic structure for every API call. 
type Command struct { name string untrustedGet bool untrustedPost bool get func(d *Daemon, r *http.Request) Response put func(d *Daemon, r *http.Request) Response post func(d *Daemon, r *http.Request) Response delete func(d *Daemon, r *http.Request) Response } func (d *Daemon) httpGetSync(url string, certificate string) (*lxd.Response, error) { var err error var cert *x509.Certificate if certificate != "" { certBlock, _ := pem.Decode([]byte(certificate)) cert, err = x509.ParseCertificate(certBlock.Bytes) if err != nil { return nil, err } } tlsConfig, err := shared.GetTLSConfig("", "", cert) if err != nil { return nil, err } tr := &http.Transport{ TLSClientConfig: tlsConfig, Dial: shared.RFC3493Dialer, Proxy: d.proxy, } myhttp := http.Client{ Transport: tr, } req, err := http.NewRequest("GET", url, nil) if err != nil { return nil, err } req.Header.Set("User-Agent", shared.UserAgent) r, err := myhttp.Do(req) if err != nil { return nil, err } resp, err := lxd.ParseResponse(r) if err != nil { return nil, err } if resp.Type != lxd.Sync { return nil, fmt.Errorf("unexpected non-sync response") } return resp, nil } func (d *Daemon) httpGetFile(url string, certificate string) (*http.Response, error) { var err error var cert *x509.Certificate if certificate != "" { certBlock, _ := pem.Decode([]byte(certificate)) cert, err = x509.ParseCertificate(certBlock.Bytes) if err != nil { return nil, err } } tlsConfig, err := shared.GetTLSConfig("", "", cert) if err != nil { return nil, err } tr := &http.Transport{ TLSClientConfig: tlsConfig, Dial: shared.RFC3493Dialer, Proxy: d.proxy, } myhttp := http.Client{ Transport: tr, } req, err := http.NewRequest("GET", url, nil) if err != nil { return nil, err } req.Header.Set("User-Agent", shared.UserAgent) raw, err := myhttp.Do(req) if err != nil { return nil, err } if raw.StatusCode != 200 { _, err := lxd.HoistResponse(raw, lxd.Error) if err != nil { return nil, err } return nil, fmt.Errorf("non-200 status with no error response?") } return 
raw, nil } func readMyCert() (string, string, error) { certf := shared.VarPath("server.crt") keyf := shared.VarPath("server.key") shared.Log.Info("Looking for existing certificates", log.Ctx{"cert": certf, "key": keyf}) err := shared.FindOrGenCert(certf, keyf) return certf, keyf, err } func (d *Daemon) isTrustedClient(r *http.Request) bool { if r.RemoteAddr == "@" { // Unix socket return true } if r.TLS == nil { return false } for i := range r.TLS.PeerCertificates { if d.CheckTrustState(*r.TLS.PeerCertificates[i]) { return true } } return false } func isJSONRequest(r *http.Request) bool { for k, vs := range r.Header { if strings.ToLower(k) == "content-type" && len(vs) == 1 && strings.ToLower(vs[0]) == "application/json" { return true } } return false } func (d *Daemon) isRecursionRequest(r *http.Request) bool { recursionStr := r.FormValue("recursion") recursion, err := strconv.Atoi(recursionStr) if err != nil { return false } return recursion == 1 } func (d *Daemon) createCmd(version string, c Command) { var uri string if c.name == "" { uri = fmt.Sprintf("/%s", version) } else { uri = fmt.Sprintf("/%s/%s", version, c.name) } d.mux.HandleFunc(uri, func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "application/json") if d.isTrustedClient(r) { shared.Log.Info( "handling", log.Ctx{"method": r.Method, "url": r.URL.RequestURI(), "ip": r.RemoteAddr}) } else if r.Method == "GET" && c.untrustedGet { shared.Log.Info( "allowing untrusted GET", log.Ctx{"url": r.URL.RequestURI(), "ip": r.RemoteAddr}) } else if r.Method == "POST" && c.untrustedPost { shared.Log.Info( "allowing untrusted POST", log.Ctx{"url": r.URL.RequestURI(), "ip": r.RemoteAddr}) } else { shared.Log.Warn( "rejecting request from untrusted client", log.Ctx{"ip": r.RemoteAddr}) Forbidden.Render(w) return } if debug && r.Method != "GET" && isJSONRequest(r) { newBody := &bytes.Buffer{} captured := &bytes.Buffer{} multiW := io.MultiWriter(newBody, captured) if _, err := io.Copy(multiW, 
r.Body); err != nil { InternalError(err).Render(w) return } r.Body = shared.BytesReadCloser{Buf: newBody} shared.DebugJson(captured) } var resp Response resp = NotImplemented switch r.Method { case "GET": if c.get != nil { resp = c.get(d, r) } case "PUT": if c.put != nil { resp = c.put(d, r) } case "POST": if c.post != nil { resp = c.post(d, r) } case "DELETE": if c.delete != nil { resp = c.delete(d, r) } default: resp = NotFound } if err := resp.Render(w); err != nil { err := InternalError(err).Render(w) if err != nil { shared.Log.Error("Failed writing error for error, giving up") } } /* * When we create a new lxc.Container, it adds a finalizer (via * SetFinalizer) that frees the struct. However, it sometimes * takes the go GC a while to actually free the struct, * presumably since it is a small amount of memory. * Unfortunately, the struct also keeps the log fd open, so if * we leave too many of these around, we end up running out of * fds. So, let's explicitly do a GC to collect these at the * end of each request. */ runtime.GC() }) } func (d *Daemon) SetupStorageDriver() error { var err error lvmVgName := daemonConfig["storage.lvm_vg_name"].Get() zfsPoolName := daemonConfig["storage.zfs_pool_name"].Get() if lvmVgName != "" { d.Storage, err = newStorage(d, storageTypeLvm) if err != nil { shared.Logf("Could not initialize storage type LVM: %s - falling back to dir", err) } else { return nil } } else if zfsPoolName != "" { d.Storage, err = newStorage(d, storageTypeZfs) if err != nil { shared.Logf("Could not initialize storage type ZFS: %s - falling back to dir", err) } else { return nil } } else if d.BackingFs == "btrfs" { d.Storage, err = newStorage(d, storageTypeBtrfs) if err != nil { shared.Logf("Could not initialize storage type btrfs: %s - falling back to dir", err) } else { return nil } } d.Storage, err = newStorage(d, storageTypeDir) return err } // have we setup shared mounts? 
var sharedMounted bool var sharedMountsLock sync.Mutex func setupSharedMounts() error { if sharedMounted { return nil } sharedMountsLock.Lock() defer sharedMountsLock.Unlock() if sharedMounted { return nil } path := shared.VarPath("shmounts") isShared, err := shared.IsOnSharedMount(path) if err != nil { return err } if isShared { // / may already be ms-shared, or shmounts may have // been mounted by a previous lxd run sharedMounted = true return nil } if err := syscall.Mount(path, path, "none", syscall.MS_BIND, ""); err != nil { return err } var flags uintptr = syscall.MS_SHARED | syscall.MS_REC if err := syscall.Mount(path, path, "none", flags, ""); err != nil { return err } sharedMounted = true return nil } func (d *Daemon) ListenAddresses() ([]string, error) { addresses := make([]string, 0) value := daemonConfig["core.https_address"].Get() if value == "" { return addresses, nil } localHost, localPort, err := net.SplitHostPort(value) if err != nil { localHost = value localPort = shared.DefaultPort } if localHost == "0.0.0.0" || localHost == "::" || localHost == "[::]" { ifaces, err := net.Interfaces() if err != nil { return addresses, err } for _, i := range ifaces { addrs, err := i.Addrs() if err != nil { continue } for _, addr := range addrs { var ip net.IP switch v := addr.(type) { case *net.IPNet: ip = v.IP case *net.IPAddr: ip = v.IP } if !ip.IsGlobalUnicast() { continue } if ip.To4() == nil { if localHost == "0.0.0.0" { continue } addresses = append(addresses, fmt.Sprintf("[%s]:%s", ip, localPort)) } else { addresses = append(addresses, fmt.Sprintf("%s:%s", ip, localPort)) } } } } else { if strings.Contains(localHost, ":") { addresses = append(addresses, fmt.Sprintf("[%s]:%s", localHost, localPort)) } else { addresses = append(addresses, fmt.Sprintf("%s:%s", localHost, localPort)) } } return addresses, nil } func (d *Daemon) UpdateHTTPsPort(newAddress string) error { oldAddress := daemonConfig["core.https_address"].Get() if oldAddress == newAddress { return 
nil } if d.TCPSocket != nil { d.TCPSocket.Socket.Close() } if newAddress != "" { _, _, err := net.SplitHostPort(newAddress) if err != nil { ip := net.ParseIP(newAddress) if ip != nil && ip.To4() == nil { newAddress = fmt.Sprintf("[%s]:%s", newAddress, shared.DefaultPort) } else { newAddress = fmt.Sprintf("%s:%s", newAddress, shared.DefaultPort) } } var tcpl net.Listener for i := 0; i < 10; i++ { tcpl, err = tls.Listen("tcp", newAddress, d.tlsConfig) if err == nil { break } time.Sleep(100 * time.Millisecond) } if err != nil { return fmt.Errorf("cannot listen on https socket: %v", err) } d.tomb.Go(func() error { return http.Serve(tcpl, d.mux) }) d.TCPSocket = &Socket{Socket: tcpl, CloseOnExit: true} } return nil } func haveMacAdmin() bool { c, err := capability.NewPid(0) if err != nil { return false } if c.Get(capability.EFFECTIVE, capability.CAP_MAC_ADMIN) { return true } return false } func (d *Daemon) Init() error { /* Initialize some variables */ d.imagesDownloading = map[string]chan bool{} d.readyChan = make(chan bool) d.shutdownChan = make(chan bool) /* Set the executable path */ /* Set the LVM environment */ err := os.Setenv("LVM_SUPPRESS_FD_WARNINGS", "1") if err != nil { return err } /* Setup logging if that wasn't done before */ if shared.Log == nil { shared.Log, err = logging.GetLogger("", "", true, true, nil) if err != nil { return err } } /* Print welcome message */ if d.MockMode { shared.Log.Info("LXD is starting in mock mode", log.Ctx{"path": shared.VarPath("")}) } else if d.SetupMode { shared.Log.Info("LXD is starting in setup mode", log.Ctx{"path": shared.VarPath("")}) } else { shared.Log.Info("LXD is starting in normal mode", log.Ctx{"path": shared.VarPath("")}) } /* Detect user namespaces */ runningInUserns = shared.RunningInUserNS() /* Detect AppArmor support */ if aaAvailable && os.Getenv("LXD_SECURITY_APPARMOR") == "false" { aaAvailable = false aaAdmin = false shared.Log.Warn("AppArmor support has been manually disabled") } if aaAvailable && 
!shared.IsDir("/sys/kernel/security/apparmor") { aaAvailable = false aaAdmin = false shared.Log.Warn("AppArmor support has been disabled because of lack of kernel support") } _, err = exec.LookPath("apparmor_parser") if aaAvailable && err != nil { aaAvailable = false aaAdmin = false shared.Log.Warn("AppArmor support has been disabled because 'apparmor_parser' couldn't be found") } /* Detect AppArmor admin support */ if aaAdmin && !haveMacAdmin() { aaAdmin = false shared.Log.Warn("Per-container AppArmor profiles are disabled because the mac_admin capability is missing.") } if aaAdmin && runningInUserns { aaAdmin = false shared.Log.Warn("Per-container AppArmor profiles are disabled because LXD is running in an unprivileged container.") } /* Detect AppArmor confinement */ if !aaConfined { profile := aaProfile() if profile != "unconfined" && profile != "" { aaConfined = true shared.Log.Warn("Per-container AppArmor profiles are disabled because LXD is already protected by AppArmor.") } } /* Detect CGroup support */ cgBlkioController = shared.PathExists("/sys/fs/cgroup/blkio/") if !cgBlkioController { shared.Log.Warn("Couldn't find the CGroup blkio controller, I/O limits will be ignored.") } cgCpuController = shared.PathExists("/sys/fs/cgroup/cpu/") if !cgCpuController { shared.Log.Warn("Couldn't find the CGroup CPU controller, CPU time limits will be ignored.") } cgCpusetController = shared.PathExists("/sys/fs/cgroup/cpuset/") if !cgCpusetController { shared.Log.Warn("Couldn't find the CGroup CPUset controller, CPU pinning will be ignored.") } cgDevicesController = shared.PathExists("/sys/fs/cgroup/devices/") if !cgDevicesController { shared.Log.Warn("Couldn't find the CGroup devices controller, device access control won't work.") } cgMemoryController = shared.PathExists("/sys/fs/cgroup/memory/") if !cgMemoryController { shared.Log.Warn("Couldn't find the CGroup memory controller, memory limits will be ignored.") } cgNetPrioController =
shared.PathExists("/sys/fs/cgroup/net_prio/") if !cgNetPrioController { shared.Log.Warn("Couldn't find the CGroup network class controller, network limits will be ignored.") } cgPidsController = shared.PathExists("/sys/fs/cgroup/pids/") if !cgPidsController { shared.Log.Warn("Couldn't find the CGroup pids controller, process limits will be ignored.") } cgSwapAccounting = shared.PathExists("/sys/fs/cgroup/memory/memory.memsw.limit_in_bytes") if !cgSwapAccounting { shared.Log.Warn("CGroup memory swap accounting is disabled, swap limits will be ignored.") } /* Get the list of supported architectures */ var architectures = []int{} architectureName, err := shared.ArchitectureGetLocal() if err != nil { return err } architecture, err := shared.ArchitectureId(architectureName) if err != nil { return err } architectures = append(architectures, architecture) personalities, err := shared.ArchitecturePersonalities(architecture) if err != nil { return err } for _, personality := range personalities { architectures = append(architectures, personality) } d.architectures = architectures /* Set container path */ d.lxcpath = shared.VarPath("containers") /* Make sure all our directories are available */ if err := os.MkdirAll(shared.VarPath("containers"), 0711); err != nil { return err } if err := os.MkdirAll(shared.VarPath("devices"), 0711); err != nil { return err } if err := os.MkdirAll(shared.VarPath("devlxd"), 0755); err != nil { return err } if err := os.MkdirAll(shared.VarPath("images"), 0700); err != nil { return err } if err := os.MkdirAll(shared.LogPath(), 0700); err != nil { return err } if err := os.MkdirAll(shared.VarPath("security"), 0700); err != nil { return err } if err := os.MkdirAll(shared.VarPath("shmounts"), 0711); err != nil { return err } if err := os.MkdirAll(shared.VarPath("snapshots"), 0700); err != nil { return err } /* Detect the filesystem */ d.BackingFs, err = filesystemDetect(d.lxcpath) if err != nil { shared.Log.Error("Error detecting backing fs", 
log.Ctx{"err": err}) } /* Read the uid/gid allocation */ d.IdmapSet, err = shared.DefaultIdmapSet() if err != nil { shared.Log.Warn("Error reading idmap", log.Ctx{"err": err.Error()}) shared.Log.Warn("Only privileged containers will be able to run") } else { shared.Log.Info("Default uid/gid map:") for _, lxcmap := range d.IdmapSet.ToLxcString() { shared.Log.Info(strings.TrimRight(" - "+lxcmap, "\n")) } } /* Initialize the database */ err = initializeDbObject(d, shared.VarPath("lxd.db")) if err != nil { return err } /* Load all config values from the database */ err = daemonConfigInit(d.db) if err != nil { return err } /* Setup the storage driver */ if !d.MockMode { err = d.SetupStorageDriver() if err != nil { return fmt.Errorf("Failed to setup storage: %s", err) } } /* Log expiry */ go func() { t := time.NewTicker(24 * time.Hour) for { shared.Debugf("Expiring log files") err := d.ExpireLogs() if err != nil { shared.Log.Error("Failed to expire logs", log.Ctx{"err": err}) } shared.Debugf("Done expiring log files") <-t.C } }() /* set the initial proxy function based on config values in the DB */ d.proxy = shared.ProxyFromConfig( daemonConfig["core.proxy_https"].Get(), daemonConfig["core.proxy_http"].Get(), daemonConfig["core.proxy_ignore_hosts"].Get(), ) /* Setup /dev/lxd */ d.devlxd, err = createAndBindDevLxd() if err != nil { return err } if !d.MockMode { /* Start the scheduler */ go deviceEventListener(d) /* Setup the TLS authentication */ certf, keyf, err := readMyCert() if err != nil { return err } cert, err := tls.LoadX509KeyPair(certf, keyf) if err != nil { return err } tlsConfig := &tls.Config{ InsecureSkipVerify: true, ClientAuth: tls.RequestClientCert, Certificates: []tls.Certificate{cert}, MinVersion: tls.VersionTLS12, MaxVersion: tls.VersionTLS12, CipherSuites: []uint16{ tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256}, PreferServerCipherSuites: true, } tlsConfig.BuildNameToCertificate() d.tlsConfig = tlsConfig 
readSavedClientCAList(d) } /* Setup the web server */ d.mux = mux.NewRouter() d.mux.StrictSlash(false) d.mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "application/json") SyncResponse(true, []string{"/1.0"}).Render(w) }) for _, c := range api10 { d.createCmd("1.0", c) } for _, c := range apiInternal { d.createCmd("internal", c) } d.mux.NotFoundHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { shared.Log.Debug("Sending top level 404", log.Ctx{"url": r.URL}) w.Header().Set("Content-Type", "application/json") NotFound.Render(w) }) listeners, err := activation.Listeners(false) if err != nil { return err } if len(listeners) > 0 { shared.Log.Info("LXD is socket activated") for _, listener := range listeners { if shared.PathExists(listener.Addr().String()) { d.UnixSocket = &Socket{Socket: listener, CloseOnExit: false} } else { tlsListener := tls.NewListener(listener, d.tlsConfig) d.TCPSocket = &Socket{Socket: tlsListener, CloseOnExit: false} } } } else { shared.Log.Info("LXD isn't socket activated") localSocketPath := shared.VarPath("unix.socket") // If the socket exists, let's try to connect to it and see if there's // a lxd running. if shared.PathExists(localSocketPath) { _, err := lxd.NewClient(&lxd.DefaultConfig, "local") if err != nil { shared.Log.Debug("Detected stale unix socket, deleting") // Connecting failed, so let's delete the socket and // listen on it ourselves. 
err = os.Remove(localSocketPath) if err != nil { return err } } else { return fmt.Errorf("LXD is already running.") } } unixAddr, err := net.ResolveUnixAddr("unix", localSocketPath) if err != nil { return fmt.Errorf("cannot resolve unix socket address: %v", err) } unixl, err := net.ListenUnix("unix", unixAddr) if err != nil { return fmt.Errorf("cannot listen on unix socket: %v", err) } if err := os.Chmod(localSocketPath, 0660); err != nil { return err } var gid int if d.group != "" { gid, err = shared.GroupId(d.group) if err != nil { return err } } else { gid = os.Getgid() } if err := os.Chown(localSocketPath, os.Getuid(), gid); err != nil { return err } d.UnixSocket = &Socket{Socket: unixl, CloseOnExit: true} } listenAddr := daemonConfig["core.https_address"].Get() if listenAddr != "" { _, _, err := net.SplitHostPort(listenAddr) if err != nil { listenAddr = fmt.Sprintf("%s:%s", listenAddr, shared.DefaultPort) } tcpl, err := tls.Listen("tcp", listenAddr, d.tlsConfig) if err != nil { shared.Log.Error("cannot listen on https socket, skipping...", log.Ctx{"err": err}) } else { if d.TCPSocket != nil { shared.Log.Info("Replacing systemd TCP socket by the configured one") d.TCPSocket.Socket.Close() } d.TCPSocket = &Socket{Socket: tcpl, CloseOnExit: true} } } d.tomb.Go(func() error { shared.Log.Info("REST API daemon:") if d.UnixSocket != nil { shared.Log.Info(" - binding Unix socket", log.Ctx{"socket": d.UnixSocket.Socket.Addr()}) d.tomb.Go(func() error { return http.Serve(d.UnixSocket.Socket, &lxdHttpServer{d.mux, d}) }) } if d.TCPSocket != nil { shared.Log.Info(" - binding TCP socket", log.Ctx{"socket": d.TCPSocket.Socket.Addr()}) d.tomb.Go(func() error { return http.Serve(d.TCPSocket.Socket, &lxdHttpServer{d.mux, d}) }) } d.tomb.Go(func() error { server := devLxdServer(d) return server.Serve(d.devlxd) }) return nil }) if !d.MockMode && !d.SetupMode { err := d.Ready() if err != nil { return err } } return nil } func (d *Daemon) Ready() error { /* Prune images */ d.pruneChan 
= make(chan bool) go func() { pruneExpiredImages(d) for { timer := time.NewTimer(24 * time.Hour) timeChan := timer.C select { case <-timeChan: /* run once per day */ pruneExpiredImages(d) case <-d.pruneChan: /* run when image.remote_cache_expiry is changed */ pruneExpiredImages(d) timer.Stop() } } }() /* Auto-update images */ d.resetAutoUpdateChan = make(chan bool) go func() { autoUpdateImages(d) for { interval := daemonConfig["images.auto_update_interval"].GetInt64() if interval > 0 { timer := time.NewTimer(time.Duration(interval) * time.Hour) timeChan := timer.C select { case <-timeChan: autoUpdateImages(d) case <-d.resetAutoUpdateChan: timer.Stop() } } else { select { case <-d.resetAutoUpdateChan: continue } } } }() /* Restore containers */ go containersRestart(d) /* Re-balance in case things changed while LXD was down */ deviceTaskBalance(d) close(d.readyChan) return nil } // CheckTrustState returns True if the client is trusted else false. func (d *Daemon) CheckTrustState(cert x509.Certificate) bool { for k, v := range d.clientCerts { if bytes.Compare(cert.Raw, v.Raw) == 0 { shared.Log.Debug("Found cert", log.Ctx{"k": k}) return true } shared.Log.Debug("Client cert != key", log.Ctx{"k": k}) } return false } func (d *Daemon) numRunningContainers() (int, error) { results, err := dbContainersList(d.db, cTypeRegular) if err != nil { return 0, err } count := 0 for _, r := range results { container, err := containerLoadByName(d, r) if err != nil { continue } if container.IsRunning() { count = count + 1 } } return count, nil } var errStop = fmt.Errorf("requested stop") // Stop stops the shared daemon. 
func (d *Daemon) Stop() error { forceStop := false d.tomb.Kill(errStop) shared.Log.Info("Stopping REST API handler:") for _, socket := range []*Socket{d.TCPSocket, d.UnixSocket} { if socket == nil { continue } if socket.CloseOnExit { shared.Log.Info(" - closing socket", log.Ctx{"socket": socket.Socket.Addr()}) socket.Socket.Close() } else { shared.Log.Info(" - skipping socket-activated socket", log.Ctx{"socket": socket.Socket.Addr()}) forceStop = true } } if n, err := d.numRunningContainers(); err != nil || n == 0 { shared.Log.Debug("Unmounting shmounts") syscall.Unmount(shared.VarPath("shmounts"), syscall.MNT_DETACH) } else { shared.Debugf("Not unmounting shmounts (containers are still running)") } shared.Log.Debug("Closing the database") d.db.Close() shared.Log.Debug("Stopping /dev/lxd handler") d.devlxd.Close() if d.MockMode || forceStop { return nil } err := d.tomb.Wait() if err == errStop { return nil } return err } func (d *Daemon) PasswordCheck(password string) error { value := daemonConfig["core.trust_password"].Get() // No password set if value == "" { return fmt.Errorf("No password is set") } // Compare the password buff, err := hex.DecodeString(value) if err != nil { return err } salt := buff[0:32] hash, err := scrypt.Key([]byte(password), salt, 1<<14, 8, 1, 64) if err != nil { return err } if !bytes.Equal(hash, buff[32:]) { return fmt.Errorf("Bad password provided") } return nil } func (d *Daemon) ExpireLogs() error { entries, err := ioutil.ReadDir(shared.LogPath()) if err != nil { return err } result, err := dbContainersList(d.db, cTypeRegular) if err != nil { return err } newestFile := func(path string, dir os.FileInfo) time.Time { newest := dir.ModTime() entries, err := ioutil.ReadDir(path) if err != nil { return newest } for _, entry := range entries { if entry.ModTime().After(newest) { newest = entry.ModTime() } } return newest } for _, entry := range entries { // Check if the container still exists if shared.StringInSlice(entry.Name(), result) { 
// Remove any log file which wasn't modified in the past 48 hours logs, err := ioutil.ReadDir(shared.LogPath(entry.Name())) if err != nil { return err } for _, logfile := range logs { path := shared.LogPath(entry.Name(), logfile.Name()) // Always keep the LXC config if logfile.Name() == "lxc.conf" { continue } // Deal with directories (snapshots) if logfile.IsDir() { newest := newestFile(path, logfile) if time.Since(newest).Hours() >= 48 { err := os.RemoveAll(path) if err != nil { return err } } continue } // Individual files if time.Since(logfile.ModTime()).Hours() >= 48 { err := os.Remove(path) if err != nil { return err } } } } else { // Empty directory if unchanged in the past 24 hours path := shared.LogPath(entry.Name()) newest := newestFile(path, entry) if time.Since(newest).Hours() >= 24 { err := os.RemoveAll(path) if err != nil { return err } } } } return nil } type lxdHttpServer struct { r *mux.Router d *Daemon } func (s *lxdHttpServer) ServeHTTP(rw http.ResponseWriter, req *http.Request) { allowedOrigin := daemonConfig["core.https_allowed_origin"].Get() origin := req.Header.Get("Origin") if allowedOrigin != "" && origin != "" { rw.Header().Set("Access-Control-Allow-Origin", allowedOrigin) } allowedMethods := daemonConfig["core.https_allowed_methods"].Get() if allowedMethods != "" && origin != "" { rw.Header().Set("Access-Control-Allow-Methods", allowedMethods) } allowedHeaders := daemonConfig["core.https_allowed_headers"].Get() if allowedHeaders != "" && origin != "" { rw.Header().Set("Access-Control-Allow-Headers", allowedHeaders) } // OPTIONS requests don't need any further processing if req.Method == "OPTIONS" { return } // Call the original server s.r.ServeHTTP(rw, req) } lxd-2.0.2/lxd/daemon_config.go000066400000000000000000000175571272140510300162110ustar00rootroot00000000000000package main import ( "crypto/rand" "database/sql" "encoding/hex" "fmt" "io" "os/exec" "strconv" "strings" "sync" "golang.org/x/crypto/scrypt" log "gopkg.in/inconshreveable/log15.v2" 
"github.com/lxc/lxd/shared" ) var daemonConfigLock sync.Mutex var daemonConfig map[string]*daemonConfigKey type daemonConfigKey struct { valueType string defaultValue string validValues []string currentValue string hiddenValue bool validator func(d *Daemon, key string, value string) error setter func(d *Daemon, key string, value string) (string, error) trigger func(d *Daemon, key string, value string) } func (k *daemonConfigKey) name() string { name := "" // Look for a matching entry in daemonConfig daemonConfigLock.Lock() for key, value := range daemonConfig { if value == k { name = key break } } daemonConfigLock.Unlock() return name } func (k *daemonConfigKey) Validate(d *Daemon, value string) error { // No need to validate when unsetting if value == "" { return nil } // Validate booleans if k.valueType == "bool" && !shared.StringInSlice(strings.ToLower(value), []string{"true", "false", "1", "0", "yes", "no", "on", "off"}) { return fmt.Errorf("Invalid value for a boolean: %s", value) } // Validate integers if k.valueType == "int" { _, err := strconv.ParseInt(value, 10, 64) if err != nil { return err } } // Check against valid values if k.validValues != nil && !shared.StringInSlice(value, k.validValues) { return fmt.Errorf("Invalid value, only the following values are allowed: %s", k.validValues) } // Run external validation function if k.validator != nil { err := k.validator(d, k.name(), value) if err != nil { return err } } return nil } func (k *daemonConfigKey) Set(d *Daemon, value string) error { var name string // Check if we are actually changing things oldValue := k.currentValue if oldValue == value { return nil } // Validate the new value err := k.Validate(d, value) if err != nil { return err } // Run external setting function if k.setter != nil { value, err = k.setter(d, k.name(), value) if err != nil { return err } } // Get the configuration key and make sure daemonConfig is sane name = k.name() if name == "" { return fmt.Errorf("Corrupted configuration 
cache") } // Actually apply the change daemonConfigLock.Lock() k.currentValue = value daemonConfigLock.Unlock() err = dbConfigValueSet(d.db, name, value) if err != nil { return err } return nil } func (k *daemonConfigKey) Get() string { value := k.currentValue // Get the default value if not set if value == "" { value = k.defaultValue } return value } func (k *daemonConfigKey) GetBool() bool { value := k.currentValue // Get the default value if not set if value == "" { value = k.defaultValue } // Convert to boolean return shared.IsTrue(value) } func (k *daemonConfigKey) GetInt64() int64 { value := k.currentValue // Get the default value if not set if value == "" { value = k.defaultValue } // Convert to int64 ret, _ := strconv.ParseInt(value, 10, 64) return ret } func daemonConfigInit(db *sql.DB) error { // Set all the keys daemonConfig = map[string]*daemonConfigKey{ "core.https_address": &daemonConfigKey{valueType: "string", setter: daemonConfigSetAddress}, "core.https_allowed_headers": &daemonConfigKey{valueType: "string"}, "core.https_allowed_methods": &daemonConfigKey{valueType: "string"}, "core.https_allowed_origin": &daemonConfigKey{valueType: "string"}, "core.proxy_http": &daemonConfigKey{valueType: "string", setter: daemonConfigSetProxy}, "core.proxy_https": &daemonConfigKey{valueType: "string", setter: daemonConfigSetProxy}, "core.proxy_ignore_hosts": &daemonConfigKey{valueType: "string", setter: daemonConfigSetProxy}, "core.trust_password": &daemonConfigKey{valueType: "string", hiddenValue: true, setter: daemonConfigSetPassword}, "images.auto_update_cached": &daemonConfigKey{valueType: "bool", defaultValue: "true"}, "images.auto_update_interval": &daemonConfigKey{valueType: "int", defaultValue: "6"}, "images.compression_algorithm": &daemonConfigKey{valueType: "string", validator: daemonConfigValidateCommand, defaultValue: "gzip"}, "images.remote_cache_expiry": &daemonConfigKey{valueType: "int", defaultValue: "10", trigger: daemonConfigTriggerExpiry}, 
"storage.lvm_fstype": &daemonConfigKey{valueType: "string", defaultValue: "ext4", validValues: []string{"ext4", "xfs"}}, "storage.lvm_thinpool_name": &daemonConfigKey{valueType: "string", defaultValue: "LXDPool", validator: storageLVMValidateThinPoolName}, "storage.lvm_vg_name": &daemonConfigKey{valueType: "string", validator: storageLVMValidateVolumeGroupName, setter: daemonConfigSetStorage}, "storage.lvm_volume_size": &daemonConfigKey{valueType: "string", defaultValue: "10GiB"}, "storage.zfs_pool_name": &daemonConfigKey{valueType: "string", validator: storageZFSValidatePoolName, setter: daemonConfigSetStorage}, } // Load the values from the DB dbValues, err := dbConfigValuesGet(db) if err != nil { return err } daemonConfigLock.Lock() for k, v := range dbValues { _, ok := daemonConfig[k] if !ok { shared.Log.Error("Found invalid configuration key in database", log.Ctx{"key": k}) } daemonConfig[k].currentValue = v } daemonConfigLock.Unlock() return nil } func daemonConfigRender() map[string]interface{} { config := map[string]interface{}{} // Turn the config into a JSON-compatible map for k, v := range daemonConfig { value := v.Get() if value != v.defaultValue { if v.hiddenValue { config[k] = true } else { config[k] = value } } } return config } func daemonConfigSetPassword(d *Daemon, key string, value string) (string, error) { // Nothing to do on unset if value == "" { return value, nil } // Hash the password buf := make([]byte, 32) _, err := io.ReadFull(rand.Reader, buf) if err != nil { return "", err } hash, err := scrypt.Key([]byte(value), buf, 1<<14, 8, 1, 64) if err != nil { return "", err } buf = append(buf, hash...) 
value = hex.EncodeToString(buf) return value, nil } func daemonConfigSetStorage(d *Daemon, key string, value string) (string, error) { // The storage driver looks at daemonConfig so just set it temporarily daemonConfigLock.Lock() oldValue := daemonConfig[key].Get() daemonConfig[key].currentValue = value daemonConfigLock.Unlock() defer func() { daemonConfigLock.Lock() daemonConfig[key].currentValue = oldValue daemonConfigLock.Unlock() }() // Update the current storage driver err := d.SetupStorageDriver() if err != nil { return "", err } return value, nil } func daemonConfigSetAddress(d *Daemon, key string, value string) (string, error) { // Update the current https address err := d.UpdateHTTPsPort(value) if err != nil { return "", err } return value, nil } func daemonConfigSetProxy(d *Daemon, key string, value string) (string, error) { // Get the current config config := map[string]string{} config["core.proxy_https"] = daemonConfig["core.proxy_https"].Get() config["core.proxy_http"] = daemonConfig["core.proxy_http"].Get() config["core.proxy_ignore_hosts"] = daemonConfig["core.proxy_ignore_hosts"].Get() // Apply the change config[key] = value // Update the cached proxy function d.proxy = shared.ProxyFromConfig( config["core.proxy_https"], config["core.proxy_http"], config["core.proxy_ignore_hosts"], ) // Clear the simplestreams cache as it's tied to the old proxy config imageStreamCacheLock.Lock() for k, _ := range imageStreamCache { delete(imageStreamCache, k) } imageStreamCacheLock.Unlock() return value, nil } func daemonConfigTriggerExpiry(d *Daemon, key string, value string) { // Trigger an image pruning run d.pruneChan <- true } func daemonConfigValidateCommand(d *Daemon, key string, value string) error { _, err := exec.LookPath(value) return err } lxd-2.0.2/lxd/daemon_images.go000066400000000000000000000213641272140510300162110ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "io" "mime" "mime/multipart" "os" "path/filepath" "sync" "time" 
"github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) type imageStreamCacheEntry struct { ss *shared.SimpleStreams expiry time.Time } var imageStreamCache = map[string]*imageStreamCacheEntry{} var imageStreamCacheLock sync.Mutex // ImageDownload checks if we already have the image with the given // fingerprint, and otherwise downloads it from the remote server. func (d *Daemon) ImageDownload(op *operation, server string, protocol string, certificate string, secret string, alias string, forContainer bool, autoUpdate bool) (string, error) { var err error var ss *shared.SimpleStreams if protocol == "" { protocol = "lxd" } fp := alias // Expand aliases if protocol == "simplestreams" { imageStreamCacheLock.Lock() entry, _ := imageStreamCache[server] if entry == nil || entry.expiry.Before(time.Now()) { ss, err = shared.SimpleStreamsClient(server, d.proxy) if err != nil { imageStreamCacheLock.Unlock() return "", err } entry = &imageStreamCacheEntry{ss: ss, expiry: time.Now().Add(time.Hour)} imageStreamCache[server] = entry } else { shared.Debugf("Using SimpleStreams cache entry for %s, expires at %s", server, entry.expiry) ss = entry.ss } imageStreamCacheLock.Unlock() target := ss.GetAlias(fp) if target != "" { fp = target } image, err := ss.GetImageInfo(fp) if err != nil { return "", err } if fp == alias { alias = image.Fingerprint } fp = image.Fingerprint } else if protocol == "lxd" { target, err := remoteGetImageFingerprint(d, server, certificate, fp) if err == nil && target != "" { fp = target } } if _, _, err := dbImageGet(d.db, fp, false, false); err == nil { shared.Log.Debug("Image already exists in the db", log.Ctx{"image": fp}) // already have it return fp, nil } shared.Log.Info( "Image not in the db, downloading it", log.Ctx{"image": fp, "server": server}) // Now check if we are already downloading the image d.imagesDownloadingLock.RLock() if waitChannel, ok := d.imagesDownloading[fp]; ok { // We are already downloading this image d.imagesDownloadingLock.RUnlock() shared.Log.Info( 
"Already downloading the image, waiting for it to succeed", log.Ctx{"image": fp}) // Wait until the download finishes (channel closes) if _, ok := <-waitChannel; ok { shared.Log.Warn("Value transmitted over image lock semaphore?") } if _, _, err := dbImageGet(d.db, fp, false, true); err != nil { shared.Log.Error( "Previous download didn't succeed", log.Ctx{"image": fp}) return "", fmt.Errorf("Previous download didn't succeed") } shared.Log.Info( "Previous download succeeded", log.Ctx{"image": fp}) return fp, nil } d.imagesDownloadingLock.RUnlock() shared.Log.Info( "Downloading the image", log.Ctx{"image": fp}) // Add the download to the queue d.imagesDownloadingLock.Lock() d.imagesDownloading[fp] = make(chan bool) d.imagesDownloadingLock.Unlock() // Unlock once this func ends. defer func() { d.imagesDownloadingLock.Lock() if waitChannel, ok := d.imagesDownloading[fp]; ok { close(waitChannel) delete(d.imagesDownloading, fp) } d.imagesDownloadingLock.Unlock() }() exporturl := server var info shared.ImageInfo info.Fingerprint = fp destDir := shared.VarPath("images") destName := filepath.Join(destDir, fp) if shared.PathExists(destName) { d.Storage.ImageDelete(fp) } progress := func(progressInt int) { if op == nil { return } meta := op.metadata if meta == nil { meta = make(map[string]interface{}) } progress := fmt.Sprintf("%d%%", progressInt) if meta["download_progress"] != progress { meta["download_progress"] = progress op.UpdateMetadata(meta) } } if protocol == "lxd" { /* grab the metadata from /1.0/images/%s */ var url string if secret != "" { url = fmt.Sprintf( "%s/%s/images/%s?secret=%s", server, shared.APIVersion, fp, secret) } else { url = fmt.Sprintf("%s/%s/images/%s", server, shared.APIVersion, fp) } resp, err := d.httpGetSync(url, certificate) if err != nil { shared.Log.Error( "Failed to download image metadata", log.Ctx{"image": fp, "err": err}) return "", err } if err := json.Unmarshal(resp.Metadata, &info); err != nil { return "", err } /* now grab the 
actual file from /1.0/images/%s/export */ if secret != "" { exporturl = fmt.Sprintf( "%s/%s/images/%s/export?secret=%s", server, shared.APIVersion, fp, secret) } else { exporturl = fmt.Sprintf( "%s/%s/images/%s/export", server, shared.APIVersion, fp) } } else if protocol == "simplestreams" { err := ss.Download(fp, "meta", destName, nil) if err != nil { return "", err } err = ss.Download(fp, "root", destName+".rootfs", progress) if err != nil { return "", err } info, err := ss.GetImageInfo(fp) if err != nil { return "", err } info.Public = false info.AutoUpdate = autoUpdate _, err = imageBuildFromInfo(d, *info) if err != nil { return "", err } if alias != fp { id, _, err := dbImageGet(d.db, fp, false, true) if err != nil { return "", err } err = dbImageSourceInsert(d.db, id, server, protocol, "", alias) if err != nil { return "", err } } if forContainer { return fp, dbImageLastAccessInit(d.db, fp) } return fp, nil } raw, err := d.httpGetFile(exporturl, certificate) if err != nil { shared.Log.Error( "Failed to download image", log.Ctx{"image": fp, "err": err}) return "", err } info.Size = raw.ContentLength ctype, ctypeParams, err := mime.ParseMediaType(raw.Header.Get("Content-Type")) if err != nil { ctype = "application/octet-stream" } body := &shared.TransferProgress{Reader: raw.Body, Length: raw.ContentLength, Handler: progress} if ctype == "multipart/form-data" { // Parse the POST data mr := multipart.NewReader(body, ctypeParams["boundary"]) // Get the metadata tarball part, err := mr.NextPart() if err != nil { shared.Log.Error( "Invalid multipart image", log.Ctx{"image": fp, "err": err}) return "", err } if part.FormName() != "metadata" { shared.Log.Error( "Invalid multipart image", log.Ctx{"image": fp, "err": err}) return "", fmt.Errorf("Invalid multipart image") } destName = filepath.Join(destDir, info.Fingerprint) f, err := os.Create(destName) if err != nil { shared.Log.Error( "Failed to save image", log.Ctx{"image": fp, "err": err}) return "", err } _, err = 
io.Copy(f, part) f.Close() if err != nil { shared.Log.Error( "Failed to save image", log.Ctx{"image": fp, "err": err}) return "", err } // Get the rootfs tarball part, err = mr.NextPart() if err != nil { shared.Log.Error( "Invalid multipart image", log.Ctx{"image": fp, "err": err}) return "", err } if part.FormName() != "rootfs" { shared.Log.Error( "Invalid multipart image", log.Ctx{"image": fp}) return "", fmt.Errorf("Invalid multipart image") } destName = filepath.Join(destDir, info.Fingerprint+".rootfs") f, err = os.Create(destName) if err != nil { shared.Log.Error( "Failed to save image", log.Ctx{"image": fp, "err": err}) return "", err } _, err = io.Copy(f, part) f.Close() if err != nil { shared.Log.Error( "Failed to save image", log.Ctx{"image": fp, "err": err}) return "", err } } else { destName = filepath.Join(destDir, info.Fingerprint) f, err := os.Create(destName) if err != nil { shared.Log.Error( "Failed to save image", log.Ctx{"image": fp, "err": err}) return "", err } _, err = io.Copy(f, body) f.Close() if err != nil { shared.Log.Error( "Failed to save image", log.Ctx{"image": fp, "err": err}) return "", err } } if protocol == "direct" { imageMeta, err := getImageMetadata(destName) if err != nil { return "", err } info.Architecture = imageMeta.Architecture info.CreationDate = time.Unix(imageMeta.CreationDate, 0) info.ExpiryDate = time.Unix(imageMeta.ExpiryDate, 0) info.Properties = imageMeta.Properties } // By default, make all downloaded images private info.Public = false if alias != fp && secret == "" { info.AutoUpdate = autoUpdate } _, err = imageBuildFromInfo(d, info) if err != nil { shared.Log.Error( "Failed to create image", log.Ctx{"image": fp, "err": err}) return "", err } if alias != fp { id, _, err := dbImageGet(d.db, fp, false, true) if err != nil { return "", err } err = dbImageSourceInsert(d.db, id, server, protocol, "", alias) if err != nil { return "", err } } shared.Log.Info( "Download succeeded", log.Ctx{"image": fp}) if forContainer { 
return fp, dbImageLastAccessInit(d.db, fp) } return fp, nil } lxd-2.0.2/lxd/daemon_test.go000066400000000000000000000012331272140510300157140ustar00rootroot00000000000000package main func (suite *lxdTestSuite) Test_config_value_set_empty_removes_val() { var err error d := suite.d err = daemonConfig["core.trust_password"].Set(d, "foo") suite.Req.Nil(err) val := daemonConfig["core.trust_password"].Get() suite.Req.Equal(len(val), 192) valMap := daemonConfigRender() value, present := valMap["core.trust_password"] suite.Req.True(present) suite.Req.Equal(value, true) err = daemonConfig["core.trust_password"].Set(d, "") suite.Req.Nil(err) val = daemonConfig["core.trust_password"].Get() suite.Req.Equal(val, "") valMap = daemonConfigRender() _, present = valMap["core.trust_password"] suite.Req.False(present) } lxd-2.0.2/lxd/db.go000066400000000000000000000274751272140510300140110ustar00rootroot00000000000000package main import ( "database/sql" "fmt" "time" "github.com/mattn/go-sqlite3" "github.com/lxc/lxd/shared" ) var ( // DbErrAlreadyDefined happens when the given entry already exists, // for example a container. DbErrAlreadyDefined = fmt.Errorf("The container/snapshot already exists") /* NoSuchObjectError is needed because, in the case of joins (and probably other) * queries, we don't get back sql.ErrNoRows when no rows are returned, even though we do * on selects without joins. Instead, you can use this error to * propagate up and generate proper 404s to the client when something * isn't found so we don't abuse sql.ErrNoRows any more than we * already do. */ NoSuchObjectError = fmt.Errorf("No such object") ) // Profile is here to order Profiles. type Profile struct { name string order int } // Profiles will contain a list of all Profiles. type Profiles []Profile const DB_CURRENT_VERSION int = 31 // CURRENT_SCHEMA contains the current SQLite SQL Schema. 
const CURRENT_SCHEMA string = ` CREATE TABLE IF NOT EXISTS certificates ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, fingerprint VARCHAR(255) NOT NULL, type INTEGER NOT NULL, name VARCHAR(255) NOT NULL, certificate TEXT NOT NULL, UNIQUE (fingerprint) ); CREATE TABLE IF NOT EXISTS config ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, UNIQUE (key) ); CREATE TABLE IF NOT EXISTS containers ( id INTEGER primary key AUTOINCREMENT NOT NULL, name VARCHAR(255) NOT NULL, architecture INTEGER NOT NULL, type INTEGER NOT NULL, ephemeral INTEGER NOT NULL DEFAULT 0, stateful INTEGER NOT NULL DEFAULT 0, creation_date DATETIME, UNIQUE (name) ); CREATE TABLE IF NOT EXISTS containers_config ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (container_id) REFERENCES containers (id) ON DELETE CASCADE, UNIQUE (container_id, key) ); CREATE TABLE IF NOT EXISTS containers_devices ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, name VARCHAR(255) NOT NULL, type INTEGER NOT NULL default 0, FOREIGN KEY (container_id) REFERENCES containers (id) ON DELETE CASCADE, UNIQUE (container_id, name) ); CREATE TABLE IF NOT EXISTS containers_devices_config ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_device_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (container_device_id) REFERENCES containers_devices (id) ON DELETE CASCADE, UNIQUE (container_device_id, key) ); CREATE TABLE IF NOT EXISTS containers_profiles ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, profile_id INTEGER NOT NULL, apply_order INTEGER NOT NULL default 0, UNIQUE (container_id, profile_id), FOREIGN KEY (container_id) REFERENCES containers(id) ON DELETE CASCADE, FOREIGN KEY (profile_id) REFERENCES profiles(id) ON DELETE CASCADE ); CREATE TABLE IF NOT EXISTS images ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 
cached INTEGER NOT NULL DEFAULT 0, fingerprint VARCHAR(255) NOT NULL, filename VARCHAR(255) NOT NULL, size INTEGER NOT NULL, public INTEGER NOT NULL DEFAULT 0, auto_update INTEGER NOT NULL DEFAULT 0, architecture INTEGER NOT NULL, creation_date DATETIME, expiry_date DATETIME, upload_date DATETIME NOT NULL, last_use_date DATETIME, UNIQUE (fingerprint) ); CREATE TABLE IF NOT EXISTS images_aliases ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, name VARCHAR(255) NOT NULL, image_id INTEGER NOT NULL, description VARCHAR(255), FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE, UNIQUE (name) ); CREATE TABLE IF NOT EXISTS images_properties ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, image_id INTEGER NOT NULL, type INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE ); CREATE TABLE IF NOT EXISTS images_source ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, image_id INTEGER NOT NULL, server TEXT NOT NULL, protocol INTEGER NOT NULL, certificate TEXT NOT NULL, alias VARCHAR(255) NOT NULL, FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE ); CREATE TABLE IF NOT EXISTS profiles ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, name VARCHAR(255) NOT NULL, description TEXT, UNIQUE (name) ); CREATE TABLE IF NOT EXISTS profiles_config ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, profile_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value VARCHAR(255), UNIQUE (profile_id, key), FOREIGN KEY (profile_id) REFERENCES profiles(id) ON DELETE CASCADE ); CREATE TABLE IF NOT EXISTS profiles_devices ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, profile_id INTEGER NOT NULL, name VARCHAR(255) NOT NULL, type INTEGER NOT NULL default 0, UNIQUE (profile_id, name), FOREIGN KEY (profile_id) REFERENCES profiles (id) ON DELETE CASCADE ); CREATE TABLE IF NOT EXISTS profiles_devices_config ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, profile_device_id INTEGER NOT NULL, key VARCHAR(255) NOT 
NULL, value TEXT, UNIQUE (profile_device_id, key), FOREIGN KEY (profile_device_id) REFERENCES profiles_devices (id) ON DELETE CASCADE ); CREATE TABLE IF NOT EXISTS schema ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, version INTEGER NOT NULL, updated_at DATETIME NOT NULL, UNIQUE (version) );` // Create the initial (current) schema for a given SQLite DB connection. func createDb(db *sql.DB) (err error) { latestVersion := dbGetSchema(db) if latestVersion != 0 { return nil } _, err = db.Exec(CURRENT_SCHEMA) if err != nil { return err } // There isn't an entry for schema version, let's put it in. insertStmt := `INSERT INTO schema (version, updated_at) values (?, strftime("%s"));` _, err = db.Exec(insertStmt, DB_CURRENT_VERSION) if err != nil { return err } err = dbProfileCreateDefault(db) if err != nil { return err } return dbProfileCreateDocker(db) } func dbGetSchema(db *sql.DB) (v int) { arg1 := []interface{}{} arg2 := []interface{}{&v} q := "SELECT max(version) FROM schema" err := dbQueryRowScan(db, q, arg1, arg2) if err != nil { return 0 } return v } // Create a database connection object and return it. func initializeDbObject(d *Daemon, path string) (err error) { var openPath string timeout := 5 // TODO - make this command-line configurable? // These are used to tune the transaction BEGIN behavior instead of using the // similar "locking_mode" pragma (locking for the whole database connection). openPath = fmt.Sprintf("%s?_busy_timeout=%d&_txlock=exclusive", path, timeout*1000) // Open the database. If the file doesn't exist it is created. d.db, err = sql.Open("sqlite3", openPath) if err != nil { return err } // Create the DB if it doesn't exist. err = createDb(d.db) if err != nil { return fmt.Errorf("Error creating database: %s", err) } // Run PRAGMA statements now since they are *per-connection*. 
d.db.Exec("PRAGMA foreign_keys=ON;") // This allows us to use ON DELETE CASCADE v := dbGetSchema(d.db) if v != DB_CURRENT_VERSION { err = dbUpdate(d, v) if err != nil { return err } } return nil } func isDbLockedError(err error) bool { if err == nil { return false } if err == sqlite3.ErrLocked || err == sqlite3.ErrBusy { return true } if err.Error() == "database is locked" { return true } return false } func isNoMatchError(err error) bool { if err == nil { return false } if err.Error() == "sql: no rows in result set" { return true } return false } func dbBegin(db *sql.DB) (*sql.Tx, error) { for i := 0; i < 100; i++ { tx, err := db.Begin() if err == nil { return tx, nil } if !isDbLockedError(err) { shared.Debugf("DbBegin: error %q", err) return nil, err } time.Sleep(100 * time.Millisecond) } shared.Debugf("DbBegin: DB still locked") shared.PrintStack() return nil, fmt.Errorf("DB is locked") } func txCommit(tx *sql.Tx) error { for i := 0; i < 100; i++ { err := tx.Commit() if err == nil { return nil } if !isDbLockedError(err) { shared.Debugf("Txcommit: error %q", err) return err } time.Sleep(100 * time.Millisecond) } shared.Debugf("Txcommit: db still locked") shared.PrintStack() return fmt.Errorf("DB is locked") } func dbQueryRowScan(db *sql.DB, q string, args []interface{}, outargs []interface{}) error { for i := 0; i < 100; i++ { err := db.QueryRow(q, args...).Scan(outargs...) if err == nil { return nil } if isNoMatchError(err) { return err } if !isDbLockedError(err) { return err } time.Sleep(100 * time.Millisecond) } shared.Debugf("DbQueryRowScan: query %q args %q, DB still locked", q, args) shared.PrintStack() return fmt.Errorf("DB is locked") } func dbQuery(db *sql.DB, q string, args ...interface{}) (*sql.Rows, error) { for i := 0; i < 100; i++ { result, err := db.Query(q, args...) 
if err == nil { return result, nil } if !isDbLockedError(err) { shared.Debugf("DbQuery: query %q error %q", q, err) return nil, err } time.Sleep(100 * time.Millisecond) } shared.Debugf("DbQuery: query %q args %q, DB still locked", q, args) shared.PrintStack() return nil, fmt.Errorf("DB is locked") } func doDbQueryScan(db *sql.DB, q string, args []interface{}, outargs []interface{}) ([][]interface{}, error) { rows, err := db.Query(q, args...) if err != nil { return [][]interface{}{}, err } defer rows.Close() result := [][]interface{}{} for rows.Next() { ptrargs := make([]interface{}, len(outargs)) for i := range outargs { switch t := outargs[i].(type) { case string: str := "" ptrargs[i] = &str case int: integer := 0 ptrargs[i] = &integer default: return [][]interface{}{}, fmt.Errorf("Bad interface type: %s", t) } } err = rows.Scan(ptrargs...) if err != nil { return [][]interface{}{}, err } newargs := make([]interface{}, len(outargs)) for i := range ptrargs { switch t := outargs[i].(type) { case string: newargs[i] = *ptrargs[i].(*string) case int: newargs[i] = *ptrargs[i].(*int) default: return [][]interface{}{}, fmt.Errorf("Bad interface type: %s", t) } } result = append(result, newargs) } err = rows.Err() if err != nil { return [][]interface{}{}, err } return result, nil } /* * . q is the database query * . inargs is an array of interfaces containing the query arguments * . outfmt is an array of interfaces containing the right types of output * arguments, i.e. * var arg1 string * var arg2 int * outfmt := {}interface{}{arg1, arg2} * * The result will be an array (one per output row) of arrays (one per output argument) * of interfaces, containing pointers to the actual output arguments. 
*/ func dbQueryScan(db *sql.DB, q string, inargs []interface{}, outfmt []interface{}) ([][]interface{}, error) { for i := 0; i < 100; i++ { result, err := doDbQueryScan(db, q, inargs, outfmt) if err == nil { return result, nil } if !isDbLockedError(err) { shared.Debugf("DbQuery: query %q error %q", q, err) return nil, err } time.Sleep(100 * time.Millisecond) } shared.Debugf("DbQueryscan: query %q inargs %q, DB still locked", q, inargs) shared.PrintStack() return nil, fmt.Errorf("DB is locked") } func dbExec(db *sql.DB, q string, args ...interface{}) (sql.Result, error) { for i := 0; i < 100; i++ { result, err := db.Exec(q, args...) if err == nil { return result, nil } if !isDbLockedError(err) { shared.Debugf("DbExec: query %q error %q", q, err) return nil, err } time.Sleep(100 * time.Millisecond) } shared.Debugf("DbExec: query %q args %q, DB still locked", q, args) shared.PrintStack() return nil, fmt.Errorf("DB is locked") }

lxd-2.0.2/lxd/db_certificates.go

package main import ( "database/sql" _ "github.com/mattn/go-sqlite3" ) // dbCertInfo is here to pass the certificates content // from the database around type dbCertInfo struct { ID int Fingerprint string Type int Name string Certificate string } // dbCertsGet returns all certificates from the DB as CertBaseInfo objects. func dbCertsGet(db *sql.DB) (certs []*dbCertInfo, err error) { rows, err := dbQuery( db, "SELECT id, fingerprint, type, name, certificate FROM certificates", ) if err != nil { return certs, err } defer rows.Close() for rows.Next() { cert := new(dbCertInfo) rows.Scan( &cert.ID, &cert.Fingerprint, &cert.Type, &cert.Name, &cert.Certificate, ) certs = append(certs, cert) } return certs, nil } // dbCertGet gets a CertBaseInfo object from the database. // The argument fingerprint will be queried with a LIKE query, meaning you can // pass a short form and get back the full fingerprint.
// There can never be more than one certificate with a given fingerprint, as it is // enforced by a UNIQUE constraint in the schema. func dbCertGet(db *sql.DB, fingerprint string) (cert *dbCertInfo, err error) { cert = new(dbCertInfo) inargs := []interface{}{fingerprint + "%"} outfmt := []interface{}{ &cert.ID, &cert.Fingerprint, &cert.Type, &cert.Name, &cert.Certificate, } query := ` SELECT id, fingerprint, type, name, certificate FROM certificates WHERE fingerprint LIKE ?` if err = dbQueryRowScan(db, query, inargs, outfmt); err != nil { return nil, err } return cert, err } // dbCertSave stores a CertBaseInfo object in the db; // it ignores the ID field of the dbCertInfo. func dbCertSave(db *sql.DB, cert *dbCertInfo) error { tx, err := dbBegin(db) if err != nil { return err } stmt, err := tx.Prepare(` INSERT INTO certificates ( fingerprint, type, name, certificate ) VALUES (?, ?, ?, ?)`, ) if err != nil { tx.Rollback() return err } defer stmt.Close() _, err = stmt.Exec( cert.Fingerprint, cert.Type, cert.Name, cert.Certificate, ) if err != nil { tx.Rollback() return err } return txCommit(tx) } // dbCertDelete deletes a certificate from the db.
func dbCertDelete(db *sql.DB, fingerprint string) error { _, err := dbExec( db, "DELETE FROM certificates WHERE fingerprint=?", fingerprint, ) return err }

lxd-2.0.2/lxd/db_config.go

package main import ( "database/sql" _ "github.com/mattn/go-sqlite3" ) func dbConfigValuesGet(db *sql.DB) (map[string]string, error) { q := "SELECT key, value FROM config" rows, err := dbQuery(db, q) if err != nil { return map[string]string{}, err } defer rows.Close() results := map[string]string{} for rows.Next() { var key, value string rows.Scan(&key, &value) results[key] = value } return results, nil } func dbConfigValueSet(db *sql.DB, key string, value string) error { tx, err := dbBegin(db) if err != nil { return err } _, err = tx.Exec("DELETE FROM config WHERE key=?", key) if err != nil { tx.Rollback() return err } if value != "" { str := `INSERT INTO config (key, value) VALUES (?, ?);` stmt, err := tx.Prepare(str) if err != nil { tx.Rollback() return err } defer stmt.Close() _, err = stmt.Exec(key, value) if err != nil { tx.Rollback() return err } } err = txCommit(tx) if err != nil { return err } return nil }

lxd-2.0.2/lxd/db_containers.go

package main import ( "database/sql" "fmt" "time" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) type containerType int const ( cTypeRegular containerType = 0 cTypeSnapshot containerType = 1 ) func dbContainerRemove(db *sql.DB, name string) error { id, err := dbContainerId(db, name) if err != nil { return err } tx, err := dbBegin(db) if err != nil { return err } err = dbContainerConfigClear(tx, id) if err != nil { tx.Rollback() return err } _, err = tx.Exec("DELETE FROM containers WHERE id=?", id) if err != nil { tx.Rollback() return err } return txCommit(tx) } func dbContainerName(db *sql.DB, id int) (string, error) { q := "SELECT name FROM containers WHERE id=?"
name := "" arg1 := []interface{}{id} arg2 := []interface{}{&name} err := dbQueryRowScan(db, q, arg1, arg2) return name, err } func dbContainerId(db *sql.DB, name string) (int, error) { q := "SELECT id FROM containers WHERE name=?" id := -1 arg1 := []interface{}{name} arg2 := []interface{}{&id} err := dbQueryRowScan(db, q, arg1, arg2) return id, err } func dbContainerGet(db *sql.DB, name string) (containerArgs, error) { args := containerArgs{} args.Name = name ephemInt := -1 statefulInt := -1 q := "SELECT id, architecture, type, ephemeral, stateful, creation_date FROM containers WHERE name=?" arg1 := []interface{}{name} arg2 := []interface{}{&args.Id, &args.Architecture, &args.Ctype, &ephemInt, &statefulInt, &args.CreationDate} err := dbQueryRowScan(db, q, arg1, arg2) if err != nil { return args, err } if args.Id == -1 { return args, fmt.Errorf("Unknown container") } if ephemInt == 1 { args.Ephemeral = true } if statefulInt == 1 { args.Stateful = true } config, err := dbContainerConfig(db, args.Id) if err != nil { return args, err } args.Config = config profiles, err := dbContainerProfiles(db, args.Id) if err != nil { return args, err } args.Profiles = profiles /* get container_devices */ args.Devices = shared.Devices{} newdevs, err := dbDevices(db, name, false) if err != nil { return args, err } for k, v := range newdevs { args.Devices[k] = v } return args, nil } func dbContainerCreate(db *sql.DB, args containerArgs) (int, error) { id, err := dbContainerId(db, args.Name) if err == nil { return 0, DbErrAlreadyDefined } tx, err := dbBegin(db) if err != nil { return 0, err } ephemInt := 0 if args.Ephemeral == true { ephemInt = 1 } statefulInt := 0 if args.Stateful == true { statefulInt = 1 } args.CreationDate = time.Now().UTC() str := fmt.Sprintf("INSERT INTO containers (name, architecture, type, ephemeral, creation_date, stateful) VALUES (?, ?, ?, ?, ?, ?)") stmt, err := tx.Prepare(str) if err != nil { tx.Rollback() return 0, err } defer stmt.Close() result, err := 
stmt.Exec(args.Name, args.Architecture, args.Ctype, ephemInt, args.CreationDate.Unix(), statefulInt) if err != nil { tx.Rollback() return 0, err } id64, err := result.LastInsertId() if err != nil { tx.Rollback() return 0, fmt.Errorf("Error inserting %s into database", args.Name) } // TODO: is this really int64? we should fix it everywhere if so id = int(id64) if err := dbContainerConfigInsert(tx, id, args.Config); err != nil { tx.Rollback() return 0, err } if err := dbContainerProfilesInsert(tx, id, args.Profiles); err != nil { tx.Rollback() return 0, err } if err := dbDevicesAdd(tx, "container", int64(id), args.Devices); err != nil { tx.Rollback() return 0, err } return id, txCommit(tx) } func dbContainerConfigClear(tx *sql.Tx, id int) error { _, err := tx.Exec("DELETE FROM containers_config WHERE container_id=?", id) if err != nil { return err } _, err = tx.Exec("DELETE FROM containers_profiles WHERE container_id=?", id) if err != nil { return err } _, err = tx.Exec(`DELETE FROM containers_devices_config WHERE id IN (SELECT containers_devices_config.id FROM containers_devices_config JOIN containers_devices ON containers_devices_config.container_device_id=containers_devices.id WHERE containers_devices.container_id=?)`, id) if err != nil { return err } _, err = tx.Exec("DELETE FROM containers_devices WHERE container_id=?", id) return err } func dbContainerConfigInsert(tx *sql.Tx, id int, config map[string]string) error { str := "INSERT INTO containers_config (container_id, key, value) values (?, ?, ?)" stmt, err := tx.Prepare(str) if err != nil { return err } defer stmt.Close() for k, v := range config { _, err := stmt.Exec(id, k, v) if err != nil { shared.Debugf("Error adding configuration item %s = %s to container %d", k, v, id) return err } } return nil } func dbContainerConfigRemove(db *sql.DB, id int, name string) error { _, err := dbExec(db, "DELETE FROM containers_config WHERE key=? 
AND container_id=?", name, id) return err } func dbContainerSetStateful(db *sql.DB, id int, stateful bool) error { statefulInt := 0 if stateful { statefulInt = 1 } _, err := dbExec(db, "UPDATE containers SET stateful=? WHERE id=?", statefulInt, id) return err } func dbContainerProfilesInsert(tx *sql.Tx, id int, profiles []string) error { applyOrder := 1 str := `INSERT INTO containers_profiles (container_id, profile_id, apply_order) VALUES (?, (SELECT id FROM profiles WHERE name=?), ?);` stmt, err := tx.Prepare(str) if err != nil { return err } defer stmt.Close() for _, p := range profiles { _, err = stmt.Exec(id, p, applyOrder) if err != nil { shared.Debugf("Error adding profile %s to container: %s", p, err) return err } applyOrder = applyOrder + 1 } return nil } // Get a list of profiles for a given container id. func dbContainerProfiles(db *sql.DB, containerId int) ([]string, error) { var name string var profiles []string query := ` SELECT name FROM containers_profiles JOIN profiles ON containers_profiles.profile_id=profiles.id WHERE container_id=? ORDER BY containers_profiles.apply_order` inargs := []interface{}{containerId} outfmt := []interface{}{name} results, err := dbQueryScan(db, query, inargs, outfmt) if err != nil { return nil, err } for _, r := range results { name = r[0].(string) profiles = append(profiles, name) } return profiles, nil } // dbContainerConfig gets the container configuration map from the DB func dbContainerConfig(db *sql.DB, containerId int) (map[string]string, error) { var key, value string q := `SELECT key, value FROM containers_config WHERE container_id=?` inargs := []interface{}{containerId} outfmt := []interface{}{key, value} // Results is already a slice here, not db Rows anymore. 
results, err := dbQueryScan(db, q, inargs, outfmt) if err != nil { return nil, err //SmartError will wrap this and make "not found" errors pretty } config := map[string]string{} for _, r := range results { key = r[0].(string) value = r[1].(string) config[key] = value } return config, nil } func dbContainersList(db *sql.DB, cType containerType) ([]string, error) { q := fmt.Sprintf("SELECT name FROM containers WHERE type=? ORDER BY name") inargs := []interface{}{cType} var container string outfmt := []interface{}{container} result, err := dbQueryScan(db, q, inargs, outfmt) if err != nil { return nil, err } var ret []string for _, container := range result { ret = append(ret, container[0].(string)) } return ret, nil } func dbContainerRename(db *sql.DB, oldName string, newName string) error { tx, err := dbBegin(db) if err != nil { return err } str := fmt.Sprintf("UPDATE containers SET name = ? WHERE name = ?") stmt, err := tx.Prepare(str) if err != nil { tx.Rollback() return err } defer stmt.Close() shared.Log.Debug( "Calling SQL Query", log.Ctx{ "query": "UPDATE containers SET name = ? WHERE name = ?", "oldName": oldName, "newName": newName}) if _, err := stmt.Exec(newName, oldName); err != nil { tx.Rollback() return err } return txCommit(tx) } func dbContainerUpdate(tx *sql.Tx, id int, architecture int, ephemeral bool) error { str := fmt.Sprintf("UPDATE containers SET architecture=?, ephemeral=? WHERE id=?") stmt, err := tx.Prepare(str) if err != nil { return err } defer stmt.Close() ephemeralInt := 0 if ephemeral { ephemeralInt = 1 } if _, err := stmt.Exec(architecture, ephemeralInt, id); err != nil { return err } return nil } func dbContainerGetSnapshots(db *sql.DB, name string) ([]string, error) { result := []string{} regexp := name + shared.SnapshotDelimiter length := len(regexp) q := "SELECT name FROM containers WHERE type=? AND SUBSTR(name,1,?)=?" 
inargs := []interface{}{cTypeSnapshot, length, regexp} outfmt := []interface{}{name} dbResults, err := dbQueryScan(db, q, inargs, outfmt) if err != nil { return result, err } for _, r := range dbResults { result = append(result, r[0].(string)) } return result, nil }

lxd-2.0.2/lxd/db_devices.go

package main import ( "database/sql" "fmt" _ "github.com/mattn/go-sqlite3" "github.com/lxc/lxd/shared" ) func dbDeviceTypeToString(t int) (string, error) { switch t { case 0: return "none", nil case 1: return "nic", nil case 2: return "disk", nil case 3: return "unix-char", nil case 4: return "unix-block", nil default: return "", fmt.Errorf("Invalid device type %d", t) } } func dbDeviceTypeToInt(t string) (int, error) { switch t { case "none": return 0, nil case "nic": return 1, nil case "disk": return 2, nil case "unix-char": return 3, nil case "unix-block": return 4, nil default: return -1, fmt.Errorf("Invalid device type %s", t) } } func dbDevicesAdd(tx *sql.Tx, w string, cID int64, devices shared.Devices) error { // Prepare the devices entry SQL str1 := fmt.Sprintf("INSERT INTO %ss_devices (%s_id, name, type) VALUES (?, ?, ?)", w, w) stmt1, err := tx.Prepare(str1) if err != nil { return err } defer stmt1.Close() // Prepare the devices config entry SQL str2 := fmt.Sprintf("INSERT INTO %ss_devices_config (%s_device_id, key, value) VALUES (?, ?, ?)", w, w) stmt2, err := tx.Prepare(str2) if err != nil { return err } defer stmt2.Close() // Insert all the devices for k, v := range devices { t, err := dbDeviceTypeToInt(v["type"]) if err != nil { return err } result, err := stmt1.Exec(cID, k, t) if err != nil { return err } id64, err := result.LastInsertId() if err != nil { return fmt.Errorf("Error inserting device %s into database", k) } id := int(id64) for ck, cv := range v { // The type is stored as int in the parent entry if ck == "type" { continue } _, err = stmt2.Exec(id, ck, cv) if err != nil {
return err } } } return nil } func dbDeviceConfig(db *sql.DB, id int, isprofile bool) (shared.Device, error) { var query string var key, value string newdev := shared.Device{} // That's a map[string]string inargs := []interface{}{id} outfmt := []interface{}{key, value} if isprofile { query = `SELECT key, value FROM profiles_devices_config WHERE profile_device_id=?` } else { query = `SELECT key, value FROM containers_devices_config WHERE container_device_id=?` } results, err := dbQueryScan(db, query, inargs, outfmt) if err != nil { return newdev, err } for _, r := range results { key = r[0].(string) value = r[1].(string) newdev[key] = value } return newdev, nil } func dbDevices(db *sql.DB, qName string, isprofile bool) (shared.Devices, error) { var q string if isprofile { q = `SELECT profiles_devices.id, profiles_devices.name, profiles_devices.type FROM profiles_devices JOIN profiles ON profiles_devices.profile_id = profiles.id WHERE profiles.name=?` } else { q = `SELECT containers_devices.id, containers_devices.name, containers_devices.type FROM containers_devices JOIN containers ON containers_devices.container_id = containers.id WHERE containers.name=?` } var id, dtype int var name, stype string inargs := []interface{}{qName} outfmt := []interface{}{id, name, dtype} results, err := dbQueryScan(db, q, inargs, outfmt) if err != nil { return nil, err } devices := shared.Devices{} for _, r := range results { id = r[0].(int) name = r[1].(string) stype, err = dbDeviceTypeToString(r[2].(int)) if err != nil { return nil, err } newdev, err := dbDeviceConfig(db, id, isprofile) if err != nil { return nil, err } newdev["type"] = stype devices[name] = newdev } return devices, nil }

lxd-2.0.2/lxd/db_images.go

package main import ( "database/sql" "fmt" "time" _ "github.com/mattn/go-sqlite3" "github.com/lxc/lxd/shared" ) var dbImageSourceProtocol = map[int]string{ 0: "lxd", 1: "direct", 2:
"simplestreams", } func dbImagesGet(db *sql.DB, public bool) ([]string, error) { q := "SELECT fingerprint FROM images" if public == true { q = "SELECT fingerprint FROM images WHERE public=1" } var fp string inargs := []interface{}{} outfmt := []interface{}{fp} dbResults, err := dbQueryScan(db, q, inargs, outfmt) if err != nil { return []string{}, err } results := []string{} for _, r := range dbResults { results = append(results, r[0].(string)) } return results, nil } func dbImagesGetExpired(db *sql.DB, expiry int64) ([]string, error) { q := `SELECT fingerprint FROM images WHERE cached=1 AND creation_date<=strftime('%s', date('now', '-` + fmt.Sprintf("%d", expiry) + ` day'))` var fp string inargs := []interface{}{} outfmt := []interface{}{fp} dbResults, err := dbQueryScan(db, q, inargs, outfmt) if err != nil { return []string{}, err } results := []string{} for _, r := range dbResults { results = append(results, r[0].(string)) } return results, nil } func dbImageSourceInsert(db *sql.DB, imageId int, server string, protocol string, certificate string, alias string) error { stmt := `INSERT INTO images_source (image_id, server, protocol, certificate, alias) values (?, ?, ?, ?, ?)` protocolInt := -1 for protoInt, protoString := range dbImageSourceProtocol { if protoString == protocol { protocolInt = protoInt } } if protocolInt == -1 { return fmt.Errorf("Invalid protocol: %s", protocol) } _, err := dbExec(db, stmt, imageId, server, protocolInt, certificate, alias) return err } func dbImageSourceGet(db *sql.DB, imageId int) (int, shared.ImageSource, error) { q := `SELECT id, server, protocol, certificate, alias FROM images_source WHERE image_id=?` id := 0 protocolInt := -1 result := shared.ImageSource{} arg1 := []interface{}{imageId} arg2 := []interface{}{&id, &result.Server, &protocolInt, &result.Certificate, &result.Alias} err := dbQueryRowScan(db, q, arg1, arg2) if err != nil { if err == sql.ErrNoRows { return -1, shared.ImageSource{}, NoSuchObjectError } return -1, 
shared.ImageSource{}, err } protocol, found := dbImageSourceProtocol[protocolInt] if !found { return -1, shared.ImageSource{}, fmt.Errorf("Invalid protocol: %d", protocolInt) } result.Protocol = protocol return id, result, nil } // dbImageGet gets an ImageBaseInfo object from the database. // The argument fingerprint will be queried with a LIKE query, meaning you can // pass a short form and get back the full fingerprint. // There can never be more than one image with a given fingerprint, as it is // enforced by a UNIQUE constraint in the schema. func dbImageGet(db *sql.DB, fingerprint string, public bool, strictMatching bool) (int, *shared.ImageInfo, error) { var err error var create, expire, used, upload *time.Time // These hold the db-returned times // The object we'll actually return image := shared.ImageInfo{} id := -1 arch := -1 // These two humongous things will be filled by the call to DbQueryRowScan outfmt := []interface{}{&id, &image.Fingerprint, &image.Filename, &image.Size, &image.Cached, &image.Public, &image.AutoUpdate, &arch, &create, &expire, &used, &upload} var query string var inargs []interface{} if strictMatching { inargs = []interface{}{fingerprint} query = ` SELECT id, fingerprint, filename, size, cached, public, auto_update, architecture, creation_date, expiry_date, last_use_date, upload_date FROM images WHERE fingerprint = ?` } else { inargs = []interface{}{fingerprint + "%"} query = ` SELECT id, fingerprint, filename, size, cached, public, auto_update, architecture, creation_date, expiry_date, last_use_date, upload_date FROM images WHERE fingerprint LIKE ?` } if public { query = query + " AND public=1" } err = dbQueryRowScan(db, query, inargs, outfmt) if err != nil { return -1, nil, err // Likely: there are no rows for this fingerprint } // Some of the dates can be nil in the DB, let's process them.
if create != nil { image.CreationDate = *create } else { image.CreationDate = time.Time{} } if expire != nil { image.ExpiryDate = *expire } else { image.ExpiryDate = time.Time{} } if used != nil { image.LastUsedDate = *used } else { image.LastUsedDate = time.Time{} } image.Architecture, _ = shared.ArchitectureName(arch) // The upload date is enforced by NOT NULL in the schema, so it can never be nil. image.UploadDate = *upload // Get the properties q := "SELECT key, value FROM images_properties WHERE image_id=?" var key, value, name, desc string inargs = []interface{}{id} outfmt = []interface{}{key, value} results, err := dbQueryScan(db, q, inargs, outfmt) if err != nil { return -1, nil, err } properties := map[string]string{} for _, r := range results { key = r[0].(string) value = r[1].(string) properties[key] = value } image.Properties = properties // Get the aliases q = "SELECT name, description FROM images_aliases WHERE image_id=?" inargs = []interface{}{id} outfmt = []interface{}{name, desc} results, err = dbQueryScan(db, q, inargs, outfmt) if err != nil { return -1, nil, err } aliases := []shared.ImageAlias{} for _, r := range results { name = r[0].(string) desc = r[1].(string) a := shared.ImageAlias{Name: name, Description: desc} aliases = append(aliases, a) } image.Aliases = aliases _, source, err := dbImageSourceGet(db, id) if err == nil { image.Source = &source } return id, &image, nil } func dbImageDelete(db *sql.DB, id int) error { tx, err := dbBegin(db) if err != nil { return err } _, _ = tx.Exec("DELETE FROM images_aliases WHERE image_id=?", id) _, _ = tx.Exec("DELETE FROM images_properties WHERE image_id=?", id) _, _ = tx.Exec("DELETE FROM images_source WHERE image_id=?", id) _, _ = tx.Exec("DELETE FROM images WHERE id=?", id) if err := txCommit(tx); err != nil { return err } return nil } func dbImageAliasGet(db *sql.DB, name string, isTrustedClient bool) (int, shared.ImageAliasesEntry, error) { q := `SELECT images_aliases.id, images.fingerprint,
images_aliases.description FROM images_aliases INNER JOIN images ON images_aliases.image_id=images.id WHERE images_aliases.name=?` if !isTrustedClient { q = q + ` AND images.public=1` } var fingerprint, description string id := -1 arg1 := []interface{}{name} arg2 := []interface{}{&id, &fingerprint, &description} err := dbQueryRowScan(db, q, arg1, arg2) if err != nil { if err == sql.ErrNoRows { return -1, shared.ImageAliasesEntry{}, NoSuchObjectError } return -1, shared.ImageAliasesEntry{}, err } return id, shared.ImageAliasesEntry{Name: name, Target: fingerprint, Description: description}, nil } func dbImageAliasRename(db *sql.DB, id int, name string) error { _, err := dbExec(db, "UPDATE images_aliases SET name=? WHERE id=?", name, id) return err } func dbImageAliasDelete(db *sql.DB, name string) error { _, err := dbExec(db, "DELETE FROM images_aliases WHERE name=?", name) return err } func dbImageAliasesMove(db *sql.DB, source int, destination int) error { _, err := dbExec(db, "UPDATE images_aliases SET image_id=? WHERE image_id=?", destination, source) return err } // Insert an alias into the database. func dbImageAliasAdd(db *sql.DB, name string, imageID int, desc string) error { stmt := `INSERT INTO images_aliases (name, image_id, description) values (?, ?, ?)` _, err := dbExec(db, stmt, name, imageID, desc) return err } func dbImageAliasUpdate(db *sql.DB, id int, imageID int, desc string) error { stmt := `UPDATE images_aliases SET image_id=?, description=? WHERE id=?` _, err := dbExec(db, stmt, imageID, desc, id) return err } func dbImageLastAccessUpdate(db *sql.DB, fingerprint string, date time.Time) error { stmt := `UPDATE images SET last_use_date=?
WHERE fingerprint=?` _, err := dbExec(db, stmt, date, fingerprint) return err } func dbImageLastAccessInit(db *sql.DB, fingerprint string) error { stmt := `UPDATE images SET cached=1, last_use_date=strftime("%s") WHERE fingerprint=?` _, err := dbExec(db, stmt, fingerprint) return err } func dbImageUpdate(db *sql.DB, id int, fname string, sz int64, public bool, autoUpdate bool, architecture string, creationDate time.Time, expiryDate time.Time, properties map[string]string) error { arch, err := shared.ArchitectureId(architecture) if err != nil { arch = 0 } tx, err := dbBegin(db) if err != nil { return err } publicInt := 0 if public { publicInt = 1 } autoUpdateInt := 0 if autoUpdate { autoUpdateInt = 1 } stmt, err := tx.Prepare(`UPDATE images SET filename=?, size=?, public=?, auto_update=?, architecture=?, creation_date=?, expiry_date=? WHERE id=?`) if err != nil { tx.Rollback() return err } defer stmt.Close() _, err = stmt.Exec(fname, sz, publicInt, autoUpdateInt, arch, creationDate, expiryDate, id) if err != nil { tx.Rollback() return err } _, err = tx.Exec(`DELETE FROM images_properties WHERE image_id=?`, id) if err != nil { tx.Rollback() return err } stmt, err = tx.Prepare(`INSERT INTO images_properties (image_id, type, key, value) VALUES (?, ?, ?, ?)`) if err != nil { tx.Rollback() return err } for key, value := range properties { _, err = stmt.Exec(id, 0, key, value) if err != nil { tx.Rollback() return err } } if err := txCommit(tx); err != nil { return err } return nil } func dbImageInsert(db *sql.DB, fp string, fname string, sz int64, public bool, autoUpdate bool, architecture string, creationDate time.Time, expiryDate time.Time, properties map[string]string) error { arch, err := shared.ArchitectureId(architecture) if err != nil { arch = 0 } tx, err := dbBegin(db) if err != nil { return err } publicInt := 0 if public { publicInt = 1 } autoUpdateInt := 0 if autoUpdate { autoUpdateInt = 1 } stmt, err := tx.Prepare(`INSERT INTO images (fingerprint, filename, size, public, auto_update, architecture,
creation_date, expiry_date, upload_date) VALUES (?, ?, ?, ?, ?, ?, ?, ?, strftime("%s"))`) if err != nil { tx.Rollback() return err } defer stmt.Close() result, err := stmt.Exec(fp, fname, sz, publicInt, autoUpdateInt, arch, creationDate, expiryDate) if err != nil { tx.Rollback() return err } if len(properties) > 0 { id64, err := result.LastInsertId() if err != nil { tx.Rollback() return err } id := int(id64) pstmt, err := tx.Prepare(`INSERT INTO images_properties (image_id, type, key, value) VALUES (?, 0, ?, ?)`) if err != nil { tx.Rollback() return err } defer pstmt.Close() for k, v := range properties { // we can assume that there is just one // value per key _, err = pstmt.Exec(id, k, v) if err != nil { tx.Rollback() return err } } } if err := txCommit(tx); err != nil { return err } return nil }

lxd-2.0.2/lxd/db_profiles.go

package main import ( "database/sql" "fmt" _ "github.com/mattn/go-sqlite3" "github.com/lxc/lxd/shared" ) // dbProfiles returns a string list of profiles. func dbProfiles(db *sql.DB) ([]string, error) { q := fmt.Sprintf("SELECT name FROM profiles") inargs := []interface{}{} var name string outfmt := []interface{}{name} result, err := dbQueryScan(db, q, inargs, outfmt) if err != nil { return []string{}, err } response := []string{} for _, r := range result { response = append(response, r[0].(string)) } return response, nil } func dbProfileGet(db *sql.DB, profile string) (int64, *shared.ProfileConfig, error) { id := int64(-1) description := sql.NullString{} q := "SELECT id, description FROM profiles WHERE name=?"
	arg1 := []interface{}{profile}
	arg2 := []interface{}{&id, &description}
	err := dbQueryRowScan(db, q, arg1, arg2)
	if err != nil {
		return -1, nil, err
	}

	config, err := dbProfileConfig(db, profile)
	if err != nil {
		return -1, nil, err
	}

	devices, err := dbDevices(db, profile, true)
	if err != nil {
		return -1, nil, err
	}

	return id, &shared.ProfileConfig{
		Name:        profile,
		Config:      config,
		Description: description.String,
		Devices:     devices,
	}, nil
}

func dbProfileCreate(db *sql.DB, profile string, description string, config map[string]string, devices shared.Devices) (int64, error) {
	tx, err := dbBegin(db)
	if err != nil {
		return -1, err
	}

	result, err := tx.Exec("INSERT INTO profiles (name, description) VALUES (?, ?)", profile, description)
	if err != nil {
		tx.Rollback()
		return -1, err
	}

	id, err := result.LastInsertId()
	if err != nil {
		tx.Rollback()
		return -1, err
	}

	err = dbProfileConfigAdd(tx, id, config)
	if err != nil {
		tx.Rollback()
		return -1, err
	}

	err = dbDevicesAdd(tx, "profile", id, devices)
	if err != nil {
		tx.Rollback()
		return -1, err
	}

	err = txCommit(tx)
	if err != nil {
		return -1, err
	}

	return id, nil
}

func dbProfileCreateDefault(db *sql.DB) error {
	id, _, _ := dbProfileGet(db, "default")
	if id != -1 {
		// The default profile already exists.
		return nil
	}

	// TODO: We should scan for bridges and use the best available as default.
	devices := shared.Devices{
		"eth0": shared.Device{
			"name":    "eth0",
			"type":    "nic",
			"nictype": "bridged",
			"parent":  "lxdbr0"}}

	id, err := dbProfileCreate(db, "default", "Default LXD profile", map[string]string{}, devices)
	if err != nil {
		return err
	}

	return nil
}

func dbProfileCreateDocker(db *sql.DB) error {
	id, _, err := dbProfileGet(db, "docker")
	if id != -1 {
		// The docker profile already exists.
		return nil
	}

	config := map[string]string{
		"security.nesting":     "true",
		"linux.kernel_modules": "overlay, nf_nat"}
	fusedev := map[string]string{
		"path": "/dev/fuse",
		"type": "unix-char",
	}
	aadisable := map[string]string{
		"path":   "/sys/module/apparmor/parameters/enabled",
		"type":   "disk",
		"source": "/dev/null",
	}
	devices := map[string]shared.Device{"fuse": fusedev, "aadisable": aadisable}

	_, err = dbProfileCreate(db, "docker", "Profile supporting docker in containers", config, devices)
	return err
}

// Get the profile configuration map from the DB.
func dbProfileConfig(db *sql.DB, name string) (map[string]string, error) {
	var key, value string
	query := `
		SELECT key, value
		FROM profiles_config
		JOIN profiles ON profiles_config.profile_id=profiles.id
		WHERE name=?`
	inargs := []interface{}{name}
	outfmt := []interface{}{key, value}
	results, err := dbQueryScan(db, query, inargs, outfmt)
	if err != nil {
		return nil, fmt.Errorf("Failed to get profile '%s'", name)
	}

	if len(results) == 0 {
		/*
		 * If we didn't get any rows here, let's check to make sure the
		 * profile really exists; if it doesn't, let's send back a 404.
		 */
		query := "SELECT id FROM profiles WHERE name=?"
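Helpers throughout this file and `db_images.go` (`dbProfileCreate`, `dbImageInsert`, `dbImageUpdate`) store Go booleans such as `public` and `auto_update` as 0/1 integers before binding them to `?` placeholders, because SQLite has no native boolean column type. A minimal standalone sketch of that convention (the `boolToInt` helper name is hypothetical, not part of the LXD tree):

```go
package main

import "fmt"

// boolToInt mirrors the publicInt/autoUpdateInt pattern used by the
// profile and image helpers: flags become 0/1 integers, which is what
// SQLite actually stores for "boolean" columns.
func boolToInt(b bool) int {
	if b {
		return 1
	}
	return 0
}

func main() {
	// e.g. publicInt := boolToInt(public) before stmt.Exec(..., publicInt, ...)
	fmt.Println(boolToInt(true), boolToInt(false))
}
```

The inverse direction is visible in queries like `dbImageGet`, which scan the integer back and compare against 0.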
		var id int
		results, err := dbQueryScan(db, query, []interface{}{name}, []interface{}{id})
		if err != nil {
			return nil, err
		}

		if len(results) == 0 {
			return nil, NoSuchObjectError
		}
	}

	config := map[string]string{}

	for _, r := range results {
		key = r[0].(string)
		value = r[1].(string)
		config[key] = value
	}

	return config, nil
}

func dbProfileDelete(db *sql.DB, name string) error {
	tx, err := dbBegin(db)
	if err != nil {
		return err
	}

	_, err = tx.Exec("DELETE FROM profiles WHERE name=?", name)
	if err != nil {
		tx.Rollback()
		return err
	}

	err = txCommit(tx)
	return err
}

func dbProfileUpdate(db *sql.DB, name string, newName string) error {
	tx, err := dbBegin(db)
	if err != nil {
		return err
	}

	_, err = tx.Exec("UPDATE profiles SET name=? WHERE name=?", newName, name)
	if err != nil {
		tx.Rollback()
		return err
	}

	err = txCommit(tx)
	return err
}

func dbProfileDescriptionUpdate(tx *sql.Tx, id int64, description string) error {
	_, err := tx.Exec("UPDATE profiles SET description=? WHERE id=?", description, id)
	return err
}

func dbProfileConfigClear(tx *sql.Tx, id int64) error {
	_, err := tx.Exec("DELETE FROM profiles_config WHERE profile_id=?", id)
	if err != nil {
		return err
	}

	_, err = tx.Exec(`DELETE FROM profiles_devices_config WHERE id IN
		(SELECT profiles_devices_config.id
		 FROM profiles_devices_config
		 JOIN profiles_devices ON profiles_devices_config.profile_device_id=profiles_devices.id
		 WHERE profiles_devices.profile_id=?)`, id)
	if err != nil {
		return err
	}

	_, err = tx.Exec("DELETE FROM profiles_devices WHERE profile_id=?", id)
	if err != nil {
		return err
	}

	return nil
}

func dbProfileConfigAdd(tx *sql.Tx, id int64, config map[string]string) error {
	str := "INSERT INTO profiles_config (profile_id, key, value) VALUES(?, ?, ?)"
	stmt, err := tx.Prepare(str)
	if err != nil {
		return err
	}
	defer stmt.Close()

	for k, v := range config {
		_, err = stmt.Exec(id, k, v)
		if err != nil {
			return err
		}
	}

	return nil
}

func dbProfileContainersGet(db *sql.DB, profile string) ([]string, error) {
	q := `SELECT containers.name FROM containers
		JOIN containers_profiles ON containers.id == containers_profiles.container_id
		JOIN profiles ON containers_profiles.profile_id == profiles.id
		WHERE profiles.name == ?`

	results := []string{}
	inargs := []interface{}{profile}
	var name string
	outfmt := []interface{}{name}

	output, err := dbQueryScan(db, q, inargs, outfmt)
	if err != nil {
		return results, err
	}

	for _, r := range output {
		results = append(results, r[0].(string))
	}

	return results, nil
}
lxd-2.0.2/lxd/db_test.go
package main

import (
	"database/sql"
	"fmt"
	"testing"
	"time"

	"github.com/lxc/lxd/shared"
	"github.com/lxc/lxd/shared/logging"
)

const DB_FIXTURES string = `
    INSERT INTO containers (name, architecture, type) VALUES ('thename', 1, 1);
    INSERT INTO profiles (name) VALUES ('theprofile');
    INSERT INTO containers_profiles (container_id, profile_id) VALUES (1, 3);
    INSERT INTO containers_config (container_id, key, value) VALUES (1, 'thekey', 'thevalue');
    INSERT INTO containers_devices (container_id, name, type) VALUES (1, 'somename', 1);
    INSERT INTO containers_devices_config (key, value, container_device_id) VALUES ('configkey', 'configvalue', 1);
    INSERT INTO images (fingerprint, filename, size, architecture, creation_date, expiry_date, upload_date) VALUES ('fingerprint', 'filename', 1024, 0, 1431547174, 1431547175, 1431547176);
    INSERT INTO images_aliases (name, image_id, description) VALUES ('somealias', 1, 'some description');
    INSERT INTO images_properties (image_id, type, key, value) VALUES (1, 0, 'thekey', 'some value');
    INSERT INTO profiles_config (profile_id, key, value) VALUES (3, 'thekey', 'thevalue');
    INSERT INTO profiles_devices (profile_id, name, type) VALUES (3, 'devicename', 1);
    INSERT INTO profiles_devices_config (profile_device_id, key, value) VALUES (4, 'devicekey', 'devicevalue');
    `

// This helper initializes a test in-memory DB.
func createTestDb(t *testing.T) (db *sql.DB) {
	// Setup logging if main() hasn't been called/when testing.
	if shared.Log == nil {
		var err error
		shared.Log, err = logging.GetLogger("", "", true, true, nil)
		if err != nil {
			t.Fatal(err)
		}
	}

	var err error
	d := &Daemon{MockMode: true}
	err = initializeDbObject(d, ":memory:")
	db = d.db
	if err != nil {
		t.Fatal(err)
	}

	_, err = db.Exec(DB_FIXTURES)
	if err != nil {
		t.Fatal(err)
	}

	return // db is a named output param
}

func Test_deleting_a_container_cascades_on_related_tables(t *testing.T) {
	var db *sql.DB
	var err error
	var count int
	var statements string

	// Insert a container and a related profile.
	db = createTestDb(t)
	defer db.Close()

	// Drop the container we just created.
	statements = `DELETE FROM containers WHERE name = 'thename';`

	_, err = db.Exec(statements)
	if err != nil {
		t.Errorf("Error deleting container! %s", err)
	}

	// Make sure there are 0 containers_profiles entries left.
	statements = `SELECT count(*) FROM containers_profiles;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a container didn't delete the profile association! There are %d left", count)
	}

	// Make sure there are 0 containers_config entries left.
	statements = `SELECT count(*) FROM containers_config;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a container didn't delete the associated containers_config! There are %d left", count)
	}

	// Make sure there are 0 containers_devices entries left.
	statements = `SELECT count(*) FROM containers_devices;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a container didn't delete the associated containers_devices! There are %d left", count)
	}

	// Make sure there are 0 containers_devices_config entries left.
	statements = `SELECT count(*) FROM containers_devices_config;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a container didn't delete the associated containers_devices_config! There are %d left", count)
	}
}

func Test_deleting_a_profile_cascades_on_related_tables(t *testing.T) {
	var db *sql.DB
	var err error
	var count int
	var statements string

	// Insert a container and a related profile.
	db = createTestDb(t)
	defer db.Close()

	// Drop the profile we just created.
	statements = `DELETE FROM profiles WHERE name = 'theprofile';`

	_, err = db.Exec(statements)
	if err != nil {
		t.Errorf("Error deleting profile! %s", err)
	}

	// Make sure there are 0 containers_profiles entries left.
	statements = `SELECT count(*) FROM containers_profiles WHERE profile_id = 3;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a profile didn't delete the container association! There are %d left", count)
	}

	// Make sure there are 0 profiles_devices entries left.
	statements = `SELECT count(*) FROM profiles_devices WHERE profile_id == 3;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a profile didn't delete the related profiles_devices! There are %d left", count)
	}

	// Make sure there are 0 profiles_config entries left.
	statements = `SELECT count(*) FROM profiles_config WHERE profile_id == 3;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a profile didn't delete the related profiles_config! There are %d left", count)
	}

	// Make sure there are 0 profiles_devices_config entries left.
	statements = `SELECT count(*) FROM profiles_devices_config WHERE profile_device_id == 4;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a profile didn't delete the related profiles_devices_config! There are %d left", count)
	}
}

func Test_deleting_an_image_cascades_on_related_tables(t *testing.T) {
	var db *sql.DB
	var err error
	var count int
	var statements string

	db = createTestDb(t)
	defer db.Close()

	// Drop the image we just created.
	statements = `DELETE FROM images;`

	_, err = db.Exec(statements)
	if err != nil {
		t.Errorf("Error deleting image! %s", err)
	}

	// Make sure there are 0 images_aliases entries left.
	statements = `SELECT count(*) FROM images_aliases;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting an image didn't delete the image alias association! There are %d left", count)
	}

	// Make sure there are 0 images_properties entries left.
	statements = `SELECT count(*) FROM images_properties;`
	err = db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting an image didn't delete the related images_properties! There are %d left", count)
	}
}

func Test_initializing_db_is_idempotent(t *testing.T) {
	var db *sql.DB
	var err error

	// This calls "createDb" once already.
	d := &Daemon{MockMode: true}
	err = initializeDbObject(d, ":memory:")
	db = d.db
	defer db.Close()

	// Let's call it a second time.
	err = createDb(db)
	if err != nil {
		t.Errorf("The database schema is not idempotent, err='%s'", err)
	}
}

func Test_get_schema_returns_0_on_uninitialized_db(t *testing.T) {
	var db *sql.DB
	var err error

	db, err = sql.Open("sqlite3", ":memory:")
	if err != nil {
		t.Error(err)
	}
	result := dbGetSchema(db)

	if result != 0 {
		t.Error("getSchema should return 0 on uninitialized db!")
	}
}

func Test_running_dbUpdateFromV6_adds_on_delete_cascade(t *testing.T) {
	// Upgrading the database schema with updateFromV6 adds ON DELETE CASCADE
	// to the sqlite tables that require it, and preserves the data.
	var err error
	var count int

	d := &Daemon{MockMode: true}
	err = initializeDbObject(d, ":memory:")
	defer d.db.Close()

	statements := `
CREATE TABLE IF NOT EXISTS containers (
    id INTEGER primary key AUTOINCREMENT NOT NULL,
    name VARCHAR(255) NOT NULL,
    architecture INTEGER NOT NULL,
    type INTEGER NOT NULL,
    power_state INTEGER NOT NULL DEFAULT 0,
    ephemeral INTEGER NOT NULL DEFAULT 0,
    UNIQUE (name)
);
CREATE TABLE IF NOT EXISTS containers_config (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    container_id INTEGER NOT NULL,
    key VARCHAR(255) NOT NULL,
    value TEXT,
    FOREIGN KEY (container_id) REFERENCES containers (id),
    UNIQUE (container_id, key)
);
INSERT INTO containers (name, architecture, type) VALUES ('thename', 1, 1);
INSERT INTO containers_config (container_id, key, value) VALUES (1, 'thekey', 'thevalue');`

	_, err = d.db.Exec(statements)
	if err != nil {
		t.Error(err)
	}

	// Run the upgrade from V6 code.
	err = dbUpdateFromV6(d.db)

	// Make sure the inserted data is still there.
	statements = `SELECT count(*) FROM containers_config;`
	err = d.db.QueryRow(statements).Scan(&count)
	if count != 1 {
		t.Fatalf("There should be exactly one entry in containers_config! There are %d.", count)
	}

	// Drop the container.
	statements = `DELETE FROM containers WHERE name = 'thename';`

	_, err = d.db.Exec(statements)
	if err != nil {
		t.Errorf("Error deleting container! %s", err)
	}

	// Make sure there are 0 containers_profiles entries left.
	statements = `SELECT count(*) FROM containers_profiles;`
	err = d.db.QueryRow(statements).Scan(&count)
	if count != 0 {
		t.Errorf("Deleting a container didn't delete the profile association! There are %d left", count)
	}
}

func Test_run_database_upgrades_with_some_foreign_keys_inconsistencies(t *testing.T) {
	var db *sql.DB
	var err error
	var count int
	var statements string

	db, err = sql.Open("sqlite3", ":memory:")
	defer db.Close()
	if err != nil {
		t.Fatal(err)
	}

	// This schema is a part of schema rev 1.
	statements = `
CREATE TABLE containers (
    id INTEGER primary key AUTOINCREMENT NOT NULL,
    name VARCHAR(255) NOT NULL,
    architecture INTEGER NOT NULL,
    type INTEGER NOT NULL,
    UNIQUE (name)
);
CREATE TABLE containers_config (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    container_id INTEGER NOT NULL,
    key VARCHAR(255) NOT NULL,
    value TEXT,
    FOREIGN KEY (container_id) REFERENCES containers (id),
    UNIQUE (container_id, key)
);
CREATE TABLE schema (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    version INTEGER NOT NULL,
    updated_at DATETIME NOT NULL,
    UNIQUE (version)
);
CREATE TABLE images (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    fingerprint VARCHAR(255) NOT NULL,
    filename VARCHAR(255) NOT NULL,
    size INTEGER NOT NULL,
    public INTEGER NOT NULL DEFAULT 0,
    architecture INTEGER NOT NULL,
    creation_date DATETIME,
    expiry_date DATETIME,
    upload_date DATETIME NOT NULL,
    UNIQUE (fingerprint)
);
CREATE TABLE images_properties (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    image_id INTEGER NOT NULL,
    type INTEGER NOT NULL,
    key VARCHAR(255) NOT NULL,
    value TEXT,
    FOREIGN KEY (image_id) REFERENCES images (id)
);
CREATE TABLE certificates (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    fingerprint VARCHAR(255) NOT NULL,
    type INTEGER NOT NULL,
    name VARCHAR(255) NOT NULL,
    certificate TEXT NOT NULL,
    UNIQUE (fingerprint)
);
INSERT INTO schema (version, updated_at) values (1, "now");
INSERT INTO containers (name, architecture, type) VALUES ('thename', 1, 1);
INSERT INTO containers_config (container_id, key, value) VALUES (1, 'thekey', 'thevalue');`

	_, err = db.Exec(statements)
	if err != nil {
		t.Fatal("Error creating schema!")
	}

	// Now that we have a consistent schema, let's remove the container entry
	// *without* the ON DELETE CASCADE in place.
	statements = `DELETE FROM containers;`
	_, err = db.Exec(statements)
	if err != nil {
		t.Fatal("Error truncating the container table!")
	}

	// The "foreign key" on containers_config now points to nothing.
	// Let's run the schema upgrades.
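The upgrade exercised here follows LXD's general migration scheme: `dbGetSchema` reads the stored version and `dbUpdate` replays each `dbUpdateFromVn` step in order, recording the new version after each one, until `DB_CURRENT_VERSION` is reached. A driver-free sketch of that sequential pattern, using a hypothetical in-memory `fakeDB` instead of SQLite:

```go
package main

import "fmt"

// fakeDB stands in for the real *sql.DB: it tracks a schema version and
// the set of "tables" created so far.
type fakeDB struct {
	version int
	tables  map[string]bool
}

// Each entry brings the schema from version n to n+1, like the
// dbUpdateFromVn functions. These two steps are hypothetical stand-ins.
var migrations = []func(*fakeDB){
	func(db *fakeDB) { db.tables["containers"] = true },        // v0 -> v1
	func(db *fakeDB) { db.tables["containers_config"] = true }, // v1 -> v2
}

// runMigrations replays every step between the stored version and the
// latest one, mirroring the dbUpdate loop.
func runMigrations(db *fakeDB) {
	for v := db.version; v < len(migrations); v++ {
		migrations[v](db)
		db.version = v + 1 // like INSERT INTO schema (version, updated_at)
	}
}

func main() {
	db := &fakeDB{tables: map[string]bool{}}
	runMigrations(db)
	fmt.Println(db.version, len(db.tables))
}
```

Because each step records its own version, a daemon interrupted mid-upgrade resumes from the last completed step rather than from scratch.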
d := &Daemon{MockMode: true} d.db = db daemonConfigInit(db) err = dbUpdate(d, 1) if err != nil { t.Error("Error upgrading database schema!") t.Fatal(err) } result := dbGetSchema(db) if result != DB_CURRENT_VERSION { t.Fatal(fmt.Sprintf("The schema is not at the latest version after update! Found: %d, should be: %d", result, DB_CURRENT_VERSION)) } // Make sure there are 0 containers_config entries left. statements = `SELECT count(*) FROM containers_config;` err = db.QueryRow(statements).Scan(&count) if count != 0 { t.Fatal("updateDb did not delete orphaned child entries after adding ON DELETE CASCADE!") } } func Test_dbImageGet_finds_image_for_fingerprint(t *testing.T) { var db *sql.DB var err error var result *shared.ImageInfo db = createTestDb(t) defer db.Close() _, result, err = dbImageGet(db, "fingerprint", false, false) if err != nil { t.Fatal(err) } if result == nil { t.Fatal("No image returned!") } if result.Filename != "filename" { t.Fatal("Filename should be set.") } if result.CreationDate.UTC() != time.Unix(1431547174, 0).UTC() { t.Fatal(fmt.Sprintf("%s != %s", result.CreationDate, time.Unix(1431547174, 0))) } if result.ExpiryDate.UTC() != time.Unix(1431547175, 0).UTC() { // It was short lived t.Fatal(fmt.Sprintf("%s != %s", result.ExpiryDate, time.Unix(1431547175, 0))) } if result.UploadDate.UTC() != time.Unix(1431547176, 0).UTC() { t.Fatal(fmt.Sprintf("%s != %s", result.UploadDate, time.Unix(1431547176, 0))) } } func Test_dbImageGet_for_missing_fingerprint(t *testing.T) { var db *sql.DB var err error db = createTestDb(t) defer db.Close() _, _, err = dbImageGet(db, "unknown", false, false) if err != sql.ErrNoRows { t.Fatal("Wrong err type returned") } } func Test_dbImageAliasGet_alias_exists(t *testing.T) { var db *sql.DB var err error var result string db = createTestDb(t) defer db.Close() _, alias, err := dbImageAliasGet(db, "somealias", true) result = alias.Target if err != nil { t.Fatal(err) } if result != "fingerprint" { t.Fatal("Fingerprint is not 
the expected fingerprint!") } } func Test_dbImageAliasGet_alias_does_not_exists(t *testing.T) { var db *sql.DB var err error db = createTestDb(t) defer db.Close() _, _, err = dbImageAliasGet(db, "whatever", true) if err != NoSuchObjectError { t.Fatal("Error should be NoSuchObjectError") } } func Test_dbImageAliasAdd(t *testing.T) { var db *sql.DB var err error var result string db = createTestDb(t) defer db.Close() err = dbImageAliasAdd(db, "Chaosphere", 1, "Someone will like the name") if err != nil { t.Fatal("Error inserting Image alias.") } _, alias, err := dbImageAliasGet(db, "Chaosphere", true) if err != nil { t.Fatal(err) } result = alias.Target if result != "fingerprint" { t.Fatal("Couldn't retrieve newly created alias.") } } func Test_dbContainerConfig(t *testing.T) { var db *sql.DB var err error var result map[string]string var expected map[string]string db = createTestDb(t) defer db.Close() _, err = db.Exec("INSERT INTO containers_config (container_id, key, value) VALUES (1, 'something', 'something else');") result, err = dbContainerConfig(db, 1) if err != nil { t.Fatal(err) } expected = map[string]string{"thekey": "thevalue", "something": "something else"} for key, value := range expected { if result[key] != value { t.Errorf("Mismatching value for key %s: %s != %s", key, result[key], value) } } } func Test_dbProfileConfig(t *testing.T) { var db *sql.DB var err error var result map[string]string var expected map[string]string db = createTestDb(t) defer db.Close() _, err = db.Exec("INSERT INTO profiles_config (profile_id, key, value) VALUES (3, 'something', 'something else');") result, err = dbProfileConfig(db, "theprofile") if err != nil { t.Fatal(err) } expected = map[string]string{"thekey": "thevalue", "something": "something else"} for key, value := range expected { if result[key] != value { t.Errorf("Mismatching value for key %s: %s != %s", key, result[key], value) } } } func Test_dbContainerProfiles(t *testing.T) { var db *sql.DB var err error var 
result []string var expected []string db = createTestDb(t) defer db.Close() expected = []string{"theprofile"} result, err = dbContainerProfiles(db, 1) if err != nil { t.Fatal(err) } for i := range expected { if expected[i] != result[i] { t.Fatal(fmt.Sprintf("Mismatching contents for profile list: %s != %s", result[i], expected[i])) } } } func Test_dbDevices_profiles(t *testing.T) { var db *sql.DB var err error var result shared.Devices var subresult shared.Device var expected shared.Device db = createTestDb(t) defer db.Close() result, err = dbDevices(db, "theprofile", true) if err != nil { t.Fatal(err) } expected = shared.Device{"type": "nic", "devicekey": "devicevalue"} subresult = result["devicename"] for key, value := range expected { if subresult[key] != value { t.Errorf("Mismatching value for key %s: %v != %v", key, subresult[key], value) } } } func Test_dbDevices_containers(t *testing.T) { var db *sql.DB var err error var result shared.Devices var subresult shared.Device var expected shared.Device db = createTestDb(t) defer db.Close() result, err = dbDevices(db, "thename", false) if err != nil { t.Fatal(err) } expected = shared.Device{"type": "nic", "configkey": "configvalue"} subresult = result["somename"] for key, value := range expected { if subresult[key] != value { t.Errorf("Mismatching value for key %s: %s != %s", key, subresult[key], value) } } } lxd-2.0.2/lxd/db_update.go000066400000000000000000000742151272140510300153530ustar00rootroot00000000000000package main import ( "database/sql" "encoding/hex" "fmt" "io/ioutil" "os" "os/exec" "path/filepath" "strconv" "strings" "syscall" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) func dbUpdateFromV30(db *sql.DB) error { entries, err := ioutil.ReadDir(shared.VarPath("containers")) if err != nil { return err } for _, entry := range entries { if !shared.IsDir(shared.VarPath("containers", entry.Name(), "rootfs")) { continue } info, err := os.Stat(shared.VarPath("containers", entry.Name(), 
"rootfs")) if err != nil { return err } if int(info.Sys().(*syscall.Stat_t).Uid) == 0 { err := os.Chmod(shared.VarPath("containers", entry.Name()), 0700) if err != nil { return err } err = os.Chown(shared.VarPath("containers", entry.Name()), 0, 0) if err != nil { return err } } } stmt := `INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err = db.Exec(stmt, 31) return err } func dbUpdateFromV29(db *sql.DB) error { if shared.PathExists(shared.VarPath("zfs.img")) { err := os.Chmod(shared.VarPath("zfs.img"), 0600) if err != nil { return err } } stmt := `INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 30) return err } func dbUpdateFromV28(db *sql.DB) error { stmt := ` INSERT INTO profiles_devices (profile_id, name, type) SELECT id, "aadisable", 2 FROM profiles WHERE name="docker"; INSERT INTO profiles_devices_config (profile_device_id, key, value) SELECT profiles_devices.id, "source", "/dev/null" FROM profiles_devices LEFT JOIN profiles WHERE profiles_devices.profile_id = profiles.id AND profiles.name = "docker" AND profiles_devices.name = "aadisable"; INSERT INTO profiles_devices_config (profile_device_id, key, value) SELECT profiles_devices.id, "path", "/sys/module/apparmor/parameters/enabled" FROM profiles_devices LEFT JOIN profiles WHERE profiles_devices.profile_id = profiles.id AND profiles.name = "docker" AND profiles_devices.name = "aadisable";` db.Exec(stmt) stmt = `INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 29) return err } func dbUpdateFromV27(db *sql.DB) error { stmt := ` UPDATE profiles_devices SET type=3 WHERE type='unix-char'; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 28) return err } func dbUpdateFromV26(db *sql.DB) error { stmt := ` ALTER TABLE images ADD COLUMN auto_update INTEGER NOT NULL DEFAULT 0; CREATE TABLE IF NOT EXISTS images_source ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT 
NULL, image_id INTEGER NOT NULL, server TEXT NOT NULL, protocol INTEGER NOT NULL, certificate TEXT NOT NULL, alias VARCHAR(255) NOT NULL, FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE ); INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 27) return err } func dbUpdateFromV25(db *sql.DB) error { stmt := ` INSERT INTO profiles (name, description) VALUES ("docker", "Profile supporting docker in containers"); INSERT INTO profiles_config (profile_id, key, value) SELECT id, "security.nesting", "true" FROM profiles WHERE name="docker"; INSERT INTO profiles_config (profile_id, key, value) SELECT id, "linux.kernel_modules", "overlay, nf_nat" FROM profiles WHERE name="docker"; INSERT INTO profiles_devices (profile_id, name, type) SELECT id, "fuse", "unix-char" FROM profiles WHERE name="docker"; INSERT INTO profiles_devices_config (profile_device_id, key, value) SELECT profiles_devices.id, "path", "/dev/fuse" FROM profiles_devices LEFT JOIN profiles WHERE profiles_devices.profile_id = profiles.id AND profiles.name = "docker";` db.Exec(stmt) stmt = `INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 26) return err } func dbUpdateFromV24(db *sql.DB) error { stmt := ` ALTER TABLE containers ADD COLUMN stateful INTEGER NOT NULL DEFAULT 0; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 25) return err } func dbUpdateFromV23(db *sql.DB) error { stmt := ` ALTER TABLE profiles ADD COLUMN description TEXT; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 24) return err } func dbUpdateFromV22(db *sql.DB) error { stmt := ` DELETE FROM containers_devices_config WHERE key='type'; DELETE FROM profiles_devices_config WHERE key='type'; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 23) return err } func dbUpdateFromV21(db *sql.DB) error { stmt := ` ALTER 
TABLE containers ADD COLUMN creation_date DATETIME NOT NULL DEFAULT 0; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 22) return err } func dbUpdateFromV20(db *sql.DB) error { stmt := ` UPDATE containers_devices SET name='__lxd_upgrade_root' WHERE name='root'; UPDATE profiles_devices SET name='__lxd_upgrade_root' WHERE name='root'; INSERT INTO containers_devices (container_id, name, type) SELECT id, "root", 2 FROM containers; INSERT INTO containers_devices_config (container_device_id, key, value) SELECT id, "path", "/" FROM containers_devices WHERE name='root'; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 21) return err } func dbUpdateFromV19(db *sql.DB) error { stmt := ` DELETE FROM containers_config WHERE container_id NOT IN (SELECT id FROM containers); DELETE FROM containers_devices_config WHERE container_device_id NOT IN (SELECT id FROM containers_devices WHERE container_id IN (SELECT id FROM containers)); DELETE FROM containers_devices WHERE container_id NOT IN (SELECT id FROM containers); DELETE FROM containers_profiles WHERE container_id NOT IN (SELECT id FROM containers); DELETE FROM images_aliases WHERE image_id NOT IN (SELECT id FROM images); DELETE FROM images_properties WHERE image_id NOT IN (SELECT id FROM images); INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 20) return err } func dbUpdateFromV18(db *sql.DB) error { var id int var value string // Update container config rows, err := dbQueryScan(db, "SELECT id, value FROM containers_config WHERE key='limits.memory'", nil, []interface{}{id, value}) if err != nil { return err } for _, row := range rows { id = row[0].(int) value = row[1].(string) // If already an integer, don't touch _, err := strconv.Atoi(value) if err == nil { continue } // Generate the new value value = strings.ToUpper(value) value += "B" // Deal with completely broken values _, err = 
shared.ParseByteSizeString(value) if err != nil { shared.Debugf("Invalid container memory limit, id=%d value=%s, removing.", id, value) _, err = db.Exec("DELETE FROM containers_config WHERE id=?;", id) if err != nil { return err } } // Set the new value _, err = db.Exec("UPDATE containers_config SET value=? WHERE id=?", value, id) if err != nil { return err } } // Update profiles config rows, err = dbQueryScan(db, "SELECT id, value FROM profiles_config WHERE key='limits.memory'", nil, []interface{}{id, value}) if err != nil { return err } for _, row := range rows { id = row[0].(int) value = row[1].(string) // If already an integer, don't touch _, err := strconv.Atoi(value) if err == nil { continue } // Generate the new value value = strings.ToUpper(value) value += "B" // Deal with completely broken values _, err = shared.ParseByteSizeString(value) if err != nil { shared.Debugf("Invalid profile memory limit, id=%d value=%s, removing.", id, value) _, err = db.Exec("DELETE FROM profiles_config WHERE id=?;", id) if err != nil { return err } } // Set the new value _, err = db.Exec("UPDATE profiles_config SET value=? 
WHERE id=?", value, id) if err != nil { return err } } _, err = db.Exec("INSERT INTO schema (version, updated_at) VALUES (?, strftime(\"%s\"));", 19) return err } func dbUpdateFromV17(db *sql.DB) error { stmt := ` DELETE FROM profiles_config WHERE key LIKE 'volatile.%'; UPDATE containers_config SET key='limits.cpu' WHERE key='limits.cpus'; UPDATE profiles_config SET key='limits.cpu' WHERE key='limits.cpus'; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 18) return err } func dbUpdateFromV16(db *sql.DB) error { stmt := ` UPDATE config SET key='storage.lvm_vg_name' WHERE key = 'core.lvm_vg_name'; UPDATE config SET key='storage.lvm_thinpool_name' WHERE key = 'core.lvm_thinpool_name'; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 17) return err } func dbUpdateFromV15(d *Daemon) error { // munge all LVM-backed containers' LV names to match what is // required for snapshot support cNames, err := dbContainersList(d.db, cTypeRegular) if err != nil { return err } err = daemonConfigInit(d.db) if err != nil { return err } vgName := daemonConfig["storage.lvm_vg_name"].Get() for _, cName := range cNames { var lvLinkPath string if strings.Contains(cName, shared.SnapshotDelimiter) { lvLinkPath = shared.VarPath("snapshots", fmt.Sprintf("%s.lv", cName)) } else { lvLinkPath = shared.VarPath("containers", fmt.Sprintf("%s.lv", cName)) } if !shared.PathExists(lvLinkPath) { continue } newLVName := strings.Replace(cName, "-", "--", -1) newLVName = strings.Replace(newLVName, shared.SnapshotDelimiter, "-", -1) if cName == newLVName { shared.Log.Debug("No need to rename, skipping", log.Ctx{"cName": cName, "newLVName": newLVName}) continue } shared.Log.Debug("About to rename cName in lv upgrade", log.Ctx{"lvLinkPath": lvLinkPath, "cName": cName, "newLVName": newLVName}) output, err := exec.Command("lvrename", vgName, cName, newLVName).CombinedOutput() if err != nil { return fmt.Errorf("Could not 
rename LV '%s' to '%s': %v\noutput:%s", cName, newLVName, err, string(output)) } if err := os.Remove(lvLinkPath); err != nil { return fmt.Errorf("Couldn't remove lvLinkPath '%s'", lvLinkPath) } newLinkDest := fmt.Sprintf("/dev/%s/%s", vgName, newLVName) if err := os.Symlink(newLinkDest, lvLinkPath); err != nil { return fmt.Errorf("Couldn't recreate symlink '%s'->'%s'", lvLinkPath, newLinkDest) } } stmt := ` INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err = d.db.Exec(stmt, 16) return err } func dbUpdateFromV14(db *sql.DB) error { stmt := ` PRAGMA foreign_keys=OFF; -- So that integrity doesn't get in the way for now DELETE FROM containers_config WHERE key="volatile.last_state.power"; INSERT INTO containers_config (container_id, key, value) SELECT id, "volatile.last_state.power", "RUNNING" FROM containers WHERE power_state=1; INSERT INTO containers_config (container_id, key, value) SELECT id, "volatile.last_state.power", "STOPPED" FROM containers WHERE power_state != 1; CREATE TABLE tmp ( id INTEGER primary key AUTOINCREMENT NOT NULL, name VARCHAR(255) NOT NULL, architecture INTEGER NOT NULL, type INTEGER NOT NULL, ephemeral INTEGER NOT NULL DEFAULT 0, UNIQUE (name) ); INSERT INTO tmp SELECT id, name, architecture, type, ephemeral FROM containers; DROP TABLE containers; ALTER TABLE tmp RENAME TO containers; PRAGMA foreign_keys=ON; -- Make sure we turn integrity checks back on. 
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err := db.Exec(stmt, 15)
	return err
}

func dbUpdateFromV13(db *sql.DB) error {
	stmt := `
UPDATE containers_config SET key='volatile.base_image' WHERE key = 'volatile.baseImage';
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err := db.Exec(stmt, 14)
	return err
}

func dbUpdateFromV12(db *sql.DB) error {
	stmt := `
ALTER TABLE images ADD COLUMN cached INTEGER NOT NULL DEFAULT 0;
ALTER TABLE images ADD COLUMN last_use_date DATETIME;
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err := db.Exec(stmt, 13)
	return err
}

func dbUpdateFromV11(d *Daemon) error {
	if d.MockMode {
		// No need to move snapshots in mock runs,
		// dbUpdateFromV12 will then set the db version to 13
		return nil
	}

	cNames, err := dbContainersList(d.db, cTypeSnapshot)
	if err != nil {
		return err
	}

	errors := 0
	for _, cName := range cNames {
		snappieces := strings.SplitN(cName, shared.SnapshotDelimiter, 2)
		oldPath := shared.VarPath("containers", snappieces[0], "snapshots", snappieces[1])
		newPath := shared.VarPath("snapshots", snappieces[0], snappieces[1])
		if shared.PathExists(oldPath) && !shared.PathExists(newPath) {
			shared.Log.Info(
				"Moving snapshot",
				log.Ctx{
					"snapshot": cName,
					"oldPath":  oldPath,
					"newPath":  newPath})

			// Rsync
			// containers/<container>/snapshots/<snapshot>
			// to
			// snapshots/<container>/<snapshot>
			output, err := storageRsyncCopy(oldPath, newPath)
			if err != nil {
				shared.Log.Error(
					"Failed rsync snapshot",
					log.Ctx{
						"snapshot": cName,
						"output":   string(output),
						"err":      err})
				errors++
				continue
			}

			// Remove containers/<container>/snapshots/<snapshot>
			if err := os.RemoveAll(oldPath); err != nil {
				shared.Log.Error(
					"Failed to remove the old snapshot path",
					log.Ctx{
						"snapshot": cName,
						"oldPath":  oldPath,
						"err":      err})

				// Ignore this error.
				// errors++
				// continue
			}

			// Remove /var/lib/lxd/containers/<container>/snapshots
			// if it's empty.
cPathParent := filepath.Dir(oldPath) if ok, _ := shared.PathIsEmpty(cPathParent); ok { os.Remove(cPathParent) } } // if shared.PathExists(oldPath) && !shared.PathExists(newPath) { } // for _, cName := range cNames { // Refuse to start lxd if a rsync failed. if errors > 0 { return fmt.Errorf("Got errors while moving snapshots, see the log output.") } stmt := ` INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err = d.db.Exec(stmt, 12) return err } func dbUpdateFromV10(d *Daemon) error { if d.MockMode { // No need to move lxc to containers in mock runs, // dbUpdateFromV12 will then set the db version to 13 return nil } if shared.PathExists(shared.VarPath("lxc")) { err := os.Rename(shared.VarPath("lxc"), shared.VarPath("containers")) if err != nil { return err } shared.Debugf("Restarting all the containers following directory rename") containersShutdown(d) containersRestart(d) } stmt := ` INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := d.db.Exec(stmt, 11) return err } func dbUpdateFromV9(db *sql.DB) error { stmt := ` CREATE TABLE tmp ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, name VARCHAR(255) NOT NULL, type VARCHAR(255) NOT NULL default "none", FOREIGN KEY (container_id) REFERENCES containers (id) ON DELETE CASCADE, UNIQUE (container_id, name) ); INSERT INTO tmp SELECT * FROM containers_devices; UPDATE containers_devices SET type=0 WHERE id IN (SELECT id FROM tmp WHERE type="none"); UPDATE containers_devices SET type=1 WHERE id IN (SELECT id FROM tmp WHERE type="nic"); UPDATE containers_devices SET type=2 WHERE id IN (SELECT id FROM tmp WHERE type="disk"); UPDATE containers_devices SET type=3 WHERE id IN (SELECT id FROM tmp WHERE type="unix-char"); UPDATE containers_devices SET type=4 WHERE id IN (SELECT id FROM tmp WHERE type="unix-block"); DROP TABLE tmp; CREATE TABLE tmp ( id INTEGER primary key AUTOINCREMENT NOT NULL, profile_id INTEGER NOT NULL, name VARCHAR(255) NOT 
NULL, type VARCHAR(255) NOT NULL default "none", FOREIGN KEY (profile_id) REFERENCES profiles (id) ON DELETE CASCADE, UNIQUE (profile_id, name) ); INSERT INTO tmp SELECT * FROM profiles_devices; UPDATE profiles_devices SET type=0 WHERE id IN (SELECT id FROM tmp WHERE type="none"); UPDATE profiles_devices SET type=1 WHERE id IN (SELECT id FROM tmp WHERE type="nic"); UPDATE profiles_devices SET type=2 WHERE id IN (SELECT id FROM tmp WHERE type="disk"); UPDATE profiles_devices SET type=3 WHERE id IN (SELECT id FROM tmp WHERE type="unix-char"); UPDATE profiles_devices SET type=4 WHERE id IN (SELECT id FROM tmp WHERE type="unix-block"); DROP TABLE tmp; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 10) return err } func dbUpdateFromV8(db *sql.DB) error { stmt := ` UPDATE certificates SET fingerprint = replace(fingerprint, " ", ""); INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 9) return err } func dbUpdateFromV7(db *sql.DB) error { stmt := ` UPDATE config SET key='core.trust_password' WHERE key IN ('password', 'trust_password', 'trust-password', 'core.trust-password'); DELETE FROM config WHERE key != 'core.trust_password'; INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 8) return err } func dbUpdateFromV6(db *sql.DB) error { // This update recreates the schemas that need an ON DELETE CASCADE foreign // key. 
stmt := ` PRAGMA foreign_keys=OFF; -- So that integrity doesn't get in the way for now CREATE TEMP TABLE tmp AS SELECT * FROM containers_config; DROP TABLE containers_config; CREATE TABLE IF NOT EXISTS containers_config ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (container_id) REFERENCES containers (id) ON DELETE CASCADE, UNIQUE (container_id, key) ); INSERT INTO containers_config SELECT * FROM tmp; DROP TABLE tmp; CREATE TEMP TABLE tmp AS SELECT * FROM containers_devices; DROP TABLE containers_devices; CREATE TABLE IF NOT EXISTS containers_devices ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, name VARCHAR(255) NOT NULL, type INTEGER NOT NULL default 0, FOREIGN KEY (container_id) REFERENCES containers (id) ON DELETE CASCADE, UNIQUE (container_id, name) ); INSERT INTO containers_devices SELECT * FROM tmp; DROP TABLE tmp; CREATE TEMP TABLE tmp AS SELECT * FROM containers_devices_config; DROP TABLE containers_devices_config; CREATE TABLE IF NOT EXISTS containers_devices_config ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_device_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (container_device_id) REFERENCES containers_devices (id) ON DELETE CASCADE, UNIQUE (container_device_id, key) ); INSERT INTO containers_devices_config SELECT * FROM tmp; DROP TABLE tmp; CREATE TEMP TABLE tmp AS SELECT * FROM containers_profiles; DROP TABLE containers_profiles; CREATE TABLE IF NOT EXISTS containers_profiles ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, profile_id INTEGER NOT NULL, apply_order INTEGER NOT NULL default 0, UNIQUE (container_id, profile_id), FOREIGN KEY (container_id) REFERENCES containers(id) ON DELETE CASCADE, FOREIGN KEY (profile_id) REFERENCES profiles(id) ON DELETE CASCADE ); INSERT INTO containers_profiles SELECT * FROM tmp; DROP TABLE tmp; CREATE TEMP TABLE tmp AS SELECT 
* FROM images_aliases; DROP TABLE images_aliases; CREATE TABLE IF NOT EXISTS images_aliases ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, name VARCHAR(255) NOT NULL, image_id INTEGER NOT NULL, description VARCHAR(255), FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE, UNIQUE (name) ); INSERT INTO images_aliases SELECT * FROM tmp; DROP TABLE tmp; CREATE TEMP TABLE tmp AS SELECT * FROM images_properties; DROP TABLE images_properties; CREATE TABLE IF NOT EXISTS images_properties ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, image_id INTEGER NOT NULL, type INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE ); INSERT INTO images_properties SELECT * FROM tmp; DROP TABLE tmp; CREATE TEMP TABLE tmp AS SELECT * FROM profiles_config; DROP TABLE profiles_config; CREATE TABLE IF NOT EXISTS profiles_config ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, profile_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value VARCHAR(255), UNIQUE (profile_id, key), FOREIGN KEY (profile_id) REFERENCES profiles(id) ON DELETE CASCADE ); INSERT INTO profiles_config SELECT * FROM tmp; DROP TABLE tmp; CREATE TEMP TABLE tmp AS SELECT * FROM profiles_devices; DROP TABLE profiles_devices; CREATE TABLE IF NOT EXISTS profiles_devices ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, profile_id INTEGER NOT NULL, name VARCHAR(255) NOT NULL, type INTEGER NOT NULL default 0, UNIQUE (profile_id, name), FOREIGN KEY (profile_id) REFERENCES profiles (id) ON DELETE CASCADE ); INSERT INTO profiles_devices SELECT * FROM tmp; DROP TABLE tmp; CREATE TEMP TABLE tmp AS SELECT * FROM profiles_devices_config; DROP TABLE profiles_devices_config; CREATE TABLE IF NOT EXISTS profiles_devices_config ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, profile_device_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, UNIQUE (profile_device_id, key), FOREIGN KEY (profile_device_id) REFERENCES profiles_devices (id) ON DELETE 
CASCADE
);
INSERT INTO profiles_devices_config SELECT * FROM tmp;
DROP TABLE tmp;

PRAGMA foreign_keys=ON; -- Make sure we turn integrity checks back on.
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err := db.Exec(stmt, 7)
	if err != nil {
		return err
	}

	// Get the rows with broken foreign keys and nuke them
	rows, err := db.Query("PRAGMA foreign_key_check;")
	if err != nil {
		return err
	}

	var tablestodelete []string
	var rowidtodelete []int

	defer rows.Close()
	for rows.Next() {
		var tablename string
		var rowid int
		var targetname string
		var keynumber int

		rows.Scan(&tablename, &rowid, &targetname, &keynumber)
		tablestodelete = append(tablestodelete, tablename)
		rowidtodelete = append(rowidtodelete, rowid)
	}
	rows.Close()

	for i := range tablestodelete {
		_, err = db.Exec(fmt.Sprintf("DELETE FROM %s WHERE rowid = %d;", tablestodelete[i], rowidtodelete[i]))
		if err != nil {
			return err
		}
	}

	return err
}

func dbUpdateFromV5(db *sql.DB) error {
	stmt := `
ALTER TABLE containers ADD COLUMN power_state INTEGER NOT NULL DEFAULT 0;
ALTER TABLE containers ADD COLUMN ephemeral INTEGER NOT NULL DEFAULT 0;
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err := db.Exec(stmt, 6)
	return err
}

func dbUpdateFromV4(db *sql.DB) error {
	stmt := `
CREATE TABLE IF NOT EXISTS config (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    key VARCHAR(255) NOT NULL,
    value TEXT,
    UNIQUE (key)
);
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err := db.Exec(stmt, 5)
	if err != nil {
		return err
	}

	passfname := shared.VarPath("adminpwd")
	passOut, err := os.Open(passfname)
	oldPassword := ""
	if err == nil {
		defer passOut.Close()
		buff := make([]byte, 96)
		_, err = passOut.Read(buff)
		if err != nil {
			return err
		}

		oldPassword = hex.EncodeToString(buff)
		stmt := `INSERT INTO config (key, value) VALUES ("core.trust_password", ?);`

		_, err := db.Exec(stmt, oldPassword)
		if err != nil {
			return err
		}

		return os.Remove(passfname)
	}

	return nil
}

func
dbUpdateFromV3(db *sql.DB) error { // Attempt to create a default profile (but don't fail if already there) stmt := `INSERT INTO profiles (name) VALUES ("default"); INSERT INTO profiles_devices (profile_id, name, type) SELECT id, "eth0", "nic" FROM profiles WHERE profiles.name="default"; INSERT INTO profiles_devices_config (profile_device_id, key, value) SELECT profiles_devices.id, "nictype", "bridged" FROM profiles_devices LEFT JOIN profiles ON profiles.id=profiles_devices.profile_id WHERE profiles.name == "default"; INSERT INTO profiles_devices_config (profile_device_id, key, value) SELECT profiles_devices.id, 'name', "eth0" FROM profiles_devices LEFT JOIN profiles ON profiles.id=profiles_devices.profile_id WHERE profiles.name == "default"; INSERT INTO profiles_devices_config (profile_device_id, key, value) SELECT profiles_devices.id, "parent", "lxdbr0" FROM profiles_devices LEFT JOIN profiles ON profiles.id=profiles_devices.profile_id WHERE profiles.name == "default";` db.Exec(stmt) stmt = `INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));` _, err := db.Exec(stmt, 4) return err } func dbUpdateFromV2(db *sql.DB) error { stmt := ` CREATE TABLE IF NOT EXISTS containers_devices ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, name VARCHAR(255) NOT NULL, type INTEGER NOT NULL default 0, FOREIGN KEY (container_id) REFERENCES containers (id) ON DELETE CASCADE, UNIQUE (container_id, name) ); CREATE TABLE IF NOT EXISTS containers_devices_config ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_device_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (container_device_id) REFERENCES containers_devices (id), UNIQUE (container_device_id, key) ); CREATE TABLE IF NOT EXISTS containers_profiles ( id INTEGER primary key AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, profile_id INTEGER NOT NULL, apply_order INTEGER NOT NULL default 0, UNIQUE (container_id, profile_id), FOREIGN KEY 
(container_id) REFERENCES containers(id) ON DELETE CASCADE,
    FOREIGN KEY (profile_id) REFERENCES profiles(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS profiles (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    name VARCHAR(255) NOT NULL,
    UNIQUE (name)
);
CREATE TABLE IF NOT EXISTS profiles_config (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    profile_id INTEGER NOT NULL,
    key VARCHAR(255) NOT NULL,
    value VARCHAR(255),
    UNIQUE (profile_id, key),
    FOREIGN KEY (profile_id) REFERENCES profiles(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS profiles_devices (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    profile_id INTEGER NOT NULL,
    name VARCHAR(255) NOT NULL,
    type INTEGER NOT NULL default 0,
    UNIQUE (profile_id, name),
    FOREIGN KEY (profile_id) REFERENCES profiles (id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS profiles_devices_config (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    profile_device_id INTEGER NOT NULL,
    key VARCHAR(255) NOT NULL,
    value TEXT,
    UNIQUE (profile_device_id, key),
    FOREIGN KEY (profile_device_id) REFERENCES profiles_devices (id)
);
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err := db.Exec(stmt, 3)
	return err
}

/* Yeah, we can do this in a more clever way */
func dbUpdateFromV1(db *sql.DB) error {
	// v1..v2 adds images aliases
	stmt := `
CREATE TABLE IF NOT EXISTS images_aliases (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    name VARCHAR(255) NOT NULL,
    image_id INTEGER NOT NULL,
    description VARCHAR(255),
    FOREIGN KEY (image_id) REFERENCES images (id) ON DELETE CASCADE,
    UNIQUE (name)
);
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err := db.Exec(stmt, 2)
	return err
}

func dbUpdateFromV0(db *sql.DB) error {
	// v0..v1 adds schema table
	stmt := `
CREATE TABLE IF NOT EXISTS schema (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    version INTEGER NOT NULL,
    updated_at DATETIME NOT NULL,
    UNIQUE (version)
);
INSERT INTO schema (version, updated_at) VALUES (?, strftime("%s"));`
	_, err :=
db.Exec(stmt, 1) return err } func dbUpdate(d *Daemon, prevVersion int) error { db := d.db if prevVersion < 0 || prevVersion > DB_CURRENT_VERSION { return fmt.Errorf("Bad database version: %d", prevVersion) } if prevVersion == DB_CURRENT_VERSION { return nil } var err error if prevVersion < 1 { err = dbUpdateFromV0(db) if err != nil { return err } } if prevVersion < 2 { err = dbUpdateFromV1(db) if err != nil { return err } } if prevVersion < 3 { err = dbUpdateFromV2(db) if err != nil { return err } } if prevVersion < 4 { err = dbUpdateFromV3(db) if err != nil { return err } } if prevVersion < 5 { err = dbUpdateFromV4(db) if err != nil { return err } } if prevVersion < 6 { err = dbUpdateFromV5(db) if err != nil { return err } } if prevVersion < 7 { err = dbUpdateFromV6(db) if err != nil { return err } } if prevVersion < 8 { err = dbUpdateFromV7(db) if err != nil { return err } } if prevVersion < 9 { err = dbUpdateFromV8(db) if err != nil { return err } } if prevVersion < 10 { err = dbUpdateFromV9(db) if err != nil { return err } } if prevVersion < 11 { err = dbUpdateFromV10(d) if err != nil { return err } } if prevVersion < 12 { err = dbUpdateFromV11(d) if err != nil { return err } } if prevVersion < 13 { err = dbUpdateFromV12(db) if err != nil { return err } } if prevVersion < 14 { err = dbUpdateFromV13(db) if err != nil { return err } } if prevVersion < 15 { err = dbUpdateFromV14(db) if err != nil { return err } } if prevVersion < 16 { err = dbUpdateFromV15(d) if err != nil { return err } } if prevVersion < 17 { err = dbUpdateFromV16(db) if err != nil { return err } } if prevVersion < 18 { err = dbUpdateFromV17(db) if err != nil { return err } } if prevVersion < 19 { err = dbUpdateFromV18(db) if err != nil { return err } } if prevVersion < 20 { err = dbUpdateFromV19(db) if err != nil { return err } } if prevVersion < 21 { err = dbUpdateFromV20(db) if err != nil { return err } } if prevVersion < 22 { err = dbUpdateFromV21(db) if err != nil { return err } } if 
prevVersion < 23 {
		err = dbUpdateFromV22(db)
		if err != nil {
			return err
		}
	}
	if prevVersion < 24 {
		err = dbUpdateFromV23(db)
		if err != nil {
			return err
		}
	}
	if prevVersion < 25 {
		err = dbUpdateFromV24(db)
		if err != nil {
			return err
		}
	}
	if prevVersion < 26 {
		err = dbUpdateFromV25(db)
		if err != nil {
			return err
		}
	}
	if prevVersion < 27 {
		err = dbUpdateFromV26(db)
		if err != nil {
			return err
		}
	}
	if prevVersion < 28 {
		err = dbUpdateFromV27(db)
		if err != nil {
			return err
		}
	}
	if prevVersion < 29 {
		err = dbUpdateFromV28(db)
		if err != nil {
			return err
		}
	}
	if prevVersion < 30 {
		err = dbUpdateFromV29(db)
		if err != nil {
			return err
		}
	}
	if prevVersion < 31 {
		err = dbUpdateFromV30(db)
		if err != nil {
			return err
		}
	}

	return nil
}
lxd-2.0.2/lxd/debug.go000066400000000000000000000010131272140510300144760ustar00rootroot00000000000000package main

import (
	"os"
	"os/signal"
	"runtime/pprof"
	"syscall"

	"github.com/lxc/lxd/shared"
)

func doMemDump(memProfile string) {
	f, err := os.Create(memProfile)
	if err != nil {
		shared.Debugf("Error opening memory profile file '%s': %s", memProfile, err)
		return
	}

	pprof.WriteHeapProfile(f)
	f.Close()
}

func memProfiler(memProfile string) {
	ch := make(chan os.Signal)
	signal.Notify(ch, syscall.SIGUSR1)
	for {
		sig := <-ch
		shared.Debugf("Received '%s' signal, dumping memory.", sig)
		doMemDump(memProfile)
	}
}
lxd-2.0.2/lxd/devices.go000066400000000000000000000441311272140510300150400ustar00rootroot00000000000000package main

import (
	"bufio"
	"bytes"
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"math/big"
	"os"
	"os/exec"
	"path"
	"path/filepath"
	"sort"
	"strconv"
	"strings"
	"syscall"

	_ "github.com/mattn/go-sqlite3"

	"github.com/lxc/lxd/shared"

	log "gopkg.in/inconshreveable/log15.v2"
)

var deviceSchedRebalance = make(chan []string, 2)

type deviceBlockLimit struct {
	readBps   int64
	readIops  int64
	writeBps  int64
	writeIops int64
}

type deviceTaskCPU struct {
	id    int
	strId string
	count *int
}

type deviceTaskCPUs []deviceTaskCPU

func (c deviceTaskCPUs) Len() int           { return len(c) }
func (c deviceTaskCPUs) Less(i, j int) bool { return *c[i].count < *c[j].count }
func (c deviceTaskCPUs) Swap(i, j int)      { c[i], c[j] = c[j], c[i] }

func deviceNetlinkListener() (chan []string, chan []string, error) {
	NETLINK_KOBJECT_UEVENT := 15
	UEVENT_BUFFER_SIZE := 2048

	fd, err := syscall.Socket(
		syscall.AF_NETLINK, syscall.SOCK_RAW,
		NETLINK_KOBJECT_UEVENT,
	)
	if err != nil {
		return nil, nil, err
	}

	nl := syscall.SockaddrNetlink{
		Family: syscall.AF_NETLINK,
		Pid:    uint32(os.Getpid()),
		Groups: 1,
	}

	err = syscall.Bind(fd, &nl)
	if err != nil {
		return nil, nil, err
	}

	chCPU := make(chan []string, 1)
	chNetwork := make(chan []string, 0)

	go func(chCPU chan []string, chNetwork chan []string) {
		b := make([]byte, UEVENT_BUFFER_SIZE*2)
		for {
			_, err := syscall.Read(fd, b)
			if err != nil {
				continue
			}

			props := map[string]string{}
			last := 0
			for i, e := range b {
				if i == len(b) || e == 0 {
					msg := string(b[last+1 : i])
					last = i
					if len(msg) == 0 || msg == "\x00" {
						continue
					}

					fields := strings.SplitN(msg, "=", 2)
					if len(fields) != 2 {
						continue
					}

					props[fields[0]] = fields[1]
				}
			}

			if props["SUBSYSTEM"] == "cpu" {
				if props["DRIVER"] != "processor" {
					continue
				}

				if props["ACTION"] != "offline" && props["ACTION"] != "online" {
					continue
				}

				// As CPU re-balancing affects all containers, no need to queue them
				select {
				case chCPU <- []string{path.Base(props["DEVPATH"]), props["ACTION"]}:
				default:
					// Channel is full, drop the event
				}
			}

			if props["SUBSYSTEM"] == "net" {
				if props["ACTION"] != "add" && props["ACTION"] != "remove" {
					continue
				}

				if !shared.PathExists(fmt.Sprintf("/sys/class/net/%s", props["INTERFACE"])) {
					continue
				}

				// Network balancing is interface specific, so queue everything
				chNetwork <- []string{props["INTERFACE"], props["ACTION"]}
			}
		}
	}(chCPU, chNetwork)

	return chCPU, chNetwork, nil
}

func parseCpuset(cpu string) ([]int, error) {
	cpus := []int{}
	chunks := strings.Split(cpu, ",")
	for _, chunk := range chunks {
		if strings.Contains(chunk, "-") {
			// Range
			fields := strings.SplitN(chunk, "-", 2)
			if len(fields)
!= 2 { return nil, fmt.Errorf("Invalid cpuset value: %s", cpu) } low, err := strconv.Atoi(fields[0]) if err != nil { return nil, fmt.Errorf("Invalid cpuset value: %s", cpu) } high, err := strconv.Atoi(fields[1]) if err != nil { return nil, fmt.Errorf("Invalid cpuset value: %s", cpu) } for i := low; i <= high; i++ { cpus = append(cpus, i) } } else { // Simple entry nr, err := strconv.Atoi(chunk) if err != nil { return nil, fmt.Errorf("Invalid cpuset value: %s", cpu) } cpus = append(cpus, nr) } } return cpus, nil } func deviceTaskBalance(d *Daemon) { min := func(x, y int) int { if x < y { return x } return y } // Don't bother running when CGroup support isn't there if !cgCpusetController { return } // Get effective cpus list - those are all guaranteed to be online effectiveCpus, err := cGroupGet("cpuset", "/", "cpuset.effective_cpus") if err != nil { // Older kernel - use cpuset.cpus effectiveCpus, err = cGroupGet("cpuset", "/", "cpuset.cpus") if err != nil { shared.Log.Error("Error reading host's cpuset.cpus") return } } err = cGroupSet("cpuset", "/lxc", "cpuset.cpus", effectiveCpus) if err != nil && shared.PathExists("/sys/fs/cgroup/cpuset/lxc") { shared.Log.Warn("Error setting lxd's cpuset.cpus", log.Ctx{"err": err}) } cpus, err := parseCpuset(effectiveCpus) if err != nil { shared.Log.Error("Error parsing host's cpu set", log.Ctx{"cpuset": effectiveCpus, "err": err}) return } // Iterate through the containers containers, err := dbContainersList(d.db, cTypeRegular) fixedContainers := map[int][]container{} balancedContainers := map[container]int{} for _, name := range containers { c, err := containerLoadByName(d, name) if err != nil { continue } conf := c.ExpandedConfig() cpulimit, ok := conf["limits.cpu"] if !ok || cpulimit == "" { cpulimit = effectiveCpus } if !c.IsRunning() { continue } count, err := strconv.Atoi(cpulimit) if err == nil { // Load-balance count = min(count, len(cpus)) balancedContainers[c] = count } else { // Pinned containerCpus, err := 
parseCpuset(cpulimit) if err != nil { return } for _, nr := range containerCpus { if !shared.IntInSlice(nr, cpus) { continue } _, ok := fixedContainers[nr] if ok { fixedContainers[nr] = append(fixedContainers[nr], c) } else { fixedContainers[nr] = []container{c} } } } } // Balance things pinning := map[container][]string{} usage := map[int]deviceTaskCPU{} for _, id := range cpus { cpu := deviceTaskCPU{} cpu.id = id cpu.strId = fmt.Sprintf("%d", id) count := 0 cpu.count = &count usage[id] = cpu } for cpu, ctns := range fixedContainers { c, ok := usage[cpu] if !ok { shared.Log.Error("Internal error: container using unavailable cpu") continue } id := c.strId for _, ctn := range ctns { _, ok := pinning[ctn] if ok { pinning[ctn] = append(pinning[ctn], id) } else { pinning[ctn] = []string{id} } *c.count += 1 } } sortedUsage := make(deviceTaskCPUs, 0) for _, value := range usage { sortedUsage = append(sortedUsage, value) } for ctn, count := range balancedContainers { sort.Sort(sortedUsage) for _, cpu := range sortedUsage { if count == 0 { break } count -= 1 id := cpu.strId _, ok := pinning[ctn] if ok { pinning[ctn] = append(pinning[ctn], id) } else { pinning[ctn] = []string{id} } *cpu.count += 1 } } // Set the new pinning for ctn, set := range pinning { // Confirm the container didn't just stop if !ctn.IsRunning() { continue } sort.Strings(set) err := ctn.CGroupSet("cpuset.cpus", strings.Join(set, ",")) if err != nil { shared.Log.Error("balance: Unable to set cpuset", log.Ctx{"name": ctn.Name(), "err": err, "value": strings.Join(set, ",")}) } } } func deviceNetworkPriority(d *Daemon, netif string) { // Don't bother running when CGroup support isn't there if !cgNetPrioController { return } containers, err := dbContainersList(d.db, cTypeRegular) if err != nil { return } for _, name := range containers { // Get the container struct c, err := containerLoadByName(d, name) if err != nil { continue } // Extract the current priority networkPriority := 
c.ExpandedConfig()["limits.network.priority"] if networkPriority == "" { continue } networkInt, err := strconv.Atoi(networkPriority) if err != nil { continue } // Set the value for the new interface c.CGroupSet("net_prio.ifpriomap", fmt.Sprintf("%s %d", netif, networkInt)) } return } func deviceEventListener(d *Daemon) { chNetlinkCPU, chNetlinkNetwork, err := deviceNetlinkListener() if err != nil { shared.Log.Error("scheduler: couldn't setup netlink listener") return } for { select { case e := <-chNetlinkCPU: if len(e) != 2 { shared.Log.Error("Scheduler: received an invalid cpu hotplug event") continue } if !cgCpusetController { continue } shared.Debugf("Scheduler: cpu: %s is now %s: re-balancing", e[0], e[1]) deviceTaskBalance(d) case e := <-chNetlinkNetwork: if len(e) != 2 { shared.Log.Error("Scheduler: received an invalid network hotplug event") continue } if !cgNetPrioController { continue } shared.Debugf("Scheduler: network: %s has been added: updating network priorities", e[0]) deviceNetworkPriority(d, e[0]) case e := <-deviceSchedRebalance: if len(e) != 3 { shared.Log.Error("Scheduler: received an invalid rebalance event") continue } if !cgCpusetController { continue } shared.Debugf("Scheduler: %s %s %s: re-balancing", e[0], e[1], e[2]) deviceTaskBalance(d) } } } func deviceTaskSchedulerTrigger(srcType string, srcName string, srcStatus string) { // Spawn a go routine which then triggers the scheduler select { case deviceSchedRebalance <- []string{srcType, srcName, srcStatus}: default: // Channel is full, drop the event } } func deviceIsBlockdev(path string) bool { // Get a stat struct from the provided path stat := syscall.Stat_t{} err := syscall.Stat(path, &stat) if err != nil { return false } // Check if it's a block device if stat.Mode&syscall.S_IFMT == syscall.S_IFBLK { return true } // Not a device return false } func deviceModeOct(strmode string) (int, error) { // Default mode if strmode == "" { return 0600, nil } // Converted mode i, err := 
strconv.ParseInt(strmode, 8, 32) if err != nil { return 0, fmt.Errorf("Bad device mode: %s", strmode) } return int(i), nil } func deviceGetAttributes(path string) (string, int, int, error) { // Get a stat struct from the provided path stat := syscall.Stat_t{} err := syscall.Stat(path, &stat) if err != nil { return "", 0, 0, err } // Check what kind of file it is dType := "" if stat.Mode&syscall.S_IFMT == syscall.S_IFBLK { dType = "b" } else if stat.Mode&syscall.S_IFMT == syscall.S_IFCHR { dType = "c" } else { return "", 0, 0, fmt.Errorf("Not a device") } // Return the device information major := int(stat.Rdev / 256) minor := int(stat.Rdev % 256) return dType, major, minor, nil } func deviceNextInterfaceHWAddr() (string, error) { // Generate a new random MAC address using the usual prefix ret := bytes.Buffer{} for _, c := range "00:16:3e:xx:xx:xx" { if c == 'x' { c, err := rand.Int(rand.Reader, big.NewInt(16)) if err != nil { return "", err } ret.WriteString(fmt.Sprintf("%x", c.Int64())) } else { ret.WriteString(string(c)) } } return ret.String(), nil } func deviceNextVeth() string { // Return a new random veth device name randBytes := make([]byte, 4) rand.Read(randBytes) return "veth" + hex.EncodeToString(randBytes) } func deviceRemoveInterface(nic string) error { return exec.Command("ip", "link", "del", nic).Run() } func deviceMountDisk(srcPath string, dstPath string, readonly bool, recursive bool) error { var err error // Prepare the mount flags flags := 0 if readonly { flags |= syscall.MS_RDONLY } // Detect the filesystem fstype := "none" if deviceIsBlockdev(srcPath) { fstype, err = shared.BlockFsDetect(srcPath) if err != nil { return err } } else { flags |= syscall.MS_BIND if recursive { flags |= syscall.MS_REC } } // Mount the filesystem if err = syscall.Mount(srcPath, dstPath, fstype, uintptr(flags), ""); err != nil { return fmt.Errorf("Unable to mount %s at %s: %s", srcPath, dstPath, err) } flags = syscall.MS_REC | syscall.MS_SLAVE if err = syscall.Mount("", 
dstPath, "", uintptr(flags), ""); err != nil { return fmt.Errorf("unable to make mount %s private: %s", dstPath, err) } return nil } func deviceParseCPU(cpuAllowance string, cpuPriority string) (string, string, string, error) { var err error // Parse priority cpuShares := 0 cpuPriorityInt := 10 if cpuPriority != "" { cpuPriorityInt, err = strconv.Atoi(cpuPriority) if err != nil { return "", "", "", err } } cpuShares -= 10 - cpuPriorityInt // Parse allowance cpuCfsQuota := "-1" cpuCfsPeriod := "100000" if cpuAllowance != "" { if strings.HasSuffix(cpuAllowance, "%") { // Percentage based allocation percent, err := strconv.Atoi(strings.TrimSuffix(cpuAllowance, "%")) if err != nil { return "", "", "", err } cpuShares += (10 * percent) + 24 } else { // Time based allocation fields := strings.SplitN(cpuAllowance, "/", 2) if len(fields) != 2 { return "", "", "", fmt.Errorf("Invalid allowance: %s", cpuAllowance) } quota, err := strconv.Atoi(strings.TrimSuffix(fields[0], "ms")) if err != nil { return "", "", "", err } period, err := strconv.Atoi(strings.TrimSuffix(fields[1], "ms")) if err != nil { return "", "", "", err } // Set limit in ms cpuCfsQuota = fmt.Sprintf("%d", quota*1000) cpuCfsPeriod = fmt.Sprintf("%d", period*1000) cpuShares += 1024 } } else { // Default is 100% cpuShares += 1024 } // Deal with a potential negative score if cpuShares < 0 { cpuShares = 0 } return fmt.Sprintf("%d", cpuShares), cpuCfsQuota, cpuCfsPeriod, nil } func deviceTotalMemory() (int64, error) { // Open /proc/meminfo f, err := os.Open("/proc/meminfo") if err != nil { return -1, err } defer f.Close() // Read it line by line scan := bufio.NewScanner(f) for scan.Scan() { line := scan.Text() // We only care about MemTotal if !strings.HasPrefix(line, "MemTotal:") { continue } // Extract the before last (value) and last (unit) fields fields := strings.Split(line, " ") value := fields[len(fields)-2] + fields[len(fields)-1] // Feed the result to shared.ParseByteSizeString to get an int value 
valueBytes, err := shared.ParseByteSizeString(value) if err != nil { return -1, err } return valueBytes, nil } return -1, fmt.Errorf("Couldn't find MemTotal") } func deviceGetParentBlocks(path string) ([]string, error) { var devices []string var device []string // Expand the mount path absPath, err := filepath.Abs(path) if err != nil { return nil, err } expPath, err := filepath.EvalSymlinks(absPath) if err != nil { expPath = absPath } // Find the source mount of the path file, err := os.Open("/proc/self/mountinfo") if err != nil { return nil, err } defer file.Close() scanner := bufio.NewScanner(file) match := "" for scanner.Scan() { line := scanner.Text() rows := strings.Fields(line) if len(rows[4]) <= len(match) { continue } if expPath != rows[4] && !strings.HasPrefix(expPath, rows[4]) { continue } match = rows[4] // Go backward to avoid problems with optional fields device = []string{rows[2], rows[len(rows)-2]} } if device == nil { return nil, fmt.Errorf("Couldn't find a match /proc/self/mountinfo entry") } // Handle the most simple case if !strings.HasPrefix(device[0], "0:") { return []string{device[0]}, nil } // Deal with per-filesystem oddities. We don't care about failures here // because any non-special filesystem => directory backend. 
fs, _ := filesystemDetect(expPath) if fs == "zfs" && shared.PathExists("/dev/zfs") { // Accessible zfs filesystems poolName := strings.Split(device[1], "/")[0] output, err := exec.Command("zpool", "status", poolName).CombinedOutput() if err != nil { return nil, fmt.Errorf("Failed to query zfs filesystem information for %s: %s", device[1], output) } header := true for _, line := range strings.Split(string(output), "\n") { fields := strings.Fields(line) if len(fields) < 5 { continue } if fields[1] != "ONLINE" { continue } if header { header = false continue } var path string if shared.PathExists(fields[0]) { if shared.IsBlockdevPath(fields[0]) { path = fields[0] } else { subDevices, err := deviceGetParentBlocks(fields[0]) if err != nil { return nil, err } for _, dev := range subDevices { devices = append(devices, dev) } } } else if shared.PathExists(fmt.Sprintf("/dev/%s", fields[0])) { path = fmt.Sprintf("/dev/%s", fields[0]) } else if shared.PathExists(fmt.Sprintf("/dev/disk/by-id/%s", fields[0])) { path = fmt.Sprintf("/dev/disk/by-id/%s", fields[0]) } else if shared.PathExists(fmt.Sprintf("/dev/mapper/%s", fields[0])) { path = fmt.Sprintf("/dev/mapper/%s", fields[0]) } else { continue } if path != "" { _, major, minor, err := deviceGetAttributes(path) if err != nil { continue } devices = append(devices, fmt.Sprintf("%d:%d", major, minor)) } } if len(devices) == 0 { return nil, fmt.Errorf("Unable to find backing block for zfs pool: %s", poolName) } } else if fs == "btrfs" && shared.PathExists(device[1]) { // Accessible btrfs filesystems output, err := exec.Command("btrfs", "filesystem", "show", device[1]).CombinedOutput() if err != nil { return nil, fmt.Errorf("Failed to query btrfs filesystem information for %s: %s", device[1], output) } for _, line := range strings.Split(string(output), "\n") { fields := strings.Fields(line) if len(fields) == 0 || fields[0] != "devid" { continue } _, major, minor, err := deviceGetAttributes(fields[len(fields)-1]) if err != nil { 
				return nil, err
			}

			devices = append(devices, fmt.Sprintf("%d:%d", major, minor))
		}
	} else if shared.PathExists(device[1]) {
		// Anything else with a valid path
		_, major, minor, err := deviceGetAttributes(device[1])
		if err != nil {
			return nil, err
		}

		devices = append(devices, fmt.Sprintf("%d:%d", major, minor))
	} else {
		return nil, fmt.Errorf("Invalid block device: %s", device[1])
	}

	return devices, nil
}

func deviceParseDiskLimit(readSpeed string, writeSpeed string) (int64, int64, int64, int64, error) {
	parseValue := func(value string) (int64, int64, error) {
		var err error

		bps := int64(0)
		iops := int64(0)

		if value == "" {
			return bps, iops, nil
		}

		if strings.HasSuffix(value, "iops") {
			iops, err = strconv.ParseInt(strings.TrimSuffix(value, "iops"), 10, 64)
			if err != nil {
				return -1, -1, err
			}
		} else {
			bps, err = shared.ParseByteSizeString(value)
			if err != nil {
				return -1, -1, err
			}
		}

		return bps, iops, nil
	}

	readBps, readIops, err := parseValue(readSpeed)
	if err != nil {
		return -1, -1, -1, -1, err
	}

	writeBps, writeIops, err := parseValue(writeSpeed)
	if err != nil {
		return -1, -1, -1, -1, err
	}

	return readBps, readIops, writeBps, writeIops, nil
}
lxd-2.0.2/lxd/devlxd.go000066400000000000000000000240611272140510300147040ustar00rootroot00000000000000package main

import (
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
	"os"
	"path"
	"reflect"
	"regexp"
	"strconv"
	"strings"
	"unsafe"

	"github.com/gorilla/mux"
	"github.com/lxc/lxd/shared"
)

type devLxdResponse struct {
	content interface{}
	code    int
	ctype   string
}

func okResponse(ct interface{}, ctype string) *devLxdResponse {
	return &devLxdResponse{ct, http.StatusOK, ctype}
}

type devLxdHandler struct {
	path string

	/*
	 * This API will have to be changed slightly when we decide to support
	 * websocket events upgrading, but since we don't have events on the
	 * server side right now either, I went the simple route to avoid
	 * needless noise.
*/ f func(c container, r *http.Request) *devLxdResponse } var configGet = devLxdHandler{"/1.0/config", func(c container, r *http.Request) *devLxdResponse { filtered := []string{} for k, _ := range c.ExpandedConfig() { if strings.HasPrefix(k, "user.") { filtered = append(filtered, fmt.Sprintf("/1.0/config/%s", k)) } } return okResponse(filtered, "json") }} var configKeyGet = devLxdHandler{"/1.0/config/{key}", func(c container, r *http.Request) *devLxdResponse { key := mux.Vars(r)["key"] if !strings.HasPrefix(key, "user.") { return &devLxdResponse{"not authorized", http.StatusForbidden, "raw"} } value, ok := c.ExpandedConfig()[key] if !ok { return &devLxdResponse{"not found", http.StatusNotFound, "raw"} } return okResponse(value, "raw") }} var metadataGet = devLxdHandler{"/1.0/meta-data", func(c container, r *http.Request) *devLxdResponse { value := c.ExpandedConfig()["user.meta-data"] return okResponse(fmt.Sprintf("#cloud-config\ninstance-id: %s\nlocal-hostname: %s\n%s", c.Name(), c.Name(), value), "raw") }} var handlers = []devLxdHandler{ devLxdHandler{"/", func(c container, r *http.Request) *devLxdResponse { return okResponse([]string{"/1.0"}, "json") }}, devLxdHandler{"/1.0", func(c container, r *http.Request) *devLxdResponse { return okResponse(shared.Jmap{"api_version": shared.APIVersion}, "json") }}, configGet, configKeyGet, metadataGet, /* TODO: events */ } func hoistReq(f func(container, *http.Request) *devLxdResponse, d *Daemon) func(http.ResponseWriter, *http.Request) { return func(w http.ResponseWriter, r *http.Request) { conn := extractUnderlyingConn(w) cred, ok := pidMapper.m[conn] if !ok { http.Error(w, pidNotInContainerErr.Error(), 500) return } c, err := findContainerForPid(cred.pid, d) if err != nil { http.Error(w, err.Error(), 500) return } // Access control rootUid := int64(0) idmapset, err := c.LastIdmapSet() if err == nil && idmapset != nil { uid, _ := idmapset.ShiftIntoNs(0, 0) rootUid = int64(uid) } if rootUid != cred.uid { http.Error(w, 
"Access denied for non-root user", 401) return } resp := f(c, r) if resp.code != http.StatusOK { http.Error(w, fmt.Sprintf("%s", resp.content), resp.code) } else if resp.ctype == "json" { w.Header().Set("Content-Type", "application/json") WriteJSON(w, resp.content) } else { w.Header().Set("Content-Type", "application/octet-stream") fmt.Fprintf(w, resp.content.(string)) } } } func createAndBindDevLxd() (*net.UnixListener, error) { sockFile := path.Join(shared.VarPath("devlxd"), "sock") /* * If this socket exists, that means a previous lxd died and didn't * clean up after itself. We assume that the LXD is actually dead if we * get this far, since StartDaemon() tries to connect to the actual lxd * socket to make sure that it is actually dead. So, it is safe to * remove it here without any checks. * * Also, it would be nice to SO_REUSEADDR here so we don't have to * delete the socket, but we can't: * http://stackoverflow.com/questions/15716302/so-reuseaddr-and-af-unix * * Note that this will force clients to reconnect when LXD is restarted. */ if err := os.Remove(sockFile); err != nil && !os.IsNotExist(err) { return nil, err } unixAddr, err := net.ResolveUnixAddr("unix", sockFile) if err != nil { return nil, err } unixl, err := net.ListenUnix("unix", unixAddr) if err != nil { return nil, err } if err := os.Chmod(sockFile, 0666); err != nil { return nil, err } return unixl, nil } func devLxdServer(d *Daemon) *http.Server { m := mux.NewRouter() for _, handler := range handlers { m.HandleFunc(handler.path, hoistReq(handler.f, d)) } return &http.Server{ Handler: m, ConnState: pidMapper.ConnStateHandler, } } /* * Everything below here is the guts of the unix socket bits. Unfortunately, * golang's API does not make this easy. What happens is: * * 1. We install a ConnState listener on the http.Server, which does the * initial unix socket credential exchange. When we get a connection started * event, we use SO_PEERCRED to extract the creds for the socket. * * 2. 
 * We store a map from the connection pointer to the pid for that
 * connection, so that once the HTTP negotiation occurs and we get a
 * ResponseWriter, we know (because we negotiated on the first byte) which
 * pid the connection belongs to.
 *
 * 3. Regular HTTP negotiation and dispatch occurs via net/http.
 *
 * 4. When rendering the response via ResponseWriter, we match its underlying
 * connection against what we stored in step (2) to figure out which container
 * it came from.
 */

/*
 * We keep this in a global so that we can reference it from the server and
 * from our http handlers, since there appears to be no way to pass information
 * around here.
 */
var pidMapper = ConnPidMapper{m: map[*net.UnixConn]*ucred{}}

type ucred struct {
	pid int32
	uid int64
	gid int64
}

type ConnPidMapper struct {
	m map[*net.UnixConn]*ucred
}

func (m *ConnPidMapper) ConnStateHandler(conn net.Conn, state http.ConnState) {
	unixConn := conn.(*net.UnixConn)
	switch state {
	case http.StateNew:
		cred, err := getCred(unixConn)
		if err != nil {
			shared.Debugf("Error getting ucred for conn %s", err)
		} else {
			m.m[unixConn] = cred
		}
	case http.StateActive:
		return
	case http.StateIdle:
		return
	case http.StateHijacked:
		/*
		 * The "Hijacked" state indicates that the connection has been
		 * taken over from net/http. This is useful for things like
		 * developing websocket libraries, who want to upgrade the
		 * connection to a websocket one, and not use net/http any
		 * more. Whatever the case, we want to forget about it since we
		 * won't see it either.
		 */
		delete(m.m, unixConn)
	case http.StateClosed:
		delete(m.m, unixConn)
	default:
		shared.Debugf("Unknown state for connection %s", state)
	}
}

/*
 * I also don't see that golang exports an API to get at the underlying FD, but
 * we need it to get at SO_PEERCRED, so let's grab it.
 */
func extractUnderlyingFd(unixConnPtr *net.UnixConn) int {
	conn := reflect.Indirect(reflect.ValueOf(unixConnPtr))
	netFdPtr := conn.FieldByName("fd")
	netFd := reflect.Indirect(netFdPtr)
	fd := netFd.FieldByName("sysfd")

	return int(fd.Int())
}

func getCred(conn *net.UnixConn) (*ucred, error) {
	fd := extractUnderlyingFd(conn)

	uid, gid, pid, err := getUcred(fd)
	if err != nil {
		return nil, err
	}

	return &ucred{pid, int64(uid), int64(gid)}, nil
}

/*
 * As near as I can tell, there is no nice way of extracting an underlying
 * net.Conn (or in our case, net.UnixConn) from an http.Request or
 * ResponseWriter without hijacking it [1]. Since we want to send and receive
 * unix creds to figure out which container this request came from, we need to
 * do this.
 *
 * [1]: https://groups.google.com/forum/#!topic/golang-nuts/_FWdFXJa6QA
 */
func extractUnderlyingConn(w http.ResponseWriter) *net.UnixConn {
	v := reflect.Indirect(reflect.ValueOf(w))
	connPtr := v.FieldByName("conn")
	conn := reflect.Indirect(connPtr)
	rwc := conn.FieldByName("rwc")

	netConnPtr := (*net.Conn)(unsafe.Pointer(rwc.UnsafeAddr()))
	unixConnPtr := (*netConnPtr).(*net.UnixConn)

	return unixConnPtr
}

var pidNotInContainerErr = fmt.Errorf("pid not in container?")

func findContainerForPid(pid int32, d *Daemon) (container, error) {
	/*
	 * Try and figure out which container a pid is in. There is probably a
	 * better way to do this. Based on rharper's initial performance
	 * metrics, looping over every container and calling newLxdContainer is
	 * expensive, so I wanted to avoid that if possible, so this happens in
	 * a two step process:
	 *
	 * 1. Walk up the process tree until you see something that looks like
	 *    an lxc monitor process and extract its name from there.
	 *
	 * 2. If this fails, it may be that someone did an `lxc exec foo bash`,
	 *    so the process isn't actually a descendant of the container's
	 *    init. In this case we just look through all the containers until
	 *    we find an init with a matching pid namespace.
	 *    This is probably uncommon, so hopefully the slowness won't hurt us.
	 */
	origpid := pid
	for pid > 1 {
		cmdline, err := ioutil.ReadFile(fmt.Sprintf("/proc/%d/cmdline", pid))
		if err != nil {
			return nil, err
		}

		if strings.HasPrefix(string(cmdline), "[lxc monitor]") {
			// container names can't have spaces
			parts := strings.Split(string(cmdline), " ")
			name := strings.TrimSuffix(parts[len(parts)-1], "\x00")

			return containerLoadByName(d, name)
		}

		status, err := ioutil.ReadFile(fmt.Sprintf("/proc/%d/status", pid))
		if err != nil {
			return nil, err
		}

		re := regexp.MustCompile("PPid:\\s*([0-9]*)")
		for _, line := range strings.Split(string(status), "\n") {
			m := re.FindStringSubmatch(line)
			if m != nil && len(m) > 1 {
				result, err := strconv.Atoi(m[1])
				if err != nil {
					return nil, err
				}

				pid = int32(result)
				break
			}
		}
	}

	origPidNs, err := os.Readlink(fmt.Sprintf("/proc/%d/ns/pid", origpid))
	if err != nil {
		return nil, err
	}

	containers, err := dbContainersList(d.db, cTypeRegular)
	if err != nil {
		return nil, err
	}

	for _, container := range containers {
		c, err := containerLoadByName(d, container)
		if err != nil {
			return nil, err
		}

		if !c.IsRunning() {
			continue
		}

		initpid := c.InitPID()
		pidNs, err := os.Readlink(fmt.Sprintf("/proc/%d/ns/pid", initpid))
		if err != nil {
			return nil, err
		}

		if origPidNs == pidNs {
			return c, nil
		}
	}

	return nil, pidNotInContainerErr
}
lxd-2.0.2/lxd/devlxd_gc.go000066400000000000000000000004251272140510300153530ustar00rootroot00000000000000// +build gc

package main

import (
	"syscall"
)

func getUcred(fd int) (uint32, uint32, int32, error) {
	cred, err := syscall.GetsockoptUcred(fd, syscall.SOL_SOCKET, syscall.SO_PEERCRED)
	if err != nil {
		return 0, 0, -1, err
	}

	return cred.Uid, cred.Gid, cred.Pid, nil
}
lxd-2.0.2/lxd/devlxd_gccgo.go000066400000000000000000000016031272140510300160430ustar00rootroot00000000000000// +build gccgo
// +build cgo

package main

import (
	"errors"
)

/*
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

void getucred(int sock, uint *uid,
uint *gid, int *pid) { struct ucred peercred; socklen_t len; len = sizeof(struct ucred); if (getsockopt(sock, SOL_SOCKET, SO_PEERCRED, &peercred, &len) != 0 || len != sizeof(peercred)) { fprintf(stderr, "getsockopt failed: %s\n", strerror(errno)); return; } *uid = peercred.uid; *gid = peercred.gid; *pid = peercred.pid; return; } */ import "C" func getUcred(fd int) (uint32, uint32, int32, error) { uid := C.uint(0) gid := C.uint(0) pid := C.int(-1) C.getucred(C.int(fd), &uid, &gid, &pid) if uid == 0 || gid == 0 || pid == -1 { return 0, 0, -1, errors.New("Failed to get the ucred") } return uint32(uid), uint32(gid), int32(pid), nil } lxd-2.0.2/lxd/devlxd_test.go000066400000000000000000000051651272140510300157470ustar00rootroot00000000000000package main import ( "fmt" "io/ioutil" "net" "net/http" "os" "strings" "testing" ) var testDir string type DevLxdDialer struct { Path string } func (d DevLxdDialer) DevLxdDial(network, path string) (net.Conn, error) { addr, err := net.ResolveUnixAddr("unix", d.Path) if err != nil { return nil, err } conn, err := net.DialUnix("unix", nil, addr) if err != nil { return nil, err } return conn, err } func setupDir() error { var err error testDir, err = ioutil.TempDir("", "lxd_test_devlxd_") if err != nil { return err } err = os.Chmod(testDir, 0700) if err != nil { return err } os.MkdirAll(fmt.Sprintf("%s/devlxd", testDir), 0755) return os.Setenv("LXD_DIR", testDir) } func setupSocket() (*net.UnixListener, error) { setupDir() return createAndBindDevLxd() } func connect(path string) (*net.UnixConn, error) { addr, err := net.ResolveUnixAddr("unix", path) if err != nil { return nil, err } conn, err := net.DialUnix("unix", nil, addr) if err != nil { return nil, err } return conn, nil } func TestCredsSendRecv(t *testing.T) { result := make(chan int32, 1) listener, err := setupSocket() if err != nil { t.Fatal(err) } defer listener.Close() defer os.RemoveAll(testDir) go func() { conn, err := listener.AcceptUnix() if err != nil { t.Log(err) 
result <- -1 return } defer conn.Close() cred, err := getCred(conn) if err != nil { t.Log(err) result <- -1 return } result <- cred.pid }() conn, err := connect(fmt.Sprintf("%s/devlxd/sock", testDir)) if err != nil { t.Fatal(err) } defer conn.Close() pid := <-result if pid != int32(os.Getpid()) { t.Fatal("pid mismatch: ", pid, os.Getpid()) } } /* * Here we're not really testing the API functionality (we can't, since it * expects us to be inside a container to work), but it is useful to test that * all the grotty connection extracting stuff works (that is, it gets to the * point where it realizes the pid isn't in a container without crashing). */ func TestHttpRequest(t *testing.T) { if err := setupDir(); err != nil { t.Fatal(err) } defer os.RemoveAll(testDir) d := &Daemon{} err := d.Init() if err != nil { t.Fatal(err) } defer d.Stop() c := http.Client{Transport: &http.Transport{Dial: DevLxdDialer{Path: fmt.Sprintf("%s/devlxd/sock", testDir)}.DevLxdDial}} raw, err := c.Get("http://1.0") if err != nil { t.Fatal(err) } if raw.StatusCode != 500 { t.Fatal(err) } resp, err := ioutil.ReadAll(raw.Body) if err != nil { t.Fatal(err) } if !strings.Contains(string(resp), pidNotInContainerErr.Error()) { t.Fatal("resp error not expected: ", string(resp)) } } lxd-2.0.2/lxd/events.go000066400000000000000000000054251272140510300147250ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "net/http" "strings" "sync" "time" "github.com/gorilla/websocket" "github.com/pborman/uuid" log "gopkg.in/inconshreveable/log15.v2" "github.com/lxc/lxd/shared" ) type eventsHandler struct { } func logContextMap(ctx []interface{}) map[string]string { var key string ctxMap := map[string]string{} for _, entry := range ctx { if key == "" { key = entry.(string) } else { ctxMap[key] = fmt.Sprintf("%s", entry) key = "" } } return ctxMap } func (h eventsHandler) Log(r *log.Record) error { eventSend("logging", shared.Jmap{ "message": r.Msg, "level": r.Lvl.String(), "context": 
logContextMap(r.Ctx)}) return nil } var eventsLock sync.Mutex var eventListeners map[string]*eventListener = make(map[string]*eventListener) type eventListener struct { connection *websocket.Conn messageTypes []string active chan bool id string msgLock sync.Mutex } type eventsServe struct { req *http.Request } func (r *eventsServe) Render(w http.ResponseWriter) error { return eventsSocket(r.req, w) } func (r *eventsServe) String() string { return "event handler" } func eventsSocket(r *http.Request, w http.ResponseWriter) error { listener := eventListener{} typeStr := r.FormValue("type") if typeStr == "" { typeStr = "logging,operation" } c, err := shared.WebsocketUpgrader.Upgrade(w, r, nil) if err != nil { return err } listener.active = make(chan bool, 1) listener.connection = c listener.id = uuid.NewRandom().String() listener.messageTypes = strings.Split(typeStr, ",") eventsLock.Lock() eventListeners[listener.id] = &listener eventsLock.Unlock() shared.Debugf("New events listener: %s", listener.id) <-listener.active eventsLock.Lock() delete(eventListeners, listener.id) eventsLock.Unlock() listener.connection.Close() shared.Debugf("Disconnected events listener: %s", listener.id) return nil } func eventsGet(d *Daemon, r *http.Request) Response { return &eventsServe{r} } var eventsCmd = Command{name: "events", get: eventsGet} func eventSend(eventType string, eventMessage interface{}) error { event := shared.Jmap{} event["type"] = eventType event["timestamp"] = time.Now() event["metadata"] = eventMessage body, err := json.Marshal(event) if err != nil { return err } eventsLock.Lock() listeners := eventListeners for _, listener := range listeners { if !shared.StringInSlice(eventType, listener.messageTypes) { continue } go func(listener *eventListener, body []byte) { if listener == nil { return } listener.msgLock.Lock() err = listener.connection.WriteMessage(websocket.TextMessage, body) listener.msgLock.Unlock() if err != nil { listener.active <- false } }(listener, body) 
	}
	eventsLock.Unlock()

	return nil
}
lxd-2.0.2/lxd/images.go000066400000000000000000000751461272140510300146710ustar00rootroot00000000000000package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"
	"mime"
	"mime/multipart"
	"net/http"
	"net/url"
	"os"
	"os/exec"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/gorilla/mux"
	"gopkg.in/yaml.v2"

	"github.com/lxc/lxd/shared"
	"github.com/lxc/lxd/shared/logging"

	log "gopkg.in/inconshreveable/log15.v2"
)

/* We only want a single publish running at any one time.
   The CPU and I/O load of publish is such that running multiple ones in
   parallel takes longer than running them serially.

   Additionally, publishing the same container or container snapshot twice
   would lead to storage problems, not to mention a conflict at the end for
   whichever finishes last. */
var imagePublishLock sync.Mutex

func detectCompression(fname string) ([]string, string, error) {
	f, err := os.Open(fname)
	if err != nil {
		return []string{""}, "", err
	}
	defer f.Close()

	// read header parts to detect compression method
	// bz2 - 2 bytes, 'BZ' signature/magic number
	// gz - 2 bytes, 0x1f 0x8b
	// lzma - 6 bytes, { [0x000, 0xE0], '7', 'z', 'X', 'Z', 0x00 } -
	// xz - 6 bytes, header format { 0xFD, '7', 'z', 'X', 'Z', 0x00 }
	// tar - 263 bytes, trying to get ustar from 257 - 262
	header := make([]byte, 263)
	_, err = f.Read(header)
	if err != nil {
		return []string{""}, "", err
	}

	switch {
	case bytes.Equal(header[0:2], []byte{'B', 'Z'}):
		return []string{"-jxf"}, ".tar.bz2", nil
	case bytes.Equal(header[0:2], []byte{0x1f, 0x8b}):
		return []string{"-zxf"}, ".tar.gz", nil
	case (bytes.Equal(header[1:5], []byte{'7', 'z', 'X', 'Z'}) && header[0] == 0xFD):
		return []string{"-Jxf"}, ".tar.xz", nil
	case (bytes.Equal(header[1:5], []byte{'7', 'z', 'X', 'Z'}) && header[0] != 0xFD):
		return []string{"--lzma", "-xf"}, ".tar.lzma", nil
	case bytes.Equal(header[257:262], []byte{'u', 's', 't', 'a', 'r'}):
		return []string{"-xf"}, ".tar", nil
	default:
		return []string{""}, "",
fmt.Errorf("Unsupported compression.") } } func untar(tarball string, path string) error { extractArgs, _, err := detectCompression(tarball) if err != nil { return err } command := "tar" args := []string{} if runningInUserns { args = append(args, "--wildcards") args = append(args, "--exclude=dev/*") args = append(args, "--exclude=./dev/*") args = append(args, "--exclude=rootfs/dev/*") args = append(args, "--exclude=rootfs/./dev/*") } args = append(args, "-C", path, "--numeric-owner") args = append(args, extractArgs...) args = append(args, tarball) output, err := exec.Command(command, args...).CombinedOutput() if err != nil { shared.Debugf("Unpacking failed") shared.Debugf(string(output)) return err } return nil } func untarImage(imagefname string, destpath string) error { err := untar(imagefname, destpath) if err != nil { return err } rootfsPath := fmt.Sprintf("%s/rootfs", destpath) if shared.PathExists(imagefname + ".rootfs") { err = os.MkdirAll(rootfsPath, 0755) if err != nil { return fmt.Errorf("Error creating rootfs directory") } err = untar(imagefname+".rootfs", rootfsPath) if err != nil { return err } } if !shared.PathExists(rootfsPath) { return fmt.Errorf("Image is missing a rootfs: %s", imagefname) } return nil } func compressFile(path string, compress string) (string, error) { reproducible := []string{"gzip"} args := []string{path, "-c"} if shared.StringInSlice(compress, reproducible) { args = append(args, "-n") } cmd := exec.Command(compress, args...) 
outfile, err := os.Create(path + ".compressed") if err != nil { return "", err } defer outfile.Close() cmd.Stdout = outfile err = cmd.Run() if err != nil { os.Remove(outfile.Name()) return "", err } return outfile.Name(), nil } type templateEntry struct { When []string `yaml:"when"` CreateOnly bool `yaml:"create_only"` Template string `yaml:"template"` Properties map[string]string `yaml:"properties"` } type imagePostReq struct { Filename string `json:"filename"` Public bool `json:"public"` Source map[string]string `json:"source"` Properties map[string]string `json:"properties"` AutoUpdate bool `json:"auto_update"` } type imageMetadata struct { Architecture string `yaml:"architecture"` CreationDate int64 `yaml:"creation_date"` ExpiryDate int64 `yaml:"expiry_date"` Properties map[string]string `yaml:"properties"` Templates map[string]*templateEntry `yaml:"templates"` } /* * This function takes a container or snapshot from the local image server and * exports it as an image. */ func imgPostContInfo(d *Daemon, r *http.Request, req imagePostReq, builddir string) (info shared.ImageInfo, err error) { info.Properties = map[string]string{} name := req.Source["name"] ctype := req.Source["type"] if ctype == "" || name == "" { return info, fmt.Errorf("No source provided") } switch ctype { case "snapshot": if !shared.IsSnapshot(name) { return info, fmt.Errorf("Not a snapshot") } case "container": if shared.IsSnapshot(name) { return info, fmt.Errorf("This is a snapshot") } default: return info, fmt.Errorf("Bad type") } info.Filename = req.Filename switch req.Public { case true: info.Public = true case false: info.Public = false } c, err := containerLoadByName(d, name) if err != nil { return info, err } // Build the actual image file tarfile, err := ioutil.TempFile(builddir, "lxd_build_tar_") if err != nil { return info, err } defer os.Remove(tarfile.Name()) if err := c.Export(tarfile); err != nil { tarfile.Close() return info, err } tarfile.Close() var compressedPath string 
	compress := daemonConfig["images.compression_algorithm"].Get()

	if compress != "none" {
		compressedPath, err = compressFile(tarfile.Name(), compress)
		if err != nil {
			return info, err
		}
	} else {
		compressedPath = tarfile.Name()
	}
	defer os.Remove(compressedPath)

	sha256 := sha256.New()
	tarf, err := os.Open(compressedPath)
	if err != nil {
		return info, err
	}

	info.Size, err = io.Copy(sha256, tarf)
	tarf.Close()
	if err != nil {
		return info, err
	}

	info.Fingerprint = fmt.Sprintf("%x", sha256.Sum(nil))
	_, _, err = dbImageGet(d.db, info.Fingerprint, false, true)
	if err == nil {
		return info, fmt.Errorf("The image already exists: %s", info.Fingerprint)
	}

	/* rename the file to the expected name so our caller can use it */
	finalName := shared.VarPath("images", info.Fingerprint)
	err = shared.FileMove(compressedPath, finalName)
	if err != nil {
		return info, err
	}

	info.Architecture, _ = shared.ArchitectureName(c.Architecture())
	info.Properties = req.Properties

	return info, nil
}

func imgPostRemoteInfo(d *Daemon, req imagePostReq, op *operation) error {
	var err error
	var hash string

	if req.Source["fingerprint"] != "" {
		hash = req.Source["fingerprint"]
	} else if req.Source["alias"] != "" {
		hash = req.Source["alias"]
	} else {
		return fmt.Errorf("must specify one of alias or fingerprint for init from image")
	}

	hash, err = d.ImageDownload(op, req.Source["server"], req.Source["protocol"], req.Source["certificate"], req.Source["secret"], hash, false, req.AutoUpdate)
	if err != nil {
		return err
	}

	id, info, err := dbImageGet(d.db, hash, false, false)
	if err != nil {
		return err
	}

	// Allow overriding or adding properties
	for k, v := range req.Properties {
		info.Properties[k] = v
	}

	// Update the DB record if needed
	if req.Public || req.AutoUpdate || req.Filename != "" || len(req.Properties) > 0 {
		err = dbImageUpdate(d.db, id, req.Filename, info.Size, req.Public, req.AutoUpdate, info.Architecture, info.CreationDate, info.ExpiryDate, info.Properties)
		if err != nil {
			return err
		}
	}

	metadata :=
make(map[string]string) metadata["fingerprint"] = info.Fingerprint metadata["size"] = strconv.FormatInt(info.Size, 10) op.UpdateMetadata(metadata) return nil } func imgPostURLInfo(d *Daemon, req imagePostReq, op *operation) error { var err error if req.Source["url"] == "" { return fmt.Errorf("Missing URL") } // Resolve the image URL tlsConfig, err := shared.GetTLSConfig("", "", nil) if err != nil { return err } tr := &http.Transport{ TLSClientConfig: tlsConfig, Dial: shared.RFC3493Dialer, Proxy: d.proxy, } myhttp := http.Client{ Transport: tr, } head, err := http.NewRequest("HEAD", req.Source["url"], nil) if err != nil { return err } architecturesStr := []string{} for _, arch := range d.architectures { architecturesStr = append(architecturesStr, fmt.Sprintf("%d", arch)) } head.Header.Set("User-Agent", shared.UserAgent) head.Header.Set("LXD-Server-Architectures", strings.Join(architecturesStr, ", ")) head.Header.Set("LXD-Server-Version", shared.Version) raw, err := myhttp.Do(head) if err != nil { return err } hash := raw.Header.Get("LXD-Image-Hash") if hash == "" { return fmt.Errorf("Missing LXD-Image-Hash header") } url := raw.Header.Get("LXD-Image-URL") if url == "" { return fmt.Errorf("Missing LXD-Image-URL header") } // Import the image hash, err = d.ImageDownload(op, url, "direct", "", "", hash, false, req.AutoUpdate) if err != nil { return err } id, info, err := dbImageGet(d.db, hash, false, false) if err != nil { return err } // Allow overriding or adding properties for k, v := range req.Properties { info.Properties[k] = v } if req.Public || req.AutoUpdate || req.Filename != "" || len(req.Properties) > 0 { err = dbImageUpdate(d.db, id, req.Filename, info.Size, req.Public, req.AutoUpdate, info.Architecture, info.CreationDate, info.ExpiryDate, info.Properties) if err != nil { return err } } metadata := make(map[string]string) metadata["fingerprint"] = info.Fingerprint metadata["size"] = strconv.FormatInt(info.Size, 10) op.UpdateMetadata(metadata) return nil } 
func getImgPostInfo(d *Daemon, r *http.Request, builddir string, post *os.File) (info shared.ImageInfo, err error) { var imageMeta *imageMetadata logger := logging.AddContext(shared.Log, log.Ctx{"function": "getImgPostInfo"}) public, _ := strconv.Atoi(r.Header.Get("X-LXD-public")) info.Public = public == 1 propHeaders := r.Header[http.CanonicalHeaderKey("X-LXD-properties")] ctype, ctypeParams, err := mime.ParseMediaType(r.Header.Get("Content-Type")) if err != nil { ctype = "application/octet-stream" } sha256 := sha256.New() var size int64 // Create a temporary file for the image tarball imageTarf, err := ioutil.TempFile(builddir, "lxd_tar_") if err != nil { return info, err } defer os.Remove(imageTarf.Name()) if ctype == "multipart/form-data" { // Parse the POST data post.Seek(0, 0) mr := multipart.NewReader(post, ctypeParams["boundary"]) // Get the metadata tarball part, err := mr.NextPart() if err != nil { return info, err } if part.FormName() != "metadata" { return info, fmt.Errorf("Invalid multipart image") } size, err = io.Copy(io.MultiWriter(imageTarf, sha256), part) info.Size += size imageTarf.Close() if err != nil { logger.Error( "Failed to copy the image tarfile", log.Ctx{"err": err}) return info, err } // Get the rootfs tarball part, err = mr.NextPart() if err != nil { logger.Error( "Failed to get the next part", log.Ctx{"err": err}) return info, err } if part.FormName() != "rootfs" { logger.Error( "Invalid multipart image") return info, fmt.Errorf("Invalid multipart image") } // Create a temporary file for the rootfs tarball rootfsTarf, err := ioutil.TempFile(builddir, "lxd_tar_") if err != nil { return info, err } defer os.Remove(rootfsTarf.Name()) size, err = io.Copy(io.MultiWriter(rootfsTarf, sha256), part) info.Size += size rootfsTarf.Close() if err != nil { logger.Error( "Failed to copy the rootfs tarfile", log.Ctx{"err": err}) return info, err } info.Filename = part.FileName() info.Fingerprint = fmt.Sprintf("%x", sha256.Sum(nil)) 
expectedFingerprint := r.Header.Get("X-LXD-fingerprint") if expectedFingerprint != "" && info.Fingerprint != expectedFingerprint { err = fmt.Errorf("fingerprints don't match, got %s expected %s", info.Fingerprint, expectedFingerprint) return info, err } imageMeta, err = getImageMetadata(imageTarf.Name()) if err != nil { logger.Error( "Failed to get image metadata", log.Ctx{"err": err}) return info, err } imgfname := shared.VarPath("images", info.Fingerprint) err = shared.FileMove(imageTarf.Name(), imgfname) if err != nil { logger.Error( "Failed to move the image tarfile", log.Ctx{ "err": err, "source": imageTarf.Name(), "dest": imgfname}) return info, err } rootfsfname := shared.VarPath("images", info.Fingerprint+".rootfs") err = shared.FileMove(rootfsTarf.Name(), rootfsfname) if err != nil { logger.Error( "Failed to move the rootfs tarfile", log.Ctx{ "err": err, "source": rootfsTarf.Name(), "dest": imgfname}) return info, err } } else { post.Seek(0, 0) size, err = io.Copy(io.MultiWriter(imageTarf, sha256), post) info.Size = size imageTarf.Close() logger.Debug("Tar size", log.Ctx{"size": size}) if err != nil { logger.Error( "Failed to copy the tarfile", log.Ctx{"err": err}) return info, err } info.Filename = r.Header.Get("X-LXD-filename") info.Fingerprint = fmt.Sprintf("%x", sha256.Sum(nil)) expectedFingerprint := r.Header.Get("X-LXD-fingerprint") if expectedFingerprint != "" && info.Fingerprint != expectedFingerprint { logger.Error( "Fingerprints don't match", log.Ctx{ "got": info.Fingerprint, "expected": expectedFingerprint}) err = fmt.Errorf( "fingerprints don't match, got %s expected %s", info.Fingerprint, expectedFingerprint) return info, err } imageMeta, err = getImageMetadata(imageTarf.Name()) if err != nil { logger.Error( "Failed to get image metadata", log.Ctx{"err": err}) return info, err } imgfname := shared.VarPath("images", info.Fingerprint) err = shared.FileMove(imageTarf.Name(), imgfname) if err != nil { logger.Error( "Failed to move the tarfile", 
				log.Ctx{
					"err":    err,
					"source": imageTarf.Name(),
					"dest":   imgfname})
			return info, err
		}
	}

	info.Architecture = imageMeta.Architecture
	info.CreationDate = time.Unix(imageMeta.CreationDate, 0)
	info.ExpiryDate = time.Unix(imageMeta.ExpiryDate, 0)
	info.Properties = imageMeta.Properties
	if len(propHeaders) > 0 {
		for _, ph := range propHeaders {
			p, _ := url.ParseQuery(ph)
			for pkey, pval := range p {
				info.Properties[pkey] = pval[0]
			}
		}
	}

	return info, nil
}

func imageBuildFromInfo(d *Daemon, info shared.ImageInfo) (metadata map[string]string, err error) {
	err = d.Storage.ImageCreate(info.Fingerprint)
	if err != nil {
		os.Remove(shared.VarPath("images", info.Fingerprint))
		os.Remove(shared.VarPath("images", info.Fingerprint) + ".rootfs")
		return metadata, err
	}

	err = dbImageInsert(
		d.db,
		info.Fingerprint,
		info.Filename,
		info.Size,
		info.Public,
		info.AutoUpdate,
		info.Architecture,
		info.CreationDate,
		info.ExpiryDate,
		info.Properties)
	if err != nil {
		return metadata, err
	}

	metadata = make(map[string]string)
	metadata["fingerprint"] = info.Fingerprint
	metadata["size"] = strconv.FormatInt(info.Size, 10)

	return metadata, nil
}

func imagesPost(d *Daemon, r *http.Request) Response {
	var err error

	// create a directory under which we keep everything while building
	builddir, err := ioutil.TempDir(shared.VarPath("images"), "lxd_build_")
	if err != nil {
		return InternalError(err)
	}

	cleanup := func(path string, fd *os.File) {
		if fd != nil {
			fd.Close()
		}

		if err := os.RemoveAll(path); err != nil {
			shared.Debugf("Error deleting temporary directory \"%s\": %s", path, err)
		}
	}

	// Store the post data to disk
	post, err := ioutil.TempFile(builddir, "lxd_post_")
	if err != nil {
		cleanup(builddir, nil)
		return InternalError(err)
	}

	_, err = io.Copy(post, r.Body)
	if err != nil {
		cleanup(builddir, post)
		return InternalError(err)
	}

	// Is this a container request?
	post.Seek(0, 0)
	decoder := json.NewDecoder(post)
	imageUpload := false

	req := imagePostReq{}
	err = decoder.Decode(&req)
	if err != nil {
		if r.Header.Get("Content-Type") == "application/json" {
			return BadRequest(err)
		}
		imageUpload = true
	}

	if !imageUpload && !shared.StringInSlice(req.Source["type"], []string{"container", "snapshot", "image", "url"}) {
		cleanup(builddir, post)
		return InternalError(fmt.Errorf("Invalid images JSON"))
	}

	// Begin background operation
	run := func(op *operation) error {
		var info shared.ImageInfo

		// Setup the cleanup function
		defer cleanup(builddir, post)

		/* Processing image copy from remote */
		if !imageUpload && req.Source["type"] == "image" {
			err := imgPostRemoteInfo(d, req, op)
			if err != nil {
				return err
			}
			return nil
		}

		/* Processing image copy from URL */
		if !imageUpload && req.Source["type"] == "url" {
			err := imgPostURLInfo(d, req, op)
			if err != nil {
				return err
			}
			return nil
		}

		if imageUpload {
			/* Processing image upload */
			info, err = getImgPostInfo(d, r, builddir, post)
			if err != nil {
				return err
			}
		} else {
			/* Processing image creation from container */
			imagePublishLock.Lock()
			info, err = imgPostContInfo(d, r, req, builddir)
			if err != nil {
				imagePublishLock.Unlock()
				return err
			}
			imagePublishLock.Unlock()
		}

		metadata, err := imageBuildFromInfo(d, info)
		if err != nil {
			return err
		}

		op.UpdateMetadata(metadata)
		return nil
	}

	op, err := operationCreate(operationClassTask, nil, nil, run, nil, nil)
	if err != nil {
		return InternalError(err)
	}

	return OperationResponse(op)
}

func getImageMetadata(fname string) (*imageMetadata, error) {
	metadataName := "metadata.yaml"

	compressionArgs, _, err := detectCompression(fname)
	if err != nil {
		return nil, fmt.Errorf(
			"detectCompression failed, err='%v', tarfile='%s'",
			err,
			fname)
	}

	args := []string{"-O"}
	args = append(args, compressionArgs...)
	args = append(args, fname, metadataName)

	// read the metadata.yaml
	output, err := exec.Command("tar", args...).CombinedOutput()
	if err != nil {
		outputLines := strings.Split(string(output), "\n")
		return nil, fmt.Errorf("Could not extract image %s from tar: %v (%s)", metadataName, err, outputLines[0])
	}

	metadata := imageMetadata{}
	err = yaml.Unmarshal(output, &metadata)
	if err != nil {
		return nil, fmt.Errorf("Could not parse %s: %v", metadataName, err)
	}

	_, err = shared.ArchitectureId(metadata.Architecture)
	if err != nil {
		return nil, err
	}

	if metadata.CreationDate == 0 {
		return nil, fmt.Errorf("Missing creation date.")
	}

	return &metadata, nil
}

func doImagesGet(d *Daemon, recursion bool, public bool) (interface{}, error) {
	results, err := dbImagesGet(d.db, public)
	if err != nil {
		return []string{}, err
	}

	resultString := make([]string, len(results))
	resultMap := make([]*shared.ImageInfo, len(results))
	i := 0
	for _, name := range results {
		if !recursion {
			url := fmt.Sprintf("/%s/images/%s", shared.APIVersion, name)
			resultString[i] = url
		} else {
			image, response := doImageGet(d, name, public)
			if response != nil {
				continue
			}
			resultMap[i] = image
		}

		i++
	}

	if !recursion {
		return resultString, nil
	}

	return resultMap, nil
}

func imagesGet(d *Daemon, r *http.Request) Response {
	public := !d.isTrustedClient(r)

	result, err := doImagesGet(d, d.isRecursionRequest(r), public)
	if err != nil {
		return SmartError(err)
	}

	return SyncResponse(true, result)
}

var imagesCmd = Command{name: "images", post: imagesPost, untrustedGet: true, get: imagesGet}

func autoUpdateImages(d *Daemon) {
	shared.Debugf("Updating images")

	images, err := dbImagesGet(d.db, false)
	if err != nil {
		shared.Log.Error("Unable to retrieve the list of images", log.Ctx{"err": err})
		return
	}

	for _, fp := range images {
		id, info, err := dbImageGet(d.db, fp, false, true)
		if err != nil {
			shared.Log.Error("Error loading image", log.Ctx{"err": err, "fp": fp})
			continue
		}

		if !info.AutoUpdate {
			continue
		}

		_, source, err :=
			dbImageSourceGet(d.db, id)
		if err != nil {
			continue
		}

		shared.Log.Debug("Processing image", log.Ctx{"fp": fp, "server": source.Server, "protocol": source.Protocol, "alias": source.Alias})

		hash, err := d.ImageDownload(nil, source.Server, source.Protocol, "", "", source.Alias, false, true)
		if hash == fp {
			shared.Log.Debug("Already up to date", log.Ctx{"fp": fp})
			continue
		} else if err != nil {
			shared.Log.Error("Failed to update the image", log.Ctx{"err": err, "fp": fp})
			continue
		}

		newId, _, err := dbImageGet(d.db, hash, false, true)
		if err != nil {
			shared.Log.Error("Error loading image", log.Ctx{"err": err, "fp": hash})
			continue
		}

		err = dbImageLastAccessUpdate(d.db, hash, info.LastUsedDate)
		if err != nil {
			shared.Log.Error("Error setting last use date", log.Ctx{"err": err, "fp": hash})
			continue
		}

		err = dbImageAliasesMove(d.db, id, newId)
		if err != nil {
			shared.Log.Error("Error moving aliases", log.Ctx{"err": err, "fp": hash})
			continue
		}

		err = doDeleteImage(d, fp)
		if err != nil {
			shared.Log.Error("Error deleting image", log.Ctx{"err": err, "fp": fp})
		}
	}

	shared.Debugf("Done updating images")
}

func pruneExpiredImages(d *Daemon) {
	shared.Debugf("Pruning expired images")

	// Get the list of expired images
	expiry := daemonConfig["images.remote_cache_expiry"].GetInt64()
	images, err := dbImagesGetExpired(d.db, expiry)
	if err != nil {
		shared.Log.Error("Unable to retrieve the list of expired images", log.Ctx{"err": err})
		return
	}

	// Delete them
	for _, fp := range images {
		if err := doDeleteImage(d, fp); err != nil {
			shared.Log.Error("Error deleting image", log.Ctx{"err": err, "fp": fp})
		}
	}

	shared.Debugf("Done pruning expired images")
}

func doDeleteImage(d *Daemon, fingerprint string) error {
	id, imgInfo, err := dbImageGet(d.db, fingerprint, false, false)
	if err != nil {
		return err
	}

	// get storage before deleting images/$fp because we need to
	// look at the path
	s, err := storageForImage(d, imgInfo)
	if err != nil {
		shared.Log.Error("error detecting image storage backend",
			log.Ctx{"fingerprint": imgInfo.Fingerprint, "err": err})
	} else {
		// Remove the image from storage backend
		if err = s.ImageDelete(imgInfo.Fingerprint); err != nil {
			shared.Log.Error("error deleting the image from storage backend",
				log.Ctx{"fingerprint": imgInfo.Fingerprint, "err": err})
		}
	}

	// Remove main image file
	fname := shared.VarPath("images", imgInfo.Fingerprint)
	if shared.PathExists(fname) {
		err = os.Remove(fname)
		if err != nil {
			shared.Debugf("Error deleting image file %s: %s", fname, err)
		}
	}

	// Remove the rootfs file
	fname = shared.VarPath("images", imgInfo.Fingerprint) + ".rootfs"
	if shared.PathExists(fname) {
		err = os.Remove(fname)
		if err != nil {
			shared.Debugf("Error deleting image file %s: %s", fname, err)
		}
	}

	// Remove the DB entry
	if err = dbImageDelete(d.db, id); err != nil {
		return err
	}

	return nil
}

func imageDelete(d *Daemon, r *http.Request) Response {
	fingerprint := mux.Vars(r)["fingerprint"]

	rmimg := func(op *operation) error {
		return doDeleteImage(d, fingerprint)
	}

	resources := map[string][]string{}
	resources["images"] = []string{fingerprint}

	op, err := operationCreate(operationClassTask, resources, nil, rmimg, nil, nil)
	if err != nil {
		return InternalError(err)
	}

	return OperationResponse(op)
}

func doImageGet(d *Daemon, fingerprint string, public bool) (*shared.ImageInfo, Response) {
	_, imgInfo, err := dbImageGet(d.db, fingerprint, public, false)
	if err != nil {
		return nil, SmartError(err)
	}

	return imgInfo, nil
}

func imageValidSecret(fingerprint string, secret string) bool {
	for _, op := range operations {
		if op.resources == nil {
			continue
		}

		opImages, ok := op.resources["images"]
		if !ok {
			continue
		}

		if !shared.StringInSlice(fingerprint, opImages) {
			continue
		}

		opSecret, ok := op.metadata["secret"]
		if !ok {
			continue
		}

		if opSecret == secret {
			// Token is single-use, so cancel it now
			op.Cancel()
			return true
		}
	}

	return false
}

func imageGet(d *Daemon, r *http.Request) Response {
	fingerprint := mux.Vars(r)["fingerprint"]
	public :=
		!d.isTrustedClient(r)
	secret := r.FormValue("secret")

	if public && imageValidSecret(fingerprint, secret) {
		public = false
	}

	info, response := doImageGet(d, fingerprint, public)
	if response != nil {
		return response
	}

	return SyncResponse(true, info)
}

type imagePutReq struct {
	Properties map[string]string `json:"properties"`
	Public     bool              `json:"public"`
	AutoUpdate bool              `json:"auto_update"`
}

func imagePut(d *Daemon, r *http.Request) Response {
	fingerprint := mux.Vars(r)["fingerprint"]

	req := imagePutReq{}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		return BadRequest(err)
	}

	id, info, err := dbImageGet(d.db, fingerprint, false, false)
	if err != nil {
		return SmartError(err)
	}

	err = dbImageUpdate(d.db, id, info.Filename, info.Size, req.Public, req.AutoUpdate, info.Architecture, info.CreationDate, info.ExpiryDate, req.Properties)
	if err != nil {
		return SmartError(err)
	}

	return EmptySyncResponse
}

var imageCmd = Command{name: "images/{fingerprint}", untrustedGet: true, get: imageGet, put: imagePut, delete: imageDelete}

type aliasPostReq struct {
	Name        string `json:"name"`
	Description string `json:"description"`
	Target      string `json:"target"`
}

type aliasPutReq struct {
	Description string `json:"description"`
	Target      string `json:"target"`
}

func aliasesPost(d *Daemon, r *http.Request) Response {
	req := aliasPostReq{}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		return BadRequest(err)
	}

	if req.Name == "" || req.Target == "" {
		return BadRequest(fmt.Errorf("name and target are required"))
	}

	// This is just to see if the alias name already exists.
	_, _, err := dbImageAliasGet(d.db, req.Name, true)
	if err == nil {
		return Conflict
	}

	id, _, err := dbImageGet(d.db, req.Target, false, false)
	if err != nil {
		return SmartError(err)
	}

	err = dbImageAliasAdd(d.db, req.Name, id, req.Description)
	if err != nil {
		return InternalError(err)
	}

	return EmptySyncResponse
}

func aliasesGet(d *Daemon, r *http.Request) Response {
	recursion := d.isRecursionRequest(r)

	q := "SELECT name FROM images_aliases"
	var name string
	inargs := []interface{}{}
	outfmt := []interface{}{name}
	results, err := dbQueryScan(d.db, q, inargs, outfmt)
	if err != nil {
		return BadRequest(err)
	}

	responseStr := []string{}
	responseMap := shared.ImageAliases{}
	for _, res := range results {
		name = res[0].(string)
		if !recursion {
			url := fmt.Sprintf("/%s/images/aliases/%s", shared.APIVersion, name)
			responseStr = append(responseStr, url)
		} else {
			_, alias, err := dbImageAliasGet(d.db, name, d.isTrustedClient(r))
			if err != nil {
				continue
			}
			responseMap = append(responseMap, alias)
		}
	}

	if !recursion {
		return SyncResponse(true, responseStr)
	}

	return SyncResponse(true, responseMap)
}

func aliasGet(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]

	_, alias, err := dbImageAliasGet(d.db, name, d.isTrustedClient(r))
	if err != nil {
		return SmartError(err)
	}

	return SyncResponse(true, alias)
}

func aliasDelete(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]
	_, _, err := dbImageAliasGet(d.db, name, true)
	if err != nil {
		return SmartError(err)
	}

	err = dbImageAliasDelete(d.db, name)
	if err != nil {
		return SmartError(err)
	}

	return EmptySyncResponse
}

func aliasPut(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]

	req := aliasPutReq{}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		return BadRequest(err)
	}

	if req.Target == "" {
		return BadRequest(fmt.Errorf("The target field is required"))
	}

	id, _, err := dbImageAliasGet(d.db, name, true)
	if err != nil {
		return SmartError(err)
	}

	imageId, _, err := dbImageGet(d.db,
		req.Target, false, false)
	if err != nil {
		return SmartError(err)
	}

	err = dbImageAliasUpdate(d.db, id, imageId, req.Description)
	if err != nil {
		return SmartError(err)
	}

	return EmptySyncResponse
}

func aliasPost(d *Daemon, r *http.Request) Response {
	name := mux.Vars(r)["name"]

	req := aliasPostReq{}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		return BadRequest(err)
	}

	id, _, err := dbImageAliasGet(d.db, name, true)
	if err != nil {
		return SmartError(err)
	}

	err = dbImageAliasRename(d.db, id, req.Name)
	if err != nil {
		return SmartError(err)
	}

	return EmptySyncResponse
}

func imageExport(d *Daemon, r *http.Request) Response {
	fingerprint := mux.Vars(r)["fingerprint"]

	public := !d.isTrustedClient(r)
	secret := r.FormValue("secret")

	if public && imageValidSecret(fingerprint, secret) {
		public = false
	}

	_, imgInfo, err := dbImageGet(d.db, fingerprint, public, false)
	if err != nil {
		return SmartError(err)
	}

	filename := imgInfo.Filename
	imagePath := shared.VarPath("images", imgInfo.Fingerprint)
	rootfsPath := imagePath + ".rootfs"
	if filename == "" {
		_, ext, err := detectCompression(imagePath)
		if err != nil {
			ext = ""
		}
		filename = fmt.Sprintf("%s%s", fingerprint, ext)
	}

	if shared.PathExists(rootfsPath) {
		files := make([]fileResponseEntry, 2)

		files[0].identifier = "metadata"
		files[0].path = imagePath
		files[0].filename = "meta-" + filename

		files[1].identifier = "rootfs"
		files[1].path = rootfsPath
		files[1].filename = filename

		return FileResponse(r, files, nil, false)
	}

	files := make([]fileResponseEntry, 1)
	files[0].identifier = filename
	files[0].path = imagePath
	files[0].filename = filename

	return FileResponse(r, files, nil, false)
}

func imageSecret(d *Daemon, r *http.Request) Response {
	fingerprint := mux.Vars(r)["fingerprint"]
	_, _, err := dbImageGet(d.db, fingerprint, false, false)
	if err != nil {
		return SmartError(err)
	}

	secret, err := shared.RandomCryptoString()
	if err != nil {
		return InternalError(err)
	}

	meta := shared.Jmap{}
	meta["secret"] =
		secret

	resources := map[string][]string{}
	resources["images"] = []string{fingerprint}

	op, err := operationCreate(operationClassToken, resources, meta, nil, nil, nil)
	if err != nil {
		return InternalError(err)
	}

	return OperationResponse(op)
}

var imagesExportCmd = Command{name: "images/{fingerprint}/export", untrustedGet: true, get: imageExport}
var imagesSecretCmd = Command{name: "images/{fingerprint}/secret", post: imageSecret}

var aliasesCmd = Command{name: "images/aliases", post: aliasesPost, get: aliasesGet}
var aliasCmd = Command{name: "images/aliases/{name:.*}", untrustedGet: true, get: aliasGet, delete: aliasDelete, put: aliasPut, post: aliasPost}

lxd-2.0.2/lxd/main.go

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"math/rand"
	"net"
	"net/http"
	"os"
	"os/exec"
	"os/signal"
	"runtime/pprof"
	"strconv"
	"strings"
	"sync"
	"syscall"
	"time"

	"golang.org/x/crypto/ssh/terminal"

	"github.com/lxc/lxd"
	"github.com/lxc/lxd/shared"
	"github.com/lxc/lxd/shared/gnuflag"
	"github.com/lxc/lxd/shared/logging"
)

// Global arguments
var argAuto = gnuflag.Bool("auto", false, "")
var argCPUProfile = gnuflag.String("cpuprofile", "", "")
var argDebug = gnuflag.Bool("debug", false, "")
var argGroup = gnuflag.String("group", "", "")
var argHelp = gnuflag.Bool("help", false, "")
var argLogfile = gnuflag.String("logfile", "", "")
var argMemProfile = gnuflag.String("memprofile", "", "")
var argNetworkAddress = gnuflag.String("network-address", "", "")
var argNetworkPort = gnuflag.Int("network-port", -1, "")
var argPrintGoroutinesEvery = gnuflag.Int("print-goroutines-every", -1, "")
var argStorageBackend = gnuflag.String("storage-backend", "", "")
var argStorageCreateDevice = gnuflag.String("storage-create-device", "", "")
var argStorageCreateLoop = gnuflag.Int("storage-create-loop", -1, "")
var argStoragePool = gnuflag.String("storage-pool", "", "")
var argSyslog = gnuflag.Bool("syslog", false, "")
var argTimeout = gnuflag.Int("timeout", -1, "")
var argTrustPassword = gnuflag.String("trust-password", "", "")
var argVerbose = gnuflag.Bool("verbose", false, "")
var argVersion = gnuflag.Bool("version", false, "")

// Global variables
var debug bool
var verbose bool

var execPath string

func init() {
	rand.Seed(time.Now().UTC().UnixNano())

	absPath, err := os.Readlink("/proc/self/exe")
	if err != nil {
		absPath = "bad-exec-path"
	}
	execPath = absPath
}

func main() {
	if err := run(); err != nil {
		fmt.Fprintf(os.Stderr, "error: %v\n", err)
		os.Exit(1)
	}
}

func run() error {
	// Our massive custom usage
	gnuflag.Usage = func() {
		fmt.Printf("Usage: lxd [command] [options]\n")

		fmt.Printf("\nCommands:\n")
		fmt.Printf("    activateifneeded\n")
		fmt.Printf("        Check if LXD should be started (at boot) and if so, spawns it through socket activation\n")
		fmt.Printf("    daemon [--group=lxd] (default command)\n")
		fmt.Printf("        Start the main LXD daemon\n")
		fmt.Printf("    init [--auto] [--network-address=IP] [--network-port=8443] [--storage-backend=dir]\n")
		fmt.Printf("         [--storage-create-device=DEVICE] [--storage-create-loop=SIZE] [--storage-pool=POOL]\n")
		fmt.Printf("         [--trust-password=]\n")
		fmt.Printf("        Setup storage and networking\n")
		fmt.Printf("    ready\n")
		fmt.Printf("        Tells LXD that any setup-mode configuration has been done and that it can start containers.\n")
		fmt.Printf("    shutdown [--timeout=60]\n")
		fmt.Printf("        Perform a clean shutdown of LXD and all running containers\n")
		fmt.Printf("    waitready [--timeout=15]\n")
		fmt.Printf("        Wait until LXD is ready to handle requests\n")

		fmt.Printf("\n\nCommon options:\n")
		fmt.Printf("    --debug\n")
		fmt.Printf("        Enable debug mode\n")
		fmt.Printf("    --help\n")
		fmt.Printf("        Print this help message\n")
		fmt.Printf("    --logfile FILE\n")
		fmt.Printf("        Logfile to log to (e.g., /var/log/lxd/lxd.log)\n")
		fmt.Printf("    --syslog\n")
		fmt.Printf("        Enable syslog logging\n")
		fmt.Printf("    --verbose\n")
		fmt.Printf("        Enable verbose mode\n")
		fmt.Printf("    --version\n")
		fmt.Printf("        Print LXD's version number and exit\n")

		fmt.Printf("\nDaemon options:\n")
		fmt.Printf("    --group GROUP\n")
		fmt.Printf("        Group which owns the shared socket\n")

		fmt.Printf("\nDaemon debug options:\n")
		fmt.Printf("    --cpuprofile FILE\n")
		fmt.Printf("        Enable cpu profiling into the specified file\n")
		fmt.Printf("    --memprofile FILE\n")
		fmt.Printf("        Enable memory profiling into the specified file\n")
		fmt.Printf("    --print-goroutines-every SECONDS\n")
		fmt.Printf("        For debugging, print a complete stack trace every n seconds\n")

		fmt.Printf("\nInit options:\n")
		fmt.Printf("    --auto\n")
		fmt.Printf("        Automatic (non-interactive) mode\n")

		fmt.Printf("\nInit options for non-interactive mode (--auto):\n")
		fmt.Printf("    --network-address ADDRESS\n")
		fmt.Printf("        Address to bind LXD to (default: none)\n")
		fmt.Printf("    --network-port PORT\n")
		fmt.Printf("        Port to bind LXD to (default: 8443)\n")
		fmt.Printf("    --storage-backend NAME\n")
		fmt.Printf("        Storage backend to use (zfs or dir, default: dir)\n")
		fmt.Printf("    --storage-create-device DEVICE\n")
		fmt.Printf("        Setup device based storage using DEVICE\n")
		fmt.Printf("    --storage-create-loop SIZE\n")
		fmt.Printf("        Setup loop based storage with SIZE in GB\n")
		fmt.Printf("    --storage-pool NAME\n")
		fmt.Printf("        Storage pool to use or create\n")
		fmt.Printf("    --trust-password PASSWORD\n")
		fmt.Printf("        Password required to add new clients\n")

		fmt.Printf("\nShutdown options:\n")
		fmt.Printf("    --timeout SECONDS\n")
		fmt.Printf("        How long to wait before failing\n")

		fmt.Printf("\nWaitready options:\n")
		fmt.Printf("    --timeout SECONDS\n")
		fmt.Printf("        How long to wait before failing\n")

		fmt.Printf("\n\nInternal commands (don't call these directly):\n")
		fmt.Printf("    forkexec\n")
		fmt.Printf("        Execute a command in a container\n")
		fmt.Printf("    forkgetnet\n")
		fmt.Printf("        Get container network information\n")
		fmt.Printf("    forkgetfile\n")
		fmt.Printf("        Grab a file from a running container\n")
		fmt.Printf("    forkmigrate\n")
		fmt.Printf("        Restore a container after migration\n")
		fmt.Printf("    forkputfile\n")
		fmt.Printf("        Push a file to a running container\n")
		fmt.Printf("    forkstart\n")
		fmt.Printf("        Start a container\n")
		fmt.Printf("    callhook\n")
		fmt.Printf("        Call a container hook\n")
		fmt.Printf("    netcat\n")
		fmt.Printf("        Mirror a unix socket to stdin/stdout")
	}

	// Parse the arguments
	gnuflag.Parse(true)

	// Set the global variables
	debug = *argDebug
	verbose = *argVerbose

	if *argHelp {
		// The user asked for help via --help, so we shouldn't print to
		// stderr.
		gnuflag.SetOut(os.Stdout)
		gnuflag.Usage()
		return nil
	}

	// Deal with --version right here
	if *argVersion {
		fmt.Println(shared.Version)
		return nil
	}

	if len(shared.VarPath("unix.sock")) > 107 {
		return fmt.Errorf("LXD_DIR is too long, must be < %d", 107-len("unix.sock"))
	}

	// Configure logging
	syslog := ""
	if *argSyslog {
		syslog = "lxd"
	}

	handler := eventsHandler{}
	var err error
	shared.Log, err = logging.GetLogger(syslog, *argLogfile, *argVerbose, *argDebug, handler)
	if err != nil {
		fmt.Printf("%s", err)
		return nil
	}

	// Process sub-commands
	if len(os.Args) > 1 {
		// "forkputfile", "forkgetfile", "forkmount" and "forkumount" are handled specially in nsexec.go
		// "forkgetnet" is partially handled in nsexec.go (setns)
		switch os.Args[1] {
		// Main commands
		case "activateifneeded":
			return cmdActivateIfNeeded()
		case "daemon":
			return cmdDaemon()
		case "callhook":
			return cmdCallHook(os.Args[1:])
		case "init":
			return cmdInit()
		case "ready":
			return cmdReady()
		case "shutdown":
			return cmdShutdown()
		case "waitready":
			return cmdWaitReady()

		// Internal commands
		case "forkgetnet":
			return printnet()
		case "forkmigrate":
			return MigrateContainer(os.Args[1:])
		case "forkstart":
			return startContainer(os.Args[1:])
		case "forkexec":
			ret, err := execContainer(os.Args[1:])
			if err != nil {
				fmt.Fprintf(os.Stderr, "error: %v\n", err)
			}
			os.Exit(ret)
		case "netcat":
			return Netcat(os.Args[1:])
		}
	}

	// Fail if some other command is passed
	if gnuflag.NArg() > 0 {
		gnuflag.Usage()
		return fmt.Errorf("Unknown arguments")
	}
	return cmdDaemon()
}

func cmdCallHook(args []string) error {
	if len(args) < 4 {
		return fmt.Errorf("Invalid arguments")
	}

	path := args[1]
	id := args[2]
	state := args[3]
	target := ""

	err := os.Setenv("LXD_DIR", path)
	if err != nil {
		return err
	}

	c, err := lxd.NewClient(&lxd.DefaultConfig, "local")
	if err != nil {
		return err
	}

	url := fmt.Sprintf("%s/internal/containers/%s/on%s", c.BaseURL, id, state)

	if state == "stop" {
		target = os.Getenv("LXC_TARGET")
		if target == "" {
			target = "unknown"
		}
		url = fmt.Sprintf("%s?target=%s", url, target)
	}

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}

	hook := make(chan error, 1)
	go func() {
		raw, err := c.Http.Do(req)
		if err != nil {
			hook <- err
			return
		}

		_, err = lxd.HoistResponse(raw, lxd.Sync)
		if err != nil {
			hook <- err
			return
		}

		hook <- nil
	}()

	select {
	case err := <-hook:
		if err != nil {
			return err
		}
		break
	case <-time.After(30 * time.Second):
		return fmt.Errorf("Hook didn't finish within 30s")
	}

	if target == "reboot" {
		return fmt.Errorf("Reboot must be handled by LXD.")
	}

	return nil
}

func cmdDaemon() error {
	if *argCPUProfile != "" {
		f, err := os.Create(*argCPUProfile)
		if err != nil {
			fmt.Printf("Error opening cpu profile file: %s\n", err)
			return nil
		}
		pprof.StartCPUProfile(f)
		defer pprof.StopCPUProfile()
	}

	if *argMemProfile != "" {
		go memProfiler(*argMemProfile)
	}

	neededPrograms := []string{"setfacl", "rsync", "tar", "xz"}
	for _, p := range neededPrograms {
		_, err := exec.LookPath(p)
		if err != nil {
			return err
		}
	}

	if *argPrintGoroutinesEvery > 0 {
		go func() {
			for {
				time.Sleep(time.Duration(*argPrintGoroutinesEvery) * time.Second)
				shared.PrintStack()
			}
		}()
	}

	d := &Daemon{
		group:     *argGroup,
		SetupMode: shared.PathExists(shared.VarPath(".setup_mode"))}
	err := d.Init()
	if err != nil {
		if d != nil && d.db != nil {
			d.db.Close()
		}
		return err
	}

	var ret error
	var wg sync.WaitGroup
	wg.Add(1)

	go func() {
		ch := make(chan os.Signal)
		signal.Notify(ch, syscall.SIGPWR)
		sig := <-ch

		shared.Log.Info(
			fmt.Sprintf("Received '%s signal', shutting down containers.", sig))

		containersShutdown(d)

		ret = d.Stop()
		wg.Done()
	}()

	go func() {
		<-d.shutdownChan

		shared.Log.Info(
			fmt.Sprintf("Asked to shutdown by API, shutting down containers."))

		containersShutdown(d)

		ret = d.Stop()
		wg.Done()
	}()

	go func() {
		ch := make(chan os.Signal)
		signal.Notify(ch, syscall.SIGINT)
		signal.Notify(ch, syscall.SIGQUIT)
		signal.Notify(ch, syscall.SIGTERM)
		sig := <-ch
		shared.Log.Info(fmt.Sprintf("Received '%s signal', exiting.", sig))

		ret = d.Stop()
		wg.Done()
	}()

	wg.Wait()

	return ret
}

func cmdReady() error {
	c, err := lxd.NewClient(&lxd.DefaultConfig, "local")
	if err != nil {
		return err
	}

	req, err := http.NewRequest("PUT", c.BaseURL+"/internal/ready", nil)
	if err != nil {
		return err
	}

	raw, err := c.Http.Do(req)
	if err != nil {
		return err
	}

	_, err = lxd.HoistResponse(raw, lxd.Sync)
	if err != nil {
		return err
	}

	return nil
}

func cmdShutdown() error {
	var timeout int

	if *argTimeout == -1 {
		timeout = 60
	} else {
		timeout = *argTimeout
	}

	c, err := lxd.NewClient(&lxd.DefaultConfig, "local")
	if err != nil {
		return err
	}

	req, err := http.NewRequest("PUT", c.BaseURL+"/internal/shutdown", nil)
	if err != nil {
		return err
	}

	_, err = c.Http.Do(req)
	if err != nil {
		return err
	}

	monitor := make(chan error, 1)
	go func() {
		monitor <- c.Monitor(nil, func(m interface{}) {})
	}()

	select {
	case <-monitor:
		break
	case <-time.After(time.Second * time.Duration(timeout)):
		return fmt.Errorf("LXD still running after %ds timeout.", timeout)
	}

	return nil
}

func cmdActivateIfNeeded() error {
	// Don't start a full daemon, we just need DB access
	d := &Daemon{
		imagesDownloading:     map[string]chan bool{},
		imagesDownloadingLock: sync.RWMutex{},
		lxcpath:               shared.VarPath("containers"),
	}

	err := initializeDbObject(d, shared.VarPath("lxd.db"))
	if err != nil {
		return err
	}

	/* Load all config values from the database */
	err = daemonConfigInit(d.db)
	if err != nil {
		return err
	}

	// Look for network socket
	value := daemonConfig["core.https_address"].Get()
	if value !=
		"" {
		shared.Debugf("Daemon has core.https_address set, activating...")
		_, err := lxd.NewClient(&lxd.DefaultConfig, "local")
		return err
	}

	// Look for auto-started or previously started containers
	d.IdmapSet, err = shared.DefaultIdmapSet()
	if err != nil {
		return err
	}

	result, err := dbContainersList(d.db, cTypeRegular)
	if err != nil {
		return err
	}

	for _, name := range result {
		c, err := containerLoadByName(d, name)
		if err != nil {
			return err
		}

		config := c.ExpandedConfig()
		lastState := config["volatile.last_state.power"]
		autoStart := config["boot.autostart"]

		if c.IsRunning() {
			shared.Debugf("Daemon has running containers, activating...")
			_, err := lxd.NewClient(&lxd.DefaultConfig, "local")
			return err
		}

		if lastState == "RUNNING" || lastState == "Running" || shared.IsTrue(autoStart) {
			shared.Debugf("Daemon has auto-started containers, activating...")
			_, err := lxd.NewClient(&lxd.DefaultConfig, "local")
			return err
		}
	}

	shared.Debugf("No need to start the daemon now.")
	return nil
}

func cmdWaitReady() error {
	var timeout int

	if *argTimeout == -1 {
		timeout = 15
	} else {
		timeout = *argTimeout
	}

	finger := make(chan error, 1)
	go func() {
		for {
			c, err := lxd.NewClient(&lxd.DefaultConfig, "local")
			if err != nil {
				time.Sleep(500 * time.Millisecond)
				continue
			}

			req, err := http.NewRequest("GET", c.BaseURL+"/internal/ready", nil)
			if err != nil {
				time.Sleep(500 * time.Millisecond)
				continue
			}

			raw, err := c.Http.Do(req)
			if err != nil {
				time.Sleep(500 * time.Millisecond)
				continue
			}

			_, err = lxd.HoistResponse(raw, lxd.Sync)
			if err != nil {
				time.Sleep(500 * time.Millisecond)
				continue
			}

			finger <- nil
			return
		}
	}()

	select {
	case <-finger:
		break
	case <-time.After(time.Second * time.Duration(timeout)):
		return fmt.Errorf("LXD still not running after %ds timeout.", timeout)
	}

	return nil
}

func cmdInit() error {
	var defaultPrivileged int // controls whether we set security.privileged=true
	var storageBackend string // dir or zfs
	var storageMode string    // existing, loop or device
	var storageLoopSize int   // Size in GB
	var storageDevice string  // Path
	var storagePool string    // pool name
	var networkAddress string // Address
	var networkPort int       // Port
	var trustPassword string  // Trust password

	// Detect userns
	defaultPrivileged = -1
	runningInUserns = shared.RunningInUserNS()

	// Only root should run this
	if os.Geteuid() != 0 {
		return fmt.Errorf("This must be run as root")
	}

	backendsAvailable := []string{"dir"}
	backendsSupported := []string{"dir", "zfs"}

	// Detect zfs
	out, err := exec.LookPath("zfs")
	if err == nil && len(out) != 0 {
		backendsAvailable = append(backendsAvailable, "zfs")
	}

	reader := bufio.NewReader(os.Stdin)

	askBool := func(question string) bool {
		for {
			fmt.Printf(question)
			input, _ := reader.ReadString('\n')
			input = strings.TrimSuffix(input, "\n")
			if shared.StringInSlice(strings.ToLower(input), []string{"yes", "y"}) {
				return true
			} else if shared.StringInSlice(strings.ToLower(input), []string{"no", "n"}) {
				return false
			}

			fmt.Printf("Invalid input, try again.\n\n")
		}
	}

	askChoice := func(question string, choices []string) string {
		for {
			fmt.Printf(question)
			input, _ := reader.ReadString('\n')
			input = strings.TrimSuffix(input, "\n")
			if shared.StringInSlice(input, choices) {
				return input
			}

			fmt.Printf("Invalid input, try again.\n\n")
		}
	}

	askInt := func(question string, min int, max int) int {
		for {
			fmt.Printf(question)
			input, _ := reader.ReadString('\n')
			input = strings.TrimSuffix(input, "\n")
			intInput, err := strconv.Atoi(input)

			if err == nil && (min == -1 || intInput >= min) && (max == -1 || intInput <= max) {
				return intInput
			}

			fmt.Printf("Invalid input, try again.\n\n")
		}
	}

	askString := func(question string) string {
		for {
			fmt.Printf(question)
			input, _ := reader.ReadString('\n')
			input = strings.TrimSuffix(input, "\n")
			if len(input) != 0 {
				return input
			}

			fmt.Printf("Invalid input, try again.\n\n")
		}
	}

	askPassword := func(question string) string {
		for {
			fmt.Printf(question)
			pwd, _ := terminal.ReadPassword(0)
			fmt.Printf("\n")
			inFirst :=
				string(pwd)
			inFirst = strings.TrimSuffix(inFirst, "\n")

			fmt.Printf("Again: ")
			pwd, _ = terminal.ReadPassword(0)
			fmt.Printf("\n")
			inSecond := string(pwd)
			inSecond = strings.TrimSuffix(inSecond, "\n")

			if inFirst == inSecond {
				return inFirst
			}

			fmt.Printf("Invalid input, try again.\n\n")
		}
	}

	// Confirm that LXD is online
	c, err := lxd.NewClient(&lxd.DefaultConfig, "local")
	if err != nil {
		return fmt.Errorf("Unable to talk to LXD: %s", err)
	}

	// Check that we have no containers or images in the store
	containers, err := c.ListContainers()
	if err != nil {
		return fmt.Errorf("Unable to list the LXD containers: %s", err)
	}

	images, err := c.ListImages()
	if err != nil {
		return fmt.Errorf("Unable to list the LXD images: %s", err)
	}

	if len(containers) > 0 || len(images) > 0 {
		return fmt.Errorf("You have existing containers or images. lxd init requires an empty LXD.")
	}

	if *argAuto {
		if *argStorageBackend == "" {
			*argStorageBackend = "dir"
		}

		// Do a bunch of sanity checks
		if !shared.StringInSlice(*argStorageBackend, backendsSupported) {
			return fmt.Errorf("The requested backend '%s' isn't supported by lxd init.", *argStorageBackend)
		}

		if !shared.StringInSlice(*argStorageBackend, backendsAvailable) {
			return fmt.Errorf("The requested backend '%s' isn't available on your system (missing tools).", *argStorageBackend)
		}

		if *argStorageBackend == "dir" {
			if *argStorageCreateLoop != -1 || *argStorageCreateDevice != "" || *argStoragePool != "" {
				return fmt.Errorf("None of --storage-pool, --storage-create-device or --storage-create-loop may be used with the 'dir' backend.")
			}
		}

		if *argStorageBackend == "zfs" {
			if *argStorageCreateLoop != -1 && *argStorageCreateDevice != "" {
				return fmt.Errorf("Only one of --storage-create-device or --storage-create-loop can be specified with the 'zfs' backend.")
			}

			if *argStoragePool == "" {
				return fmt.Errorf("--storage-pool must be specified with the 'zfs' backend.")
			}
		}

		if *argNetworkAddress == "" {
			if *argNetworkPort != -1 {
				return fmt.Errorf("--network-port cannot be used without --network-address.")
			}
			if *argTrustPassword != "" {
				return fmt.Errorf("--trust-password cannot be used without --network-address.")
			}
		}

		// Set the local variables
		if *argStorageCreateDevice != "" {
			storageMode = "device"
		} else if *argStorageCreateLoop != -1 {
			storageMode = "loop"
		} else {
			storageMode = "existing"
		}

		storageBackend = *argStorageBackend
		storageLoopSize = *argStorageCreateLoop
		storageDevice = *argStorageCreateDevice
		storagePool = *argStoragePool
		networkAddress = *argNetworkAddress
		networkPort = *argNetworkPort
		trustPassword = *argTrustPassword
	} else {
		if *argStorageBackend != "" || *argStorageCreateDevice != "" || *argStorageCreateLoop != -1 || *argStoragePool != "" || *argNetworkAddress != "" || *argNetworkPort != -1 || *argTrustPassword != "" {
			return fmt.Errorf("Init configuration is only valid with --auto")
		}

		storageBackend = askChoice("Name of the storage backend to use (dir or zfs): ", backendsSupported)

		if !shared.StringInSlice(storageBackend, backendsSupported) {
			return fmt.Errorf("The requested backend '%s' isn't supported by lxd init.", storageBackend)
		}

		if !shared.StringInSlice(storageBackend, backendsAvailable) {
			return fmt.Errorf("The requested backend '%s' isn't available on your system (missing tools).", storageBackend)
		}

		if storageBackend == "zfs" {
			if askBool("Create a new ZFS pool (yes/no)? ") {
				storagePool = askString("Name of the new ZFS pool: ")
				if askBool("Would you like to use an existing block device (yes/no)? ") {
					storageDevice = askString("Path to the existing block device: ")
					storageMode = "device"
				} else {
					storageLoopSize = askInt("Size in GB of the new loop device (1GB minimum): ", 1, -1)
					storageMode = "loop"
				}
			} else {
				storagePool = askString("Name of the existing ZFS pool or dataset: ")
				storageMode = "existing"
			}
		}

		if runningInUserns {
			fmt.Printf(`
We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

`)
			if askBool("Would you like to have your containers share their parent's allocation (yes/no)? ") {
				defaultPrivileged = 1
			} else {
				defaultPrivileged = 0
			}
		}

		if askBool("Would you like LXD to be available over the network (yes/no)? ") {
			networkAddress = askString("Address to bind LXD to (not including port): ")
			networkPort = askInt("Port to bind LXD to (8443 recommended): ", 1, 65535)
			trustPassword = askPassword("Trust password for new clients: ")
		}
	}

	if !shared.StringInSlice(storageBackend, []string{"dir", "zfs"}) {
		return fmt.Errorf("Invalid storage backend: %s", storageBackend)
	}

	// Unset all storage keys, core.https_address and core.trust_password
	for _, key := range []string{"core.https_address", "core.trust_password"} {
		_, err = c.SetServerConfig(key, "")
		if err != nil {
			return err
		}
	}

	// Destroy any existing loop device
	for _, file := range []string{"zfs.img"} {
		os.Remove(shared.VarPath(file))
	}

	if storageBackend == "zfs" {
		_ = exec.Command("modprobe", "zfs").Run()

		if storageMode == "loop" {
			storageDevice = shared.VarPath("zfs.img")
			f, err := os.Create(storageDevice)
			if err != nil {
				return fmt.Errorf("Failed to open %s: %s", storageDevice, err)
			}

			err = f.Chmod(0600)
			if err != nil {
				return fmt.Errorf("Failed to chmod %s: %s", storageDevice, err)
			}

			err = f.Truncate(int64(storageLoopSize * 1024 * 1024 * 1024))
			if err != nil {
				return fmt.Errorf("Failed to create sparse file %s: %s", storageDevice, err)
			}

			err = f.Close()
			if err != nil {
				return fmt.Errorf("Failed to close %s: %s", storageDevice, err)
			}
		}

		if shared.StringInSlice(storageMode, []string{"loop", "device"}) {
			output, err := exec.Command(
"zpool", "create", storagePool, storageDevice, "-f", "-m", "none").CombinedOutput() if err != nil { return fmt.Errorf("Failed to create the ZFS pool: %s", output) } } // Configure LXD to use the pool _, err = c.SetServerConfig("storage.zfs_pool_name", storagePool) if err != nil { return err } } if defaultPrivileged == 0 { err = c.SetProfileConfigItem("default", "security.privileged", "") if err != nil { return err } } else if defaultPrivileged == 1 { err = c.SetProfileConfigItem("default", "security.privileged", "true") if err != nil { return err } } if networkAddress != "" { _, err = c.SetServerConfig("core.https_address", fmt.Sprintf("%s:%d", networkAddress, networkPort)) if err != nil { return err } if trustPassword != "" { _, err = c.SetServerConfig("core.trust_password", trustPassword) if err != nil { return err } } } fmt.Printf("LXD has been successfully configured.\n") return nil } func printnet() error { networks := map[string]shared.ContainerStateNetwork{} interfaces, err := net.Interfaces() if err != nil { return err } stats := map[string][]int64{} content, err := ioutil.ReadFile("/proc/net/dev") if err == nil { for _, line := range strings.Split(string(content), "\n") { fields := strings.Fields(line) if len(fields) != 17 { continue } rxBytes, err := strconv.ParseInt(fields[1], 10, 64) if err != nil { continue } rxPackets, err := strconv.ParseInt(fields[2], 10, 64) if err != nil { continue } txBytes, err := strconv.ParseInt(fields[9], 10, 64) if err != nil { continue } txPackets, err := strconv.ParseInt(fields[10], 10, 64) if err != nil { continue } intName := strings.TrimSuffix(fields[0], ":") stats[intName] = []int64{rxBytes, rxPackets, txBytes, txPackets} } } for _, netIf := range interfaces { netState := "down" netType := "unknown" if netIf.Flags&net.FlagBroadcast > 0 { netType = "broadcast" } if netIf.Flags&net.FlagPointToPoint > 0 { netType = "point-to-point" } if netIf.Flags&net.FlagLoopback > 0 { netType = "loopback" } if netIf.Flags&net.FlagUp > 0 { 
netState = "up" } network := shared.ContainerStateNetwork{ Addresses: []shared.ContainerStateNetworkAddress{}, Counters: shared.ContainerStateNetworkCounters{}, Hwaddr: netIf.HardwareAddr.String(), Mtu: netIf.MTU, State: netState, Type: netType, } addrs, err := netIf.Addrs() if err == nil { for _, addr := range addrs { fields := strings.SplitN(addr.String(), "/", 2) if len(fields) != 2 { continue } family := "inet" if strings.Contains(fields[0], ":") { family = "inet6" } scope := "global" if strings.HasPrefix(fields[0], "127") { scope = "local" } if fields[0] == "::1" { scope = "local" } if strings.HasPrefix(fields[0], "169.254") { scope = "link" } if strings.HasPrefix(fields[0], "fe80:") { scope = "link" } address := shared.ContainerStateNetworkAddress{} address.Family = family address.Address = fields[0] address.Netmask = fields[1] address.Scope = scope network.Addresses = append(network.Addresses, address) } } counters, ok := stats[netIf.Name] if ok { network.Counters.BytesReceived = counters[0] network.Counters.PacketsReceived = counters[1] network.Counters.BytesSent = counters[2] network.Counters.PacketsSent = counters[3] } networks[netIf.Name] = network } buf, err := json.Marshal(networks) if err != nil { return err } fmt.Printf("%s\n", buf) return nil } lxd-2.0.2/lxd/main_test.go000066400000000000000000000024421272140510300154000ustar00rootroot00000000000000package main import ( "io/ioutil" "os" "sync" "testing" "github.com/stretchr/testify/require" "github.com/stretchr/testify/suite" ) func mockStartDaemon() (*Daemon, error) { d := &Daemon{ MockMode: true, imagesDownloading: map[string]chan bool{}, imagesDownloadingLock: sync.RWMutex{}, } if err := d.Init(); err != nil { return nil, err } // Call this after Init so we have a log object. 
storageConfig := make(map[string]interface{}) d.Storage = &storageLogWrapper{w: &storageMock{d: d}} if _, err := d.Storage.Init(storageConfig); err != nil { return nil, err } return d, nil } type lxdTestSuite struct { suite.Suite d *Daemon Req *require.Assertions tmpdir string } func (suite *lxdTestSuite) SetupSuite() { tmpdir, err := ioutil.TempDir("", "lxd_testrun_") if err != nil { os.Exit(1) } suite.tmpdir = tmpdir if err := os.Setenv("LXD_DIR", suite.tmpdir); err != nil { os.Exit(1) } suite.d, err = mockStartDaemon() if err != nil { os.Exit(1) } } func (suite *lxdTestSuite) TearDownSuite() { suite.d.Stop() err := os.RemoveAll(suite.tmpdir) if err != nil { os.Exit(1) } } func (suite *lxdTestSuite) SetupTest() { suite.Req = require.New(suite.T()) } func TestLxdTestSuite(t *testing.T) { suite.Run(t, new(lxdTestSuite)) } lxd-2.0.2/lxd/migrate.go000066400000000000000000000411741272140510300150520ustar00rootroot00000000000000// Package migration provides the primitives for migration in LXD. // // See https://github.com/lxc/lxd/blob/master/specs/migration.md for a complete // description. package main import ( "bufio" "fmt" "io/ioutil" "net/http" "net/url" "os" "os/exec" "path" "path/filepath" "strings" "sync" "time" "github.com/golang/protobuf/proto" "github.com/gorilla/websocket" "gopkg.in/lxc/go-lxc.v2" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" ) type migrationFields struct { live bool controlSecret string controlConn *websocket.Conn controlLock sync.Mutex criuSecret string criuConn *websocket.Conn fsSecret string fsConn *websocket.Conn container container } func (c *migrationFields) send(m proto.Message) error { /* gorilla websocket doesn't allow concurrent writes, and * panic()s if it sees them (which is reasonable). If e.g. 
we * happen to fail, get scheduled, start our write, then get * unscheduled before the write finishes while a new thread is * receiving an error from the other side (due to our previous * close), we can engage in these concurrent writes, which * causes the whole daemon to panic. * * Instead, let's lock sends to the controlConn so that we only ever * write one message at a time. */ c.controlLock.Lock() defer c.controlLock.Unlock() w, err := c.controlConn.NextWriter(websocket.BinaryMessage) if err != nil { return err } defer w.Close() data, err := proto.Marshal(m) if err != nil { return err } return shared.WriteAll(w, data) } func findCriu(host string) error { _, err := exec.LookPath("criu") if err != nil { return fmt.Errorf("CRIU is required for live migration but its binary couldn't be found on the %s server. Is it installed in LXD's path?", host) } return nil } func (c *migrationFields) recv(m proto.Message) error { mt, r, err := c.controlConn.NextReader() if err != nil { return err } if mt != websocket.BinaryMessage { return fmt.Errorf("Only binary messages allowed") } buf, err := ioutil.ReadAll(r) if err != nil { return err } return proto.Unmarshal(buf, m) } func (c *migrationFields) disconnect() { closeMsg := websocket.FormatCloseMessage(websocket.CloseNormalClosure, "") c.controlLock.Lock() if c.controlConn != nil { c.controlConn.WriteMessage(websocket.CloseMessage, closeMsg) c.controlConn = nil /* don't close twice */ } c.controlLock.Unlock() /* Below we just Close(), which doesn't actually write to the * websocket, it just closes the underlying connection. If e.g. there * is still a filesystem transfer going on, but the other side has run * out of disk space, writing an actual CloseMessage here will cause * gorilla websocket to panic. Instead, we just force close this * connection, since we report the error over the control channel * anyway. 
*/ if c.fsConn != nil { c.fsConn.Close() } if c.criuConn != nil { c.criuConn.Close() } } func (c *migrationFields) sendControl(err error) { message := "" if err != nil { message = err.Error() } msg := MigrationControl{ Success: proto.Bool(err == nil), Message: proto.String(message), } c.send(&msg) if err != nil { c.disconnect() } } func (c *migrationFields) controlChannel() <-chan MigrationControl { ch := make(chan MigrationControl) go func() { msg := MigrationControl{} err := c.recv(&msg) if err != nil { shared.Debugf("Got error reading migration control socket %s", err) close(ch) return } ch <- msg }() return ch } func CollectCRIULogFile(c container, imagesDir string, function string, method string) error { t := time.Now().Format(time.RFC3339) newPath := shared.LogPath(c.Name(), fmt.Sprintf("%s_%s_%s.log", function, method, t)) return shared.FileCopy(filepath.Join(imagesDir, fmt.Sprintf("%s.log", method)), newPath) } func GetCRIULogErrors(imagesDir string, method string) (string, error) { f, err := os.Open(path.Join(imagesDir, fmt.Sprintf("%s.log", method))) if err != nil { return "", err } defer f.Close() scanner := bufio.NewScanner(f) ret := []string{} for scanner.Scan() { line := scanner.Text() if strings.Contains(line, "Error") { ret = append(ret, scanner.Text()) } } return strings.Join(ret, "\n"), nil } type migrationSourceWs struct { migrationFields allConnected chan bool } func NewMigrationSource(c container) (*migrationSourceWs, error) { ret := migrationSourceWs{migrationFields{container: c}, make(chan bool, 1)} var err error ret.controlSecret, err = shared.RandomCryptoString() if err != nil { return nil, err } ret.fsSecret, err = shared.RandomCryptoString() if err != nil { return nil, err } if c.IsRunning() { if err := findCriu("source"); err != nil { return nil, err } ret.live = true ret.criuSecret, err = shared.RandomCryptoString() if err != nil { return nil, err } } return &ret, nil } func (s *migrationSourceWs) Metadata() interface{} { secrets := 
shared.Jmap{ "control": s.controlSecret, "fs": s.fsSecret, } if s.criuSecret != "" { secrets["criu"] = s.criuSecret } return secrets } func (s *migrationSourceWs) Connect(op *operation, r *http.Request, w http.ResponseWriter) error { secret := r.FormValue("secret") if secret == "" { return fmt.Errorf("missing secret") } var conn **websocket.Conn switch secret { case s.controlSecret: conn = &s.controlConn case s.criuSecret: conn = &s.criuConn case s.fsSecret: conn = &s.fsConn default: /* If we didn't find the right secret, the user provided a bad one, * which 403, not 404, since this operation actually exists */ return os.ErrPermission } c, err := shared.WebsocketUpgrader.Upgrade(w, r, nil) if err != nil { return err } *conn = c if s.controlConn != nil && (!s.live || s.criuConn != nil) && s.fsConn != nil { s.allConnected <- true } return nil } func (s *migrationSourceWs) Do(op *operation) error { <-s.allConnected criuType := CRIUType_CRIU_RSYNC.Enum() if !s.live { criuType = nil err := s.container.StorageStart() if err != nil { return err } defer s.container.StorageStop() } idmaps := make([]*IDMapType, 0) idmapset := s.container.IdmapSet() if idmapset != nil { for _, ctnIdmap := range idmapset.Idmap { idmap := IDMapType{ Isuid: proto.Bool(ctnIdmap.Isuid), Isgid: proto.Bool(ctnIdmap.Isgid), Hostid: proto.Int(ctnIdmap.Hostid), Nsid: proto.Int(ctnIdmap.Nsid), Maprange: proto.Int(ctnIdmap.Maprange), } idmaps = append(idmaps, &idmap) } } driver, fsErr := s.container.Storage().MigrationSource(s.container) /* the protocol says we have to send a header no matter what, so let's * do that, but then immediately send an error. 
*/ snapshots := []string{} if fsErr == nil { fullSnaps := driver.Snapshots() for _, snap := range fullSnaps { snapshots = append(snapshots, shared.ExtractSnapshotName(snap.Name())) } } myType := s.container.Storage().MigrationType() header := MigrationHeader{ Fs: &myType, Criu: criuType, Idmap: idmaps, Snapshots: snapshots, } if err := s.send(&header); err != nil { s.sendControl(err) return err } if fsErr != nil { s.sendControl(fsErr) return fsErr } if err := s.recv(&header); err != nil { s.sendControl(err) return err } if *header.Fs != myType { myType = MigrationFSType_RSYNC header.Fs = &myType driver, _ = rsyncMigrationSource(s.container) } if err := driver.SendWhileRunning(s.fsConn); err != nil { driver.Cleanup() s.sendControl(err) return err } if s.live { if header.Criu == nil { driver.Cleanup() err := fmt.Errorf("Got no CRIU socket type for live migration") s.sendControl(err) return err } else if *header.Criu != CRIUType_CRIU_RSYNC { driver.Cleanup() err := fmt.Errorf("Formats other than criu rsync not understood") s.sendControl(err) return err } checkpointDir, err := ioutil.TempDir("", "lxd_checkpoint_") if err != nil { driver.Cleanup() s.sendControl(err) return err } defer os.RemoveAll(checkpointDir) opts := lxc.CheckpointOptions{Stop: true, Directory: checkpointDir, Verbose: true} err = s.container.Checkpoint(opts) if err2 := CollectCRIULogFile(s.container, checkpointDir, "migration", "dump"); err2 != nil { shared.Debugf("Error collecting checkpoint log file %s", err2) } if err != nil { driver.Cleanup() log, err2 := GetCRIULogErrors(checkpointDir, "dump") /* couldn't find the CRIU log file which means we * didn't even get that far; give back the liblxc * error. */ if err2 != nil { log = err.Error() } err = fmt.Errorf("checkpoint failed:\n%s", log) s.sendControl(err) return err } /* * We do these serially right now, but there's really no reason for us * to; since we have separate websockets, we can do it in parallel if * we wanted to. 
However, assuming we're network bound, there's really * no reason to do these in parallel. In the future when we're using * p.haul's protocol, it will make sense to do these in parallel. */ if err := RsyncSend(shared.AddSlash(checkpointDir), s.criuConn); err != nil { driver.Cleanup() s.sendControl(err) return err } if err := driver.SendAfterCheckpoint(s.fsConn); err != nil { driver.Cleanup() s.sendControl(err) return err } } driver.Cleanup() msg := MigrationControl{} if err := s.recv(&msg); err != nil { s.disconnect() return err } // TODO: should we add some config here about automatically restarting // the container migrate failure? What about the failures above? if !*msg.Success { return fmt.Errorf(*msg.Message) } return nil } type migrationSink struct { migrationFields url string dialer websocket.Dialer } type MigrationSinkArgs struct { Url string Dialer websocket.Dialer Container container Secrets map[string]string } func NewMigrationSink(args *MigrationSinkArgs) (func() error, error) { sink := migrationSink{ migrationFields{container: args.Container}, args.Url, args.Dialer, } var ok bool sink.controlSecret, ok = args.Secrets["control"] if !ok { return nil, fmt.Errorf("Missing control secret") } sink.fsSecret, ok = args.Secrets["fs"] if !ok { return nil, fmt.Errorf("Missing fs secret") } sink.criuSecret, ok = args.Secrets["criu"] sink.live = ok if err := findCriu("destination"); sink.live && err != nil { return nil, err } return sink.do, nil } func (c *migrationSink) connectWithSecret(secret string) (*websocket.Conn, error) { query := url.Values{"secret": []string{secret}} // The URL is a https URL to the operation, mangle to be a wss URL to the secret wsUrl := fmt.Sprintf("wss://%s/websocket?%s", strings.TrimPrefix(c.url, "https://"), query.Encode()) return lxd.WebsocketDial(c.dialer, wsUrl) } func (c *migrationSink) do() error { var err error c.controlConn, err = c.connectWithSecret(c.controlSecret) if err != nil { return err } defer c.disconnect() c.fsConn, 
err = c.connectWithSecret(c.fsSecret) if err != nil { c.sendControl(err) return err } if c.live { c.criuConn, err = c.connectWithSecret(c.criuSecret) if err != nil { c.sendControl(err) return err } } header := MigrationHeader{} if err := c.recv(&header); err != nil { c.sendControl(err) return err } criuType := CRIUType_CRIU_RSYNC.Enum() if !c.live { criuType = nil } mySink := c.container.Storage().MigrationSink myType := c.container.Storage().MigrationType() resp := MigrationHeader{ Fs: &myType, Criu: criuType, } // If the storage type the source has doesn't match what we have, then // we have to use rsync. if *header.Fs != *resp.Fs { mySink = rsyncMigrationSink myType = MigrationFSType_RSYNC resp.Fs = &myType } if err := c.send(&resp); err != nil { c.sendControl(err) return err } restore := make(chan error) go func(c *migrationSink) { imagesDir := "" srcIdmap := new(shared.IdmapSet) snapshots := []container{} for _, snap := range header.Snapshots { // TODO: we need to propagate snapshot configurations // as well. Right now the container configuration is // done through the initial migration post. Should we // post the snapshots and their configs as well, or do // it some other way? name := c.container.Name() + shared.SnapshotDelimiter + snap args := containerArgs{ Ctype: cTypeSnapshot, Config: c.container.LocalConfig(), Profiles: c.container.Profiles(), Ephemeral: c.container.IsEphemeral(), Architecture: c.container.Architecture(), Devices: c.container.LocalDevices(), Name: name, } ct, err := containerCreateEmptySnapshot(c.container.Daemon(), args) if err != nil { restore <- err return } snapshots = append(snapshots, ct) } for _, idmap := range header.Idmap { e := shared.IdmapEntry{ Isuid: *idmap.Isuid, Isgid: *idmap.Isgid, Nsid: int(*idmap.Nsid), Hostid: int(*idmap.Hostid), Maprange: int(*idmap.Maprange)} srcIdmap.Idmap = shared.Extend(srcIdmap.Idmap, e) } /* We do the fs receive in parallel so we don't have to reason * about when to receive what. 
The sending side is smart enough * to send the filesystem bits that it can before it seizes the * container to start checkpointing, so the total transfer time * will be minimized even if we're dumb here. */ fsTransfer := make(chan error) go func() { if err := mySink(c.live, c.container, snapshots, c.fsConn); err != nil { fsTransfer <- err return } if err := ShiftIfNecessary(c.container, srcIdmap); err != nil { fsTransfer <- err return } fsTransfer <- nil }() if c.live { var err error imagesDir, err = ioutil.TempDir("", "lxd_restore_") if err != nil { os.RemoveAll(imagesDir) restore <- err return } defer func() { err := CollectCRIULogFile(c.container, imagesDir, "migration", "restore") /* * If the checkpoint fails, we won't have any log to collect, * so don't warn about that. */ if err != nil && !os.IsNotExist(err) { shared.Debugf("Error collecting migration log file %s", err) } os.RemoveAll(imagesDir) }() if err := RsyncRecv(shared.AddSlash(imagesDir), c.criuConn); err != nil { restore <- err return } /* * For unprivileged containers we need to shift the * perms on the images so that they can be * opened by the process after it is in its user * namespace. 
*/ if !c.container.IsPrivileged() { if err := c.container.IdmapSet().ShiftRootfs(imagesDir); err != nil { restore <- err return } } } err := <-fsTransfer if err != nil { restore <- err return } if c.live { err := c.container.StartFromMigration(imagesDir) if err != nil { log, err2 := GetCRIULogErrors(imagesDir, "restore") /* restore failed before CRIU was invoked, give * back the liblxc error */ if err2 != nil { log = err.Error() } err = fmt.Errorf("restore failed:\n%s", log) restore <- err return } } for _, snap := range snapshots { if err := ShiftIfNecessary(snap, srcIdmap); err != nil { restore <- err return } } restore <- nil }(c) source := c.controlChannel() for { select { case err = <-restore: c.sendControl(err) return err case msg, ok := <-source: if !ok { c.disconnect() return fmt.Errorf("Got error reading source") } if !*msg.Success { c.disconnect() return fmt.Errorf(*msg.Message) } else { // The source can only tell us it failed (e.g. if // checkpointing failed). We have to tell the source // whether or not the restore was successful. shared.Debugf("Unknown message %v from source", msg) } } } } /* * Similar to forkstart, this is called when lxd is invoked as: * * lxd forkmigrate * * liblxc's restore() sets up the processes in such a way that the monitor ends * up being a child of the process that calls it, in our case lxd. However, we * really want the monitor to be daemonized, so we fork again. Additionally, we * want to fork for the same reasons we do forkstart (i.e. reduced memory * footprint when we fork tasks that will never free golang's memory, etc.) 
*/ func MigrateContainer(args []string) error { if len(args) != 5 { return fmt.Errorf("Bad arguments %q", args) } name := args[1] lxcpath := args[2] configPath := args[3] imagesDir := args[4] c, err := lxc.NewContainer(name, lxcpath) if err != nil { return err } if err := c.LoadConfigFile(configPath); err != nil { return err } /* see https://github.com/golang/go/issues/13155, startContainer, and dc3a229 */ os.Stdin.Close() os.Stdout.Close() os.Stderr.Close() return c.Restore(lxc.RestoreOptions{ Directory: imagesDir, Verbose: true, }) } lxd-2.0.2/lxd/migrate.pb.go000066400000000000000000000120031272140510300154370ustar00rootroot00000000000000// Code generated by protoc-gen-go. // source: lxd/migrate.proto // DO NOT EDIT! /* Package main is a generated protocol buffer package. It is generated from these files: lxd/migrate.proto It has these top-level messages: IDMapType MigrationHeader MigrationControl */ package main import proto "github.com/golang/protobuf/proto" import math "math" // Reference imports to suppress errors if they are not otherwise used. 
var _ = proto.Marshal var _ = math.Inf type MigrationFSType int32 const ( MigrationFSType_RSYNC MigrationFSType = 0 MigrationFSType_BTRFS MigrationFSType = 1 MigrationFSType_ZFS MigrationFSType = 2 ) var MigrationFSType_name = map[int32]string{ 0: "RSYNC", 1: "BTRFS", 2: "ZFS", } var MigrationFSType_value = map[string]int32{ "RSYNC": 0, "BTRFS": 1, "ZFS": 2, } func (x MigrationFSType) Enum() *MigrationFSType { p := new(MigrationFSType) *p = x return p } func (x MigrationFSType) String() string { return proto.EnumName(MigrationFSType_name, int32(x)) } func (x *MigrationFSType) UnmarshalJSON(data []byte) error { value, err := proto.UnmarshalJSONEnum(MigrationFSType_value, data, "MigrationFSType") if err != nil { return err } *x = MigrationFSType(value) return nil } type CRIUType int32 const ( CRIUType_CRIU_RSYNC CRIUType = 0 CRIUType_PHAUL CRIUType = 1 ) var CRIUType_name = map[int32]string{ 0: "CRIU_RSYNC", 1: "PHAUL", } var CRIUType_value = map[string]int32{ "CRIU_RSYNC": 0, "PHAUL": 1, } func (x CRIUType) Enum() *CRIUType { p := new(CRIUType) *p = x return p } func (x CRIUType) String() string { return proto.EnumName(CRIUType_name, int32(x)) } func (x *CRIUType) UnmarshalJSON(data []byte) error { value, err := proto.UnmarshalJSONEnum(CRIUType_value, data, "CRIUType") if err != nil { return err } *x = CRIUType(value) return nil } type IDMapType struct { Isuid *bool `protobuf:"varint,1,req,name=isuid" json:"isuid,omitempty"` Isgid *bool `protobuf:"varint,2,req,name=isgid" json:"isgid,omitempty"` Hostid *int32 `protobuf:"varint,3,req,name=hostid" json:"hostid,omitempty"` Nsid *int32 `protobuf:"varint,4,req,name=nsid" json:"nsid,omitempty"` Maprange *int32 `protobuf:"varint,5,req,name=maprange" json:"maprange,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *IDMapType) Reset() { *m = IDMapType{} } func (m *IDMapType) String() string { return proto.CompactTextString(m) } func (*IDMapType) ProtoMessage() {} func (m *IDMapType) GetIsuid() bool { if m != nil && 
m.Isuid != nil { return *m.Isuid } return false } func (m *IDMapType) GetIsgid() bool { if m != nil && m.Isgid != nil { return *m.Isgid } return false } func (m *IDMapType) GetHostid() int32 { if m != nil && m.Hostid != nil { return *m.Hostid } return 0 } func (m *IDMapType) GetNsid() int32 { if m != nil && m.Nsid != nil { return *m.Nsid } return 0 } func (m *IDMapType) GetMaprange() int32 { if m != nil && m.Maprange != nil { return *m.Maprange } return 0 } type MigrationHeader struct { Fs *MigrationFSType `protobuf:"varint,1,req,name=fs,enum=main.MigrationFSType" json:"fs,omitempty"` Criu *CRIUType `protobuf:"varint,2,opt,name=criu,enum=main.CRIUType" json:"criu,omitempty"` Idmap []*IDMapType `protobuf:"bytes,3,rep,name=idmap" json:"idmap,omitempty"` Snapshots []string `protobuf:"bytes,4,rep,name=snapshots" json:"snapshots,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *MigrationHeader) Reset() { *m = MigrationHeader{} } func (m *MigrationHeader) String() string { return proto.CompactTextString(m) } func (*MigrationHeader) ProtoMessage() {} func (m *MigrationHeader) GetFs() MigrationFSType { if m != nil && m.Fs != nil { return *m.Fs } return MigrationFSType_RSYNC } func (m *MigrationHeader) GetCriu() CRIUType { if m != nil && m.Criu != nil { return *m.Criu } return CRIUType_CRIU_RSYNC } func (m *MigrationHeader) GetIdmap() []*IDMapType { if m != nil { return m.Idmap } return nil } func (m *MigrationHeader) GetSnapshots() []string { if m != nil { return m.Snapshots } return nil } type MigrationControl struct { Success *bool `protobuf:"varint,1,req,name=success" json:"success,omitempty"` // optional failure message if sending a failure Message *string `protobuf:"bytes,2,opt,name=message" json:"message,omitempty"` XXX_unrecognized []byte `json:"-"` } func (m *MigrationControl) Reset() { *m = MigrationControl{} } func (m *MigrationControl) String() string { return proto.CompactTextString(m) } func (*MigrationControl) ProtoMessage() {} func (m 
*MigrationControl) GetSuccess() bool { if m != nil && m.Success != nil { return *m.Success } return false } func (m *MigrationControl) GetMessage() string { if m != nil && m.Message != nil { return *m.Message } return "" } func init() { proto.RegisterEnum("main.MigrationFSType", MigrationFSType_name, MigrationFSType_value) proto.RegisterEnum("main.CRIUType", CRIUType_name, CRIUType_value) } lxd-2.0.2/lxd/migrate.proto000066400000000000000000000013071272140510300156020ustar00rootroot00000000000000package main; enum MigrationFSType { RSYNC = 0; BTRFS = 1; ZFS = 2; } enum CRIUType { CRIU_RSYNC = 0; PHAUL = 1; } message IDMapType { required bool isuid = 1; required bool isgid = 2; required int32 hostid = 3; required int32 nsid = 4; required int32 maprange = 5; } message MigrationHeader { required MigrationFSType fs = 1; optional CRIUType criu = 2; repeated IDMapType idmap = 3; repeated string snapshots = 4; } message MigrationControl { required bool success = 1; /* optional failure message if sending a failure */ optional string message = 2; } lxd-2.0.2/lxd/networks.go000066400000000000000000000050501272140510300152670ustar00rootroot00000000000000package main import ( "fmt" "net" "net/http" "strconv" "github.com/gorilla/mux" "github.com/lxc/lxd/shared" ) func networksGet(d *Daemon, r *http.Request) Response { recursionStr := r.FormValue("recursion") recursion, err := strconv.Atoi(recursionStr) if err != nil { recursion = 0 } ifs, err := net.Interfaces() if err != nil { return InternalError(err) } resultString := []string{} resultMap := []network{} for _, iface := range ifs { if recursion == 0 { resultString = append(resultString, fmt.Sprintf("/%s/networks/%s", shared.APIVersion, iface.Name)) } else { net, err := doNetworkGet(d, iface.Name) if err != nil { continue } resultMap = append(resultMap, net) } } if recursion == 0 { return SyncResponse(true, resultString) } return SyncResponse(true, resultMap) } var networksCmd = Command{name: "networks", get: networksGet} type 
network struct { Name string `json:"name"` Type string `json:"type"` UsedBy []string `json:"used_by"` } func isOnBridge(c container, bridge string) bool { for _, device := range c.ExpandedDevices() { if device["type"] != "nic" { continue } if !shared.StringInSlice(device["nictype"], []string{"bridged", "macvlan"}) { continue } if device["parent"] == "" { continue } if device["parent"] == bridge { return true } } return false } func networkGet(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] n, err := doNetworkGet(d, name) if err != nil { return InternalError(err) } return SyncResponse(true, &n) } func doNetworkGet(d *Daemon, name string) (network, error) { iface, err := net.InterfaceByName(name) if err != nil { return network{}, err } // Prepare the response n := network{} n.Name = iface.Name n.UsedBy = []string{} // Look for containers using the interface cts, err := dbContainersList(d.db, cTypeRegular) if err != nil { return network{}, err } for _, ct := range cts { c, err := containerLoadByName(d, ct) if err != nil { return network{}, err } if isOnBridge(c, n.Name) { n.UsedBy = append(n.UsedBy, fmt.Sprintf("/%s/containers/%s", shared.APIVersion, ct)) } } // Set the device type as needed if shared.IsLoopback(iface) { n.Type = "loopback" } else if shared.PathExists(fmt.Sprintf("/sys/class/net/%s/bridge", n.Name)) { n.Type = "bridge" } else if shared.PathExists(fmt.Sprintf("/sys/class/net/%s/device", n.Name)) { n.Type = "physical" } else { n.Type = "unknown" } return n, nil } var networkCmd = Command{name: "networks/{name}", get: networkGet} lxd-2.0.2/lxd/nsexec.go000066400000000000000000000232231272140510300147020ustar00rootroot00000000000000/** * This file is a bit funny. The goal here is to use setns() to manipulate * files inside the container, so we don't have to reason about the paths to * make sure they don't escape (we can simply rely on the kernel for * correctness). 
Unfortunately, you can't setns() to a mount namespace with a
 * multi-threaded program, which every golang binary is. However, by declaring
 * our init as an initializer, we can capture process control before it is
 * transferred to the golang runtime, so we can then setns() as we'd like
 * before golang has a chance to set up any threads. So, we implement two new
 * lxd fork* commands which are captured here, and take a file on the host fs
 * and copy it into the container ns.
 *
 * An alternative to this would be to move this code into a separate binary,
 * which of course has problems of its own when it comes to packaging (how do
 * we find the binary, what do we do if someone does file push and it is
 * missing, etc.). After some discussion, even though the embedded method is
 * somewhat convoluted, it was preferred.
 */
package main

/*
#define _GNU_SOURCE
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
// This expects:
// ./lxd forkputfile /source/path /target/path
// or
// ./lxd forkgetfile /target/path /source/path
// i.e. 8 arguments, each of which has a max length of PATH_MAX.
// Unfortunately, lseek() and fstat() both fail (EINVAL and 0 size) for
// procfs. Also, we can't mmap, because procfs doesn't support that, either.
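The argument-passing scheme described above relies on reading /proc/self/cmdline, where arguments are separated and terminated by NUL bytes, and stepping past each terminator (the job of the ADVANCE_ARG_REQUIRED macro later in this file). A rough Go-side sketch of that parsing; parseCmdline is an illustrative helper, not a function that exists in LXD:

```go
package main

import (
	"bytes"
	"fmt"
)

// parseCmdline splits a /proc/self/cmdline-style buffer (arguments
// separated and terminated by NUL bytes) into individual strings,
// mirroring the walk the C code performs over the raw buffer.
func parseCmdline(buf []byte) []string {
	var args []string
	for _, field := range bytes.Split(buf, []byte{0}) {
		if len(field) > 0 {
			args = append(args, string(field))
		}
	}
	return args
}

func main() {
	// What /proc/self/cmdline would contain for:
	//   ./lxd forkputfile /source/path /target/path
	raw := []byte("./lxd\x00forkputfile\x00/source/path\x00/target/path\x00")
	fmt.Println(parseCmdline(raw))
}
```

The C code skips argv[0] first and then dispatches on the subcommand name, which corresponds to args[0] and args[1] here.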
// #define CMDLINE_SIZE (8 * PATH_MAX) int mkdir_p(const char *dir, mode_t mode) { const char *tmp = dir; const char *orig = dir; char *makeme; do { dir = tmp + strspn(tmp, "/"); tmp = dir + strcspn(dir, "/"); makeme = strndup(orig, dir - orig); if (*makeme) { if (mkdir(makeme, mode) && errno != EEXIST) { fprintf(stderr, "failed to create directory '%s'", makeme); free(makeme); return -1; } } free(makeme); } while(tmp != dir); return 0; } int copy(int target, int source) { ssize_t n; char buf[1024]; if (ftruncate(target, 0) < 0) { perror("error: truncate"); return -1; } while ((n = read(source, buf, 1024)) > 0) { if (write(target, buf, n) != n) { perror("error: write"); return -1; } } if (n < 0) { perror("error: read"); return -1; } return 0; } int dosetns(int pid, char *nstype) { int mntns; char buf[PATH_MAX]; sprintf(buf, "/proc/%d/ns/%s", pid, nstype); mntns = open(buf, O_RDONLY); if (mntns < 0) { perror("error: open mntns"); return -1; } if (setns(mntns, 0) < 0) { perror("error: setns"); close(mntns); return -1; } close(mntns); return 0; } int manip_file_in_ns(char *rootfs, int pid, char *host, char *container, bool is_put, uid_t uid, gid_t gid, mode_t mode, uid_t defaultUid, gid_t defaultGid, mode_t defaultMode) { int host_fd, container_fd; int ret = -1; int container_open_flags; struct stat st; int exists = 1; host_fd = open(host, O_RDWR); if (host_fd < 0) { perror("error: open"); return -1; } container_open_flags = O_RDWR; if (is_put) container_open_flags |= O_CREAT; if (pid > 0) { if (dosetns(pid, "mnt") < 0) { perror("error: setns"); goto close_host; } } else { if (chroot(rootfs) < 0) { perror("error: chroot"); goto close_host; } if (chdir("/") < 0) { perror("error: chdir"); goto close_host; } } if (is_put && stat(container, &st) < 0) exists = 0; umask(0); container_fd = open(container, container_open_flags, 0); if (container_fd < 0) { perror("error: open"); goto close_host; } if (is_put) { if (!exists) { if (mode == -1) { mode = defaultMode; } if (uid == 
-1) { uid = defaultUid; } if (gid == -1) { gid = defaultGid; } } if (copy(container_fd, host_fd) < 0) { perror("error: copy"); goto close_container; } if (mode != -1 && fchmod(container_fd, mode) < 0) { perror("error: chmod"); goto close_container; } if (fchown(container_fd, uid, gid) < 0) { perror("error: chown"); goto close_container; } ret = 0; } else { ret = copy(host_fd, container_fd); if (fstat(container_fd, &st) < 0) { perror("error: stat"); goto close_container; } fprintf(stderr, "uid: %ld\n", (long)st.st_uid); fprintf(stderr, "gid: %ld\n", (long)st.st_gid); fprintf(stderr, "mode: %ld\n", (unsigned long)st.st_mode & (S_IRWXU | S_IRWXG | S_IRWXO)); } close_container: close(container_fd); close_host: close(host_fd); return ret; } #define ADVANCE_ARG_REQUIRED() \ do { \ while (*cur != 0) \ cur++; \ cur++; \ if (size <= cur - buf) { \ fprintf(stderr, "not enough arguments\n"); \ _exit(1); \ } \ } while(0) void ensure_dir(char *dest) { struct stat sb; if (stat(dest, &sb) == 0) { if ((sb.st_mode & S_IFMT) == S_IFDIR) return; if (unlink(dest) < 0) { fprintf(stderr, "Failed to remove old %s: %s\n", dest, strerror(errno)); _exit(1); } } if (mkdir(dest, 0755) < 0) { fprintf(stderr, "Failed to mkdir %s: %s\n", dest, strerror(errno)); _exit(1); } } void ensure_file(char *dest) { struct stat sb; int fd; if (stat(dest, &sb) == 0) { if ((sb.st_mode & S_IFMT) != S_IFDIR) return; if (rmdir(dest) < 0) { fprintf(stderr, "Failed to remove old %s: %s\n", dest, strerror(errno)); _exit(1); } } fd = creat(dest, 0755); if (fd < 0) { fprintf(stderr, "Failed to mkdir %s: %s\n", dest, strerror(errno)); _exit(1); } close(fd); } void create(char *src, char *dest) { char *destdirname; struct stat sb; if (stat(src, &sb) < 0) { fprintf(stderr, "source %s does not exist\n", src); _exit(1); } destdirname = strdup(dest); destdirname = dirname(destdirname); if (mkdir_p(destdirname, 0755) < 0) { fprintf(stderr, "failed to create path: %s\n", destdirname); free(destdirname); _exit(1); } switch 
(sb.st_mode & S_IFMT) { case S_IFDIR: ensure_dir(dest); return; default: ensure_file(dest); return; } free(destdirname); } void forkmount(char *buf, char *cur, ssize_t size) { char *src, *dest, *opts; ADVANCE_ARG_REQUIRED(); int pid = atoi(cur); if (dosetns(pid, "mnt") < 0) { fprintf(stderr, "Failed setns to container mount namespace: %s\n", strerror(errno)); _exit(1); } ADVANCE_ARG_REQUIRED(); src = cur; ADVANCE_ARG_REQUIRED(); dest = cur; create(src, dest); if (access(src, F_OK) < 0) { fprintf(stderr, "Mount source doesn't exist: %s\n", strerror(errno)); _exit(1); } if (access(dest, F_OK) < 0) { fprintf(stderr, "Mount destination doesn't exist: %s\n", strerror(errno)); _exit(1); } // Here, we always move recursively, because we sometimes allow // recursive mounts. If the mount has no kids then it doesn't matter, // but if it does, we want to move those too. if (mount(src, dest, "none", MS_MOVE | MS_REC, NULL) < 0) { fprintf(stderr, "Failed mounting %s onto %s: %s\n", src, dest, strerror(errno)); _exit(1); } _exit(0); } void forkumount(char *buf, char *cur, ssize_t size) { ADVANCE_ARG_REQUIRED(); int pid = atoi(cur); if (dosetns(pid, "mnt") < 0) { fprintf(stderr, "Failed setns to container mount namespace: %s\n", strerror(errno)); _exit(1); } ADVANCE_ARG_REQUIRED(); if (access(cur, F_OK) < 0) { fprintf(stderr, "Mount path doesn't exist: %s\n", strerror(errno)); _exit(1); } if (umount2(cur, MNT_DETACH) < 0) { fprintf(stderr, "Error unmounting %s: %s\n", cur, strerror(errno)); _exit(1); } _exit(0); } void forkdofile(char *buf, char *cur, bool is_put, ssize_t size) { uid_t uid = 0; gid_t gid = 0; mode_t mode = 0; uid_t defaultUid = 0; gid_t defaultGid = 0; mode_t defaultMode = 0; char *command = cur, *rootfs = NULL, *source = NULL, *target = NULL; pid_t pid; ADVANCE_ARG_REQUIRED(); rootfs = cur; ADVANCE_ARG_REQUIRED(); pid = atoi(cur); ADVANCE_ARG_REQUIRED(); source = cur; ADVANCE_ARG_REQUIRED(); target = cur; if (is_put) { ADVANCE_ARG_REQUIRED(); uid = atoi(cur); 
ADVANCE_ARG_REQUIRED(); gid = atoi(cur); ADVANCE_ARG_REQUIRED(); mode = atoi(cur); ADVANCE_ARG_REQUIRED(); defaultUid = atoi(cur); ADVANCE_ARG_REQUIRED(); defaultGid = atoi(cur); ADVANCE_ARG_REQUIRED(); defaultMode = atoi(cur); } _exit(manip_file_in_ns(rootfs, pid, source, target, is_put, uid, gid, mode, defaultUid, defaultGid, defaultMode)); } void forkgetnet(char *buf, char *cur, ssize_t size) { ADVANCE_ARG_REQUIRED(); int pid = atoi(cur); if (dosetns(pid, "net") < 0) { fprintf(stderr, "Failed setns to container network namespace: %s\n", strerror(errno)); _exit(1); } // The rest happens in Go } __attribute__((constructor)) void init(void) { int cmdline; char buf[CMDLINE_SIZE]; ssize_t size; char *cur; cmdline = open("/proc/self/cmdline", O_RDONLY); if (cmdline < 0) { perror("error: open"); _exit(232); } memset(buf, 0, sizeof(buf)); if ((size = read(cmdline, buf, sizeof(buf)-1)) < 0) { close(cmdline); perror("error: read"); _exit(232); } close(cmdline); cur = buf; // skip argv[0] while (*cur != 0) cur++; cur++; if (size <= cur - buf) return; if (strcmp(cur, "forkputfile") == 0) { forkdofile(buf, cur, true, size); } else if (strcmp(cur, "forkgetfile") == 0) { forkdofile(buf, cur, false, size); } else if (strcmp(cur, "forkmount") == 0) { forkmount(buf, cur, size); } else if (strcmp(cur, "forkumount") == 0) { forkumount(buf, cur, size); } else if (strcmp(cur, "forkgetnet") == 0) { forkgetnet(buf, cur, size); } } */ import "C" lxd-2.0.2/lxd/operations.go000066400000000000000000000303451272140510300156030ustar00rootroot00000000000000package main import ( "fmt" "net/http" "runtime" "strings" "sync" "time" "github.com/gorilla/mux" "github.com/pborman/uuid" "github.com/lxc/lxd/shared" ) var operationsLock sync.Mutex var operations map[string]*operation = make(map[string]*operation) type operationClass int const ( operationClassTask operationClass = 1 operationClassWebsocket operationClass = 2 operationClassToken operationClass = 3 ) func (t operationClass) String() string 
{
	return map[operationClass]string{
		operationClassTask:      "task",
		operationClassWebsocket: "websocket",
		operationClassToken:     "token",
	}[t]
}

type operation struct {
	id        string
	class     operationClass
	createdAt time.Time
	updatedAt time.Time
	status    shared.StatusCode
	url       string
	resources map[string][]string
	metadata  map[string]interface{}
	err       string
	readonly  bool

	// Those functions are called at various points in the operation lifecycle
	onRun     func(*operation) error
	onCancel  func(*operation) error
	onConnect func(*operation, *http.Request, http.ResponseWriter) error

	// Channels used for error reporting and state tracking of background actions
	chanDone chan error

	// Locking for concurrent access to the operation
	lock sync.Mutex
}

func (op *operation) done() {
	if op.readonly {
		return
	}

	op.lock.Lock()
	op.readonly = true
	op.onRun = nil
	op.onCancel = nil
	op.onConnect = nil
	close(op.chanDone)
	op.lock.Unlock()

	time.AfterFunc(time.Second*5, func() {
		operationsLock.Lock()
		_, ok := operations[op.id]
		if !ok {
			operationsLock.Unlock()
			return
		}

		delete(operations, op.id)
		operationsLock.Unlock()

		/*
		 * When we create a new lxc.Container, it adds a finalizer (via
		 * SetFinalizer) that frees the struct. However, it sometimes
		 * takes the go GC a while to actually free the struct,
		 * presumably since it is a small amount of memory.
		 * Unfortunately, the struct also keeps the log fd open, so if
		 * we leave too many of these around, we end up running out of
		 * fds. So, let's explicitly do a GC to collect these at the
		 * end of each request.
*/ runtime.GC() }) } func (op *operation) Run() (chan error, error) { if op.status != shared.Pending { return nil, fmt.Errorf("Only pending operations can be started") } chanRun := make(chan error, 1) op.lock.Lock() op.status = shared.Running if op.onRun != nil { go func(op *operation, chanRun chan error) { err := op.onRun(op) if err != nil { op.lock.Lock() op.status = shared.Failure op.err = SmartError(err).String() op.lock.Unlock() op.done() chanRun <- err shared.Debugf("Failure for %s operation: %s: %s", op.class.String(), op.id, err) _, md, _ := op.Render() eventSend("operation", md) return } op.lock.Lock() op.status = shared.Success op.lock.Unlock() op.done() chanRun <- nil op.lock.Lock() shared.Debugf("Success for %s operation: %s", op.class.String(), op.id) _, md, _ := op.Render() eventSend("operation", md) op.lock.Unlock() }(op, chanRun) } op.lock.Unlock() shared.Debugf("Started %s operation: %s", op.class.String(), op.id) _, md, _ := op.Render() eventSend("operation", md) return chanRun, nil } func (op *operation) Cancel() (chan error, error) { if op.status != shared.Running { return nil, fmt.Errorf("Only running operations can be cancelled") } if !op.mayCancel() { return nil, fmt.Errorf("This operation can't be cancelled") } chanCancel := make(chan error, 1) op.lock.Lock() oldStatus := op.status op.status = shared.Cancelling op.lock.Unlock() if op.onCancel != nil { go func(op *operation, oldStatus shared.StatusCode, chanCancel chan error) { err := op.onCancel(op) if err != nil { op.lock.Lock() op.status = oldStatus op.lock.Unlock() chanCancel <- err shared.Debugf("Failed to cancel %s operation: %s: %s", op.class.String(), op.id, err) _, md, _ := op.Render() eventSend("operation", md) return } op.lock.Lock() op.status = shared.Cancelled op.lock.Unlock() op.done() chanCancel <- nil shared.Debugf("Cancelled %s operation: %s", op.class.String(), op.id) _, md, _ := op.Render() eventSend("operation", md) }(op, oldStatus, chanCancel) } shared.Debugf("Cancelling 
%s operation: %s", op.class.String(), op.id) _, md, _ := op.Render() eventSend("operation", md) if op.onCancel == nil { op.lock.Lock() op.status = shared.Cancelled op.lock.Unlock() op.done() chanCancel <- nil } shared.Debugf("Cancelled %s operation: %s", op.class.String(), op.id) _, md, _ = op.Render() eventSend("operation", md) return chanCancel, nil } func (op *operation) Connect(r *http.Request, w http.ResponseWriter) (chan error, error) { if op.class != operationClassWebsocket { return nil, fmt.Errorf("Only websocket operations can be connected") } if op.status != shared.Running { return nil, fmt.Errorf("Only running operations can be connected") } chanConnect := make(chan error, 1) op.lock.Lock() go func(op *operation, chanConnect chan error) { err := op.onConnect(op, r, w) if err != nil { chanConnect <- err shared.Debugf("Failed to handle %s operation: %s: %s", op.class.String(), op.id, err) return } chanConnect <- nil shared.Debugf("Handled %s operation: %s", op.class.String(), op.id) }(op, chanConnect) op.lock.Unlock() shared.Debugf("Connected %s operation: %s", op.class.String(), op.id) return chanConnect, nil } func (op *operation) mayCancel() bool { return op.onCancel != nil || op.class == operationClassToken } func (op *operation) Render() (string, *shared.Operation, error) { // Setup the resource URLs resources := op.resources if resources != nil { tmpResources := make(map[string][]string) for key, value := range resources { var values []string for _, c := range value { values = append(values, fmt.Sprintf("/%s/%s/%s", shared.APIVersion, key, c)) } tmpResources[key] = values } resources = tmpResources } md := shared.Jmap(op.metadata) return op.url, &shared.Operation{ Id: op.id, Class: op.class.String(), CreatedAt: op.createdAt, UpdatedAt: op.updatedAt, Status: op.status.String(), StatusCode: op.status, Resources: resources, Metadata: &md, MayCancel: op.mayCancel(), Err: op.err, }, nil } func (op *operation) WaitFinal(timeout int) (bool, error) { // 
Check current state if op.status.IsFinal() { return true, nil } // Wait indefinitely if timeout == -1 { for { <-op.chanDone return true, nil } } // Wait until timeout if timeout > 0 { timer := time.NewTimer(time.Duration(timeout) * time.Second) for { select { case <-op.chanDone: return false, nil case <-timer.C: return false, nil } } } return false, nil } func (op *operation) UpdateResources(opResources map[string][]string) error { if op.status != shared.Pending && op.status != shared.Running { return fmt.Errorf("Only pending or running operations can be updated") } if op.readonly { return fmt.Errorf("Read-only operations can't be updated") } op.lock.Lock() op.updatedAt = time.Now() op.resources = opResources op.lock.Unlock() shared.Debugf("Updated resources for %s operation: %s", op.class.String(), op.id) _, md, _ := op.Render() eventSend("operation", md) return nil } func (op *operation) UpdateMetadata(opMetadata interface{}) error { if op.status != shared.Pending && op.status != shared.Running { return fmt.Errorf("Only pending or running operations can be updated") } if op.readonly { return fmt.Errorf("Read-only operations can't be updated") } newMetadata, err := shared.ParseMetadata(opMetadata) if err != nil { return err } op.lock.Lock() op.updatedAt = time.Now() op.metadata = newMetadata op.lock.Unlock() shared.Debugf("Updated metadata for %s operation: %s", op.class.String(), op.id) _, md, _ := op.Render() eventSend("operation", md) return nil } func operationCreate(opClass operationClass, opResources map[string][]string, opMetadata interface{}, onRun func(*operation) error, onCancel func(*operation) error, onConnect func(*operation, *http.Request, http.ResponseWriter) error) (*operation, error) { // Main attributes op := operation{} op.id = uuid.NewRandom().String() op.class = opClass op.createdAt = time.Now() op.updatedAt = op.createdAt op.status = shared.Pending op.url = fmt.Sprintf("/%s/operations/%s", shared.APIVersion, op.id) op.resources = opResources 
op.chanDone = make(chan error) newMetadata, err := shared.ParseMetadata(opMetadata) if err != nil { return nil, err } op.metadata = newMetadata // Callback functions op.onRun = onRun op.onCancel = onCancel op.onConnect = onConnect // Sanity check if op.class != operationClassWebsocket && op.onConnect != nil { return nil, fmt.Errorf("Only websocket operations can have a Connect hook") } if op.class == operationClassWebsocket && op.onConnect == nil { return nil, fmt.Errorf("Websocket operations must have a Connect hook") } if op.class == operationClassToken && op.onRun != nil { return nil, fmt.Errorf("Token operations can't have a Run hook") } if op.class == operationClassToken && op.onCancel != nil { return nil, fmt.Errorf("Token operations can't have a Cancel hook") } operationsLock.Lock() operations[op.id] = &op operationsLock.Unlock() shared.Debugf("New %s operation: %s", op.class.String(), op.id) _, md, _ := op.Render() eventSend("operation", md) return &op, nil } func operationGet(id string) (*operation, error) { operationsLock.Lock() op, ok := operations[id] operationsLock.Unlock() if !ok { return nil, fmt.Errorf("Operation '%s' doesn't exist", id) } return op, nil } // API functions func operationAPIGet(d *Daemon, r *http.Request) Response { id := mux.Vars(r)["id"] op, err := operationGet(id) if err != nil { return NotFound } _, body, err := op.Render() if err != nil { return InternalError(err) } return SyncResponse(true, body) } func operationAPIDelete(d *Daemon, r *http.Request) Response { id := mux.Vars(r)["id"] op, err := operationGet(id) if err != nil { return NotFound } _, err = op.Cancel() if err != nil { return BadRequest(err) } return EmptySyncResponse } var operationCmd = Command{name: "operations/{id}", get: operationAPIGet, delete: operationAPIDelete} func operationsAPIGet(d *Daemon, r *http.Request) Response { var md shared.Jmap recursion := d.isRecursionRequest(r) md = shared.Jmap{} operationsLock.Lock() ops := operations operationsLock.Unlock() 
for _, v := range ops { status := strings.ToLower(v.status.String()) _, ok := md[status] if !ok { if recursion { md[status] = make([]*shared.Operation, 0) } else { md[status] = make([]string, 0) } } if !recursion { md[status] = append(md[status].([]string), v.url) continue } _, body, err := v.Render() if err != nil { continue } md[status] = append(md[status].([]*shared.Operation), body) } return SyncResponse(true, md) } var operationsCmd = Command{name: "operations", get: operationsAPIGet} func operationAPIWaitGet(d *Daemon, r *http.Request) Response { timeout, err := shared.AtoiEmptyDefault(r.FormValue("timeout"), -1) if err != nil { return InternalError(err) } id := mux.Vars(r)["id"] op, err := operationGet(id) if err != nil { return NotFound } _, err = op.WaitFinal(timeout) if err != nil { return InternalError(err) } _, body, err := op.Render() if err != nil { return InternalError(err) } return SyncResponse(true, body) } var operationWait = Command{name: "operations/{id}/wait", get: operationAPIWaitGet} type operationWebSocket struct { req *http.Request op *operation } func (r *operationWebSocket) Render(w http.ResponseWriter) error { chanErr, err := r.op.Connect(r.req, w) if err != nil { return err } err = <-chanErr return err } func (r *operationWebSocket) String() string { _, md, err := r.op.Render() if err != nil { return fmt.Sprintf("error: %s", err) } return md.Id } func operationAPIWebsocketGet(d *Daemon, r *http.Request) Response { id := mux.Vars(r)["id"] op, err := operationGet(id) if err != nil { return NotFound } return &operationWebSocket{r, op} } var operationWebsocket = Command{name: "operations/{id}/websocket", untrustedGet: true, get: operationAPIWebsocketGet} lxd-2.0.2/lxd/profiles.go000066400000000000000000000137401272140510300152430ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "net/http" "reflect" "github.com/gorilla/mux" _ "github.com/mattn/go-sqlite3" "github.com/lxc/lxd/shared" log 
"gopkg.in/inconshreveable/log15.v2" ) /* This is used for both profiles post and profile put */ type profilesPostReq struct { Name string `json:"name"` Config map[string]string `json:"config"` Description string `json:"description"` Devices shared.Devices `json:"devices"` } func profilesGet(d *Daemon, r *http.Request) Response { results, err := dbProfiles(d.db) if err != nil { return SmartError(err) } recursion := d.isRecursionRequest(r) resultString := make([]string, len(results)) resultMap := make([]*shared.ProfileConfig, len(results)) i := 0 for _, name := range results { if !recursion { url := fmt.Sprintf("/%s/profiles/%s", shared.APIVersion, name) resultString[i] = url } else { profile, err := doProfileGet(d, name) if err != nil { shared.Log.Error("Failed to get profile", log.Ctx{"profile": name}) continue } resultMap[i] = profile } i++ } if !recursion { return SyncResponse(true, resultString) } return SyncResponse(true, resultMap) } func profilesPost(d *Daemon, r *http.Request) Response { req := profilesPostReq{} if err := json.NewDecoder(r.Body).Decode(&req); err != nil { return BadRequest(err) } // Sanity checks if req.Name == "" { return BadRequest(fmt.Errorf("No name provided")) } err := containerValidConfig(req.Config, true, false) if err != nil { return BadRequest(err) } err = containerValidDevices(req.Devices, true, false) if err != nil { return BadRequest(err) } // Update DB entry _, err = dbProfileCreate(d.db, req.Name, req.Description, req.Config, req.Devices) if err != nil { return InternalError( fmt.Errorf("Error inserting %s into database: %s", req.Name, err)) } return EmptySyncResponse } var profilesCmd = Command{ name: "profiles", get: profilesGet, post: profilesPost} func doProfileGet(d *Daemon, name string) (*shared.ProfileConfig, error) { _, profile, err := dbProfileGet(d.db, name) return profile, err } func profileGet(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] resp, err := doProfileGet(d, name) if err != nil { return 
SmartError(err) } return SyncResponse(true, resp) } func getRunningContainersWithProfile(d *Daemon, profile string) []container { results := []container{} output, err := dbProfileContainersGet(d.db, profile) if err != nil { return results } for _, name := range output { c, err := containerLoadByName(d, name) if err != nil { shared.Log.Error("Failed opening container", log.Ctx{"container": name}) continue } results = append(results, c) } return results } func profilePut(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] req := profilesPostReq{} if err := json.NewDecoder(r.Body).Decode(&req); err != nil { return BadRequest(err) } // Sanity checks err := containerValidConfig(req.Config, true, false) if err != nil { return BadRequest(err) } err = containerValidDevices(req.Devices, true, false) if err != nil { return BadRequest(err) } // Get the running container list clist := getRunningContainersWithProfile(d, name) var containers []container for _, c := range clist { if !c.IsRunning() { continue } containers = append(containers, c) } // Update the database id, profile, err := dbProfileGet(d.db, name) if err != nil { return InternalError(fmt.Errorf("Failed to retrieve profile='%s'", name)) } tx, err := dbBegin(d.db) if err != nil { return InternalError(err) } if profile.Description != req.Description { err = dbProfileDescriptionUpdate(tx, id, req.Description) if err != nil { tx.Rollback() return InternalError(err) } } // Optimize for description-only changes if reflect.DeepEqual(profile.Config, req.Config) && reflect.DeepEqual(profile.Devices, req.Devices) { err = txCommit(tx) if err != nil { return InternalError(err) } return EmptySyncResponse } err = dbProfileConfigClear(tx, id) if err != nil { tx.Rollback() return InternalError(err) } err = dbProfileConfigAdd(tx, id, req.Config) if err != nil { tx.Rollback() return SmartError(err) } err = dbDevicesAdd(tx, "profile", id, req.Devices) if err != nil { tx.Rollback() return SmartError(err) } err = 
txCommit(tx) if err != nil { return InternalError(err) } // Update all the containers using the profile. Must be done after txCommit due to DB lock. failures := map[string]error{} for _, c := range containers { err = c.Update(containerArgs{ Architecture: c.Architecture(), Ephemeral: c.IsEphemeral(), Config: c.LocalConfig(), Devices: c.LocalDevices(), Profiles: c.Profiles()}, true) if err != nil { failures[c.Name()] = err } } if len(failures) != 0 { msg := "The following containers failed to update (profile change still saved):\n" for cname, err := range failures { msg += fmt.Sprintf(" - %s: %s\n", cname, err) } return InternalError(fmt.Errorf("%s", msg)) } return EmptySyncResponse } // The handler for the post operation. func profilePost(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] req := profilesPostReq{} if err := json.NewDecoder(r.Body).Decode(&req); err != nil { return BadRequest(err) } // Sanity checks if req.Name == "" { return BadRequest(fmt.Errorf("No name provided")) } err := dbProfileUpdate(d.db, name, req.Name) if err != nil { return InternalError(err) } return EmptySyncResponse } // The handler for the delete operation. func profileDelete(d *Daemon, r *http.Request) Response { name := mux.Vars(r)["name"] _, err := doProfileGet(d, name) if err != nil { return SmartError(err) } err = dbProfileDelete(d.db, name) if err != nil { return SmartError(err) } return EmptySyncResponse } var profileCmd = Command{name: "profiles/{name}", get: profileGet, put: profilePut, delete: profileDelete, post: profilePost} lxd-2.0.2/lxd/profiles_test.go000066400000000000000000000032101272140510300162710ustar00rootroot00000000000000package main import ( "database/sql" "testing" ) func Test_removing_a_profile_deletes_associated_configuration_entries(t *testing.T) { var db *sql.DB var err error d := &Daemon{} err = initializeDbObject(d, ":memory:") db = d.db // Insert a container and a related profile. 
Don't forget that the profile
	// we insert is profile ID 2 (there is a default profile already).
	statements := `
INSERT INTO containers (name, architecture, type) VALUES ('thename', 1, 1);
INSERT INTO profiles (name) VALUES ('theprofile');
INSERT INTO containers_profiles (container_id, profile_id) VALUES (1, 2);
INSERT INTO profiles_devices (name, profile_id) VALUES ('somename', 2);
INSERT INTO profiles_config (key, value, profile_id) VALUES ('thekey', 'thevalue', 2);
INSERT INTO profiles_devices_config (profile_device_id, key, value) VALUES (1, 'something', 'boring');`

	_, err = db.Exec(statements)
	if err != nil {
		t.Fatal(err)
	}

	// Delete the profile we just created with dbProfileDelete
	err = dbProfileDelete(db, "theprofile")
	if err != nil {
		t.Fatal(err)
	}

	// Make sure there are 0 profiles_devices entries left.
	devices, err := dbDevices(d.db, "theprofile", true)
	if err != nil {
		t.Fatal(err)
	}
	if len(devices) != 0 {
		t.Errorf("Deleting a profile didn't delete the related profiles_devices! There are %d left", len(devices))
	}

	// Make sure there are 0 profiles_config entries left.
	config, err := dbProfileConfig(d.db, "theprofile")
	if err == nil {
		t.Fatal("found the profile!")
	}
	if len(config) != 0 {
		t.Errorf("Deleting a profile didn't delete the related profiles_config! 
There are %d left", len(config)) } } lxd-2.0.2/lxd/remote.go000066400000000000000000000010341272140510300147040ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "github.com/lxc/lxd/shared" ) func remoteGetImageFingerprint(d *Daemon, server string, certificate string, alias string) (string, error) { url := fmt.Sprintf( "%s/%s/images/aliases/%s", server, shared.APIVersion, alias) resp, err := d.httpGetSync(url, certificate) if err != nil { return "", err } var result shared.ImageAliasesEntry if err = json.Unmarshal(resp.Metadata, &result); err != nil { return "", fmt.Errorf("Error reading alias") } return result.Target, nil } lxd-2.0.2/lxd/response.go000066400000000000000000000133201272140510300152500ustar00rootroot00000000000000package main import ( "bytes" "database/sql" "encoding/json" "fmt" "io" "mime/multipart" "net/http" "os" "github.com/mattn/go-sqlite3" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" ) type syncResp struct { Type lxd.ResponseType `json:"type"` Status string `json:"status"` StatusCode shared.StatusCode `json:"status_code"` Metadata interface{} `json:"metadata"` } type asyncResp struct { Type lxd.ResponseType `json:"type"` Status string `json:"status"` StatusCode shared.StatusCode `json:"status_code"` Metadata interface{} `json:"metadata"` Operation string `json:"operation"` } type Response interface { Render(w http.ResponseWriter) error String() string } // Sync response type syncResponse struct { success bool metadata interface{} } func (r *syncResponse) Render(w http.ResponseWriter) error { status := shared.Success if !r.success { status = shared.Failure } resp := syncResp{Type: lxd.Sync, Status: status.String(), StatusCode: status, Metadata: r.metadata} return WriteJSON(w, resp) } func SyncResponse(success bool, metadata interface{}) Response { return &syncResponse{success, metadata} } func (r *syncResponse) String() string { if r.success { return "success" } return "failure" } var EmptySyncResponse = 
&syncResponse{true, make(map[string]interface{})} // File transfer response type fileResponseEntry struct { identifier string path string filename string } type fileResponse struct { req *http.Request files []fileResponseEntry headers map[string]string removeAfterServe bool } func (r *fileResponse) Render(w http.ResponseWriter) error { if r.headers != nil { for k, v := range r.headers { w.Header().Set(k, v) } } // No file, well, it's easy then if len(r.files) == 0 { return nil } // For a single file, return it inline if len(r.files) == 1 { f, err := os.Open(r.files[0].path) if err != nil { return err } defer f.Close() fi, err := f.Stat() if err != nil { return err } w.Header().Set("Content-Type", "application/octet-stream") w.Header().Set("Content-Length", fmt.Sprintf("%d", fi.Size())) w.Header().Set("Content-Disposition", fmt.Sprintf("inline;filename=%s", r.files[0].filename)) http.ServeContent(w, r.req, r.files[0].filename, fi.ModTime(), f) if r.removeAfterServe { err = os.Remove(r.files[0].path) if err != nil { return err } } return nil } // Now the complex multipart answer body := &bytes.Buffer{} mw := multipart.NewWriter(body) for _, entry := range r.files { fd, err := os.Open(entry.path) if err != nil { return err } defer fd.Close() fw, err := mw.CreateFormFile(entry.identifier, entry.filename) if err != nil { return err } _, err = io.Copy(fw, fd) if err != nil { return err } } mw.Close() w.Header().Set("Content-Type", mw.FormDataContentType()) w.Header().Set("Content-Length", fmt.Sprintf("%d", body.Len())) _, err := io.Copy(w, body) return err } func (r *fileResponse) String() string { return fmt.Sprintf("%d files", len(r.files)) } func FileResponse(r *http.Request, files []fileResponseEntry, headers map[string]string, removeAfterServe bool) Response { return &fileResponse{r, files, headers, removeAfterServe} } // Operation response type operationResponse struct { op *operation } func (r *operationResponse) Render(w http.ResponseWriter) error { _, err := 
r.op.Run() if err != nil { return err } url, md, err := r.op.Render() if err != nil { return err } body := asyncResp{ Type: lxd.Async, Status: shared.OperationCreated.String(), StatusCode: shared.OperationCreated, Operation: url, Metadata: md} w.Header().Set("Location", url) w.WriteHeader(202) return WriteJSON(w, body) } func (r *operationResponse) String() string { _, md, err := r.op.Render() if err != nil { return fmt.Sprintf("error: %s", err) } return md.Id } func OperationResponse(op *operation) Response { return &operationResponse{op} } // Error response type errorResponse struct { code int msg string } func (r *errorResponse) String() string { return r.msg } func (r *errorResponse) Render(w http.ResponseWriter) error { var output io.Writer buf := &bytes.Buffer{} output = buf var captured *bytes.Buffer if debug { captured = &bytes.Buffer{} output = io.MultiWriter(buf, captured) } err := json.NewEncoder(output).Encode(shared.Jmap{"type": lxd.Error, "error": r.msg, "error_code": r.code}) if err != nil { return err } if debug { shared.DebugJson(captured) } w.Header().Set("Content-Type", "application/json") w.Header().Set("X-Content-Type-Options", "nosniff") w.WriteHeader(r.code) fmt.Fprintln(w, buf.String()) return nil } /* Some standard responses */ var NotImplemented = &errorResponse{http.StatusNotImplemented, "not implemented"} var NotFound = &errorResponse{http.StatusNotFound, "not found"} var Forbidden = &errorResponse{http.StatusForbidden, "not authorized"} var Conflict = &errorResponse{http.StatusConflict, "already exists"} func BadRequest(err error) Response { return &errorResponse{http.StatusBadRequest, err.Error()} } func InternalError(err error) Response { return &errorResponse{http.StatusInternalServerError, err.Error()} } /* * SmartError returns the right error message based on err. 
*/ func SmartError(err error) Response { switch err { case nil: return EmptySyncResponse case os.ErrNotExist: return NotFound case sql.ErrNoRows: return NotFound case NoSuchObjectError: return NotFound case os.ErrPermission: return Forbidden case DbErrAlreadyDefined: return Conflict case sqlite3.ErrConstraintUnique: return Conflict default: return InternalError(err) } } lxd-2.0.2/lxd/rsync.go000066400000000000000000000117141272140510300145550ustar00rootroot00000000000000package main import ( "fmt" "io" "io/ioutil" "net" "os" "os/exec" "sync" "github.com/gorilla/websocket" "github.com/lxc/lxd/shared" ) func rsyncWebsocket(path string, cmd *exec.Cmd, conn *websocket.Conn) error { stdin, err := cmd.StdinPipe() if err != nil { return err } stdout, err := cmd.StdoutPipe() if err != nil { return err } stderr, err := cmd.StderrPipe() if err != nil { return err } if err := cmd.Start(); err != nil { return err } readDone, writeDone := shared.WebsocketMirror(conn, stdin, stdout) data, err2 := ioutil.ReadAll(stderr) if err2 != nil { shared.Debugf("error reading rsync stderr: %s", err2) return err2 } err = cmd.Wait() if err != nil { shared.Debugf("rsync recv error for path %s: %s: %s", path, err, string(data)) } <-readDone <-writeDone return err } func rsyncSendSetup(path string) (*exec.Cmd, net.Conn, io.ReadCloser, error) { /* * It's sort of unfortunate, but there's no library call to get a * temporary name, so we get the file and close it and use its name. */ f, err := ioutil.TempFile("", "lxd_rsync_") if err != nil { return nil, nil, nil, err } f.Close() os.Remove(f.Name()) /* * The way rsync works, it invokes a subprocess that does the actual * talking (given to it by a -E argument). Since there isn't an easy * way for us to capture this process' stdin/stdout, we just use netcat * and write to/from a unix socket. * * In principle we don't need this socket. 
It seems to me that some * clever invocation of rsync --server --sender and usage of that * process' stdin/stdout could work around the need for this socket, * but I couldn't get it to work. Another option would be to look at * the spawned process' first child and read/write from its * stdin/stdout, but that also seemed messy. In any case, this seems to * work just fine. */ l, err := net.Listen("unix", f.Name()) if err != nil { return nil, nil, nil, err } /* * Here, the path /tmp/foo is ignored. Since we specify localhost, * rsync thinks we are syncing to a remote host (in this case, the * other end of the lxd websocket), and so the path specified on the * --server instance of rsync takes precedence. * * Additionally, we use sh -c instead of just calling nc directly * because rsync passes a whole bunch of arguments to the wrapper * command (i.e. the command to run on --server). However, we're * hardcoding that at the other end, so we can just ignore it. */ rsyncCmd := fmt.Sprintf("sh -c \"%s netcat %s\"", execPath, f.Name()) cmd := exec.Command( "rsync", "-arvP", "--devices", "--numeric-ids", "--partial", path, "localhost:/tmp/foo", "-e", rsyncCmd) stderr, err := cmd.StderrPipe() if err != nil { return nil, nil, nil, err } if err := cmd.Start(); err != nil { return nil, nil, nil, err } conn, err := l.Accept() if err != nil { return nil, nil, nil, err } l.Close() return cmd, conn, stderr, nil } // RsyncSend sets up the sending half of an rsync, to recursively send the // directory pointed to by path over the websocket. 
func RsyncSend(path string, conn *websocket.Conn) error { cmd, dataSocket, stderr, err := rsyncSendSetup(path) if dataSocket != nil { defer dataSocket.Close() } if err != nil { return err } readDone, writeDone := shared.WebsocketMirror(conn, dataSocket, dataSocket) output, err := ioutil.ReadAll(stderr) if err != nil { shared.Debugf("problem reading rsync stderr %s", err) } err = cmd.Wait() if err != nil { shared.Debugf("problem with rsync send of %s: %s: %s", path, err, string(output)) } <-readDone <-writeDone return err } func rsyncRecvCmd(path string) *exec.Cmd { return exec.Command("rsync", "--server", "-vlogDtpre.iLsfx", "--numeric-ids", "--devices", "--partial", ".", path) } // RsyncRecv sets up the receiving half of the websocket to rsync (the other // half set up by RsyncSend), putting the contents in the directory specified // by path. func RsyncRecv(path string, conn *websocket.Conn) error { return rsyncWebsocket(path, rsyncRecvCmd(path), conn) } // Netcat is called with: // // lxd netcat /path/to/unix/socket // // and does unbuffered netcatting of the socket to stdin/stdout. Any arguments // after the path to the unix socket are ignored, so that this can be passed // directly to rsync as the sync command.
func Netcat(args []string) error { if len(args) < 2 { return fmt.Errorf("Bad arguments %q", args) } uAddr, err := net.ResolveUnixAddr("unix", args[1]) if err != nil { return err } conn, err := net.DialUnix("unix", nil, uAddr) if err != nil { return err } wg := sync.WaitGroup{} wg.Add(1) go func() { io.Copy(os.Stdout, conn) f, _ := os.Create("/tmp/done_stdout") f.Close() conn.Close() f, _ = os.Create("/tmp/done_close") f.Close() wg.Done() }() go func() { io.Copy(conn, os.Stdin) f, _ := os.Create("/tmp/done_stdin") f.Close() }() f, _ := os.Create("/tmp/done_spawning_goroutines") f.Close() wg.Wait() return nil } lxd-2.0.2/lxd/seccomp.go000066400000000000000000000024571272140510300150540ustar00rootroot00000000000000package main import ( "io/ioutil" "os" "path" "github.com/lxc/lxd/shared" ) const DEFAULT_SECCOMP_POLICY = ` 2 blacklist reject_force_umount # comment this to allow umount -f; not recommended [all] kexec_load errno 1 open_by_handle_at errno 1 init_module errno 1 finit_module errno 1 delete_module errno 1 ` var seccompPath = shared.VarPath("security", "seccomp") func SeccompProfilePath(c container) string { return path.Join(seccompPath, c.Name()) } func getSeccompProfileContent(c container) string { /* for now there are no seccomp knobs. */ return DEFAULT_SECCOMP_POLICY } func SeccompCreateProfile(c container) error { /* Unlike apparmor, there is no way to "cache" profiles, and profiles * are automatically unloaded when a task dies. Thus, we don't need to * unload them when a container stops, and we don't have to worry about * the mtime on the file for any compiler purpose, so let's just write * out the profile. */ profile := getSeccompProfileContent(c) if err := os.MkdirAll(seccompPath, 0700); err != nil { return err } return ioutil.WriteFile(SeccompProfilePath(c), []byte(profile), 0600) } func SeccompDeleteProfile(c container) { /* similar to AppArmor, if we've never started this container, the * delete can fail and that's ok. 
*/ os.Remove(SeccompProfilePath(c)) } lxd-2.0.2/lxd/storage.go000066400000000000000000000445601272140510300150700ustar00rootroot00000000000000package main import ( "encoding/json" "fmt" "os" "os/exec" "path/filepath" "reflect" "syscall" "time" "github.com/gorilla/websocket" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/logging" log "gopkg.in/inconshreveable/log15.v2" ) /* Some interesting filesystems */ const ( filesystemSuperMagicTmpfs = 0x01021994 filesystemSuperMagicExt4 = 0xEF53 filesystemSuperMagicXfs = 0x58465342 filesystemSuperMagicNfs = 0x6969 filesystemSuperMagicZfs = 0x2fc12fc1 ) /* * filesystemDetect returns the filesystem on which * the passed-in path sits */ func filesystemDetect(path string) (string, error) { fs := syscall.Statfs_t{} err := syscall.Statfs(path, &fs) if err != nil { return "", err } switch fs.Type { case filesystemSuperMagicBtrfs: return "btrfs", nil case filesystemSuperMagicZfs: return "zfs", nil case filesystemSuperMagicTmpfs: return "tmpfs", nil case filesystemSuperMagicExt4: return "ext4", nil case filesystemSuperMagicXfs: return "xfs", nil case filesystemSuperMagicNfs: return "nfs", nil default: shared.Debugf("Unknown backing filesystem type: 0x%x", fs.Type) return string(fs.Type), nil } } // storageRsyncCopy copies a directory using rsync (with the --devices option). 
func storageRsyncCopy(source string, dest string) (string, error) { if err := os.MkdirAll(dest, 0755); err != nil { return "", err } rsyncVerbosity := "-q" if debug { rsyncVerbosity = "-vi" } output, err := exec.Command( "rsync", "-a", "-HAX", "--devices", "--delete", "--checksum", "--numeric-ids", rsyncVerbosity, shared.AddSlash(source), dest).CombinedOutput() return string(output), err } // storageType defines the type of a storage backend type storageType int const ( storageTypeBtrfs storageType = iota storageTypeZfs storageTypeLvm storageTypeDir storageTypeMock ) func storageTypeToString(sType storageType) string { switch sType { case storageTypeBtrfs: return "btrfs" case storageTypeZfs: return "zfs" case storageTypeLvm: return "lvm" case storageTypeMock: return "mock" } return "dir" } type MigrationStorageSourceDriver interface { /* snapshots for this container, if any */ Snapshots() []container /* send any bits of the container/snapshots that are possible while the * container is still running. */ SendWhileRunning(conn *websocket.Conn) error /* send the final bits (e.g. a final delta snapshot for zfs, btrfs, or * do a final rsync) of the fs after the container has been * checkpointed. This will only be called when a container is actually * being live migrated. */ SendAfterCheckpoint(conn *websocket.Conn) error /* Called after either success or failure of a migration, can be used * to clean up any temporary snapshots, etc. */ Cleanup() } type storage interface { Init(config map[string]interface{}) (storage, error) GetStorageType() storageType GetStorageTypeName() string GetStorageTypeVersion() string // ContainerCreate creates an empty container (no rootfs/metadata.yaml) ContainerCreate(container container) error // ContainerCreateFromImage creates a container from an image.
ContainerCreateFromImage(container container, imageFingerprint string) error ContainerCanRestore(container container, sourceContainer container) error ContainerDelete(container container) error ContainerCopy(container container, sourceContainer container) error ContainerStart(container container) error ContainerStop(container container) error ContainerRename(container container, newName string) error ContainerRestore(container container, sourceContainer container) error ContainerSetQuota(container container, size int64) error ContainerGetUsage(container container) (int64, error) ContainerSnapshotCreate( snapshotContainer container, sourceContainer container) error ContainerSnapshotDelete(snapshotContainer container) error ContainerSnapshotRename(snapshotContainer container, newName string) error ContainerSnapshotStart(container container) error ContainerSnapshotStop(container container) error /* for use in migrating snapshots */ ContainerSnapshotCreateEmpty(snapshotContainer container) error ImageCreate(fingerprint string) error ImageDelete(fingerprint string) error MigrationType() MigrationFSType // Get the pieces required to migrate the source. This contains a list // of the "object" (i.e. container or snapshot, depending on whether or // not it is a snapshot name) to be migrated in order, and a channel // for arguments of the specific migration command. We use a channel // here so we don't have to invoke `zfs send` or `rsync` or whatever // and keep its stdin/stdout open for each snapshot during the course // of migration; we can do it lazily. // // N.B. that the order here is important: e.g. in btrfs/zfs, snapshots // which are parents of other snapshots should be sent first, to save // as much transfer as possible. However, the base container is always // sent as the first object, since that is the grandparent of every // snapshot.
// // We leave sending containers which are snapshots of other containers // already present on the target instance as an exercise for the // enterprising developer. MigrationSource(container container) (MigrationStorageSourceDriver, error) MigrationSink(live bool, container container, objects []container, conn *websocket.Conn) error } func newStorage(d *Daemon, sType storageType) (storage, error) { var nilmap map[string]interface{} return newStorageWithConfig(d, sType, nilmap) } func newStorageWithConfig(d *Daemon, sType storageType, config map[string]interface{}) (storage, error) { if d.MockMode { return d.Storage, nil } var s storage switch sType { case storageTypeBtrfs: if d.Storage != nil && d.Storage.GetStorageType() == storageTypeBtrfs { return d.Storage, nil } s = &storageLogWrapper{w: &storageBtrfs{d: d}} case storageTypeZfs: if d.Storage != nil && d.Storage.GetStorageType() == storageTypeZfs { return d.Storage, nil } s = &storageLogWrapper{w: &storageZfs{d: d}} case storageTypeLvm: if d.Storage != nil && d.Storage.GetStorageType() == storageTypeLvm { return d.Storage, nil } s = &storageLogWrapper{w: &storageLvm{d: d}} default: if d.Storage != nil && d.Storage.GetStorageType() == storageTypeDir { return d.Storage, nil } s = &storageLogWrapper{w: &storageDir{d: d}} } return s.Init(config) } func storageForFilename(d *Daemon, filename string) (storage, error) { var filesystem string var err error config := make(map[string]interface{}) storageType := storageTypeDir if d.MockMode { return newStorageWithConfig(d, storageTypeMock, config) } if shared.PathExists(filename) { filesystem, err = filesystemDetect(filename) if err != nil { return nil, fmt.Errorf("couldn't detect filesystem for '%s': %v", filename, err) } } if shared.PathExists(filename + ".lv") { storageType = storageTypeLvm lvPath, err := os.Readlink(filename + ".lv") if err != nil { return nil, fmt.Errorf("couldn't read link dest for '%s': %v", filename+".lv", err) } vgname := 
filepath.Base(filepath.Dir(lvPath)) config["vgName"] = vgname } else if shared.PathExists(filename + ".zfs") { storageType = storageTypeZfs } else if shared.PathExists(filename+".btrfs") || filesystem == "btrfs" { storageType = storageTypeBtrfs } return newStorageWithConfig(d, storageType, config) } func storageForImage(d *Daemon, imgInfo *shared.ImageInfo) (storage, error) { imageFilename := shared.VarPath("images", imgInfo.Fingerprint) return storageForFilename(d, imageFilename) } type storageShared struct { sType storageType sTypeName string sTypeVersion string log shared.Logger } func (ss *storageShared) initShared() error { ss.log = logging.AddContext( shared.Log, log.Ctx{"driver": fmt.Sprintf("storage/%s", ss.sTypeName)}, ) return nil } func (ss *storageShared) GetStorageType() storageType { return ss.sType } func (ss *storageShared) GetStorageTypeName() string { return ss.sTypeName } func (ss *storageShared) GetStorageTypeVersion() string { return ss.sTypeVersion } func (ss *storageShared) shiftRootfs(c container) error { dpath := c.Path() rpath := c.RootfsPath() shared.Log.Debug("Shifting root filesystem", log.Ctx{"container": c.Name(), "rootfs": rpath}) idmapset := c.IdmapSet() if idmapset == nil { return fmt.Errorf("IdmapSet of container '%s' is nil", c.Name()) } err := idmapset.ShiftRootfs(rpath) if err != nil { shared.Debugf("Shift of rootfs %s failed: %s", rpath, err) return err } /* Set an acl so the container root can descend the container dir */ // TODO: i changed this so it calls ss.setUnprivUserAcl, which does // the acl change only if the container is not privileged, think thats right. return ss.setUnprivUserAcl(c, dpath) } func (ss *storageShared) setUnprivUserAcl(c container, destPath string) error { idmapset := c.IdmapSet() // Skip for privileged containers if idmapset == nil { return nil } // Make sure the map is valid. 
Skip if container uid 0 == host uid 0 uid, _ := idmapset.ShiftIntoNs(0, 0) switch uid { case -1: return fmt.Errorf("Container doesn't have a uid 0 in its map") case 0: return nil } // Attempt to set a POSIX ACL first. Fallback to chmod if the fs doesn't support it. acl := fmt.Sprintf("%d:rx", uid) _, err := exec.Command("setfacl", "-m", acl, destPath).CombinedOutput() if err != nil { _, err := exec.Command("chmod", "+x", destPath).CombinedOutput() if err != nil { return fmt.Errorf("Failed to chmod the container path.") } } return nil } type storageLogWrapper struct { w storage log shared.Logger } func (lw *storageLogWrapper) Init(config map[string]interface{}) (storage, error) { _, err := lw.w.Init(config) lw.log = logging.AddContext( shared.Log, log.Ctx{"driver": fmt.Sprintf("storage/%s", lw.w.GetStorageTypeName())}, ) lw.log.Info("Init") return lw, err } func (lw *storageLogWrapper) GetStorageType() storageType { return lw.w.GetStorageType() } func (lw *storageLogWrapper) GetStorageTypeName() string { return lw.w.GetStorageTypeName() } func (lw *storageLogWrapper) GetStorageTypeVersion() string { return lw.w.GetStorageTypeVersion() } func (lw *storageLogWrapper) ContainerCreate(container container) error { lw.log.Debug( "ContainerCreate", log.Ctx{ "name": container.Name(), "isPrivileged": container.IsPrivileged()}) return lw.w.ContainerCreate(container) } func (lw *storageLogWrapper) ContainerCreateFromImage( container container, imageFingerprint string) error { lw.log.Debug( "ContainerCreateFromImage", log.Ctx{ "imageFingerprint": imageFingerprint, "name": container.Name(), "isPrivileged": container.IsPrivileged()}) return lw.w.ContainerCreateFromImage(container, imageFingerprint) } func (lw *storageLogWrapper) ContainerCanRestore(container container, sourceContainer container) error { lw.log.Debug("ContainerCanRestore", log.Ctx{"container": container.Name()}) return lw.w.ContainerCanRestore(container, sourceContainer) } func (lw *storageLogWrapper) 
ContainerDelete(container container) error { lw.log.Debug("ContainerDelete", log.Ctx{"container": container.Name()}) return lw.w.ContainerDelete(container) } func (lw *storageLogWrapper) ContainerCopy( container container, sourceContainer container) error { lw.log.Debug( "ContainerCopy", log.Ctx{ "container": container.Name(), "source": sourceContainer.Name()}) return lw.w.ContainerCopy(container, sourceContainer) } func (lw *storageLogWrapper) ContainerStart(container container) error { lw.log.Debug("ContainerStart", log.Ctx{"container": container.Name()}) return lw.w.ContainerStart(container) } func (lw *storageLogWrapper) ContainerStop(container container) error { lw.log.Debug("ContainerStop", log.Ctx{"container": container.Name()}) return lw.w.ContainerStop(container) } func (lw *storageLogWrapper) ContainerRename( container container, newName string) error { lw.log.Debug( "ContainerRename", log.Ctx{ "container": container.Name(), "newName": newName}) return lw.w.ContainerRename(container, newName) } func (lw *storageLogWrapper) ContainerRestore( container container, sourceContainer container) error { lw.log.Debug( "ContainerRestore", log.Ctx{ "container": container.Name(), "source": sourceContainer.Name()}) return lw.w.ContainerRestore(container, sourceContainer) } func (lw *storageLogWrapper) ContainerSetQuota( container container, size int64) error { lw.log.Debug( "ContainerSetQuota", log.Ctx{ "container": container.Name(), "size": size}) return lw.w.ContainerSetQuota(container, size) } func (lw *storageLogWrapper) ContainerGetUsage( container container) (int64, error) { lw.log.Debug( "ContainerGetUsage", log.Ctx{ "container": container.Name()}) return lw.w.ContainerGetUsage(container) } func (lw *storageLogWrapper) ContainerSnapshotCreate( snapshotContainer container, sourceContainer container) error { lw.log.Debug("ContainerSnapshotCreate", log.Ctx{ "snapshotContainer": snapshotContainer.Name(), "sourceContainer": sourceContainer.Name()}) return 
lw.w.ContainerSnapshotCreate(snapshotContainer, sourceContainer) } func (lw *storageLogWrapper) ContainerSnapshotCreateEmpty(snapshotContainer container) error { lw.log.Debug("ContainerSnapshotCreateEmpty", log.Ctx{ "snapshotContainer": snapshotContainer.Name()}) return lw.w.ContainerSnapshotCreateEmpty(snapshotContainer) } func (lw *storageLogWrapper) ContainerSnapshotDelete( snapshotContainer container) error { lw.log.Debug("ContainerSnapshotDelete", log.Ctx{"snapshotContainer": snapshotContainer.Name()}) return lw.w.ContainerSnapshotDelete(snapshotContainer) } func (lw *storageLogWrapper) ContainerSnapshotRename( snapshotContainer container, newName string) error { lw.log.Debug("ContainerSnapshotRename", log.Ctx{ "snapshotContainer": snapshotContainer.Name(), "newName": newName}) return lw.w.ContainerSnapshotRename(snapshotContainer, newName) } func (lw *storageLogWrapper) ContainerSnapshotStart(container container) error { lw.log.Debug("ContainerSnapshotStart", log.Ctx{"container": container.Name()}) return lw.w.ContainerSnapshotStart(container) } func (lw *storageLogWrapper) ContainerSnapshotStop(container container) error { lw.log.Debug("ContainerSnapshotStop", log.Ctx{"container": container.Name()}) return lw.w.ContainerSnapshotStop(container) } func (lw *storageLogWrapper) ImageCreate(fingerprint string) error { lw.log.Debug( "ImageCreate", log.Ctx{"fingerprint": fingerprint}) return lw.w.ImageCreate(fingerprint) } func (lw *storageLogWrapper) ImageDelete(fingerprint string) error { lw.log.Debug("ImageDelete", log.Ctx{"fingerprint": fingerprint}) return lw.w.ImageDelete(fingerprint) } func (lw *storageLogWrapper) MigrationType() MigrationFSType { return lw.w.MigrationType() } func (lw *storageLogWrapper) MigrationSource(container container) (MigrationStorageSourceDriver, error) { lw.log.Debug("MigrationSource", log.Ctx{"container": container.Name()}) return lw.w.MigrationSource(container) } func (lw *storageLogWrapper) MigrationSink(live bool, container 
container, objects []container, conn *websocket.Conn) error { objNames := []string{} for _, obj := range objects { objNames = append(objNames, obj.Name()) } lw.log.Debug("MigrationSink", log.Ctx{ "live": live, "container": container.Name(), "objects": objNames, }) return lw.w.MigrationSink(live, container, objects, conn) } func ShiftIfNecessary(container container, srcIdmap *shared.IdmapSet) error { dstIdmap := container.IdmapSet() if dstIdmap == nil { dstIdmap = new(shared.IdmapSet) } if !reflect.DeepEqual(srcIdmap, dstIdmap) { var jsonIdmap string if srcIdmap != nil { idmapBytes, err := json.Marshal(srcIdmap.Idmap) if err != nil { return err } jsonIdmap = string(idmapBytes) } else { jsonIdmap = "[]" } err := container.ConfigKeySet("volatile.last_state.idmap", jsonIdmap) if err != nil { return err } } return nil } type rsyncStorageSourceDriver struct { container container snapshots []container } func (s rsyncStorageSourceDriver) Snapshots() []container { return s.snapshots } func (s rsyncStorageSourceDriver) SendWhileRunning(conn *websocket.Conn) error { toSend := append([]container{s.container}, s.snapshots...) 
for _, send := range toSend { path := send.Path() if err := RsyncSend(shared.AddSlash(path), conn); err != nil { return err } } return nil } func (s rsyncStorageSourceDriver) SendAfterCheckpoint(conn *websocket.Conn) error { /* resync anything that changed between our first send and the checkpoint */ return RsyncSend(shared.AddSlash(s.container.Path()), conn) } func (s rsyncStorageSourceDriver) Cleanup() { /* no-op */ } func rsyncMigrationSource(container container) (MigrationStorageSourceDriver, error) { snapshots, err := container.Snapshots() if err != nil { return nil, err } return rsyncStorageSourceDriver{container, snapshots}, nil } func rsyncMigrationSink(live bool, container container, snapshots []container, conn *websocket.Conn) error { /* the first object is the actual container */ if err := RsyncRecv(shared.AddSlash(container.Path()), conn); err != nil { return err } if len(snapshots) > 0 { err := os.MkdirAll(shared.VarPath(fmt.Sprintf("snapshots/%s", container.Name())), 0700) if err != nil { return err } } for _, snap := range snapshots { if err := RsyncRecv(shared.AddSlash(snap.Path()), conn); err != nil { return err } } if live { /* now receive the final sync */ if err := RsyncRecv(shared.AddSlash(container.Path()), conn); err != nil { return err } } return nil } // Useful functions for unreliable backends func tryExec(name string, arg ...string) ([]byte, error) { var err error var output []byte for i := 0; i < 20; i++ { output, err = exec.Command(name, arg...).CombinedOutput() if err == nil { break } time.Sleep(500 * time.Millisecond) } return output, err } func tryMount(src string, dst string, fs string, flags uintptr, options string) error { var err error for i := 0; i < 20; i++ { err = syscall.Mount(src, dst, fs, flags, options) if err == nil { break } time.Sleep(500 * time.Millisecond) } if err != nil { return err } return nil } func tryUnmount(path string, flags int) error { var err error for i := 0; i < 20; i++ { err = syscall.Unmount(path, 
flags) if err == nil { break } time.Sleep(500 * time.Millisecond) } if err != nil { return err } return nil } lxd-2.0.2/lxd/storage_32bit.go000066400000000000000000000003171272140510300160630ustar00rootroot00000000000000// +build 386 arm ppc s390 package main const ( /* This is really 0x9123683E, go wants us to give it in signed form * since we use it as a signed constant. */ filesystemSuperMagicBtrfs = -1859950530 ) lxd-2.0.2/lxd/storage_64bit.go000066400000000000000000000001531272140510300160660ustar00rootroot00000000000000// +build amd64 ppc64 ppc64le arm64 s390x package main const ( filesystemSuperMagicBtrfs = 0x9123683E ) lxd-2.0.2/lxd/storage_btrfs.go000066400000000000000000000564571272140510300163000ustar00rootroot00000000000000package main import ( "fmt" "io/ioutil" "os" "os/exec" "path" "path/filepath" "strconv" "strings" "syscall" "github.com/gorilla/websocket" "github.com/pborman/uuid" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) type storageBtrfs struct { d *Daemon storageShared } func (s *storageBtrfs) Init(config map[string]interface{}) (storage, error) { s.sType = storageTypeBtrfs s.sTypeName = storageTypeToString(s.sType) if err := s.initShared(); err != nil { return s, err } out, err := exec.LookPath("btrfs") if err != nil || len(out) == 0 { return s, fmt.Errorf("The 'btrfs' tool isn't available") } output, err := exec.Command("btrfs", "version").CombinedOutput() if err != nil { return s, fmt.Errorf("The 'btrfs' tool isn't working properly") } count, err := fmt.Sscanf(strings.SplitN(string(output), " ", 2)[1], "v%s\n", &s.sTypeVersion) if err != nil || count != 1 { return s, fmt.Errorf("The 'btrfs' tool isn't working properly") } return s, nil } func (s *storageBtrfs) ContainerCreate(container container) error { cPath := container.Path() // MkdirAll the pardir of the BTRFS Subvolume. 
if err := os.MkdirAll(filepath.Dir(cPath), 0755); err != nil { return err } // Create the BTRFS Subvolume err := s.subvolCreate(cPath) if err != nil { return err } if container.IsPrivileged() { if err := os.Chmod(cPath, 0700); err != nil { return err } } return container.TemplateApply("create") } func (s *storageBtrfs) ContainerCreateFromImage( container container, imageFingerprint string) error { imageSubvol := fmt.Sprintf( "%s.btrfs", shared.VarPath("images", imageFingerprint)) // Create the btrfs subvol of the image first if it doesn't exist. if !shared.PathExists(imageSubvol) { if err := s.ImageCreate(imageFingerprint); err != nil { return err } } // Now make a snapshot of the image subvol err := s.subvolsSnapshot(imageSubvol, container.Path(), false) if err != nil { return err } if !container.IsPrivileged() { if err = s.shiftRootfs(container); err != nil { s.ContainerDelete(container) return err } } else { if err := os.Chmod(container.Path(), 0700); err != nil { return err } } return container.TemplateApply("create") } func (s *storageBtrfs) ContainerCanRestore(container container, sourceContainer container) error { return nil } func (s *storageBtrfs) ContainerDelete(container container) error { cPath := container.Path() // First remove the subvol (if it was one). if s.isSubvolume(cPath) { if err := s.subvolsDelete(cPath); err != nil { return err } } // Then the directory (if it still exists). if shared.PathExists(cPath) { err := os.RemoveAll(cPath) if err != nil { s.log.Error("ContainerDelete: failed", log.Ctx{"cPath": cPath, "err": err}) return fmt.Errorf("Error cleaning up %s: %s", cPath, err) } } return nil } func (s *storageBtrfs) ContainerCopy(container container, sourceContainer container) error { subvol := sourceContainer.Path() dpath := container.Path() if s.isSubvolume(subvol) { // Snapshot the source container err := s.subvolsSnapshot(subvol, dpath, false) if err != nil { return err } } else { // Create the BTRFS Container.
if err := s.ContainerCreate(container); err != nil { return err } /* * Copy by using rsync */ output, err := storageRsyncCopy( sourceContainer.Path(), container.Path()) if err != nil { s.ContainerDelete(container) s.log.Error("ContainerCopy: rsync failed", log.Ctx{"output": string(output)}) return fmt.Errorf("rsync failed: %s", string(output)) } } if err := s.setUnprivUserAcl(sourceContainer, dpath); err != nil { s.ContainerDelete(container) return err } return container.TemplateApply("copy") } func (s *storageBtrfs) ContainerStart(container container) error { return nil } func (s *storageBtrfs) ContainerStop(container container) error { return nil } func (s *storageBtrfs) ContainerRename(container container, newName string) error { oldName := container.Name() oldPath := container.Path() newPath := containerPath(newName, false) if err := os.Rename(oldPath, newPath); err != nil { return err } if shared.PathExists(shared.VarPath(fmt.Sprintf("snapshots/%s", oldName))) { err := os.Rename(shared.VarPath(fmt.Sprintf("snapshots/%s", oldName)), shared.VarPath(fmt.Sprintf("snapshots/%s", newName))) if err != nil { return err } } return nil } func (s *storageBtrfs) ContainerRestore( container container, sourceContainer container) error { targetSubVol := container.Path() sourceSubVol := sourceContainer.Path() sourceBackupPath := container.Path() + ".back" // Create a backup of the container err := os.Rename(container.Path(), sourceBackupPath) if err != nil { return err } var failure error if s.isSubvolume(sourceSubVol) { // Restore using btrfs snapshots. err := s.subvolsSnapshot(sourceSubVol, targetSubVol, false) if err != nil { failure = err } } else { // Restore using rsync but create a btrfs subvol. 
if err := s.subvolCreate(targetSubVol); err == nil { output, err := storageRsyncCopy( sourceSubVol, targetSubVol) if err != nil { s.log.Error( "ContainerRestore: rsync failed", log.Ctx{"output": string(output)}) failure = err } } else { failure = err } } // Now allow unprivileged users to access its data. if err := s.setUnprivUserAcl(sourceContainer, targetSubVol); err != nil { failure = err } if failure != nil { // Restore original container s.ContainerDelete(container) os.Rename(sourceBackupPath, container.Path()) } else { // Remove the backup we made if s.isSubvolume(sourceBackupPath) { return s.subvolDelete(sourceBackupPath) } os.RemoveAll(sourceBackupPath) } return failure } func (s *storageBtrfs) ContainerSetQuota(container container, size int64) error { subvol := container.Path() _, err := s.subvolQGroup(subvol) if err != nil { return err } output, err := exec.Command( "btrfs", "qgroup", "limit", "-e", fmt.Sprintf("%d", size), subvol).CombinedOutput() if err != nil { return fmt.Errorf("Failed to set btrfs quota: %s", output) } return nil } func (s *storageBtrfs) ContainerGetUsage(container container) (int64, error) { return s.subvolQGroupUsage(container.Path()) } func (s *storageBtrfs) ContainerSnapshotCreate( snapshotContainer container, sourceContainer container) error { subvol := sourceContainer.Path() dpath := snapshotContainer.Path() if s.isSubvolume(subvol) { // Create a readonly snapshot of the source.
err := s.subvolsSnapshot(subvol, dpath, true) if err != nil { s.ContainerSnapshotDelete(snapshotContainer) return err } } else { /* * Copy by using rsync */ output, err := storageRsyncCopy( subvol, dpath) if err != nil { s.ContainerSnapshotDelete(snapshotContainer) s.log.Error( "ContainerSnapshotCreate: rsync failed", log.Ctx{"output": string(output)}) return fmt.Errorf("rsync failed: %s", string(output)) } } return nil } func (s *storageBtrfs) ContainerSnapshotDelete( snapshotContainer container) error { err := s.ContainerDelete(snapshotContainer) if err != nil { return fmt.Errorf("Error deleting snapshot %s: %s", snapshotContainer.Name(), err) } oldPathParent := filepath.Dir(snapshotContainer.Path()) if ok, _ := shared.PathIsEmpty(oldPathParent); ok { os.Remove(oldPathParent) } return nil } func (s *storageBtrfs) ContainerSnapshotStart(container container) error { if shared.PathExists(container.Path() + ".ro") { return fmt.Errorf("The snapshot is already mounted read-write.") } err := os.Rename(container.Path(), container.Path()+".ro") if err != nil { return err } err = s.subvolsSnapshot(container.Path()+".ro", container.Path(), false) if err != nil { return err } return nil } func (s *storageBtrfs) ContainerSnapshotStop(container container) error { if !shared.PathExists(container.Path() + ".ro") { return fmt.Errorf("The snapshot isn't currently mounted read-write.") } err := s.subvolsDelete(container.Path()) if err != nil { return err } err = os.Rename(container.Path()+".ro", container.Path()) if err != nil { return err } return nil } // ContainerSnapshotRename renames a snapshot of a container. func (s *storageBtrfs) ContainerSnapshotRename( snapshotContainer container, newName string) error { oldPath := snapshotContainer.Path() newPath := containerPath(newName, true) // Create the new parent. if !shared.PathExists(filepath.Dir(newPath)) { os.MkdirAll(filepath.Dir(newPath), 0700) } // Now rename the snapshot. 
if !s.isSubvolume(oldPath) { if err := os.Rename(oldPath, newPath); err != nil { return err } } else { if err := s.subvolsSnapshot(oldPath, newPath, true); err != nil { return err } if err := s.subvolsDelete(oldPath); err != nil { return err } } // Remove the old parent (on container rename) if it's empty. if ok, _ := shared.PathIsEmpty(filepath.Dir(oldPath)); ok { os.Remove(filepath.Dir(oldPath)) } return nil } func (s *storageBtrfs) ContainerSnapshotCreateEmpty(snapshotContainer container) error { dpath := snapshotContainer.Path() return s.subvolCreate(dpath) } func (s *storageBtrfs) ImageCreate(fingerprint string) error { imagePath := shared.VarPath("images", fingerprint) subvol := fmt.Sprintf("%s.btrfs", imagePath) if err := s.subvolCreate(subvol); err != nil { return err } if err := untarImage(imagePath, subvol); err != nil { s.subvolDelete(subvol) return err } return nil } func (s *storageBtrfs) ImageDelete(fingerprint string) error { imagePath := shared.VarPath("images", fingerprint) subvol := fmt.Sprintf("%s.btrfs", imagePath) if s.isSubvolume(subvol) { if err := s.subvolsDelete(subvol); err != nil { return err } } return nil } func (s *storageBtrfs) subvolCreate(subvol string) error { parentDestPath := filepath.Dir(subvol) if !shared.PathExists(parentDestPath) { if err := os.MkdirAll(parentDestPath, 0700); err != nil { return err } } output, err := exec.Command( "btrfs", "subvolume", "create", subvol).CombinedOutput() if err != nil { s.log.Debug( "subvolume create failed", log.Ctx{"subvol": subvol, "output": string(output)}, ) return fmt.Errorf( "btrfs subvolume create failed, subvol=%s, output=%s", subvol, string(output), ) } return nil } func (s *storageBtrfs) subvolQGroup(subvol string) (string, error) { output, err := exec.Command( "btrfs", "qgroup", "show", subvol, "-e", "-f").CombinedOutput() if err != nil { return "", fmt.Errorf("btrfs quotas not supported.
Try enabling them with 'btrfs quota enable'.") } var qgroup string for _, line := range strings.Split(string(output), "\n") { if line == "" || strings.HasPrefix(line, "qgroupid") || strings.HasPrefix(line, "---") { continue } fields := strings.Fields(line) if len(fields) != 4 { continue } qgroup = fields[0] } if qgroup == "" { return "", fmt.Errorf("Unable to find quota group") } return qgroup, nil } func (s *storageBtrfs) subvolQGroupUsage(subvol string) (int64, error) { output, err := exec.Command( "btrfs", "qgroup", "show", subvol, "-e", "-f").CombinedOutput() if err != nil { return -1, fmt.Errorf("btrfs quotas not supported. Try enabling them with 'btrfs quota enable'.") } for _, line := range strings.Split(string(output), "\n") { if line == "" || strings.HasPrefix(line, "qgroupid") || strings.HasPrefix(line, "---") { continue } fields := strings.Fields(line) if len(fields) != 4 { continue } usage, err := strconv.ParseInt(fields[2], 10, 64) if err != nil { continue } return usage, nil } return -1, fmt.Errorf("Unable to find current qgroup usage") } func (s *storageBtrfs) subvolDelete(subvol string) error { // Attempt (but don't fail on) to delete any qgroup on the subvolume qgroup, err := s.subvolQGroup(subvol) if err == nil { output, err := exec.Command( "btrfs", "qgroup", "destroy", qgroup, subvol).CombinedOutput() if err != nil { s.log.Warn( "subvolume qgroup delete failed", log.Ctx{"subvol": subvol, "output": string(output)}, ) } } // Delete the subvolume itself output, err := exec.Command( "btrfs", "subvolume", "delete", subvol, ).CombinedOutput() if err != nil { s.log.Warn( "subvolume delete failed", log.Ctx{"subvol": subvol, "output": string(output)}, ) } return nil } // subvolsDelete is the recursive variant on subvolDelete, // it first deletes subvolumes of the subvolume and then the // subvolume itself. func (s *storageBtrfs) subvolsDelete(subvol string) error { // Delete subsubvols. 
subsubvols, err := s.getSubVolumes(subvol) if err != nil { return err } for _, subsubvol := range subsubvols { s.log.Debug( "Deleting subsubvol", log.Ctx{ "subvol": subvol, "subsubvol": subsubvol}) if err := s.subvolDelete(path.Join(subvol, subsubvol)); err != nil { return err } } // Delete the subvol itself if err := s.subvolDelete(subvol); err != nil { return err } return nil } /* * subvolSnapshot creates a snapshot of "source" to "dest" * the result will be readonly if "readonly" is True. */ func (s *storageBtrfs) subvolSnapshot( source string, dest string, readonly bool) error { parentDestPath := filepath.Dir(dest) if !shared.PathExists(parentDestPath) { if err := os.MkdirAll(parentDestPath, 0700); err != nil { return err } } if shared.PathExists(dest) { if err := os.Remove(dest); err != nil { return err } } var output []byte var err error if readonly { output, err = exec.Command( "btrfs", "subvolume", "snapshot", "-r", source, dest).CombinedOutput() } else { output, err = exec.Command( "btrfs", "subvolume", "snapshot", source, dest).CombinedOutput() } if err != nil { s.log.Error( "subvolume snapshot failed", log.Ctx{"source": source, "dest": dest, "output": string(output)}, ) return fmt.Errorf( "subvolume snapshot failed, source=%s, dest=%s, output=%s", source, dest, string(output), ) } return err } func (s *storageBtrfs) subvolsSnapshot( source string, dest string, readonly bool) error { // Get a list of subvolumes of the root subsubvols, err := s.getSubVolumes(source) if err != nil { return err } if len(subsubvols) > 0 && readonly { // A root with subvolumes can never be readonly, // also don't make subvolumes readonly. readonly = false s.log.Warn( "Subvolumes detected, ignoring ro flag", log.Ctx{"source": source, "dest": dest}) } // First snapshot the root if err := s.subvolSnapshot(source, dest, readonly); err != nil { return err } // Now snapshot all subvolumes of the root. 
for _, subsubvol := range subsubvols { if err := s.subvolSnapshot( path.Join(source, subsubvol), path.Join(dest, subsubvol), readonly); err != nil { return err } } return nil } /* * isSubvolume returns true if the given Path is a btrfs subvolume * else false. */ func (s *storageBtrfs) isSubvolume(subvolPath string) bool { if runningInUserns { // subvolume show is restricted to real root, use a workaround fs := syscall.Statfs_t{} err := syscall.Statfs(subvolPath, &fs) if err != nil { return false } if fs.Type != filesystemSuperMagicBtrfs { return false } parentFs := syscall.Statfs_t{} err = syscall.Statfs(path.Dir(subvolPath), &parentFs) if err != nil { return false } if fs.Fsid == parentFs.Fsid { return false } return true } output, err := exec.Command( "btrfs", "subvolume", "show", subvolPath).CombinedOutput() if err != nil || strings.HasPrefix(string(output), "ERROR: ") { return false } return true } // getSubVolumes returns a list of relative subvolume paths of "path". func (s *storageBtrfs) getSubVolumes(path string) ([]string, error) { result := []string{} if runningInUserns { if !strings.HasSuffix(path, "/") { path = path + "/" } // Unprivileged users can't get to fs internals filepath.Walk(path, func(fpath string, fi os.FileInfo, err error) error { if strings.TrimRight(fpath, "/") == strings.TrimRight(path, "/") { return nil } if err != nil { return nil } if !fi.IsDir() { return nil } if s.isSubvolume(fpath) { result = append(result, strings.TrimPrefix(fpath, path)) } return nil }) return result, nil } out, err := exec.Command( "btrfs", "inspect-internal", "rootid", path).CombinedOutput() if err != nil { return result, fmt.Errorf( "Unable to get btrfs rootid, path='%s', err='%s'", path, err) } rootid := strings.TrimRight(string(out), "\n") out, err = exec.Command( "btrfs", "inspect-internal", "subvolid-resolve", rootid, path).CombinedOutput() if err != nil { return result, fmt.Errorf( "Unable to resolve btrfs rootid, path='%s', err='%s'", path, err) } 
basePath := strings.TrimRight(string(out), "\n") out, err = exec.Command( "btrfs", "subvolume", "list", "-o", path).CombinedOutput() if err != nil { return result, fmt.Errorf( "Unable to list subvolumes, path='%s', err='%s'", path, err) } lines := strings.Split(string(out), "\n") for _, line := range lines { if line == "" { continue } cols := strings.Fields(line) result = append(result, cols[8][len(basePath):]) } return result, nil } type btrfsMigrationSourceDriver struct { container container snapshots []container btrfsSnapshotNames []string btrfs *storageBtrfs runningSnapName string stoppedSnapName string } func (s *btrfsMigrationSourceDriver) Snapshots() []container { return s.snapshots } func (s *btrfsMigrationSourceDriver) send(conn *websocket.Conn, btrfsPath string, btrfsParent string) error { args := []string{"send", btrfsPath} if btrfsParent != "" { args = append(args, "-p", btrfsParent) } cmd := exec.Command("btrfs", args...) stdout, err := cmd.StdoutPipe() if err != nil { return err } stderr, err := cmd.StderrPipe() if err != nil { return err } if err := cmd.Start(); err != nil { return err } <-shared.WebsocketSendStream(conn, stdout) output, err := ioutil.ReadAll(stderr) if err != nil { shared.Log.Error("problem reading btrfs send stderr", "err", err) } err = cmd.Wait() if err != nil { shared.Log.Error("problem with btrfs send", "output", string(output)) } return err } func (s *btrfsMigrationSourceDriver) SendWhileRunning(conn *websocket.Conn) error { if s.container.IsSnapshot() { tmpPath := containerPath(fmt.Sprintf("%s/.migration-send-%s", s.container.Name(), uuid.NewRandom().String()), true) err := os.MkdirAll(tmpPath, 0700) if err != nil { return err } btrfsPath := fmt.Sprintf("%s/.root", tmpPath) if err := s.btrfs.subvolSnapshot(s.container.Path(), btrfsPath, true); err != nil { return err } defer s.btrfs.subvolDelete(btrfsPath) return s.send(conn, btrfsPath, "") } for i, snap := range s.snapshots { prev := "" if i > 0 { prev = 
s.snapshots[i-1].Path() } if err := s.send(conn, snap.Path(), prev); err != nil { return err } } /* We can't send running fses, so let's snapshot the fs and send * the snapshot. */ tmpPath := containerPath(fmt.Sprintf("%s/.migration-send-%s", s.container.Name(), uuid.NewRandom().String()), true) err := os.MkdirAll(tmpPath, 0700) if err != nil { return err } s.runningSnapName = fmt.Sprintf("%s/.root", tmpPath) if err := s.btrfs.subvolSnapshot(s.container.Path(), s.runningSnapName, true); err != nil { return err } btrfsParent := "" if len(s.btrfsSnapshotNames) > 0 { btrfsParent = s.btrfsSnapshotNames[len(s.btrfsSnapshotNames)-1] } return s.send(conn, s.runningSnapName, btrfsParent) } func (s *btrfsMigrationSourceDriver) SendAfterCheckpoint(conn *websocket.Conn) error { tmpPath := containerPath(fmt.Sprintf("%s/.migration-send-%s", s.container.Name(), uuid.NewRandom().String()), true) err := os.MkdirAll(tmpPath, 0700) if err != nil { return err } s.stoppedSnapName = fmt.Sprintf("%s/.root", tmpPath) if err := s.btrfs.subvolSnapshot(s.container.Path(), s.stoppedSnapName, true); err != nil { return err } return s.send(conn, s.stoppedSnapName, s.runningSnapName) } func (s *btrfsMigrationSourceDriver) Cleanup() { if s.stoppedSnapName != "" { s.btrfs.subvolDelete(s.stoppedSnapName) } if s.runningSnapName != "" { s.btrfs.subvolDelete(s.runningSnapName) } } func (s *storageBtrfs) MigrationType() MigrationFSType { if runningInUserns { return MigrationFSType_RSYNC } else { return MigrationFSType_BTRFS } } func (s *storageBtrfs) MigrationSource(c container) (MigrationStorageSourceDriver, error) { if runningInUserns { return rsyncMigrationSource(c) } /* List all the snapshots in order of reverse creation. The idea here * is that we send the oldest to newest snapshot, hopefully saving on * xfer costs. Then, after all that, we send the container itself. 
*/ snapshots, err := c.Snapshots() if err != nil { return nil, err } driver := &btrfsMigrationSourceDriver{ container: c, snapshots: snapshots, btrfsSnapshotNames: []string{}, btrfs: s, } for _, snap := range snapshots { btrfsPath := snap.Path() driver.btrfsSnapshotNames = append(driver.btrfsSnapshotNames, btrfsPath) } return driver, nil } func (s *storageBtrfs) MigrationSink(live bool, container container, snapshots []container, conn *websocket.Conn) error { if runningInUserns { return rsyncMigrationSink(live, container, snapshots, conn) } cName := container.Name() snapshotsPath := shared.VarPath(fmt.Sprintf("snapshots/%s", cName)) if !shared.PathExists(snapshotsPath) { err := os.MkdirAll(shared.VarPath(fmt.Sprintf("snapshots/%s", cName)), 0700) if err != nil { return err } } btrfsRecv := func(btrfsPath string, targetPath string, isSnapshot bool) error { args := []string{"receive", "-e", btrfsPath} cmd := exec.Command("btrfs", args...) // Remove the existing pre-created subvolume err := s.subvolsDelete(targetPath) if err != nil { return err } stdin, err := cmd.StdinPipe() if err != nil { return err } stderr, err := cmd.StderrPipe() if err != nil { return err } if err := cmd.Start(); err != nil { return err } <-shared.WebsocketRecvStream(stdin, conn) output, err := ioutil.ReadAll(stderr) if err != nil { shared.Debugf("problem reading btrfs receive stderr %s", err) } err = cmd.Wait() if err != nil { shared.Log.Error("problem with btrfs receive", log.Ctx{"output": string(output)}) return err } if !isSnapshot { cPath := containerPath(fmt.Sprintf("%s/.root", cName), true) err := s.subvolSnapshot(cPath, targetPath, false) if err != nil { shared.Log.Error("problem with btrfs snapshot", log.Ctx{"err": err}) return err } err = s.subvolsDelete(cPath) if err != nil { shared.Log.Error("problem with btrfs delete", log.Ctx{"err": err}) return err } } return nil } for _, snap := range snapshots { if err := btrfsRecv(containerPath(cName, true), snap.Path(), true); err != nil { 
return err } } /* finally, do the real container */ if err := btrfsRecv(containerPath(cName, true), container.Path(), false); err != nil { return err } if live { if err := btrfsRecv(containerPath(cName, true), container.Path(), false); err != nil { return err } } // Cleanup if ok, _ := shared.PathIsEmpty(snapshotsPath); ok { err := os.Remove(snapshotsPath) if err != nil { return err } } return nil }

lxd-2.0.2/lxd/storage_dir.go:

package main import ( "fmt" "os" "os/exec" "path/filepath" "strings" "github.com/gorilla/websocket" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) type storageDir struct { d *Daemon storageShared } func (s *storageDir) Init(config map[string]interface{}) (storage, error) { s.sType = storageTypeDir s.sTypeName = storageTypeToString(s.sType) if err := s.initShared(); err != nil { return s, err } return s, nil } func (s *storageDir) ContainerCreate(container container) error { cPath := container.Path() if err := os.MkdirAll(cPath, 0755); err != nil { return fmt.Errorf("Error creating containers directory") } if container.IsPrivileged() { if err := os.Chmod(cPath, 0700); err != nil { return err } } return container.TemplateApply("create") } func (s *storageDir) ContainerCreateFromImage( container container, imageFingerprint string) error { rootfsPath := container.RootfsPath() if err := os.MkdirAll(rootfsPath, 0755); err != nil { return fmt.Errorf("Error creating rootfs directory") } if container.IsPrivileged() { if err := os.Chmod(container.Path(), 0700); err != nil { return err } } imagePath := shared.VarPath("images", imageFingerprint) if err := untarImage(imagePath, container.Path()); err != nil { s.ContainerDelete(container) return err } if !container.IsPrivileged() { if err := s.shiftRootfs(container); err != nil { s.ContainerDelete(container) return err } } return container.TemplateApply("create") } func (s *storageDir)
ContainerCanRestore(container container, sourceContainer container) error { return nil } func (s *storageDir) ContainerDelete(container container) error { cPath := container.Path() if !shared.PathExists(cPath) { return nil } err := os.RemoveAll(cPath) if err != nil { // RemoveAll fails on very long paths, so attempt an rm -Rf output, err := exec.Command("rm", "-Rf", cPath).CombinedOutput() if err != nil { s.log.Error("ContainerDelete: failed", log.Ctx{"cPath": cPath, "output": output}) return fmt.Errorf("Error cleaning up %s: %s", cPath, string(output)) } } return nil } func (s *storageDir) ContainerCopy( container container, sourceContainer container) error { oldPath := sourceContainer.RootfsPath() newPath := container.RootfsPath() /* * Copy by using rsync */ output, err := storageRsyncCopy(oldPath, newPath) if err != nil { s.ContainerDelete(container) s.log.Error("ContainerCopy: rsync failed", log.Ctx{"output": string(output)}) return fmt.Errorf("rsync failed: %s", string(output)) } err = s.setUnprivUserAcl(sourceContainer, container.Path()) if err != nil { return err } return container.TemplateApply("copy") } func (s *storageDir) ContainerStart(container container) error { return nil } func (s *storageDir) ContainerStop(container container) error { return nil } func (s *storageDir) ContainerRename(container container, newName string) error { oldName := container.Name() oldPath := container.Path() newPath := containerPath(newName, false) if err := os.Rename(oldPath, newPath); err != nil { return err } if shared.PathExists(shared.VarPath(fmt.Sprintf("snapshots/%s", oldName))) { err := os.Rename(shared.VarPath(fmt.Sprintf("snapshots/%s", oldName)), shared.VarPath(fmt.Sprintf("snapshots/%s", newName))) if err != nil { return err } } return nil } func (s *storageDir) ContainerRestore( container container, sourceContainer container) error { targetPath := container.Path() sourcePath := sourceContainer.Path() // Restore using rsync output, err := storageRsyncCopy(
sourcePath, targetPath) if err != nil { s.log.Error( "ContainerRestore: rsync failed", log.Ctx{"output": string(output)}) return err } // Now allow unprivileged users to access its data. if err := s.setUnprivUserAcl(sourceContainer, targetPath); err != nil { return err } return nil } func (s *storageDir) ContainerSetQuota(container container, size int64) error { return fmt.Errorf("The directory container backend doesn't support quotas.") } func (s *storageDir) ContainerGetUsage(container container) (int64, error) { return -1, fmt.Errorf("The directory container backend doesn't support quotas.") } func (s *storageDir) ContainerSnapshotCreate( snapshotContainer container, sourceContainer container) error { oldPath := sourceContainer.Path() newPath := snapshotContainer.Path() /* * Copy by using rsync */ output, err := storageRsyncCopy(oldPath, newPath) if err != nil { s.ContainerDelete(snapshotContainer) s.log.Error("ContainerSnapshotCreate: rsync failed", log.Ctx{"output": string(output)}) return fmt.Errorf("rsync failed: %s", string(output)) } return nil } func (s *storageDir) ContainerSnapshotCreateEmpty(snapshotContainer container) error { return os.MkdirAll(snapshotContainer.Path(), 0700) } func (s *storageDir) ContainerSnapshotDelete( snapshotContainer container) error { err := s.ContainerDelete(snapshotContainer) if err != nil { return fmt.Errorf("Error deleting snapshot %s: %s", snapshotContainer.Name(), err) } oldPathParent := filepath.Dir(snapshotContainer.Path()) if ok, _ := shared.PathIsEmpty(oldPathParent); ok { os.Remove(oldPathParent) } return nil } func (s *storageDir) ContainerSnapshotRename( snapshotContainer container, newName string) error { oldPath := snapshotContainer.Path() newPath := containerPath(newName, true) // Create the new parent. if strings.Contains(snapshotContainer.Name(), "/") { if !shared.PathExists(filepath.Dir(newPath)) { os.MkdirAll(filepath.Dir(newPath), 0700) } } // Now rename the snapshot. 
if err := os.Rename(oldPath, newPath); err != nil { return err } // Remove the old parent (on container rename) if it's empty. if strings.Contains(snapshotContainer.Name(), "/") { if ok, _ := shared.PathIsEmpty(filepath.Dir(oldPath)); ok { os.Remove(filepath.Dir(oldPath)) } } return nil } func (s *storageDir) ContainerSnapshotStart(container container) error { return nil } func (s *storageDir) ContainerSnapshotStop(container container) error { return nil } func (s *storageDir) ImageCreate(fingerprint string) error { return nil } func (s *storageDir) ImageDelete(fingerprint string) error { return nil } func (s *storageDir) MigrationType() MigrationFSType { return MigrationFSType_RSYNC } func (s *storageDir) MigrationSource(container container) (MigrationStorageSourceDriver, error) { return rsyncMigrationSource(container) } func (s *storageDir) MigrationSink(live bool, container container, snapshots []container, conn *websocket.Conn) error { return rsyncMigrationSink(live, container, snapshots, conn) }

lxd-2.0.2/lxd/storage_lvm.go:

package main import ( "fmt" "io/ioutil" "os" "os/exec" "path/filepath" "strconv" "strings" "syscall" "github.com/gorilla/websocket" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) func storageLVMCheckVolumeGroup(vgName string) error { output, err := exec.Command("vgdisplay", "-s", vgName).CombinedOutput() if err != nil { shared.Log.Debug("vgdisplay failed to find vg", log.Ctx{"output": string(output)}) return fmt.Errorf("LVM volume group '%s' not found", vgName) } return nil } func storageLVMThinpoolExists(vgName string, poolName string) (bool, error) { output, err := exec.Command("vgs", "--noheadings", "-o", "lv_attr", fmt.Sprintf("%s/%s", vgName, poolName)).Output() if err != nil { if exitError, ok := err.(*exec.ExitError); ok { waitStatus := exitError.Sys().(syscall.WaitStatus) if waitStatus.ExitStatus() == 5 { // pool LV was not found
return false, nil } } return false, fmt.Errorf("Error checking for pool '%s'", poolName) } // Found LV named poolname, check type: attrs := strings.TrimSpace(string(output[:])) if strings.HasPrefix(attrs, "t") { return true, nil } return false, fmt.Errorf("Pool named '%s' exists but is not a thin pool.", poolName) } func storageLVMGetThinPoolUsers(d *Daemon) ([]string, error) { results := []string{} if daemonConfig["storage.lvm_vg_name"].Get() == "" { return results, nil } cNames, err := dbContainersList(d.db, cTypeRegular) if err != nil { return results, err } for _, cName := range cNames { var lvLinkPath string if strings.Contains(cName, shared.SnapshotDelimiter) { lvLinkPath = shared.VarPath("snapshots", fmt.Sprintf("%s.lv", cName)) } else { lvLinkPath = shared.VarPath("containers", fmt.Sprintf("%s.lv", cName)) } if shared.PathExists(lvLinkPath) { results = append(results, cName) } } imageNames, err := dbImagesGet(d.db, false) if err != nil { return results, err } for _, imageName := range imageNames { imageLinkPath := shared.VarPath("images", fmt.Sprintf("%s.lv", imageName)) if shared.PathExists(imageLinkPath) { results = append(results, imageName) } } return results, nil } func storageLVMValidateThinPoolName(d *Daemon, key string, value string) error { users, err := storageLVMGetThinPoolUsers(d) if err != nil { return fmt.Errorf("Error checking if a pool is already in use: %v", err) } if len(users) > 0 { return fmt.Errorf("Can not change LVM config. 
Images or containers are still using LVs: %v", users) } vgname := daemonConfig["storage.lvm_vg_name"].Get() if value != "" { if vgname == "" { return fmt.Errorf("Can not set lvm_thinpool_name without lvm_vg_name set.") } poolExists, err := storageLVMThinpoolExists(vgname, value) if err != nil { return fmt.Errorf("Error checking for thin pool '%s' in '%s': %v", value, vgname, err) } if !poolExists { return fmt.Errorf("Pool '%s' does not exist in Volume Group '%s'", value, vgname) } } return nil } func storageLVMValidateVolumeGroupName(d *Daemon, key string, value string) error { users, err := storageLVMGetThinPoolUsers(d) if err != nil { return fmt.Errorf("Error checking if a pool is already in use: %v", err) } if len(users) > 0 { return fmt.Errorf("Can not change LVM config. Images or containers are still using LVs: %v", users) } if value != "" { err = storageLVMCheckVolumeGroup(value) if err != nil { return err } } return nil } func xfsGenerateNewUUID(lvpath string) error { output, err := exec.Command( "xfs_admin", "-U", "generate", lvpath).CombinedOutput() if err != nil { return fmt.Errorf("Error generating new UUID: %v\noutput:'%s'", err, string(output)) } return nil } func containerNameToLVName(containerName string) string { lvName := strings.Replace(containerName, "-", "--", -1) return strings.Replace(lvName, shared.SnapshotDelimiter, "-", -1) } type storageLvm struct { d *Daemon vgName string storageShared } func (s *storageLvm) Init(config map[string]interface{}) (storage, error) { s.sType = storageTypeLvm s.sTypeName = storageTypeToString(s.sType) if err := s.initShared(); err != nil { return s, err } output, err := exec.Command("lvm", "version").CombinedOutput() if err != nil { return nil, fmt.Errorf("Error getting LVM version: %v\noutput:'%s'", err, string(output)) } lines := strings.Split(string(output), "\n") s.sTypeVersion = "" for idx, line := range lines { fields := strings.SplitAfterN(line, ":", 2) if len(fields) < 2 { continue } if idx > 0 { 
s.sTypeVersion += " / " } s.sTypeVersion += strings.TrimSpace(fields[1]) } if config["vgName"] == nil { vgName := daemonConfig["storage.lvm_vg_name"].Get() if vgName == "" { return s, fmt.Errorf("LVM isn't enabled") } if err := storageLVMCheckVolumeGroup(vgName); err != nil { return s, err } s.vgName = vgName } else { s.vgName = config["vgName"].(string) } return s, nil } func versionSplit(versionString string) (int, int, int, error) { fs := strings.Split(versionString, ".") majs, mins, incs := fs[0], fs[1], fs[2] maj, err := strconv.Atoi(majs) if err != nil { return 0, 0, 0, err } min, err := strconv.Atoi(mins) if err != nil { return 0, 0, 0, err } incs = strings.Split(incs, "(")[0] inc, err := strconv.Atoi(incs) if err != nil { return 0, 0, 0, err } return maj, min, inc, nil } func (s *storageLvm) lvmVersionIsAtLeast(versionString string) (bool, error) { lvmVersion := strings.Split(s.sTypeVersion, "/")[0] lvmMaj, lvmMin, lvmInc, err := versionSplit(lvmVersion) if err != nil { return false, err } inMaj, inMin, inInc, err := versionSplit(versionString) if err != nil { return false, err } // Compare component by component, most significant first; a plain // field-wise comparison would wrongly reject e.g. 3.0.0 when 2.5.0 // is required. if lvmMaj != inMaj { return lvmMaj > inMaj, nil } if lvmMin != inMin { return lvmMin > inMin, nil } return lvmInc >= inInc, nil } func (s *storageLvm) ContainerCreate(container container) error { containerName := containerNameToLVName(container.Name()) lvpath, err := s.createThinLV(containerName) if err != nil { return err } if err := os.MkdirAll(container.Path(), 0755); err != nil { return err } var mode os.FileMode if container.IsPrivileged() { mode = 0700 } else { mode = 0755 } err = os.Chmod(container.Path(), mode) if err != nil { return err } dst := fmt.Sprintf("%s.lv", container.Path()) err = os.Symlink(lvpath, dst) if err != nil { return err } return nil } func (s *storageLvm) ContainerCreateFromImage( container container, imageFingerprint string) error { imageLVFilename := shared.VarPath( "images", fmt.Sprintf("%s.lv", imageFingerprint)) if !shared.PathExists(imageLVFilename) { if err :=
s.ImageCreate(imageFingerprint); err != nil { return err } } containerName := containerNameToLVName(container.Name()) lvpath, err := s.createSnapshotLV(containerName, imageFingerprint, false) if err != nil { return err } destPath := container.Path() if err := os.MkdirAll(destPath, 0755); err != nil { return fmt.Errorf("Error creating container directory: %v", err) } err = os.Chmod(destPath, 0700) if err != nil { return err } dst := shared.VarPath("containers", fmt.Sprintf("%s.lv", container.Name())) err = os.Symlink(lvpath, dst) if err != nil { return err } // Generate a new xfs's UUID fstype := daemonConfig["storage.lvm_fstype"].Get() if fstype == "xfs" { err := xfsGenerateNewUUID(lvpath) if err != nil { s.ContainerDelete(container) return err } } err = tryMount(lvpath, destPath, fstype, 0, "discard") if err != nil { s.ContainerDelete(container) return fmt.Errorf("Error mounting snapshot LV: %v", err) } var mode os.FileMode if container.IsPrivileged() { mode = 0700 } else { mode = 0755 } err = os.Chmod(destPath, mode) if err != nil { return err } if !container.IsPrivileged() { if err = s.shiftRootfs(container); err != nil { err2 := tryUnmount(destPath, 0) if err2 != nil { return fmt.Errorf("Error in umount: '%s' while cleaning up after error in shiftRootfs: '%s'", err2, err) } s.ContainerDelete(container) return fmt.Errorf("Error in shiftRootfs: %v", err) } } err = container.TemplateApply("create") if err != nil { s.log.Error("Error in create template during ContainerCreateFromImage, continuing to unmount", log.Ctx{"err": err}) } umounterr := tryUnmount(destPath, 0) if umounterr != nil { return fmt.Errorf("Error unmounting '%s' after shiftRootfs: %v", destPath, umounterr) } return err } func (s *storageLvm) ContainerCanRestore(container container, sourceContainer container) error { return nil } func (s *storageLvm) ContainerDelete(container container) error { lvName := containerNameToLVName(container.Name()) if err := s.removeLV(lvName); err != nil { return err } 
	lvLinkPath := fmt.Sprintf("%s.lv", container.Path())
	if err := os.Remove(lvLinkPath); err != nil {
		return err
	}

	cPath := container.Path()
	if err := os.RemoveAll(cPath); err != nil {
		s.log.Error("ContainerDelete: failed to remove path", log.Ctx{"cPath": cPath, "err": err})
		return fmt.Errorf("Cleaning up %s: %s", cPath, err)
	}

	return nil
}

func (s *storageLvm) ContainerCopy(container container, sourceContainer container) error {
	if s.isLVMContainer(sourceContainer) {
		if err := s.createSnapshotContainer(container, sourceContainer, false); err != nil {
			s.log.Error("Error creating snapshot LV for copy", log.Ctx{"err": err})
			return err
		}
	} else {
		s.log.Info("Copy from Non-LVM container", log.Ctx{"container": container.Name(), "sourceContainer": sourceContainer.Name()})
		if err := s.ContainerCreate(container); err != nil {
			s.log.Error("Error creating empty container", log.Ctx{"err": err})
			return err
		}

		if err := s.ContainerStart(container); err != nil {
			s.log.Error("Error starting/mounting container", log.Ctx{"err": err, "container": container.Name()})
			s.ContainerDelete(container)
			return err
		}

		output, err := storageRsyncCopy(
			sourceContainer.Path(),
			container.Path())
		if err != nil {
			s.log.Error("ContainerCopy: rsync failed", log.Ctx{"output": string(output)})
			s.ContainerDelete(container)
			return fmt.Errorf("rsync failed: %s", string(output))
		}

		if err := s.ContainerStop(container); err != nil {
			return err
		}
	}

	return container.TemplateApply("copy")
}

func (s *storageLvm) ContainerStart(container container) error {
	lvName := containerNameToLVName(container.Name())
	lvpath := fmt.Sprintf("/dev/%s/%s", s.vgName, lvName)
	fstype := daemonConfig["storage.lvm_fstype"].Get()

	err := tryMount(lvpath, container.Path(), fstype, 0, "discard")
	if err != nil {
		return fmt.Errorf(
			"Error mounting snapshot LV path='%s': %v",
			container.Path(),
			err)
	}

	return nil
}

func (s *storageLvm) ContainerStop(container container) error {
	err := tryUnmount(container.Path(), 0)
	if err != nil {
		return fmt.Errorf(
			"failed to unmount container path '%s'.\nError: %v",
			container.Path(),
			err)
	}

	return nil
}

func (s *storageLvm) ContainerRename(
	container container, newContainerName string) error {
	oldName := containerNameToLVName(container.Name())
	newName := containerNameToLVName(newContainerName)
	output, err := s.renameLV(oldName, newName)
	if err != nil {
		s.log.Error("Failed to rename a container LV",
			log.Ctx{"oldName": oldName, "newName": newName, "err": err, "output": string(output)})
		return fmt.Errorf("Failed to rename a container LV, oldName='%s', newName='%s', err='%s'", oldName, newName, err)
	}

	// Rename the snapshots
	if !container.IsSnapshot() {
		snaps, err := container.Snapshots()
		if err != nil {
			return err
		}

		for _, snap := range snaps {
			baseSnapName := filepath.Base(snap.Name())
			newSnapshotName := newName + shared.SnapshotDelimiter + baseSnapName
			err := s.ContainerRename(snap, newSnapshotName)
			if err != nil {
				return err
			}

			oldPathParent := filepath.Dir(snap.Path())
			if ok, _ := shared.PathIsEmpty(oldPathParent); ok {
				os.Remove(oldPathParent)
			}
		}
	}

	// Create a new symlink
	newSymPath := fmt.Sprintf("%s.lv", containerPath(newContainerName, container.IsSnapshot()))

	err = os.MkdirAll(filepath.Dir(containerPath(newContainerName, container.IsSnapshot())), 0700)
	if err != nil {
		return err
	}

	err = os.Symlink(fmt.Sprintf("/dev/%s/%s", s.vgName, newName), newSymPath)
	if err != nil {
		return err
	}

	// Remove the old symlink
	oldSymPath := fmt.Sprintf("%s.lv", container.Path())
	err = os.Remove(oldSymPath)
	if err != nil {
		return err
	}

	// Rename the directory
	err = os.Rename(container.Path(), containerPath(newContainerName, container.IsSnapshot()))
	if err != nil {
		return err
	}

	return nil
}

func (s *storageLvm) ContainerRestore(
	container container, sourceContainer container) error {
	srcName := containerNameToLVName(sourceContainer.Name())
	destName := containerNameToLVName(container.Name())

	err := s.removeLV(destName)
	if err != nil {
		return fmt.Errorf("Error removing LV about to be restored over: %v", err)
	}

	_, err = s.createSnapshotLV(destName, srcName, false)
	if err != nil {
		return fmt.Errorf("Error creating snapshot LV: %v", err)
	}

	return nil
}

func (s *storageLvm) ContainerSetQuota(container container, size int64) error {
	return fmt.Errorf("The LVM container backend doesn't support quotas.")
}

func (s *storageLvm) ContainerGetUsage(container container) (int64, error) {
	return -1, fmt.Errorf("The LVM container backend doesn't support quotas.")
}

func (s *storageLvm) ContainerSnapshotCreate(
	snapshotContainer container, sourceContainer container) error {
	return s.createSnapshotContainer(snapshotContainer, sourceContainer, true)
}

func (s *storageLvm) createSnapshotContainer(
	snapshotContainer container, sourceContainer container, readonly bool) error {
	srcName := containerNameToLVName(sourceContainer.Name())
	destName := containerNameToLVName(snapshotContainer.Name())

	shared.Log.Debug(
		"Creating snapshot",
		log.Ctx{"srcName": srcName, "destName": destName})

	lvpath, err := s.createSnapshotLV(destName, srcName, readonly)
	if err != nil {
		return fmt.Errorf("Error creating snapshot LV: %v", err)
	}

	destPath := snapshotContainer.Path()
	if err := os.MkdirAll(destPath, 0755); err != nil {
		return fmt.Errorf("Error creating container directory: %v", err)
	}

	var mode os.FileMode
	if snapshotContainer.IsPrivileged() {
		mode = 0700
	} else {
		mode = 0755
	}

	err = os.Chmod(destPath, mode)
	if err != nil {
		return err
	}

	dest := fmt.Sprintf("%s.lv", snapshotContainer.Path())
	err = os.Symlink(lvpath, dest)
	if err != nil {
		return err
	}

	return nil
}

func (s *storageLvm) ContainerSnapshotDelete(
	snapshotContainer container) error {
	err := s.ContainerDelete(snapshotContainer)
	if err != nil {
		return fmt.Errorf("Error deleting snapshot %s: %s", snapshotContainer.Name(), err)
	}

	oldPathParent := filepath.Dir(snapshotContainer.Path())
	if ok, _ := shared.PathIsEmpty(oldPathParent); ok {
		os.Remove(oldPathParent)
	}

	return nil
}

func (s *storageLvm) ContainerSnapshotRename(
	snapshotContainer container, newContainerName string) error {
	oldName := containerNameToLVName(snapshotContainer.Name())
	newName := containerNameToLVName(newContainerName)
	oldPath := snapshotContainer.Path()
	oldSymPath := fmt.Sprintf("%s.lv", oldPath)
	newPath := containerPath(newContainerName, true)
	newSymPath := fmt.Sprintf("%s.lv", newPath)

	// Rename the LV
	output, err := s.renameLV(oldName, newName)
	if err != nil {
		s.log.Error("Failed to rename a snapshot LV",
			log.Ctx{"oldName": oldName, "newName": newName, "err": err, "output": string(output)})
		return fmt.Errorf("Failed to rename a container LV, oldName='%s', newName='%s', err='%s'", oldName, newName, err)
	}

	// Delete the symlink
	err = os.Remove(oldSymPath)
	if err != nil {
		return fmt.Errorf("Failed to remove old symlink: %s", err)
	}

	// Create the symlink
	err = os.Symlink(fmt.Sprintf("/dev/%s/%s", s.vgName, newName), newSymPath)
	if err != nil {
		return fmt.Errorf("Failed to create symlink: %s", err)
	}

	// Rename the mount point
	err = os.Rename(oldPath, newPath)
	if err != nil {
		return fmt.Errorf("Failed to rename mountpoint: %s", err)
	}

	return nil
}

func (s *storageLvm) ContainerSnapshotStart(container container) error {
	srcName := containerNameToLVName(container.Name())
	destName := containerNameToLVName(container.Name() + "/rw")

	shared.Log.Debug(
		"Creating snapshot",
		log.Ctx{"srcName": srcName, "destName": destName})

	lvpath, err := s.createSnapshotLV(destName, srcName, false)
	if err != nil {
		return fmt.Errorf("Error creating snapshot LV: %v", err)
	}

	destPath := container.Path()
	if !shared.PathExists(destPath) {
		if err := os.MkdirAll(destPath, 0755); err != nil {
			return fmt.Errorf("Error creating container directory: %v", err)
		}
	}

	// Generate a new xfs's UUID
	fstype := daemonConfig["storage.lvm_fstype"].Get()
	if fstype == "xfs" {
		err := xfsGenerateNewUUID(lvpath)
		if err != nil {
			s.ContainerDelete(container)
			return err
		}
	}

	err = tryMount(lvpath, container.Path(), fstype, 0, "discard")
	if err != nil {
		return fmt.Errorf(
			"Error mounting snapshot LV path='%s': %v",
			container.Path(),
			err)
	}

	return nil
}

func (s *storageLvm) ContainerSnapshotStop(container container) error {
	err := s.ContainerStop(container)
	if err != nil {
		return err
	}

	lvName := containerNameToLVName(container.Name() + "/rw")
	if err := s.removeLV(lvName); err != nil {
		return err
	}

	return nil
}

func (s *storageLvm) ContainerSnapshotCreateEmpty(snapshotContainer container) error {
	return s.ContainerCreate(snapshotContainer)
}

func (s *storageLvm) ImageCreate(fingerprint string) error {
	finalName := shared.VarPath("images", fingerprint)

	lvpath, err := s.createThinLV(fingerprint)
	if err != nil {
		s.log.Error("LVMCreateThinLV", log.Ctx{"err": err})
		return fmt.Errorf("Error Creating LVM LV for new image: %v", err)
	}

	dst := shared.VarPath("images", fmt.Sprintf("%s.lv", fingerprint))
	err = os.Symlink(lvpath, dst)
	if err != nil {
		return err
	}

	tempLVMountPoint, err := ioutil.TempDir(shared.VarPath("images"), "tmp_lv_mnt")
	if err != nil {
		return err
	}
	defer func() {
		if err := os.RemoveAll(tempLVMountPoint); err != nil {
			s.log.Error("Deleting temporary LVM mount point", log.Ctx{"err": err})
		}
	}()

	fstype := daemonConfig["storage.lvm_fstype"].Get()
	err = tryMount(lvpath, tempLVMountPoint, fstype, 0, "discard")
	if err != nil {
		shared.Logf("Error mounting image LV for untarring: %v", err)
		return fmt.Errorf("Error mounting image LV: %v", err)
	}

	untarErr := untarImage(finalName, tempLVMountPoint)

	err = tryUnmount(tempLVMountPoint, 0)
	if err != nil {
		s.log.Warn("could not unmount LV. Will not remove",
			log.Ctx{"lvpath": lvpath, "mountpoint": tempLVMountPoint, "err": err})
		if untarErr == nil {
			return err
		}

		return fmt.Errorf(
			"Error unmounting '%s' during cleanup of error %v",
			tempLVMountPoint, untarErr)
	}

	if untarErr != nil {
		s.removeLV(fingerprint)
		return untarErr
	}

	return nil
}

func (s *storageLvm) ImageDelete(fingerprint string) error {
	err := s.removeLV(fingerprint)
	if err != nil {
		return err
	}

	lvsymlink := fmt.Sprintf(
		"%s.lv", shared.VarPath("images", fingerprint))
	err = os.Remove(lvsymlink)
	if err != nil {
		return fmt.Errorf(
			"Failed to remove symlink to deleted image LV: '%s': %v",
			lvsymlink, err)
	}

	return nil
}

func (s *storageLvm) createDefaultThinPool() (string, error) {
	thinPoolName := daemonConfig["storage.lvm_thinpool_name"].Get()

	// Create a tiny 1G thinpool
	output, err := tryExec(
		"lvcreate",
		"--poolmetadatasize", "1G",
		"-L", "1G",
		"--thinpool",
		fmt.Sprintf("%s/%s", s.vgName, thinPoolName))
	if err != nil {
		s.log.Error(
			"Could not create thin pool",
			log.Ctx{"name": thinPoolName, "err": err, "output": string(output)})

		return "", fmt.Errorf(
			"Could not create LVM thin pool named %s", thinPoolName)
	}

	// Grow it to the maximum VG size (two step process required by old LVM)
	output, err = tryExec(
		"lvextend",
		"--alloc", "anywhere",
		"-l", "100%FREE",
		fmt.Sprintf("%s/%s", s.vgName, thinPoolName))
	if err != nil {
		s.log.Error(
			"Could not grow thin pool",
			log.Ctx{"name": thinPoolName, "err": err, "output": string(output)})

		return "", fmt.Errorf(
			"Could not grow LVM thin pool named %s", thinPoolName)
	}

	return thinPoolName, nil
}

func (s *storageLvm) createThinLV(lvname string) (string, error) {
	var err error

	vgname := daemonConfig["storage.lvm_vg_name"].Get()
	poolname := daemonConfig["storage.lvm_thinpool_name"].Get()
	exists, err := storageLVMThinpoolExists(vgname, poolname)
	if err != nil {
		return "", err
	}

	if !exists {
		poolname, err = s.createDefaultThinPool()
		if err != nil {
			return "", fmt.Errorf("Error creating LVM thin pool: %v", err)
		}
		err = storageLVMValidateThinPoolName(s.d, "", poolname)
		if err != nil {
			s.log.Error("Setting thin pool name", log.Ctx{"err": err})
			return "", fmt.Errorf("Error setting LVM thin pool config: %v", err)
		}
	}

	lvSize := daemonConfig["storage.lvm_volume_size"].Get()

	output, err := tryExec(
		"lvcreate",
		"--thin",
		"-n", lvname,
		"--virtualsize", lvSize,
		fmt.Sprintf("%s/%s", s.vgName, poolname))
	if err != nil {
		s.log.Error("Could not create LV", log.Ctx{"lvname": lvname, "output": string(output)})
		return "", fmt.Errorf("Could not create thin LV named %s", lvname)
	}
	lvpath := fmt.Sprintf("/dev/%s/%s", s.vgName, lvname)

	fstype := daemonConfig["storage.lvm_fstype"].Get()

	switch fstype {
	case "xfs":
		output, err = tryExec(
			"mkfs.xfs",
			lvpath)
	default:
		// default = ext4
		output, err = tryExec(
			"mkfs.ext4",
			"-E", "nodiscard,lazy_itable_init=0,lazy_journal_init=0",
			lvpath)
	}

	if err != nil {
		s.log.Error("Filesystem creation failed", log.Ctx{"output": string(output)})
		return "", fmt.Errorf("Error making filesystem on image LV: %v", err)
	}

	return lvpath, nil
}

func (s *storageLvm) removeLV(lvname string) error {
	var err error
	var output []byte

	output, err = tryExec(
		"lvremove", "-f", fmt.Sprintf("%s/%s", s.vgName, lvname))
	if err != nil {
		s.log.Error("Could not remove LV", log.Ctx{"lvname": lvname, "output": string(output)})
		return fmt.Errorf("Could not remove LV named %s", lvname)
	}

	return nil
}

func (s *storageLvm) createSnapshotLV(lvname string, origlvname string, readonly bool) (string, error) {
	s.log.Debug("in createSnapshotLV:", log.Ctx{"lvname": lvname, "dev string": fmt.Sprintf("/dev/%s/%s", s.vgName, origlvname)})
	isRecent, err := s.lvmVersionIsAtLeast("2.02.99")
	if err != nil {
		return "", fmt.Errorf("Error checking LVM version: %v", err)
	}

	var output []byte
	if isRecent {
		output, err = tryExec(
			"lvcreate", "-kn",
			"-n", lvname,
			"-s", fmt.Sprintf("/dev/%s/%s", s.vgName, origlvname))
	} else {
		output, err = tryExec(
			"lvcreate",
			"-n", lvname,
			"-s", fmt.Sprintf("/dev/%s/%s", s.vgName, origlvname))
	}
	if err != nil {
		s.log.Error("Could not create LV snapshot", log.Ctx{"lvname": lvname, "origlvname": origlvname, "output": string(output)})
		return "", fmt.Errorf("Could not create snapshot LV named %s", lvname)
	}

	snapshotFullName := fmt.Sprintf("/dev/%s/%s", s.vgName, lvname)

	if readonly {
		output, err = tryExec("lvchange", "-ay", "-pr", snapshotFullName)
	} else {
		output, err = tryExec("lvchange", "-ay", snapshotFullName)
	}

	if err != nil {
		return "", fmt.Errorf("Could not activate new snapshot '%s': %v\noutput:%s", lvname, err, string(output))
	}

	return snapshotFullName, nil
}

func (s *storageLvm) isLVMContainer(container container) bool {
	return shared.PathExists(fmt.Sprintf("%s.lv", container.Path()))
}

func (s *storageLvm) renameLV(oldName string, newName string) (string, error) {
	output, err := tryExec("lvrename", s.vgName, oldName, newName)
	return string(output), err
}

func (s *storageLvm) MigrationType() MigrationFSType {
	return MigrationFSType_RSYNC
}

func (s *storageLvm) MigrationSource(container container) (MigrationStorageSourceDriver, error) {
	return rsyncMigrationSource(container)
}

func (s *storageLvm) MigrationSink(live bool, container container, snapshots []container, conn *websocket.Conn) error {
	return rsyncMigrationSink(live, container, snapshots, conn)
}

lxd-2.0.2/lxd/storage_test.go

package main

import (
	"fmt"

	"github.com/gorilla/websocket"
	log "gopkg.in/inconshreveable/log15.v2"
)

type storageMock struct {
	d     *Daemon
	sType storageType
	log   log.Logger

	storageShared
}

func (s *storageMock) Init(config map[string]interface{}) (storage, error) {
	s.sType = storageTypeMock
	s.sTypeName = storageTypeToString(storageTypeMock)

	if err := s.initShared(); err != nil {
		return s, err
	}

	return s, nil
}

func (s *storageMock) GetStorageType() storageType {
	return s.sType
}

func (s *storageMock) GetStorageTypeName() string {
	return s.sTypeName
}

func (s *storageMock) ContainerCreate(container container) error {
	return nil
}

func (s *storageMock) ContainerCreateFromImage(
	container container, imageFingerprint string) error {
	return nil
}

func (s *storageMock) ContainerCanRestore(container container, sourceContainer container) error {
	return nil
}

func (s *storageMock) ContainerDelete(container container) error {
	return nil
}

func (s *storageMock) ContainerCopy(
	container container, sourceContainer container) error {
	return nil
}

func (s *storageMock) ContainerStart(container container) error {
	return nil
}

func (s *storageMock) ContainerStop(container container) error {
	return nil
}

func (s *storageMock) ContainerRename(
	container container, newName string) error {
	return nil
}

func (s *storageMock) ContainerRestore(
	container container, sourceContainer container) error {
	return nil
}

func (s *storageMock) ContainerSetQuota(
	container container, size int64) error {
	return nil
}

func (s *storageMock) ContainerGetUsage(
	container container) (int64, error) {
	return 0, nil
}

func (s *storageMock) ContainerSnapshotCreate(
	snapshotContainer container, sourceContainer container) error {
	return nil
}

func (s *storageMock) ContainerSnapshotDelete(
	snapshotContainer container) error {
	return nil
}

func (s *storageMock) ContainerSnapshotRename(
	snapshotContainer container, newName string) error {
	return nil
}

func (s *storageMock) ContainerSnapshotStart(container container) error {
	return nil
}

func (s *storageMock) ContainerSnapshotStop(container container) error {
	return nil
}

func (s *storageMock) ContainerSnapshotCreateEmpty(snapshotContainer container) error {
	return nil
}

func (s *storageMock) ImageCreate(fingerprint string) error {
	return nil
}

func (s *storageMock) ImageDelete(fingerprint string) error {
	return nil
}

func (s *storageMock) MigrationType() MigrationFSType {
	return MigrationFSType_RSYNC
}

func (s *storageMock) MigrationSource(container container) (MigrationStorageSourceDriver, error) {
	return nil, fmt.Errorf("not implemented")
}

func (s *storageMock) MigrationSink(live bool, container container, snapshots []container, conn *websocket.Conn) error {
	return nil
}

lxd-2.0.2/lxd/storage_zfs.go

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
	"strconv"
	"strings"
	"syscall"

	"github.com/gorilla/websocket"
	"github.com/lxc/lxd/shared"
	"github.com/pborman/uuid"

	log "gopkg.in/inconshreveable/log15.v2"
)

type storageZfs struct {
	d       *Daemon
	zfsPool string

	storageShared
}

func (s *storageZfs) Init(config map[string]interface{}) (storage, error) {
	s.sType = storageTypeZfs
	s.sTypeName = storageTypeToString(s.sType)

	err := s.initShared()
	if err != nil {
		return s, err
	}

	if config["zfsPool"] == nil {
		zfsPool := daemonConfig["storage.zfs_pool_name"].Get()
		if zfsPool == "" {
			return s, fmt.Errorf("ZFS isn't enabled")
		}

		s.zfsPool = zfsPool
	} else {
		s.zfsPool = config["zfsPool"].(string)
	}

	out, err := exec.LookPath("zfs")
	if err != nil || len(out) == 0 {
		return s, fmt.Errorf("The 'zfs' tool isn't available")
	}

	err = s.zfsCheckPool(s.zfsPool)
	if err != nil {
		if shared.PathExists(shared.VarPath("zfs.img")) {
			_ = exec.Command("modprobe", "zfs").Run()

			output, err := exec.Command("zpool", "import", "-d", shared.VarPath(), s.zfsPool).CombinedOutput()
			if err != nil {
				return s, fmt.Errorf("Unable to import the ZFS pool: %s", output)
			}
		} else {
			return s, err
		}
	}

	output, err := exec.Command("zfs", "get", "version", "-H", "-o", "value", s.zfsPool).CombinedOutput()
	if err != nil {
		return s, fmt.Errorf("The 'zfs' tool isn't working properly")
	}

	count, err := fmt.Sscanf(string(output), "%s\n", &s.sTypeVersion)
	if err != nil || count != 1 {
		return s, fmt.Errorf("The 'zfs' tool isn't working properly")
	}

	return s, nil
}

// Things we don't need to care about
func (s *storageZfs) ContainerStart(container container) error {
	return nil
}

func (s *storageZfs) ContainerStop(container container) error {
	return nil
}

// Things we do have to care about
func (s *storageZfs) ContainerCreate(container container) error {
	cPath := container.Path()
	fs := fmt.Sprintf("containers/%s", container.Name())

	err := s.zfsCreate(fs)
	if err != nil {
		return err
	}

	err = os.Symlink(cPath+".zfs", cPath)
	if err != nil {
		return err
	}

	var mode os.FileMode
	if container.IsPrivileged() {
		mode = 0700
	} else {
		mode = 0755
	}

	err = os.Chmod(cPath, mode)
	if err != nil {
		return err
	}

	return container.TemplateApply("create")
}

func (s *storageZfs) ContainerCreateFromImage(container container, fingerprint string) error {
	cPath := container.Path()
	imagePath := shared.VarPath("images", fingerprint)
	subvol := fmt.Sprintf("%s.zfs", imagePath)
	fs := fmt.Sprintf("containers/%s", container.Name())
	fsImage := fmt.Sprintf("images/%s", fingerprint)

	if !shared.PathExists(subvol) {
		err := s.ImageCreate(fingerprint)
		if err != nil {
			return err
		}
	}

	err := s.zfsClone(fsImage, "readonly", fs, true)
	if err != nil {
		return err
	}

	err = os.Symlink(cPath+".zfs", cPath)
	if err != nil {
		return err
	}

	var mode os.FileMode
	if container.IsPrivileged() {
		mode = 0700
	} else {
		mode = 0755
	}

	err = os.Chmod(cPath, mode)
	if err != nil {
		return err
	}

	if !container.IsPrivileged() {
		err = s.shiftRootfs(container)
		if err != nil {
			return err
		}
	}

	return container.TemplateApply("create")
}

func (s *storageZfs) ContainerCanRestore(container container, sourceContainer container) error {
	snaps, err := container.Snapshots()
	if err != nil {
		return err
	}

	if snaps[len(snaps)-1].Name() != sourceContainer.Name() {
		return fmt.Errorf("ZFS can only restore from the latest snapshot. Delete newer snapshots or copy the snapshot into a new container instead.")
	}

	return nil
}

func (s *storageZfs) ContainerDelete(container container) error {
	fs := fmt.Sprintf("containers/%s", container.Name())

	if s.zfsExists(fs) {
		removable := true
		snaps, err := s.zfsListSnapshots(fs)
		if err != nil {
			return err
		}

		for _, snap := range snaps {
			var err error
			removable, err = s.zfsSnapshotRemovable(fs, snap)
			if err != nil {
				return err
			}

			if !removable {
				break
			}
		}

		if removable {
			origin, err := s.zfsGet(fs, "origin")
			if err != nil {
				return err
			}
			origin = strings.TrimPrefix(origin, fmt.Sprintf("%s/", s.zfsPool))

			err = s.zfsDestroy(fs)
			if err != nil {
				return err
			}

			err = s.zfsCleanup(origin)
			if err != nil {
				return err
			}
		} else {
			err := s.zfsSet(fs, "mountpoint", "none")
			if err != nil {
				return err
			}

			err = s.zfsRename(fs, fmt.Sprintf("deleted/containers/%s", uuid.NewRandom().String()))
			if err != nil {
				return err
			}
		}
	}

	if shared.PathExists(shared.VarPath(fs)) {
		err := os.Remove(shared.VarPath(fs))
		if err != nil {
			return err
		}
	}

	if shared.PathExists(shared.VarPath(fs) + ".zfs") {
		err := os.Remove(shared.VarPath(fs) + ".zfs")
		if err != nil {
			return err
		}
	}

	s.zfsDestroy(fmt.Sprintf("snapshots/%s", container.Name()))

	return nil
}

func (s *storageZfs) ContainerCopy(container container, sourceContainer container) error {
	var sourceFs string
	var sourceSnap string

	sourceFields := strings.SplitN(sourceContainer.Name(), shared.SnapshotDelimiter, 2)
	sourceName := sourceFields[0]

	destName := container.Name()
	destFs := fmt.Sprintf("containers/%s", destName)

	if len(sourceFields) == 2 {
		sourceSnap = sourceFields[1]
	}

	if sourceSnap == "" {
		if s.zfsExists(fmt.Sprintf("containers/%s", sourceName)) {
			sourceSnap = fmt.Sprintf("copy-%s", uuid.NewRandom().String())
			sourceFs = fmt.Sprintf("containers/%s", sourceName)
			err := s.zfsSnapshotCreate(fmt.Sprintf("containers/%s", sourceName), sourceSnap)
			if err != nil {
				return err
			}
		}
	} else {
		if s.zfsExists(fmt.Sprintf("containers/%s@snapshot-%s", sourceName, sourceSnap)) {
			sourceFs = fmt.Sprintf("containers/%s", sourceName)
			sourceSnap = fmt.Sprintf("snapshot-%s", sourceSnap)
		}
	}

	if sourceFs != "" {
		err := s.zfsClone(sourceFs, sourceSnap, destFs, true)
		if err != nil {
			return err
		}

		cPath := container.Path()
		err = os.Symlink(cPath+".zfs", cPath)
		if err != nil {
			return err
		}

		var mode os.FileMode
		if container.IsPrivileged() {
			mode = 0700
		} else {
			mode = 0755
		}

		err = os.Chmod(cPath, mode)
		if err != nil {
			return err
		}
	} else {
		err := s.ContainerCreate(container)
		if err != nil {
			return err
		}

		output, err := storageRsyncCopy(sourceContainer.Path(), container.Path())
		if err != nil {
			return fmt.Errorf("rsync failed: %s", string(output))
		}
	}

	return container.TemplateApply("copy")
}

func (s *storageZfs) ContainerRename(container container, newName string) error {
	oldName := container.Name()

	// Unmount the filesystem
	err := s.zfsUnmount(fmt.Sprintf("containers/%s", oldName))
	if err != nil {
		return err
	}

	// Rename the filesystem
	err = s.zfsRename(fmt.Sprintf("containers/%s", oldName), fmt.Sprintf("containers/%s", newName))
	if err != nil {
		return err
	}

	// Update to the new mountpoint
	err = s.zfsSet(fmt.Sprintf("containers/%s", newName), "mountpoint", shared.VarPath(fmt.Sprintf("containers/%s.zfs", newName)))
	if err != nil {
		return err
	}

	// In case ZFS didn't mount the filesystem, do it ourselves
	err = s.zfsMount(fmt.Sprintf("containers/%s", newName))
	if err != nil {
		return err
	}

	// In case the change of mountpoint didn't remove the old path, do it ourselves
	if shared.PathExists(shared.VarPath(fmt.Sprintf("containers/%s.zfs", oldName))) {
		err = os.Remove(shared.VarPath(fmt.Sprintf("containers/%s.zfs", oldName)))
		if err != nil {
			return err
		}
	}

	// Remove the old symlink
	err = os.Remove(shared.VarPath(fmt.Sprintf("containers/%s", oldName)))
	if err != nil {
		return err
	}

	// Create a new symlink
	err = os.Symlink(shared.VarPath(fmt.Sprintf("containers/%s.zfs", newName)), shared.VarPath(fmt.Sprintf("containers/%s", newName)))
	if err != nil {
		return err
	}

	// Rename the snapshot path
	if shared.PathExists(shared.VarPath(fmt.Sprintf("snapshots/%s", oldName))) {
		err = os.Rename(shared.VarPath(fmt.Sprintf("snapshots/%s", oldName)), shared.VarPath(fmt.Sprintf("snapshots/%s", newName)))
		if err != nil {
			return err
		}
	}

	return nil
}

func (s *storageZfs) ContainerRestore(container container, sourceContainer container) error {
	fields := strings.SplitN(sourceContainer.Name(), shared.SnapshotDelimiter, 2)
	cName := fields[0]
	snapName := fmt.Sprintf("snapshot-%s", fields[1])

	err := s.zfsSnapshotRestore(fmt.Sprintf("containers/%s", cName), snapName)
	if err != nil {
		return err
	}

	return nil
}

func (s *storageZfs) ContainerSetQuota(container container, size int64) error {
	var err error

	fs := fmt.Sprintf("containers/%s", container.Name())

	if size > 0 {
		err = s.zfsSet(fs, "quota", fmt.Sprintf("%d", size))
	} else {
		err = s.zfsSet(fs, "quota", "none")
	}

	if err != nil {
		return err
	}

	return nil
}

func (s *storageZfs) ContainerGetUsage(container container) (int64, error) {
	var err error

	fs := fmt.Sprintf("containers/%s", container.Name())

	value, err := s.zfsGet(fs, "used")
	if err != nil {
		return -1, err
	}

	valueInt, err := strconv.ParseInt(value, 10, 64)
	if err != nil {
		return -1, err
	}

	return valueInt, nil
}

func (s *storageZfs) ContainerSnapshotCreate(snapshotContainer container, sourceContainer container) error {
	fields := strings.SplitN(snapshotContainer.Name(), shared.SnapshotDelimiter, 2)
	cName := fields[0]
	snapName := fmt.Sprintf("snapshot-%s", fields[1])

	err := s.zfsSnapshotCreate(fmt.Sprintf("containers/%s", cName), snapName)
	if err != nil {
		return err
	}

	if !shared.PathExists(shared.VarPath(fmt.Sprintf("snapshots/%s", cName))) {
		err = os.MkdirAll(shared.VarPath(fmt.Sprintf("snapshots/%s", cName)), 0700)
		if err != nil {
			return err
		}
	}

	err = os.Symlink("on-zfs", shared.VarPath(fmt.Sprintf("snapshots/%s/%s.zfs", cName, fields[1])))
	if err != nil {
		return err
	}

	return nil
}

func (s *storageZfs) ContainerSnapshotDelete(snapshotContainer container) error {
	fields := strings.SplitN(snapshotContainer.Name(), shared.SnapshotDelimiter, 2)
	cName := fields[0]
	snapName := fmt.Sprintf("snapshot-%s", fields[1])

	if s.zfsExists(fmt.Sprintf("containers/%s@%s", cName, snapName)) {
		removable, err := s.zfsSnapshotRemovable(fmt.Sprintf("containers/%s", cName), snapName)
		if removable {
			err = s.zfsSnapshotDestroy(fmt.Sprintf("containers/%s", cName), snapName)
			if err != nil {
				return err
			}
		} else {
			err = s.zfsSnapshotRename(fmt.Sprintf("containers/%s", cName), snapName, fmt.Sprintf("copy-%s", uuid.NewRandom().String()))
			if err != nil {
				return err
			}
		}
	}

	snapPath := shared.VarPath(fmt.Sprintf("snapshots/%s/%s.zfs", cName, fields[1]))
	if shared.PathExists(snapPath) {
		err := os.Remove(snapPath)
		if err != nil {
			return err
		}
	}

	parent := shared.VarPath(fmt.Sprintf("snapshots/%s", cName))
	if ok, _ := shared.PathIsEmpty(parent); ok {
		err := os.Remove(parent)
		if err != nil {
			return err
		}
	}

	return nil
}

func (s *storageZfs) ContainerSnapshotRename(snapshotContainer container, newName string) error {
	oldFields := strings.SplitN(snapshotContainer.Name(), shared.SnapshotDelimiter, 2)
	oldcName := oldFields[0]
	oldName := fmt.Sprintf("snapshot-%s", oldFields[1])

	newFields := strings.SplitN(newName, shared.SnapshotDelimiter, 2)
	newcName := newFields[0]
	newName = fmt.Sprintf("snapshot-%s", newFields[1])

	if oldName != newName {
		err := s.zfsSnapshotRename(fmt.Sprintf("containers/%s", oldcName), oldName, newName)
		if err != nil {
			return err
		}
	}

	err := os.Remove(shared.VarPath(fmt.Sprintf("snapshots/%s/%s.zfs", oldcName, oldFields[1])))
	if err != nil {
		return err
	}

	if !shared.PathExists(shared.VarPath(fmt.Sprintf("snapshots/%s", newcName))) {
		err = os.MkdirAll(shared.VarPath(fmt.Sprintf("snapshots/%s", newcName)), 0700)
		if err != nil {
			return err
		}
	}

	err = os.Symlink("on-zfs", shared.VarPath(fmt.Sprintf("snapshots/%s/%s.zfs", newcName, newFields[1])))
	if err != nil {
		return err
	}

	parent := shared.VarPath(fmt.Sprintf("snapshots/%s", oldcName))
	if ok, _ := shared.PathIsEmpty(parent); ok {
		err = os.Remove(parent)
		if err != nil {
			return err
		}
	}

	return nil
}

func (s *storageZfs) ContainerSnapshotStart(container container) error {
	fields := strings.SplitN(container.Name(), shared.SnapshotDelimiter, 2)
	if len(fields) < 2 {
		return fmt.Errorf("Invalid snapshot name: %s", container.Name())
	}
	cName := fields[0]
	sName := fields[1]
	sourceFs := fmt.Sprintf("containers/%s", cName)
	sourceSnap := fmt.Sprintf("snapshot-%s", sName)
	destFs := fmt.Sprintf("snapshots/%s/%s", cName, sName)

	err := s.zfsClone(sourceFs, sourceSnap, destFs, false)
	if err != nil {
		return err
	}

	return nil
}

func (s *storageZfs) ContainerSnapshotStop(container container) error {
	fields := strings.SplitN(container.Name(), shared.SnapshotDelimiter, 2)
	if len(fields) < 2 {
		return fmt.Errorf("Invalid snapshot name: %s", container.Name())
	}
	cName := fields[0]
	sName := fields[1]
	destFs := fmt.Sprintf("snapshots/%s/%s", cName, sName)

	err := s.zfsDestroy(destFs)
	if err != nil {
		return err
	}

	/* zfs creates this directory on clone (start), so we need to clean it
	 * up on stop */
	return os.RemoveAll(container.Path())
}

func (s *storageZfs) ContainerSnapshotCreateEmpty(snapshotContainer container) error {
	/* don't touch the fs yet, as migration will do that for us */
	return nil
}

func (s *storageZfs) ImageCreate(fingerprint string) error {
	imagePath := shared.VarPath("images", fingerprint)
	subvol := fmt.Sprintf("%s.zfs", imagePath)
	fs := fmt.Sprintf("images/%s", fingerprint)

	if s.zfsExists(fmt.Sprintf("deleted/%s", fs)) {
		err := s.zfsRename(fmt.Sprintf("deleted/%s", fs), fs)
		if err != nil {
			return err
		}

		err = s.zfsSet(fs, "mountpoint", subvol)
		if err != nil {
			return err
		}

		return nil
	}

	err := s.zfsCreate(fs)
	if err != nil {
		return err
	}

	err = untarImage(imagePath, subvol)
	if err != nil {
		s.zfsDestroy(fs)
		return err
	}

	err = s.zfsSet(fs, "readonly", "on")
	if err != nil {
		s.zfsDestroy(fs)
		return err
	}

	err = s.zfsSnapshotCreate(fs, "readonly")
	if err != nil {
		s.zfsDestroy(fs)
		return err
	}

	return nil
}

func (s *storageZfs) ImageDelete(fingerprint string) error {
	fs := fmt.Sprintf("images/%s", fingerprint)

	if s.zfsExists(fs) {
		removable, err := s.zfsSnapshotRemovable(fs, "readonly")
		if err != nil {
			return err
		}

		if removable {
			err := s.zfsDestroy(fs)
			if err != nil {
				return err
			}
		} else {
			err := s.zfsSet(fs, "mountpoint", "none")
			if err != nil {
				return err
			}

			err = s.zfsRename(fs, fmt.Sprintf("deleted/%s", fs))
			if err != nil {
				return err
			}
		}
	}

	if shared.PathExists(shared.VarPath(fs + ".zfs")) {
		err := os.Remove(shared.VarPath(fs + ".zfs"))
		if err != nil {
			return err
		}
	}

	return nil
}

// Helper functions
func (s *storageZfs) zfsCheckPool(pool string) error {
	output, err := exec.Command(
		"zfs", "get", "type", "-H", "-o", "value", pool).CombinedOutput()
	if err != nil {
		return fmt.Errorf(strings.Split(string(output), "\n")[0])
	}

	poolType := strings.Split(string(output), "\n")[0]
	if poolType != "filesystem" {
		return fmt.Errorf("Unsupported pool type: %s", poolType)
	}

	return nil
}

func (s *storageZfs) zfsClone(source string, name string, dest string, dotZfs bool) error {
	var mountpoint string

	mountpoint = shared.VarPath(dest)
	if dotZfs {
		mountpoint += ".zfs"
	}

	output, err := exec.Command(
		"zfs",
		"clone",
		"-p",
		"-o", fmt.Sprintf("mountpoint=%s", mountpoint),
		fmt.Sprintf("%s/%s@%s", s.zfsPool, source, name),
		fmt.Sprintf("%s/%s", s.zfsPool, dest)).CombinedOutput()
	if err != nil {
		s.log.Error("zfs clone failed", log.Ctx{"output": string(output)})
		return fmt.Errorf("Failed to clone the filesystem: %s", output)
	}

	subvols, err := s.zfsListSubvolumes(source)
	if err != nil {
		return err
	}

	for _, sub := range subvols {
		snaps, err := s.zfsListSnapshots(sub)
		if err != nil {
			return err
		}

		if !shared.StringInSlice(name, snaps) {
			continue
		}

		destSubvol := dest + strings.TrimPrefix(sub, source)
		mountpoint = shared.VarPath(destSubvol)
		if dotZfs {
			mountpoint += ".zfs"
		}

		output, err := exec.Command(
			"zfs",
			"clone",
			"-p",
			"-o", fmt.Sprintf("mountpoint=%s", mountpoint),
			fmt.Sprintf("%s/%s@%s", s.zfsPool, sub, name),
			fmt.Sprintf("%s/%s", s.zfsPool, destSubvol)).CombinedOutput()
		if err != nil {
			s.log.Error("zfs clone failed", log.Ctx{"output": string(output)})
			return fmt.Errorf("Failed to clone the sub-volume: %s", output)
		}
	}

	return nil
}

func (s *storageZfs) zfsCreate(path string) error {
	output, err := exec.Command(
		"zfs",
		"create",
		"-p",
		"-o", fmt.Sprintf("mountpoint=%s.zfs", shared.VarPath(path)),
		fmt.Sprintf("%s/%s", s.zfsPool, path)).CombinedOutput()
	if err != nil {
		s.log.Error("zfs create failed", log.Ctx{"output": string(output)})
		return fmt.Errorf("Failed to create ZFS filesystem: %s", output)
	}

	return nil
}

func (s *storageZfs) zfsDestroy(path string) error {
	mountpoint, err := s.zfsGet(path, "mountpoint")
	if err != nil {
		return err
	}

	if mountpoint != "none" && shared.IsMountPoint(mountpoint) {
		err := syscall.Unmount(mountpoint, syscall.MNT_DETACH)
		if err != nil {
			s.log.Error("umount failed", log.Ctx{"err": err})
			return err
		}
	}

	// Due to open fds or kernel refs, this may fail for a bit, give it 10s
	output, err := tryExec(
		"zfs",
		"destroy",
		"-r",
		fmt.Sprintf("%s/%s", s.zfsPool, path))
	if err != nil {
		s.log.Error("zfs destroy failed", log.Ctx{"output": string(output)})
		return fmt.Errorf("Failed to destroy ZFS filesystem: %s", output)
	}

	return nil
}

func (s *storageZfs) zfsCleanup(path string) error {
	if strings.HasPrefix(path, "deleted/") {
		// Cleanup of filesystems kept for refcount reason
		removablePath, err := s.zfsSnapshotRemovable(path, "")
		if err != nil {
			return err
		}

		// Confirm that there are no more clones
		if removablePath {
			if strings.Contains(path, "@") {
				// Cleanup snapshots
				err = s.zfsDestroy(path)
				if err != nil {
					return err
				}

				// Check if the parent can now be deleted
				subPath := strings.SplitN(path, "@", 2)[0]
				snaps, err := s.zfsListSnapshots(subPath)
				if err != nil {
					return err
				}

				if len(snaps) == 0 {
					err := s.zfsCleanup(subPath)
					if err != nil {
						return err
					}
				}
			} else {
				// Cleanup filesystems
				origin, err := s.zfsGet(path, "origin")
				if err != nil {
					return err
				}
				origin = strings.TrimPrefix(origin, fmt.Sprintf("%s/", s.zfsPool))

				err = s.zfsDestroy(path)
				if err != nil {
					return err
				}

				// Attempt to remove its parent
				if origin != "-" {
					err := s.zfsCleanup(origin)
					if err != nil {
						return err
					}
				}
			}

			return nil
		}
	} else if strings.HasPrefix(path, "containers") {
		// Just remove the copy- snapshot for copies of active containers
		err := s.zfsDestroy(path)
		if err != nil {
			return err
		}
	}

	return nil
}

func (s *storageZfs) zfsExists(path string) bool {
	output, _ := s.zfsGet(path, "name")

	if output == fmt.Sprintf("%s/%s", s.zfsPool, path) {
		return true
	}

	return false
}

func (s *storageZfs) zfsGet(path string, key string) (string, error) {
	output, err := exec.Command(
		"zfs",
		"get",
		"-H",
		"-p",
		"-o", "value",
		key,
		fmt.Sprintf("%s/%s", s.zfsPool, path)).CombinedOutput()
	if err != nil {
		return string(output), fmt.Errorf("Failed to get ZFS config: %s", output)
	}

	return strings.TrimRight(string(output), "\n"), nil
}

func (s *storageZfs) zfsRename(source string, dest string) error {
	output, err := tryExec(
		"zfs",
		"rename",
		"-p",
		fmt.Sprintf("%s/%s", s.zfsPool, source),
		fmt.Sprintf("%s/%s", s.zfsPool, dest))
	if err != nil {
		if s.zfsExists(source) || !s.zfsExists(dest) {
			s.log.Error("zfs rename failed", log.Ctx{"output": string(output)})
			return fmt.Errorf("Failed to rename ZFS filesystem: %s", output)
		}
	}

	return nil
}

func (s *storageZfs) zfsSet(path string, key string, value string) error {
	output, err := exec.Command(
		"zfs",
		"set",
		fmt.Sprintf("%s=%s", key, value),
		fmt.Sprintf("%s/%s", s.zfsPool, path)).CombinedOutput()
	if err != nil {
		s.log.Error("zfs set failed", log.Ctx{"output": string(output)})
		return fmt.Errorf("Failed to set ZFS config: %s", output)
	}

	return nil
}

func (s *storageZfs) zfsSnapshotCreate(path string, name string) error {
	output, err := exec.Command(
		"zfs",
		"snapshot",
		"-r",
		fmt.Sprintf("%s/%s@%s", s.zfsPool, path, name)).CombinedOutput()
	if err != nil {
		s.log.Error("zfs snapshot failed", log.Ctx{"output": string(output)})
		return fmt.Errorf("Failed to create ZFS snapshot: %s", output)
	}

	return nil
}

func (s *storageZfs) zfsSnapshotDestroy(path string, name string) error {
	output, err := exec.Command(
		"zfs",
		"destroy",
		"-r",
		fmt.Sprintf("%s/%s@%s", s.zfsPool, path, name)).CombinedOutput()
	if err != nil {
		s.log.Error("zfs destroy failed", log.Ctx{"output": string(output)})
		return fmt.Errorf("Failed to destroy ZFS snapshot: %s", output)
	}

	return nil
}

func (s *storageZfs) zfsSnapshotRestore(path string, name string) error {
	output, err := tryExec(
		"zfs",
		"rollback",
		fmt.Sprintf("%s/%s@%s", s.zfsPool, path, name))
	if err != nil {
		s.log.Error("zfs rollback failed", log.Ctx{"output": string(output)})
		return fmt.Errorf("Failed to restore ZFS snapshot: %s", output)
	}

	subvols, err := s.zfsListSubvolumes(path)
	if err != nil {
		return err
	}

	for _, sub := range subvols {
		snaps, err := s.zfsListSnapshots(sub)
		if err != nil {
			return err
		}

		if !shared.StringInSlice(name, snaps) {
			continue
		}

		output, err := tryExec(
			"zfs",
			"rollback",
			fmt.Sprintf("%s/%s@%s", s.zfsPool, sub, name))
		if err != nil {
			s.log.Error("zfs rollback failed", log.Ctx{"output": string(output)})
			return fmt.Errorf("Failed to restore ZFS sub-volume snapshot: %s", output)
		}
	}

	return nil
}

func (s *storageZfs) zfsSnapshotRename(path string, oldName string, newName string) error {
	output, err := exec.Command(
		"zfs",
		"rename",
		"-r",
		fmt.Sprintf("%s/%s@%s", s.zfsPool, path, oldName),
		fmt.Sprintf("%s/%s@%s", s.zfsPool, path, newName)).CombinedOutput()
	if err != nil {
		s.log.Error("zfs snapshot rename failed", log.Ctx{"output": string(output)})
		return fmt.Errorf("Failed to rename ZFS snapshot: %s", output)
	}

	return nil
}

func (s *storageZfs) zfsMount(path string) error {
	output, err := tryExec(
		"zfs",
		"mount",
		fmt.Sprintf("%s/%s", s.zfsPool, path))
	if err != nil {
		s.log.Error("zfs mount failed", log.Ctx{"output":
string(output)}) return fmt.Errorf("Failed to mount ZFS filesystem: %s", output) } return nil } func (s *storageZfs) zfsUnmount(path string) error { output, err := tryExec( "zfs", "unmount", fmt.Sprintf("%s/%s", s.zfsPool, path)) if err != nil { s.log.Error("zfs unmount failed", log.Ctx{"output": string(output)}) return fmt.Errorf("Failed to unmount ZFS filesystem: %s", output) } return nil } func (s *storageZfs) zfsListSubvolumes(path string) ([]string, error) { path = strings.TrimRight(path, "/") fullPath := s.zfsPool if path != "" { fullPath = fmt.Sprintf("%s/%s", s.zfsPool, path) } output, err := exec.Command( "zfs", "list", "-t", "filesystem", "-o", "name", "-H", "-r", fullPath).CombinedOutput() if err != nil { s.log.Error("zfs list failed", log.Ctx{"output": string(output)}) return []string{}, fmt.Errorf("Failed to list ZFS filesystems: %s", output) } children := []string{} for _, entry := range strings.Split(string(output), "\n") { if entry == "" { continue } if entry == fullPath { continue } children = append(children, strings.TrimPrefix(entry, fmt.Sprintf("%s/", s.zfsPool))) } return children, nil } func (s *storageZfs) zfsListSnapshots(path string) ([]string, error) { path = strings.TrimRight(path, "/") fullPath := s.zfsPool if path != "" { fullPath = fmt.Sprintf("%s/%s", s.zfsPool, path) } output, err := exec.Command( "zfs", "list", "-t", "snapshot", "-o", "name", "-H", "-d", "1", "-s", "creation", "-r", fullPath).CombinedOutput() if err != nil { s.log.Error("zfs list failed", log.Ctx{"output": string(output)}) return []string{}, fmt.Errorf("Failed to list ZFS snapshots: %s", output) } children := []string{} for _, entry := range strings.Split(string(output), "\n") { if entry == "" { continue } if entry == fullPath { continue } children = append(children, strings.SplitN(entry, "@", 2)[1]) } return children, nil } func (s *storageZfs) zfsSnapshotRemovable(path string, name string) (bool, error) { var snap string if name == "" { snap = path } else { snap = 
fmt.Sprintf("%s@%s", path, name) } clones, err := s.zfsGet(snap, "clones") if err != nil { return false, err } if clones == "-" || clones == "" { return true, nil } return false, nil } func (s *storageZfs) zfsGetPoolUsers() ([]string, error) { subvols, err := s.zfsListSubvolumes("") if err != nil { return []string{}, err } exceptions := []string{ "containers", "images", "snapshots", "deleted", "deleted/containers", "deleted/images"} users := []string{} for _, subvol := range subvols { path := strings.Split(subvol, "/") // Only care about plausible LXD paths if !shared.StringInSlice(path[0], exceptions) { continue } // Ignore empty paths if shared.StringInSlice(subvol, exceptions) { continue } users = append(users, subvol) } return users, nil } // Global functions func storageZFSValidatePoolName(d *Daemon, key string, value string) error { s := storageZfs{} // Confirm the backend is working err := s.initShared() if err != nil { return fmt.Errorf("Unable to initialize the ZFS backend: %v", err) } // Confirm the new pool exists and is compatible if value != "" { err = s.zfsCheckPool(value) if err != nil { return fmt.Errorf("Invalid ZFS pool: %v", err) } } // Confirm the old pool isn't in use anymore oldPoolname := daemonConfig["storage.zfs_pool_name"].Get() if oldPoolname != "" { s.zfsPool = oldPoolname users, err := s.zfsGetPoolUsers() if err != nil { return fmt.Errorf("Error checking if a pool is already in use: %v", err) } if len(users) > 0 { return fmt.Errorf("Can not change ZFS config. 
Images or containers are still using the ZFS pool: %v", users) } } return nil } type zfsMigrationSourceDriver struct { container container snapshots []container zfsSnapshotNames []string zfs *storageZfs runningSnapName string stoppedSnapName string } func (s *zfsMigrationSourceDriver) Snapshots() []container { return s.snapshots } func (s *zfsMigrationSourceDriver) send(conn *websocket.Conn, zfsName string, zfsParent string) error { fields := strings.SplitN(s.container.Name(), shared.SnapshotDelimiter, 2) args := []string{"send", fmt.Sprintf("%s/containers/%s@%s", s.zfs.zfsPool, fields[0], zfsName)} if zfsParent != "" { args = append(args, "-i", fmt.Sprintf("%s/containers/%s@%s", s.zfs.zfsPool, s.container.Name(), zfsParent)) } cmd := exec.Command("zfs", args...) stdout, err := cmd.StdoutPipe() if err != nil { return err } stderr, err := cmd.StderrPipe() if err != nil { return err } if err := cmd.Start(); err != nil { return err } <-shared.WebsocketSendStream(conn, stdout) output, err := ioutil.ReadAll(stderr) if err != nil { shared.Log.Error("problem reading zfs send stderr", "err", err) } err = cmd.Wait() if err != nil { shared.Log.Error("problem with zfs send", "output", string(output)) } return err } func (s *zfsMigrationSourceDriver) SendWhileRunning(conn *websocket.Conn) error { if s.container.IsSnapshot() { fields := strings.SplitN(s.container.Name(), shared.SnapshotDelimiter, 2) snapshotName := fmt.Sprintf("snapshot-%s", fields[1]) return s.send(conn, snapshotName, "") } lastSnap := "" for i, snap := range s.zfsSnapshotNames { prev := "" if i > 0 { prev = s.zfsSnapshotNames[i-1] } lastSnap = snap if err := s.send(conn, snap, prev); err != nil { return err } } s.runningSnapName = fmt.Sprintf("migration-send-%s", uuid.NewRandom().String()) if err := s.zfs.zfsSnapshotCreate(fmt.Sprintf("containers/%s", s.container.Name()), s.runningSnapName); err != nil { return err } if err := s.send(conn, s.runningSnapName, lastSnap); err != nil { return err } return nil } 
func (s *zfsMigrationSourceDriver) SendAfterCheckpoint(conn *websocket.Conn) error { s.stoppedSnapName = fmt.Sprintf("migration-send-%s", uuid.NewRandom().String()) if err := s.zfs.zfsSnapshotCreate(fmt.Sprintf("containers/%s", s.container.Name()), s.stoppedSnapName); err != nil { return err } if err := s.send(conn, s.stoppedSnapName, s.runningSnapName); err != nil { return err } return nil } func (s *zfsMigrationSourceDriver) Cleanup() { if s.stoppedSnapName != "" { s.zfs.zfsSnapshotDestroy(fmt.Sprintf("containers/%s", s.container.Name()), s.stoppedSnapName) } if s.runningSnapName != "" { s.zfs.zfsSnapshotDestroy(fmt.Sprintf("containers/%s", s.container.Name()), s.runningSnapName) } } func (s *storageZfs) MigrationType() MigrationFSType { return MigrationFSType_ZFS } func (s *storageZfs) MigrationSource(ct container) (MigrationStorageSourceDriver, error) { /* If the container is a snapshot, let's just send that; we don't need * to send anything else, because that's all the user asked for. */ if ct.IsSnapshot() { return &zfsMigrationSourceDriver{container: ct, zfs: s}, nil } driver := zfsMigrationSourceDriver{ container: ct, snapshots: []container{}, zfsSnapshotNames: []string{}, zfs: s, } /* List all the snapshots in order of reverse creation. The idea here * is that we send the oldest to newest snapshot, hopefully saving on * xfer costs. Then, after all that, we send the container itself. */ snapshots, err := s.zfsListSnapshots(fmt.Sprintf("containers/%s", ct.Name())) if err != nil { return nil, err } for _, snap := range snapshots { /* In the case of e.g. multiple copies running at the same * time, we will have potentially multiple migration-send * snapshots. (Or in the case of the test suite, sometimes one * will take too long to delete.) 
*/ if !strings.HasPrefix(snap, "snapshot-") { continue } lxdName := fmt.Sprintf("%s%s%s", ct.Name(), shared.SnapshotDelimiter, snap[len("snapshot-"):]) snapshot, err := containerLoadByName(s.d, lxdName) if err != nil { return nil, err } driver.snapshots = append(driver.snapshots, snapshot) driver.zfsSnapshotNames = append(driver.zfsSnapshotNames, snap) } return &driver, nil } func (s *storageZfs) MigrationSink(live bool, container container, snapshots []container, conn *websocket.Conn) error { zfsRecv := func(zfsName string) error { zfsFsName := fmt.Sprintf("%s/%s", s.zfsPool, zfsName) args := []string{"receive", "-F", "-u", zfsFsName} cmd := exec.Command("zfs", args...) stdin, err := cmd.StdinPipe() if err != nil { return err } stderr, err := cmd.StderrPipe() if err != nil { return err } if err := cmd.Start(); err != nil { return err } <-shared.WebsocketRecvStream(stdin, conn) output, err := ioutil.ReadAll(stderr) if err != nil { shared.Debugf("problem reading zfs recv stderr: %s", err) } err = cmd.Wait() if err != nil { shared.Log.Error("problem with zfs recv", "output", string(output)) } return err } /* In some versions of zfs we can write `zfs recv -F` to mounted * filesystems, and in some versions we can't. So, let's always unmount * this fs (it's empty anyway) before we zfs recv. N.B. that `zfs recv` * of a snapshot also needs the actual fs that it has snapshotted * unmounted, so we do this before receiving anything.
*/ zfsName := fmt.Sprintf("containers/%s", container.Name()) err := s.zfsUnmount(zfsName) if err != nil { return err } for _, snap := range snapshots { fields := strings.SplitN(snap.Name(), shared.SnapshotDelimiter, 2) name := fmt.Sprintf("containers/%s@snapshot-%s", fields[0], fields[1]) if err := zfsRecv(name); err != nil { return err } err := os.MkdirAll(shared.VarPath(fmt.Sprintf("snapshots/%s", fields[0])), 0700) if err != nil { return err } err = os.Symlink("on-zfs", shared.VarPath(fmt.Sprintf("snapshots/%s/%s.zfs", fields[0], fields[1]))) if err != nil { return err } } defer func() { /* clean up our migration-send snapshots that we got from recv. */ zfsSnapshots, err := s.zfsListSnapshots(fmt.Sprintf("containers/%s", container.Name())) if err != nil { shared.Log.Error("failed listing snapshots post migration", "err", err) return } for _, snap := range zfsSnapshots { // If we received a bunch of snapshots, remove the migration-send-* ones, if not, wipe any snapshot we got if snapshots != nil && len(snapshots) > 0 && !strings.HasPrefix(snap, "migration-send") { continue } s.zfsSnapshotDestroy(fmt.Sprintf("containers/%s", container.Name()), snap) } }() /* finally, do the real container */ if err := zfsRecv(zfsName); err != nil { return err } if live { /* and again for the post-running snapshot if this was a live migration */ if err := zfsRecv(zfsName); err != nil { return err } } /* Sometimes, zfs recv mounts this anyway, even if we pass -u * (https://forums.freebsd.org/threads/zfs-receive-u-shouldnt-mount-received-filesystem-right.36844/) * but sometimes it doesn't. Let's try to mount, but not complain about * failure. 
*/ s.zfsMount(zfsName) return nil } lxd-2.0.2/lxd/util.go package main import ( "bytes" "encoding/json" "io" "net/http" "github.com/lxc/lxd/shared" ) func WriteJSON(w http.ResponseWriter, body interface{}) error { var output io.Writer var captured *bytes.Buffer output = w if debug { captured = &bytes.Buffer{} output = io.MultiWriter(w, captured) } err := json.NewEncoder(output).Encode(body) if captured != nil { shared.DebugJson(captured) } return err } lxd-2.0.2/po/de.po # German translation for LXD # Copyright (C) 2015 - LXD contributors # This file is distributed under the same license as LXD. # Felix Engelmann , 2015. # msgid "" msgstr "" "Project-Id-Version: LXD\n" "Report-Msgid-Bugs-To: lxc-devel@lists.linuxcontainers.org\n" "POT-Creation-Date: 2016-04-25 14:47-0500\n" "PO-Revision-Date: 2015-06-13 06:10+0200\n" "Last-Translator: Felix Engelmann \n" "Language-Team: \n" "Language: \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" #: lxc/info.go:140 msgid " Disk usage:" msgstr "" #: lxc/info.go:163 msgid " Memory usage:" msgstr "" #: lxc/info.go:180 msgid " Network usage:" msgstr "" #: lxc/config.go:37 #, fuzzy msgid "" "### This is a yaml representation of the configuration.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### A sample configuration looks like:\n" "### name: container1\n" "### profiles:\n" "### - default\n" "### config:\n" "### volatile.eth0.hwaddr: 00:16:3e:e9:f8:7f\n" "### devices:\n" "### homedir:\n" "### path: /extra\n" "### source: /home/user\n" "### type: disk\n" "### ephemeral: false\n" "###\n" "### Note that the name is shown but cannot be changed" msgstr "" "### Dies ist eine Darstellung der Konfiguration in yaml.\n"
"### Jede Zeile die mit '# beginnt wird ignoriert.\n" "###\n" "### Beispiel einer Konfiguration:\n" "### name: container1\n" "### profiles:\n" "### - default\n" "### config:\n" "### volatile.eth0.hwaddr: 00:16:3e:e9:f8:7f\n" "### devices:\n" "### homedir:\n" "### path: /extra\n" "### source: /home/user\n" "### type: disk\n" "### ephemeral: false\n" "###\n" "### Der Name wird zwar angezeigt, lässt sich jedoch nicht ändern.\n" #: lxc/image.go:83 #, fuzzy msgid "" "### This is a yaml representation of the image properties.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### Each property is represented by a single line:\n" "### An example would be:\n" "### description: My custom image" msgstr "" "### Dies ist eine Darstellung der Eigenschaften eines Abbildes in yaml.\n" "### Jede Zeile die mit '# beginnt wird ignoriert.\n" "###\n" "### Pro Eigenschaft wird eine Zeile verwendet:\n" "### Zum Beispiel:\n" "### description: Mein eigenes Abbild\n" #: lxc/profile.go:27 #, fuzzy msgid "" "### This is a yaml representation of the profile.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### A profile consists of a set of configuration items followed by a set of\n" "### devices.\n" "###\n" "### An example would look like:\n" "### name: onenic\n" "### config:\n" "### raw.lxc: lxc.aa_profile=unconfined\n" "### devices:\n" "### eth0:\n" "### nictype: bridged\n" "### parent: lxdbr0\n" "### type: nic\n" "###\n" "### Note that the name is shown but cannot be changed" msgstr "" "### Dies ist eine Darstellung eines Profils in yaml.\n" "### Jede Zeile die mit '# beginnt wird ignoriert.\n" "###\n" "### Ein Profil besteht aus mehreren Konfigurationselementen gefolgt von\n" "### mehrere Geräten.\n" "###\n" "### Zum Beispiel:\n" "### name: onenic\n" "### config:\n" "### raw.lxc: lxc.aa_profile=unconfined\n" "### devices:\n" "### eth0:\n" "### nictype: bridged\n" "### parent: lxdbr0\n" "### type: nic\n" "###\n" "### Der Name wird zwar angezeigt, lässt
sich jedoch nicht ändern.\n" #: lxc/image.go:583 #, c-format msgid "%s (%d more)" msgstr "" #: lxc/snapshot.go:61 #, fuzzy msgid "'/' not allowed in snapshot name" msgstr "'/' ist kein gültiges Zeichen im Namen eines Sicherungspunktes\n" #: lxc/profile.go:251 msgid "(none)" msgstr "" #: lxc/image.go:604 lxc/image.go:633 msgid "ALIAS" msgstr "" #: lxc/image.go:608 msgid "ARCH" msgstr "" #: lxc/list.go:378 msgid "ARCHITECTURE" msgstr "" #: lxc/remote.go:53 msgid "Accept certificate" msgstr "Akzeptiere Zertifikat" #: lxc/remote.go:256 #, c-format msgid "Admin password for %s: " msgstr "Administrator Passwort für %s: " #: lxc/image.go:347 #, fuzzy msgid "Aliases:" msgstr "Aliasse:\n" #: lxc/exec.go:54 msgid "An environment variable of the form HOME=/home/foo" msgstr "" #: lxc/image.go:330 lxc/info.go:90 #, fuzzy, c-format msgid "Architecture: %s" msgstr "Architektur: %s\n" #: lxc/image.go:351 #, c-format msgid "Auto update: %s" msgstr "" #: lxc/help.go:49 msgid "Available commands:" msgstr "" #: lxc/info.go:172 msgid "Bytes received" msgstr "" #: lxc/info.go:173 msgid "Bytes sent" msgstr "" #: lxc/config.go:270 msgid "COMMON NAME" msgstr "" #: lxc/list.go:379 msgid "CREATED AT" msgstr "" #: lxc/config.go:114 #, c-format msgid "Can't read from stdin: %s" msgstr "" #: lxc/config.go:127 lxc/config.go:160 lxc/config.go:182 #, c-format msgid "Can't unset key '%s', it's not currently set."
msgstr "" #: lxc/profile.go:417 msgid "Cannot provide container name to list" msgstr "" #: lxc/remote.go:206 #, fuzzy, c-format msgid "Certificate fingerprint: %x" msgstr "Fingerabdruck des Zertifikats: %x\n" #: lxc/action.go:28 #, fuzzy, c-format msgid "" "Changes state of one or more containers to %s.\n" "\n" "lxc %s [...]" msgstr "" "Ändert den Laufzustand eines Containers in %s.\n" "\n" "lxd %s \n" #: lxc/remote.go:279 msgid "Client certificate stored at server: " msgstr "Gespeichertes Nutzerzertifikat auf dem Server: " #: lxc/list.go:99 lxc/list.go:100 msgid "Columns" msgstr "" #: lxc/init.go:134 lxc/init.go:135 lxc/launch.go:40 lxc/launch.go:41 #, fuzzy msgid "Config key/value to apply to the new container" msgstr "kann nicht zum selben Container Namen kopieren" #: lxc/config.go:500 lxc/config.go:565 lxc/image.go:687 lxc/profile.go:215 #, fuzzy, c-format msgid "Config parsing error: %s" msgstr "YAML Analyse Fehler %v\n" #: lxc/main.go:37 msgid "Connection refused; is LXD running?"
msgstr "" #: lxc/publish.go:59 msgid "Container name is mandatory" msgstr "" #: lxc/init.go:210 #, c-format msgid "Container name is: %s" msgstr "" #: lxc/publish.go:141 lxc/publish.go:156 #, fuzzy, c-format msgid "Container published with fingerprint: %s" msgstr "Abbild mit Fingerabdruck %s importiert\n" #: lxc/image.go:155 msgid "Copy aliases from source" msgstr "Kopiere Aliasse von der Quelle" #: lxc/copy.go:22 #, fuzzy msgid "" "Copy containers within or in between lxd instances.\n" "\n" "lxc copy [remote:] [remote:] [--" "ephemeral|e]" msgstr "" "Kopiert Container innerhalb einer oder zwischen lxd Instanzen\n" "\n" "lxc copy \n" #: lxc/image.go:268 #, c-format msgid "Copying the image: %s" msgstr "" #: lxc/remote.go:221 msgid "Could not create server cert dir" msgstr "Kann Verzeichnis für Zertifikate auf dem Server nicht erstellen" #: lxc/snapshot.go:21 msgid "" "Create a read-only snapshot of a container.\n" "\n" "lxc snapshot [remote:] [--stateful]\n" "\n" "Creates a snapshot of the container (optionally with the container's memory\n" "state). When --stateful is used, LXD attempts to checkpoint the container's\n" "running state, including process memory state, TCP connections, etc. so that " "it\n" "can be restored (via lxc restore) at a later time (although some things, e."
"g.\n" "TCP connections after the TCP timeout window has expired, may not be " "restored\n" "successfully).\n" "\n" "Example:\n" "lxc snapshot u1 snap0" msgstr "" #: lxc/image.go:335 lxc/info.go:92 #, c-format msgid "Created: %s" msgstr "" #: lxc/init.go:177 lxc/launch.go:118 #, c-format msgid "Creating %s" msgstr "" #: lxc/init.go:175 #, fuzzy msgid "Creating the container" msgstr "kann nicht zum selben Container Namen kopieren" #: lxc/image.go:607 lxc/image.go:635 msgid "DESCRIPTION" msgstr "" #: lxc/delete.go:25 #, fuzzy msgid "" "Delete containers or container snapshots.\n" "\n" "lxc delete [remote:][/] [remote:][[/" "]...]\n" "\n" "Destroy containers or snapshots with any attached data (configuration, " "snapshots, ...)." msgstr "" "Löscht einen Container oder Container Sicherungspunkt.\n" "\n" "Entfernt einen Container (oder Sicherungspunkt) und alle dazugehörigen\n" "Daten (Konfiguration, Sicherungspunkte, ...).\n" #: lxc/config.go:617 #, fuzzy, c-format msgid "Device %s added to %s" msgstr "Gerät %s wurde zu %s hinzugefügt\n" #: lxc/config.go:804 #, fuzzy, c-format msgid "Device %s removed from %s" msgstr "Gerät %s wurde von %s entfernt\n" #: lxc/list.go:462 msgid "EPHEMERAL" msgstr "" #: lxc/config.go:272 msgid "EXPIRY DATE" msgstr "" #: lxc/main.go:55 msgid "Enables debug mode." msgstr "Aktiviert Debug Modus" #: lxc/main.go:54 msgid "Enables verbose mode." msgstr "Aktiviert ausführliche Ausgabe" #: lxc/help.go:68 msgid "Environment:" msgstr "" #: lxc/copy.go:29 lxc/copy.go:30 lxc/init.go:138 lxc/init.go:139 #: lxc/launch.go:44 lxc/launch.go:45 msgid "Ephemeral container" msgstr "Flüchtiger Container" #: lxc/monitor.go:56 msgid "Event type to listen for" msgstr "" #: lxc/exec.go:45 #, fuzzy msgid "" "Execute the specified command in a container.\n" "\n" "lxc exec [remote:]container [--mode=auto|interactive|non-interactive] [--env " "EDITOR=/usr/bin/vim]...
\n" "\n" "Mode defaults to non-interactive, interactive mode is selected if both stdin " "AND stdout are terminals (stderr is ignored)." msgstr "" "Führt den angegebenen Befehl in einem Container aus.\n" "\n" "lxc exec [--env EDITOR=/usr/bin/vim]... \n" #: lxc/image.go:339 #, c-format msgid "Expires: %s" msgstr "" #: lxc/image.go:341 msgid "Expires: never" msgstr "" #: lxc/config.go:269 lxc/image.go:605 lxc/image.go:634 msgid "FINGERPRINT" msgstr "" #: lxc/list.go:102 msgid "Fast mode (same as --columns=nsacPt" msgstr "" #: lxc/image.go:328 #, fuzzy, c-format msgid "Fingerprint: %s" msgstr "Fingerabdruck: %s\n" #: lxc/finger.go:17 #, fuzzy msgid "" "Fingers the LXD instance to check if it is up and working.\n" "\n" "lxc finger " msgstr "" "Fingert die LXD Instanz zum überprüfen ob diese funktionsfähig ist.\n" "\n" "lxc finger \n" #: lxc/action.go:37 msgid "Force the container to shutdown." msgstr "Herunterfahren des Containers erzwingen." #: lxc/delete.go:34 lxc/delete.go:35 msgid "Force the removal of stopped containers." msgstr "" #: lxc/main.go:56 msgid "Force using the local unix socket." msgstr "" #: lxc/list.go:101 msgid "Format" msgstr "" #: lxc/main.go:138 #, fuzzy msgid "Generating a client certificate. This may take a minute..." msgstr "Generiere Nutzerzertifikat. Dies kann wenige Minuten dauern...\n" #: lxc/list.go:376 msgid "IPV4" msgstr "" #: lxc/list.go:377 msgid "IPV6" msgstr "" #: lxc/config.go:271 msgid "ISSUE DATE" msgstr "" #: lxc/main.go:146 msgid "" "If this is your first time using LXD, you should also run: sudo lxd init" msgstr "" #: lxc/main.go:57 msgid "Ignore aliases when determining what command to run." msgstr "" #: lxc/action.go:40 #, fuzzy msgid "Ignore the container state (only for start)." msgstr "Herunterfahren des Containers erzwingen." #: lxc/image.go:273 msgid "Image copied successfully!"
msgstr "" #: lxc/image.go:419 #, fuzzy, c-format msgid "Image imported with fingerprint: %s" msgstr "Abbild mit Fingerabdruck %s importiert\n" #: lxc/init.go:73 #, fuzzy msgid "" "Initialize a container from a particular image.\n" "\n" "lxc init [remote:] [remote:][] [--ephemeral|-e] [--profile|-p " "...] [--config|-c ...]\n" "\n" "Initializes a container using the specified image and name.\n" "\n" "Not specifying -p will result in the default profile.\n" "Specifying \"-p\" with no argument will result in no profile.\n" "\n" "Example:\n" "lxc init ubuntu u1" msgstr "" "Starte Container von gegebenem Abbild.\n" "\n" "lxc launch [] [--ephemeral|-e] [--profile|-p ...]\n" "\n" "Startet einen Container von gegebenem Abbild und mit Namen\n" "\n" "Ohne den -p Parameter wird das default Profil benutzt.\n" "Wird -p ohne Argument angegeben, wird kein Profil verwendet\n" "\n" "Beispiel:\n" "lxc launch ubuntu u1\n" #: lxc/remote.go:122 #, c-format msgid "Invalid URL scheme \"%s\" in \"%s\"" msgstr "" #: lxc/init.go:30 lxc/init.go:35 msgid "Invalid configuration key" msgstr "" #: lxc/file.go:190 #, c-format msgid "Invalid source %s" msgstr "Ungültige Quelle %s" #: lxc/file.go:57 #, c-format msgid "Invalid target %s" msgstr "Ungültiges Ziel %s" #: lxc/info.go:121 msgid "Ips:" msgstr "" #: lxc/image.go:156 msgid "Keep the image up to date after initial copy" msgstr "" #: lxc/main.go:35 msgid "LXD socket not found; is LXD running?" msgstr "" #: lxc/launch.go:22 #, fuzzy msgid "" "Launch a container from a particular image.\n" "\n" "lxc launch [remote:] [remote:][] [--ephemeral|-e] [--profile|-p " "...]
[--config|-c ...]\n" "\n" "Launches a container using the specified image and name.\n" "\n" "Not specifying -p will result in the default profile.\n" "Specifying \"-p\" with no argument will result in no profile.\n" "\n" "Example:\n" "lxc launch ubuntu:16.04 u1" msgstr "" "Starte Container von gegebenem Abbild.\n" "\n" "lxc launch [] [--ephemeral|-e] [--profile|-p ...]\n" "\n" "Startet einen Container von gegebenem Abbild und mit Namen\n" "\n" "Ohne den -p Parameter wird das default Profil benutzt.\n" "Wird -p ohne Argument angegeben, wird kein Profil verwendet\n" "\n" "Beispiel:\n" "lxc launch ubuntu u1\n" #: lxc/info.go:25 #, fuzzy msgid "" "List information on LXD servers and containers.\n" "\n" "For a container:\n" " lxc info [:]container [--show-log]\n" "\n" "For a server:\n" " lxc info [:]" msgstr "" "Listet Informationen über Container.\n" "\n" "Dies wird entfernte Instanzen und Abbilder unterstützen, \n" "zur Zeit jedoch nur Container.\n" "\n" "lxc info [:]Container [--show-log]\n" #: lxc/list.go:67 #, fuzzy msgid "" "Lists the available resources.\n" "\n" "lxc list [resource] [filters] [--format table|json] [-c columns] [--fast]\n" "\n" "The filters are:\n" "* A single keyword like \"web\" which will list any container with a name " "starting by \"web\".\n" "* A regular expression on the container name. (e.g. .*web.*01$)\n" "* A key/value pair referring to a configuration item. For those, the " "namespace can be abreviated to the smallest unambiguous identifier:\n" " * \"user.blah=abc\" will list all containers with the \"blah\" user " "property set to \"abc\".\n" " * \"u.blah=abc\" will do the same\n" " * \"security.privileged=1\" will list all privileged containers\n" " * \"s.privileged=1\" will do the same\n" "* A regular expression matching a configuration item or its value. (e.g.
" "volatile.eth0.hwaddr=00:16:3e:.*)\n" "\n" "Columns for table format are:\n" "* 4 - IPv4 address\n" "* 6 - IPv6 address\n" "* a - architecture\n" "* c - creation date\n" "* n - name\n" "* p - pid of container init process\n" "* P - profiles\n" "* s - state\n" "* S - number of snapshots\n" "* t - type (persistent or ephemeral)\n" "\n" "Default column layout: ns46tS\n" "Fast column layout: nsacPt" msgstr "" "Listet vorhandene Ressourcen.\n" "\n" "lxc list [Resource] [Filter]\n" "\n" "Filter sind:\n" "* Ein einzelnes Schlüsselwort wie \"web\", was alle Container mit \"web\" im " "Namen listet.\n" "* Ein key/value Paar bezüglich eines Konfigurationsparameters. Dafür kann " "der Namensraum, solange eindeutig, abgekürzt werden:\n" "* \"user.blah=abc\" listet alle Container mit der \"blah\" Benutzer " "Eigenschaft \"abc\"\n" "* \"u.blah=abc\" ebenfalls\n" "* \"security.privileged=1\" listet alle privilegierten Container\n" "* \"s.privileged=1\" ebenfalls\n" #: lxc/info.go:225 msgid "Log:" msgstr "" #: lxc/image.go:154 msgid "Make image public" msgstr "Veröffentliche Abbild" #: lxc/publish.go:32 #, fuzzy msgid "Make the image public" msgstr "Veröffentliche Abbild" #: lxc/profile.go:48 #, fuzzy msgid "" "Manage configuration profiles.\n" "\n" "lxc profile list [filters] List available profiles.\n" "lxc profile show Show details of a profile.\n" "lxc profile create Create a profile.\n" "lxc profile copy Copy the profile to the " "specified remote.\n" "lxc profile get Get profile configuration.\n" "lxc profile set Set profile configuration.\n" "lxc profile delete Delete a profile.\n" "lxc profile edit \n" " Edit profile, either by launching external editor or reading STDIN.\n" " Example: lxc profile edit # launch editor\n" " cat profile.yml | lxc profile edit # read from " "profile.yml\n" "\n" "lxc profile apply \n" " Apply a comma-separated list of profiles to a container, in order.\n" " All profiles passed in this call (and only those) will be applied\n" " to the
specified container.\n" " Example: lxc profile apply foo default,bar # Apply default and bar\n" " lxc profile apply foo default # Only default is active\n" " lxc profile apply '' # no profiles are applied anymore\n" " lxc profile apply bar,default # Apply default second now\n" "lxc profile apply-add \n" "lxc profile apply-remove \n" "\n" "Devices:\n" "lxc profile device list List " "devices in the given profile.\n" "lxc profile device show Show " "full device details in the given profile.\n" "lxc profile device remove Remove a " "device from a profile.\n" "lxc profile device get <[remote:]profile> Get a " "device property.\n" "lxc profile device set <[remote:]profile> Set a " "device property.\n" "lxc profile device unset <[remote:]profile> Unset a " "device property.\n" "lxc profile device add " "[key=value]...\n" " Add a profile device, such as a disk or a nic, to the containers\n" " using the specified profile." msgstr "" "Verwalte Konfigurationsprofile.\n" "\n" "lxc profile list [Filter] Listet verfügbare Profile\n" "lxc profile show Zeigt Details zu einem Profil\n" "lxc profile create Erstellt ein Profil\n" "lxc profile edit Bearbeitet das Profil in " "externem Editor\n" "lxc profile copy Kopiert das Profil zur " "entfernten Instanz\n" "lxc profile set Setzt eine " "Profilkonfiguration\n" "lxc profile delete Löscht ein Profil\n" "lxc profile apply \n" " Wendet eine durch Kommata getrennte Liste von Profilen,\n" " in gegeben Reihenfolge, auf einen Container an.\n" " Alle angegeben, und nur diese, werden auf den Container angewandt.\n" " Beispiel: lxc profile apply foo default,bar # Wende default und bar an\n" " lxc profile apply foo default # Nur default ist aktiv\n" " lxc profile apply '' # keine Profile werden angewandt\n" " lxc profile apply bar,default # Wende nun default als zweites " "an\n" "\n" "Geräte:\n" "lxc profile device list Listet Geräte im Profil\n" "lxc profile device show Zeigt alle Geräte Details im " "gegebenen Profil.\n" "lxc profile
device remove Entfernt ein Gerรคt von dem " "Profil.\n" "lxc profile device add " "[key=value]...\n" " Fรผgt ein Gerรคt, wie zum Beispiel eine Festplatte oder Netzwerkkarte, den " "Containern hinzu,\n" " die dieses Profil verwenden.\n" #: lxc/config.go:58 #, fuzzy msgid "" "Manage configuration.\n" "\n" "lxc config device add <[remote:]container> [key=value]... " "Add a device to a container.\n" "lxc config device get <[remote:]container> " "Get a device property.\n" "lxc config device set <[remote:]container> " "Set a device property.\n" "lxc config device unset <[remote:]container> " "Unset a device property.\n" "lxc config device list <[remote:]container> " "List devices for container.\n" "lxc config device show <[remote:]container> " "Show full device details for container.\n" "lxc config device remove <[remote:]container> " "Remove device from container.\n" "\n" "lxc config get [remote:][container] " "Get container or server configuration key.\n" "lxc config set [remote:][container] " "Set container or server configuration key.\n" "lxc config unset [remote:][container] " "Unset container or server configuration key.\n" "lxc config show [remote:][container] [--expanded] " "Show container or server configuration.\n" "lxc config edit [remote:][container] " "Edit container or server configuration in external editor.\n" " Edit configuration, either by launching external editor or reading " "STDIN.\n" " Example: lxc config edit # launch editor\n" " cat config.yml | lxc config edit # read from config." 
"yml\n" "\n" "lxc config trust list [remote] " "List all trusted certs.\n" "lxc config trust add [remote] " "Add certfile.crt to trusted hosts.\n" "lxc config trust remove [remote] [hostname|fingerprint] " "Remove the cert from trusted hosts.\n" "\n" "Examples:\n" "To mount host's /share/c1 onto /opt in the container:\n" " lxc config device add [remote:]container1 disk source=/" "share/c1 path=opt\n" "\n" "To set an lxc config value:\n" " lxc config set [remote:] raw.lxc 'lxc.aa_allow_incomplete = " "1'\n" "\n" "To listen on IPv4 and IPv6 port 8443 (you can omit the 8443 its the " "default):\n" " lxc config set core.https_address [::]:8443\n" "\n" "To set the server trust password:\n" " lxc config set core.trust_password blah" msgstr "" "Verwalte Konfiguration.\n" "\n" "lxc config device add [key=value]...\n" " Fรผge ein Gerรคt zu einem Container hinzu\n" "lxc config device list Listet die Gerรคte des " "Containers\n" "lxc config device show Zeigt alle Gerรคte " "Details fรผr den Container\n" "lxc config device remove Entfernt Gerรคt vom " "Container\n" "lxc config edit Bearbeite Container " "Konfiguration in externem Editor\n" "lxc config get key Holt " "Konfigurationsschlรผssel\n" "lxc config set key [value] Setzt Container " "Konfigurationsschlรผssel\n" "lxc config show Zeigt Konfiguration " "des Containers\n" "lxc config trust list [remote] Liste alle " "vertrauenswรผrdigen Zertifikate.\n" "lxc config trust add [remote] [certfile.crt] Fรผge certfile.crt zu " "vertrauenden Instanzen hinzu.\n" "lxc config trust remove [remote] [hostname|fingerprint]\n" " Lรถscht das Zertifikat aus der Liste der vertrauenswรผrdigen.\n" "\n" "Beispiele:\n" "Zum mounten von /share/c1 des Hosts nach /opt im Container:\n" "\tlxc config device add container1 mntdir disk source=/share/c1 path=opt\n" "Um einen lxc config Wert zu setzen:\n" "\tlxc config set raw.lxc 'lxc.aa_allow_incomplete = 1'\n" "Um das Server Passwort zur authentifizierung zu setzen:\n" "\tlxc config set 
core.trust_password blah\n" #: lxc/file.go:32 #, fuzzy msgid "" "Manage files on a container.\n" "\n" "lxc file pull [...] \n" "lxc file push [--uid=UID] [--gid=GID] [--mode=MODE] [...] " "\n" "lxc file edit \n" "\n" " in the case of pull, in the case of push and in the " "case of edit are /" msgstr "" "Verwaltet Dateien in einem Container.\n" "\n" "lxc file pull [...] \n" "lxc file push [--uid=UID] [--gid=GID] [--mode=MODE] [...] " "\n" "\n" " bei pull und bei push sind jeweils von der Form /\n" #: lxc/remote.go:39 #, fuzzy msgid "" "Manage remote LXD servers.\n" "\n" "lxc remote add [--accept-certificate] [--password=PASSWORD]\n" " [--public] [--protocol=PROTOCOL] " "Add the remote at .\n" "lxc remote remove " "Remove the remote .\n" "lxc remote list " "List all remotes.\n" "lxc remote rename " "Rename remote to .\n" "lxc remote set-url " "Update 's url to .\n" "lxc remote set-default " "Set the default remote.\n" "lxc remote get-default " "Print the default remote." msgstr "" "Verwalte entfernte LXD Server.\n" "\n" "lxc remote add [--accept-certificate] [--password=PASSWORT] " "Fรผgt die Instanz auf hinzu.\n" "lxc remote remove " "Entfernt die Instanz .\n" "lxc remote list " "Listet alle entfernte Instanzen.\n" "lxc remote rename " "Benennt Instanz von nach um.\n" "lxc remote set-url " "Setzt die URL von auf .\n" "lxc remote set-default " "Setzt die Standard Instanz.\n" "lxc remote get-default " "Gibt die Standard Instanz aus.\n" #: lxc/image.go:93 msgid "" "Manipulate container images.\n" "\n" "In LXD containers are created from images. 
Those images were themselves\n" "either generated from an existing container or downloaded from an image\n" "server.\n" "\n" "When using remote images, LXD will automatically cache images for you\n" "and remove them upon expiration.\n" "\n" "The image unique identifier is the hash (sha-256) of its representation\n" "as a compressed tarball (or for split images, the concatenation of the\n" "metadata and rootfs tarballs).\n" "\n" "Images can be referenced by their full hash, shortest unique partial\n" "hash or alias name (if one is set).\n" "\n" "\n" "lxc image import [rootfs tarball|URL] [remote:] [--public] [--" "created-at=ISO-8601] [--expires-at=ISO-8601] [--fingerprint=FINGERPRINT] [--" "alias=ALIAS].. [prop=value]\n" " Import an image tarball (or tarballs) into the LXD image store.\n" "\n" "lxc image copy [remote:] : [--alias=ALIAS].. [--copy-aliases] " "[--public] [--auto-update]\n" " Copy an image from one LXD daemon to another over the network.\n" "\n" " The auto-update flag instructs the server to keep this image up to\n" " date. It requires the source to be an alias and for it to be public.\n" "\n" "lxc image delete [remote:]\n" " Delete an image from the LXD image store.\n" "\n" "lxc image export [remote:]\n" " Export an image from the LXD image store into a distributable tarball.\n" "\n" "lxc image info [remote:]\n" " Print everything LXD knows about a given image.\n" "\n" "lxc image list [remote:] [filter]\n" " List images in the LXD image store. 
Filters may be of the\n" " = form for property based filtering, or part of the image\n" " hash or part of the image alias name.\n" "\n" "lxc image show [remote:]\n" " Yaml output of the user modifiable properties of an image.\n" "\n" "lxc image edit [remote:]\n" " Edit image, either by launching external editor or reading STDIN.\n" " Example: lxc image edit # launch editor\n" " cat image.yml | lxc image edit # read from image.yml\n" "\n" "lxc image alias create [remote:] \n" " Create a new alias for an existing image.\n" "\n" "lxc image alias delete [remote:]\n" " Delete an alias.\n" "\n" "lxc image alias list [remote:] [filter]\n" " List the aliases. Filters may be part of the image hash or part of the " "image alias name.\n" msgstr "" #: lxc/info.go:147 msgid "Memory (current)" msgstr "" #: lxc/info.go:151 msgid "Memory (peak)" msgstr "" #: lxc/help.go:86 msgid "Missing summary." msgstr "Fehlende Zusammenfassung." #: lxc/monitor.go:41 msgid "" "Monitor activity on the LXD server.\n" "\n" "lxc monitor [remote:] [--type=TYPE...]\n" "\n" "Connects to the monitoring interface of the specified LXD server.\n" "\n" "By default will listen to all message types.\n" "Specific types to listen to can be specified with --type.\n" "\n" "Example:\n" "lxc monitor --type=logging" msgstr "" #: lxc/file.go:178 msgid "More than one file to download, but target is not a directory" msgstr "" "Mehr als eine Datei herunterzuladen, aber das Ziel ist kein Verzeichnis" #: lxc/move.go:17 #, fuzzy msgid "" "Move containers within or in between lxd instances.\n" "\n" "lxc move [remote:] [remote:]\n" " Move a container between two hosts, renaming it if destination name " "differs.\n" "\n" "lxc move \n" " Rename a local container.\n" msgstr "" "Verschiebt Container innerhalb einer oder zwischen lxd Instanzen\n" "\n" "lxc move \n" #: lxc/action.go:63 #, fuzzy msgid "Must supply container name for: " msgstr "der Name des Ursprung Containers muss angegeben werden" #: lxc/list.go:380 
lxc/remote.go:363
msgid "NAME"
msgstr ""

#: lxc/remote.go:337 lxc/remote.go:342
msgid "NO"
msgstr ""

#: lxc/info.go:89
#, c-format
msgid "Name: %s"
msgstr ""

#: lxc/image.go:157 lxc/publish.go:33
msgid "New alias to define at target"
msgstr ""

#: lxc/config.go:281
#, fuzzy
msgid "No certificate provided to add"
msgstr "Kein Zertifikat zum hinzufügen bereitgestellt"

#: lxc/config.go:304
msgid "No fingerprint specified."
msgstr "Kein Fingerabdruck angegeben."

#: lxc/remote.go:107
msgid "Only https URLs are supported for simplestreams"
msgstr ""

#: lxc/image.go:411
msgid "Only https:// is supported for remote image import."
msgstr ""

#: lxc/help.go:63 lxc/main.go:122
msgid "Options:"
msgstr ""

#: lxc/image.go:506
#, c-format
msgid "Output is in %s"
msgstr ""

#: lxc/exec.go:55
msgid "Override the terminal mode (auto, interactive or non-interactive)"
msgstr ""

#: lxc/list.go:464
msgid "PERSISTENT"
msgstr ""

#: lxc/list.go:381
msgid "PID"
msgstr ""

#: lxc/list.go:382
msgid "PROFILES"
msgstr ""

#: lxc/remote.go:365
msgid "PROTOCOL"
msgstr ""

#: lxc/image.go:606 lxc/remote.go:366
msgid "PUBLIC"
msgstr ""

#: lxc/info.go:174
msgid "Packets received"
msgstr ""

#: lxc/info.go:175
msgid "Packets sent"
msgstr ""

#: lxc/help.go:69
#, fuzzy
msgid "Path to an alternate client configuration directory."
msgstr "Alternatives config Verzeichnis."

#: lxc/help.go:70
#, fuzzy
msgid "Path to an alternate server directory."
msgstr "Alternatives config Verzeichnis."

#: lxc/main.go:39
msgid "Permisson denied, are you in the lxd group?"
msgstr ""

#: lxc/info.go:103
#, c-format
msgid "Pid: %d"
msgstr ""

#: lxc/help.go:25
#, fuzzy
msgid ""
"Presents details on how to use LXD.\n"
"\n"
"lxd help [--all]"
msgstr ""
"Zeigt Details über die Benutzung von LXD an.\n"
"\n"
"lxd help [--all]\n"

#: lxc/profile.go:216
msgid "Press enter to open the editor again"
msgstr ""

#: lxc/config.go:501 lxc/config.go:566 lxc/image.go:688
msgid "Press enter to start the editor again"
msgstr ""

#: lxc/help.go:65
msgid "Print debug information."
msgstr ""

#: lxc/help.go:64
msgid "Print less common commands."
msgstr ""

#: lxc/help.go:66
msgid "Print verbose information."
msgstr ""

#: lxc/version.go:18
#, fuzzy
msgid ""
"Prints the version number of this client tool.\n"
"\n"
"lxc version"
msgstr ""
"Zeigt die Versionsnummer von LXD.\n"
"\n"
"lxc version\n"

#: lxc/info.go:127
#, fuzzy, c-format
msgid "Processes: %d"
msgstr "Profil %s erstellt\n"

#: lxc/profile.go:272
#, fuzzy, c-format
msgid "Profile %s added to %s"
msgstr "Profil %s wurde auf %s angewandt\n"

#: lxc/profile.go:167
#, fuzzy, c-format
msgid "Profile %s created"
msgstr "Profil %s erstellt\n"

#: lxc/profile.go:237
#, fuzzy, c-format
msgid "Profile %s deleted"
msgstr "Profil %s gelöscht\n"

#: lxc/profile.go:303
#, fuzzy, c-format
msgid "Profile %s removed from %s"
msgstr "Gerät %s wurde von %s entfernt\n"

#: lxc/init.go:136 lxc/init.go:137 lxc/launch.go:42 lxc/launch.go:43
#, fuzzy
msgid "Profile to apply to the new container"
msgstr "kann nicht zum selben Container Namen kopieren"

#: lxc/profile.go:253
#, fuzzy, c-format
msgid "Profiles %s applied to %s"
msgstr "Profil %s wurde auf %s angewandt\n"

#: lxc/info.go:101
#, fuzzy, c-format
msgid "Profiles: %s"
msgstr "Profil %s erstellt\n"

#: lxc/image.go:343
#, fuzzy
msgid "Properties:"
msgstr "Eigenschaften:\n"

#: lxc/remote.go:56
msgid "Public image server"
msgstr ""

#: lxc/image.go:331
#, fuzzy, c-format
msgid "Public: %s"
msgstr "Öffentlich: %s\n"

#: lxc/publish.go:25
msgid ""
"Publish containers as images.\n"
"\n"
"lxc publish [remote:]container [remote:] [--alias=ALIAS]... [prop-key=prop-"
"value]..."
msgstr ""

#: lxc/remote.go:54
msgid "Remote admin password"
msgstr "Entferntes Administrator Passwort"

#: lxc/delete.go:42
#, c-format
msgid "Remove %s (yes/no): "
msgstr ""

#: lxc/delete.go:36 lxc/delete.go:37
msgid "Require user confirmation."
msgstr ""

#: lxc/info.go:124
msgid "Resources:"
msgstr ""

#: lxc/init.go:247
#, c-format
msgid "Retrieving image: %s"
msgstr ""

#: lxc/image.go:609
msgid "SIZE"
msgstr ""

#: lxc/list.go:383
msgid "SNAPSHOTS"
msgstr ""

#: lxc/list.go:384
msgid "STATE"
msgstr ""

#: lxc/remote.go:367
msgid "STATIC"
msgstr ""

#: lxc/remote.go:214
msgid "Server certificate NACKed by user"
msgstr "Server Zertifikat vom Benutzer nicht akzeptiert"

#: lxc/remote.go:276
msgid "Server doesn't trust us after adding our cert"
msgstr ""
"Der Server vertraut uns nicht nachdem er unser Zertifikat hinzugefügt hat"

#: lxc/remote.go:55
msgid "Server protocol (lxd or simplestreams)"
msgstr ""

#: lxc/restore.go:21
msgid ""
"Set the current state of a resource back to a snapshot.\n"
"\n"
"lxc restore [remote:] [--stateful]\n"
"\n"
"Restores a container from a snapshot (optionally with running state, see\n"
"snapshot help for details).\n"
"\n"
"For example:\n"
"lxc snapshot u1 snap0 # create the snapshot\n"
"lxc restore u1 snap0 # restore the snapshot"
msgstr ""

#: lxc/file.go:44
msgid "Set the file's gid on push"
msgstr "Setzt die gid der Datei beim Übertragen"

#: lxc/file.go:45
msgid "Set the file's perms on push"
msgstr "Setzt die Dateiberechtigungen beim Übertragen"

#: lxc/file.go:43
msgid "Set the file's uid on push"
msgstr "Setzt die uid der Datei beim Übertragen"

#: lxc/help.go:32
msgid "Show all commands (not just interesting ones)"
msgstr "Zeigt alle Befehle (nicht nur die interessanten)"

#: lxc/info.go:36
msgid "Show the container's last 100 log lines?"
msgstr "Zeige die letzten 100 Zeilen Protokoll des Containers?"
#: lxc/image.go:329
#, fuzzy, c-format
msgid "Size: %.2fMB"
msgstr "Größe: %.2fMB\n"

#: lxc/info.go:194
msgid "Snapshots:"
msgstr ""

#: lxc/image.go:353
msgid "Source:"
msgstr ""

#: lxc/launch.go:124
#, c-format
msgid "Starting %s"
msgstr ""

#: lxc/info.go:95
#, c-format
msgid "Status: %s"
msgstr ""

#: lxc/publish.go:34 lxc/publish.go:35
msgid "Stop the container if currently running"
msgstr ""

#: lxc/delete.go:106 lxc/publish.go:111
msgid "Stopping container failed!"
msgstr "Anhalten des Containers fehlgeschlagen!"

#: lxc/action.go:39
#, fuzzy
msgid "Store the container state (only for stop)."
msgstr "Herunterfahren des Containers erzwingen."

#: lxc/info.go:155
msgid "Swap (current)"
msgstr ""

#: lxc/info.go:159
msgid "Swap (peak)"
msgstr ""

#: lxc/list.go:385
msgid "TYPE"
msgstr ""

#: lxc/delete.go:92
msgid "The container is currently running, stop it first or pass --force."
msgstr ""

#: lxc/publish.go:89
msgid ""
"The container is currently running. Use --force to have it stopped and "
"restarted."
msgstr ""

#: lxc/config.go:645 lxc/config.go:657 lxc/config.go:690 lxc/config.go:708
#: lxc/config.go:746 lxc/config.go:764
#, fuzzy
msgid "The device doesn't exist"
msgstr "entfernte Instanz %s existiert nicht"

#: lxc/init.go:277
#, c-format
msgid "The local image '%s' couldn't be found, trying '%s:' instead."
msgstr ""

#: lxc/publish.go:62
msgid "There is no \"image name\". Did you want an alias?"
msgstr ""

#: lxc/action.go:36
msgid "Time to wait for the container before killing it."
msgstr "Wartezeit bevor der Container gestoppt wird."
#: lxc/image.go:332
#, fuzzy
msgid "Timestamps:"
msgstr "Zeitstempel:\n"

#: lxc/main.go:147
msgid "To start your first container, try: lxc launch ubuntu:16.04"
msgstr ""

#: lxc/image.go:402
#, c-format
msgid "Transferring image: %d%%"
msgstr ""

#: lxc/action.go:93 lxc/launch.go:132
#, c-format
msgid "Try `lxc info --show-log %s` for more info"
msgstr ""

#: lxc/info.go:97
msgid "Type: ephemeral"
msgstr ""

#: lxc/info.go:99
msgid "Type: persistent"
msgstr ""

#: lxc/image.go:610
msgid "UPLOAD DATE"
msgstr ""

#: lxc/remote.go:364
msgid "URL"
msgstr ""

#: lxc/remote.go:82
msgid "Unable to read remote TLS certificate"
msgstr ""

#: lxc/image.go:337
#, c-format
msgid "Uploaded: %s"
msgstr ""

#: lxc/main.go:122
#, fuzzy, c-format
msgid "Usage: %s"
msgstr ""
"Benutzung: %s\n"
"\n"
"Optionen:\n"
"\n"

#: lxc/help.go:48
#, fuzzy
msgid "Usage: lxc [subcommand] [options]"
msgstr ""
"Benutzung: lxc [Unterbefehl] [Optionen]\n"
"Verfügbare Befehle:\n"

#: lxc/delete.go:46
msgid "User aborted delete operation."
msgstr ""

#: lxc/restore.go:35
msgid ""
"Whether or not to restore the container's running state from snapshot (if "
"available)"
msgstr ""
"Laufenden Zustand des Containers aus dem Sicherungspunkt (falls vorhanden) "
"wiederherstellen oder nicht"

#: lxc/snapshot.go:38
msgid "Whether or not to snapshot the container's running state"
msgstr "Zustand des laufenden Containers sichern oder nicht"

#: lxc/config.go:33
msgid "Whether to show the expanded configuration"
msgstr ""

#: lxc/remote.go:339 lxc/remote.go:344
msgid "YES"
msgstr ""

#: lxc/main.go:66
msgid "`lxc config profile` is deprecated, please use `lxc profile`"
msgstr ""

#: lxc/launch.go:111
msgid "bad number of things scanned from image, container or snapshot"
msgstr ""
"Falsche Anzahl an Objekten im Abbild, Container oder Sicherungspunkt gelesen."
#: lxc/action.go:89
msgid "bad result type from action"
msgstr ""

#: lxc/copy.go:78
msgid "can't copy to the same container name"
msgstr "kann nicht zum selben Container Namen kopieren"

#: lxc/remote.go:327
msgid "can't remove the default remote"
msgstr ""

#: lxc/remote.go:353
msgid "default"
msgstr ""

#: lxc/init.go:200 lxc/init.go:205 lxc/launch.go:95 lxc/launch.go:100
msgid "didn't get any affected image, container or snapshot from server"
msgstr ""

#: lxc/image.go:323
msgid "disabled"
msgstr ""

#: lxc/image.go:325
msgid "enabled"
msgstr ""

#: lxc/main.go:25 lxc/main.go:159
#, fuzzy, c-format
msgid "error: %v"
msgstr "Fehler: %v\n"

#: lxc/help.go:40 lxc/main.go:117
#, fuzzy, c-format
msgid "error: unknown command: %s"
msgstr "Fehler: unbekannter Befehl: %s\n"

#: lxc/launch.go:115
msgid "got bad version"
msgstr "Versionskonflikt"

#: lxc/image.go:318 lxc/image.go:586
msgid "no"
msgstr ""

#: lxc/copy.go:101
msgid "not all the profiles from the source exist on the target"
msgstr "nicht alle Profile der Quelle sind am Ziel vorhanden."

#: lxc/remote.go:207
#, fuzzy
msgid "ok (y/n)?"
msgstr "OK (y/n)? "

#: lxc/main.go:266 lxc/main.go:270
#, c-format
msgid "processing aliases failed %s\n"
msgstr ""

#: lxc/remote.go:389
#, c-format
msgid "remote %s already exists"
msgstr "entfernte Instanz %s existiert bereits"

#: lxc/remote.go:319 lxc/remote.go:381 lxc/remote.go:416 lxc/remote.go:432
#, c-format
msgid "remote %s doesn't exist"
msgstr "entfernte Instanz %s existiert nicht"

#: lxc/remote.go:302
#, c-format
msgid "remote %s exists as <%s>"
msgstr "entfernte Instanz %s existiert als <%s>"

#: lxc/remote.go:323 lxc/remote.go:385 lxc/remote.go:420
#, c-format
msgid "remote %s is static and cannot be modified"
msgstr ""

#: lxc/info.go:205
msgid "stateful"
msgstr ""

#: lxc/info.go:207
msgid "stateless"
msgstr ""

#: lxc/info.go:201
#, c-format
msgid "taken at %s"
msgstr ""

#: lxc/exec.go:166
msgid "unreachable return reached"
msgstr ""

#: lxc/main.go:199
msgid "wrong number of subcommand arguments"
msgstr "falsche Anzahl an Parametern für Unterbefehl"

#: lxc/delete.go:45 lxc/image.go:320 lxc/image.go:590
msgid "yes"
msgstr ""

#: lxc/copy.go:38
msgid "you must specify a source container name"
msgstr "der Name des Ursprung Containers muss angegeben werden"

#, fuzzy
#~ msgid "For example: 'lxd-images import ubuntu --alias ubuntu'."
#~ msgstr "Zum Beispiel: 'lxd-images import ubuntu --alias ubuntu'.\n"

#, fuzzy
#~ msgid ""
#~ "If this is your first run, you will need to import images using the 'lxd-"
#~ "images' script."
#~ msgstr "" #~ "Falls dies der erste Start ist, sollten Sie mit dem 'lxd-images' Script " #~ "Abbilder importieren.\n" #, fuzzy #~ msgid "Bad image property: %s" #~ msgstr "Ungรผltige Abbild Eigenschaft: %s\n" #~ msgid "Cannot change profile name" #~ msgstr "Profilname kann nicht geรคndert werden" #, fuzzy #~ msgid "" #~ "Create a read-only snapshot of a container.\n" #~ "\n" #~ "lxc snapshot [remote:] [--stateful]" #~ msgstr "" #~ "Erstellt einen schreibgeschรผtzten Sicherungspunkt des Containers.\n" #~ "\n" #~ "lxc snapshot [--stateful]\n" #, fuzzy #~ msgid "Error adding alias %s" #~ msgstr "Fehler beim hinzufรผgen des Alias %s\n" #, fuzzy #~ msgid "" #~ "Manipulate container images.\n" #~ "\n" #~ "lxc image import [rootfs tarball|URL] [target] [--public] [--" #~ "created-at=ISO-8601] [--expires-at=ISO-8601] [--fingerprint=FINGERPRINT] " #~ "[prop=value]\n" #~ "\n" #~ "lxc image copy [remote:] : [--alias=ALIAS].. [--copy-" #~ "aliases] [--public]\n" #~ "lxc image delete [remote:]\n" #~ "lxc image export [remote:]\n" #~ "lxc image info [remote:]\n" #~ "lxc image list [remote:] [filter]\n" #~ "lxc image show [remote:]\n" #~ "lxc image edit [remote:]\n" #~ " Edit image, either by launching external editor or reading STDIN.\n" #~ " Example: lxc image edit # launch editor\n" #~ " cat image.yml | lxc image edit # read from image." #~ "yml\n" #~ "\n" #~ "Lists the images at specified remote, or local images.\n" #~ "Filters are not yet supported.\n" #~ "\n" #~ "lxc image alias create \n" #~ "lxc image alias delete \n" #~ "lxc image alias list [remote:]\n" #~ "\n" #~ "Create, delete, list image aliases. Example:\n" #~ "lxc remote add store2 images.linuxcontainers.org\n" #~ "lxc image alias list store2:" #~ msgstr "" #~ "ร„ndere Container Abbilder\n" #~ "\n" #~ "lxc image import [Ziel] [--public] [--created-at=ISO-8601] [--" #~ "expires-at=ISO-8601] [--fingerprint=FINGERPRINT] [prop=value]\n" #~ "\n" #~ "lxc image copy [remote:] : [--alias=ALIAS].. 
[--copy-" #~ "alias]\n" #~ "lxc image delete [remote:]\n" #~ "lxc image edit [remote:]\n" #~ "lxc image export [remote:]\n" #~ "lxc image info [remote:]\n" #~ "lxc image list [remote:] [Filter]\n" #~ "lxc image show [remote:]\n" #~ "\n" #~ "Listet die Abbilder auf der entfernten oder lokalen Instanz.\n" #~ "Filter werden noch nicht unterstรผtzt.\n" #~ "\n" #~ "lxc image alias create \n" #~ "lxc image alias delete \n" #~ "lxc remote add images images.linuxcontainers.org\n" #~ "lxc image alias list images:\n" #~ "erstelle, lรถsche und liste Abbild Aliasse\n" #~ msgid "No certificate on this connection" #~ msgstr "Kein Zertifikat fรผr diese Verbindung" #~ msgid "" #~ "Server certificate for host %s has changed. Add correct certificate or " #~ "remove certificate in %s" #~ msgstr "" #~ "Server Zertifikat fรผr Rechner %s hat sich geรคndert. Fรผrgen Sie das " #~ "richtige Zertifikat hinzu oder lรถschen Sie das Zertifikat unter %s" #, fuzzy #~ msgid "" #~ "Set the current state of a resource back to its state at the time the " #~ "snapshot was created.\n" #~ "\n" #~ "lxc restore [remote:] [--stateful]" #~ msgstr "" #~ "Setzt eine Ressource auf einen Sicherungspunkt zurรผck.\n" #~ "\n" #~ "lxc restore [โ€”stateful]\n" #~ msgid "api version mismatch: mine: %q, daemon: %q" #~ msgstr "API Versionskonflikt: meine: %q, Hintergrund Dienst: %q" #~ msgid "bad profile url %s" #~ msgstr "Fehlerhafte Profil URL %s" #~ msgid "bad version in profile url" #~ msgstr "Falsche Version in Profil URL" #, fuzzy #~ msgid "device already exists" #~ msgstr "entfernte Instanz %s existiert bereits" #, fuzzy #~ msgid "error." #~ msgstr "Fehler: %v\n" #~ msgid "no response!" #~ msgstr "keine Antwort!" 
#, fuzzy
#~ msgid "no value found in %q"
#~ msgstr "kein Wert in %q gefunden\n"

#~ msgid "unknown remote name: %q"
#~ msgstr "unbekannter entfernter Instanz Name: %q"

#, fuzzy
#~ msgid "unknown transport type: %s"
#~ msgstr "unbekannter entfernter Instanz Name: %q"

#~ msgid "cannot resolve unix socket address: %v"
#~ msgstr "kann unix Socket Adresse %v nicht auflösen"

#, fuzzy
#~ msgid "unknown group %s"
#~ msgstr "unbekannter entfernter Instanz Name: %q"

#, fuzzy
#~ msgid "Information about remotes not yet supported"
#~ msgstr ""
#~ "Informationen über entfernte Instanzen wird noch nicht unterstützt\n"

#~ msgid "Unknown image command %s"
#~ msgstr "Unbekannter Befehl %s für Abbild"

#~ msgid "Unknown remote subcommand %s"
#~ msgstr "Unbekannter Unterbefehl %s für entfernte Instanz"

#~ msgid "Unkonwn config trust command %s"
#~ msgstr "Unbekannter config Befehl %s"

#, fuzzy
#~ msgid "YAML parse error %v"
#~ msgstr "YAML Analyse Fehler %v\n"

#~ msgid "invalid argument %s"
#~ msgstr "ungültiges Argument %s"

#, fuzzy
#~ msgid "unknown profile cmd %s"
#~ msgstr "Unbekannter Befehl %s für Abbild"

#, fuzzy
#~ msgid "Publish to remote server is not supported yet"
#~ msgstr ""
#~ "Anzeigen von Informationen über entfernte Instanzen wird noch nicht "
#~ "unterstützt\n"

#, fuzzy
#~ msgid "Use an alternative config path."
#~ msgstr "Alternatives config Verzeichnis."
#, fuzzy
#~ msgid ""
#~ "error: %v\n"
#~ "%s\n"
#~ msgstr ""
#~ "Fehler: %v\n"
#~ "%s"

#, fuzzy
#~ msgid ""
#~ "lxc init [remote:] [remote:][] [--ephemeral|-e] [--profile|-"
#~ "p ...]\n"
#~ "\n"
#~ "Initializes a container using the specified image and name.\n"
#~ "\n"
#~ "Not specifying -p will result in the default profile.\n"
#~ "Specifying \"-p\" with no argument will result in no profile.\n"
#~ "\n"
#~ "Example:\n"
#~ "lxc init ubuntu u1\n"
#~ msgstr ""
#~ "lxc init [] [--ephemeral|-e] [--profile|-p ...]\n"
#~ "\n"
#~ "Initialisiert einen Container mit Namen von dem angegebenen Abbild .\n"
#~ "\n"
#~ "Ohne den -p Parameter wird das default Profil benutzt.\n"
#~ "Wird -p ohne Argument angegeben, wird kein Profil verwendet\n"
#~ "\n"
#~ "Beispiel:\n"
#~ "lxc init ubuntu u1\n"

#~ msgid ""
#~ "Changing the name of a running container during copy isn't supported."
#~ msgstr ""
#~ "Den Namen eines laufenden Containers beim kopieren zu ändern wird nicht "
#~ "unterstützt."

#~ msgid "non-http remotes are not supported for migration right now"
#~ msgstr ""
#~ "die Migration an entfernte Instanzen wird zur Zeit nur über http "
#~ "unterstützt."

#~ msgid "Cannot connect to unix socket at %s Is the server running?\n"
#~ msgstr ""
#~ "Keine Verbindung zum unix Socket unter %s möglich. Läuft der Server?\n"

#~ msgid "Show for remotes is not yet supported\n"
#~ msgstr ""
#~ "Anzeigen von Informationen über entfernte Instanzen wird noch nicht "
#~ "unterstützt\n"
lxd-2.0.2/po/fr.po000066400000000000000000001135721272140510300136720ustar00rootroot00000000000000# French translation for LXD
# Copyright (C) 2015 - LXD contributors
# This file is distributed under the same license as LXD.
# Stéphane Graber \n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"

#: lxc/info.go:140
msgid " Disk usage:"
msgstr ""

#: lxc/info.go:163
msgid " Memory usage:"
msgstr ""

#: lxc/info.go:180
msgid " Network usage:"
msgstr ""

#: lxc/config.go:37
msgid ""
"### This is a yaml representation of the configuration.\n"
"### Any line starting with a '# will be ignored.\n"
"###\n"
"### A sample configuration looks like:\n"
"### name: container1\n"
"### profiles:\n"
"### - default\n"
"### config:\n"
"### volatile.eth0.hwaddr: 00:16:3e:e9:f8:7f\n"
"### devices:\n"
"### homedir:\n"
"### path: /extra\n"
"### source: /home/user\n"
"### type: disk\n"
"### ephemeral: false\n"
"###\n"
"### Note that the name is shown but cannot be changed"
msgstr ""

#: lxc/image.go:83
msgid ""
"### This is a yaml representation of the image properties.\n"
"### Any line starting with a '# will be ignored.\n"
"###\n"
"### Each property is represented by a single line:\n"
"### An example would be:\n"
"### description: My custom image"
msgstr ""

#: lxc/profile.go:27
msgid ""
"### This is a yaml representation of the profile.\n"
"### Any line starting with a '# will be ignored.\n"
"###\n"
"### A profile consists of a set of configuration items followed by a set of\n"
"### devices.\n"
"###\n"
"### An example would look like:\n"
"### name: onenic\n"
"### config:\n"
"### raw.lxc: lxc.aa_profile=unconfined\n"
"### devices:\n"
"### eth0:\n"
"### nictype: bridged\n"
"### parent: lxdbr0\n"
"### type: nic\n"
"###\n"
"### Note that the name is shown but cannot be changed"
msgstr ""

#: lxc/image.go:583
#, c-format
msgid "%s (%d more)"
msgstr ""

#: lxc/snapshot.go:61
#, fuzzy
msgid "'/' not allowed in snapshot name"
msgstr "'/' n'est pas autorisé dans le nom d'un instantané (snapshot)\n"

#: lxc/profile.go:251
msgid "(none)"
msgstr ""

#: lxc/image.go:604 lxc/image.go:633
msgid "ALIAS"
msgstr ""

#: lxc/image.go:608
msgid "ARCH"
msgstr ""

#:
lxc/list.go:378
msgid "ARCHITECTURE"
msgstr ""

#: lxc/remote.go:53
msgid "Accept certificate"
msgstr ""

#: lxc/remote.go:256
#, c-format
msgid "Admin password for %s: "
msgstr "Mot de passe administrateur pour %s: "

#: lxc/image.go:347
msgid "Aliases:"
msgstr ""

#: lxc/exec.go:54
msgid "An environment variable of the form HOME=/home/foo"
msgstr ""

#: lxc/image.go:330 lxc/info.go:90
#, c-format
msgid "Architecture: %s"
msgstr ""

#: lxc/image.go:351
#, c-format
msgid "Auto update: %s"
msgstr ""

#: lxc/help.go:49
msgid "Available commands:"
msgstr ""

#: lxc/info.go:172
msgid "Bytes received"
msgstr ""

#: lxc/info.go:173
msgid "Bytes sent"
msgstr ""

#: lxc/config.go:270
msgid "COMMON NAME"
msgstr ""

#: lxc/list.go:379
msgid "CREATED AT"
msgstr ""

#: lxc/config.go:114
#, c-format
msgid "Can't read from stdin: %s"
msgstr ""

#: lxc/config.go:127 lxc/config.go:160 lxc/config.go:182
#, c-format
msgid "Can't unset key '%s', it's not currently set."
msgstr ""

#: lxc/profile.go:417
msgid "Cannot provide container name to list"
msgstr ""

#: lxc/remote.go:206
#, fuzzy, c-format
msgid "Certificate fingerprint: %x"
msgstr "Empreinte du certificat: %x\n"

#: lxc/action.go:28
#, fuzzy, c-format
msgid ""
"Changes state of one or more containers to %s.\n"
"\n"
"lxc %s [...]"
msgstr "Change l'état du conteneur à %s.\n"

#: lxc/remote.go:279
msgid "Client certificate stored at server: "
msgstr "Certificat client enregistré avec le serveur: "

#: lxc/list.go:99 lxc/list.go:100
msgid "Columns"
msgstr ""

#: lxc/init.go:134 lxc/init.go:135 lxc/launch.go:40 lxc/launch.go:41
msgid "Config key/value to apply to the new container"
msgstr ""

#: lxc/config.go:500 lxc/config.go:565 lxc/image.go:687 lxc/profile.go:215
#, fuzzy, c-format
msgid "Config parsing error: %s"
msgstr "erreur: %v\n"

#: lxc/main.go:37
msgid "Connection refused; is LXD running?"
msgstr "" #: lxc/publish.go:59 msgid "Container name is mandatory" msgstr "" #: lxc/init.go:210 #, c-format msgid "Container name is: %s" msgstr "" #: lxc/publish.go:141 lxc/publish.go:156 #, fuzzy, c-format msgid "Container published with fingerprint: %s" msgstr "Empreinte du certificat: % x\n" #: lxc/image.go:155 msgid "Copy aliases from source" msgstr "" #: lxc/copy.go:22 msgid "" "Copy containers within or in between lxd instances.\n" "\n" "lxc copy [remote:] [remote:] [--" "ephemeral|e]" msgstr "" #: lxc/image.go:268 #, c-format msgid "Copying the image: %s" msgstr "" #: lxc/remote.go:221 msgid "Could not create server cert dir" msgstr "Le dossier de stockage des certificats serveurs n'a pas pรป รชtre crรฉรฉ" #: lxc/snapshot.go:21 msgid "" "Create a read-only snapshot of a container.\n" "\n" "lxc snapshot [remote:] [--stateful]\n" "\n" "Creates a snapshot of the container (optionally with the container's memory\n" "state). When --stateful is used, LXD attempts to checkpoint the container's\n" "running state, including process memory state, TCP connections, etc. so that " "it\n" "can be restored (via lxc restore) at a later time (although some things, e." "g.\n" "TCP connections after the TCP timeout window has expired, may not be " "restored\n" "successfully).\n" "\n" "Example:\n" "lxc snapshot u1 snap0" msgstr "" #: lxc/image.go:335 lxc/info.go:92 #, c-format msgid "Created: %s" msgstr "" #: lxc/init.go:177 lxc/launch.go:118 #, c-format msgid "Creating %s" msgstr "" #: lxc/init.go:175 msgid "Creating the container" msgstr "" #: lxc/image.go:607 lxc/image.go:635 msgid "DESCRIPTION" msgstr "" #: lxc/delete.go:25 msgid "" "Delete containers or container snapshots.\n" "\n" "lxc delete [remote:][/] [remote:][[/" "]...]\n" "\n" "Destroy containers or snapshots with any attached data (configuration, " "snapshots, ...)." 
msgstr "" #: lxc/config.go:617 #, c-format msgid "Device %s added to %s" msgstr "" #: lxc/config.go:804 #, c-format msgid "Device %s removed from %s" msgstr "" #: lxc/list.go:462 msgid "EPHEMERAL" msgstr "" #: lxc/config.go:272 msgid "EXPIRY DATE" msgstr "" #: lxc/main.go:55 msgid "Enables debug mode." msgstr "Active le mode de débogage." #: lxc/main.go:54 msgid "Enables verbose mode." msgstr "Active le mode verbeux." #: lxc/help.go:68 msgid "Environment:" msgstr "" #: lxc/copy.go:29 lxc/copy.go:30 lxc/init.go:138 lxc/init.go:139 #: lxc/launch.go:44 lxc/launch.go:45 msgid "Ephemeral container" msgstr "" #: lxc/monitor.go:56 msgid "Event type to listen for" msgstr "" #: lxc/exec.go:45 #, fuzzy msgid "" "Execute the specified command in a container.\n" "\n" "lxc exec [remote:]container [--mode=auto|interactive|non-interactive] [--env " "EDITOR=/usr/bin/vim]... \n" "\n" "Mode defaults to non-interactive, interactive mode is selected if both stdin " "AND stdout are terminals (stderr is ignored)." msgstr "Exécute la commande spécifiée dans un conteneur.\n" #: lxc/image.go:339 #, c-format msgid "Expires: %s" msgstr "" #: lxc/image.go:341 msgid "Expires: never" msgstr "" #: lxc/config.go:269 lxc/image.go:605 lxc/image.go:634 msgid "FINGERPRINT" msgstr "" #: lxc/list.go:102 msgid "Fast mode (same as --columns=nsacPt" msgstr "" #: lxc/image.go:328 #, fuzzy, c-format msgid "Fingerprint: %s" msgstr "Empreinte du certificat: % x\n" #: lxc/finger.go:17 #, fuzzy msgid "" "Fingers the LXD instance to check if it is up and working.\n" "\n" "lxc finger " msgstr "Contacte LXD pour voir s'il est fonctionnel.\n" #: lxc/action.go:37 msgid "Force the container to shutdown." msgstr "Force l'arrêt du conteneur." #: lxc/delete.go:34 lxc/delete.go:35 msgid "Force the removal of stopped containers." msgstr "" #: lxc/main.go:56 msgid "Force using the local unix socket."
msgstr "" #: lxc/list.go:101 msgid "Format" msgstr "" #: lxc/main.go:138 #, fuzzy msgid "Generating a client certificate. This may take a minute..." msgstr "Génération d'un certificat client. Ceci peut prendre une minute...\n" #: lxc/list.go:376 msgid "IPV4" msgstr "" #: lxc/list.go:377 msgid "IPV6" msgstr "" #: lxc/config.go:271 msgid "ISSUE DATE" msgstr "" #: lxc/main.go:146 msgid "" "If this is your first time using LXD, you should also run: sudo lxd init" msgstr "" #: lxc/main.go:57 msgid "Ignore aliases when determining what command to run." msgstr "" #: lxc/action.go:40 #, fuzzy msgid "Ignore the container state (only for start)." msgstr "Force l'arrêt du conteneur." #: lxc/image.go:273 msgid "Image copied successfully!" msgstr "" #: lxc/image.go:419 #, fuzzy, c-format msgid "Image imported with fingerprint: %s" msgstr "Empreinte du certificat: % x\n" #: lxc/init.go:73 msgid "" "Initialize a container from a particular image.\n" "\n" "lxc init [remote:] [remote:][] [--ephemeral|-e] [--profile|-p " "...] [--config|-c ...]\n" "\n" "Initializes a container using the specified image and name.\n" "\n" "Not specifying -p will result in the default profile.\n" "Specifying \"-p\" with no argument will result in no profile.\n" "\n" "Example:\n" "lxc init ubuntu u1" msgstr "" #: lxc/remote.go:122 #, c-format msgid "Invalid URL scheme \"%s\" in \"%s\"" msgstr "" #: lxc/init.go:30 lxc/init.go:35 #, fuzzy msgid "Invalid configuration key" msgstr "Gérer la configuration.\n" #: lxc/file.go:190 #, c-format msgid "Invalid source %s" msgstr "Source invalide %s" #: lxc/file.go:57 #, c-format msgid "Invalid target %s" msgstr "Destination invalide %s" #: lxc/info.go:121 msgid "Ips:" msgstr "" #: lxc/image.go:156 msgid "Keep the image up to date after initial copy" msgstr "" #: lxc/main.go:35 msgid "LXD socket not found; is LXD running?"
msgstr "" #: lxc/launch.go:22 msgid "" "Launch a container from a particular image.\n" "\n" "lxc launch [remote:] [remote:][] [--ephemeral|-e] [--profile|-p " "...] [--config|-c ...]\n" "\n" "Launches a container using the specified image and name.\n" "\n" "Not specifying -p will result in the default profile.\n" "Specifying \"-p\" with no argument will result in no profile.\n" "\n" "Example:\n" "lxc launch ubuntu:16.04 u1" msgstr "" #: lxc/info.go:25 msgid "" "List information on LXD servers and containers.\n" "\n" "For a container:\n" " lxc info [:]container [--show-log]\n" "\n" "For a server:\n" " lxc info [:]" msgstr "" #: lxc/list.go:67 msgid "" "Lists the available resources.\n" "\n" "lxc list [resource] [filters] [--format table|json] [-c columns] [--fast]\n" "\n" "The filters are:\n" "* A single keyword like \"web\" which will list any container with a name " "starting by \"web\".\n" "* A regular expression on the container name. (e.g. .*web.*01$)\n" "* A key/value pair referring to a configuration item. For those, the " "namespace can be abreviated to the smallest unambiguous identifier:\n" " * \"user.blah=abc\" will list all containers with the \"blah\" user " "property set to \"abc\".\n" " * \"u.blah=abc\" will do the same\n" " * \"security.privileged=1\" will list all privileged containers\n" " * \"s.privileged=1\" will do the same\n" "* A regular expression matching a configuration item or its value. (e.g. 
" "volatile.eth0.hwaddr=00:16:3e:.*)\n" "\n" "Columns for table format are:\n" "* 4 - IPv4 address\n" "* 6 - IPv6 address\n" "* a - architecture\n" "* c - creation date\n" "* n - name\n" "* p - pid of container init process\n" "* P - profiles\n" "* s - state\n" "* S - number of snapshots\n" "* t - type (persistent or ephemeral)\n" "\n" "Default column layout: ns46tS\n" "Fast column layout: nsacPt" msgstr "" #: lxc/info.go:225 msgid "Log:" msgstr "" #: lxc/image.go:154 msgid "Make image public" msgstr "" #: lxc/publish.go:32 msgid "Make the image public" msgstr "" #: lxc/profile.go:48 msgid "" "Manage configuration profiles.\n" "\n" "lxc profile list [filters] List available profiles.\n" "lxc profile show Show details of a profile.\n" "lxc profile create Create a profile.\n" "lxc profile copy Copy the profile to the " "specified remote.\n" "lxc profile get Get profile configuration.\n" "lxc profile set Set profile configuration.\n" "lxc profile delete Delete a profile.\n" "lxc profile edit \n" " Edit profile, either by launching external editor or reading STDIN.\n" " Example: lxc profile edit # launch editor\n" " cat profile.yml | lxc profile edit # read from " "profile.yml\n" "\n" "lxc profile apply \n" " Apply a comma-separated list of profiles to a container, in order.\n" " All profiles passed in this call (and only those) will be applied\n" " to the specified container.\n" " Example: lxc profile apply foo default,bar # Apply default and bar\n" " lxc profile apply foo default # Only default is active\n" " lxc profile apply '' # no profiles are applied anymore\n" " lxc profile apply bar,default # Apply default second now\n" "lxc profile apply-add \n" "lxc profile apply-remove \n" "\n" "Devices:\n" "lxc profile device list List " "devices in the given profile.\n" "lxc profile device show Show " "full device details in the given profile.\n" "lxc profile device remove Remove a " "device from a profile.\n" "lxc profile device get <[remote:]profile> Get a " "device 
property.\n" "lxc profile device set <[remote:]profile> Set a " "device property.\n" "lxc profile device unset <[remote:]profile> Unset a " "device property.\n" "lxc profile device add " "[key=value]...\n" " Add a profile device, such as a disk or a nic, to the containers\n" " using the specified profile." msgstr "" #: lxc/config.go:58 msgid "" "Manage configuration.\n" "\n" "lxc config device add <[remote:]container> [key=value]... " "Add a device to a container.\n" "lxc config device get <[remote:]container> " "Get a device property.\n" "lxc config device set <[remote:]container> " "Set a device property.\n" "lxc config device unset <[remote:]container> " "Unset a device property.\n" "lxc config device list <[remote:]container> " "List devices for container.\n" "lxc config device show <[remote:]container> " "Show full device details for container.\n" "lxc config device remove <[remote:]container> " "Remove device from container.\n" "\n" "lxc config get [remote:][container] " "Get container or server configuration key.\n" "lxc config set [remote:][container] " "Set container or server configuration key.\n" "lxc config unset [remote:][container] " "Unset container or server configuration key.\n" "lxc config show [remote:][container] [--expanded] " "Show container or server configuration.\n" "lxc config edit [remote:][container] " "Edit container or server configuration in external editor.\n" " Edit configuration, either by launching external editor or reading " "STDIN.\n" " Example: lxc config edit # launch editor\n" " cat config.yml | lxc config edit # read from config." 
"yml\n" "\n" "lxc config trust list [remote] " "List all trusted certs.\n" "lxc config trust add [remote] " "Add certfile.crt to trusted hosts.\n" "lxc config trust remove [remote] [hostname|fingerprint] " "Remove the cert from trusted hosts.\n" "\n" "Examples:\n" "To mount host's /share/c1 onto /opt in the container:\n" " lxc config device add [remote:]container1 disk source=/" "share/c1 path=opt\n" "\n" "To set an lxc config value:\n" " lxc config set [remote:] raw.lxc 'lxc.aa_allow_incomplete = " "1'\n" "\n" "To listen on IPv4 and IPv6 port 8443 (you can omit the 8443 its the " "default):\n" " lxc config set core.https_address [::]:8443\n" "\n" "To set the server trust password:\n" " lxc config set core.trust_password blah" msgstr "" #: lxc/file.go:32 msgid "" "Manage files on a container.\n" "\n" "lxc file pull [...] \n" "lxc file push [--uid=UID] [--gid=GID] [--mode=MODE] [...] " "\n" "lxc file edit \n" "\n" " in the case of pull, in the case of push and in the " "case of edit are /" msgstr "" #: lxc/remote.go:39 msgid "" "Manage remote LXD servers.\n" "\n" "lxc remote add [--accept-certificate] [--password=PASSWORD]\n" " [--public] [--protocol=PROTOCOL] " "Add the remote at .\n" "lxc remote remove " "Remove the remote .\n" "lxc remote list " "List all remotes.\n" "lxc remote rename " "Rename remote to .\n" "lxc remote set-url " "Update 's url to .\n" "lxc remote set-default " "Set the default remote.\n" "lxc remote get-default " "Print the default remote." msgstr "" #: lxc/image.go:93 msgid "" "Manipulate container images.\n" "\n" "In LXD containers are created from images. 
Those images were themselves\n" "either generated from an existing container or downloaded from an image\n" "server.\n" "\n" "When using remote images, LXD will automatically cache images for you\n" "and remove them upon expiration.\n" "\n" "The image unique identifier is the hash (sha-256) of its representation\n" "as a compressed tarball (or for split images, the concatenation of the\n" "metadata and rootfs tarballs).\n" "\n" "Images can be referenced by their full hash, shortest unique partial\n" "hash or alias name (if one is set).\n" "\n" "\n" "lxc image import [rootfs tarball|URL] [remote:] [--public] [--" "created-at=ISO-8601] [--expires-at=ISO-8601] [--fingerprint=FINGERPRINT] [--" "alias=ALIAS].. [prop=value]\n" " Import an image tarball (or tarballs) into the LXD image store.\n" "\n" "lxc image copy [remote:] : [--alias=ALIAS].. [--copy-aliases] " "[--public] [--auto-update]\n" " Copy an image from one LXD daemon to another over the network.\n" "\n" " The auto-update flag instructs the server to keep this image up to\n" " date. It requires the source to be an alias and for it to be public.\n" "\n" "lxc image delete [remote:]\n" " Delete an image from the LXD image store.\n" "\n" "lxc image export [remote:]\n" " Export an image from the LXD image store into a distributable tarball.\n" "\n" "lxc image info [remote:]\n" " Print everything LXD knows about a given image.\n" "\n" "lxc image list [remote:] [filter]\n" " List images in the LXD image store. 
Filters may be of the\n" " = form for property based filtering, or part of the image\n" " hash or part of the image alias name.\n" "\n" "lxc image show [remote:]\n" " Yaml output of the user modifiable properties of an image.\n" "\n" "lxc image edit [remote:]\n" " Edit image, either by launching external editor or reading STDIN.\n" " Example: lxc image edit # launch editor\n" " cat image.yml | lxc image edit # read from image.yml\n" "\n" "lxc image alias create [remote:] \n" " Create a new alias for an existing image.\n" "\n" "lxc image alias delete [remote:]\n" " Delete an alias.\n" "\n" "lxc image alias list [remote:] [filter]\n" " List the aliases. Filters may be part of the image hash or part of the " "image alias name.\n" msgstr "" #: lxc/info.go:147 msgid "Memory (current)" msgstr "" #: lxc/info.go:151 msgid "Memory (peak)" msgstr "" #: lxc/help.go:86 msgid "Missing summary." msgstr "Sommaire manquant." #: lxc/monitor.go:41 msgid "" "Monitor activity on the LXD server.\n" "\n" "lxc monitor [remote:] [--type=TYPE...]\n" "\n" "Connects to the monitoring interface of the specified LXD server.\n" "\n" "By default will listen to all message types.\n" "Specific types to listen to can be specified with --type.\n" "\n" "Example:\n" "lxc monitor --type=logging" msgstr "" #: lxc/file.go:178 msgid "More than one file to download, but target is not a directory" msgstr "" "Plusieurs fichiers ร  tรฉlรฉcharger mais la destination n'est pas un dossier" #: lxc/move.go:17 msgid "" "Move containers within or in between lxd instances.\n" "\n" "lxc move [remote:] [remote:]\n" " Move a container between two hosts, renaming it if destination name " "differs.\n" "\n" "lxc move \n" " Rename a local container.\n" msgstr "" #: lxc/action.go:63 msgid "Must supply container name for: " msgstr "" #: lxc/list.go:380 lxc/remote.go:363 msgid "NAME" msgstr "" #: lxc/remote.go:337 lxc/remote.go:342 msgid "NO" msgstr "" #: lxc/info.go:89 #, c-format msgid "Name: %s" msgstr "" #: 
lxc/image.go:157 lxc/publish.go:33 msgid "New alias to define at target" msgstr "" #: lxc/config.go:281 #, fuzzy msgid "No certificate provided to add" msgstr "Un certificat n'a pas été fourni" #: lxc/config.go:304 msgid "No fingerprint specified." msgstr "Aucune empreinte n'a été spécifiée." #: lxc/remote.go:107 msgid "Only https URLs are supported for simplestreams" msgstr "" #: lxc/image.go:411 msgid "Only https:// is supported for remote image import." msgstr "" #: lxc/help.go:63 lxc/main.go:122 #, fuzzy msgid "Options:" msgstr "Opération %s" #: lxc/image.go:506 #, c-format msgid "Output is in %s" msgstr "" #: lxc/exec.go:55 msgid "Override the terminal mode (auto, interactive or non-interactive)" msgstr "" #: lxc/list.go:464 msgid "PERSISTENT" msgstr "" #: lxc/list.go:381 msgid "PID" msgstr "" #: lxc/list.go:382 msgid "PROFILES" msgstr "" #: lxc/remote.go:365 msgid "PROTOCOL" msgstr "" #: lxc/image.go:606 lxc/remote.go:366 msgid "PUBLIC" msgstr "" #: lxc/info.go:174 msgid "Packets received" msgstr "" #: lxc/info.go:175 msgid "Packets sent" msgstr "" #: lxc/help.go:69 #, fuzzy msgid "Path to an alternate client configuration directory." msgstr "Dossier de configuration alternatif." #: lxc/help.go:70 #, fuzzy msgid "Path to an alternate server directory." msgstr "Dossier de configuration alternatif." #: lxc/main.go:39 msgid "Permisson denied, are you in the lxd group?" msgstr "" #: lxc/info.go:103 #, c-format msgid "Pid: %d" msgstr "" #: lxc/help.go:25 #, fuzzy msgid "" "Presents details on how to use LXD.\n" "\n" "lxd help [--all]" msgstr "Explique comment utiliser LXD.\n" #: lxc/profile.go:216 msgid "Press enter to open the editor again" msgstr "" #: lxc/config.go:501 lxc/config.go:566 lxc/image.go:688 msgid "Press enter to start the editor again" msgstr "" #: lxc/help.go:65 msgid "Print debug information." msgstr "" #: lxc/help.go:64 msgid "Print less common commands." msgstr "" #: lxc/help.go:66 msgid "Print verbose information."
msgstr "" #: lxc/version.go:18 #, fuzzy msgid "" "Prints the version number of this client tool.\n" "\n" "lxc version" msgstr "Montre le numรฉro de version de LXD.\n" #: lxc/info.go:127 #, fuzzy, c-format msgid "Processes: %d" msgstr "Mauvaise URL pour le conteneur %s" #: lxc/profile.go:272 #, fuzzy, c-format msgid "Profile %s added to %s" msgstr "Mauvaise URL pour le conteneur %s" #: lxc/profile.go:167 #, c-format msgid "Profile %s created" msgstr "" #: lxc/profile.go:237 #, c-format msgid "Profile %s deleted" msgstr "" #: lxc/profile.go:303 #, c-format msgid "Profile %s removed from %s" msgstr "" #: lxc/init.go:136 lxc/init.go:137 lxc/launch.go:42 lxc/launch.go:43 msgid "Profile to apply to the new container" msgstr "" #: lxc/profile.go:253 #, c-format msgid "Profiles %s applied to %s" msgstr "" #: lxc/info.go:101 #, fuzzy, c-format msgid "Profiles: %s" msgstr "Mauvaise URL pour le conteneur %s" #: lxc/image.go:343 msgid "Properties:" msgstr "" #: lxc/remote.go:56 msgid "Public image server" msgstr "" #: lxc/image.go:331 #, c-format msgid "Public: %s" msgstr "" #: lxc/publish.go:25 msgid "" "Publish containers as images.\n" "\n" "lxc publish [remote:]container [remote:] [--alias=ALIAS]... [prop-key=prop-" "value]..." msgstr "" #: lxc/remote.go:54 msgid "Remote admin password" msgstr "" #: lxc/delete.go:42 #, c-format msgid "Remove %s (yes/no): " msgstr "" #: lxc/delete.go:36 lxc/delete.go:37 msgid "Require user confirmation." 
msgstr "" #: lxc/info.go:124 msgid "Resources:" msgstr "" #: lxc/init.go:247 #, c-format msgid "Retrieving image: %s" msgstr "" #: lxc/image.go:609 msgid "SIZE" msgstr "" #: lxc/list.go:383 msgid "SNAPSHOTS" msgstr "" #: lxc/list.go:384 msgid "STATE" msgstr "" #: lxc/remote.go:367 msgid "STATIC" msgstr "" #: lxc/remote.go:214 msgid "Server certificate NACKed by user" msgstr "Le certificat serveur a été rejeté par l'utilisateur" #: lxc/remote.go:276 msgid "Server doesn't trust us after adding our cert" msgstr "Identification refusée après l'ajout du certificat client" #: lxc/remote.go:55 msgid "Server protocol (lxd or simplestreams)" msgstr "" #: lxc/restore.go:21 msgid "" "Set the current state of a resource back to a snapshot.\n" "\n" "lxc restore [remote:] [--stateful]\n" "\n" "Restores a container from a snapshot (optionally with running state, see\n" "snapshot help for details).\n" "\n" "For example:\n" "lxc snapshot u1 snap0 # create the snapshot\n" "lxc restore u1 snap0 # restore the snapshot" msgstr "" #: lxc/file.go:44 msgid "Set the file's gid on push" msgstr "Définit le gid lors de l'envoi" #: lxc/file.go:45 msgid "Set the file's perms on push" msgstr "Définit les permissions lors de l'envoi" #: lxc/file.go:43 msgid "Set the file's uid on push" msgstr "Définit le uid lors de l'envoi" #: lxc/help.go:32 msgid "Show all commands (not just interesting ones)" msgstr "Affiche toutes les commandes (pas seulement les intéressantes)" #: lxc/info.go:36 msgid "Show the container's last 100 log lines?" msgstr "" #: lxc/image.go:329 #, c-format msgid "Size: %.2fMB" msgstr "" #: lxc/info.go:194 msgid "Snapshots:" msgstr "" #: lxc/image.go:353 msgid "Source:" msgstr "" #: lxc/launch.go:124 #, c-format msgid "Starting %s" msgstr "" #: lxc/info.go:95 #, c-format msgid "Status: %s" msgstr "" #: lxc/publish.go:34 lxc/publish.go:35 msgid "Stop the container if currently running" msgstr "" #: lxc/delete.go:106 lxc/publish.go:111 msgid "Stopping container failed!"
msgstr "L'arrรชt du conteneur a รฉchouรฉ!" #: lxc/action.go:39 #, fuzzy msgid "Store the container state (only for stop)." msgstr "Force l'arrรชt du conteneur." #: lxc/info.go:155 msgid "Swap (current)" msgstr "" #: lxc/info.go:159 msgid "Swap (peak)" msgstr "" #: lxc/list.go:385 msgid "TYPE" msgstr "" #: lxc/delete.go:92 msgid "The container is currently running, stop it first or pass --force." msgstr "" #: lxc/publish.go:89 msgid "" "The container is currently running. Use --force to have it stopped and " "restarted." msgstr "" #: lxc/config.go:645 lxc/config.go:657 lxc/config.go:690 lxc/config.go:708 #: lxc/config.go:746 lxc/config.go:764 #, fuzzy msgid "The device doesn't exist" msgstr "le serveur distant %s n'existe pas" #: lxc/init.go:277 #, c-format msgid "The local image '%s' couldn't be found, trying '%s:' instead." msgstr "" #: lxc/publish.go:62 msgid "There is no \"image name\". Did you want an alias?" msgstr "" #: lxc/action.go:36 msgid "Time to wait for the container before killing it." msgstr "Temps d'attente avant de tuer le conteneur." 
#: lxc/image.go:332 msgid "Timestamps:" msgstr "" #: lxc/main.go:147 msgid "To start your first container, try: lxc launch ubuntu:16.04" msgstr "" #: lxc/image.go:402 #, c-format msgid "Transferring image: %d%%" msgstr "" #: lxc/action.go:93 lxc/launch.go:132 #, c-format msgid "Try `lxc info --show-log %s` for more info" msgstr "" #: lxc/info.go:97 msgid "Type: ephemeral" msgstr "" #: lxc/info.go:99 msgid "Type: persistent" msgstr "" #: lxc/image.go:610 msgid "UPLOAD DATE" msgstr "" #: lxc/remote.go:364 msgid "URL" msgstr "" #: lxc/remote.go:82 msgid "Unable to read remote TLS certificate" msgstr "" #: lxc/image.go:337 #, c-format msgid "Uploaded: %s" msgstr "" #: lxc/main.go:122 #, fuzzy, c-format msgid "Usage: %s" msgstr "" "Utilisation: %s\n" "\n" "Options:\n" "\n" #: lxc/help.go:48 #, fuzzy msgid "Usage: lxc [subcommand] [options]" msgstr "" "Utilisation: lxc [sous-commande] [options]\n" "Commandes disponibles:\n" #: lxc/delete.go:46 msgid "User aborted delete operation." msgstr "" #: lxc/restore.go:35 #, fuzzy msgid "" "Whether or not to restore the container's running state from snapshot (if " "available)" msgstr "" "Est-ce que l'état de fonctionnement du conteneur doit être inclus dans " "l'instantané (snapshot)" #: lxc/snapshot.go:38 msgid "Whether or not to snapshot the container's running state" msgstr "" "Est-ce que l'état de fonctionnement du conteneur doit être inclus dans " "l'instantané (snapshot)" #: lxc/config.go:33 msgid "Whether to show the expanded configuration" msgstr "" #: lxc/remote.go:339 lxc/remote.go:344 msgid "YES" msgstr "" #: lxc/main.go:66 msgid "`lxc config profile` is deprecated, please use `lxc profile`" msgstr "" #: lxc/launch.go:111 #, fuzzy msgid "bad number of things scanned from image, container or snapshot" msgstr "nombre de propriété invalide pour la ressource" #: lxc/action.go:89 msgid "bad result type from action" msgstr "mauvais type de réponse pour l'action!"
#: lxc/copy.go:78 msgid "can't copy to the same container name" msgstr "" #: lxc/remote.go:327 msgid "can't remove the default remote" msgstr "" #: lxc/remote.go:353 msgid "default" msgstr "" #: lxc/init.go:200 lxc/init.go:205 lxc/launch.go:95 lxc/launch.go:100 #, fuzzy msgid "didn't get any affected image, container or snapshot from server" msgstr "N'a pas pu obtenir de ressource du serveur" #: lxc/image.go:323 msgid "disabled" msgstr "" #: lxc/image.go:325 msgid "enabled" msgstr "" #: lxc/main.go:25 lxc/main.go:159 #, fuzzy, c-format msgid "error: %v" msgstr "erreur: %v\n" #: lxc/help.go:40 lxc/main.go:117 #, fuzzy, c-format msgid "error: unknown command: %s" msgstr "erreur: commande inconnue: %s\n" #: lxc/launch.go:115 msgid "got bad version" msgstr "reçu une version invalide" #: lxc/image.go:318 lxc/image.go:586 msgid "no" msgstr "" #: lxc/copy.go:101 msgid "not all the profiles from the source exist on the target" msgstr "" #: lxc/remote.go:207 #, fuzzy msgid "ok (y/n)?" msgstr "ok (y/n)?"
#: lxc/main.go:266 lxc/main.go:270 #, c-format msgid "processing aliases failed %s\n" msgstr "" #: lxc/remote.go:389 #, c-format msgid "remote %s already exists" msgstr "le serveur distant %s existe déjà" #: lxc/remote.go:319 lxc/remote.go:381 lxc/remote.go:416 lxc/remote.go:432 #, c-format msgid "remote %s doesn't exist" msgstr "le serveur distant %s n'existe pas" #: lxc/remote.go:302 #, c-format msgid "remote %s exists as <%s>" msgstr "le serveur distant %s existe en tant que <%s>" #: lxc/remote.go:323 lxc/remote.go:385 lxc/remote.go:420 #, c-format msgid "remote %s is static and cannot be modified" msgstr "" #: lxc/info.go:205 msgid "stateful" msgstr "" #: lxc/info.go:207 msgid "stateless" msgstr "" #: lxc/info.go:201 #, c-format msgid "taken at %s" msgstr "" #: lxc/exec.go:166 msgid "unreachable return reached" msgstr "Un retour inaccessible a été atteint" #: lxc/main.go:199 msgid "wrong number of subcommand arguments" msgstr "nombre d'arguments incorrect pour la sous-commande" #: lxc/delete.go:45 lxc/image.go:320 lxc/image.go:590 msgid "yes" msgstr "" #: lxc/copy.go:38 msgid "you must specify a source container name" msgstr "" #, fuzzy #~ msgid "Bad image property: %s" #~ msgstr "(Image invalide: %s\n" #, fuzzy #~ msgid "" #~ "Create a read-only snapshot of a container.\n" #~ "\n" #~ "lxc snapshot [remote:] [--stateful]" #~ msgstr "Prend un instantané (snapshot) en lecture seule d'un conteneur.\n" #~ msgid "No certificate on this connection" #~ msgstr "Aucun certificat pour cette connexion" #, fuzzy #~ msgid "" #~ "Set the current state of a resource back to its state at the time the " #~ "snapshot was created.\n" #~ "\n" #~ "lxc restore [remote:] [--stateful]" #~ msgstr "Prend un instantané (snapshot) en lecture seule d'un conteneur.\n" #~ msgid "api version mismatch: mine: %q, daemon: %q" #~ msgstr "Version de l'API incompatible: local: %q, distant: %q" #, fuzzy #~ msgid "bad version in profile url" #~ msgstr "version invalide dans l'URL du conteneur" #,
fuzzy #~ msgid "device already exists" #~ msgstr "le serveur distant %s existe dรฉjร " #, fuzzy #~ msgid "error." #~ msgstr "erreur: %v\n" #~ msgid "got bad op status %s" #~ msgstr "reรงu un status d'opration invalide %s" #, fuzzy #~ msgid "got bad response type, expected %s got %s" #~ msgstr "reรงu une mauvaise rรฉponse pour \"exec\"" #~ msgid "invalid wait url %s" #~ msgstr "URL d'attente invalide %s" #~ msgid "no response!" #~ msgstr "pas de rรฉponse!" #~ msgid "unknown remote name: %q" #~ msgstr "serveur distant inconnu: %q" #, fuzzy #~ msgid "unknown transport type: %s" #~ msgstr "serveur distant inconnu: %q" #~ msgid "cannot resolve unix socket address: %v" #~ msgstr "Ne peut pas rรฉsoudre l'adresse du unix socket: %v" #, fuzzy #~ msgid "unknown group %s" #~ msgstr "serveur distant inconnu: %q" #, fuzzy #~ msgid "Information about remotes not yet supported" #~ msgstr "" #~ "Il n'est pas encore possible d'obtenir de l'information sur un serveur " #~ "distant\n" #~ msgid "Unknown image command %s" #~ msgstr "Comande d'image inconnue %s" #~ msgid "Unknown remote subcommand %s" #~ msgstr "Comande de serveur distant inconnue %s" #~ msgid "Unkonwn config trust command %s" #~ msgstr "Comande de configuration de confiance inconnue %s" #, fuzzy #~ msgid "YAML parse error %v" #~ msgstr "erreur: %v\n" #~ msgid "invalid argument %s" #~ msgstr "Arguments invalides %s" #, fuzzy #~ msgid "unknown profile cmd %s" #~ msgstr "Comande de configuration inconue %s" #, fuzzy #~ msgid "Publish to remote server is not supported yet" #~ msgstr "" #~ "Il n'est pas encore possible d'obtenir de l'information sur un serveur " #~ "distant\n" #, fuzzy #~ msgid "Use an alternative config path." #~ msgstr "Dossier de configuration alternatif." 
#, fuzzy #~ msgid "" #~ "error: %v\n" #~ "%s\n" #~ msgstr "" #~ "erreur: %v\n" #~ "%s" #, fuzzy #~ msgid "Show for remotes is not yet supported\n" #~ msgstr "" #~ "Il n'est pas encore possible d'obtenir de l'information sur un serveur " #~ "distant\n" #~ msgid "(Bad alias entry: %s\n" #~ msgstr "(Alias invalide: %s\n" #~ msgid "bad container url %s" #~ msgstr "Mauvaise URL pour le conteneur %s" #~ msgid "bad version in container url" #~ msgstr "version invalide dans l'URL du conteneur" #, fuzzy #~ msgid "Ephemeral containers not yet supported\n" #~ msgstr "" #~ "Il n'est pas encore possible d'obtenir de l'information sur un serveur " #~ "distant\n" #~ msgid "(Bad image entry: %s\n" #~ msgstr "(Image invalide: %s\n" #~ msgid "Certificate already stored.\n" #~ msgstr "Le certificat a dรฉjร  รฉtรฉ enregistrรฉ.\n" #, fuzzy #~ msgid "Non-async response from delete!" #~ msgstr "Rรฉponse invalide (non-async) durant la suppression!" #, fuzzy #~ msgid "Non-async response from init!" #~ msgstr "" #~ "Rรฉponse invalide (non-async) durant la cration d'un instantanรฉ (snapshot)!" #~ msgid "Non-async response from snapshot!" #~ msgstr "" #~ "Rรฉponse invalide (non-async) durant la cration d'un instantanรฉ (snapshot)!" #~ msgid "Server certificate has changed" #~ msgstr "Le certificat serveur a changรฉ" #, fuzzy #~ msgid "Unexpected non-async response" #~ msgstr "Rรฉponse invalide (non-async) durant la suppression!" #~ msgid "bad response type from image list!" #~ msgstr "mauvais type de rรฉponse pour la liste d'image!" #~ msgid "bad response type from list!" #~ msgstr "mauvais type de rรฉponse pour la liste!" #, fuzzy #~ msgid "got non-async response!" #~ msgstr "Rรฉponse invalide (non-async) durant la suppression!" #~ msgid "got non-sync response from containers get!" #~ msgstr "Rรฉponse invalide (non-async) durant le chargement!" 
#~ msgid "Delete a container or container snapshot.\n" #~ msgstr "Supprime un conteneur ou l'instantanรฉ (snapshot) d'un conteneur.\n" #~ msgid "List information on containers.\n" #~ msgstr "Liste de l'information sur les conteneurs.\n" #~ msgid "Lists the available resources.\n" #~ msgstr "Liste des ressources disponibles.\n" #~ msgid "Manage files on a container.\n" #~ msgstr "Gรฉrer les fichiers du conteneur.\n" #~ msgid "Manage remote lxc servers.\n" #~ msgstr "Gรฉrer les serveurs distants.\n" #~ msgid "Non-async response from create!" #~ msgstr "Rรฉponse invalide (non-async) durant la cration!" #~ msgid "Only 'password' can be set currently" #~ msgstr "Seul 'password' peut รชtre configurรฉ en ce moment" #~ msgid "" #~ "lxc image import [target] [--created-at=ISO-8601] [--expires-" #~ "at=ISO-8601] [--fingerprint=HASH] [prop=value]\n" #~ msgstr "" #~ "lxc image import [destination] [--created-at=ISO-8601] [--" #~ "expires-at=ISO-8601] [--fingerprint=HASH] [proprit=valeur]\n" #~ msgid "lxc init ubuntu []\n" #~ msgstr "lxc init ubuntu []\n" #~ msgid "lxc launch ubuntu []\n" #~ msgstr "lxc launch ubuntu []\n" lxd-2.0.2/po/ja.po000066400000000000000000001660721272140510300136610ustar00rootroot00000000000000# Japanese translation for LXD # Copyright (C) 2015 - LXD contributors # This file is distributed under the same license as LXD. # Hiroaki Nakamura , 2015. 
# msgid "" msgstr "" "Project-Id-Version: LXD\n" "Report-Msgid-Bugs-To: lxc-devel@lists.linuxcontainers.org\n" "POT-Creation-Date: 2016-04-25 14:47-0500\n" "PO-Revision-Date: 2016-04-26 14:31+0900\n" "Last-Translator: KATOH Yasufumi \n" "Language-Team: Japanese \n" "Language: \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" #: lxc/info.go:140 msgid " Disk usage:" msgstr " ใƒ‡ใ‚ฃใ‚นใ‚ฏไฝฟ็”จ้‡:" #: lxc/info.go:163 msgid " Memory usage:" msgstr " ใƒกใƒขใƒชๆถˆ่ฒป้‡:" #: lxc/info.go:180 msgid " Network usage:" msgstr " ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏไฝฟ็”จ็Šถๆณ:" #: lxc/config.go:37 msgid "" "### This is a yaml representation of the configuration.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### A sample configuration looks like:\n" "### name: container1\n" "### profiles:\n" "### - default\n" "### config:\n" "### volatile.eth0.hwaddr: 00:16:3e:e9:f8:7f\n" "### devices:\n" "### homedir:\n" "### path: /extra\n" "### source: /home/user\n" "### type: disk\n" "### ephemeral: false\n" "###\n" "### Note that the name is shown but cannot be changed" msgstr "" #: lxc/image.go:83 msgid "" "### This is a yaml representation of the image properties.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### Each property is represented by a single line:\n" "### An example would be:\n" "### description: My custom image" msgstr "" #: lxc/profile.go:27 msgid "" "### This is a yaml representation of the profile.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### A profile consists of a set of configuration items followed by a set of\n" "### devices.\n" "###\n" "### An example would look like:\n" "### name: onenic\n" "### config:\n" "### raw.lxc: lxc.aa_profile=unconfined\n" "### devices:\n" "### eth0:\n" "### nictype: bridged\n" "### parent: lxdbr0\n" "### type: nic\n" "###\n" "### Note that the name is shown but cannot be changed" msgstr "" #: lxc/image.go:583 #, c-format msgid "%s (%d 
more)" msgstr "" #: lxc/snapshot.go:61 msgid "'/' not allowed in snapshot name" msgstr "'/' ใฏใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใฎๅๅ‰ใซใฏไฝฟ็”จใงใใพใ›ใ‚“ใ€‚" #: lxc/profile.go:251 msgid "(none)" msgstr "" #: lxc/image.go:604 lxc/image.go:633 msgid "ALIAS" msgstr "" #: lxc/image.go:608 msgid "ARCH" msgstr "" #: lxc/list.go:378 msgid "ARCHITECTURE" msgstr "" #: lxc/remote.go:53 msgid "Accept certificate" msgstr "่จผๆ˜Žๆ›ธใฎใƒ•ใ‚ฃใƒณใ‚ฌใƒผใƒ—ใƒชใƒณใƒˆใฎ็ขบ่ชใชใ—ใง่จผๆ˜Žๆ›ธใ‚’ๅ—ใ‘ๅ…ฅใ‚Œใพใ™" #: lxc/remote.go:256 #, c-format msgid "Admin password for %s: " msgstr "%s ใฎ็ฎก็†่€…ใƒ‘ใ‚นใƒฏใƒผใƒ‰: " #: lxc/image.go:347 msgid "Aliases:" msgstr "ใ‚จใ‚คใƒชใ‚ขใ‚น:" #: lxc/exec.go:54 msgid "An environment variable of the form HOME=/home/foo" msgstr "็’ฐๅขƒๅค‰ๆ•ฐใ‚’ HOME=/home/foo ใฎๅฝขๅผใงๆŒ‡ๅฎšใ—ใพใ™" #: lxc/image.go:330 lxc/info.go:90 #, c-format msgid "Architecture: %s" msgstr "ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃ: %s" #: lxc/image.go:351 #, c-format msgid "Auto update: %s" msgstr "่‡ชๅ‹•ๆ›ดๆ–ฐ: %s" #: lxc/help.go:49 msgid "Available commands:" msgstr "ไฝฟ็”จๅฏ่ƒฝใชใ‚ณใƒžใƒณใƒ‰:" #: lxc/info.go:172 msgid "Bytes received" msgstr "ๅ—ไฟกใƒใ‚คใƒˆๆ•ฐ" #: lxc/info.go:173 msgid "Bytes sent" msgstr "้€ไฟกใƒใ‚คใƒˆๆ•ฐ" #: lxc/config.go:270 msgid "COMMON NAME" msgstr "" #: lxc/list.go:379 msgid "CREATED AT" msgstr "" #: lxc/config.go:114 #, c-format msgid "Can't read from stdin: %s" msgstr "ๆจ™ๆบ–ๅ…ฅๅŠ›ใ‹ใ‚‰่ชญใฟ่พผใ‚ใพใ›ใ‚“: %s" #: lxc/config.go:127 lxc/config.go:160 lxc/config.go:182 #, c-format msgid "Can't unset key '%s', it's not currently set." 
msgstr "ใ‚ญใƒผ '%s' ใŒๆŒ‡ๅฎšใ•ใ‚Œใฆใ„ใชใ„ใฎใงๅ‰Š้™คใงใใพใ›ใ‚“ใ€‚" #: lxc/profile.go:417 msgid "Cannot provide container name to list" msgstr "ใ‚ณใƒณใƒ†ใƒŠๅใ‚’ๅ–ๅพ—ใงใใพใ›ใ‚“" #: lxc/remote.go:206 #, c-format msgid "Certificate fingerprint: %x" msgstr "่จผๆ˜Žๆ›ธใฎใƒ•ใ‚ฃใƒณใ‚ฌใƒผใƒ—ใƒชใƒณใƒˆ: %x" #: lxc/action.go:28 #, c-format msgid "" "Changes state of one or more containers to %s.\n" "\n" "lxc %s [...]" msgstr "" "1ใคใพใŸใฏ่ค‡ๆ•ฐใฎใ‚ณใƒณใƒ†ใƒŠใฎ็Šถๆ…‹ใ‚’ %s ใซๅค‰ๆ›ดใ—ใพใ™ใ€‚\n" "\n" "lxc %s [...]" #: lxc/remote.go:279 msgid "Client certificate stored at server: " msgstr "ใ‚ฏใƒฉใ‚คใ‚ขใƒณใƒˆ่จผๆ˜Žๆ›ธใŒใ‚ตใƒผใƒใซๆ ผ็ดใ•ใ‚Œใพใ—ใŸ: " #: lxc/list.go:99 lxc/list.go:100 msgid "Columns" msgstr "ใ‚ซใƒฉใƒ ใƒฌใ‚คใ‚ขใ‚ฆใƒˆ" #: lxc/init.go:134 lxc/init.go:135 lxc/launch.go:40 lxc/launch.go:41 msgid "Config key/value to apply to the new container" msgstr "ๆ–ฐใ—ใ„ใ‚ณใƒณใƒ†ใƒŠใซ้ฉ็”จใ™ใ‚‹ใ‚ญใƒผ/ๅ€คใฎ่จญๅฎš" #: lxc/config.go:500 lxc/config.go:565 lxc/image.go:687 lxc/profile.go:215 #, c-format msgid "Config parsing error: %s" msgstr "่จญๅฎšใฎๆง‹ๆ–‡ใ‚จใƒฉใƒผ: %s" #: lxc/main.go:37 msgid "Connection refused; is LXD running?" msgstr "ๆŽฅ็ถšใŒๆ‹’ๅฆใ•ใ‚Œใพใ—ใŸใ€‚LXDใŒๅฎŸ่กŒใ•ใ‚Œใฆใ„ใพใ™ใ‹?" 
#: lxc/publish.go:59 msgid "Container name is mandatory" msgstr "ใ‚ณใƒณใƒ†ใƒŠๅใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™" #: lxc/init.go:210 #, c-format msgid "Container name is: %s" msgstr "ใ‚ณใƒณใƒ†ใƒŠๅ: %s" #: lxc/publish.go:141 lxc/publish.go:156 #, c-format msgid "Container published with fingerprint: %s" msgstr "ใ‚ณใƒณใƒ†ใƒŠใฏไปฅไธ‹ใฎใƒ•ใ‚ฃใƒณใ‚ฌใƒผใƒ—ใƒชใƒณใƒˆใง publish ใ•ใ‚Œใพใ™: %s" #: lxc/image.go:155 msgid "Copy aliases from source" msgstr "ใ‚ฝใƒผใ‚นใ‹ใ‚‰ใ‚จใ‚คใƒชใ‚ขใ‚นใ‚’ใ‚ณใƒ”ใƒผใ—ใพใ™" #: lxc/copy.go:22 msgid "" "Copy containers within or in between lxd instances.\n" "\n" "lxc copy [remote:] [remote:] [--" "ephemeral|e]" msgstr "" "LXDใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นๅ†…ใ‚‚ใ—ใใฏLXDใ‚คใƒณใ‚นใ‚ฟใƒณใ‚น้–“ใงใ‚ณใƒณใƒ†ใƒŠใ‚’ใ‚ณใƒ”ใƒผใ—ใพใ™ใ€‚\n" "\n" "lxc copy [remote:] [remote:] [--" "ephemeral|e]" #: lxc/image.go:268 #, c-format msgid "Copying the image: %s" msgstr "ใ‚คใƒกใƒผใ‚ธใฎใ‚ณใƒ”ใƒผไธญ: %s" #: lxc/remote.go:221 msgid "Could not create server cert dir" msgstr "ใ‚ตใƒผใƒ่จผๆ˜Žๆ›ธๆ ผ็ด็”จใฎใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใ‚’ไฝœๆˆใงใใพใ›ใ‚“ใ€‚" #: lxc/snapshot.go:21 msgid "" "Create a read-only snapshot of a container.\n" "\n" "lxc snapshot [remote:] [--stateful]\n" "\n" "Creates a snapshot of the container (optionally with the container's memory\n" "state). When --stateful is used, LXD attempts to checkpoint the container's\n" "running state, including process memory state, TCP connections, etc. so that " "it\n" "can be restored (via lxc restore) at a later time (although some things, e." 
"g.\n" "TCP connections after the TCP timeout window has expired, may not be " "restored\n" "successfully).\n" "\n" "Example:\n" "lxc snapshot u1 snap0" msgstr "" "ใ‚ณใƒณใƒ†ใƒŠใฎ่ชญใฟๅ–ใ‚Šๅฐ‚็”จใฎใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ไฝœๆˆใ—ใพใ™ใ€‚\n" "\n" "lxc snapshot [remote:] [--stateful]\n" "\n" "ใ‚ณใƒณใƒ†ใƒŠใฎใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ไฝœๆˆใ—ใพใ™ (ใ‚ชใƒ—ใ‚ทใƒงใƒณใงใ‚ณใƒณใƒ†ใƒŠใฎใƒกใƒขใƒช็Šถๆ…‹ใ‚’\n" "ๅซใ‚ใฆ)ใ€‚--statefulใŒๆŒ‡ๅฎšใ•ใ‚ŒใŸๅ ดๅˆใ€LXDใฏใ‚ณใƒณใƒ†ใƒŠใƒ—ใƒญใ‚ปใ‚นใฎใƒกใƒขใƒช็Šถๆ…‹ใ€TCP\n" "ๆŽฅ็ถšใชใฉใฎๅฎŸ่กŒ็Šถๆ…‹ใ‚’ใ€ใ‚ใจใงlxc restoreใ‚’ไฝฟใฃใฆใƒชใ‚นใƒˆใ‚ขใงใใ‚‹ใ‚ˆใ†ใซใ€ใ‚ณใƒณใƒ†\n" "ใƒŠใฎใƒใ‚งใƒƒใ‚ฏใƒใ‚คใƒณใƒˆใ‚’ๅ–ๅพ—ใ—ใ‚ˆใ†ใจใ—ใพใ™ (ใ—ใ‹ใ—ใ€ใ‚ฟใ‚คใƒ ใ‚ขใ‚ฆใƒˆใ‚ฆใ‚ฃใƒณใƒ‰ใ‚ฆใŒ\n" "expireใ—ใŸใ‚ใจใฎTCPๆŽฅ็ถšใฎใ‚ˆใ†ใซๆญฃๅธธใซใƒชใ‚นใƒˆใ‚ขใงใใชใ„ใ‚‚ใฎใ‚‚ใ‚ใ‚Šใพใ™)\n" "\n" "ไพ‹:\n" "lxc snapshot u1 snap0" #: lxc/image.go:335 lxc/info.go:92 #, c-format msgid "Created: %s" msgstr "ไฝœๆˆๆ—ฅๆ™‚: %s" #: lxc/init.go:177 lxc/launch.go:118 #, c-format msgid "Creating %s" msgstr "%s ใ‚’ไฝœๆˆไธญ" #: lxc/init.go:175 msgid "Creating the container" msgstr "ใ‚ณใƒณใƒ†ใƒŠใ‚’ไฝœๆˆไธญ" #: lxc/image.go:607 lxc/image.go:635 msgid "DESCRIPTION" msgstr "" #: lxc/delete.go:25 msgid "" "Delete containers or container snapshots.\n" "\n" "lxc delete [remote:][/] [remote:][[/" "]...]\n" "\n" "Destroy containers or snapshots with any attached data (configuration, " "snapshots, ...)." msgstr "" "ใ‚ณใƒณใƒ†ใƒŠใ‚‚ใ—ใใฏใ‚ณใƒณใƒ†ใƒŠใฎใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ๆถˆๅŽปใ—ใพใ™ใ€‚\n" "\n" "lxc delete [remote:][/] [remote:]" "[[]...]\n" "\n" "ไป˜ๅฑžใ™ใ‚‹ใƒ‡ใƒผใ‚ฟ (่จญๅฎšใ€ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ€...) 
ใจไธ€็ท’ใซใ‚ณใƒณใƒ†ใƒŠใ‚‚ใ—ใใฏใ‚ณใƒณใƒ†\n" "ใƒŠใฎใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ๆถˆๅŽปใ—ใพใ™ใ€‚" #: lxc/config.go:617 #, c-format msgid "Device %s added to %s" msgstr "ใƒ‡ใƒใ‚คใ‚น %s ใŒ %s ใซ่ฟฝๅŠ ใ•ใ‚Œใพใ—ใŸ" #: lxc/config.go:804 #, c-format msgid "Device %s removed from %s" msgstr "ใƒ‡ใƒใ‚คใ‚น %s ใŒ %s ใ‹ใ‚‰ๅ‰Š้™คใ•ใ‚Œใพใ—ใŸ" #: lxc/list.go:462 msgid "EPHEMERAL" msgstr "" #: lxc/config.go:272 msgid "EXPIRY DATE" msgstr "" #: lxc/main.go:55 msgid "Enables debug mode." msgstr "ใƒ‡ใƒใƒƒใ‚ฐใƒขใƒผใƒ‰ใ‚’ๆœ‰ๅŠนใซใ—ใพใ™ใ€‚" #: lxc/main.go:54 msgid "Enables verbose mode." msgstr "่ฉณ็ดฐใƒขใƒผใƒ‰ใ‚’ๆœ‰ๅŠนใซใ—ใพใ™ใ€‚" #: lxc/help.go:68 msgid "Environment:" msgstr "็’ฐๅขƒๅค‰ๆ•ฐ:" #: lxc/copy.go:29 lxc/copy.go:30 lxc/init.go:138 lxc/init.go:139 #: lxc/launch.go:44 lxc/launch.go:45 msgid "Ephemeral container" msgstr "Ephemeral ใ‚ณใƒณใƒ†ใƒŠ" #: lxc/monitor.go:56 msgid "Event type to listen for" msgstr "Listenใ™ใ‚‹ใ‚คใƒ™ใƒณใƒˆใ‚ฟใ‚คใƒ—" #: lxc/exec.go:45 msgid "" "Execute the specified command in a container.\n" "\n" "lxc exec [remote:]container [--mode=auto|interactive|non-interactive] [--env " "EDITOR=/usr/bin/vim]... \n" "\n" "Mode defaults to non-interactive, interactive mode is selected if both stdin " "AND stdout are terminals (stderr is ignored)." msgstr "" "ๆŒ‡ๅฎšใ—ใŸใ‚ณใƒžใƒณใƒ‰ใ‚’ใ‚ณใƒณใƒ†ใƒŠๅ†…ใงๅฎŸ่กŒใ—ใพใ™ใ€‚\n" "\n" "lxc exec [remote:]container [--mode=auto|interactive|non-interactive] [--env " "EDITOR=/usr/bin/vim]... 
\n" "\n" "ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒขใƒผใƒ‰ใฏ non-interactive ใงใ™ใ€‚ใ‚‚ใ—ๆจ™ๆบ–ๅ…ฅๅ‡บๅŠ›ใŒไธกๆ–นใจใ‚‚ใ‚ฟใƒผใƒŸใƒŠ\n" "ใƒซใฎๅ ดๅˆใฏ interactive ใƒขใƒผใƒ‰ใŒ้ธๆŠžใ•ใ‚Œใพใ™ (ๆจ™ๆบ–ใ‚จใƒฉใƒผๅ‡บๅŠ›ใฏ็„ก่ฆ–ใ•ใ‚Œใพใ™)ใ€‚" #: lxc/image.go:339 #, c-format msgid "Expires: %s" msgstr "ๅคฑๅŠนๆ—ฅๆ™‚: %s" #: lxc/image.go:341 msgid "Expires: never" msgstr "ๅคฑๅŠนๆ—ฅๆ™‚: ๅคฑๅŠนใ—ใชใ„" #: lxc/config.go:269 lxc/image.go:605 lxc/image.go:634 msgid "FINGERPRINT" msgstr "" #: lxc/list.go:102 msgid "Fast mode (same as --columns=nsacPt" msgstr "Fast ใƒขใƒผใƒ‰ (--columns=nsacPt ใจๅŒใ˜)" #: lxc/image.go:328 #, c-format msgid "Fingerprint: %s" msgstr "่จผๆ˜Žๆ›ธใฎใƒ•ใ‚ฃใƒณใ‚ฌใƒผใƒ—ใƒชใƒณใƒˆ: %s" #: lxc/finger.go:17 msgid "" "Fingers the LXD instance to check if it is up and working.\n" "\n" "lxc finger " msgstr "" "LXDใ‚คใƒณใ‚นใ‚ฟใƒณใ‚นใŒ็จผๅƒไธญใ‹ใ‚’็ขบ่ชใ—ใพใ™ใ€‚\n" "\n" "lxc finger " #: lxc/action.go:37 msgid "Force the container to shutdown." msgstr "ใ‚ณใƒณใƒ†ใƒŠใ‚’ๅผทๅˆถใ‚ทใƒฃใƒƒใƒˆใƒ€ใ‚ฆใƒณใ—ใพใ™ใ€‚" #: lxc/delete.go:34 lxc/delete.go:35 msgid "Force the removal of stopped containers." msgstr "ๅœๆญขใ—ใŸใ‚ณใƒณใƒ†ใƒŠใ‚’ๅผทๅˆถ็š„ใซๅ‰Š้™คใ—ใพใ™ใ€‚" #: lxc/main.go:56 msgid "Force using the local unix socket." msgstr "ๅผทๅˆถ็š„ใซใƒญใƒผใ‚ซใƒซใฎUNIXใ‚ฝใ‚ฑใƒƒใƒˆใ‚’ไฝฟใ„ใพใ™ใ€‚" #: lxc/list.go:101 msgid "Format" msgstr "" #: lxc/main.go:138 msgid "Generating a client certificate. This may take a minute..." msgstr "ใ‚ฏใƒฉใ‚คใ‚ขใƒณใƒˆ่จผๆ˜Žๆ›ธใ‚’็”Ÿๆˆใ—ใพใ™ใ€‚1ๅˆ†ใใ‚‰ใ„ใ‹ใ‹ใ‚Šใพใ™..." #: lxc/list.go:376 msgid "IPV4" msgstr "" #: lxc/list.go:377 msgid "IPV6" msgstr "" #: lxc/config.go:271 msgid "ISSUE DATE" msgstr "" #: lxc/main.go:146 msgid "" "If this is your first time using LXD, you should also run: sudo lxd init" msgstr "ๅˆใ‚ใฆ LXD ใ‚’ไฝฟใ†ๅ ดๅˆใ€sudo lxd init ใจๅฎŸ่กŒใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™" #: lxc/main.go:57 msgid "Ignore aliases when determining what command to run." 
msgstr "ใฉใฎใ‚ณใƒžใƒณใƒ‰ใ‚’ๅฎŸ่กŒใ™ใ‚‹ใ‹ๆฑบใ‚ใ‚‹้š›ใซใ‚จใ‚คใƒชใ‚ขใ‚นใ‚’็„ก่ฆ–ใ—ใพใ™ใ€‚" #: lxc/action.go:40 msgid "Ignore the container state (only for start)." msgstr "ใ‚ณใƒณใƒ†ใƒŠใฎ็Šถๆ…‹ใ‚’็„ก่ฆ–ใ—ใพใ™ (startใฎใฟ)ใ€‚" #: lxc/image.go:273 msgid "Image copied successfully!" msgstr "ใ‚คใƒกใƒผใ‚ธใฎใ‚ณใƒ”ใƒผใŒๆˆๅŠŸใ—ใพใ—ใŸ!" #: lxc/image.go:419 #, c-format msgid "Image imported with fingerprint: %s" msgstr "ใ‚คใƒกใƒผใ‚ธใฏไปฅไธ‹ใฎใƒ•ใ‚ฃใƒณใ‚ฌใƒผใƒ—ใƒชใƒณใƒˆใงใ‚คใƒณใƒใƒผใƒˆใ•ใ‚Œใพใ—ใŸ: %s" #: lxc/init.go:73 msgid "" "Initialize a container from a particular image.\n" "\n" "lxc init [remote:] [remote:][] [--ephemeral|-e] [--profile|-p " "...] [--config|-c ...]\n" "\n" "Initializes a container using the specified image and name.\n" "\n" "Not specifying -p will result in the default profile.\n" "Specifying \"-p\" with no argument will result in no profile.\n" "\n" "Example:\n" "lxc init ubuntu u1" msgstr "" "ๆŒ‡ๅฎšใ—ใŸใ‚คใƒกใƒผใ‚ธใ‹ใ‚‰ใ‚ณใƒณใƒ†ใƒŠใ‚’ๅˆๆœŸๅŒ–ใ—ใพใ™ใ€‚\n" "\n" "lxc init [remote:] [remote:][] [--ephemeral|-e] [--profile|-p " "...] 
[--config|-c ...]\n" "\n" "ๆŒ‡ๅฎšใ—ใŸใ‚คใƒกใƒผใ‚ธใจใ‚ณใƒณใƒ†ใƒŠๅใ‚’ไฝฟใฃใฆใ‚ณใƒณใƒ†ใƒŠใ‚’ๅˆๆœŸๅŒ–ใ—ใพใ™ใ€‚\n" "\n" "-p ใ‚’ๆŒ‡ๅฎšใ—ใชใ„ๅ ดๅˆใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’ไฝฟใ„ใพใ™ใ€‚\n" "\"-p\" ใฎใ‚ˆใ†ใซๅผ•ๆ•ฐใชใ—ใง -p ใ‚’ไฝฟใ†ใจใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใชใ—ใจใชใ‚Šใพใ™ใ€‚\n" "\n" "ไพ‹:\n" "lxc init ubuntu u1" #: lxc/remote.go:122 #, c-format msgid "Invalid URL scheme \"%s\" in \"%s\"" msgstr "ไธๆญฃใช URL ใ‚นใ‚ญใƒผใƒ  \"%s\" (\"%s\" ๅ†…)" #: lxc/init.go:30 lxc/init.go:35 msgid "Invalid configuration key" msgstr "ๆญฃใ—ใใชใ„่จญๅฎš้ …็›ฎ (key) ใงใ™" #: lxc/file.go:190 #, c-format msgid "Invalid source %s" msgstr "ไธๆญฃใชใ‚ฝใƒผใ‚น %s" #: lxc/file.go:57 #, c-format msgid "Invalid target %s" msgstr "ไธๆญฃใช้€ใ‚Šๅ…ˆ %s" #: lxc/info.go:121 msgid "Ips:" msgstr "IPใ‚ขใƒ‰ใƒฌใ‚น:" #: lxc/image.go:156 msgid "Keep the image up to date after initial copy" msgstr "ๆœ€ๅˆใซใ‚ณใƒ”ใƒผใ—ใŸๅพŒใ‚‚ๅธธใซใ‚คใƒกใƒผใ‚ธใ‚’ๆœ€ๆ–ฐใฎ็Šถๆ…‹ใซไฟใค" #: lxc/main.go:35 msgid "LXD socket not found; is LXD running?" msgstr "LXD ใฎใ‚ฝใ‚ฑใƒƒใƒˆใŒ่ฆ‹ใคใ‹ใ‚Šใพใ›ใ‚“ใ€‚LXD ใŒๅฎŸ่กŒใ•ใ‚Œใฆใ„ใพใ™ใ‹?" #: lxc/launch.go:22 msgid "" "Launch a container from a particular image.\n" "\n" "lxc launch [remote:] [remote:][] [--ephemeral|-e] [--profile|-p " "...] [--config|-c ...]\n" "\n" "Launches a container using the specified image and name.\n" "\n" "Not specifying -p will result in the default profile.\n" "Specifying \"-p\" with no argument will result in no profile.\n" "\n" "Example:\n" "lxc launch ubuntu:16.04 u1" msgstr "" "ๆŒ‡ๅฎšใ—ใŸใ‚คใƒกใƒผใ‚ธใ‹ใ‚‰ใ‚ณใƒณใƒ†ใƒŠใ‚’่ตทๅ‹•ใ—ใพใ™ใ€‚\n" "\n" "lxc launch [remote:] [remote:][] [--ephemeral|-e] [--profile|-p ...] 
[--config|-c ...]\n" "\n" "ๆŒ‡ๅฎšใ—ใŸใ‚คใƒกใƒผใ‚ธใจๅๅ‰ใ‚’ไฝฟใฃใฆใ‚ณใƒณใƒ†ใƒŠใ‚’่ตทๅ‹•ใ—ใพใ™ใ€‚\n" "\n" "-p ใ‚’ๆŒ‡ๅฎšใ—ใชใ„ๅ ดๅˆใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’ไฝฟใ„ใพใ™ใ€‚\n" "\"-p\" ใฎใ‚ˆใ†ใซๅผ•ๆ•ฐใชใ—ใง -p ใ‚’ไฝฟใ†ใจใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใชใ—ใจใชใ‚Šใพใ™ใ€‚\n" "\n" "ไพ‹:\n" "lxc launch ubuntu:16.04 u1" #: lxc/info.go:25 msgid "" "List information on LXD servers and containers.\n" "\n" "For a container:\n" " lxc info [:]container [--show-log]\n" "\n" "For a server:\n" " lxc info [:]" msgstr "" "LXD ใ‚ตใƒผใƒใจใ‚ณใƒณใƒ†ใƒŠใฎๆƒ…ๅ ฑใ‚’ไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚\n" "\n" "ใ‚ณใƒณใƒ†ใƒŠๆƒ…ๅ ฑ:\n" " lxc info [:]container [--show-log]\n" "\n" "ใ‚ตใƒผใƒๆƒ…ๅ ฑ:\n" " lxc info [:]" #: lxc/list.go:67 msgid "" "Lists the available resources.\n" "\n" "lxc list [resource] [filters] [--format table|json] [-c columns] [--fast]\n" "\n" "The filters are:\n" "* A single keyword like \"web\" which will list any container with a name " "starting by \"web\".\n" "* A regular expression on the container name. (e.g. .*web.*01$)\n" "* A key/value pair referring to a configuration item. For those, the " "namespace can be abreviated to the smallest unambiguous identifier:\n" " * \"user.blah=abc\" will list all containers with the \"blah\" user " "property set to \"abc\".\n" " * \"u.blah=abc\" will do the same\n" " * \"security.privileged=1\" will list all privileged containers\n" " * \"s.privileged=1\" will do the same\n" "* A regular expression matching a configuration item or its value. (e.g. 
" "volatile.eth0.hwaddr=00:16:3e:.*)\n" "\n" "Columns for table format are:\n" "* 4 - IPv4 address\n" "* 6 - IPv6 address\n" "* a - architecture\n" "* c - creation date\n" "* n - name\n" "* p - pid of container init process\n" "* P - profiles\n" "* s - state\n" "* S - number of snapshots\n" "* t - type (persistent or ephemeral)\n" "\n" "Default column layout: ns46tS\n" "Fast column layout: nsacPt" msgstr "" "ๅˆฉ็”จๅฏ่ƒฝใชใƒชใ‚ฝใƒผใ‚นใ‚’ไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚\n" "\n" "lxc list [resource] [filters] [--format table|json] [-c columns] [--fast]\n" "\n" "ใƒ•ใ‚ฃใƒซใ‚ฟใฎๆŒ‡ๅฎš:\n" "* ๅ˜ไธ€ใฎ \"web\" ใฎใ‚ˆใ†ใชใ‚ญใƒผใƒฏใƒผใƒ‰ใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ใจใ€ๅๅ‰ใŒ \"web\" ใงใฏใ˜ใพใ‚‹ใ‚ณใƒณใƒ†\n" " ใƒŠใŒไธ€่ฆง่กจ็คบใ•ใ‚Œใพใ™ใ€‚\n" "* ใ‚ณใƒณใƒ†ใƒŠๅใฎๆญฃ่ฆ่กจ็พ (ไพ‹: .*web.*01$)\n" "* ่จญๅฎš้ …็›ฎใฎใ‚ญใƒผใจๅ€คใ€‚ใ‚ญใƒผใฎๅๅ‰็ฉบ้–“ใฏไธ€ๆ„ใซ่ญ˜ๅˆฅใงใใ‚‹ๅ ดๅˆใฏ็Ÿญ็ธฎใ™ใ‚‹ใ“ใจใŒใง\n" " ใใพใ™:\n" " * \"user.blah=abc\" ใฏ \"blah\" ใจใ„ใ† user ใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใŒ \"abc\" ใซ่จญๅฎšใ•ใ‚Œใฆใ„ใ‚‹\n" " ใ‚ณใƒณใƒ†ใƒŠใ‚’ใ™ในใฆไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚\n" " * \"u.blah=abc\" ใฏไธŠ่จ˜ใจๅŒใ˜ๆ„ๅ‘ณใซใชใ‚Šใพใ™ใ€‚\n" " * \"security.privileged=1\" ใฏ็‰นๆจฉใ‚ณใƒณใƒ†ใƒŠใ‚’ใ™ในใฆไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚\n" " * \"s.privileged=1\" ใฏไธŠ่จ˜ใจๅŒใ˜ๆ„ๅ‘ณใซใชใ‚Šใพใ™ใ€‚\n" " * ่จญๅฎš้ …็›ฎใ‚‚ใ—ใใฏๅ€คใจใƒžใƒƒใƒใ™ใ‚‹ๆญฃ่ฆ่กจ็พ (ไพ‹:volatile.eth0.hwaddr=00:16:3e:.*)\n" "\n" "่กจใฎใ‚ซใƒฉใƒ ใฎๆŒ‡ๅฎš:\n" "* 4 - IPv4 ใ‚ขใƒ‰ใƒฌใ‚น\n" "* 6 - IPv6 ใ‚ขใƒ‰ใƒฌใ‚น\n" "* a - ใ‚ขใƒผใ‚ญใƒ†ใ‚ฏใƒใƒฃ\n" "* c - ไฝœๆˆๆ—ฅ\n" "* n - ๅๅ‰\n" "* p - ใ‚ณใƒณใƒ†ใƒŠใฎ init ใƒ—ใƒญใ‚ปใ‚นใฎ pid\n" "* P - ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซ\n" "* s - ็Šถๆ…‹\n" "* S - ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใฎๆ•ฐ\n" "* t - ใ‚ฟใ‚คใƒ— (persistent or ephemeral)\n" "\n" "ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใ‚ซใƒฉใƒ ใƒฌใ‚คใ‚ขใ‚ฆใƒˆ: ns46tS\n" "Fast ใƒขใƒผใƒ‰ใฎใ‚ซใƒฉใƒ ใƒฌใ‚คใ‚ขใ‚ฆใƒˆ: nsacPt" #: lxc/info.go:225 msgid "Log:" msgstr "ใƒญใ‚ฐ:" #: lxc/image.go:154 msgid "Make image public" msgstr "ใ‚คใƒกใƒผใ‚ธใ‚’ public 
ใซใ™ใ‚‹" #: lxc/publish.go:32 msgid "Make the image public" msgstr "ใ‚คใƒกใƒผใ‚ธใ‚’ public ใซใ™ใ‚‹" #: lxc/profile.go:48 msgid "" "Manage configuration profiles.\n" "\n" "lxc profile list [filters] List available profiles.\n" "lxc profile show Show details of a profile.\n" "lxc profile create Create a profile.\n" "lxc profile copy Copy the profile to the " "specified remote.\n" "lxc profile get Get profile configuration.\n" "lxc profile set Set profile configuration.\n" "lxc profile delete Delete a profile.\n" "lxc profile edit \n" " Edit profile, either by launching external editor or reading STDIN.\n" " Example: lxc profile edit # launch editor\n" " cat profile.yml | lxc profile edit # read from " "profile.yml\n" "\n" "lxc profile apply \n" " Apply a comma-separated list of profiles to a container, in order.\n" " All profiles passed in this call (and only those) will be applied\n" " to the specified container.\n" " Example: lxc profile apply foo default,bar # Apply default and bar\n" " lxc profile apply foo default # Only default is active\n" " lxc profile apply '' # no profiles are applied anymore\n" " lxc profile apply bar,default # Apply default second now\n" "lxc profile apply-add \n" "lxc profile apply-remove \n" "\n" "Devices:\n" "lxc profile device list List " "devices in the given profile.\n" "lxc profile device show Show " "full device details in the given profile.\n" "lxc profile device remove Remove a " "device from a profile.\n" "lxc profile device get <[remote:]profile> Get a " "device property.\n" "lxc profile device set <[remote:]profile> Set a " "device property.\n" "lxc profile device unset <[remote:]profile> Unset a " "device property.\n" "lxc profile device add " "[key=value]...\n" " Add a profile device, such as a disk or a nic, to the containers\n" " using the specified profile." 
msgstr "" "่จญๅฎšใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’็ฎก็†ใ—ใพใ™ใ€‚\n" "\n" "lxc profile list [filters]\n" " ๅˆฉ็”จๅฏ่ƒฝใชใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’ไธ€่ฆงใ—ใพใ™ใ€‚\n" "lxc profile show \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใฎ่ฉณ็ดฐใ‚’่กจ็คบใ—ใพใ™ใ€‚\n" "lxc profile create \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’ไฝœๆˆใ—ใพใ™ใ€‚\n" "lxc profile copy \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’ remote ใซใ‚ณใƒ”ใƒผใ—ใพใ™ใ€‚\n" "lxc profile get \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใฎ่จญๅฎšใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚\n" "lxc profile set \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใฎ่จญๅฎšใ‚’่จญๅฎšใ—ใพใ™ใ€‚\n" "lxc profile delete \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" "lxc profile edit \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’็ทจ้›†ใ—ใพใ™ใ€‚ๅค–้ƒจใ‚จใƒ‡ใ‚ฃใ‚ฟใ‚‚ใ—ใใฏSTDINใ‹ใ‚‰่ชญใฟ่พผใฟใพใ™ใ€‚\n" " ไพ‹: lxc profile edit # ใ‚จใƒ‡ใ‚ฃใ‚ฟใฎ่ตทๅ‹•\n" " cat profile.yml | lxc profile edit # profile.yml ใ‹ใ‚‰่ชญใฟ่พผใฟ\n" "\n" "lxc profile apply \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใฎใ‚ณใƒณใƒžๅŒบๅˆ‡ใ‚Šใฎใƒชใ‚นใƒˆใ‚’ใ‚ณใƒณใƒ†ใƒŠใซ้ †็•ชใซ้ฉ็”จใ—ใพใ™ใ€‚\n" " ใ“ใฎใ‚ณใƒžใƒณใƒ‰ใงๆŒ‡ๅฎšใ—ใŸใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ ใ‘ใŒๅฏพ่ฑกใฎใ‚ณใƒณใƒ†ใƒŠใซ้ฉ็”จใ•ใ‚Œใพใ™ใ€‚\n" " ไพ‹: lxc profile apply foo default,bar # defaultใจbarใ‚’้ฉ็”จ\n" " lxc profile apply foo default # defaultใ ใ‘ใ‚’ๆœ‰ๅŠนๅŒ–\n" " lxc profile apply '' # ไธ€ๅˆ‡ใฎใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’้ฉ็”จใ—ใชใ„\n" " lxc profile apply bar,default # defaultใ‚’2็•ช็›ฎใซ้ฉ็”จ\n" "lxc profile apply-add \n" "lxc profile apply-remove \n" "\n" "ใƒ‡ใƒใ‚คใ‚น:\n" "lxc profile device list \n" " ๆŒ‡ๅฎšใ—ใŸใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซๅ†…ใฎใƒ‡ใƒใ‚คใ‚นใ‚’ไธ€่ฆง่กจ็คบใ—ใพใ™\n" "lxc profile device show \n" " ๆŒ‡ๅฎšใ—ใŸใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซๅ†…ใฎๅ…จใƒ‡ใƒใ‚คใ‚นใฎ่ฉณ็ดฐใ‚’่กจ็คบใ—ใพใ™\n" "lxc profile device remove \n" " ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰ใƒ‡ใƒใ‚คใ‚นใ‚’ๅ‰Š้™คใ—ใพใ™\n" "lxc profile device get <[remote:]profile> \n" " ใƒ‡ใƒใ‚คใ‚นใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใ‚’ๅ–ๅพ—ใ—ใพใ™\n" "lxc profile device set <[remote:]profile> \n" " ใƒ‡ใƒใ‚คใ‚นใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใ‚’่จญๅฎšใ—ใพใ™\n" "lxc profile device unset <[remote:]profile> 
\n" " ใƒ‡ใƒใ‚คใ‚นใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใ‚’ๅ‰Š้™คใ—ใพใ™\n" "lxc profile device add [key=value]...\n" " ใƒ‡ใ‚ฃใ‚นใ‚ฏใ‚„NICใฎใ‚ˆใ†ใชใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใƒ‡ใƒใ‚คใ‚นใ‚’ๆŒ‡ๅฎšใ—ใŸใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใ‚’ไฝฟใฃใฆ\n" " ใ‚ณใƒณใƒ†ใƒŠใซ่ฟฝๅŠ ใ—ใพใ™ใ€‚" #: lxc/config.go:58 msgid "" "Manage configuration.\n" "\n" "lxc config device add <[remote:]container> [key=value]... " "Add a device to a container.\n" "lxc config device get <[remote:]container> " "Get a device property.\n" "lxc config device set <[remote:]container> " "Set a device property.\n" "lxc config device unset <[remote:]container> " "Unset a device property.\n" "lxc config device list <[remote:]container> " "List devices for container.\n" "lxc config device show <[remote:]container> " "Show full device details for container.\n" "lxc config device remove <[remote:]container> " "Remove device from container.\n" "\n" "lxc config get [remote:][container] " "Get container or server configuration key.\n" "lxc config set [remote:][container] " "Set container or server configuration key.\n" "lxc config unset [remote:][container] " "Unset container or server configuration key.\n" "lxc config show [remote:][container] [--expanded] " "Show container or server configuration.\n" "lxc config edit [remote:][container] " "Edit container or server configuration in external editor.\n" " Edit configuration, either by launching external editor or reading " "STDIN.\n" " Example: lxc config edit # launch editor\n" " cat config.yml | lxc config edit # read from config." 
"yml\n" "\n" "lxc config trust list [remote] " "List all trusted certs.\n" "lxc config trust add [remote] " "Add certfile.crt to trusted hosts.\n" "lxc config trust remove [remote] [hostname|fingerprint] " "Remove the cert from trusted hosts.\n" "\n" "Examples:\n" "To mount host's /share/c1 onto /opt in the container:\n" " lxc config device add [remote:]container1 disk source=/" "share/c1 path=opt\n" "\n" "To set an lxc config value:\n" " lxc config set [remote:] raw.lxc 'lxc.aa_allow_incomplete = " "1'\n" "\n" "To listen on IPv4 and IPv6 port 8443 (you can omit the 8443 its the " "default):\n" " lxc config set core.https_address [::]:8443\n" "\n" "To set the server trust password:\n" " lxc config set core.trust_password blah" msgstr "" "่จญๅฎšใ‚’็ฎก็†ใ—ใพใ™ใ€‚\n" "\n" "lxc config device add <[remote:]container> [key=value]...\n" " ใ‚ณใƒณใƒ†ใƒŠใซใƒ‡ใƒใ‚คใ‚นใ‚’่ฟฝๅŠ ใ—ใพใ™ใ€‚\n" "lxc config device get <[remote:]container> \n" " ใƒ‡ใƒใ‚คใ‚นใฎใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚\n" "lxc config device set <[remote:]container> \n" " ใƒ‡ใƒใ‚คใ‚นใฎใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใ‚’่จญๅฎšใ—ใพใ™ใ€‚\n" "lxc config device unset <[remote:]container> \n" " ใƒ‡ใƒใ‚คใ‚นใฎใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" "lxc config device list <[remote:]container>\n" " ใ‚ณใƒณใƒ†ใƒŠใฎใƒ‡ใƒใ‚คใ‚นใ‚’ไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚\n" "lxc config device show <[remote:]container>\n" " ๅ…จใฆใฎใƒ‡ใƒใ‚คใ‚นใฎ่ฉณ็ดฐใ‚’่กจ็คบใ—ใพใ™ใ€‚\n" "lxc config device remove <[remote:]container> \n" " ใ‚ณใƒณใƒ†ใƒŠใ‹ใ‚‰ใƒ‡ใƒใ‚คใ‚นใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" "\n" "lxc config get [remote:][container] \n" " ใ‚ณใƒณใƒ†ใƒŠใ‚‚ใ—ใใฏใ‚ตใƒผใƒใฎ่จญๅฎš้ …็›ฎใฎๅ€คใ‚’ๅ–ๅพ—ใ—ใพใ™ใ€‚\n" "lxc config set [remote:][container] \n" " ใ‚ณใƒณใƒ†ใƒŠใ‚‚ใ—ใใฏใ‚ตใƒผใƒใฎ่จญๅฎš้ …็›ฎใซๅ€คใ‚’่จญๅฎšใ—ใพใ™ใ€‚\n" "lxc config unset [remote:][container] \n" " ใ‚ณใƒณใƒ†ใƒŠใ‚‚ใ—ใใฏใ‚ตใƒผใƒใฎ่จญๅฎš้ …็›ฎใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" "lxc config show [remote:][container] [--expanded]\n" " 
ใ‚ณใƒณใƒ†ใƒŠใ‚‚ใ—ใใฏใ‚ตใƒผใƒใฎ่จญๅฎšใ‚’่กจ็คบใ—ใพใ™ใ€‚\n" "lxc config edit [remote:][container]\n" " ใ‚ณใƒณใƒ†ใƒŠใ‚‚ใ—ใใฏใ‚ตใƒผใƒใฎ่จญๅฎšใ‚’ๅค–้ƒจใ‚จใƒ‡ใ‚ฃใ‚ฟใง็ทจ้›†ใ—ใพใ™ใ€‚\n" " ่จญๅฎšใฎ็ทจ้›†ใฏๅค–้ƒจใ‚จใƒ‡ใ‚ฃใ‚ฟใ‚’่ตทๅ‹•ใ™ใ‚‹ใ‹ใ€ๆจ™ๆบ–ๅ…ฅๅŠ›ใ‹ใ‚‰ใฎ่ชญใฟ่พผใฟใง่กŒใ„ใพ" "ใ™ใ€‚\n" " ไพ‹: lxc config edit # ใ‚จใƒ‡ใ‚ฃใ‚ฟใฎ่ตทๅ‹•\n" " cat config.yml | lxc config edit # config.ymlใ‹ใ‚‰่ชญใฟ่พผใฟ\n" "\n" "lxc config trust list [remote]\n" " ไฟก้ ผใ™ใ‚‹่จผๆ˜Žๆ›ธใ‚’ๅ…จใฆ่กจ็คบใ—ใพใ™ใ€‚\n" "lxc config trust add [remote] \n" " certfile.crt ใ‚’ไฟก้ ผใ™ใ‚‹ใƒ›ใ‚นใƒˆใซ่ฟฝๅŠ ใ—ใพใ™ใ€‚\n" "lxc config trust remove [remote] [hostname|fingerprint]\n" " ไฟก้ ผใ™ใ‚‹ใƒ›ใ‚นใƒˆใ‹ใ‚‰่จผๆ˜Žๆ›ธใ‚’ๆถˆๅŽปใ—ใพใ™ใ€‚\n" "\n" "ไพ‹:\n" "ใƒ›ใ‚นใƒˆใฎ /share/c1 ใ‚’ใ‚ณใƒณใƒ†ใƒŠๅ†…ใฎ /opt ใซใƒžใ‚ฆใƒณใƒˆใ™ใ‚‹ใซใฏ:\n" " lxc config device add [remote:]container1 disk source=/" "share/c1 path=opt\n" "\n" "lxc ่จญๅฎš้ …็›ฎใซๅ€คใ‚’่จญๅฎšใ™ใ‚‹ใซใฏ:\n" " lxc config set [remote:] raw.lxc 'lxc.aa_allow_incomplete = " "1'\n" "\n" "IPv4 ใจ IPv6 ใฎใƒใƒผใƒˆ 8443 ใง Listen ใ™ใ‚‹ใซใฏ\n" "(8443 ใฏใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใชใฎใง็œ็•ฅใงใใพใ™):\n" " lxc config set core.https_address [::]:8443\n" "\n" "ใ‚ตใƒผใƒใฎใƒ‘ใ‚นใƒฏใƒผใƒ‰ใ‚’่จญๅฎšใ™ใ‚‹ใซใฏ:\n" " lxc config set core.trust_password blah" #: lxc/file.go:32 msgid "" "Manage files on a container.\n" "\n" "lxc file pull [...] \n" "lxc file push [--uid=UID] [--gid=GID] [--mode=MODE] [...] " "\n" "lxc file edit \n" "\n" " in the case of pull, in the case of push and in the " "case of edit are /" msgstr "" "ใ‚ณใƒณใƒ†ใƒŠไธŠใฎใƒ•ใ‚กใ‚คใƒซใ‚’็ฎก็†ใ—ใพใ™ใ€‚\n" "\n" "lxc file pull [...] \n" "lxc file push [--uid=UID] [--gid=GID] [--mode=MODE] [...] 
" "\n" "lxc file edit \n" "\n" "pull ใฎๅ ดๅˆใฎ ใ€push ใฎๅ ดๅˆใฎ ใ€edit ใฎๅ ดๅˆใฎ ใฏใ€ใ„\n" "ใšใ‚Œใ‚‚ / ใฎๅฝขๅผใงใ™ใ€‚" #: lxc/remote.go:39 msgid "" "Manage remote LXD servers.\n" "\n" "lxc remote add [--accept-certificate] [--password=PASSWORD]\n" " [--public] [--protocol=PROTOCOL] " "Add the remote at .\n" "lxc remote remove " "Remove the remote .\n" "lxc remote list " "List all remotes.\n" "lxc remote rename " "Rename remote to .\n" "lxc remote set-url " "Update 's url to .\n" "lxc remote set-default " "Set the default remote.\n" "lxc remote get-default " "Print the default remote." msgstr "" "ใƒชใƒขใƒผใƒˆใฎ LXD ใ‚ตใƒผใƒใ‚’็ฎก็†ใ—ใพใ™ใ€‚\n" "\n" "lxc remote add \n" " [--accept-certificate] [--password=PASSWORD]\n" " [--public] [--protocol=PROTOCOL]\n" " ใ‚’ใƒชใƒขใƒผใƒˆใƒ›ใ‚นใƒˆ ใจใ—ใฆ่ฟฝๅŠ ใ—ใพใ™ใ€‚\n" "lxc remote remove \n" " ใƒชใƒขใƒผใƒˆใƒ›ใ‚นใƒˆ ใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" "lxc remote list\n" " ็™ป้Œฒๆธˆใฟใฎใƒชใƒขใƒผใƒˆใƒ›ใ‚นใƒˆใ‚’ๅ…จใฆไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚\n" "lxc remote rename \n" " ใƒชใƒขใƒผใƒˆใƒ›ใ‚นใƒˆใฎๅๅ‰ใ‚’ ใ‹ใ‚‰ ใซๅค‰ๆ›ดใ—ใพใ™ใ€‚\n" "lxc remote set-url \n" " ใฎ url ใ‚’ ใซๆ›ดๆ–ฐใ—ใพใ™ใ€‚\n" "lxc remote set-default \n" " ใ‚’ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒชใƒขใƒผใƒˆใƒ›ใ‚นใƒˆใซ่จญๅฎšใ—ใพใ™ใ€‚\n" "lxc remote get-default\n" " ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใซ่จญๅฎšใ•ใ‚Œใฆใ„ใ‚‹ใƒชใƒขใƒผใƒˆใƒ›ใ‚นใƒˆใ‚’่กจ็คบใ—ใพใ™ใ€‚" #: lxc/image.go:93 msgid "" "Manipulate container images.\n" "\n" "In LXD containers are created from images. 
Those images were themselves\n" "either generated from an existing container or downloaded from an image\n" "server.\n" "\n" "When using remote images, LXD will automatically cache images for you\n" "and remove them upon expiration.\n" "\n" "The image unique identifier is the hash (sha-256) of its representation\n" "as a compressed tarball (or for split images, the concatenation of the\n" "metadata and rootfs tarballs).\n" "\n" "Images can be referenced by their full hash, shortest unique partial\n" "hash or alias name (if one is set).\n" "\n" "\n" "lxc image import [rootfs tarball|URL] [remote:] [--public] [--" "created-at=ISO-8601] [--expires-at=ISO-8601] [--fingerprint=FINGERPRINT] [--" "alias=ALIAS].. [prop=value]\n" " Import an image tarball (or tarballs) into the LXD image store.\n" "\n" "lxc image copy [remote:] : [--alias=ALIAS].. [--copy-aliases] " "[--public] [--auto-update]\n" " Copy an image from one LXD daemon to another over the network.\n" "\n" " The auto-update flag instructs the server to keep this image up to\n" " date. It requires the source to be an alias and for it to be public.\n" "\n" "lxc image delete [remote:]\n" " Delete an image from the LXD image store.\n" "\n" "lxc image export [remote:]\n" " Export an image from the LXD image store into a distributable tarball.\n" "\n" "lxc image info [remote:]\n" " Print everything LXD knows about a given image.\n" "\n" "lxc image list [remote:] [filter]\n" " List images in the LXD image store. 
Filters may be of the\n" " = form for property based filtering, or part of the image\n" " hash or part of the image alias name.\n" "\n" "lxc image show [remote:]\n" " Yaml output of the user modifiable properties of an image.\n" "\n" "lxc image edit [remote:]\n" " Edit image, either by launching external editor or reading STDIN.\n" " Example: lxc image edit # launch editor\n" " cat image.yml | lxc image edit # read from image.yml\n" "\n" "lxc image alias create [remote:] \n" " Create a new alias for an existing image.\n" "\n" "lxc image alias delete [remote:]\n" " Delete an alias.\n" "\n" "lxc image alias list [remote:] [filter]\n" " List the aliases. Filters may be part of the image hash or part of the " "image alias name.\n" msgstr "" "ใ‚ณใƒณใƒ†ใƒŠใ‚คใƒกใƒผใ‚ธใ‚’ๆ“ไฝœใ—ใพใ™ใ€‚\n" "\n" "LXD ใงใฏใ€ใ‚ณใƒณใƒ†ใƒŠใฏใ‚คใƒกใƒผใ‚ธใ‹ใ‚‰ไฝœใ‚‰ใ‚Œใพใ™ใ€‚ใ“ใฎใ‚คใƒกใƒผใ‚ธใฏใ€ๆ—ขๅญ˜ใฎใ‚ณใƒณใƒ†ใƒŠ\n" "ใ‚„ใ‚คใƒกใƒผใ‚ธใ‚ตใƒผใƒใ‹ใ‚‰ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใŸใ‚คใƒกใƒผใ‚ธใ‹ใ‚‰ไฝœใ‚‰ใ‚Œใพใ™ใ€‚\n" "\n" "ใƒชใƒขใƒผใƒˆใฎใ‚คใƒกใƒผใ‚ธใ‚’ไฝฟใ†ๅ ดๅˆใ€LXD ใฏ่‡ชๅ‹•็š„ใซใ‚คใƒกใƒผใ‚ธใ‚’ใ‚ญใƒฃใƒƒใ‚ทใƒฅใ—ใพใ™ใ€‚ใ\n" "ใ—ใฆใ€ใ‚คใƒกใƒผใ‚ธใฎๆœŸ้™ใŒๅˆ‡ใ‚Œใ‚‹ใจใ‚ญใƒฃใƒƒใ‚ทใƒฅใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" "\n" "ใ‚คใƒกใƒผใ‚ธๅ›บๆœ‰ใฎ่ญ˜ๅˆฅๅญใฏๅœง็ธฎใ•ใ‚ŒใŸ tarball (ๅˆ†ๅ‰ฒใ‚คใƒกใƒผใ‚ธใฎๅ ดๅˆใฏใ€ใƒกใ‚ฟใƒ‡ใƒผใ‚ฟ\n" "ใจ rootfs tarball ใ‚’็ตๅˆใ—ใŸใ‚‚ใฎ) ใฎใƒใƒƒใ‚ทใƒฅ (sha-256) ใงใ™ใ€‚\n" "\n" "ใ‚คใƒกใƒผใ‚ธใฏๅ…จใƒใƒƒใ‚ทใƒฅๆ–‡ๅญ—ๅˆ—ใ€ไธ€ๆ„ใซๅฎšใพใ‚‹ใƒใƒƒใ‚ทใƒฅใฎ็Ÿญ็ธฎ่กจ็พใ€(่จญๅฎšใ•ใ‚Œใฆใ„\n" "ใ‚‹ๅ ดๅˆใฏ) ใ‚จใ‚คใƒชใ‚ขใ‚นใงๅ‚็…งใงใใพใ™ใ€‚\n" "\n" "\n" "lxc image import [rootfs tarball|URL] [remote:] [--public] [--created-at=ISO-8601] [--expires-at=ISO-8601] [--fingerprint=FINGERPRINT] [--alias=ALIAS].. [prop=value]\n" " ใ‚คใƒกใƒผใ‚ธใฎ tarball (่ค‡ๆ•ฐใ‚‚ๅฏ่ƒฝ) ใ‚’ LXD ใฎใ‚คใƒกใƒผใ‚ธใ‚นใƒˆใ‚ขใซใ‚คใƒณใƒใƒผใƒˆใ—ใพ\n" " ใ™ใ€‚\n" "\n" "lxc image copy [remote:] : [--alias=ALIAS].. 
[--copy-aliases] [--public] [--auto-update]\n" " ใƒใƒƒใƒˆใƒฏใƒผใ‚ฏ็ตŒ็”ฑใงใ‚ใ‚‹ LXD ใƒ‡ใƒผใƒขใƒณใ‹ใ‚‰ไป–ใฎ LXD ใƒ‡ใƒผใƒขใƒณใธใ‚คใƒกใƒผใ‚ธใ‚’\n" " ใ‚ณใƒ”ใƒผใ—ใพใ™ใ€‚\n" "\n" " auto-update ใƒ•ใƒฉใ‚ฐใฏใ€ใ‚ตใƒผใƒใŒใ‚คใƒกใƒผใ‚ธใ‚’ๆœ€ๆ–ฐใซไฟใคใ‚ˆใ†ใซๆŒ‡็คบใ—ใพใ™ใ€‚ใ‚ค\n" " ใƒกใƒผใ‚ธใฎใ‚ฝใƒผใ‚นใŒใ‚จใ‚คใƒชใ‚ขใ‚นใงใ‚ใ‚Šใ€public ใงใ‚ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚\n" "\n" "lxc image delete [remote:]\n" " LXD ใฎใ‚คใƒกใƒผใ‚ธใ‚นใƒˆใ‚ขใ‹ใ‚‰ใ‚คใƒกใƒผใ‚ธใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" "\n" "lxc image export [remote:]\n" " LXD ใฎใ‚คใƒกใƒผใ‚ธใ‚นใƒˆใ‚ขใ‹ใ‚‰้…ๅธƒๅฏ่ƒฝใช tarball ใจใ—ใฆใ‚คใƒกใƒผใ‚ธใ‚’ใ‚จใ‚ฏใ‚นใƒใƒผใƒˆ\n" " ใ—ใพใ™ใ€‚\n" "\n" "lxc image info [remote:]\n" " ๆŒ‡ๅฎšใ—ใŸใ‚คใƒกใƒผใ‚ธใซใคใ„ใฆใฎใ™ในใฆใฎๆƒ…ๅ ฑใ‚’่กจ็คบใ—ใพใ™ใ€‚\n" "\n" "lxc image list [remote:] [filter]\n" " LXD ใฎใ‚คใƒกใƒผใ‚ธใ‚นใƒˆใ‚ขๅ†…ใฎใ‚คใƒกใƒผใ‚ธใ‚’ไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚ใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใงใƒ•ใ‚ฃใƒซใ‚ฟ\n" " ใ‚’่กŒใ†ๅ ดๅˆใฏใ€ใƒ•ใ‚ฃใƒซใ‚ฟใฏ = ใฎๅฝขใซใชใ‚Šใพใ™ใ€‚ใƒ•ใ‚ฃใƒซใ‚ฟใฏใ‚คใƒกใƒผ\n" " ใ‚ธใƒใƒƒใ‚ทใƒฅใฎไธ€้ƒจใ‚„ใ‚คใƒกใƒผใ‚ธใ‚จใ‚คใƒชใ‚ขใ‚นๅใฎไธ€้ƒจใ‚‚ๆŒ‡ๅฎšใงใใพใ™ใ€‚\n" "\n" "lxc image show [remote:]\n" " ใƒฆใƒผใ‚ถใŒๅค‰ๆ›ดใงใใ‚‹ใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃใฎ YAML ๅฝขๅผใฎๅ‡บๅŠ›ใ‚’่กŒใ„ใพใ™ใ€‚\n" "\n" "lxc image edit [remote:]\n" " ๅค–้ƒจใ‚จใƒ‡ใ‚ฃใ‚ฟใพใŸใฏๆจ™ๆบ–ๅ…ฅๅŠ›ใ‹ใ‚‰ใฎ่ชญใฟ่พผใฟใซใ‚ˆใ‚Šใ€ใ‚คใƒกใƒผใ‚ธใ‚’็ทจ้›†ใ—ใพใ™ใ€‚\n" " ไพ‹: lxc image edit # ใ‚จใƒ‡ใ‚ฃใ‚ฟใฎ่ตทๅ‹•\n" " cat image.yml | lxc image edit # image.yml ใ‹ใ‚‰่ชญใฟ่พผใฟ\n" "\n" "lxc image alias create [remote:] \n" " ๆ—ขๅญ˜ใฎใ‚คใƒกใƒผใ‚ธใซๆ–ฐใŸใซใ‚จใ‚คใƒชใ‚ขใ‚นใ‚’ไฝœๆˆใ—ใพใ™ใ€‚\n" "\n" "lxc image alias delete [remote:]\n" " ใ‚จใ‚คใƒชใ‚ขใ‚นใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" "\n" "lxc image alias list [remote:] [filter]\n" " ใ‚จใ‚คใƒชใ‚ขใ‚นใ‚’ไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚ใ‚คใƒกใƒผใ‚ธใƒใƒƒใ‚ทใƒฅใฎไธ€้ƒจใ‚„ใ‚คใƒกใƒผใ‚ธใฎใ‚จใ‚คใƒชใ‚ขใ‚น\n" " ๅใฎไธ€้ƒจใ‚’ใƒ•ใ‚ฃใƒซใ‚ฟใจใ—ใฆๆŒ‡ๅฎšใงใใพใ™ใ€‚\n" #: lxc/info.go:147 msgid "Memory (current)" msgstr "ใƒกใƒขใƒช (็พๅœจๅ€ค)" 
#: lxc/info.go:151 msgid "Memory (peak)" msgstr "ใƒกใƒขใƒช (ใƒ”ใƒผใ‚ฏ)" #: lxc/help.go:86 msgid "Missing summary." msgstr "ใ‚ตใƒžใƒชใƒผใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚" #: lxc/monitor.go:41 msgid "" "Monitor activity on the LXD server.\n" "\n" "lxc monitor [remote:] [--type=TYPE...]\n" "\n" "Connects to the monitoring interface of the specified LXD server.\n" "\n" "By default will listen to all message types.\n" "Specific types to listen to can be specified with --type.\n" "\n" "Example:\n" "lxc monitor --type=logging" msgstr "" "LXD ใ‚ตใƒผใƒใฎๅ‹•ไฝœใ‚’ใƒขใƒ‹ใ‚ฟใƒชใƒณใ‚ฐใ—ใพใ™ใ€‚\n" "\n" "lxc monitor [remote:] [--type=TYPE...]\n" "\n" "ๆŒ‡ๅฎšใ—ใŸ LXD ใ‚ตใƒผใƒใฎใƒขใƒ‹ใ‚ฟใƒชใƒณใ‚ฐใ‚คใƒณใ‚ฟใƒผใƒ•ใ‚งใƒผใ‚นใซๆŽฅ็ถšใ—ใพใ™ใ€‚\n" "\n" "ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใงใฏๅ…จใฆใฎใ‚ฟใ‚คใƒ—ใ‚’ใƒขใƒ‹ใ‚ฟใƒชใƒณใ‚ฐใ—ใพใ™ใ€‚\n" "--type ใซใ‚ˆใ‚Šใ€ใƒขใƒ‹ใ‚ฟใƒชใƒณใ‚ฐใ™ใ‚‹ใ‚ฟใ‚คใƒ—ใ‚’ๆŒ‡ๅฎšใงใใพใ™ใ€‚\n" "\n" "ไพ‹:\n" "lxc monitor --type=logging" #: lxc/file.go:178 msgid "More than one file to download, but target is not a directory" msgstr "" "ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ๅฏพ่ฑกใฎใƒ•ใ‚กใ‚คใƒซใŒ่ค‡ๆ•ฐใ‚ใ‚Šใพใ™ใŒใ€ใ‚ณใƒ”ใƒผๅ…ˆใŒใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒชใงใฏใ‚ใ‚Šใพ" "ใ›ใ‚“ใ€‚" #: lxc/move.go:17 msgid "" "Move containers within or in between lxd instances.\n" "\n" "lxc move [remote:] [remote:]\n" " Move a container between two hosts, renaming it if destination name " "differs.\n" "\n" "lxc move \n" " Rename a local container.\n" msgstr "" "LXD ใƒ›ใ‚นใƒˆๅ†…ใ€ใ‚‚ใ—ใใฏ LXD ใƒ›ใ‚นใƒˆ้–“ใงใ‚ณใƒณใƒ†ใƒŠใ‚’็งปๅ‹•ใ—ใพใ™ใ€‚\n" "\n" "lxc move [remote:] [remote:]\n" " 2 ใคใฎใƒ›ใ‚นใƒˆ้–“ใงใ‚ณใƒณใƒ†ใƒŠใ‚’็งปๅ‹•ใ—ใพใ™ใ€‚ใ‚ณใƒ”ใƒผๅ…ˆใฎๅๅ‰ใŒๅ…ƒใจ้•ใ†ๅ ดๅˆใฏ\n" " ๅŒๆ™‚ใซใƒชใƒใƒผใƒ ใ•ใ‚Œใพใ™ใ€‚\n" "\n" "lxc move \n" " ใƒญใƒผใ‚ซใƒซใฎใ‚ณใƒณใƒ†ใƒŠใ‚’ใƒชใƒใƒผใƒ ใ—ใพใ™ใ€‚\n" #: lxc/action.go:63 msgid "Must supply container name for: " msgstr "ใ‚ณใƒณใƒ†ใƒŠๅใ‚’ๆŒ‡ๅฎšใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™: " #: lxc/list.go:380 lxc/remote.go:363 msgid "NAME" msgstr "" #: lxc/remote.go:337 
lxc/remote.go:342 msgid "NO" msgstr "" #: lxc/info.go:89 #, c-format msgid "Name: %s" msgstr "ใ‚ณใƒณใƒ†ใƒŠๅ: %s" #: lxc/image.go:157 lxc/publish.go:33 msgid "New alias to define at target" msgstr "ๆ–ฐใ—ใ„ใ‚จใ‚คใƒชใ‚ขใ‚นใ‚’ๅฎš็พฉใ™ใ‚‹" #: lxc/config.go:281 msgid "No certificate provided to add" msgstr "่ฟฝๅŠ ใ™ในใ่จผๆ˜Žๆ›ธใŒๆไพ›ใ•ใ‚Œใฆใ„ใพใ›ใ‚“" #: lxc/config.go:304 msgid "No fingerprint specified." msgstr "ใƒ•ใ‚ฃใƒณใ‚ฌใƒผใƒ—ใƒชใƒณใƒˆใŒๆŒ‡ๅฎšใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚" #: lxc/remote.go:107 msgid "Only https URLs are supported for simplestreams" msgstr "simplestreams ใฏ https ใฎ URL ใฎใฟใ‚ตใƒใƒผใƒˆใ—ใพใ™" #: lxc/image.go:411 msgid "Only https:// is supported for remote image import." msgstr "ใƒชใƒขใƒผใƒˆใ‚คใƒกใƒผใ‚ธใฎใ‚คใƒณใƒใƒผใƒˆใฏ https:// ใฎใฟใ‚’ใ‚ตใƒใƒผใƒˆใ—ใพใ™ใ€‚" #: lxc/help.go:63 lxc/main.go:122 msgid "Options:" msgstr "ใ‚ชใƒ—ใ‚ทใƒงใƒณ:" #: lxc/image.go:506 #, c-format msgid "Output is in %s" msgstr "%s ใซๅ‡บๅŠ›ใ•ใ‚Œใพใ™" #: lxc/exec.go:55 msgid "Override the terminal mode (auto, interactive or non-interactive)" msgstr "ใ‚ฟใƒผใƒŸใƒŠใƒซใƒขใƒผใƒ‰ใ‚’ไธŠๆ›ธใใ—ใพใ™ (auto, interactive, non-interactive)" #: lxc/list.go:464 msgid "PERSISTENT" msgstr "" #: lxc/list.go:381 msgid "PID" msgstr "" #: lxc/list.go:382 msgid "PROFILES" msgstr "" #: lxc/remote.go:365 msgid "PROTOCOL" msgstr "" #: lxc/image.go:606 lxc/remote.go:366 msgid "PUBLIC" msgstr "" #: lxc/info.go:174 msgid "Packets received" msgstr "ๅ—ไฟกใƒ‘ใ‚ฑใƒƒใƒˆ" #: lxc/info.go:175 msgid "Packets sent" msgstr "้€ไฟกใƒ‘ใ‚ฑใƒƒใƒˆ" #: lxc/help.go:69 msgid "Path to an alternate client configuration directory." msgstr "ๅˆฅใฎใ‚ฏใƒฉใ‚คใ‚ขใƒณใƒˆ็”จ่จญๅฎšใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒช" #: lxc/help.go:70 msgid "Path to an alternate server directory." msgstr "ๅˆฅใฎใ‚ตใƒผใƒ็”จ่จญๅฎšใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒช" #: lxc/main.go:39 msgid "Permisson denied, are you in the lxd group?" msgstr "ใ‚ขใ‚ฏใ‚ปใ‚นใŒๆ‹’ๅฆใ•ใ‚Œใพใ—ใŸใ€‚lxd ใ‚ฐใƒซใƒผใƒ—ใซๆ‰€ๅฑžใ—ใฆใ„ใพใ™ใ‹?" 
#: lxc/info.go:103 #, c-format msgid "Pid: %d" msgstr "" #: lxc/help.go:25 msgid "" "Presents details on how to use LXD.\n" "\n" "lxd help [--all]" msgstr "" "LXDใฎไฝฟใ„ๆ–นใฎ่ฉณ็ดฐใ‚’่กจ็คบใ—ใพใ™ใ€‚\n" "\n" "lxd help [--all]" #: lxc/profile.go:216 msgid "Press enter to open the editor again" msgstr "ๅ†ๅบฆใ‚จใƒ‡ใ‚ฃใ‚ฟใ‚’้–‹ใใŸใ‚ใซใฏ Enter ใ‚ญใƒผใ‚’ๆŠผใ—ใพใ™" #: lxc/config.go:501 lxc/config.go:566 lxc/image.go:688 msgid "Press enter to start the editor again" msgstr "ๅ†ๅบฆใ‚จใƒ‡ใ‚ฃใ‚ฟใ‚’่ตทๅ‹•ใ™ใ‚‹ใซใฏ Enter ใ‚ญใƒผใ‚’ๆŠผใ—ใพใ™" #: lxc/help.go:65 msgid "Print debug information." msgstr "ใƒ‡ใƒใƒƒใ‚ฐๆƒ…ๅ ฑใ‚’่กจ็คบใ—ใพใ™ใ€‚" #: lxc/help.go:64 msgid "Print less common commands." msgstr "ๅ…จใฆใฎใ‚ณใƒžใƒณใƒ‰ใ‚’่กจ็คบใ—ใพใ™ (ไธปใชใ‚ณใƒžใƒณใƒ‰ใ ใ‘ใงใฏใชใ)ใ€‚" #: lxc/help.go:66 msgid "Print verbose information." msgstr "่ฉณ็ดฐๆƒ…ๅ ฑใ‚’่กจ็คบใ—ใพใ™ใ€‚" #: lxc/version.go:18 msgid "" "Prints the version number of this client tool.\n" "\n" "lxc version" msgstr "" "ใŠไฝฟใ„ใฎใ‚ฏใƒฉใ‚คใ‚ขใƒณใƒˆใฎใƒใƒผใ‚ธใƒงใƒณ็•ชๅทใ‚’่กจ็คบใ—ใพใ™ใ€‚\n" "\n" "lxc version" #: lxc/info.go:127 #, c-format msgid "Processes: %d" msgstr "ใƒ—ใƒญใ‚ปใ‚นๆ•ฐ: %d" #: lxc/profile.go:272 #, c-format msgid "Profile %s added to %s" msgstr "ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซ %s ใŒ %s ใซ่ฟฝๅŠ ใ•ใ‚Œใพใ—ใŸ" #: lxc/profile.go:167 #, c-format msgid "Profile %s created" msgstr "ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซ %s ใ‚’ไฝœๆˆใ—ใพใ—ใŸ" #: lxc/profile.go:237 #, c-format msgid "Profile %s deleted" msgstr "ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซ %s ใ‚’ๅ‰Š้™คใ—ใพใ—ใŸ" #: lxc/profile.go:303 #, c-format msgid "Profile %s removed from %s" msgstr "ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซ %s ใŒ %s ใ‹ใ‚‰ๅ‰Š้™คใ•ใ‚Œใพใ—ใŸ" #: lxc/init.go:136 lxc/init.go:137 lxc/launch.go:42 lxc/launch.go:43 msgid "Profile to apply to the new container" msgstr "ๆ–ฐใ—ใ„ใ‚ณใƒณใƒ†ใƒŠใซ้ฉ็”จใ™ใ‚‹ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซ" #: lxc/profile.go:253 #, c-format msgid "Profiles %s applied to %s" msgstr "ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซ %s ใŒ %s ใซ้ฉ็”จใ•ใ‚Œใพใ—ใŸ" #:
lxc/info.go:101 #, c-format msgid "Profiles: %s" msgstr "ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซ: %s" #: lxc/image.go:343 msgid "Properties:" msgstr "ใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃ:" #: lxc/remote.go:56 msgid "Public image server" msgstr "Public ใชใ‚คใƒกใƒผใ‚ธใ‚ตใƒผใƒใจใ—ใฆ่จญๅฎšใ—ใพใ™" #: lxc/image.go:331 #, c-format msgid "Public: %s" msgstr "" #: lxc/publish.go:25 msgid "" "Publish containers as images.\n" "\n" "lxc publish [remote:]container [remote:] [--alias=ALIAS]... [prop-key=prop-" "value]..." msgstr "" "ใ‚คใƒกใƒผใ‚ธใจใ—ใฆใ‚ณใƒณใƒ†ใƒŠใ‚’ publish ใ—ใพใ™ใ€‚\n" "\n" "lxc publish [remote:]container [remote:] [--alias=ALIAS]... [prop-key=prop-" "value]..." #: lxc/remote.go:54 msgid "Remote admin password" msgstr "ใƒชใƒขใƒผใƒˆใฎ็ฎก็†่€…ใƒ‘ใ‚นใƒฏใƒผใƒ‰" #: lxc/delete.go:42 #, c-format msgid "Remove %s (yes/no): " msgstr "%s ใ‚’ๆถˆๅŽปใ—ใพใ™ใ‹ (yes/no): " #: lxc/delete.go:36 lxc/delete.go:37 msgid "Require user confirmation." msgstr "ใƒฆใƒผใ‚ถใฎ็ขบ่ชใ‚’่ฆๆฑ‚ใ™ใ‚‹ใ€‚" #: lxc/info.go:124 msgid "Resources:" msgstr "ใƒชใ‚ฝใƒผใ‚น:" #: lxc/init.go:247 #, c-format msgid "Retrieving image: %s" msgstr "ใ‚คใƒกใƒผใ‚ธใฎๅ–ๅพ—ไธญ: %s" #: lxc/image.go:609 msgid "SIZE" msgstr "" #: lxc/list.go:383 msgid "SNAPSHOTS" msgstr "" #: lxc/list.go:384 msgid "STATE" msgstr "" #: lxc/remote.go:367 msgid "STATIC" msgstr "" #: lxc/remote.go:214 msgid "Server certificate NACKed by user" msgstr "ใƒฆใƒผใ‚ถใซใ‚ˆใ‚Šใ‚ตใƒผใƒ่จผๆ˜Žๆ›ธใŒๆ‹’ๅฆใ•ใ‚Œใพใ—ใŸ" #: lxc/remote.go:276 msgid "Server doesn't trust us after adding our cert" msgstr "ใ‚ตใƒผใƒใŒๆˆ‘ใ€…ใฎ่จผๆ˜Žๆ›ธใ‚’่ฟฝๅŠ ใ—ใŸๅพŒๆˆ‘ใ€…ใ‚’ไฟก้ ผใ—ใฆใ„ใพใ›ใ‚“" #: lxc/remote.go:55 msgid "Server protocol (lxd or simplestreams)" msgstr "ใ‚ตใƒผใƒใฎใƒ—ใƒญใƒˆใ‚ณใƒซ (lxd or simplestreams)" #: lxc/restore.go:21 msgid "" "Set the current state of a resource back to a snapshot.\n" "\n" "lxc restore [remote:] [--stateful]\n" "\n" "Restores a container from a snapshot (optionally with running state, see\n" "snapshot help for details).\n" "\n" "For 
example:\n" "lxc snapshot u1 snap0 # create the snapshot\n" "lxc restore u1 snap0 # restore the snapshot" msgstr "" "ใƒชใ‚ฝใƒผใ‚นใฎ็พๅœจใฎ็Šถๆ…‹ใ‚’ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆๆ™‚็‚นใฎ็Šถๆ…‹ใซ่จญๅฎšใ—ใพใ™ใ€‚\n" "\n" "lxc restore [remote:] [--stateful]\n" "\n" "ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‹ใ‚‰ใ‚ณใƒณใƒ†ใƒŠใ‚’ใƒชใ‚นใƒˆใ‚ขใ—ใพใ™ (ใ‚ชใƒ—ใ‚ทใƒงใƒณใงๅฎŸ่กŒ็Šถๆ…‹ใ‚‚ใƒชใ‚นใƒˆ\n" "ใ‚ขใ—ใพใ™ใ€‚่ฉณใ—ใใฏใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใฎใƒ˜ใƒซใƒ—ใ‚’ใ”่ฆงใใ ใ•ใ„)ใ€‚\n" "\n" "ไพ‹:\n" "lxc snapshot u1 snap0 # ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใฎไฝœๆˆ\n" "lxc restore u1 snap0 # ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‹ใ‚‰ใƒชใ‚นใƒˆใ‚ข" #: lxc/file.go:44 msgid "Set the file's gid on push" msgstr "ใƒ—ใƒƒใ‚ทใƒฅๆ™‚ใซใƒ•ใ‚กใ‚คใƒซใฎgidใ‚’่จญๅฎšใ—ใพใ™" #: lxc/file.go:45 msgid "Set the file's perms on push" msgstr "ใƒ—ใƒƒใ‚ทใƒฅๆ™‚ใซใƒ•ใ‚กใ‚คใƒซใฎใƒ‘ใƒผใƒŸใ‚ทใƒงใƒณใ‚’่จญๅฎšใ—ใพใ™" #: lxc/file.go:43 msgid "Set the file's uid on push" msgstr "ใƒ—ใƒƒใ‚ทใƒฅๆ™‚ใซใƒ•ใ‚กใ‚คใƒซใฎuidใ‚’่จญๅฎšใ—ใพใ™" #: lxc/help.go:32 msgid "Show all commands (not just interesting ones)" msgstr "ๅ…จใฆใ‚ณใƒžใƒณใƒ‰ใ‚’่กจ็คบใ—ใพใ™ (ไธปใชใ‚ณใƒžใƒณใƒ‰ใ ใ‘ใงใฏใชใ)" #: lxc/info.go:36 msgid "Show the container's last 100 log lines?" msgstr "ใ‚ณใƒณใƒ†ใƒŠใƒญใ‚ฐใฎๆœ€ๅพŒใฎ 100 ่กŒใ‚’่กจ็คบใ—ใพใ™ใ‹?" #: lxc/image.go:329 #, c-format msgid "Size: %.2fMB" msgstr "ใ‚ตใ‚คใ‚บ: %.2fMB" #: lxc/info.go:194 msgid "Snapshots:" msgstr "ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆ:" #: lxc/image.go:353 msgid "Source:" msgstr "ๅ–ๅพ—ๅ…ƒ:" #: lxc/launch.go:124 #, c-format msgid "Starting %s" msgstr "%s ใ‚’่ตทๅ‹•ไธญ" #: lxc/info.go:95 #, c-format msgid "Status: %s" msgstr "็Šถๆ…‹: %s" #: lxc/publish.go:34 lxc/publish.go:35 msgid "Stop the container if currently running" msgstr "ๅฎŸ่กŒไธญใฎๅ ดๅˆใ€ใ‚ณใƒณใƒ†ใƒŠใ‚’ๅœๆญขใ—ใพใ™" #: lxc/delete.go:106 lxc/publish.go:111 msgid "Stopping container failed!" msgstr "ใ‚ณใƒณใƒ†ใƒŠใฎๅœๆญขใซๅคฑๆ•—ใ—ใพใ—ใŸ๏ผ" #: lxc/action.go:39 msgid "Store the container state (only for stop)." 
msgstr "ใ‚ณใƒณใƒ†ใƒŠใฎ็Šถๆ…‹ใ‚’ไฟๅญ˜ใ—ใพใ™ (stopใฎใฟ)ใ€‚" #: lxc/info.go:155 msgid "Swap (current)" msgstr "Swap (็พๅœจๅ€ค)" #: lxc/info.go:159 msgid "Swap (peak)" msgstr "Swap (ใƒ”ใƒผใ‚ฏ)" #: lxc/list.go:385 msgid "TYPE" msgstr "" #: lxc/delete.go:92 msgid "The container is currently running, stop it first or pass --force." msgstr "ใ‚ณใƒณใƒ†ใƒŠใฏๅฎŸ่กŒไธญใงใ™ใ€‚ๅ…ˆใซๅœๆญขใ•ใ›ใ‚‹ใ‹ใ€--force ใ‚’ๆŒ‡ๅฎšใ—ใฆใใ ใ•ใ„ใ€‚" #: lxc/publish.go:89 msgid "" "The container is currently running. Use --force to have it stopped and " "restarted." msgstr "" "ใ‚ณใƒณใƒ†ใƒŠใฏ็พๅœจๅฎŸ่กŒไธญใงใ™ใ€‚ๅœๆญขใ—ใฆใ€ๅ†่ตทๅ‹•ใ™ใ‚‹ใŸใ‚ใซ --force ใ‚’ไฝฟ็”จใ—ใฆใใ \n" "ใ•ใ„ใ€‚" #: lxc/config.go:645 lxc/config.go:657 lxc/config.go:690 lxc/config.go:708 #: lxc/config.go:746 lxc/config.go:764 msgid "The device doesn't exist" msgstr "ใƒ‡ใƒใ‚คใ‚นใŒๅญ˜ๅœจใ—ใพใ›ใ‚“" #: lxc/init.go:277 #, c-format msgid "The local image '%s' couldn't be found, trying '%s:' instead." msgstr "ใƒญใƒผใ‚ซใƒซใ‚คใƒกใƒผใ‚ธ '%s' ใŒ่ฆ‹ใคใ‹ใ‚Šใพใ›ใ‚“ใ€‚ไปฃใ‚ใ‚Šใซ '%s:' ใ‚’่ฉฆใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚" #: lxc/publish.go:62 msgid "There is no \"image name\". Did you want an alias?" msgstr "" "publish ๅ…ˆใซใฏใ‚คใƒกใƒผใ‚ธๅใฏๆŒ‡ๅฎšใงใใพใ›ใ‚“ใ€‚\"--alias\" ใ‚ชใƒ—ใ‚ทใƒงใƒณใ‚’ไฝฟใฃใฆใ" "ใ \n" "ใ•ใ„ใ€‚" #: lxc/action.go:36 msgid "Time to wait for the container before killing it." 
msgstr "ใ‚ณใƒณใƒ†ใƒŠใ‚’ๅผทๅˆถๅœๆญขใ™ใ‚‹ใพใงใฎๆ™‚้–“" #: lxc/image.go:332 msgid "Timestamps:" msgstr "ใ‚ฟใ‚คใƒ ใ‚นใ‚ฟใƒณใƒ—:" #: lxc/main.go:147 msgid "To start your first container, try: lxc launch ubuntu:16.04" msgstr "" "ๅˆใ‚ใฆใ‚ณใƒณใƒ†ใƒŠใ‚’่ตทๅ‹•ใ™ใ‚‹ใซใฏใ€\"lxc launch ubuntu:16.04\" ใจๅฎŸ่กŒใ—ใฆใฟใฆใใ \n" "ใ•ใ„ใ€‚" #: lxc/image.go:402 #, c-format msgid "Transferring image: %d%%" msgstr "ใ‚คใƒกใƒผใ‚ธใ‚’่ปข้€ไธญ: %d%%" #: lxc/action.go:93 lxc/launch.go:132 #, c-format msgid "Try `lxc info --show-log %s` for more info" msgstr "ๆ›ดใซๆƒ…ๅ ฑใ‚’ๅพ—ใ‚‹ใŸใ‚ใซ `lxc info --show-log %s` ใ‚’ๅฎŸ่กŒใ—ใฆใฟใฆใใ ใ•ใ„" #: lxc/info.go:97 msgid "Type: ephemeral" msgstr "ใ‚ฟใ‚คใƒ—: ephemeral" #: lxc/info.go:99 msgid "Type: persistent" msgstr "ใ‚ฟใ‚คใƒ—: persistent" #: lxc/image.go:610 msgid "UPLOAD DATE" msgstr "" #: lxc/remote.go:364 msgid "URL" msgstr "" #: lxc/remote.go:82 msgid "Unable to read remote TLS certificate" msgstr "ใƒชใƒขใƒผใƒˆใฎ TLS ่จผๆ˜Žๆ›ธใ‚’่ชญใ‚ใพใ›ใ‚“" #: lxc/image.go:337 #, c-format msgid "Uploaded: %s" msgstr "ใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ๆ—ฅๆ™‚: %s" #: lxc/main.go:122 #, c-format msgid "Usage: %s" msgstr "ไฝฟใ„ๆ–น: %s" #: lxc/help.go:48 msgid "Usage: lxc [subcommand] [options]" msgstr "ไฝฟใ„ๆ–น: lxc [ใ‚ตใƒ–ใ‚ณใƒžใƒณใƒ‰] [ใ‚ชใƒ—ใ‚ทใƒงใƒณ]" #: lxc/delete.go:46 msgid "User aborted delete operation."
msgstr "ใƒฆใƒผใ‚ถใŒๅ‰Š้™คๆ“ไฝœใ‚’ไธญๆ–ญใ—ใพใ—ใŸใ€‚" #: lxc/restore.go:35 msgid "" "Whether or not to restore the container's running state from snapshot (if " "available)" msgstr "" "ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‹ใ‚‰ใ‚ณใƒณใƒ†ใƒŠใฎ็จผๅ‹•็Šถๆ…‹ใ‚’ใƒชใ‚นใƒˆใ‚ขใ™ใ‚‹ใ‹ใฉใ†ใ‹ (ๅ–ๅพ—ๅฏ่ƒฝใชๅ ดๅˆ)" #: lxc/snapshot.go:38 msgid "Whether or not to snapshot the container's running state" msgstr "ใ‚ณใƒณใƒ†ใƒŠใฎ็จผๅ‹•็Šถๆ…‹ใฎใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ๅ–ๅพ—ใ™ใ‚‹ใ‹ใฉใ†ใ‹" #: lxc/config.go:33 msgid "Whether to show the expanded configuration" msgstr "ๆ‹กๅผตใ—ใŸ่จญๅฎšใ‚’่กจ็คบใ™ใ‚‹ใ‹ใฉใ†ใ‹" #: lxc/remote.go:339 lxc/remote.go:344 msgid "YES" msgstr "" #: lxc/main.go:66 msgid "`lxc config profile` is deprecated, please use `lxc profile`" msgstr "`lxc config profile` ใฏๅปƒๆญขใ•ใ‚Œใพใ—ใŸใ€‚`lxc profile` ใ‚’ไฝฟใฃใฆใใ ใ•ใ„" #: lxc/launch.go:111 msgid "bad number of things scanned from image, container or snapshot" msgstr "" "ใ‚คใƒกใƒผใ‚ธใ€ใ‚ณใƒณใƒ†ใƒŠใ€ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใฎใ„ใšใ‚Œใ‹ใ‹ใ‚‰ใ‚นใ‚ญใƒฃใƒณใ•ใ‚ŒใŸๆ•ฐใŒไธๆญฃ" #: lxc/action.go:89 msgid "bad result type from action" msgstr "ใ‚ขใ‚ฏใ‚ทใƒงใƒณใ‹ใ‚‰ใฎ็ตๆžœใ‚ฟใ‚คใƒ—ใŒไธๆญฃใงใ™" #: lxc/copy.go:78 msgid "can't copy to the same container name" msgstr "ๅŒใ˜ใ‚ณใƒณใƒ†ใƒŠๅใธใฏใ‚ณใƒ”ใƒผใงใใพใ›ใ‚“" #: lxc/remote.go:327 msgid "can't remove the default remote" msgstr "ใƒ‡ใƒ•ใ‚ฉใƒซใƒˆใฎใƒชใƒขใƒผใƒˆใฏๅ‰Š้™คใงใใพใ›ใ‚“" #: lxc/remote.go:353 msgid "default" msgstr "" #: lxc/init.go:200 lxc/init.go:205 lxc/launch.go:95 lxc/launch.go:100 msgid "didn't get any affected image, container or snapshot from server" msgstr "" "ใ‚ตใƒผใƒใ‹ใ‚‰ๅค‰ๆ›ดใ•ใ‚ŒใŸใ‚คใƒกใƒผใ‚ธใ€ใ‚ณใƒณใƒ†ใƒŠใ€ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ๅ–ๅพ—ใงใใพใ›ใ‚“ใง\n" "ใ—ใŸ" #: lxc/image.go:323 msgid "disabled" msgstr "็„กๅŠน" #: lxc/image.go:325 msgid "enabled" msgstr "ๆœ‰ๅŠน" #: lxc/main.go:25 lxc/main.go:159 #, c-format msgid "error: %v" msgstr "ใ‚จใƒฉใƒผ: %v" #: lxc/help.go:40 lxc/main.go:117 #, c-format 
msgid "error: unknown command: %s" msgstr "ใ‚จใƒฉใƒผ: ๆœช็Ÿฅใฎใ‚ณใƒžใƒณใƒ‰: %s" #: lxc/launch.go:115 msgid "got bad version" msgstr "ไธๆญฃใชใƒใƒผใ‚ธใƒงใƒณใ‚’ๅพ—ใพใ—ใŸ" #: lxc/image.go:318 lxc/image.go:586 msgid "no" msgstr "" #: lxc/copy.go:101 msgid "not all the profiles from the source exist on the target" msgstr "ใ‚ณใƒ”ใƒผๅ…ƒใฎๅ…จใฆใฎใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซใŒใ‚ฟใƒผใ‚ฒใƒƒใƒˆใซๅญ˜ๅœจใ—ใพใ›ใ‚“" #: lxc/remote.go:207 msgid "ok (y/n)?" msgstr "ok (y/n)?" #: lxc/main.go:266 lxc/main.go:270 #, c-format msgid "processing aliases failed %s\n" msgstr "ใ‚จใ‚คใƒชใ‚ขใ‚นใฎๅ‡ฆ็†ใŒๅคฑๆ•—ใ—ใพใ—ใŸ %s\n" #: lxc/remote.go:389 #, c-format msgid "remote %s already exists" msgstr "ใƒชใƒขใƒผใƒˆ %s ใฏๆ—ขใซๅญ˜ๅœจใ—ใพใ™" #: lxc/remote.go:319 lxc/remote.go:381 lxc/remote.go:416 lxc/remote.go:432 #, c-format msgid "remote %s doesn't exist" msgstr "ใƒชใƒขใƒผใƒˆ %s ใฏๅญ˜ๅœจใ—ใพใ›ใ‚“" #: lxc/remote.go:302 #, c-format msgid "remote %s exists as <%s>" msgstr "ใƒชใƒขใƒผใƒˆ %s ใฏ <%s> ใจใ—ใฆๅญ˜ๅœจใ—ใพใ™" #: lxc/remote.go:323 lxc/remote.go:385 lxc/remote.go:420 #, c-format msgid "remote %s is static and cannot be modified" msgstr "ใƒชใƒขใƒผใƒˆ %s ใฏ static ใงใ™ใฎใงๅค‰ๆ›ดใงใใพใ›ใ‚“" #: lxc/info.go:205 msgid "stateful" msgstr "" #: lxc/info.go:207 msgid "stateless" msgstr "" #: lxc/info.go:201 #, c-format msgid "taken at %s" msgstr "%s ใซๅ–ๅพ—ใ—ใพใ—ใŸ" #: lxc/exec.go:166 msgid "unreachable return reached" msgstr "ๅˆฐ้”ใ—ใชใ„ใฏใšใฎreturnใซๅˆฐ้”ใ—ใพใ—ใŸ" #: lxc/main.go:199 msgid "wrong number of subcommand arguments" msgstr "ใ‚ตใƒ–ใ‚ณใƒžใƒณใƒ‰ใฎๅผ•ๆ•ฐใฎๆ•ฐใŒๆญฃใ—ใใ‚ใ‚Šใพใ›ใ‚“" #: lxc/delete.go:45 lxc/image.go:320 lxc/image.go:590 msgid "yes" msgstr "" #: lxc/copy.go:38 msgid "you must specify a source container name" msgstr "ใ‚ณใƒ”ใƒผๅ…ƒใฎใ‚ณใƒณใƒ†ใƒŠๅใ‚’ๆŒ‡ๅฎšใ—ใฆใใ ใ•ใ„" #, fuzzy #~ msgid "Bad image property: %s" #~ msgstr "(ไธๆญฃใชใ‚คใƒกใƒผใ‚ธใƒ—ใƒญใƒ‘ใƒ†ใ‚ฃๅฝขๅผ: %s\n" #~ msgid "Cannot change profile name" #~ msgstr 
"ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซๅใ‚’ๅค‰ๆ›ดใงใใพใ›ใ‚“" #, fuzzy #~ msgid "" #~ "Create a read-only snapshot of a container.\n" #~ "\n" #~ "lxc snapshot [remote:] [--stateful]" #~ msgstr "ใ‚ณใƒณใƒ†ใƒŠใฎ่ชญใฟๅ–ใ‚Šๅฐ‚็”จใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ไฝœๆˆใ—ใพใ™ใ€‚\n" #~ msgid "No certificate on this connection" #~ msgstr "ใ“ใฎๆŽฅ็ถšใซไฝฟ็”จใ™ใ‚‹่จผๆ˜Žๆ›ธใŒใ‚ใ‚Šใพใ›ใ‚“" #, fuzzy #~ msgid "" #~ "Set the current state of a resource back to its state at the time the " #~ "snapshot was created.\n" #~ "\n" #~ "lxc restore [remote:] [--stateful]" #~ msgstr "ใ‚ณใƒณใƒ†ใƒŠใฎ่ชญใฟๅ–ใ‚Šๅฐ‚็”จใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ไฝœๆˆใ—ใพใ™ใ€‚\n" #~ msgid "api version mismatch: mine: %q, daemon: %q" #~ msgstr "APIใฎใƒใƒผใ‚ธใƒงใƒณไธไธ€่‡ด: ใ‚ฏใƒฉใ‚คใ‚ขใƒณใƒˆ: %q, ใ‚ตใƒผใƒ: %q" #, fuzzy #~ msgid "bad profile url %s" #~ msgstr "ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซURLใŒไธๆญฃ %s" #, fuzzy #~ msgid "bad version in profile url" #~ msgstr "ใƒ—ใƒญใƒ•ใ‚กใ‚คใƒซURLๅ†…ใฎใƒใƒผใ‚ธใƒงใƒณใŒไธๆญฃ" #, fuzzy #~ msgid "device already exists" #~ msgstr "ใƒชใƒขใƒผใƒˆ %s ใฏๆ—ขใซๅญ˜ๅœจใ—ใพใ™" #, fuzzy #~ msgid "error." #~ msgstr "ใ‚จใƒฉใƒผ: %v\n" #~ msgid "got bad op status %s" #~ msgstr "ไธๆญฃใชๆ“ไฝœใ‚นใƒ†ใƒผใ‚ฟใ‚นใ‚’ๅพ—ใพใ—ใŸ %s" #, fuzzy #~ msgid "got bad response type, expected %s got %s" #~ msgstr "\"exec\"ใ‹ใ‚‰ไธๆญฃใชๅฟœ็ญ”ใ‚ฟใ‚คใƒ—ใ‚’ๅพ—ใพใ—ใŸ" #~ msgid "invalid wait url %s" #~ msgstr "ๅพ…ใคURLใŒไธๆญฃ %s" #~ msgid "no response!" 
#~ msgstr "ๅฟœ็ญ”ใŒใ‚ใ‚Šใพใ›ใ‚“๏ผ" #~ msgid "unknown remote name: %q" #~ msgstr "ๆœช็Ÿฅใฎใƒชใƒขใƒผใƒˆๅ: %q" #, fuzzy #~ msgid "unknown transport type: %s" #~ msgstr "ๆœช็Ÿฅใฎใƒชใƒขใƒผใƒˆๅ: %q" #~ msgid "cannot resolve unix socket address: %v" #~ msgstr "UNIXใ‚ฝใ‚ฑใƒƒใƒˆใฎใ‚ขใƒ‰ใƒฌใ‚นใ‚’่งฃๆฑบใงใใพใ›ใ‚“: %v" #, fuzzy #~ msgid "unknown group %s" #~ msgstr "ๆœช็Ÿฅใฎใƒชใƒขใƒผใƒˆๅ: %q" #, fuzzy #~ msgid "Information about remotes not yet supported" #~ msgstr "ใƒชใƒขใƒผใƒˆใฎๆƒ…ๅ ฑ่กจ็คบใฏใพใ ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚\n" #~ msgid "Unknown image command %s" #~ msgstr "ๆœช็Ÿฅใฎใ‚คใƒกใƒผใ‚ธใ‚ณใƒžใƒณใƒ‰ %s" #~ msgid "Unknown remote subcommand %s" #~ msgstr "ๆœช็Ÿฅใฎใƒชใƒขใƒผใƒˆใ‚ตใƒ–ใ‚ณใƒžใƒณใƒ‰ %s" #~ msgid "Unkonwn config trust command %s" #~ msgstr "ๆœช็Ÿฅใฎ่จญๅฎšไฟก้ ผใ‚ณใƒžใƒณใƒ‰ %s" #, fuzzy #~ msgid "YAML parse error %v" #~ msgstr "ใ‚จใƒฉใƒผ: %v\n" #~ msgid "invalid argument %s" #~ msgstr "ไธๆญฃใชๅผ•ๆ•ฐ %s" #, fuzzy #~ msgid "unknown profile cmd %s" #~ msgstr "ๆœช็Ÿฅใฎ่จญๅฎšใ‚ณใƒžใƒณใƒ‰ %s" #, fuzzy #~ msgid "Publish to remote server is not supported yet" #~ msgstr "ใƒชใƒขใƒผใƒˆใฎๆƒ…ๅ ฑ่กจ็คบใฏใพใ ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚\n" #, fuzzy #~ msgid "Use an alternative config path." 
#~ msgstr "ๅˆฅใฎ่จญๅฎšใƒ‡ใ‚ฃใƒฌใ‚ฏใƒˆใƒช" #, fuzzy #~ msgid "" #~ "error: %v\n" #~ "%s\n" #~ msgstr "" #~ "ใ‚จใƒฉใƒผ: %v\n" #~ "%s" #, fuzzy #~ msgid "Show for remotes is not yet supported\n" #~ msgstr "ใƒชใƒขใƒผใƒˆใฎๆƒ…ๅ ฑ่กจ็คบใฏใพใ ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚\n" #~ msgid "(Bad alias entry: %s\n" #~ msgstr "(ไธๆญฃใชใ‚จใ‚คใƒชใ‚ขใ‚น: %s\n" #~ msgid "bad container url %s" #~ msgstr "ใ‚ณใƒณใƒ†ใƒŠใฎไธๆญฃใชURL %s" #~ msgid "bad version in container url" #~ msgstr "ใ‚ณใƒณใƒ†ใƒŠURLๅ†…ใฎใƒใƒผใ‚ธใƒงใƒณใŒไธๆญฃ" #, fuzzy #~ msgid "Ephemeral containers not yet supported\n" #~ msgstr "ใƒชใƒขใƒผใƒˆใฎๆƒ…ๅ ฑ่กจ็คบใฏใพใ ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใพใ›ใ‚“ใ€‚\n" #~ msgid "(Bad image entry: %s\n" #~ msgstr "(ไธๆญฃใชใ‚คใƒกใƒผใ‚ธ: %s\n" #~ msgid "Certificate already stored.\n" #~ msgstr "่จผๆ˜Žๆ›ธใฏๆ—ขใซๆ ผ็ดใ•ใ‚Œใฆใ„ใพใ™ใ€‚\n" #, fuzzy #~ msgid "Non-async response from delete!" #~ msgstr "ๅ‰Š้™คใ‚ณใƒžใƒณใƒ‰ใ‹ใ‚‰ใฎๅฟœ็ญ”ใŒไธๆญฃ(้žๅŒๆœŸใงใชใ„)ใงใ™๏ผ" #, fuzzy #~ msgid "Non-async response from init!" #~ msgstr "ๅˆๆœŸๅŒ–ใ‚ณใƒžใƒณใƒ‰ใ‹ใ‚‰ใฎๅฟœ็ญ”ใŒไธๆญฃ(้žๅŒๆœŸใงใชใ„)ใงใ™๏ผ" #~ msgid "Non-async response from snapshot!" #~ msgstr "ใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‹ใ‚‰ใฎๅฟœ็ญ”ใŒไธๆญฃ(้žๅŒๆœŸใงใชใ„)ใงใ™๏ผ" #~ msgid "Server certificate has changed" #~ msgstr "ใ‚ตใƒผใƒใฎ่จผๆ˜Žๆ›ธใŒๅค‰ๆ›ดใ•ใ‚Œใฆใ„ใพใ—ใŸ" #, fuzzy #~ msgid "Unexpected non-async response" #~ msgstr "ไธๆญฃใชใƒฌใ‚นใƒใƒณใ‚น (้žๅŒๆœŸใงใชใ„)" #~ msgid "bad response type from image list!" #~ msgstr "ใ‚คใƒกใƒผใ‚ธใƒชใ‚นใƒˆใ‹ใ‚‰ใฎใƒฌใ‚นใƒใƒณใ‚นใ‚ฟใ‚คใƒ—ใŒไธๆญฃ๏ผ" #~ msgid "bad response type from list!" #~ msgstr "ไธ€่ฆงใ‹ใ‚‰ใฎใƒฌใ‚นใƒใƒณใ‚นใ‚ฟใ‚คใƒ—ใŒไธๆญฃ๏ผ" #, fuzzy #~ msgid "got non-async response!" #~ msgstr "ไธๆญฃใช(้žๅŒๆœŸใงใชใ„)ๅฟœ็ญ”ใ‚’ๅพ—ใพใ—ใŸ๏ผ" #~ msgid "got non-sync response from containers get!" 
#~ msgstr "ใ‚ณใƒณใƒ†ใƒŠใ‹ใ‚‰ไธๆญฃใช(้žๅŒๆœŸใงใชใ„)ๅฟœ็ญ”ใ‚’ๅพ—ใพใ—ใŸ๏ผ" #~ msgid "Delete a container or container snapshot.\n" #~ msgstr "ใ‚ณใƒณใƒ†ใƒŠใพใŸใฏใ‚ณใƒณใƒ†ใƒŠใฎใ‚นใƒŠใƒƒใƒ—ใ‚ทใƒงใƒƒใƒˆใ‚’ๅ‰Š้™คใ—ใพใ™ใ€‚\n" #~ msgid "List information on containers.\n" #~ msgstr "ใ‚ณใƒณใƒ†ใƒŠใฎๆƒ…ๅ ฑใ‚’ไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚\n" #~ msgid "Lists the available resources.\n" #~ msgstr "ๅˆฉ็”จๅฏ่ƒฝใชใƒชใ‚ฝใƒผใ‚นใ‚’ไธ€่ฆง่กจ็คบใ—ใพใ™ใ€‚\n" #~ msgid "Manage files on a container.\n" #~ msgstr "ใ‚ณใƒณใƒ†ใƒŠไธŠใฎใƒ•ใ‚กใ‚คใƒซใ‚’็ฎก็†ใ—ใพใ™ใ€‚\n" #~ msgid "Manage remote lxc servers.\n" #~ msgstr "ใƒชใƒขใƒผใƒˆใฎlxcใ‚ตใƒผใƒใ‚’็ฎก็†ใ—ใพใ™ใ€‚\n" #~ msgid "Non-async response from create!" #~ msgstr "ไฝœๆˆใ‚ณใƒžใƒณใƒ‰ใ‹ใ‚‰ใฎใƒฌใ‚นใƒใƒณใ‚นใŒไธๆญฃ(้žๅŒๆœŸใงใชใ„)๏ผ" #~ msgid "Only 'password' can be set currently" #~ msgstr "็พๆ™‚็‚นใงใฏ 'password' ใฎใฟใŒ่จญๅฎšๅฏ่ƒฝใงใ™" #~ msgid "" #~ "lxc image import [target] [--created-at=ISO-8601] [--expires-" #~ "at=ISO-8601] [--fingerprint=HASH] [prop=value]\n" #~ msgstr "" #~ "lxc image import [destination] [--created-at=ISO-8601] [--" #~ "expires-at=ISO-8601] [--fingerprint=HASH] [proprit=valeur]\n" #~ msgid "lxc init ubuntu []\n" #~ msgstr "lxc init ubuntu []\n" #~ msgid "lxc launch ubuntu []\n" #~ msgstr "lxc launch ubuntu []\n" lxd-2.0.2/po/lxd.pot000066400000000000000000001003341272140510300142270ustar00rootroot00000000000000# SOME DESCRIPTIVE TITLE. # Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER # This file is distributed under the same license as the PACKAGE package. # FIRST AUTHOR , YEAR. 
# #, fuzzy msgid "" msgstr "Project-Id-Version: lxd\n" "Report-Msgid-Bugs-To: lxc-devel@lists.linuxcontainers.org\n" "POT-Creation-Date: 2016-05-11 18:51-0400\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "Language: \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=CHARSET\n" "Content-Transfer-Encoding: 8bit\n" #: lxc/info.go:140 msgid " Disk usage:" msgstr "" #: lxc/info.go:163 msgid " Memory usage:" msgstr "" #: lxc/info.go:180 msgid " Network usage:" msgstr "" #: lxc/config.go:37 msgid "### This is a yaml representation of the configuration.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### A sample configuration looks like:\n" "### name: container1\n" "### profiles:\n" "### - default\n" "### config:\n" "### volatile.eth0.hwaddr: 00:16:3e:e9:f8:7f\n" "### devices:\n" "### homedir:\n" "### path: /extra\n" "### source: /home/user\n" "### type: disk\n" "### ephemeral: false\n" "###\n" "### Note that the name is shown but cannot be changed" msgstr "" #: lxc/image.go:83 msgid "### This is a yaml representation of the image properties.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### Each property is represented by a single line:\n" "### An example would be:\n" "### description: My custom image" msgstr "" #: lxc/profile.go:27 msgid "### This is a yaml representation of the profile.\n" "### Any line starting with a '# will be ignored.\n" "###\n" "### A profile consists of a set of configuration items followed by a set of\n" "### devices.\n" "###\n" "### An example would look like:\n" "### name: onenic\n" "### config:\n" "### raw.lxc: lxc.aa_profile=unconfined\n" "### devices:\n" "### eth0:\n" "### nictype: bridged\n" "### parent: lxdbr0\n" "### type: nic\n" "###\n" "### Note that the name is shown but cannot be changed" msgstr "" #: lxc/image.go:583 #, c-format msgid "%s (%d more)" msgstr "" #: lxc/snapshot.go:61 msgid "'/' not allowed in snapshot name" 
msgstr "" #: lxc/profile.go:226 msgid "(none)" msgstr "" #: lxc/image.go:604 lxc/image.go:633 msgid "ALIAS" msgstr "" #: lxc/image.go:608 msgid "ARCH" msgstr "" #: lxc/list.go:378 msgid "ARCHITECTURE" msgstr "" #: lxc/remote.go:53 msgid "Accept certificate" msgstr "" #: lxc/remote.go:256 #, c-format msgid "Admin password for %s: " msgstr "" #: lxc/image.go:347 msgid "Aliases:" msgstr "" #: lxc/exec.go:54 msgid "An environment variable of the form HOME=/home/foo" msgstr "" #: lxc/image.go:330 lxc/info.go:90 #, c-format msgid "Architecture: %s" msgstr "" #: lxc/image.go:351 #, c-format msgid "Auto update: %s" msgstr "" #: lxc/help.go:49 msgid "Available commands:" msgstr "" #: lxc/info.go:172 msgid "Bytes received" msgstr "" #: lxc/info.go:173 msgid "Bytes sent" msgstr "" #: lxc/config.go:270 msgid "COMMON NAME" msgstr "" #: lxc/list.go:379 msgid "CREATED AT" msgstr "" #: lxc/config.go:114 #, c-format msgid "Can't read from stdin: %s" msgstr "" #: lxc/config.go:127 lxc/config.go:160 lxc/config.go:182 #, c-format msgid "Can't unset key '%s', it's not currently set." msgstr "" #: lxc/profile.go:343 msgid "Cannot provide container name to list" msgstr "" #: lxc/remote.go:206 #, c-format msgid "Certificate fingerprint: %x" msgstr "" #: lxc/action.go:28 #, c-format msgid "Changes state of one or more containers to %s.\n" "\n" "lxc %s [...]" msgstr "" #: lxc/remote.go:279 msgid "Client certificate stored at server: " msgstr "" #: lxc/list.go:99 lxc/list.go:100 msgid "Columns" msgstr "" #: lxc/init.go:134 lxc/init.go:135 lxc/launch.go:40 lxc/launch.go:41 msgid "Config key/value to apply to the new container" msgstr "" #: lxc/config.go:500 lxc/config.go:565 lxc/image.go:687 lxc/profile.go:190 #, c-format msgid "Config parsing error: %s" msgstr "" #: lxc/main.go:37 msgid "Connection refused; is LXD running?" 
msgstr "" #: lxc/publish.go:59 msgid "Container name is mandatory" msgstr "" #: lxc/init.go:210 #, c-format msgid "Container name is: %s" msgstr "" #: lxc/publish.go:141 lxc/publish.go:156 #, c-format msgid "Container published with fingerprint: %s" msgstr "" #: lxc/image.go:155 msgid "Copy aliases from source" msgstr "" #: lxc/copy.go:22 msgid "Copy containers within or in between lxd instances.\n" "\n" "lxc copy [remote:] [remote:] [--ephemeral|e]" msgstr "" #: lxc/image.go:268 #, c-format msgid "Copying the image: %s" msgstr "" #: lxc/remote.go:221 msgid "Could not create server cert dir" msgstr "" #: lxc/snapshot.go:21 msgid "Create a read-only snapshot of a container.\n" "\n" "lxc snapshot [remote:] [--stateful]\n" "\n" "Creates a snapshot of the container (optionally with the container's memory\n" "state). When --stateful is used, LXD attempts to checkpoint the container's\n" "running state, including process memory state, TCP connections, etc. so that it\n" "can be restored (via lxc restore) at a later time (although some things, e.g.\n" "TCP connections after the TCP timeout window has expired, may not be restored\n" "successfully).\n" "\n" "Example:\n" "lxc snapshot u1 snap0" msgstr "" #: lxc/image.go:335 lxc/info.go:92 #, c-format msgid "Created: %s" msgstr "" #: lxc/init.go:177 lxc/launch.go:118 #, c-format msgid "Creating %s" msgstr "" #: lxc/init.go:175 msgid "Creating the container" msgstr "" #: lxc/image.go:607 lxc/image.go:635 msgid "DESCRIPTION" msgstr "" #: lxc/delete.go:25 msgid "Delete containers or container snapshots.\n" "\n" "lxc delete [remote:][/] [remote:][[/]...]\n" "\n" "Destroy containers or snapshots with any attached data (configuration, snapshots, ...)." 
msgstr "" #: lxc/config.go:617 #, c-format msgid "Device %s added to %s" msgstr "" #: lxc/config.go:804 #, c-format msgid "Device %s removed from %s" msgstr "" #: lxc/list.go:462 msgid "EPHEMERAL" msgstr "" #: lxc/config.go:272 msgid "EXPIRY DATE" msgstr "" #: lxc/main.go:55 msgid "Enables debug mode." msgstr "" #: lxc/main.go:54 msgid "Enables verbose mode." msgstr "" #: lxc/help.go:68 msgid "Environment:" msgstr "" #: lxc/copy.go:29 lxc/copy.go:30 lxc/init.go:138 lxc/init.go:139 lxc/launch.go:44 lxc/launch.go:45 msgid "Ephemeral container" msgstr "" #: lxc/monitor.go:56 msgid "Event type to listen for" msgstr "" #: lxc/exec.go:45 msgid "Execute the specified command in a container.\n" "\n" "lxc exec [remote:]container [--mode=auto|interactive|non-interactive] [--env EDITOR=/usr/bin/vim]... \n" "\n" "Mode defaults to non-interactive, interactive mode is selected if both stdin AND stdout are terminals (stderr is ignored)." msgstr "" #: lxc/image.go:339 #, c-format msgid "Expires: %s" msgstr "" #: lxc/image.go:341 msgid "Expires: never" msgstr "" #: lxc/config.go:269 lxc/image.go:605 lxc/image.go:634 msgid "FINGERPRINT" msgstr "" #: lxc/list.go:102 msgid "Fast mode (same as --columns=nsacPt" msgstr "" #: lxc/image.go:328 #, c-format msgid "Fingerprint: %s" msgstr "" #: lxc/finger.go:17 msgid "Fingers the LXD instance to check if it is up and working.\n" "\n" "lxc finger " msgstr "" #: lxc/action.go:37 msgid "Force the container to shutdown." msgstr "" #: lxc/delete.go:34 lxc/delete.go:35 msgid "Force the removal of stopped containers." msgstr "" #: lxc/main.go:56 msgid "Force using the local unix socket." msgstr "" #: lxc/list.go:101 msgid "Format" msgstr "" #: lxc/main.go:138 msgid "Generating a client certificate. This may take a minute..." 
msgstr "" #: lxc/list.go:376 msgid "IPV4" msgstr "" #: lxc/list.go:377 msgid "IPV6" msgstr "" #: lxc/config.go:271 msgid "ISSUE DATE" msgstr "" #: lxc/main.go:146 msgid "If this is your first time using LXD, you should also run: sudo lxd init" msgstr "" #: lxc/main.go:57 msgid "Ignore aliases when determining what command to run." msgstr "" #: lxc/action.go:40 msgid "Ignore the container state (only for start)." msgstr "" #: lxc/image.go:273 msgid "Image copied successfully!" msgstr "" #: lxc/image.go:419 #, c-format msgid "Image imported with fingerprint: %s" msgstr "" #: lxc/init.go:73 msgid "Initialize a container from a particular image.\n" "\n" "lxc init [remote:] [remote:][] [--ephemeral|-e] [--profile|-p ...] [--config|-c ...]\n" "\n" "Initializes a container using the specified image and name.\n" "\n" "Not specifying -p will result in the default profile.\n" "Specifying \"-p\" with no argument will result in no profile.\n" "\n" "Example:\n" "lxc init ubuntu u1" msgstr "" #: lxc/remote.go:122 #, c-format msgid "Invalid URL scheme \"%s\" in \"%s\"" msgstr "" #: lxc/init.go:30 lxc/init.go:35 msgid "Invalid configuration key" msgstr "" #: lxc/file.go:190 #, c-format msgid "Invalid source %s" msgstr "" #: lxc/file.go:57 #, c-format msgid "Invalid target %s" msgstr "" #: lxc/info.go:121 msgid "Ips:" msgstr "" #: lxc/image.go:156 msgid "Keep the image up to date after initial copy" msgstr "" #: lxc/main.go:35 msgid "LXD socket not found; is LXD running?" msgstr "" #: lxc/launch.go:22 msgid "Launch a container from a particular image.\n" "\n" "lxc launch [remote:] [remote:][] [--ephemeral|-e] [--profile|-p ...] 
[--config|-c ...]\n" "\n" "Launches a container using the specified image and name.\n" "\n" "Not specifying -p will result in the default profile.\n" "Specifying \"-p\" with no argument will result in no profile.\n" "\n" "Example:\n" "lxc launch ubuntu:16.04 u1" msgstr "" #: lxc/info.go:25 msgid "List information on LXD servers and containers.\n" "\n" "For a container:\n" " lxc info [:]container [--show-log]\n" "\n" "For a server:\n" " lxc info [:]" msgstr "" #: lxc/list.go:67 msgid "Lists the available resources.\n" "\n" "lxc list [resource] [filters] [--format table|json] [-c columns] [--fast]\n" "\n" "The filters are:\n" "* A single keyword like \"web\" which will list any container with a name starting by \"web\".\n" "* A regular expression on the container name. (e.g. .*web.*01$)\n" "* A key/value pair referring to a configuration item. For those, the namespace can be abreviated to the smallest unambiguous identifier:\n" " * \"user.blah=abc\" will list all containers with the \"blah\" user property set to \"abc\".\n" " * \"u.blah=abc\" will do the same\n" " * \"security.privileged=1\" will list all privileged containers\n" " * \"s.privileged=1\" will do the same\n" "* A regular expression matching a configuration item or its value. (e.g. 
volatile.eth0.hwaddr=00:16:3e:.*)\n" "\n" "Columns for table format are:\n" "* 4 - IPv4 address\n" "* 6 - IPv6 address\n" "* a - architecture\n" "* c - creation date\n" "* n - name\n" "* p - pid of container init process\n" "* P - profiles\n" "* s - state\n" "* S - number of snapshots\n" "* t - type (persistent or ephemeral)\n" "\n" "Default column layout: ns46tS\n" "Fast column layout: nsacPt" msgstr "" #: lxc/info.go:225 msgid "Log:" msgstr "" #: lxc/image.go:154 msgid "Make image public" msgstr "" #: lxc/publish.go:32 msgid "Make the image public" msgstr "" #: lxc/profile.go:48 msgid "Manage configuration profiles.\n" "\n" "lxc profile list [filters] List available profiles.\n" "lxc profile show Show details of a profile.\n" "lxc profile create Create a profile.\n" "lxc profile copy Copy the profile to the specified remote.\n" "lxc profile get Get profile configuration.\n" "lxc profile set Set profile configuration.\n" "lxc profile delete Delete a profile.\n" "lxc profile edit \n" " Edit profile, either by launching external editor or reading STDIN.\n" " Example: lxc profile edit # launch editor\n" " cat profile.yml | lxc profile edit # read from profile.yml\n" "lxc profile apply \n" " Apply a comma-separated list of profiles to a container, in order.\n" " All profiles passed in this call (and only those) will be applied\n" " to the specified container.\n" " Example: lxc profile apply foo default,bar # Apply default and bar\n" " lxc profile apply foo default # Only default is active\n" " lxc profile apply '' # no profiles are applied anymore\n" " lxc profile apply bar,default # Apply default second now\n" "\n" "Devices:\n" "lxc profile device list List devices in the given profile.\n" "lxc profile device show Show full device details in the given profile.\n" "lxc profile device remove Remove a device from a profile.\n" "lxc profile device get <[remote:]profile> Get a device property.\n" "lxc profile device set <[remote:]profile> Set a device property.\n" "lxc 
profile device unset <[remote:]profile> Unset a device property.\n" "lxc profile device add [key=value]...\n" " Add a profile device, such as a disk or a nic, to the containers\n" " using the specified profile." msgstr "" #: lxc/config.go:58 msgid "Manage configuration.\n" "\n" "lxc config device add <[remote:]container> [key=value]... Add a device to a container.\n" "lxc config device get <[remote:]container> Get a device property.\n" "lxc config device set <[remote:]container> Set a device property.\n" "lxc config device unset <[remote:]container> Unset a device property.\n" "lxc config device list <[remote:]container> List devices for container.\n" "lxc config device show <[remote:]container> Show full device details for container.\n" "lxc config device remove <[remote:]container> Remove device from container.\n" "\n" "lxc config get [remote:][container] Get container or server configuration key.\n" "lxc config set [remote:][container] Set container or server configuration key.\n" "lxc config unset [remote:][container] Unset container or server configuration key.\n" "lxc config show [remote:][container] [--expanded] Show container or server configuration.\n" "lxc config edit [remote:][container] Edit container or server configuration in external editor.\n" " Edit configuration, either by launching external editor or reading STDIN.\n" " Example: lxc config edit # launch editor\n" " cat config.yml | lxc config edit # read from config.yml\n" "\n" "lxc config trust list [remote] List all trusted certs.\n" "lxc config trust add [remote] Add certfile.crt to trusted hosts.\n" "lxc config trust remove [remote] [hostname|fingerprint] Remove the cert from trusted hosts.\n" "\n" "Examples:\n" "To mount host's /share/c1 onto /opt in the container:\n" " lxc config device add [remote:]container1 disk source=/share/c1 path=opt\n" "\n" "To set an lxc config value:\n" " lxc config set [remote:] raw.lxc 'lxc.aa_allow_incomplete = 1'\n" "\n" "To listen on IPv4 and IPv6 port 8443 
(you can omit the 8443 its the default):\n" " lxc config set core.https_address [::]:8443\n" "\n" "To set the server trust password:\n" " lxc config set core.trust_password blah" msgstr "" #: lxc/file.go:32 msgid "Manage files on a container.\n" "\n" "lxc file pull [...] \n" "lxc file push [--uid=UID] [--gid=GID] [--mode=MODE] [...] \n" "lxc file edit \n" "\n" " in the case of pull, in the case of push and in the case of edit are /" msgstr "" #: lxc/remote.go:39 msgid "Manage remote LXD servers.\n" "\n" "lxc remote add [--accept-certificate] [--password=PASSWORD]\n" " [--public] [--protocol=PROTOCOL] Add the remote at .\n" "lxc remote remove Remove the remote .\n" "lxc remote list List all remotes.\n" "lxc remote rename Rename remote to .\n" "lxc remote set-url Update 's url to .\n" "lxc remote set-default Set the default remote.\n" "lxc remote get-default Print the default remote." msgstr "" #: lxc/image.go:93 msgid "Manipulate container images.\n" "\n" "In LXD containers are created from images. Those images were themselves\n" "either generated from an existing container or downloaded from an image\n" "server.\n" "\n" "When using remote images, LXD will automatically cache images for you\n" "and remove them upon expiration.\n" "\n" "The image unique identifier is the hash (sha-256) of its representation\n" "as a compressed tarball (or for split images, the concatenation of the\n" "metadata and rootfs tarballs).\n" "\n" "Images can be referenced by their full hash, shortest unique partial\n" "hash or alias name (if one is set).\n" "\n" "\n" "lxc image import [rootfs tarball|URL] [remote:] [--public] [--created-at=ISO-8601] [--expires-at=ISO-8601] [--fingerprint=FINGERPRINT] [--alias=ALIAS].. [prop=value]\n" " Import an image tarball (or tarballs) into the LXD image store.\n" "\n" "lxc image copy [remote:] : [--alias=ALIAS].. 
[--copy-aliases] [--public] [--auto-update]\n" " Copy an image from one LXD daemon to another over the network.\n" "\n" " The auto-update flag instructs the server to keep this image up to\n" " date. It requires the source to be an alias and for it to be public.\n" "\n" "lxc image delete [remote:]\n" " Delete an image from the LXD image store.\n" "\n" "lxc image export [remote:]\n" " Export an image from the LXD image store into a distributable tarball.\n" "\n" "lxc image info [remote:]\n" " Print everything LXD knows about a given image.\n" "\n" "lxc image list [remote:] [filter]\n" " List images in the LXD image store. Filters may be of the\n" " = form for property based filtering, or part of the image\n" " hash or part of the image alias name.\n" "\n" "lxc image show [remote:]\n" " Yaml output of the user modifiable properties of an image.\n" "\n" "lxc image edit [remote:]\n" " Edit image, either by launching external editor or reading STDIN.\n" " Example: lxc image edit # launch editor\n" " cat image.yml | lxc image edit # read from image.yml\n" "\n" "lxc image alias create [remote:] \n" " Create a new alias for an existing image.\n" "\n" "lxc image alias delete [remote:]\n" " Delete an alias.\n" "\n" "lxc image alias list [remote:] [filter]\n" " List the aliases. Filters may be part of the image hash or part of the image alias name.\n" msgstr "" #: lxc/info.go:147 msgid "Memory (current)" msgstr "" #: lxc/info.go:151 msgid "Memory (peak)" msgstr "" #: lxc/help.go:86 msgid "Missing summary." 
msgstr "" #: lxc/monitor.go:41 msgid "Monitor activity on the LXD server.\n" "\n" "lxc monitor [remote:] [--type=TYPE...]\n" "\n" "Connects to the monitoring interface of the specified LXD server.\n" "\n" "By default will listen to all message types.\n" "Specific types to listen to can be specified with --type.\n" "\n" "Example:\n" "lxc monitor --type=logging" msgstr "" #: lxc/file.go:178 msgid "More than one file to download, but target is not a directory" msgstr "" #: lxc/move.go:17 msgid "Move containers within or in between lxd instances.\n" "\n" "lxc move [remote:] [remote:]\n" " Move a container between two hosts, renaming it if destination name differs.\n" "\n" "lxc move \n" " Rename a local container.\n" msgstr "" #: lxc/action.go:63 msgid "Must supply container name for: " msgstr "" #: lxc/list.go:380 lxc/remote.go:363 msgid "NAME" msgstr "" #: lxc/remote.go:337 lxc/remote.go:342 msgid "NO" msgstr "" #: lxc/info.go:89 #, c-format msgid "Name: %s" msgstr "" #: lxc/image.go:157 lxc/publish.go:33 msgid "New alias to define at target" msgstr "" #: lxc/config.go:281 msgid "No certificate provided to add" msgstr "" #: lxc/config.go:304 msgid "No fingerprint specified." msgstr "" #: lxc/remote.go:107 msgid "Only https URLs are supported for simplestreams" msgstr "" #: lxc/image.go:411 msgid "Only https:// is supported for remote image import." 
msgstr ""

#: lxc/help.go:63 lxc/main.go:122
msgid "Options:"
msgstr ""

#: lxc/image.go:506
#, c-format
msgid "Output is in %s"
msgstr ""

#: lxc/exec.go:55
msgid "Override the terminal mode (auto, interactive or non-interactive)"
msgstr ""

#: lxc/list.go:464
msgid "PERSISTENT"
msgstr ""

#: lxc/list.go:381
msgid "PID"
msgstr ""

#: lxc/list.go:382
msgid "PROFILES"
msgstr ""

#: lxc/remote.go:365
msgid "PROTOCOL"
msgstr ""

#: lxc/image.go:606 lxc/remote.go:366
msgid "PUBLIC"
msgstr ""

#: lxc/info.go:174
msgid "Packets received"
msgstr ""

#: lxc/info.go:175
msgid "Packets sent"
msgstr ""

#: lxc/help.go:69
msgid "Path to an alternate client configuration directory."
msgstr ""

#: lxc/help.go:70
msgid "Path to an alternate server directory."
msgstr ""

#: lxc/main.go:39
msgid "Permission denied, are you in the lxd group?"
msgstr ""

#: lxc/info.go:103
#, c-format
msgid "Pid: %d"
msgstr ""

#: lxc/help.go:25
msgid ""
"Presents details on how to use LXD.\n"
"\n"
"lxd help [--all]"
msgstr ""

#: lxc/profile.go:191
msgid "Press enter to open the editor again"
msgstr ""

#: lxc/config.go:501 lxc/config.go:566 lxc/image.go:688
msgid "Press enter to start the editor again"
msgstr ""

#: lxc/help.go:65
msgid "Print debug information."
msgstr ""

#: lxc/help.go:64
msgid "Print less common commands."
msgstr ""

#: lxc/help.go:66
msgid "Print verbose information."
msgstr "" #: lxc/version.go:18 msgid "Prints the version number of this client tool.\n" "\n" "lxc version" msgstr "" #: lxc/info.go:127 #, c-format msgid "Processes: %d" msgstr "" #: lxc/profile.go:228 #, c-format msgid "Profile %s applied to %s" msgstr "" #: lxc/profile.go:142 #, c-format msgid "Profile %s created" msgstr "" #: lxc/profile.go:212 #, c-format msgid "Profile %s deleted" msgstr "" #: lxc/init.go:136 lxc/init.go:137 lxc/launch.go:42 lxc/launch.go:43 msgid "Profile to apply to the new container" msgstr "" #: lxc/info.go:101 #, c-format msgid "Profiles: %s" msgstr "" #: lxc/image.go:343 msgid "Properties:" msgstr "" #: lxc/remote.go:56 msgid "Public image server" msgstr "" #: lxc/image.go:331 #, c-format msgid "Public: %s" msgstr "" #: lxc/publish.go:25 msgid "Publish containers as images.\n" "\n" "lxc publish [remote:]container [remote:] [--alias=ALIAS]... [prop-key=prop-value]..." msgstr "" #: lxc/remote.go:54 msgid "Remote admin password" msgstr "" #: lxc/delete.go:42 #, c-format msgid "Remove %s (yes/no): " msgstr "" #: lxc/delete.go:36 lxc/delete.go:37 msgid "Require user confirmation." 
msgstr "" #: lxc/info.go:124 msgid "Resources:" msgstr "" #: lxc/init.go:247 #, c-format msgid "Retrieving image: %s" msgstr "" #: lxc/image.go:609 msgid "SIZE" msgstr "" #: lxc/list.go:383 msgid "SNAPSHOTS" msgstr "" #: lxc/list.go:384 msgid "STATE" msgstr "" #: lxc/remote.go:367 msgid "STATIC" msgstr "" #: lxc/remote.go:214 msgid "Server certificate NACKed by user" msgstr "" #: lxc/remote.go:276 msgid "Server doesn't trust us after adding our cert" msgstr "" #: lxc/remote.go:55 msgid "Server protocol (lxd or simplestreams)" msgstr "" #: lxc/restore.go:21 msgid "Set the current state of a resource back to a snapshot.\n" "\n" "lxc restore [remote:] [--stateful]\n" "\n" "Restores a container from a snapshot (optionally with running state, see\n" "snapshot help for details).\n" "\n" "For example:\n" "lxc snapshot u1 snap0 # create the snapshot\n" "lxc restore u1 snap0 # restore the snapshot" msgstr "" #: lxc/file.go:44 msgid "Set the file's gid on push" msgstr "" #: lxc/file.go:45 msgid "Set the file's perms on push" msgstr "" #: lxc/file.go:43 msgid "Set the file's uid on push" msgstr "" #: lxc/help.go:32 msgid "Show all commands (not just interesting ones)" msgstr "" #: lxc/info.go:36 msgid "Show the container's last 100 log lines?" msgstr "" #: lxc/image.go:329 #, c-format msgid "Size: %.2fMB" msgstr "" #: lxc/info.go:194 msgid "Snapshots:" msgstr "" #: lxc/image.go:353 msgid "Source:" msgstr "" #: lxc/launch.go:124 #, c-format msgid "Starting %s" msgstr "" #: lxc/info.go:95 #, c-format msgid "Status: %s" msgstr "" #: lxc/publish.go:34 lxc/publish.go:35 msgid "Stop the container if currently running" msgstr "" #: lxc/delete.go:106 lxc/publish.go:111 msgid "Stopping container failed!" msgstr "" #: lxc/action.go:39 msgid "Store the container state (only for stop)." 
msgstr "" #: lxc/info.go:155 msgid "Swap (current)" msgstr "" #: lxc/info.go:159 msgid "Swap (peak)" msgstr "" #: lxc/list.go:385 msgid "TYPE" msgstr "" #: lxc/delete.go:92 msgid "The container is currently running, stop it first or pass --force." msgstr "" #: lxc/publish.go:89 msgid "The container is currently running. Use --force to have it stopped and restarted." msgstr "" #: lxc/config.go:645 lxc/config.go:657 lxc/config.go:690 lxc/config.go:708 lxc/config.go:746 lxc/config.go:764 msgid "The device doesn't exist" msgstr "" #: lxc/init.go:277 #, c-format msgid "The local image '%s' couldn't be found, trying '%s:' instead." msgstr "" #: lxc/publish.go:62 msgid "There is no \"image name\". Did you want an alias?" msgstr "" #: lxc/action.go:36 msgid "Time to wait for the container before killing it." msgstr "" #: lxc/image.go:332 msgid "Timestamps:" msgstr "" #: lxc/main.go:147 msgid "To start your first container, try: lxc launch ubuntu:16.04" msgstr "" #: lxc/image.go:402 #, c-format msgid "Transferring image: %d%%" msgstr "" #: lxc/action.go:93 lxc/launch.go:132 #, c-format msgid "Try `lxc info --show-log %s` for more info" msgstr "" #: lxc/info.go:97 msgid "Type: ephemeral" msgstr "" #: lxc/info.go:99 msgid "Type: persistent" msgstr "" #: lxc/image.go:610 msgid "UPLOAD DATE" msgstr "" #: lxc/remote.go:364 msgid "URL" msgstr "" #: lxc/remote.go:82 msgid "Unable to read remote TLS certificate" msgstr "" #: lxc/image.go:337 #, c-format msgid "Uploaded: %s" msgstr "" #: lxc/main.go:122 #, c-format msgid "Usage: %s" msgstr "" #: lxc/help.go:48 msgid "Usage: lxc [subcommand] [options]" msgstr "" #: lxc/delete.go:46 msgid "User aborted delete operation." 
msgstr "" #: lxc/restore.go:35 msgid "Whether or not to restore the container's running state from snapshot (if available)" msgstr "" #: lxc/snapshot.go:38 msgid "Whether or not to snapshot the container's running state" msgstr "" #: lxc/config.go:33 msgid "Whether to show the expanded configuration" msgstr "" #: lxc/remote.go:339 lxc/remote.go:344 msgid "YES" msgstr "" #: lxc/main.go:66 msgid "`lxc config profile` is deprecated, please use `lxc profile`" msgstr "" #: lxc/launch.go:111 msgid "bad number of things scanned from image, container or snapshot" msgstr "" #: lxc/action.go:89 msgid "bad result type from action" msgstr "" #: lxc/copy.go:78 msgid "can't copy to the same container name" msgstr "" #: lxc/remote.go:327 msgid "can't remove the default remote" msgstr "" #: lxc/remote.go:353 msgid "default" msgstr "" #: lxc/init.go:200 lxc/init.go:205 lxc/launch.go:95 lxc/launch.go:100 msgid "didn't get any affected image, container or snapshot from server" msgstr "" #: lxc/image.go:323 msgid "disabled" msgstr "" #: lxc/image.go:325 msgid "enabled" msgstr "" #: lxc/main.go:25 lxc/main.go:159 #, c-format msgid "error: %v" msgstr "" #: lxc/help.go:40 lxc/main.go:117 #, c-format msgid "error: unknown command: %s" msgstr "" #: lxc/launch.go:115 msgid "got bad version" msgstr "" #: lxc/image.go:318 lxc/image.go:586 msgid "no" msgstr "" #: lxc/copy.go:101 msgid "not all the profiles from the source exist on the target" msgstr "" #: lxc/remote.go:207 msgid "ok (y/n)?" 
msgstr ""

#: lxc/main.go:266 lxc/main.go:270
#, c-format
msgid "processing aliases failed %s\n"
msgstr ""

#: lxc/remote.go:389
#, c-format
msgid "remote %s already exists"
msgstr ""

#: lxc/remote.go:319 lxc/remote.go:381 lxc/remote.go:416 lxc/remote.go:432
#, c-format
msgid "remote %s doesn't exist"
msgstr ""

#: lxc/remote.go:302
#, c-format
msgid "remote %s exists as <%s>"
msgstr ""

#: lxc/remote.go:323 lxc/remote.go:385 lxc/remote.go:420
#, c-format
msgid "remote %s is static and cannot be modified"
msgstr ""

#: lxc/info.go:205
msgid "stateful"
msgstr ""

#: lxc/info.go:207
msgid "stateless"
msgstr ""

#: lxc/info.go:201
#, c-format
msgid "taken at %s"
msgstr ""

#: lxc/exec.go:166
msgid "unreachable return reached"
msgstr ""

#: lxc/main.go:199
msgid "wrong number of subcommand arguments"
msgstr ""

#: lxc/delete.go:45 lxc/image.go:320 lxc/image.go:590
msgid "yes"
msgstr ""

#: lxc/copy.go:38
msgid "you must specify a source container name"
msgstr ""
lxd-2.0.2/scripts/000077500000000000000000000000001272140510300137645ustar00rootroot00000000000000
lxd-2.0.2/scripts/lxc-to-lxd000077500000000000000000000317671272140510300157230ustar00rootroot00000000000000
#!/usr/bin/python3
import argparse
import json
import lxc
import os
import subprocess

from pylxd.client import Client


# Fetch a config key as a list
def config_get(config, key, default=None):
    result = []
    for line in config:
        fields = line.split("=", 1)
        if fields[0].strip() == key:
            result.append(fields[-1].strip())

    if len(result) == 0:
        return default
    else:
        return result


# Parse a LXC configuration file, called recursively for includes
def config_parse(path):
    config = []
    with open(path, "r") as fd:
        for line in fd:
            line = line.strip()
            key = line.split("=", 1)[0].strip()
            value = line.split("=", 1)[-1].strip()

            # Parse user-added includes
            if key == "lxc.include":
                # Ignore our own default configs
                if value.startswith("/usr/share/lxc/config/"):
                    continue

                if os.path.isfile(value):
                    config += config_parse(value)
                    continue
                elif os.path.isdir(value):
                    for entry in os.listdir(value):
                        if not entry.endswith(".conf"):
                            continue

                        config += config_parse(os.path.join(value, entry))
                    continue
                else:
                    print("Invalid include: %s", line)

            # Expand any fstab
            if key == "lxc.mount":
                if not os.path.exists(value):
                    print("Container fstab file doesn't exist, skipping...")
                    return False

                with open(value, "r") as fd:
                    for line in fd:
                        line = line.strip()
                        if line and not line.startswith("#"):
                            config.append("lxc.mount.entry = %s" % line)
                continue

            # Process normal configuration keys
            if line and not line.startswith("#"):
                config.append(line)

    return config


# Convert a LXC container to a LXD one
def convert_container(container_name, args):
    # Connect to LXD
    if args.lxdpath:
        os.environ['LXD_DIR'] = args.lxdpath
    lxd = Client()

    print("==> Processing container: %s" % container_name)

    # Load the container
    try:
        container = lxc.Container(container_name, args.lxcpath)
    except:
        print("Invalid container configuration, skipping...")
        return False

    if container.running:
        print("Only stopped containers can be migrated, skipping...")
        return False

    # As some keys can't be queried over the API, parse the config ourselves
    print("Parsing LXC configuration")
    lxc_config = config_parse(container.config_file_name)

    if args.debug:
        print("Container configuration:")
        print(" ", end="")
        print("\n ".join(lxc_config))
        print("")

    if config_get(lxc_config, "lxd.migrated"):
        print("Container has already been migrated, skipping...")
        return False

    # Make sure we don't have a conflict
    print("Checking for existing containers")
    try:
        lxd.containers.get(container_name)
        print("Container already exists, skipping...")
        return False
    except NameError:
        pass

    # Validate lxc.utsname
    print("Validating container name")
    value = config_get(lxc_config, "lxc.utsname")
    if value and value[0] != container_name:
        print("Container name doesn't match lxc.utsname, skipping...")
        return False

    # Detect privileged containers
    print("Validating container mode")
    if config_get(lxc_config, "lxc.id_map"):
        print("Unprivileged containers aren't supported, skipping...")
        return False

    # Detect hooks in config
    for line in lxc_config:
        if line.startswith("lxc.hook."):
            print("Hooks aren't supported, skipping...")
            return False

    # Extract and validate the rootfs key
    print("Validating container rootfs")
    value = config_get(lxc_config, "lxc.rootfs")
    if not value:
        print("Invalid container, missing lxc.rootfs key, skipping...")
        return False

    rootfs = value[0]
    if not os.path.exists(rootfs):
        print("Couldn't find the container rootfs '%s', skipping..." % rootfs)
        return False

    # Base config
    config = {}
    config['security.privileged'] = "true"
    devices = {}
    devices['eth0'] = {'type': "none"}

    # Convert network configuration
    print("Processing network configuration")
    try:
        count = len(container.get_config_item("lxc.network"))
    except:
        count = 0

    for i in range(count):
        device = {"type": "nic"}

        # Get the device type
        device["nictype"] = container.get_config_item("lxc.network")[i]

        # Get everything else
        dev = container.network[i]

        # Validate configuration
        if dev.ipv4 or dev.ipv4_gateway:
            print("IPv4 network configuration isn't supported, skipping...")
            return False

        if dev.ipv6 or dev.ipv6_gateway:
            print("IPv6 network configuration isn't supported, skipping...")
            return False

        if dev.script_up or dev.script_down:
            print("Network config scripts aren't supported, skipping...")
            return False

        if device["nictype"] == "none":
            print("\"none\" network mode isn't supported, skipping...")
            return False

        if device["nictype"] == "vlan":
            print("\"vlan\" network mode isn't supported, skipping...")
            return False

        # Convert the configuration
        if dev.hwaddr:
            device['hwaddr'] = dev.hwaddr

        if dev.link:
            device['parent'] = dev.link

        if dev.mtu:
            device['mtu'] = int(dev.mtu)

        if dev.name:
            device['name'] = dev.name

        if dev.veth_pair:
            device['host_name'] = dev.veth_pair

        if device["nictype"] == "veth":
            if "parent" in device:
                device["nictype"] = "bridged"
            else:
                device["nictype"] = "p2p"

        if device["nictype"] == "phys":
            device["nictype"] = "physical"

        if device["nictype"] == "empty":
            continue

        devices['convert_net%d' % i] = device
        count += 1

    # Convert storage configuration
    value = config_get(lxc_config, "lxc.mount.entry", [])
    i = 0
    for entry in value:
        mount = entry.split(" ")
        if len(mount) < 4:
            print("Invalid mount configuration, skipping...")
            return False

        device = {'type': "disk"}

        # Deal with read-only mounts
        if "ro" in mount[3].split(","):
            device['readonly'] = "true"

        # Deal with optional mounts
        if "optional" in mount[3].split(","):
            device['optional'] = "true"

        # Set the source
        device['source'] = mount[0]

        # Figure out the target
        if mount[1][0] != "/":
            device['path'] = "/%s" % mount[1]
        else:
            device['path'] = mount[1].split(rootfs, 1)[-1]

        devices['convert_mount%d' % i] = device
        i += 1

    # Convert environment
    print("Processing environment configuration")
    value = config_get(lxc_config, "lxc.environment", [])
    for env in value:
        entry = env.split("=", 1)
        config['environment.%s' % entry[0].strip()] = entry[-1].strip()

    # Convert auto-start
    print("Processing container boot configuration")
    value = config_get(lxc_config, "lxc.start.auto")
    if value and int(value[0]) > 0:
        config['boot.autostart'] = "true"

    value = config_get(lxc_config, "lxc.start.delay")
    if value and int(value[0]) > 0:
        config['boot.autostart.delay'] = value[0]

    value = config_get(lxc_config, "lxc.start.order")
    if value and int(value[0]) > 0:
        config['boot.autostart.priority'] = value[0]

    # Convert apparmor
    print("Processing container apparmor configuration")
    value = config_get(lxc_config, "lxc.aa_profile")
    if value:
        if value[0] == "lxc-container-default-with-nesting":
            config['security.nesting'] = "true"
        elif value[0] != "lxc-container-default":
            print("Unsupported custom apparmor profile, skipping...")
            return False

    # Convert seccomp
    print("Processing container seccomp configuration")
    value = config_get(lxc_config, "lxc.seccomp")
    if value:
        print("Custom seccomp profiles aren't supported, skipping...")
        return False

    # Convert SELinux
    print("Processing container SELinux configuration")
    value = config_get(lxc_config, "lxc.se_context")
    if value:
        print("Custom SELinux policies aren't supported, skipping...")
        return False

    # Convert capabilities
    print("Processing container capabilities configuration")
    value = config_get(lxc_config, "lxc.cap.drop")
    if value:
        print("Custom capabilities aren't supported, skipping...")
        return False

    value = config_get(lxc_config, "lxc.cap.keep")
    if value:
        print("Custom capabilities aren't supported, skipping...")
        return False

    # Setup the container creation request
    new = {'name': container_name,
           'source': {'type': 'none'},
           'config': config,
           'devices': devices,
           'profiles': ["default"]}

    # Set the container architecture if set in LXC
    print("Converting container architecture configuration")
    arches = {'i686': "i686",
              'x86_64': "x86_64",
              'armhf': "armv7l",
              'arm64': "aarch64",
              'powerpc': "ppc",
              'powerpc64': "ppc64",
              'ppc64el': "ppc64le",
              's390x': "s390x"}

    arch = None
    try:
        arch = config_get(lxc_config, "lxc.arch", None)
        if arch and arch[0] in arches:
            new['architecture'] = arches[arch[0]]
        else:
            print("Unknown architecture, assuming native.")
    except:
        print("Couldn't find container architecture, assuming native.")

    # Define the container in LXD
    if args.debug:
        print("LXD container config:")
        print(json.dumps(new, indent=True, sort_keys=True))

    if args.dry_run:
        return True

    try:
        print("Creating the container")
        lxd.containers.create(new, wait=True)
    except Exception as e:
        print("Failed to create the container: %s" % e)
        return False

    # Transfer the filesystem
    lxd_rootfs = os.path.join("/var/lib/lxd/", "containers",
                              container_name, "rootfs")
    if args.copy_rootfs:
        print("Copying container rootfs")
        if not os.path.exists(lxd_rootfs):
            os.mkdir(lxd_rootfs)

        if subprocess.call(["rsync", "-Aa", "--sparse",
                            "--acls", "--numeric-ids", "--hard-links",
                            "%s/" % rootfs, "%s/" % lxd_rootfs]) != 0:
            print("Failed to transfer the container rootfs, skipping...")
            return False
    else:
        if os.path.exists(lxd_rootfs):
            os.rmdir(lxd_rootfs)

        if subprocess.call(["mv", rootfs, lxd_rootfs]) != 0:
            print("Failed to move the container rootfs, skipping...")
            return False

        os.mkdir(rootfs)

    # Delete the source
    if args.delete:
        print("Deleting source container")
        container.delete()

    # Mark the container as migrated
    with open(container.config_file_name, "a") as fd:
        fd.write("lxd.migrated=true\n")

    print("Container is ready to use")


# Argument parsing
parser = argparse.ArgumentParser()
parser.add_argument("--dry-run", action="store_true", default=False,
                    help="Dry run mode")
parser.add_argument("--debug", action="store_true", default=False,
                    help="Print debugging output")
parser.add_argument("--all", action="store_true", default=False,
                    help="Import all containers")
parser.add_argument("--delete", action="store_true", default=False,
                    help="Delete the source container")
parser.add_argument("--copy-rootfs", action="store_true", default=False,
                    help="Copy the container rootfs rather than moving it")
parser.add_argument("--lxcpath", type=str, default=False,
                    help="Alternate LXC path")
parser.add_argument("--lxdpath", type=str, default=False,
                    help="Alternate LXD path")
parser.add_argument(dest='containers', metavar="CONTAINER", type=str,
                    help="Container to import", nargs="*")
args = parser.parse_args()

# Sanity checks
if not os.geteuid() == 0:
    parser.error("You must be root to run this tool")

if (not args.containers and not args.all) or \
        (args.containers and args.all):
    parser.error("You must either pass container names or --all")

for container_name in lxc.list_containers(config_path=args.lxcpath):
    if args.containers and container_name not in args.containers:
        continue

    convert_container(container_name, args)
lxd-2.0.2/scripts/lxd-setup-lvm-storage000077500000000000000000000167311272140510300201050ustar00rootroot00000000000000
#!/usr/bin/env python3

# Let's stick to core python3 modules
import argparse
import gettext
import http.client
import json
import os
import socket
from subprocess import check_output
import sys

DEFAULT_VGNAME = "LXDStorage"

_ = gettext.gettext
gettext.textdomain("lxd")


class FriendlyParser(argparse.ArgumentParser):
    def error(self, message):
        sys.stderr.write('error: %s\n' % message)
        self.print_help()
        sys.exit(2)


class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        http.client.HTTPConnection.__init__(self, 'localhost')
        self.path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.path)
        self.sock = sock


class LXD(object):
    def __init__(self, path):
        self.lxd = UnixHTTPConnection(path)

    def rest_call(self, path, data=None, method="GET", headers={}):
        if method == "GET" and data:
            self.lxd.request(
                method,
                "%s?%s" % (path,
                           "&".join(["%s=%s" % (key, value)
                                     for key, value in data.items()])),
                headers)
        else:
            self.lxd.request(method, path, data, headers)

        r = self.lxd.getresponse()
        d = json.loads(r.read().decode("utf-8"))
        return r.status, d

    def set_lvm_vgname(self, vgname):
        self._set_lvm_config("storage.lvm_vg_name", vgname)

    def set_lvm_poolname(self, poolname):
        self._set_lvm_config("storage.lvm_thinpool_name", poolname)

    def _set_lvm_config(self, key, val):
        data = json.dumps({"config": {key: val}})
        status, data = self.rest_call("/1.0", data, "PUT")
        if status != 200:
            sys.stderr.write("Error in setting vgname:{}\n{}\n".format(status,
                                                                       data))
            raise Exception("Failed to set vgname: %s" % val)

    def get_server_config(self):
        status, config = self.rest_call("/1.0", "", "GET")
        if status != 200:
            sys.stderr.write("Error in getting vgname\n")
            raise Exception("Failed to get vgname")
        return config["metadata"]["config"]


def lxd_dir():
    if "LXD_DIR" in os.environ:
        return os.environ["LXD_DIR"]
    else:
        return "/var/lib/lxd"


def connect_to_socket():
    lxd_socket = os.path.join(lxd_dir(), "unix.socket")
    if not os.path.exists(lxd_socket):
        print(_("LXD isn't running."))
        sys.exit(1)
    return LXD(lxd_socket)


def create_image(args):
    imgfname = os.path.join(lxd_dir(), "{}.img".format(args.size))
    rollbacks = []
    try:
        print("Creating sparse backing file {}".format(imgfname), flush=True)
        check_output("truncate -s {} {}".format(args.size, imgfname),
                     shell=True)
        rollbacks.append("rm {}".format(imgfname))

        print("Setting up loop device", flush=True)
        pvloopdev = check_output("losetup -f", shell=True).decode().strip()
        check_output("losetup {} {}".format(pvloopdev, imgfname), shell=True)
        rollbacks.append("losetup -d " + pvloopdev)

        print("Creating LVM PV {}".format(pvloopdev), flush=True)
        check_output("pvcreate {}".format(pvloopdev), shell=True)
        rollbacks.append("pvremove " + pvloopdev)

        print("Creating LVM VG {}".format(DEFAULT_VGNAME), flush=True)
        check_output("vgcreate {} {}".format(DEFAULT_VGNAME, pvloopdev),
                     shell=True)
        rollbacks.append("vgremove {}".format(DEFAULT_VGNAME))
    except Exception as e:
        sys.stderr.write("Error: {}. Cleaning up:\n".format(e))
        for rbcmd in reversed(rollbacks):
            sys.stderr.write("+ {}\n".format(rbcmd))
            check_output(rbcmd, shell=True)
        raise e


def destroy_image(args, lxd):
    print("Checking current LXD configuration", flush=True)
    cfg = lxd.get_server_config()
    vgname = cfg.get("storage.lvm_vg_name", None)
    if vgname is None:
        sys.stderr.write("LXD is not configured for LVM. "
                         "No changes will be made.\n")
        return

    lvnames = check_output("lvs {} -o name,lv_attr --noheadings"
                           .format(vgname), shell=True).decode().strip()
    used_lvs = []
    for lvline in lvnames.split("\n"):
        if lvline == '':
            continue
        name, attrs = lvline.split()
        if attrs.strip().startswith("V"):
            used_lvs.append(name)
    if len(used_lvs) > 0:
        print("LVM storage is still in use by the following volumes: {}"
              .format(used_lvs))
        print("Please delete the corresponding images and/or "
              "containers before destroying storage.")
        sys.exit()

    pvname = check_output("vgs {} --noheadings -o pv_name"
                          .format(vgname), shell=True).decode().strip()

    print("Removing volume group {}".format(vgname))
    check_output("vgremove -f {}".format(vgname), shell=True)

    print("Removing physical volume {}".format(pvname))
    check_output("pvremove -y {}".format(pvname), shell=True)

    lostr = check_output("losetup -a | grep {}".format(pvname),
                         shell=True).decode().strip()
    imgfname = lostr.split('(')[-1].replace(')', '')

    print("Detaching loop device {}".format(pvname))
    check_output("losetup -d {}".format(pvname), shell=True)

    print("Deleting backing file {}".format(imgfname))
    if os.path.exists(imgfname):
        check_output("rm '{}'".format(imgfname), shell=True)


def do_main():
    parser = FriendlyParser(
        description=_("LXD: LVM storage helper"),
        formatter_class=argparse.RawTextHelpFormatter,
        epilog=_("""Examples:
 To create a 10G sparse loopback file and register it with LVM and LXD:
   %s -s 10G
 To de-configure LXD and destroy the LVM volumes and backing file:
   %s --destroy
""" % (sys.argv[0], sys.argv[0])))
    parser.add_argument("-s", "--size", default="10G",
                        help=_("Size of backing file to register as LVM PV"))
    parser.add_argument("--destroy", action="store_true", default=False,
                        help=_("Un-configure LXD and delete image file"))
    args = parser.parse_args()

    if os.geteuid() != 0:
        sys.exit("Configuring LVM requires root privileges.")

    try:
        check_output("type vgcreate", shell=True)
    except:
        sys.exit("lvm2 tools not found. try 'apt-get install lvm2 "
                 "thin-provisioning-tools'")
    try:
        check_output("type thin_check", shell=True)
    except:
        sys.exit("lvm thin provisioning tools are required. "
                 "try 'apt-get install thin-provisioning-tools'")

    lxd = connect_to_socket()

    if args.destroy:
        try:
            destroy_image(args, lxd)
            print("Clearing LXD storage configuration")
            lxd.set_lvm_vgname("")
            lxd.set_lvm_poolname("")
        except Exception as e:
            sys.stderr.write("Error destroying image:")
            sys.stderr.write(str(e))
            sys.stderr.write("\n")
    else:
        try:
            create_image(args)
        except:
            sys.stderr.write("Stopping.\n")
        else:
            try:
                print("Configuring LXD")
                lxd.set_lvm_vgname(DEFAULT_VGNAME)
            except:
                sys.stderr.write("Error configuring LXD, "
                                 "removing backing file\n")
                destroy_image(args, lxd)
    print("Done.")


if __name__ == "__main__":
    do_main()
lxd-2.0.2/scripts/vagrant/000077500000000000000000000000001272140510300154265ustar00rootroot00000000000000
lxd-2.0.2/scripts/vagrant/install-go.sh000066400000000000000000000006511272140510300200350ustar00rootroot00000000000000
#!/bin/bash
set -xe

export DEBIAN_FRONTEND=noninteractive

which add-apt-repository || (sudo apt-get update ; sudo apt-get install -y software-properties-common)
sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
sudo apt-get update

which go || sudo apt-get install -y golang

[ -e $HOME/go ] || mkdir -p $HOME/go

cat << 'EOF' | sudo tee /etc/profile.d/S99go.sh
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
EOF
lxd-2.0.2/scripts/vagrant/install-lxd.sh000066400000000000000000000014101272140510300202110ustar00rootroot00000000000000
#!/bin/bash
set -xe

export DEBIAN_FRONTEND=noninteractive

# install runtime dependencies
sudo apt-get -y install xz-utils tar acl curl gettext \
    jq sqlite3

# install build dependencies
sudo apt-get -y install lxc lxc-dev mercurial git pkg-config \
    protobuf-compiler golang-goprotobuf-dev

# setup env
[ -e uid_gid_setup ] || \
    echo "root:1000000:65536" | sudo tee -a /etc/subuid /etc/subgid && \
    touch uid_gid_setup

go get github.com/lxc/lxd
cd $GOPATH/src/github.com/lxc/lxd
go get -v -d ./...
make

cat << 'EOF' | sudo tee /etc/init/lxd.conf
description "LXD daemon"
author "John Brooker"

start on filesystem or runlevel [2345]
stop on shutdown

script
	exec /home/vagrant/go/bin/lxd --group vagrant
end script

EOF

sudo service lxd start

lxd-2.0.2/shared/architectures.go

package shared

import (
	"fmt"
)

const (
	ARCH_UNKNOWN                     = 0
	ARCH_32BIT_INTEL_X86             = 1
	ARCH_64BIT_INTEL_X86             = 2
	ARCH_32BIT_ARMV7_LITTLE_ENDIAN   = 3
	ARCH_64BIT_ARMV8_LITTLE_ENDIAN   = 4
	ARCH_32BIT_POWERPC_BIG_ENDIAN    = 5
	ARCH_64BIT_POWERPC_BIG_ENDIAN    = 6
	ARCH_64BIT_POWERPC_LITTLE_ENDIAN = 7
	ARCH_64BIT_S390_BIG_ENDIAN       = 8
)

var architectureNames = map[int]string{
	ARCH_32BIT_INTEL_X86:             "i686",
	ARCH_64BIT_INTEL_X86:             "x86_64",
	ARCH_32BIT_ARMV7_LITTLE_ENDIAN:   "armv7l",
	ARCH_64BIT_ARMV8_LITTLE_ENDIAN:   "aarch64",
	ARCH_32BIT_POWERPC_BIG_ENDIAN:    "ppc",
	ARCH_64BIT_POWERPC_BIG_ENDIAN:    "ppc64",
	ARCH_64BIT_POWERPC_LITTLE_ENDIAN: "ppc64le",
	ARCH_64BIT_S390_BIG_ENDIAN:       "s390x",
}

var architectureAliases = map[int][]string{
	ARCH_32BIT_INTEL_X86:             []string{"i386"},
	ARCH_64BIT_INTEL_X86:             []string{"amd64"},
	ARCH_32BIT_ARMV7_LITTLE_ENDIAN:   []string{"armel", "armhf"},
	ARCH_64BIT_ARMV8_LITTLE_ENDIAN:   []string{"arm64"},
	ARCH_32BIT_POWERPC_BIG_ENDIAN:    []string{"powerpc"},
	ARCH_64BIT_POWERPC_BIG_ENDIAN:    []string{"powerpc64"},
	ARCH_64BIT_POWERPC_LITTLE_ENDIAN: []string{"ppc64el"},
}

var architecturePersonalities = map[int]string{
	ARCH_32BIT_INTEL_X86:             "linux32",
	ARCH_64BIT_INTEL_X86:             "linux64",
	ARCH_32BIT_ARMV7_LITTLE_ENDIAN:   "linux32",
	ARCH_64BIT_ARMV8_LITTLE_ENDIAN:   "linux64",
	ARCH_32BIT_POWERPC_BIG_ENDIAN:    "linux32",
	ARCH_64BIT_POWERPC_BIG_ENDIAN:    "linux64",
	ARCH_64BIT_POWERPC_LITTLE_ENDIAN: "linux64",
	ARCH_64BIT_S390_BIG_ENDIAN:       "linux64",
}

var architectureSupportedPersonalities = map[int][]int{
	ARCH_32BIT_INTEL_X86:             []int{},
	ARCH_64BIT_INTEL_X86:             []int{ARCH_32BIT_INTEL_X86},
	ARCH_32BIT_ARMV7_LITTLE_ENDIAN:   []int{},
	ARCH_64BIT_ARMV8_LITTLE_ENDIAN:   []int{ARCH_32BIT_ARMV7_LITTLE_ENDIAN},
	ARCH_32BIT_POWERPC_BIG_ENDIAN:    []int{},
	ARCH_64BIT_POWERPC_BIG_ENDIAN:    []int{ARCH_32BIT_POWERPC_BIG_ENDIAN},
	ARCH_64BIT_POWERPC_LITTLE_ENDIAN: []int{},
	ARCH_64BIT_S390_BIG_ENDIAN:       []int{},
}

const ArchitectureDefault = "x86_64"

func ArchitectureName(arch int) (string, error) {
	arch_name, exists := architectureNames[arch]
	if exists {
		return arch_name, nil
	}

	return "unknown", fmt.Errorf("Architecture isn't supported: %d", arch)
}

func ArchitectureId(arch string) (int, error) {
	for arch_id, arch_name := range architectureNames {
		if arch_name == arch {
			return arch_id, nil
		}
	}

	for arch_id, arch_aliases := range architectureAliases {
		for _, arch_name := range arch_aliases {
			if arch_name == arch {
				return arch_id, nil
			}
		}
	}

	return 0, fmt.Errorf("Architecture isn't supported: %s", arch)
}

func ArchitecturePersonality(arch int) (string, error) {
	arch_personality, exists := architecturePersonalities[arch]
	if exists {
		return arch_personality, nil
	}

	return "", fmt.Errorf("Architecture isn't supported: %d", arch)
}

func ArchitecturePersonalities(arch int) ([]int, error) {
	personalities, exists := architectureSupportedPersonalities[arch]
	if exists {
		return personalities, nil
	}

	return []int{}, fmt.Errorf("Architecture isn't supported: %d", arch)
}

lxd-2.0.2/shared/architectures_linux.go

// +build linux

package shared

import (
	"syscall"
)

func ArchitectureGetLocal() (string, error) {
	uname := syscall.Utsname{}
	if err := syscall.Uname(&uname); err != nil {
		return ArchitectureDefault, err
	}

	architectureName := ""
	for _, c := range uname.Machine {
		if c == 0 {
			break
		}

		architectureName += string(byte(c))
	}

	return architectureName, nil
}
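As a side note on the architecture tables above: `ArchitectureId` resolves a string in two passes, scanning the canonical-name table first and the alias table second, so "x86_64" and "amd64" map to the same id. The following is a minimal standalone sketch of that lookup order (not part of the tree; the maps here are trimmed copies for illustration only):

```go
package main

import "fmt"

// Trimmed copies of architectureNames/architectureAliases,
// for illustration only.
var names = map[int]string{1: "i686", 2: "x86_64"}
var aliases = map[int][]string{1: {"i386"}, 2: {"amd64"}}

// resolve mirrors ArchitectureId's lookup order:
// canonical names first, then the alias table.
func resolve(arch string) (int, error) {
	for id, name := range names {
		if name == arch {
			return id, nil
		}
	}
	for id, list := range aliases {
		for _, name := range list {
			if name == arch {
				return id, nil
			}
		}
	}
	return 0, fmt.Errorf("Architecture isn't supported: %s", arch)
}

func main() {
	for _, a := range []string{"x86_64", "amd64", "sparc"} {
		id, err := resolve(a)
		fmt.Println(a, id, err)
	}
}
```

One consequence of this order is that a string appearing in both tables always resolves through the canonical table, so aliases can never shadow a canonical name.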
lxd-2.0.2/shared/architectures_others.go

// +build !linux

package shared

func ArchitectureGetLocal() (string, error) {
	return ArchitectureDefault, nil
}

lxd-2.0.2/shared/cert.go

// http://golang.org/src/pkg/crypto/tls/generate_cert.go
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package shared

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"io/ioutil"
	"log"
	"math/big"
	"net"
	"os"
	"os/user"
	"path"
	"time"
)

// CertInfo is the representation of a Certificate in the API.
type CertInfo struct {
	Certificate string `json:"certificate"`
	Fingerprint string `json:"fingerprint"`
	Type        string `json:"type"`
}

/*
 * Generate a list of names for which the certificate will be valid.
 * This will include the hostname and ip address
 */
func mynames() ([]string, error) {
	h, err := os.Hostname()
	if err != nil {
		return nil, err
	}

	ret := []string{h}

	ifs, err := net.Interfaces()
	if err != nil {
		return nil, err
	}

	for _, iface := range ifs {
		if IsLoopback(&iface) {
			continue
		}

		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}

		for _, addr := range addrs {
			ret = append(ret, addr.String())
		}
	}

	return ret, nil
}

func FindOrGenCert(certf string, keyf string) error {
	if PathExists(certf) && PathExists(keyf) {
		return nil
	}

	/* If neither stat succeeded, then this is our first run and we
	 * need to generate cert and privkey */
	err := GenCert(certf, keyf)
	if err != nil {
		return err
	}

	return nil
}

// GenCert will create and populate a certificate file and a key file
func GenCert(certf string, keyf string) error {
	/* Create the basenames if needed */
	dir := path.Dir(certf)
	err := os.MkdirAll(dir, 0750)
	if err != nil {
		return err
	}

	dir = path.Dir(keyf)
	err = os.MkdirAll(dir, 0750)
	if err != nil {
		return err
	}

	certBytes, keyBytes, err := GenerateMemCert()
	if err != nil {
		return err
	}

	certOut, err := os.Create(certf)
	if err != nil {
		log.Fatalf("failed to open %s for writing: %s", certf, err)
		return err
	}
	certOut.Write(certBytes)
	certOut.Close()

	keyOut, err := os.OpenFile(keyf, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		log.Printf("failed to open %s for writing: %s", keyf, err)
		return err
	}
	keyOut.Write(keyBytes)
	keyOut.Close()

	return nil
}

// GenerateMemCert creates a certificate and key pair, returning them as byte
// arrays in memory.
func GenerateMemCert() ([]byte, []byte, error) {
	privk, err := rsa.GenerateKey(rand.Reader, 4096)
	if err != nil {
		log.Fatalf("failed to generate key")
		return nil, nil, err
	}

	hosts, err := mynames()
	if err != nil {
		log.Fatalf("Failed to get my hostname")
		return nil, nil, err
	}

	validFrom := time.Now()
	validTo := validFrom.Add(10 * 365 * 24 * time.Hour)

	serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
	serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
	if err != nil {
		log.Fatalf("failed to generate serial number: %s", err)
		return nil, nil, err
	}

	userEntry, err := user.Current()
	var username string
	if err == nil {
		username = userEntry.Username
		if username == "" {
			username = "UNKNOWN"
		}
	} else {
		username = "UNKNOWN"
	}

	hostname, err := os.Hostname()
	if err != nil {
		hostname = "UNKNOWN"
	}

	template := x509.Certificate{
		SerialNumber: serialNumber,
		Subject: pkix.Name{
			Organization: []string{"linuxcontainers.org"},
			CommonName:   fmt.Sprintf("%s@%s", username, hostname),
		},
		NotBefore: validFrom,
		NotAfter:  validTo,

		KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		BasicConstraintsValid: true,
	}

	for _, h := range hosts {
		if ip := net.ParseIP(h); ip != nil {
			template.IPAddresses = append(template.IPAddresses, ip)
		} else {
			template.DNSNames = append(template.DNSNames, h)
		}
	}

	derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &privk.PublicKey, privk)
	if err != nil {
		log.Fatalf("Failed to create certificate: %s", err)
		return nil, nil, err
	}

	cert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derBytes})
	key := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(privk)})

	return cert, key, nil
}

func ReadCert(fpath string) (*x509.Certificate, error) {
	cf, err := ioutil.ReadFile(fpath)
	if err != nil {
		return nil, err
	}

	certBlock, _ := pem.Decode(cf)
	if certBlock == nil {
		return nil, fmt.Errorf("Invalid certificate file")
	}

	return x509.ParseCertificate(certBlock.Bytes)
}

lxd-2.0.2/shared/cert_test.go

package shared

import (
	"encoding/pem"
	"testing"
)

func TestGenerateMemCert(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping cert generation in short mode")
	}

	cert, key, err := GenerateMemCert()
	if err != nil {
		t.Error(err)
		return
	}
	if cert == nil {
		t.Error("GenerateMemCert returned a nil cert")
		return
	}
	if key == nil {
		t.Error("GenerateMemCert returned a nil key")
		return
	}

	block, rest := pem.Decode(cert)
	if len(rest) != 0 {
		t.Errorf("GenerateMemCert returned a cert with trailing content: %q", string(rest))
	}
	if block.Type != "CERTIFICATE" {
		t.Errorf("GenerateMemCert returned a cert with Type %q not \"CERTIFICATE\"", block.Type)
	}

	block, rest = pem.Decode(key)
	if len(rest) != 0 {
		t.Errorf("GenerateMemCert returned a key with trailing content: %q", string(rest))
	}
	if block.Type != "RSA PRIVATE KEY" {
		t.Errorf("GenerateMemCert returned a cert with Type %q not \"RSA PRIVATE KEY\"", block.Type)
	}
}

lxd-2.0.2/shared/container.go

package shared

import (
	"time"
)

type ContainerState struct {
	Status     string                           `json:"status"`
	StatusCode StatusCode                       `json:"status_code"`
	Disk       map[string]ContainerStateDisk    `json:"disk"`
	Memory     ContainerStateMemory             `json:"memory"`
	Network    map[string]ContainerStateNetwork `json:"network"`
	Pid        int64                            `json:"pid"`
	Processes  int64                            `json:"processes"`
}

type ContainerStateDisk struct {
	Usage int64 `json:"usage"`
}

type ContainerStateMemory struct {
	Usage         int64 `json:"usage"`
	UsagePeak     int64 `json:"usage_peak"`
	SwapUsage     int64 `json:"swap_usage"`
	SwapUsagePeak int64 `json:"swap_usage_peak"`
}

type ContainerStateNetwork struct {
	Addresses []ContainerStateNetworkAddress `json:"addresses"`
	Counters  ContainerStateNetworkCounters  `json:"counters"`
	Hwaddr    string                         `json:"hwaddr"`
	HostName  string                         `json:"host_name"`
	Mtu       int                            `json:"mtu"`
	State     string                         `json:"state"`
	Type      string                         `json:"type"`
}

type ContainerStateNetworkAddress struct {
	Family  string `json:"family"`
	Address string `json:"address"`
	Netmask string `json:"netmask"`
	Scope   string `json:"scope"`
}

type ContainerStateNetworkCounters struct {
	BytesReceived   int64 `json:"bytes_received"`
	BytesSent       int64 `json:"bytes_sent"`
	PacketsReceived int64 `json:"packets_received"`
	PacketsSent     int64 `json:"packets_sent"`
}

type ContainerExecControl struct {
	Command string            `json:"command"`
	Args    map[string]string `json:"args"`
}

type SnapshotInfo struct {
	Architecture    string            `json:"architecture"`
	Config          map[string]string `json:"config"`
	CreationDate    time.Time         `json:"created_at"`
	Devices         Devices           `json:"devices"`
	Ephemeral       bool              `json:"ephemeral"`
	ExpandedConfig  map[string]string `json:"expanded_config"`
	ExpandedDevices Devices           `json:"expanded_devices"`
	Name            string            `json:"name"`
	Profiles        []string          `json:"profiles"`
	Stateful        bool              `json:"stateful"`
}

type ContainerInfo struct {
	Architecture    string            `json:"architecture"`
	Config          map[string]string `json:"config"`
	CreationDate    time.Time         `json:"created_at"`
	Devices         Devices           `json:"devices"`
	Ephemeral       bool              `json:"ephemeral"`
	ExpandedConfig  map[string]string `json:"expanded_config"`
	ExpandedDevices Devices           `json:"expanded_devices"`
	Name            string            `json:"name"`
	Profiles        []string          `json:"profiles"`
	Stateful        bool              `json:"stateful"`
	Status          string            `json:"status"`
	StatusCode      StatusCode        `json:"status_code"`
}

func (c ContainerInfo) IsActive() bool {
	switch c.StatusCode {
	case Stopped:
		return false
	case Error:
		return false
	default:
		return true
	}
}

/*
 * BriefContainerState contains a subset of the fields in
 * ContainerState, namely those which a user may update
 */
type BriefContainerInfo struct {
	Name      string            `json:"name"`
	Profiles  []string          `json:"profiles"`
	Config    map[string]string `json:"config"`
	Devices   Devices           `json:"devices"`
	Ephemeral bool              `json:"ephemeral"`
}

func (c *ContainerInfo) Brief() BriefContainerInfo {
	retstate := BriefContainerInfo{Name: c.Name,
		Profiles:  c.Profiles,
		Config:    c.Config,
		Devices:   c.Devices,
		Ephemeral: c.Ephemeral}
	return retstate
}

func (c *ContainerInfo) BriefExpanded() BriefContainerInfo {
	retstate := BriefContainerInfo{Name: c.Name,
		Profiles:  c.Profiles,
		Config:    c.ExpandedConfig,
		Devices:   c.ExpandedDevices,
		Ephemeral: c.Ephemeral}
	return retstate
}

type ContainerAction string

const (
	Stop     ContainerAction = "stop"
	Start    ContainerAction = "start"
	Restart  ContainerAction = "restart"
	Freeze   ContainerAction = "freeze"
	Unfreeze ContainerAction = "unfreeze"
)

type ProfileConfig struct {
	Name        string            `json:"name"`
	Config      map[string]string `json:"config"`
	Description string            `json:"description"`
	Devices     Devices           `json:"devices"`
}

lxd-2.0.2/shared/devices.go

package shared

type Device map[string]string
type Devices map[string]Device

func (list Devices) ContainsName(k string) bool {
	if list[k] != nil {
		return true
	}
	return false
}

func (d Device) get(key string) string {
	return d[key]
}

func (list Devices) Contains(k string, d Device) bool {
	// If it didn't exist, it's different
	if list[k] == nil {
		return false
	}

	old := list[k]

	return deviceEquals(old, d)
}

func deviceEquals(old Device, d Device) bool {
	// Check for any difference and addition/removal of properties
	for k, _ := range d {
		if d[k] != old[k] {
			return false
		}
	}

	for k, _ := range old {
		if d[k] != old[k] {
			return false
		}
	}

	return true
}

func (old Devices) Update(newlist Devices) (map[string]Device, map[string]Device, map[string]Device) {
	rmlist := map[string]Device{}
	addlist := map[string]Device{}
	updatelist := map[string]Device{}

	for key, d := range old {
		if !newlist.Contains(key, d) {
			rmlist[key] = d
		}
	}

	for key, d := range newlist {
		if !old.Contains(key, d) {
			addlist[key] = d
		}
	}

	for key, d := range addlist {
		srcOldDevice := rmlist[key]
		var oldDevice Device
		err := DeepCopy(&srcOldDevice, &oldDevice)
		if err != nil {
			continue
		}

		srcNewDevice := newlist[key]
		var newDevice Device
		err = DeepCopy(&srcNewDevice, &newDevice)
		if err != nil {
			continue
		}

		for _, k := range []string{"limits.max", "limits.read", "limits.write", "limits.egress", "limits.ingress"} {
			delete(oldDevice, k)
			delete(newDevice, k)
		}

		if deviceEquals(oldDevice, newDevice) {
			delete(rmlist, key)
			delete(addlist, key)
			updatelist[key] = d
		}
	}

	return rmlist, addlist, updatelist
}

func (newBaseDevices Devices) ExtendFromProfile(currentFullDevices Devices, newDevicesFromProfile Devices) error {
	// For any entry which exists in a profile and doesn't in the container config, add it
	for name, newDev := range newDevicesFromProfile {
		if curDev, ok := currentFullDevices[name]; ok {
			newBaseDevices[name] = curDev
		} else {
			newBaseDevices[name] = newDev
		}
	}

	return nil
}

lxd-2.0.2/shared/flex.go

/* This is a FLEXible file which can be used by both client and daemon.
 * Teehee.
 */
package shared

var Version = "2.0.2"
var UserAgent = "LXD " + Version

/*
 * Please increment the api compat number every time you change the API.
 *
 * Version 1.0: ping
 */
var APIVersion = "1.0"

lxd-2.0.2/shared/gnuflag/export_test.go

// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package gnuflag

import (
	"os"
)

// Additional routines compiled into the package only during testing.

// ResetForTesting clears all flag state and sets the usage function as directed.
// After calling ResetForTesting, parse errors in flag handling will not
// exit the program.
func ResetForTesting(usage func()) {
	commandLine = NewFlagSet(os.Args[0], ContinueOnError)
	Usage = usage
}

// CommandLine returns the default FlagSet.
func CommandLine() *FlagSet {
	return commandLine
}

lxd-2.0.2/shared/gnuflag/flag.go

// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

/*
	Package flag implements command-line flag parsing in the GNU style.
	It is almost exactly the same as the standard flag package,
	the only difference being the extra argument to Parse.

	Command line flag syntax:
		-f              // single letter flag
		-fg             // two single letter flags together
		--flag          // multiple letter flag
		--flag x        // non-boolean flags only
		-f x            // non-boolean flags only
		-fx             // if f is a non-boolean flag, x is its argument.

	The last three forms are not permitted for boolean flags because the
	meaning of the command
		cmd -f *
	will change if there is a file called 0, false, etc.  There is currently
	no way to turn off a boolean flag.

	Flag parsing stops after the terminator "--", or just before the first
	non-flag argument ("-" is a non-flag argument) if the interspersed
	argument to Parse is false.
*/
package gnuflag

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"
	"sort"
	"strconv"
	"strings"
	"time"
	"unicode/utf8"
)

// ErrHelp is the error returned if the flag -help is invoked but no such flag is defined.
var ErrHelp = errors.New("flag: help requested")

// -- bool Value
type boolValue bool

func newBoolValue(val bool, p *bool) *boolValue {
	*p = val
	return (*boolValue)(p)
}

func (b *boolValue) Set(s string) error {
	v, err := strconv.ParseBool(s)
	*b = boolValue(v)
	return err
}

func (b *boolValue) String() string { return fmt.Sprintf("%v", *b) }

// -- int Value
type intValue int

func newIntValue(val int, p *int) *intValue {
	*p = val
	return (*intValue)(p)
}

func (i *intValue) Set(s string) error {
	v, err := strconv.ParseInt(s, 0, 64)
	*i = intValue(v)
	return err
}

func (i *intValue) String() string { return fmt.Sprintf("%v", *i) }

// -- int64 Value
type int64Value int64

func newInt64Value(val int64, p *int64) *int64Value {
	*p = val
	return (*int64Value)(p)
}

func (i *int64Value) Set(s string) error {
	v, err := strconv.ParseInt(s, 0, 64)
	*i = int64Value(v)
	return err
}

func (i *int64Value) String() string { return fmt.Sprintf("%v", *i) }

// -- uint Value
type uintValue uint

func newUintValue(val uint, p *uint) *uintValue {
	*p = val
	return (*uintValue)(p)
}

func (i *uintValue) Set(s string) error {
	v, err := strconv.ParseUint(s, 0, 64)
	*i = uintValue(v)
	return err
}

func (i *uintValue) String() string { return fmt.Sprintf("%v", *i) }

// -- uint64 Value
type uint64Value uint64

func newUint64Value(val uint64, p *uint64) *uint64Value {
	*p = val
	return (*uint64Value)(p)
}

func (i *uint64Value) Set(s string) error {
	v, err := strconv.ParseUint(s, 0, 64)
	*i = uint64Value(v)
	return err
}

func (i *uint64Value) String() string { return fmt.Sprintf("%v", *i) }

// -- string Value
type stringValue string

func newStringValue(val string, p *string) *stringValue {
	*p = val
	return (*stringValue)(p)
}

func (s *stringValue) Set(val string) error {
	*s = stringValue(val)
	return nil
}

func (s
*stringValue) String() string { return fmt.Sprintf("%s", *s) }

// -- float64 Value
type float64Value float64

func newFloat64Value(val float64, p *float64) *float64Value {
	*p = val
	return (*float64Value)(p)
}

func (f *float64Value) Set(s string) error {
	v, err := strconv.ParseFloat(s, 64)
	*f = float64Value(v)
	return err
}

func (f *float64Value) String() string { return fmt.Sprintf("%v", *f) }

// -- time.Duration Value
type durationValue time.Duration

func newDurationValue(val time.Duration, p *time.Duration) *durationValue {
	*p = val
	return (*durationValue)(p)
}

func (d *durationValue) Set(s string) error {
	v, err := time.ParseDuration(s)
	*d = durationValue(v)
	return err
}

func (d *durationValue) String() string { return (*time.Duration)(d).String() }

// Value is the interface to the dynamic value stored in a flag.
// (The default value is represented as a string.)
type Value interface {
	String() string
	Set(string) error
}

// ErrorHandling defines how to handle flag parsing errors.
type ErrorHandling int

const (
	ContinueOnError ErrorHandling = iota
	ExitOnError
	PanicOnError
)

// A FlagSet represents a set of defined flags.
type FlagSet struct {
	// Usage is the function called when an error occurs while parsing flags.
	// The field is a function (not a method) that may be changed to point to
	// a custom error handler.
	Usage func()

	name             string
	parsed           bool
	actual           map[string]*Flag
	formal           map[string]*Flag
	args             []string // arguments after flags
	procArgs         []string // arguments being processed (gnu only)
	procFlag         string   // flag being processed (gnu only)
	allowIntersperse bool     // (gnu only)
	exitOnError      bool     // does the program exit if there's an error?
	errorHandling    ErrorHandling
	output           io.Writer // nil means stderr; use out() accessor
}

// A Flag represents the state of a flag.
type Flag struct {
	Name     string // name as it appears on command line
	Usage    string // help message
	Value    Value  // value as set
	DefValue string // default value (as text); for usage message
}

// sortFlags returns the flags as a slice in lexicographical sorted order.
func sortFlags(flags map[string]*Flag) []*Flag {
	list := make(sort.StringSlice, len(flags))
	i := 0
	for _, f := range flags {
		list[i] = f.Name
		i++
	}
	list.Sort()
	result := make([]*Flag, len(list))
	for i, name := range list {
		result[i] = flags[name]
	}
	return result
}

func (f *FlagSet) out() io.Writer {
	if f.output == nil {
		return os.Stderr
	}
	return f.output
}

// SetOutput sets the destination for usage and error messages.
// If output is nil, os.Stderr is used.
func (f *FlagSet) SetOutput(output io.Writer) {
	f.output = output
}

// VisitAll visits the flags in lexicographical order, calling fn for each.
// It visits all flags, even those not set.
func (f *FlagSet) VisitAll(fn func(*Flag)) {
	for _, flag := range sortFlags(f.formal) {
		fn(flag)
	}
}

// VisitAll visits the command-line flags in lexicographical order, calling
// fn for each.  It visits all flags, even those not set.
func VisitAll(fn func(*Flag)) {
	commandLine.VisitAll(fn)
}

// Visit visits the flags in lexicographical order, calling fn for each.
// It visits only those flags that have been set.
func (f *FlagSet) Visit(fn func(*Flag)) {
	for _, flag := range sortFlags(f.actual) {
		fn(flag)
	}
}

// Visit visits the command-line flags in lexicographical order, calling fn
// for each.  It visits only those flags that have been set.
func Visit(fn func(*Flag)) {
	commandLine.Visit(fn)
}

// Lookup returns the Flag structure of the named flag, returning nil if none exists.
func (f *FlagSet) Lookup(name string) *Flag {
	return f.formal[name]
}

// Lookup returns the Flag structure of the named command-line flag,
// returning nil if none exists.
func Lookup(name string) *Flag {
	return commandLine.formal[name]
}

// Set sets the value of the named flag.
func (f *FlagSet) Set(name, value string) error {
	flag, ok := f.formal[name]
	if !ok {
		return fmt.Errorf("no such flag -%v", name)
	}
	err := flag.Value.Set(value)
	if err != nil {
		return err
	}
	if f.actual == nil {
		f.actual = make(map[string]*Flag)
	}
	f.actual[name] = flag
	return nil
}

// Set sets the value of the named command-line flag.
func Set(name, value string) error {
	return commandLine.Set(name, value)
}

// flagsByLength is a slice of flags implementing sort.Interface,
// sorting primarily by the length of the flag, and secondarily
// alphabetically.
type flagsByLength []*Flag

func (f flagsByLength) Less(i, j int) bool {
	s1, s2 := f[i].Name, f[j].Name
	if len(s1) != len(s2) {
		return len(s1) < len(s2)
	}
	return s1 < s2
}
func (f flagsByLength) Swap(i, j int) {
	f[i], f[j] = f[j], f[i]
}
func (f flagsByLength) Len() int {
	return len(f)
}

// flagsByName is a slice of slices of flags implementing sort.Interface,
// alphabetically sorting by the name of the first flag in each slice.
type flagsByName [][]*Flag

func (f flagsByName) Less(i, j int) bool {
	return f[i][0].Name < f[j][0].Name
}
func (f flagsByName) Swap(i, j int) {
	f[i], f[j] = f[j], f[i]
}
func (f flagsByName) Len() int {
	return len(f)
}

// PrintDefaults prints, to standard error unless configured
// otherwise, the default values of all defined flags in the set.
// If there is more than one name for a given flag, the usage information and
// default value from the shortest will be printed (or the least alphabetically
// if there are several equally short flag names).
func (f *FlagSet) PrintDefaults() {
	// group together all flags for a given value
	flags := make(map[interface{}]flagsByLength)
	f.VisitAll(func(f *Flag) {
		flags[f.Value] = append(flags[f.Value], f)
	})

	// sort the output flags by shortest name for each group.
	var byName flagsByName
	for _, f := range flags {
		sort.Sort(f)
		byName = append(byName, f)
	}
	sort.Sort(byName)
	var line bytes.Buffer
	for _, fs := range byName {
		line.Reset()
		for i, f := range fs {
			if i > 0 {
				line.WriteString(", ")
			}
			line.WriteString(flagWithMinus(f.Name))
		}
		format := " %s (= %s)\n %s\n"
		if _, ok := fs[0].Value.(*stringValue); ok {
			// put quotes on the value
			format = " %s (= %q)\n %s\n"
		}
		fmt.Fprintf(f.out(), format, line.Bytes(), fs[0].DefValue, fs[0].Usage)
	}
}

// PrintDefaults prints to standard error the default values of all defined command-line flags.
func PrintDefaults() {
	commandLine.PrintDefaults()
}

// defaultUsage is the default function to print a usage message.
func defaultUsage(f *FlagSet) {
	fmt.Fprintf(f.out(), "Usage of %s:\n", f.name)
	f.PrintDefaults()
}

// NOTE: Usage is not just defaultUsage(commandLine)
// because it serves (via godoc flag Usage) as the example
// for how to write your own usage function.

// Usage prints to standard error a usage message documenting all defined command-line flags.
// The function is a variable that may be changed to point to a custom function.
var Usage = func() {
	fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
	PrintDefaults()
}

// NFlag returns the number of flags that have been set.
func (f *FlagSet) NFlag() int { return len(f.actual) }

// NFlag returns the number of command-line flags that have been set.
func NFlag() int { return len(commandLine.actual) }

// Arg returns the i'th argument.  Arg(0) is the first remaining argument
// after flags have been processed.
func (f *FlagSet) Arg(i int) string {
	if i < 0 || i >= len(f.args) {
		return ""
	}
	return f.args[i]
}

// Arg returns the i'th command-line argument.  Arg(0) is the first remaining argument
// after flags have been processed.
func Arg(i int) string {
	return commandLine.Arg(i)
}

// NArg is the number of arguments remaining after flags have been processed.
func (f *FlagSet) NArg() int { return len(f.args) }

// NArg is the number of arguments remaining after flags have been processed.
func NArg() int { return len(commandLine.args) }

// Args returns the non-flag arguments.
func (f *FlagSet) Args() []string { return f.args }

// Args returns the non-flag command-line arguments.
func Args() []string { return commandLine.args }

// BoolVar defines a bool flag with specified name, default value, and usage string.
// The argument p points to a bool variable in which to store the value of the flag.
func (f *FlagSet) BoolVar(p *bool, name string, value bool, usage string) {
	f.Var(newBoolValue(value, p), name, usage)
}

// BoolVar defines a bool flag with specified name, default value, and usage string.
// The argument p points to a bool variable in which to store the value of the flag.
func BoolVar(p *bool, name string, value bool, usage string) {
	commandLine.Var(newBoolValue(value, p), name, usage)
}

// Bool defines a bool flag with specified name, default value, and usage string.
// The return value is the address of a bool variable that stores the value of the flag.
func (f *FlagSet) Bool(name string, value bool, usage string) *bool {
	p := new(bool)
	f.BoolVar(p, name, value, usage)
	return p
}

// Bool defines a bool flag with specified name, default value, and usage string.
// The return value is the address of a bool variable that stores the value of the flag.
func Bool(name string, value bool, usage string) *bool {
	return commandLine.Bool(name, value, usage)
}

// IntVar defines an int flag with specified name, default value, and usage string.
// The argument p points to an int variable in which to store the value of the flag.
func (f *FlagSet) IntVar(p *int, name string, value int, usage string) {
	f.Var(newIntValue(value, p), name, usage)
}

// IntVar defines an int flag with specified name, default value, and usage string.
// The argument p points to an int variable in which to store the value of the flag.
func IntVar(p *int, name string, value int, usage string) {
	commandLine.Var(newIntValue(value, p), name, usage)
}

// Int defines an int flag with specified name, default value, and usage string.
// The return value is the address of an int variable that stores the value of the flag.
func (f *FlagSet) Int(name string, value int, usage string) *int {
	p := new(int)
	f.IntVar(p, name, value, usage)
	return p
}

// Int defines an int flag with specified name, default value, and usage string.
// The return value is the address of an int variable that stores the value of the flag.
func Int(name string, value int, usage string) *int {
	return commandLine.Int(name, value, usage)
}

// Int64Var defines an int64 flag with specified name, default value, and usage string.
// The argument p points to an int64 variable in which to store the value of the flag.
func (f *FlagSet) Int64Var(p *int64, name string, value int64, usage string) {
	f.Var(newInt64Value(value, p), name, usage)
}

// Int64Var defines an int64 flag with specified name, default value, and usage string.
// The argument p points to an int64 variable in which to store the value of the flag.
func Int64Var(p *int64, name string, value int64, usage string) {
	commandLine.Var(newInt64Value(value, p), name, usage)
}

// Int64 defines an int64 flag with specified name, default value, and usage string.
// The return value is the address of an int64 variable that stores the value of the flag.
func (f *FlagSet) Int64(name string, value int64, usage string) *int64 {
	p := new(int64)
	f.Int64Var(p, name, value, usage)
	return p
}

// Int64 defines an int64 flag with specified name, default value, and usage string.
// The return value is the address of an int64 variable that stores the value of the flag.
func Int64(name string, value int64, usage string) *int64 {
	return commandLine.Int64(name, value, usage)
}

// UintVar defines a uint flag with specified name, default value, and usage string.
// The argument p points to a uint variable in which to store the value of the flag.
func (f *FlagSet) UintVar(p *uint, name string, value uint, usage string) {
	f.Var(newUintValue(value, p), name, usage)
}

// UintVar defines a uint flag with specified name, default value, and usage string.
// The argument p points to a uint variable in which to store the value of the flag.
func UintVar(p *uint, name string, value uint, usage string) {
	commandLine.Var(newUintValue(value, p), name, usage)
}

// Uint defines a uint flag with specified name, default value, and usage string.
// The return value is the address of a uint variable that stores the value of the flag.
func (f *FlagSet) Uint(name string, value uint, usage string) *uint {
	p := new(uint)
	f.UintVar(p, name, value, usage)
	return p
}

// Uint defines a uint flag with specified name, default value, and usage string.
// The return value is the address of a uint variable that stores the value of the flag.
func Uint(name string, value uint, usage string) *uint {
	return commandLine.Uint(name, value, usage)
}

// Uint64Var defines a uint64 flag with specified name, default value, and usage string.
// The argument p points to a uint64 variable in which to store the value of the flag.
func (f *FlagSet) Uint64Var(p *uint64, name string, value uint64, usage string) {
	f.Var(newUint64Value(value, p), name, usage)
}

// Uint64Var defines a uint64 flag with specified name, default value, and usage string.
// The argument p points to a uint64 variable in which to store the value of the flag.
func Uint64Var(p *uint64, name string, value uint64, usage string) {
	commandLine.Var(newUint64Value(value, p), name, usage)
}

// Uint64 defines a uint64 flag with specified name, default value, and usage string.
// The return value is the address of a uint64 variable that stores the value of the flag.
func (f *FlagSet) Uint64(name string, value uint64, usage string) *uint64 {
	p := new(uint64)
	f.Uint64Var(p, name, value, usage)
	return p
}

// Uint64 defines a uint64 flag with specified name, default value, and usage string.
// The return value is the address of a uint64 variable that stores the value of the flag.
func Uint64(name string, value uint64, usage string) *uint64 {
	return commandLine.Uint64(name, value, usage)
}

// StringVar defines a string flag with specified name, default value, and usage string.
// The argument p points to a string variable in which to store the value of the flag.
func (f *FlagSet) StringVar(p *string, name string, value string, usage string) {
	f.Var(newStringValue(value, p), name, usage)
}

// StringVar defines a string flag with specified name, default value, and usage string.
// The argument p points to a string variable in which to store the value of the flag.
func StringVar(p *string, name string, value string, usage string) {
	commandLine.Var(newStringValue(value, p), name, usage)
}

// String defines a string flag with specified name, default value, and usage string.
// The return value is the address of a string variable that stores the value of the flag.
func (f *FlagSet) String(name string, value string, usage string) *string {
	p := new(string)
	f.StringVar(p, name, value, usage)
	return p
}

// String defines a string flag with specified name, default value, and usage string.
// The return value is the address of a string variable that stores the value of the flag.
func String(name string, value string, usage string) *string {
	return commandLine.String(name, value, usage)
}

// Float64Var defines a float64 flag with specified name, default value, and usage string.
// The argument p points to a float64 variable in which to store the value of the flag.
func (f *FlagSet) Float64Var(p *float64, name string, value float64, usage string) {
	f.Var(newFloat64Value(value, p), name, usage)
}

// Float64Var defines a float64 flag with specified name, default value, and usage string.
// The argument p points to a float64 variable in which to store the value of the flag.
func Float64Var(p *float64, name string, value float64, usage string) {
	commandLine.Var(newFloat64Value(value, p), name, usage)
}

// Float64 defines a float64 flag with specified name, default value, and usage string.
// The return value is the address of a float64 variable that stores the value of the flag.
func (f *FlagSet) Float64(name string, value float64, usage string) *float64 {
	p := new(float64)
	f.Float64Var(p, name, value, usage)
	return p
}

// Float64 defines a float64 flag with specified name, default value, and usage string.
// The return value is the address of a float64 variable that stores the value of the flag.
func Float64(name string, value float64, usage string) *float64 {
	return commandLine.Float64(name, value, usage)
}

// DurationVar defines a time.Duration flag with specified name, default value, and usage string.
// The argument p points to a time.Duration variable in which to store the value of the flag.
func (f *FlagSet) DurationVar(p *time.Duration, name string, value time.Duration, usage string) {
	f.Var(newDurationValue(value, p), name, usage)
}

// DurationVar defines a time.Duration flag with specified name, default value, and usage string.
// The argument p points to a time.Duration variable in which to store the value of the flag.
func DurationVar(p *time.Duration, name string, value time.Duration, usage string) {
	commandLine.Var(newDurationValue(value, p), name, usage)
}

// Duration defines a time.Duration flag with specified name, default value, and usage string.
// The return value is the address of a time.Duration variable that stores the value of the flag.
func (f *FlagSet) Duration(name string, value time.Duration, usage string) *time.Duration { p := new(time.Duration) f.DurationVar(p, name, value, usage) return p } // Duration defines a time.Duration flag with specified name, default value, and usage string. // The return value is the address of a time.Duration variable that stores the value of the flag. func Duration(name string, value time.Duration, usage string) *time.Duration { return commandLine.Duration(name, value, usage) } // Var defines a flag with the specified name and usage string. The type and // value of the flag are represented by the first argument, of type Value, which // typically holds a user-defined implementation of Value. For instance, the // caller could create a flag that turns a comma-separated string into a slice // of strings by giving the slice the methods of Value; in particular, Set would // decompose the comma-separated string into the slice. func (f *FlagSet) Var(value Value, name string, usage string) { // Remember the default value as a string; it won't change. flag := &Flag{name, usage, value, value.String()} _, alreadythere := f.formal[name] if alreadythere { fmt.Fprintf(f.out(), "%s flag redefined: %s\n", f.name, name) panic("flag redefinition") // Happens only if flags are declared with identical names } if f.formal == nil { f.formal = make(map[string]*Flag) } f.formal[name] = flag } // Var defines a flag with the specified name and usage string. The type and // value of the flag are represented by the first argument, of type Value, which // typically holds a user-defined implementation of Value. For instance, the // caller could create a flag that turns a comma-separated string into a slice // of strings by giving the slice the methods of Value; in particular, Set would // decompose the comma-separated string into the slice. 
func Var(value Value, name string, usage string) { commandLine.Var(value, name, usage) } // failf prints to standard error a formatted error and usage message and // returns the error. func (f *FlagSet) failf(format string, a ...interface{}) error { err := fmt.Errorf(format, a...) fmt.Fprintf(f.out(), "error: %v\n", err) return err } // usage calls the Usage method for the flag set, or the usage function if // the flag set is commandLine. func (f *FlagSet) usage() { if f == commandLine { Usage() } else if f.Usage == nil { defaultUsage(f) } else { f.Usage() } } func (f *FlagSet) parseOneGnu() (flagName string, long, finished bool, err error) { if len(f.procArgs) == 0 { finished = true return } // processing previously encountered single-rune flag if flag := f.procFlag; len(flag) > 0 { _, n := utf8.DecodeRuneInString(flag) f.procFlag = flag[n:] flagName = flag[0:n] return } a := f.procArgs[0] // one non-flag argument if a == "-" || a == "" || a[0] != '-' { if f.allowIntersperse { f.args = append(f.args, a) f.procArgs = f.procArgs[1:] return } f.args = append(f.args, f.procArgs...) f.procArgs = nil finished = true return } // end of flags if f.procArgs[0] == "--" { f.args = append(f.args, f.procArgs[1:]...) 
f.procArgs = nil finished = true return } // long flag signified with "--" prefix if a[1] == '-' { long = true i := strings.Index(a, "=") if i < 0 { f.procArgs = f.procArgs[1:] flagName = a[2:] return } flagName = a[2:i] if flagName == "" { err = fmt.Errorf("empty flag in argument %q", a) return } f.procArgs = f.procArgs[1:] f.procFlag = a[i:] return } // some number of single-rune flags a = a[1:] _, n := utf8.DecodeRuneInString(a) flagName = a[0:n] f.procFlag = a[n:] f.procArgs = f.procArgs[1:] return } func flagWithMinus(name string) string { if len(name) > 1 { return "--" + name } return "-" + name } func (f *FlagSet) parseGnuFlagArg(name string, long bool) (finished bool, err error) { m := f.formal flag, alreadythere := m[name] // BUG if !alreadythere { if name == "help" || name == "h" { // special case for nice help message. f.usage() return false, ErrHelp } // TODO print --xxx when flag is more than one rune. return false, f.failf("flag provided but not defined: %s", flagWithMinus(name)) } if fv, ok := flag.Value.(*boolValue); ok && !strings.HasPrefix(f.procFlag, "=") { // special case: doesn't need an arg, and an arg hasn't // been provided explicitly. fv.Set("true") } else { // It must have a value, which might be the next argument. 
var hasValue bool var value string if f.procFlag != "" { // value directly follows flag value = f.procFlag if long { if value[0] != '=' { panic("no leading '=' in long flag") } value = value[1:] } hasValue = true f.procFlag = "" } if !hasValue && len(f.procArgs) > 0 { // value is the next arg hasValue = true value, f.procArgs = f.procArgs[0], f.procArgs[1:] } if !hasValue { return false, f.failf("flag needs an argument: %s", flagWithMinus(name)) } if err := flag.Value.Set(value); err != nil { return false, f.failf("invalid value %q for flag %s: %v", value, flagWithMinus(name), err) } } if f.actual == nil { f.actual = make(map[string]*Flag) } f.actual[name] = flag return } // Parse parses flag definitions from the argument list, which should not // include the command name. Must be called after all flags in the FlagSet // are defined and before flags are accessed by the program. // The return value will be ErrHelp if --help was set but not defined. // If allowIntersperse is set, arguments and flags can be interspersed, that // is flags can follow positional arguments. func (f *FlagSet) Parse(allowIntersperse bool, arguments []string) error { f.parsed = true f.procArgs = arguments f.procFlag = "" f.args = nil f.allowIntersperse = allowIntersperse for { name, long, finished, err := f.parseOneGnu() if !finished { if name != "" { finished, err = f.parseGnuFlagArg(name, long) } } if err != nil { switch f.errorHandling { case ContinueOnError: return err case ExitOnError: os.Exit(2) case PanicOnError: panic(err) } } if !finished { continue } if err == nil { break } } return nil } // Parsed reports whether f.Parse has been called. func (f *FlagSet) Parsed() bool { return f.parsed } // Parse parses the command-line flags from os.Args[1:]. Must be called // after all flags are defined and before flags are accessed by the program. // If allowIntersperse is set, arguments and flags can be interspersed, that // is flags can follow positional arguments. 
func Parse(allowIntersperse bool) { // Ignore errors; commandLine is set for ExitOnError. commandLine.Parse(allowIntersperse, os.Args[1:]) } // Parsed returns true if the command-line flags have been parsed. func Parsed() bool { return commandLine.Parsed() } // The default set of command-line flags, parsed from os.Args. var commandLine = NewFlagSet(os.Args[0], ExitOnError) // SetOut sets the output writer for the default FlagSet. func SetOut(w io.Writer) { commandLine.output = w } // NewFlagSet returns a new, empty flag set with the specified name and // error handling property. func NewFlagSet(name string, errorHandling ErrorHandling) *FlagSet { f := &FlagSet{ name: name, errorHandling: errorHandling, } return f } // Init sets the name and error handling property for a flag set. // By default, the zero FlagSet uses an empty name and the // ContinueOnError error handling policy. func (f *FlagSet) Init(name string, errorHandling ErrorHandling) { f.name = name f.errorHandling = errorHandling } lxd-2.0.2/shared/gnuflag/flag_test.go000066400000000000000000000272141272140510300174730ustar00rootroot00000000000000// Copyright 2009 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package gnuflag_test import ( "bytes" "fmt" "os" "reflect" "sort" "strings" "testing" "time" . 
"github.com/lxc/lxd/shared/gnuflag" ) var ( test_bool = Bool("test_bool", false, "bool value") test_int = Int("test_int", 0, "int value") test_int64 = Int64("test_int64", 0, "int64 value") test_uint = Uint("test_uint", 0, "uint value") test_uint64 = Uint64("test_uint64", 0, "uint64 value") test_string = String("test_string", "0", "string value") test_float64 = Float64("test_float64", 0, "float64 value") test_duration = Duration("test_duration", 0, "time.Duration value") ) func boolString(s string) string { if s == "0" { return "false" } return "true" } func TestEverything(t *testing.T) { m := make(map[string]*Flag) desired := "0" visitor := func(f *Flag) { if len(f.Name) > 5 && f.Name[0:5] == "test_" { m[f.Name] = f ok := false switch { case f.Value.String() == desired: ok = true case f.Name == "test_bool" && f.Value.String() == boolString(desired): ok = true case f.Name == "test_duration" && f.Value.String() == desired+"s": ok = true } if !ok { t.Error("Visit: bad value", f.Value.String(), "for", f.Name) } } } VisitAll(visitor) if len(m) != 8 { t.Error("VisitAll misses some flags") for k, v := range m { t.Log(k, *v) } } m = make(map[string]*Flag) Visit(visitor) if len(m) != 0 { t.Errorf("Visit sees unset flags") for k, v := range m { t.Log(k, *v) } } // Now set all flags Set("test_bool", "true") Set("test_int", "1") Set("test_int64", "1") Set("test_uint", "1") Set("test_uint64", "1") Set("test_string", "1") Set("test_float64", "1") Set("test_duration", "1s") desired = "1" Visit(visitor) if len(m) != 8 { t.Error("Visit fails after set") for k, v := range m { t.Log(k, *v) } } // Now test they're visited in sort order. 
var flagNames []string Visit(func(f *Flag) { flagNames = append(flagNames, f.Name) }) if !sort.StringsAreSorted(flagNames) { t.Errorf("flag names not sorted: %v", flagNames) } } func TestUsage(t *testing.T) { called := false ResetForTesting(func() { called = true }) f := CommandLine() f.SetOutput(nullWriter{}) if f.Parse(true, []string{"-x"}) == nil { t.Error("parse did not fail for unknown flag") } if called { t.Error("called Usage for unknown flag") } } var parseTests = []struct { about string intersperse bool args []string vals map[string]interface{} remaining []string error string }{{ about: "regular args", intersperse: true, args: []string{ "--bool2", "--int", "22", "--int64", "0x23", "--uint", "24", "--uint64", "25", "--string", "hello", "--float64", "2718e28", "--duration", "2m", "one - extra - argument", }, vals: map[string]interface{}{ "bool": false, "bool2": true, "int": 22, "int64": int64(0x23), "uint": uint(24), "uint64": uint64(25), "string": "hello", "float64": 2718e28, "duration": 2 * 60 * time.Second, }, remaining: []string{ "one - extra - argument", }, }, { about: "playing with -", intersperse: true, args: []string{ "-a", "-", "-bc", "2", "-de1s", "-f2s", "-g", "3s", "--h", "--long", "--long2", "-4s", "3", "4", "--", "-5", }, vals: map[string]interface{}{ "a": true, "b": true, "c": true, "d": true, "e": "1s", "f": "2s", "g": "3s", "h": true, "long": true, "long2": "-4s", "z": "default", "www": 99, }, remaining: []string{ "-", "2", "3", "4", "-5", }, }, { about: "flag after explicit --", intersperse: true, args: []string{ "-a", "--", "-b", }, vals: map[string]interface{}{ "a": true, "b": false, }, remaining: []string{ "-b", }, }, { about: "flag after end", args: []string{ "-a", "foo", "-b", }, vals: map[string]interface{}{ "a": true, "b": false, }, remaining: []string{ "foo", "-b", }, }, { about: "arg and flag after explicit end", args: []string{ "-a", "--", "foo", "-b", }, vals: map[string]interface{}{ "a": true, "b": false, }, remaining: []string{ 
"foo", "-b", }, }, { about: "boolean args, explicitly and non-explicitly given", args: []string{ "--a=false", "--b=true", "--c", }, vals: map[string]interface{}{ "a": false, "b": true, "c": true, }, }, { about: "using =", args: []string{ "--arble=bar", "--bletch=", "--a=something", "-b=other", "-cdand more", "--curdle=--milk", "--sandwich", "=", "--darn=", "=arg", }, vals: map[string]interface{}{ "arble": "bar", "bletch": "", "a": "something", "b": "=other", "c": true, "d": "and more", "curdle": "--milk", "sandwich": "=", "darn": "", }, remaining: []string{"=arg"}, }, { about: "empty flag #1", args: []string{ "--=bar", }, error: `empty flag in argument "--=bar"`, }, { about: "single-letter equals", args: []string{ "-=bar", }, error: `flag provided but not defined: -=`, }, { about: "empty flag #2", args: []string{ "--=", }, error: `empty flag in argument "--="`, }, { about: "no equals", args: []string{ "-=", }, error: `flag provided but not defined: -=`, }, { args: []string{ "-a=true", }, vals: map[string]interface{}{ "a": true, }, error: `invalid value "=true" for flag -a: strconv.ParseBool: parsing "=true": invalid syntax`, }, { intersperse: true, args: []string{ "-a", "-b", }, vals: map[string]interface{}{ "a": true, }, error: "flag provided but not defined: -b", }, { intersperse: true, args: []string{ "-a", }, vals: map[string]interface{}{ "a": "default", }, error: "flag needs an argument: -a", }, { intersperse: true, args: []string{ "-a", "b", }, vals: map[string]interface{}{ "a": 0, }, error: `invalid value "b" for flag -a: strconv.ParseInt: parsing "b": invalid syntax`, }, } func testParse(newFlagSet func() *FlagSet, t *testing.T) { for i, g := range parseTests { t.Logf("test %d. 
%s", i, g.about) f := newFlagSet() flags := make(map[string]interface{}) for name, val := range g.vals { switch val.(type) { case bool: flags[name] = f.Bool(name, false, "bool value "+name) case string: flags[name] = f.String(name, "default", "string value "+name) case int: flags[name] = f.Int(name, 99, "int value "+name) case uint: flags[name] = f.Uint(name, 0, "uint value") case uint64: flags[name] = f.Uint64(name, 0, "uint64 value") case int64: flags[name] = f.Int64(name, 0, "uint64 value") case float64: flags[name] = f.Float64(name, 0, "float64 value") case time.Duration: flags[name] = f.Duration(name, 5*time.Second, "duration value") default: t.Fatalf("unhandled type %T", val) } } err := f.Parse(g.intersperse, g.args) if g.error != "" { if err == nil { t.Errorf("expected error %q got nil", g.error) } else if err.Error() != g.error { t.Errorf("expected error %q got %q", g.error, err.Error()) } continue } for name, val := range g.vals { actual := reflect.ValueOf(flags[name]).Elem().Interface() if val != actual { t.Errorf("flag %q, expected %v got %v", name, val, actual) } } if len(f.Args()) != len(g.remaining) { t.Fatalf("remaining args, expected %q got %q", g.remaining, f.Args()) } for j, a := range f.Args() { if a != g.remaining[j] { t.Errorf("arg %d, expected %q got %q", j, g.remaining[i], a) } } } } func TestParse(t *testing.T) { testParse(func() *FlagSet { ResetForTesting(func() {}) f := CommandLine() f.SetOutput(nullWriter{}) return f }, t) } func TestFlagSetParse(t *testing.T) { testParse(func() *FlagSet { f := NewFlagSet("test", ContinueOnError) f.SetOutput(nullWriter{}) return f }, t) } // Declare a user-defined flag type. 
type flagVar []string func (f *flagVar) String() string { return fmt.Sprint([]string(*f)) } func (f *flagVar) Set(value string) error { *f = append(*f, value) return nil } func TestUserDefined(t *testing.T) { var flags FlagSet flags.Init("test", ContinueOnError) var v flagVar flags.Var(&v, "v", "usage") if err := flags.Parse(true, []string{"-v", "1", "-v", "2", "-v3"}); err != nil { t.Error(err) } if len(v) != 3 { t.Fatal("expected 3 args; got ", len(v)) } expect := "[1 2 3]" if v.String() != expect { t.Errorf("expected value %q got %q", expect, v.String()) } } func TestSetOutput(t *testing.T) { var flags FlagSet var buf bytes.Buffer flags.SetOutput(&buf) flags.Init("test", ContinueOnError) flags.Parse(true, []string{"-unknown"}) if out := buf.String(); !strings.Contains(out, "-unknown") { t.Logf("expected output mentioning unknown; got %q", out) } } // This tests that one can reset the flags. This still works but not well, and is // superseded by FlagSet. func TestChangingArgs(t *testing.T) { ResetForTesting(func() { t.Fatal("bad parse") }) oldArgs := os.Args defer func() { os.Args = oldArgs }() os.Args = []string{"cmd", "--before", "subcmd", "--after", "args"} before := Bool("before", false, "") if err := CommandLine().Parse(false, os.Args[1:]); err != nil { t.Fatal(err) } cmd := Arg(0) os.Args = Args() after := Bool("after", false, "") Parse(false) args := Args() if !*before || cmd != "subcmd" || !*after || len(args) != 1 || args[0] != "args" { t.Fatalf("expected true subcmd true [args] got %v %v %v %v", *before, cmd, *after, args) } } // Test that -help invokes the usage message and returns ErrHelp. 
func TestHelp(t *testing.T) { var helpCalled = false fs := NewFlagSet("help test", ContinueOnError) fs.SetOutput(nullWriter{}) fs.Usage = func() { helpCalled = true } var flag bool fs.BoolVar(&flag, "flag", false, "regular flag") // Regular flag invocation should work err := fs.Parse(true, []string{"--flag"}) if err != nil { t.Fatal("expected no error; got ", err) } if !flag { t.Error("flag was not set by --flag") } if helpCalled { t.Error("help called for regular flag") helpCalled = false // reset for next test } // Help flag should work as expected. err = fs.Parse(true, []string{"--help"}) if err == nil { t.Fatal("error expected") } if err != ErrHelp { t.Fatal("expected ErrHelp; got ", err) } if !helpCalled { t.Fatal("help was not called") } // If we define a help flag, that should override. var help bool fs.BoolVar(&help, "help", false, "help flag") helpCalled = false err = fs.Parse(true, []string{"--help"}) if err != nil { t.Fatal("expected no error for defined --help; got ", err) } if helpCalled { t.Fatal("help was called; should not have been for defined help flag") } } type nullWriter struct{} func (nullWriter) Write(buf []byte) (int, error) { return len(buf), nil } func TestPrintDefaults(t *testing.T) { f := NewFlagSet("print test", ContinueOnError) f.SetOutput(nullWriter{}) var b bool var c int var d string var e float64 f.IntVar(&c, "trapclap", 99, "usage not shown") f.IntVar(&c, "c", 99, "c usage") f.BoolVar(&b, "bal", false, "usage not shown") f.BoolVar(&b, "x", false, "usage not shown") f.BoolVar(&b, "b", false, "b usage") f.BoolVar(&b, "balalaika", false, "usage not shown") f.StringVar(&d, "d", "d default", "d usage") f.Float64Var(&e, "elephant", 3.14, "elephant usage") var buf bytes.Buffer f.SetOutput(&buf) f.PrintDefaults() f.SetOutput(nullWriter{}) expect := ` -b, -x, --bal, --balalaika (= false) b usage -c, --trapclap (= 99) c usage -d (= "d default") d usage --elephant (= 3.14) elephant usage ` if buf.String() != expect { t.Errorf("expect %q got 
%q", expect, buf.String()) } }
lxd-2.0.2/shared/i18n/i18n.go

// +build !linux

package i18n

func G(msgid string) string {
	return msgid
}

func NG(msgid string, msgidPlural string, n uint64) string {
	return msgid
}

lxd-2.0.2/shared/i18n/i18n_linux.go

// +build linux

package i18n

import (
	"github.com/gosexy/gettext"
)

var TEXTDOMAIN = "lxd"

func G(msgid string) string {
	return gettext.DGettext(TEXTDOMAIN, msgid)
}

func NG(msgid string, msgidPlural string, n uint64) string {
	return gettext.DNGettext(TEXTDOMAIN, msgid, msgidPlural, n)
}

func init() {
	gettext.SetLocale(gettext.LC_ALL, "")
}

lxd-2.0.2/shared/idmapset_linux.go

package shared

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"os/user"
	"path"
	"path/filepath"
	"strconv"
	"strings"
)

/*
 * One entry in id mapping set - a single range of either
 * uid or gid mappings.
 */
type IdmapEntry struct {
	Isuid    bool
	Isgid    bool
	Hostid   int // id as seen on the host - i.e. 100000
	Nsid     int // id as seen in the ns - i.e.

0 Maprange int } func (e *IdmapEntry) ToLxcString() string { if e.Isuid { return fmt.Sprintf("u %d %d %d", e.Nsid, e.Hostid, e.Maprange) } return fmt.Sprintf("g %d %d %d", e.Nsid, e.Hostid, e.Maprange) } func is_between(x, low, high int) bool { return x >= low && x < high } func (e *IdmapEntry) Intersects(i IdmapEntry) bool { if (e.Isuid && i.Isuid) || (e.Isgid && i.Isgid) { switch { case is_between(e.Hostid, i.Hostid, i.Hostid+i.Maprange): return true case is_between(i.Hostid, e.Hostid, e.Hostid+e.Maprange): return true case is_between(e.Hostid+e.Maprange, i.Hostid, i.Hostid+i.Maprange): return true case is_between(i.Hostid+e.Maprange, e.Hostid, e.Hostid+e.Maprange): return true case is_between(e.Nsid, i.Nsid, i.Nsid+i.Maprange): return true case is_between(i.Nsid, e.Nsid, e.Nsid+e.Maprange): return true case is_between(e.Nsid+e.Maprange, i.Nsid, i.Nsid+i.Maprange): return true case is_between(i.Nsid+e.Maprange, e.Nsid, e.Nsid+e.Maprange): return true } } return false } func (e *IdmapEntry) parse(s string) error { split := strings.Split(s, ":") var err error if len(split) != 4 { return fmt.Errorf("Bad idmap: %q", s) } switch split[0] { case "u": e.Isuid = true case "g": e.Isgid = true case "b": e.Isuid = true e.Isgid = true default: return fmt.Errorf("Bad idmap type in %q", s) } e.Nsid, err = strconv.Atoi(split[1]) if err != nil { return err } e.Hostid, err = strconv.Atoi(split[2]) if err != nil { return err } e.Maprange, err = strconv.Atoi(split[3]) if err != nil { return err } // wraparound if e.Hostid+e.Maprange < e.Hostid || e.Nsid+e.Maprange < e.Nsid { return fmt.Errorf("Bad mapping: id wraparound") } return nil } /* * Shift a uid from the host into the container * I.e. 0 -> 1000 -> 101000 */ func (e *IdmapEntry) shift_into_ns(id int) (int, error) { if id < e.Nsid || id >= e.Nsid+e.Maprange { // this mapping doesn't apply return 0, fmt.Errorf("N/A") } return id - e.Nsid + e.Hostid, nil } /* * Shift a uid from the container back to the host * I.e. 
101000 -> 1000 */ func (e *IdmapEntry) shift_from_ns(id int) (int, error) { if id < e.Hostid || id >= e.Hostid+e.Maprange { // this mapping doesn't apply return 0, fmt.Errorf("N/A") } return id - e.Hostid + e.Nsid, nil } /* taken from http://blog.golang.org/slices (which is under BSD licence) */ func Extend(slice []IdmapEntry, element IdmapEntry) []IdmapEntry { n := len(slice) if n == cap(slice) { // Slice is full; must grow. // We double its size and add 1, so if the size is zero we still grow. newSlice := make([]IdmapEntry, len(slice), 2*len(slice)+1) copy(newSlice, slice) slice = newSlice } slice = slice[0 : n+1] slice[n] = element return slice } type IdmapSet struct { Idmap []IdmapEntry } func (m IdmapSet) Len() int { return len(m.Idmap) } func (m IdmapSet) Intersects(i IdmapEntry) bool { for _, e := range m.Idmap { if i.Intersects(e) { return true } } return false } func (m IdmapSet) ToLxcString() []string { var lines []string for _, e := range m.Idmap { lines = append(lines, e.ToLxcString()+"\n") } return lines } func (m IdmapSet) Append(s string) (IdmapSet, error) { e := IdmapEntry{} err := e.parse(s) if err != nil { return m, err } if m.Intersects(e) { return m, fmt.Errorf("Conflicting id mapping") } m.Idmap = Extend(m.Idmap, e) return m, nil } func (m IdmapSet) doShiftIntoNs(uid int, gid int, how string) (int, int) { u := -1 g := -1 for _, e := range m.Idmap { var err error var tmpu, tmpg int if e.Isuid && u == -1 { switch how { case "in": tmpu, err = e.shift_into_ns(uid) case "out": tmpu, err = e.shift_from_ns(uid) } if err == nil { u = tmpu } } if e.Isgid && g == -1 { switch how { case "in": tmpg, err = e.shift_into_ns(gid) case "out": tmpg, err = e.shift_from_ns(gid) } if err == nil { g = tmpg } } } return u, g } func (m IdmapSet) ShiftIntoNs(uid int, gid int) (int, int) { return m.doShiftIntoNs(uid, gid, "in") } func (m IdmapSet) ShiftFromNs(uid int, gid int) (int, int) { return m.doShiftIntoNs(uid, gid, "out") } func GetOwner(path string) (int, int, 
error) { uid, gid, _, _, _, _, err := GetFileStat(path) return uid, gid, err } func (set *IdmapSet) doUidshiftIntoContainer(dir string, testmode bool, how string) error { // Expand any symlink in dir and cleanup resulting path dir, err := filepath.EvalSymlinks(dir) if err != nil { return err } dir = strings.TrimRight(dir, "/") convert := func(path string, fi os.FileInfo, err error) (e error) { uid, gid, err := GetOwner(path) if err != nil { return err } var newuid, newgid int switch how { case "in": newuid, newgid = set.ShiftIntoNs(uid, gid) case "out": newuid, newgid = set.ShiftFromNs(uid, gid) } if testmode { fmt.Printf("I would shift %q to %d %d\n", path, newuid, newgid) } else { err = ShiftOwner(dir, path, int(newuid), int(newgid)) if err != nil { return err } } return nil } if !PathExists(dir) { return fmt.Errorf("No such file or directory: %q", dir) } return filepath.Walk(dir, convert) } func (set *IdmapSet) UidshiftIntoContainer(dir string, testmode bool) error { return set.doUidshiftIntoContainer(dir, testmode, "in") } func (set *IdmapSet) UidshiftFromContainer(dir string, testmode bool) error { return set.doUidshiftIntoContainer(dir, testmode, "out") } func (set *IdmapSet) ShiftRootfs(p string) error { return set.doUidshiftIntoContainer(p, false, "in") } func (set *IdmapSet) UnshiftRootfs(p string) error { return set.doUidshiftIntoContainer(p, false, "out") } func (set *IdmapSet) ShiftFile(p string) error { return set.ShiftRootfs(p) } const ( minIDRange = 65536 ) /* * get a uid or gid mapping from /etc/subxid */ func getFromMap(fname string, username string) (int, int, error) { f, err := os.Open(fname) var min int var idrange int if err != nil { return 0, 0, err } defer f.Close() scanner := bufio.NewScanner(f) min = 0 idrange = 0 for scanner.Scan() { /* * /etc/sub{gu}id allow comments in the files, so ignore * everything after a '#' */ s := strings.Split(scanner.Text(), "#") if len(s[0]) == 0 { continue } s = strings.Split(s[0], ":") if len(s) < 3 { return 
0, 0, fmt.Errorf("unexpected values in %q: %q", fname, s) } if strings.EqualFold(s[0], username) { bigmin, err := strconv.ParseUint(s[1], 10, 32) if err != nil { continue } bigIdrange, err := strconv.ParseUint(s[2], 10, 32) if err != nil { continue } min = int(bigmin) idrange = int(bigIdrange) if idrange < minIDRange { continue } return min, idrange, nil } } return 0, 0, fmt.Errorf("User %q has no %ss.", username, path.Base(fname)) } /* * Get current username */ func getUsername() (string, error) { me, err := user.Current() if err == nil { return me.Username, nil } else { /* user.Current() requires cgo */ username := os.Getenv("USER") if username == "" { return "", err } return username, nil } } /* * Create a new default idmap */ func DefaultIdmapSet() (*IdmapSet, error) { myname, err := getUsername() if err != nil { return nil, err } umin := 1000000 urange := 100000 gmin := 1000000 grange := 100000 newuidmap, _ := exec.LookPath("newuidmap") newgidmap, _ := exec.LookPath("newgidmap") if newuidmap != "" && newgidmap != "" && PathExists("/etc/subuid") && PathExists("/etc/subgid") { umin, urange, err = getFromMap("/etc/subuid", myname) if err != nil { return nil, err } gmin, grange, err = getFromMap("/etc/subgid", myname) if err != nil { return nil, err } } m := new(IdmapSet) e := IdmapEntry{Isuid: true, Nsid: 0, Hostid: umin, Maprange: urange} m.Idmap = Extend(m.Idmap, e) e = IdmapEntry{Isgid: true, Nsid: 0, Hostid: gmin, Maprange: grange} m.Idmap = Extend(m.Idmap, e) return m, nil } lxd-2.0.2/shared/image.go000066400000000000000000000032661272140510300151630ustar00rootroot00000000000000package shared import ( "time" ) type ImageProperties map[string]string type ImageAliasesEntry struct { Name string `json:"name"` Description string `json:"description"` Target string `json:"target"` } type ImageAliases []ImageAliasesEntry type ImageAlias struct { Name string `json:"name"` Description string `json:"description"` } type ImageSource struct { Server string 
`json:"server"` Protocol string `json:"protocol"` Certificate string `json:"certificate"` Alias string `json:"alias"` } type ImageInfo struct { Aliases []ImageAlias `json:"aliases"` Architecture string `json:"architecture"` Cached bool `json:"cached"` Filename string `json:"filename"` Fingerprint string `json:"fingerprint"` Properties map[string]string `json:"properties"` Public bool `json:"public"` Size int64 `json:"size"` AutoUpdate bool `json:"auto_update"` Source *ImageSource `json:"update_source,omitempty"` CreationDate time.Time `json:"created_at"` ExpiryDate time.Time `json:"expires_at"` LastUsedDate time.Time `json:"last_used_at"` UploadDate time.Time `json:"uploaded_at"` } /* * BriefImageInfo contains a subset of the fields in * ImageInfo, namely those which a user may update */ type BriefImageInfo struct { AutoUpdate bool `json:"auto_update"` Properties map[string]string `json:"properties"` Public bool `json:"public"` } func (i *ImageInfo) Brief() BriefImageInfo { retstate := BriefImageInfo{ AutoUpdate: i.AutoUpdate, Properties: i.Properties, Public: i.Public} return retstate } lxd-2.0.2/shared/json.go000066400000000000000000000027211272140510300150450ustar00rootroot00000000000000package shared import ( "bytes" "encoding/json" "fmt" ) type Jmap map[string]interface{} func (m Jmap) GetString(key string) (string, error) { if val, ok := m[key]; !ok { return "", fmt.Errorf("Response was missing `%s`", key) } else if val, ok := val.(string); !ok { return "", fmt.Errorf("`%s` was not a string", key) } else { return val, nil } } func (m Jmap) GetMap(key string) (Jmap, error) { if val, ok := m[key]; !ok { return nil, fmt.Errorf("Response was missing `%s`", key) } else if val, ok := val.(map[string]interface{}); !ok { return nil, fmt.Errorf("`%s` was not a map, got %T", key, m[key]) } else { return val, nil } } func (m Jmap) GetInt(key string) (int, error) { if val, ok := m[key]; !ok { return -1, fmt.Errorf("Response was missing `%s`", key) } else if val, ok := 
val.(float64); !ok { return -1, fmt.Errorf("`%s` was not an int", key) } else { return int(val), nil } } func (m Jmap) GetBool(key string) (bool, error) { if val, ok := m[key]; !ok { return false, fmt.Errorf("Response was missing `%s`", key) } else if val, ok := val.(bool); !ok { return false, fmt.Errorf("`%s` was not a bool", key) } else { return val, nil } } func DebugJson(r *bytes.Buffer) { pretty := &bytes.Buffer{} if err := json.Indent(pretty, r.Bytes(), "\t", "\t"); err != nil { Debugf("error indenting json: %s", err) return } // Print the JSON without the last "\n" str := pretty.String() Debugf("\n\t%s", str[0:len(str)-1]) } lxd-2.0.2/shared/log.go000066400000000000000000000023741272140510300146610ustar00rootroot00000000000000package shared import ( "fmt" "runtime" ) type Logger interface { Debug(msg string, ctx ...interface{}) Info(msg string, ctx ...interface{}) Warn(msg string, ctx ...interface{}) Error(msg string, ctx ...interface{}) Crit(msg string, ctx ...interface{}) } var Log Logger type nullLogger struct{} func (nl nullLogger) Debug(msg string, ctx ...interface{}) {} func (nl nullLogger) Info(msg string, ctx ...interface{}) {} func (nl nullLogger) Warn(msg string, ctx ...interface{}) {} func (nl nullLogger) Error(msg string, ctx ...interface{}) {} func (nl nullLogger) Crit(msg string, ctx ...interface{}) {} func init() { Log = nullLogger{} } // Logf sends to the logger registered via SetLogger the string resulting // from running format and args through Sprintf. func Logf(format string, args ...interface{}) { if Log != nil { Log.Info(fmt.Sprintf(format, args...)) } } // Debugf sends to the logger registered via SetLogger the string resulting // from running format and args through Sprintf, but only if debugging was // enabled via SetDebug.
func Debugf(format string, args ...interface{}) { if Log != nil { Log.Debug(fmt.Sprintf(format, args...)) } } func PrintStack() { buf := make([]byte, 1<<16) runtime.Stack(buf, true) Debugf("%s", buf) } lxd-2.0.2/shared/logging/000077500000000000000000000000001272140510300151715ustar00rootroot00000000000000lxd-2.0.2/shared/logging/log.go000066400000000000000000000035261272140510300163070ustar00rootroot00000000000000package logging import ( "fmt" "os" "path/filepath" "github.com/lxc/lxd/shared" log "gopkg.in/inconshreveable/log15.v2" ) // GetLogger returns a logger suitable for using as shared.Log. func GetLogger(syslog string, logfile string, verbose bool, debug bool, customHandler log.Handler) (shared.Logger, error) { Log := log.New() var handlers []log.Handler var syshandler log.Handler // System specific handler syshandler = getSystemHandler(syslog, debug) if syshandler != nil { handlers = append(handlers, syshandler) } // FileHandler if logfile != "" { if !pathExists(filepath.Dir(logfile)) { return nil, fmt.Errorf("Log file path doesn't exist: %s", filepath.Dir(logfile)) } if !debug { handlers = append( handlers, log.LvlFilterHandler( log.LvlInfo, log.Must.FileHandler(logfile, log.LogfmtFormat()), ), ) } else { handlers = append(handlers, log.Must.FileHandler(logfile, log.LogfmtFormat())) } } // StderrHandler if verbose || debug { if !debug { handlers = append( handlers, log.LvlFilterHandler( log.LvlInfo, log.StderrHandler, ), ) } else { handlers = append(handlers, log.StderrHandler) } } else { handlers = append( handlers, log.LvlFilterHandler( log.LvlWarn, log.StderrHandler, ), ) } if customHandler != nil { handlers = append(handlers, customHandler) } Log.SetHandler(log.MultiHandler(handlers...)) return Log, nil } func AddContext(logger shared.Logger, ctx log.Ctx) shared.Logger { log15logger, ok := logger.(log.Logger) if !ok { logger.Error("couldn't downcast logger to add context", log.Ctx{"logger": log15logger, "ctx": ctx}) return logger } return 
log15logger.New(ctx) } func pathExists(name string) bool { _, err := os.Lstat(name) if err != nil && os.IsNotExist(err) { return false } return true } lxd-2.0.2/shared/logging/log_posix.go000066400000000000000000000007241272140510300175260ustar00rootroot00000000000000// +build linux darwin package logging import ( log "gopkg.in/inconshreveable/log15.v2" ) // getSystemHandler on Linux writes messages to syslog. func getSystemHandler(syslog string, debug bool) log.Handler { // SyslogHandler if syslog != "" { if !debug { return log.LvlFilterHandler( log.LvlInfo, log.Must.SyslogHandler(syslog, log.LogfmtFormat()), ) } else { return log.Must.SyslogHandler(syslog, log.LogfmtFormat()) } } return nil } lxd-2.0.2/shared/logging/log_windows.go000066400000000000000000000003231272140510300200510ustar00rootroot00000000000000// +build windows package logging import ( log "gopkg.in/inconshreveable/log15.v2" ) // getSystemHandler on Windows does nothing. func getSystemHandler(syslog string, debug bool) log.Handler { return nil } lxd-2.0.2/shared/network.go000066400000000000000000000137621272140510300155740ustar00rootroot00000000000000package shared import ( "crypto/tls" "crypto/x509" "encoding/pem" "fmt" "io" "io/ioutil" "net" "time" "github.com/gorilla/websocket" ) func RFC3493Dialer(network, address string) (net.Conn, error) { host, port, err := net.SplitHostPort(address) if err != nil { return nil, err } addrs, err := net.LookupHost(host) if err != nil { return nil, err } for _, a := range addrs { c, err := net.DialTimeout(network, net.JoinHostPort(a, port), 10*time.Second) if err != nil { continue } return c, err } return nil, fmt.Errorf("Unable to connect to: " + address) } func initTLSConfig() *tls.Config { return &tls.Config{ MinVersion: tls.VersionTLS12, MaxVersion: tls.VersionTLS12, CipherSuites: []uint16{ tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256}, PreferServerCipherSuites: true, } } func finalizeTLSConfig(tlsConfig 
*tls.Config, tlsRemoteCert *x509.Certificate) { // Trusted certificates if tlsRemoteCert != nil { caCertPool := x509.NewCertPool() // Make it a valid RootCA tlsRemoteCert.IsCA = true tlsRemoteCert.KeyUsage = x509.KeyUsageCertSign // Setup the pool caCertPool.AddCert(tlsRemoteCert) tlsConfig.RootCAs = caCertPool // Set the ServerName if tlsRemoteCert.DNSNames != nil { tlsConfig.ServerName = tlsRemoteCert.DNSNames[0] } } tlsConfig.BuildNameToCertificate() } func GetTLSConfig(tlsClientCertFile string, tlsClientKeyFile string, tlsRemoteCert *x509.Certificate) (*tls.Config, error) { tlsConfig := initTLSConfig() // Client authentication if tlsClientCertFile != "" && tlsClientKeyFile != "" { cert, err := tls.LoadX509KeyPair(tlsClientCertFile, tlsClientKeyFile) if err != nil { return nil, err } tlsConfig.Certificates = []tls.Certificate{cert} } finalizeTLSConfig(tlsConfig, tlsRemoteCert) return tlsConfig, nil } func GetTLSConfigMem(tlsClientCert string, tlsClientKey string, tlsRemoteCertPEM string) (*tls.Config, error) { tlsConfig := initTLSConfig() // Client authentication if tlsClientCert != "" && tlsClientKey != "" { cert, err := tls.X509KeyPair([]byte(tlsClientCert), []byte(tlsClientKey)) if err != nil { return nil, err } tlsConfig.Certificates = []tls.Certificate{cert} } var tlsRemoteCert *x509.Certificate if tlsRemoteCertPEM != "" { // Ignore any content outside of the PEM bytes we care about certBlock, _ := pem.Decode([]byte(tlsRemoteCertPEM)) var err error tlsRemoteCert, err = x509.ParseCertificate(certBlock.Bytes) if err != nil { return nil, err } } finalizeTLSConfig(tlsConfig, tlsRemoteCert) return tlsConfig, nil } func IsLoopback(iface *net.Interface) bool { return int(iface.Flags&net.FlagLoopback) > 0 } func WebsocketSendStream(conn *websocket.Conn, r io.Reader) chan bool { ch := make(chan bool) if r == nil { close(ch) return ch } go func(conn *websocket.Conn, r io.Reader) { in := ReaderToChannel(r) for { buf, ok := <-in if !ok { break } w, err := 
conn.NextWriter(websocket.BinaryMessage) if err != nil { Debugf("Got error getting next writer %s", err) break } _, err = w.Write(buf) w.Close() if err != nil { Debugf("Got err writing %s", err) break } } conn.WriteMessage(websocket.TextMessage, []byte{}) ch <- true }(conn, r) return ch } func WebsocketRecvStream(w io.WriteCloser, conn *websocket.Conn) chan bool { ch := make(chan bool) go func(w io.WriteCloser, conn *websocket.Conn) { for { mt, r, err := conn.NextReader() if mt == websocket.CloseMessage { Debugf("Got close message for reader") break } if mt == websocket.TextMessage { Debugf("got message barrier") break } if err != nil { Debugf("Got error getting next reader %s, %s", err, w) break } buf, err := ioutil.ReadAll(r) if err != nil { Debugf("Got error writing to writer %s", err) break } if w == nil { continue } i, err := w.Write(buf) if i != len(buf) { Debugf("Didn't write all of buf") break } if err != nil { Debugf("Error writing buf %s", err) break } } ch <- true }(w, conn) return ch } // WebsocketMirror allows mirroring a reader to a websocket and taking the // result and writing it to a writer. This function allows for multiple // mirrorings and correctly negotiates stream endings. However, it means any // websocket.Conns passed to it are live when it returns, and must be closed // explicitly. 
func WebsocketMirror(conn *websocket.Conn, w io.WriteCloser, r io.ReadCloser) (chan bool, chan bool) { readDone := make(chan bool, 1) writeDone := make(chan bool, 1) go func(conn *websocket.Conn, w io.WriteCloser) { for { mt, r, err := conn.NextReader() if err != nil { Debugf("Got error getting next reader %s, %s", err, w) break } if mt == websocket.CloseMessage { Debugf("Got close message for reader") break } if mt == websocket.TextMessage { Debugf("Got message barrier, resetting stream") break } buf, err := ioutil.ReadAll(r) if err != nil { Debugf("Got error writing to writer %s", err) break } i, err := w.Write(buf) if i != len(buf) { Debugf("Didn't write all of buf") break } if err != nil { Debugf("Error writing buf %s", err) break } } writeDone <- true w.Close() }(conn, w) go func(conn *websocket.Conn, r io.ReadCloser) { in := ReaderToChannel(r) for { buf, ok := <-in if !ok { readDone <- true r.Close() Debugf("sending write barrier") conn.WriteMessage(websocket.TextMessage, []byte{}) return } w, err := conn.NextWriter(websocket.BinaryMessage) if err != nil { Debugf("Got error getting next writer %s", err) break } _, err = w.Write(buf) w.Close() if err != nil { Debugf("Got err writing %s", err) break } } closeMsg := websocket.FormatCloseMessage(websocket.CloseNormalClosure, "") conn.WriteMessage(websocket.CloseMessage, closeMsg) readDone <- true r.Close() }(conn, r) return readDone, writeDone } lxd-2.0.2/shared/operation.go000066400000000000000000000013661272140510300161000ustar00rootroot00000000000000package shared import ( "net/http" "time" "github.com/gorilla/websocket" ) var WebsocketUpgrader = websocket.Upgrader{ ReadBufferSize: 1024, WriteBufferSize: 1024, CheckOrigin: func(r *http.Request) bool { return true }, } type Operation struct { Id string `json:"id"` Class string `json:"class"` CreatedAt time.Time `json:"created_at"` UpdatedAt time.Time `json:"updated_at"` Status string `json:"status"` StatusCode StatusCode `json:"status_code"` Resources 
map[string][]string `json:"resources"` Metadata *Jmap `json:"metadata"` MayCancel bool `json:"may_cancel"` Err string `json:"err"` } lxd-2.0.2/shared/proxy.go000066400000000000000000000064471272140510300152660ustar00rootroot00000000000000package shared import ( "fmt" "net" "net/http" "net/url" "os" "strings" "sync" ) var ( httpProxyEnv = &envOnce{ names: []string{"HTTP_PROXY", "http_proxy"}, } httpsProxyEnv = &envOnce{ names: []string{"HTTPS_PROXY", "https_proxy"}, } noProxyEnv = &envOnce{ names: []string{"NO_PROXY", "no_proxy"}, } ) type envOnce struct { names []string once sync.Once val string } func (e *envOnce) Get() string { e.once.Do(e.init) return e.val } func (e *envOnce) init() { for _, n := range e.names { e.val = os.Getenv(n) if e.val != "" { return } } } // This is basically the same as golang's ProxyFromEnvironment, except it // doesn't fall back to http_proxy when https_proxy isn't around, which is // incorrect behavior. It still respects HTTP_PROXY, HTTPS_PROXY, and NO_PROXY. func ProxyFromEnvironment(req *http.Request) (*url.URL, error) { return ProxyFromConfig("", "", "")(req) } func ProxyFromConfig(httpsProxy string, httpProxy string, noProxy string) func(req *http.Request) (*url.URL, error) { return func(req *http.Request) (*url.URL, error) { var proxy, port string var err error switch req.URL.Scheme { case "https": proxy = httpsProxy if proxy == "" { proxy = httpsProxyEnv.Get() } port = ":443" case "http": proxy = httpProxy if proxy == "" { proxy = httpProxyEnv.Get() } port = ":80" default: return nil, fmt.Errorf("unknown scheme %s", req.URL.Scheme) } if proxy == "" { return nil, nil } addr := req.URL.Host if !hasPort(addr) { addr = addr + port } use, err := useProxy(addr, noProxy) if err != nil { return nil, err } if !use { return nil, nil } proxyURL, err := url.Parse(proxy) if err != nil || !strings.HasPrefix(proxyURL.Scheme, "http") { // proxy was bogus. Try prepending "http://" to it and // see if that parses correctly. 
If not, we fall // through and complain about the original one. if proxyURL, err := url.Parse("http://" + proxy); err == nil { return proxyURL, nil } } if err != nil { return nil, fmt.Errorf("invalid proxy address %q: %v", proxy, err) } return proxyURL, nil } } func hasPort(s string) bool { return strings.LastIndex(s, ":") > strings.LastIndex(s, "]") } func useProxy(addr string, noProxy string) (bool, error) { if noProxy == "" { noProxy = noProxyEnv.Get() } if len(addr) == 0 { return true, nil } host, _, err := net.SplitHostPort(addr) if err != nil { return false, nil } if host == "localhost" { return false, nil } if ip := net.ParseIP(host); ip != nil { if ip.IsLoopback() { return false, nil } } if noProxy == "*" { return false, nil } addr = strings.ToLower(strings.TrimSpace(addr)) if hasPort(addr) { addr = addr[:strings.LastIndex(addr, ":")] } for _, p := range strings.Split(noProxy, ",") { p = strings.ToLower(strings.TrimSpace(p)) if len(p) == 0 { continue } if hasPort(p) { p = p[:strings.LastIndex(p, ":")] } if addr == p { return false, nil } if p[0] == '.' && (strings.HasSuffix(addr, p) || addr == p[1:]) { // noProxy ".foo.com" matches "bar.foo.com" or "foo.com" return false, nil } if p[0] != '.' && strings.HasSuffix(addr, p) && addr[len(addr)-len(p)-1] == '.' 
{ // noProxy "foo.com" matches "bar.foo.com" return false, nil } } return true, nil } lxd-2.0.2/shared/server.go000066400000000000000000000022601272140510300154000ustar00rootroot00000000000000package shared type ServerStateEnvironment struct { Addresses []string `json:"addresses"` Architectures []string `json:"architectures"` Certificate string `json:"certificate"` Driver string `json:"driver"` DriverVersion string `json:"driver_version"` Kernel string `json:"kernel"` KernelArchitecture string `json:"kernel_architecture"` KernelVersion string `json:"kernel_version"` Server string `json:"server"` ServerPid int `json:"server_pid"` ServerVersion string `json:"server_version"` Storage string `json:"storage"` StorageVersion string `json:"storage_version"` } type ServerState struct { APICompat int `json:"api_compat"` Auth string `json:"auth"` Environment ServerStateEnvironment `json:"environment"` Config map[string]interface{} `json:"config"` Public bool `json:"public"` } type BriefServerState struct { Config map[string]interface{} `json:"config"` } func (c *ServerState) Brief() BriefServerState { retstate := BriefServerState{Config: c.Config} return retstate } lxd-2.0.2/shared/simplestreams.go000066400000000000000000000341611272140510300167670ustar00rootroot00000000000000package shared import ( "crypto/sha256" "encoding/json" "fmt" "io" "io/ioutil" "net/http" "net/url" "os" "path/filepath" "sort" "strings" "time" ) type ssSortImage []ImageInfo func (a ssSortImage) Len() int { return len(a) } func (a ssSortImage) Swap(i, j int) { a[i], a[j] = a[j], a[i] } func (a ssSortImage) Less(i, j int) bool { if a[i].Properties["os"] == a[j].Properties["os"] { if a[i].Properties["release"] == a[j].Properties["release"] { if a[i].CreationDate.UTC().Unix() == 0 { return true } if a[j].CreationDate.UTC().Unix() == 0 { return false } return a[i].CreationDate.UTC().Unix() > a[j].CreationDate.UTC().Unix() } if a[i].Properties["release"] == "" { return false } if a[j].Properties["release"] 
== "" { return true } return a[i].Properties["release"] < a[j].Properties["release"] } if a[i].Properties["os"] == "" { return false } if a[j].Properties["os"] == "" { return true } return a[i].Properties["os"] < a[j].Properties["os"] } var ssDefaultOS = map[string]string{ "https://cloud-images.ubuntu.com": "ubuntu", } type SimpleStreamsManifest struct { Updated string `json:"updated"` DataType string `json:"datatype"` Format string `json:"format"` License string `json:"license"` Products map[string]SimpleStreamsManifestProduct `json:"products"` } func (s *SimpleStreamsManifest) ToLXD() ([]ImageInfo, map[string][][]string) { downloads := map[string][][]string{} images := []ImageInfo{} nameLayout := "20060102" eolLayout := "2006-01-02" for _, product := range s.Products { // Skip unsupported architectures architecture, err := ArchitectureId(product.Architecture) if err != nil { continue } architectureName, err := ArchitectureName(architecture) if err != nil { continue } for name, version := range product.Versions { // Short of anything better, use the name as date (see format above) if len(name) < 8 { continue } creationDate, err := time.Parse(nameLayout, name[0:8]) if err != nil { continue } size := int64(0) filename := "" fingerprint := "" metaPath := "" metaHash := "" rootfsPath := "" rootfsHash := "" found := 0 for _, item := range version.Items { // Skip the files we don't care about if !StringInSlice(item.FileType, []string{"root.tar.xz", "lxd.tar.xz"}) { continue } found += 1 size += item.Size if item.LXDHashSha256 != "" { fingerprint = item.LXDHashSha256 } if item.FileType == "lxd.tar.xz" { fields := strings.Split(item.Path, "/") filename = fields[len(fields)-1] metaPath = item.Path metaHash = item.HashSha256 } if item.FileType == "root.tar.xz" { rootfsPath = item.Path rootfsHash = item.HashSha256 } } if found != 2 || size == 0 || filename == "" || fingerprint == "" { // Invalid image continue } // Generate the actual image entry image := ImageInfo{} 
image.Architecture = architectureName image.Public = true image.Size = size image.CreationDate = creationDate image.UploadDate = creationDate image.Filename = filename image.Fingerprint = fingerprint image.Properties = map[string]string{ "aliases": product.Aliases, "os": product.OperatingSystem, "release": product.Release, "version": product.Version, "architecture": product.Architecture, "label": version.Label, "serial": name, "description": fmt.Sprintf("%s %s %s (%s) (%s)", product.OperatingSystem, product.ReleaseTitle, product.Architecture, version.Label, name), } // Attempt to parse the EOL if product.SupportedEOL != "" { eolDate, err := time.Parse(eolLayout, product.SupportedEOL) if err == nil { image.ExpiryDate = eolDate } } downloads[fingerprint] = [][]string{[]string{metaPath, metaHash, "meta"}, []string{rootfsPath, rootfsHash, "root"}} images = append(images, image) } } return images, downloads } type SimpleStreamsManifestProduct struct { Aliases string `json:"aliases"` Architecture string `json:"arch"` OperatingSystem string `json:"os"` Release string `json:"release"` ReleaseCodename string `json:"release_codename"` ReleaseTitle string `json:"release_title"` Supported bool `json:"supported"` SupportedEOL string `json:"support_eol"` Version string `json:"version"` Versions map[string]SimpleStreamsManifestProductVersion `json:"versions"` } type SimpleStreamsManifestProductVersion struct { PublicName string `json:"pubname"` Label string `json:"label"` Items map[string]SimpleStreamsManifestProductVersionItem `json:"items"` } type SimpleStreamsManifestProductVersionItem struct { Path string `json:"path"` FileType string `json:"ftype"` HashMd5 string `json:"md5"` HashSha256 string `json:"sha256"` LXDHashSha256 string `json:"combined_sha256"` Size int64 `json:"size"` } type SimpleStreamsIndex struct { Format string `json:"format"` Index map[string]SimpleStreamsIndexStream `json:"index"` Updated string `json:"updated"` } type SimpleStreamsIndexStream struct { 
Updated string `json:"updated"` DataType string `json:"datatype"` Path string `json:"path"` Products []string `json:"products"` } func SimpleStreamsClient(url string, proxy func(*http.Request) (*url.URL, error)) (*SimpleStreams, error) { // Setup an http client tlsConfig, err := GetTLSConfig("", "", nil) if err != nil { return nil, err } tr := &http.Transport{ TLSClientConfig: tlsConfig, Dial: RFC3493Dialer, Proxy: proxy, } myHttp := http.Client{ Transport: tr, } return &SimpleStreams{ http: &myHttp, url: url, cachedManifest: map[string]*SimpleStreamsManifest{}}, nil } type SimpleStreams struct { http *http.Client url string cachedIndex *SimpleStreamsIndex cachedManifest map[string]*SimpleStreamsManifest cachedImages []ImageInfo cachedAliases map[string]*ImageAliasesEntry } func (s *SimpleStreams) parseIndex() (*SimpleStreamsIndex, error) { if s.cachedIndex != nil { return s.cachedIndex, nil } req, err := http.NewRequest("GET", fmt.Sprintf("%s/streams/v1/index.json", s.url), nil) if err != nil { return nil, err } req.Header.Set("User-Agent", UserAgent) r, err := s.http.Do(req) if err != nil { return nil, err } defer r.Body.Close() body, err := ioutil.ReadAll(r.Body) if err != nil { return nil, err } // Parse the index ssIndex := SimpleStreamsIndex{} err = json.Unmarshal(body, &ssIndex) if err != nil { return nil, err } s.cachedIndex = &ssIndex return &ssIndex, nil } func (s *SimpleStreams) parseManifest(path string) (*SimpleStreamsManifest, error) { if s.cachedManifest[path] != nil { return s.cachedManifest[path], nil } req, err := http.NewRequest("GET", fmt.Sprintf("%s/%s", s.url, path), nil) if err != nil { return nil, err } req.Header.Set("User-Agent", UserAgent) r, err := s.http.Do(req) if err != nil { return nil, err } defer r.Body.Close() body, err := ioutil.ReadAll(r.Body) if err != nil { return nil, err } // Parse the manifest ssManifest := SimpleStreamsManifest{} err = json.Unmarshal(body, &ssManifest) if err != nil { return nil, err } s.cachedManifest[path] =
&ssManifest return &ssManifest, nil } func (s *SimpleStreams) applyAliases(images []ImageInfo) ([]ImageInfo, map[string]*ImageAliasesEntry, error) { aliases := map[string]*ImageAliasesEntry{} sort.Sort(ssSortImage(images)) defaultOS := "" for k, v := range ssDefaultOS { if strings.HasPrefix(s.url, k) { defaultOS = v break } } addAlias := func(name string, fingerprint string) *ImageAlias { if defaultOS != "" { name = strings.TrimPrefix(name, fmt.Sprintf("%s/", defaultOS)) } if aliases[name] != nil { return nil } alias := ImageAliasesEntry{} alias.Name = name alias.Target = fingerprint aliases[name] = &alias return &ImageAlias{Name: name} } architectureName, _ := ArchitectureGetLocal() newImages := []ImageInfo{} for _, image := range images { if image.Properties["aliases"] != "" { aliases := strings.Split(image.Properties["aliases"], ",") for _, entry := range aliases { // Short if image.Architecture == architectureName { alias := addAlias(fmt.Sprintf("%s", entry), image.Fingerprint) if alias != nil { image.Aliases = append(image.Aliases, *alias) } } // Medium alias := addAlias(fmt.Sprintf("%s/%s", entry, image.Properties["architecture"]), image.Fingerprint) if alias != nil { image.Aliases = append(image.Aliases, *alias) } } } newImages = append(newImages, image) } return newImages, aliases, nil } func (s *SimpleStreams) getImages() ([]ImageInfo, map[string]*ImageAliasesEntry, error) { if s.cachedImages != nil && s.cachedAliases != nil { return s.cachedImages, s.cachedAliases, nil } images := []ImageInfo{} // Load the main index ssIndex, err := s.parseIndex() if err != nil { return nil, nil, err } // Iterate through the various image manifests for _, entry := range ssIndex.Index { // We only care about images if entry.DataType != "image-downloads" { continue } // No point downloading an empty image list if len(entry.Products) == 0 { continue } manifest, err := s.parseManifest(entry.Path) if err != nil { return nil, nil, err } manifestImages, _ := manifest.ToLXD() for 
_, image := range manifestImages { images = append(images, image) } } // Setup the aliases images, aliases, err := s.applyAliases(images) if err != nil { return nil, nil, err } s.cachedImages = images s.cachedAliases = aliases return images, aliases, nil } func (s *SimpleStreams) getPaths(fingerprint string) ([][]string, error) { // Load the main index ssIndex, err := s.parseIndex() if err != nil { return nil, err } // Iterate through the various image manifests for _, entry := range ssIndex.Index { // We only care about images if entry.DataType != "image-downloads" { continue } // No point downloading an empty image list if len(entry.Products) == 0 { continue } manifest, err := s.parseManifest(entry.Path) if err != nil { return nil, err } manifestImages, downloads := manifest.ToLXD() for _, image := range manifestImages { if strings.HasPrefix(image.Fingerprint, fingerprint) { urls := [][]string{} for _, path := range downloads[image.Fingerprint] { urls = append(urls, []string{path[0], path[1], path[2]}) } return urls, nil } } } return nil, fmt.Errorf("Couldn't find the requested image") } func (s *SimpleStreams) downloadFile(path string, hash string, target string, progress func(int)) error { download := func(url string, hash string, target string) error { out, err := os.Create(target) if err != nil { return err } defer out.Close() resp, err := s.http.Get(url) if err != nil { return err } defer resp.Body.Close() if resp.StatusCode != http.StatusOK { return fmt.Errorf("invalid simplestreams source: got %d looking for %s", resp.StatusCode, path) } body := &TransferProgress{Reader: resp.Body, Length: resp.ContentLength, Handler: progress} sha256 := sha256.New() _, err = io.Copy(io.MultiWriter(out, sha256), body) if err != nil { return err } result := fmt.Sprintf("%x", sha256.Sum(nil)) if result != hash { os.Remove(target) return fmt.Errorf("Hash mismatch for %s: %s != %s", path, result, hash) } return nil } // Try http first if strings.HasPrefix(s.url, "https://") { err :=
download(fmt.Sprintf("http://%s/%s", strings.TrimPrefix(s.url, "https://"), path), hash, target) if err == nil { return nil } } err := download(fmt.Sprintf("%s/%s", s.url, path), hash, target) if err != nil { return err } return nil } func (s *SimpleStreams) ListAliases() (ImageAliases, error) { _, aliasesMap, err := s.getImages() if err != nil { return nil, err } aliases := ImageAliases{} for _, alias := range aliasesMap { aliases = append(aliases, *alias) } return aliases, nil } func (s *SimpleStreams) ListImages() ([]ImageInfo, error) { images, _, err := s.getImages() return images, err } func (s *SimpleStreams) GetAlias(name string) string { _, aliasesMap, err := s.getImages() if err != nil { return "" } alias, ok := aliasesMap[name] if !ok { return "" } return alias.Target } func (s *SimpleStreams) GetImageInfo(fingerprint string) (*ImageInfo, error) { images, _, err := s.getImages() if err != nil { return nil, err } for _, image := range images { if strings.HasPrefix(image.Fingerprint, fingerprint) { return &image, nil } } return nil, fmt.Errorf("The requested image couldn't be found.") } func (s *SimpleStreams) ExportImage(image string, target string) (string, error) { if !IsDir(target) { return "", fmt.Errorf("Split images can only be written to a directory.") } paths, err := s.getPaths(image) if err != nil { return "", err } for _, path := range paths { fields := strings.Split(path[0], "/") targetFile := filepath.Join(target, fields[len(fields)-1]) err := s.downloadFile(path[0], path[1], targetFile, nil) if err != nil { return "", err } } return target, nil } func (s *SimpleStreams) Download(image string, file string, target string, progress func(int)) error { paths, err := s.getPaths(image) if err != nil { return err } for _, path := range paths { if file != path[2] { continue } return s.downloadFile(path[0], path[1], target, progress) } return fmt.Errorf("The file couldn't be found.") } 
lxd-2.0.2/shared/status.go000066400000000000000000000031351272140510300154170ustar00rootroot00000000000000package shared type StatusCode int const ( OperationCreated StatusCode = 100 Started StatusCode = 101 Stopped StatusCode = 102 Running StatusCode = 103 Cancelling StatusCode = 104 Pending StatusCode = 105 Starting StatusCode = 106 Stopping StatusCode = 107 Aborting StatusCode = 108 Freezing StatusCode = 109 Frozen StatusCode = 110 Thawed StatusCode = 111 Error StatusCode = 112 Success StatusCode = 200 Failure StatusCode = 400 Cancelled StatusCode = 401 ) func (o StatusCode) String() string { return map[StatusCode]string{ OperationCreated: "Operation created", Started: "Started", Stopped: "Stopped", Running: "Running", Cancelling: "Cancelling", Pending: "Pending", Success: "Success", Failure: "Failure", Cancelled: "Cancelled", Starting: "Starting", Stopping: "Stopping", Aborting: "Aborting", Freezing: "Freezing", Frozen: "Frozen", Thawed: "Thawed", Error: "Error", }[o] } func (o StatusCode) IsFinal() bool { return int(o) >= 200 } /* * Create a StatusCode from an lxc.State code. N.B.: we accept an int instead * of a lxc.State so that the shared code doesn't depend on lxc, which depends * on liblxc, etc. */ func FromLXCState(state int) StatusCode { return map[int]StatusCode{ 1: Stopped, 2: Starting, 3: Running, 4: Stopping, 5: Aborting, 6: Freezing, 7: Frozen, 8: Thawed, 9: Error, }[state] } lxd-2.0.2/shared/stringset.go000066400000000000000000000007161272140510300161200ustar00rootroot00000000000000// That this code needs to exist is kind of dumb, but I'm not sure how else to // do it. 
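The StatusCode scheme in status.go above encodes finality in the numeric range: codes at or above 200 (Success, Failure, Cancelled) are terminal, which is all `IsFinal` checks. A small standalone sketch of that convention, re-declaring a few representative codes locally rather than importing the `shared` package:

```go
package main

import "fmt"

// StatusCode mirrors shared.StatusCode for this standalone example.
type StatusCode int

const (
	Running StatusCode = 103
	Success StatusCode = 200
	Failure StatusCode = 400
)

// IsFinal follows the same rule as shared.StatusCode.IsFinal:
// operation codes at or above 200 are terminal.
func (o StatusCode) IsFinal() bool { return int(o) >= 200 }

func main() {
	for _, s := range []StatusCode{Running, Success, Failure} {
		fmt.Printf("%d final=%v\n", int(s), s.IsFinal())
	}
}
```

Keeping the in-progress codes in the 1xx range means a client polling an operation can stop as soon as the code crosses 200, without enumerating every terminal state.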
package shared type StringSet map[string]bool func (ss StringSet) IsSubset(oss StringSet) bool { for k := range map[string]bool(ss) { if _, ok := map[string]bool(oss)[k]; !ok { return false } } return true } func NewStringSet(strings []string) StringSet { ret := map[string]bool{} for _, s := range strings { ret[s] = true } return StringSet(ret) } lxd-2.0.2/shared/stringset_test.go000066400000000000000000000006011272140510300171500ustar00rootroot00000000000000package shared import ( "testing" ) func TestStringSetSubset(t *testing.T) { ss := NewStringSet([]string{"one", "two"}) if !ss.IsSubset(ss) { t.Error("subsets wrong") return } if !ss.IsSubset(NewStringSet([]string{"one", "two", "three"})) { t.Error("subsets wrong") return } if ss.IsSubset(NewStringSet([]string{"four"})) { t.Error("subsets wrong") return } } lxd-2.0.2/shared/termios/000077500000000000000000000000001272140510300152255ustar00rootroot00000000000000lxd-2.0.2/shared/termios/termios.go000066400000000000000000000026441272140510300172440ustar00rootroot00000000000000// +build !windows package termios import ( "syscall" "unsafe" "github.com/lxc/lxd/shared" ) // #include <termios.h> import "C" type State struct { Termios syscall.Termios } func IsTerminal(fd int) bool { _, err := GetState(fd) return err == nil } func GetState(fd int) (*State, error) { termios := syscall.Termios{} ret, err := C.tcgetattr(C.int(fd), (*C.struct_termios)(unsafe.Pointer(&termios))) if ret != 0 { return nil, err.(syscall.Errno) } state := State{} state.Termios = termios return &state, nil } func GetSize(fd int) (int, int, error) { var dimensions [4]uint16 if _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), uintptr(syscall.TIOCGWINSZ), uintptr(unsafe.Pointer(&dimensions)), 0, 0, 0); err != 0 { return -1, -1, err } return int(dimensions[1]), int(dimensions[0]), nil } func MakeRaw(fd int) (*State, error) { var err error var oldState, newState *State oldState, err = GetState(fd) if err != nil { return nil, err } err =
shared.DeepCopy(&oldState, &newState) if err != nil { return nil, err } C.cfmakeraw((*C.struct_termios)(unsafe.Pointer(&newState.Termios))) err = Restore(fd, newState) if err != nil { return nil, err } return oldState, nil } func Restore(fd int, state *State) error { ret, err := C.tcsetattr(C.int(fd), C.TCSANOW, (*C.struct_termios)(unsafe.Pointer(&state.Termios))) if ret != 0 { return err.(syscall.Errno) } return nil } lxd-2.0.2/shared/termios/termios_windows.go000066400000000000000000000013141272140510300210070ustar00rootroot00000000000000// +build windows package termios import ( "golang.org/x/crypto/ssh/terminal" ) type State terminal.State func IsTerminal(fd int) bool { return terminal.IsTerminal(fd) } func GetState(fd int) (*State, error) { state, err := terminal.GetState(fd) if err != nil { return nil, err } currentState := State(*state) return &currentState, nil } func GetSize(fd int) (int, int, error) { return terminal.GetSize(fd) } func MakeRaw(fd int) (*State, error) { state, err := terminal.MakeRaw(fd) if err != nil { return nil, err } oldState := State(*state) return &oldState, nil } func Restore(fd int, state *State) error { newState := terminal.State(*state) return terminal.Restore(fd, &newState) } lxd-2.0.2/shared/util.go000066400000000000000000000343361272140510300150560ustar00rootroot00000000000000package shared import ( "bufio" "bytes" "crypto/rand" "encoding/gob" "encoding/hex" "encoding/json" "fmt" "io" "io/ioutil" "net/http" "os" "os/exec" "path" "path/filepath" "reflect" "regexp" "strconv" "strings" ) const SnapshotDelimiter = "/" const DefaultPort = "8443" // AddSlash adds a slash to the end of paths if they don't already have one. // This can be useful for rsyncing things, since rsync behaves differently depending on // the presence or absence of a trailing slash.
func AddSlash(path string) string { if path[len(path)-1] != '/' { return path + "/" } return path } func PathExists(name string) bool { _, err := os.Lstat(name) if err != nil && os.IsNotExist(err) { return false } return true } // PathIsEmpty checks if the given path is empty. func PathIsEmpty(path string) (bool, error) { f, err := os.Open(path) if err != nil { return false, err } defer f.Close() // read in ONLY one file _, err = f.Readdir(1) // and if the file is EOF... well, the dir is empty. if err == io.EOF { return true, nil } return false, err } // IsDir returns true if the given path is a directory. func IsDir(name string) bool { stat, err := os.Lstat(name) if err != nil { return false } return stat.IsDir() } // VarPath returns the provided path elements joined by a slash and // appended to the end of $LXD_DIR, which defaults to /var/lib/lxd. func VarPath(path ...string) string { varDir := os.Getenv("LXD_DIR") if varDir == "" { varDir = "/var/lib/lxd" } items := []string{varDir} items = append(items, path...) return filepath.Join(items...) } // LogPath returns the directory that LXD should put logs under. If LXD_DIR is // set, this path is $LXD_DIR/logs, otherwise it is /var/log/lxd. func LogPath(path ...string) string { varDir := os.Getenv("LXD_DIR") logDir := "/var/log/lxd" if varDir != "" { logDir = filepath.Join(varDir, "logs") } items := []string{logDir} items = append(items, path...) return filepath.Join(items...) 
} func ParseLXDFileHeaders(headers http.Header) (uid int, gid int, mode int) { uid, err := strconv.Atoi(headers.Get("X-LXD-uid")) if err != nil { uid = -1 } gid, err = strconv.Atoi(headers.Get("X-LXD-gid")) if err != nil { gid = -1 } mode, err = strconv.Atoi(headers.Get("X-LXD-mode")) if err != nil { mode = -1 } else { rawMode, err := strconv.ParseInt(headers.Get("X-LXD-mode"), 0, 0) if err == nil { mode = int(os.FileMode(rawMode) & os.ModePerm) } } return uid, gid, mode } func ReadToJSON(r io.Reader, req interface{}) error { buf, err := ioutil.ReadAll(r) if err != nil { return err } return json.Unmarshal(buf, req) } func ReaderToChannel(r io.Reader) <-chan []byte { ch := make(chan ([]byte)) go func() { for { /* io.Copy uses a 32KB buffer, so we might as well too. */ buf := make([]byte, 32*1024) nr, err := r.Read(buf) if nr > 0 { ch <- buf[0:nr] } if err != nil { close(ch) break } } }() return ch } // RandomCryptoString returns a random hex-encoded string built from crypto/rand. func RandomCryptoString() (string, error) { buf := make([]byte, 32) n, err := rand.Read(buf) if err != nil { return "", err } if n != len(buf) { return "", fmt.Errorf("not enough random bytes read") } return hex.EncodeToString(buf), nil } func SplitExt(fpath string) (string, string) { b := path.Base(fpath) ext := path.Ext(fpath) return b[:len(b)-len(ext)], ext } func AtoiEmptyDefault(s string, def int) (int, error) { if s == "" { return def, nil } return strconv.Atoi(s) } func ReadStdin() ([]byte, error) { buf := bufio.NewReader(os.Stdin) line, _, err := buf.ReadLine() if err != nil { return nil, err } return line, nil } func WriteAll(w io.Writer, buf []byte) error { return WriteAllBuf(w, bytes.NewBuffer(buf)) } func WriteAllBuf(w io.Writer, buf *bytes.Buffer) error { toWrite := int64(buf.Len()) for { n, err := io.Copy(w, buf) if err != nil { return err } toWrite -= n if toWrite <= 0 { return nil } } } // FileMove tries to move a file by using os.Rename, // if that fails it tries to copy the file and remove the
source. func FileMove(oldPath string, newPath string) error { if err := os.Rename(oldPath, newPath); err == nil { return nil } if err := FileCopy(oldPath, newPath); err != nil { return err } os.Remove(oldPath) return nil } // FileCopy copies a file, overwriting the target if it exists. func FileCopy(source string, dest string) error { s, err := os.Open(source) if err != nil { return err } defer s.Close() d, err := os.Create(dest) if err != nil { if os.IsExist(err) { d, err = os.OpenFile(dest, os.O_WRONLY, 0700) if err != nil { return err } } else { return err } } defer d.Close() _, err = io.Copy(d, s) return err } type BytesReadCloser struct { Buf *bytes.Buffer } func (r BytesReadCloser) Read(b []byte) (n int, err error) { return r.Buf.Read(b) } func (r BytesReadCloser) Close() error { /* no-op since we're in memory */ return nil } func IsSnapshot(name string) bool { return strings.Contains(name, SnapshotDelimiter) } func ExtractSnapshotName(name string) string { return strings.SplitN(name, SnapshotDelimiter, 2)[1] } func ReadDir(p string) ([]string, error) { ents, err := ioutil.ReadDir(p) if err != nil { return []string{}, err } var ret []string for _, ent := range ents { ret = append(ret, ent.Name()) } return ret, nil } func MkdirAllOwner(path string, perm os.FileMode, uid int, gid int) error { // This function is a slightly modified version of MkdirAll from the Go standard library. // https://golang.org/src/os/path.go?s=488:535#L9 // Fast path: if we can tell whether path is a directory or file, stop with success or error. dir, err := os.Stat(path) if err == nil { if dir.IsDir() { return nil } return fmt.Errorf("path exists but isn't a directory") } // Slow path: make sure parent exists and then call Mkdir for path. i := len(path) for i > 0 && os.IsPathSeparator(path[i-1]) { // Skip trailing path separator. i-- } j := i for j > 0 && !os.IsPathSeparator(path[j-1]) { // Scan backward over element. 
j-- } if j > 1 { // Create parent err = MkdirAllOwner(path[0:j-1], perm, uid, gid) if err != nil { return err } } // Parent now exists; invoke Mkdir and use its result. err = os.Mkdir(path, perm) err_chown := os.Chown(path, uid, gid) if err_chown != nil { return err_chown } if err != nil { // Handle arguments like "foo/." by // double-checking that directory doesn't exist. dir, err1 := os.Lstat(path) if err1 == nil && dir.IsDir() { return nil } return err } return nil } func StringInSlice(key string, list []string) bool { for _, entry := range list { if entry == key { return true } } return false } func IntInSlice(key int, list []int) bool { for _, entry := range list { if entry == key { return true } } return false } func IsTrue(value string) bool { if StringInSlice(strings.ToLower(value), []string{"true", "1", "yes", "on"}) { return true } return false } func IsOnSharedMount(pathName string) (bool, error) { file, err := os.Open("/proc/self/mountinfo") if err != nil { return false, err } defer file.Close() absPath, err := filepath.Abs(pathName) if err != nil { return false, err } expPath, err := os.Readlink(absPath) if err != nil { expPath = absPath } scanner := bufio.NewScanner(file) for scanner.Scan() { line := scanner.Text() rows := strings.Fields(line) if rows[4] != expPath { continue } if strings.HasPrefix(rows[6], "shared:") { return true, nil } else { return false, nil } } return false, nil } func IsBlockdev(fm os.FileMode) bool { return ((fm&os.ModeDevice != 0) && (fm&os.ModeCharDevice == 0)) } func IsBlockdevPath(pathName string) bool { sb, err := os.Stat(pathName) if err != nil { return false } fm := sb.Mode() return ((fm&os.ModeDevice != 0) && (fm&os.ModeCharDevice == 0)) } func BlockFsDetect(dev string) (string, error) { out, err := exec.Command("blkid", "-s", "TYPE", "-o", "value", dev).Output() if err != nil { return "", fmt.Errorf("Failed to run blkid on: %s", dev) } return strings.TrimSpace(string(out)), nil } // DeepCopy copies src to dest by 
using encoding/gob, so it is not particularly fast. func DeepCopy(src, dest interface{}) error { buff := new(bytes.Buffer) enc := gob.NewEncoder(buff) dec := gob.NewDecoder(buff) if err := enc.Encode(src); err != nil { return err } if err := dec.Decode(dest); err != nil { return err } return nil } func RunningInUserNS() bool { file, err := os.Open("/proc/self/uid_map") if err != nil { return false } defer file.Close() buf := bufio.NewReader(file) l, _, err := buf.ReadLine() if err != nil { return false } line := string(l) var a, b, c int64 fmt.Sscanf(line, "%d %d %d", &a, &b, &c) if a == 0 && b == 0 && c == 4294967295 { return false } return true } func ValidHostname(name string) bool { // Validate length if len(name) < 1 || len(name) > 63 { return false } // Validate first character if strings.HasPrefix(name, "-") { return false } if _, err := strconv.Atoi(string(name[0])); err == nil { return false } // Validate last character if strings.HasSuffix(name, "-") { return false } // Validate the character set match, _ := regexp.MatchString("^[-a-zA-Z0-9]*$", name) if !match { return false } return true } func TextEditor(inPath string, inContent []byte) ([]byte, error) { var f *os.File var err error var path string // Detect the text editor to use editor := os.Getenv("VISUAL") if editor == "" { editor = os.Getenv("EDITOR") if editor == "" { for _, p := range []string{"editor", "vi", "emacs", "nano"} { _, err := exec.LookPath(p) if err == nil { editor = p break } } if editor == "" { return []byte{}, fmt.Errorf("No text editor found, please set the EDITOR environment variable.") } } } if inPath == "" { // No path was provided, so write the input to a temporary file f, err = ioutil.TempFile("", "lxd_editor_") if err != nil { return []byte{}, err } if err = f.Chmod(0600); err != nil { f.Close() os.Remove(f.Name()) return []byte{}, err } f.Write(inContent) f.Close() path = f.Name() defer os.Remove(path) } else { path = inPath } cmdParts := strings.Fields(editor) cmd := exec.Command(cmdParts[0],
append(cmdParts[1:], path)...) cmd.Stdin = os.Stdin cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err = cmd.Run() if err != nil { return []byte{}, err } content, err := ioutil.ReadFile(path) if err != nil { return []byte{}, err } return content, nil } func ParseMetadata(metadata interface{}) (map[string]interface{}, error) { newMetadata := make(map[string]interface{}) s := reflect.ValueOf(metadata) if !s.IsValid() { return nil, nil } if s.Kind() == reflect.Map { for _, k := range s.MapKeys() { if k.Kind() != reflect.String { return nil, fmt.Errorf("Invalid metadata provided (key isn't a string).") } newMetadata[k.String()] = s.MapIndex(k).Interface() } } else if s.Kind() == reflect.Ptr && !s.Elem().IsValid() { return nil, nil } else { return nil, fmt.Errorf("Invalid metadata provided (type isn't a map).") } return newMetadata, nil } // Parse a size string in bytes (e.g. 200kB or 5GB) into the number of bytes it // represents. Supports suffixes up to EB. "" == 0. func ParseByteSizeString(input string) (int64, error) { if input == "" { return 0, nil } if len(input) < 3 { return -1, fmt.Errorf("Invalid value: %s", input) } // Extract the suffix suffix := input[len(input)-2:] // Extract the value value := input[0 : len(input)-2] valueInt, err := strconv.ParseInt(value, 10, 64) if err != nil { return -1, fmt.Errorf("Invalid integer: %s", input) } if valueInt < 0 { return -1, fmt.Errorf("Invalid value: %d", valueInt) } // Figure out the multiplicator multiplicator := int64(0) switch suffix { case "kB": multiplicator = 1024 case "MB": multiplicator = 1024 * 1024 case "GB": multiplicator = 1024 * 1024 * 1024 case "TB": multiplicator = 1024 * 1024 * 1024 * 1024 case "PB": multiplicator = 1024 * 1024 * 1024 * 1024 * 1024 case "EB": multiplicator = 1024 * 1024 * 1024 * 1024 * 1024 * 1024 default: return -1, fmt.Errorf("Unsupported suffix: %s", suffix) } return valueInt * multiplicator, nil } // Parse a size string in bits (e.g. 
200kbit or 5Gbit) into the number of bits // it represents. Supports suffixes up to Ebit. "" == 0. func ParseBitSizeString(input string) (int64, error) { if input == "" { return 0, nil } if len(input) < 5 { return -1, fmt.Errorf("Invalid value: %s", input) } // Extract the suffix suffix := input[len(input)-4:] // Extract the value value := input[0 : len(input)-4] valueInt, err := strconv.ParseInt(value, 10, 64) if err != nil { return -1, fmt.Errorf("Invalid integer: %s", input) } if valueInt < 0 { return -1, fmt.Errorf("Invalid value: %d", valueInt) } // Figure out the multiplicator multiplicator := int64(0) switch suffix { case "kbit": multiplicator = 1000 case "Mbit": multiplicator = 1000 * 1000 case "Gbit": multiplicator = 1000 * 1000 * 1000 case "Tbit": multiplicator = 1000 * 1000 * 1000 * 1000 case "Pbit": multiplicator = 1000 * 1000 * 1000 * 1000 * 1000 case "Ebit": multiplicator = 1000 * 1000 * 1000 * 1000 * 1000 * 1000 default: return -1, fmt.Errorf("Unsupported suffix: %s", suffix) } return valueInt * multiplicator, nil } func GetByteSizeString(input int64) string { if input < 1024 { return fmt.Sprintf("%d bytes", input) } value := float64(input) for _, unit := range []string{"kB", "MB", "GB", "TB", "PB", "EB"} { value = value / 1024 if value < 1024 { return fmt.Sprintf("%.2f%s", value, unit) } } return fmt.Sprintf("%.2fEB", value) } type TransferProgress struct { io.Reader percentage float64 total int64 Length int64 Handler func(int) } func (pt *TransferProgress) Read(p []byte) (int, error) { n, err := pt.Reader.Read(p) if pt.Handler == nil { return n, err } if n > 0 { pt.total += int64(n) percentage := float64(pt.total) / float64(pt.Length) * float64(100) if percentage-pt.percentage > 0.9 { pt.percentage = percentage progressInt := 1 - (int(percentage) % 1) + int(percentage) if progressInt > 100 { progressInt = 100 } pt.Handler(progressInt) } } return n, err } 
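The size parsers above share one pattern: strip a fixed-length suffix, parse the remaining integer, and scale by a per-suffix multiplier. The following is a minimal standalone sketch of that approach; `parseByteSize` is a hypothetical helper name, not part of the shared package, and it mirrors (but simplifies) what ParseByteSizeString does with its two-character binary suffixes.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseByteSize sketches the suffix-and-multiplier approach used by
// shared.ParseByteSizeString: split off a two-character suffix, parse
// the remaining integer, then scale by the matching power of 1024.
func parseByteSize(input string) (int64, error) {
	if input == "" {
		return 0, nil
	}
	if len(input) < 3 {
		return -1, fmt.Errorf("invalid value: %s", input)
	}
	suffix := input[len(input)-2:]
	value, err := strconv.ParseInt(input[:len(input)-2], 10, 64)
	if err != nil || value < 0 {
		return -1, fmt.Errorf("invalid integer: %s", input)
	}
	multipliers := map[string]int64{
		"kB": 1 << 10, "MB": 1 << 20, "GB": 1 << 30,
		"TB": 1 << 40, "PB": 1 << 50, "EB": 1 << 60,
	}
	m, ok := multipliers[suffix]
	if !ok {
		return -1, fmt.Errorf("unsupported suffix: %s", suffix)
	}
	return value * m, nil
}

func main() {
	for _, s := range []string{"200kB", "5GB"} {
		n, err := parseByteSize(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s = %d bytes\n", s, n) // 200kB = 204800, 5GB = 5368709120
	}
}
```

The bit-size variant in util.go differs only in using four-character suffixes and powers of 1000 instead of 1024.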
lxd-2.0.2/shared/util_linux.go000066400000000000000000000203061272140510300162670ustar00rootroot00000000000000// +build linux // +build cgo package shared import ( "errors" "fmt" "os" "os/exec" "syscall" "unsafe" ) // #cgo LDFLAGS: -lutil -lpthread /* #define _GNU_SOURCE #include #include #include #include #include #include #include #include #include #include #include #ifndef AT_SYMLINK_FOLLOW #define AT_SYMLINK_FOLLOW 0x400 #endif #ifndef AT_EMPTY_PATH #define AT_EMPTY_PATH 0x1000 #endif // This is an adaption from https://codereview.appspot.com/4589049, to be // included in the stdlib with the stdlib's license. static int mygetgrgid_r(int gid, struct group *grp, char *buf, size_t buflen, struct group **result) { return getgrgid_r(gid, grp, buf, buflen, result); } void configure_pty(int fd) { struct termios term_settings; struct winsize win; if (tcgetattr(fd, &term_settings) < 0) { fprintf(stderr, "Failed to get settings: %s\n", strerror(errno)); return; } term_settings.c_iflag |= IMAXBEL; term_settings.c_iflag |= IUTF8; term_settings.c_iflag |= BRKINT; term_settings.c_iflag |= IXANY; term_settings.c_cflag |= HUPCL; if (tcsetattr(fd, TCSANOW, &term_settings) < 0) { fprintf(stderr, "Failed to set settings: %s\n", strerror(errno)); return; } if (ioctl(fd, TIOCGWINSZ, &win) < 0) { fprintf(stderr, "Failed to get the terminal size: %s\n", strerror(errno)); return; } win.ws_col = 80; win.ws_row = 25; if (ioctl(fd, TIOCSWINSZ, &win) < 0) { fprintf(stderr, "Failed to set the terminal size: %s\n", strerror(errno)); return; } if (fcntl(fd, F_SETFD, FD_CLOEXEC) < 0) { fprintf(stderr, "Failed to set FD_CLOEXEC: %s\n", strerror(errno)); return; } return; } void create_pty(int *master, int *slave, int uid, int gid) { if (openpty(master, slave, NULL, NULL, NULL) < 0) { fprintf(stderr, "Failed to openpty: %s\n", strerror(errno)); return; } configure_pty(*master); configure_pty(*slave); if (fchown(*slave, uid, gid) < 0) { fprintf(stderr, "Warning: error chowning pty to container 
root\n"); fprintf(stderr, "Continuing...\n"); } if (fchown(*master, uid, gid) < 0) { fprintf(stderr, "Warning: error chowning pty to container root\n"); fprintf(stderr, "Continuing...\n"); } } void create_pipe(int *master, int *slave) { int pipefd[2]; if (pipe2(pipefd, O_CLOEXEC) < 0) { fprintf(stderr, "Failed to create a pipe: %s\n", strerror(errno)); return; } *master = pipefd[0]; *slave = pipefd[1]; } int shiftowner(char *basepath, char *path, int uid, int gid) { struct stat sb; int fd, r; char fdpath[PATH_MAX]; char realpath[PATH_MAX]; fd = open(path, O_PATH|O_NOFOLLOW); if (fd < 0 ) { perror("Failed open"); return 1; } r = sprintf(fdpath, "/proc/self/fd/%d", fd); if (r < 0) { perror("Failed sprintf"); close(fd); return 1; } r = readlink(fdpath, realpath, PATH_MAX); if (r < 0) { perror("Failed readlink"); close(fd); return 1; } if (strlen(realpath) < strlen(basepath)) { printf("Invalid path, source (%s) is outside of basepath (%s).\n", realpath, basepath); close(fd); return 1; } if (strncmp(realpath, basepath, strlen(basepath))) { printf("Invalid path, source (%s) is outside of basepath (%s).\n", realpath, basepath); close(fd); return 1; } r = fstat(fd, &sb); if (r < 0) { perror("Failed fstat"); close(fd); return 1; } r = fchownat(fd, "", uid, gid, AT_EMPTY_PATH|AT_SYMLINK_NOFOLLOW); if (r < 0) { perror("Failed chown"); close(fd); return 1; } if (!S_ISLNK(sb.st_mode)) { r = chmod(fdpath, sb.st_mode); if (r < 0) { perror("Failed chmod"); close(fd); return 1; } } close(fd); return 0; } */ import "C" func ShiftOwner(basepath string, path string, uid int, gid int) error { cbasepath := C.CString(basepath) defer C.free(unsafe.Pointer(cbasepath)) cpath := C.CString(path) defer C.free(unsafe.Pointer(cpath)) r := C.shiftowner(cbasepath, cpath, C.int(uid), C.int(gid)) if r != 0 { return fmt.Errorf("Failed to change ownership of: %s", path) } return nil } func OpenPty(uid, gid int) (master *os.File, slave *os.File, err error) { fd_master := C.int(-1) fd_slave := C.int(-1) 
rootUid := C.int(uid) rootGid := C.int(gid) C.create_pty(&fd_master, &fd_slave, rootUid, rootGid) if fd_master == -1 || fd_slave == -1 { return nil, nil, errors.New("Failed to create a new pts pair") } master = os.NewFile(uintptr(fd_master), "master") slave = os.NewFile(uintptr(fd_slave), "slave") return master, slave, nil } func Pipe() (master *os.File, slave *os.File, err error) { fd_master := C.int(-1) fd_slave := C.int(-1) C.create_pipe(&fd_master, &fd_slave) if fd_master == -1 || fd_slave == -1 { return nil, nil, errors.New("Failed to create a new pipe") } master = os.NewFile(uintptr(fd_master), "master") slave = os.NewFile(uintptr(fd_slave), "slave") return master, slave, nil } // GroupName is an adaption from https://codereview.appspot.com/4589049. func GroupName(gid int) (string, error) { var grp C.struct_group var result *C.struct_group bufSize := C.size_t(C.sysconf(C._SC_GETGR_R_SIZE_MAX)) buf := C.malloc(bufSize) if buf == nil { return "", fmt.Errorf("allocation failed") } defer C.free(buf) // mygetgrgid_r is a wrapper around getgrgid_r to // to avoid using gid_t because C.gid_t(gid) for // unknown reasons doesn't work on linux. rv := C.mygetgrgid_r(C.int(gid), &grp, (*C.char)(buf), bufSize, &result) if rv != 0 { return "", fmt.Errorf("failed group lookup: %s", syscall.Errno(rv)) } if result == nil { return "", fmt.Errorf("unknown group %d", gid) } return C.GoString(result.gr_name), nil } // GroupId is an adaption from https://codereview.appspot.com/4589049. func GroupId(name string) (int, error) { var grp C.struct_group var result *C.struct_group bufSize := C.size_t(C.sysconf(C._SC_GETGR_R_SIZE_MAX)) buf := C.malloc(bufSize) if buf == nil { return -1, fmt.Errorf("allocation failed") } defer C.free(buf) // mygetgrgid_r is a wrapper around getgrgid_r to // to avoid using gid_t because C.gid_t(gid) for // unknown reasons doesn't work on linux. 
cname := C.CString(name) defer C.free(unsafe.Pointer(cname)) rv := C.getgrnam_r(cname, &grp, (*C.char)(buf), bufSize, &result) if rv != 0 { return -1, fmt.Errorf("failed group lookup: %s", syscall.Errno(rv)) } if result == nil { return -1, fmt.Errorf("unknown group %s", name) } return int(C.int(result.gr_gid)), nil } // --- pure Go functions --- func GetFileStat(p string) (uid int, gid int, major int, minor int, inode uint64, nlink int, err error) { var stat syscall.Stat_t err = syscall.Lstat(p, &stat) if err != nil { return } uid = int(stat.Uid) gid = int(stat.Gid) inode = uint64(stat.Ino) nlink = int(stat.Nlink) major = -1 minor = -1 if stat.Mode&syscall.S_IFBLK != 0 || stat.Mode&syscall.S_IFCHR != 0 { major = int(stat.Rdev / 256) minor = int(stat.Rdev % 256) } return } func IsMountPoint(name string) bool { _, err := exec.LookPath("mountpoint") if err == nil { err = exec.Command("mountpoint", "-q", name).Run() if err != nil { return false } return true } stat, err := os.Stat(name) if err != nil { return false } rootStat, err := os.Lstat(name + "/..") if err != nil { return false } // If the directory has the same device as parent, then it's not a mountpoint. 
return stat.Sys().(*syscall.Stat_t).Dev != rootStat.Sys().(*syscall.Stat_t).Dev } func ReadLastNLines(f *os.File, lines int) (string, error) { if lines <= 0 { return "", fmt.Errorf("invalid line count") } stat, err := f.Stat() if err != nil { return "", err } data, err := syscall.Mmap(int(f.Fd()), 0, int(stat.Size()), syscall.PROT_READ, syscall.MAP_SHARED) if err != nil { return "", err } defer syscall.Munmap(data) for i := len(data) - 1; i >= 0; i-- { if data[i] == '\n' { lines-- } if lines < 0 { return string(data[i+1 : len(data)]), nil } } return string(data), nil } func SetSize(fd int, width int, height int) (err error) { var dimensions [4]uint16 dimensions[0] = uint16(height) dimensions[1] = uint16(width) if _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), uintptr(syscall.TIOCSWINSZ), uintptr(unsafe.Pointer(&dimensions)), 0, 0, 0); err != 0 { return err } return nil } lxd-2.0.2/shared/util_test.go000066400000000000000000000033171272140510300161120ustar00rootroot00000000000000package shared import ( "fmt" "io/ioutil" "os" "strings" "testing" ) func TestFileCopy(t *testing.T) { helloWorld := []byte("hello world\n") source, err := ioutil.TempFile("", "") if err != nil { t.Error(err) return } defer os.Remove(source.Name()) if err := WriteAll(source, helloWorld); err != nil { source.Close() t.Error(err) return } source.Close() dest, err := ioutil.TempFile("", "") defer os.Remove(dest.Name()) if err != nil { t.Error(err) return } dest.Close() if err := FileCopy(source.Name(), dest.Name()); err != nil { t.Error(err) return } dest2, err := os.Open(dest.Name()) if err != nil { t.Error(err) return } content, err := ioutil.ReadAll(dest2) if err != nil { t.Error(err) return } if string(content) != string(helloWorld) { t.Error("content mismatch: ", string(content), "!=", string(helloWorld)) return } } func TestReadLastNLines(t *testing.T) { source, err := ioutil.TempFile("", "") if err != nil { t.Error(err) return } defer os.Remove(source.Name()) for i := 0; i 
< 50; i++ { fmt.Fprintf(source, "%d\n", i) } lines, err := ReadLastNLines(source, 100) if err != nil { t.Error(err) return } split := strings.Split(lines, "\n") for i := 0; i < 50; i++ { if fmt.Sprintf("%d", i) != split[i] { t.Error(fmt.Sprintf("got %s expected %d", split[i], i)) return } } source.Seek(0, 0) for i := 0; i < 150; i++ { fmt.Fprintf(source, "%d\n", i) } lines, err = ReadLastNLines(source, 100) if err != nil { t.Error(err) return } split = strings.Split(lines, "\n") for i := 0; i < 100; i++ { if fmt.Sprintf("%d", i+50) != split[i] { t.Error(fmt.Sprintf("got %s expected %d", split[i], i)) return } } } lxd-2.0.2/test/README.md
# How to run

To run all tests, including the Go tests, run from repository root:

    sudo -E make check

To run only the integration tests, run from the test directory:

    sudo -E ./main.sh

# Environment variables

Name | Default | Description
:-- | :--- | :----------
LXD\_BACKEND | dir | What backend to test against (btrfs, dir, lvm or zfs)
LXD\_CONCURRENT | 0 | Run concurrency tests, very CPU intensive
LXD\_DEBUG | 0 | Run lxd, lxc and the shell in debug mode (very verbose)
LXD\_INSPECT | 0 | Don't tear down the test environment on failure
LXD\_LOGS | "" | Path to a directory to copy all the LXD logs to
LXD\_OFFLINE | 0 | Skip anything that requires network access
LXD\_TEST\_IMAGE | "" (busybox test image) | Path to an image tarball to use instead of the default busybox image
LXD\_TMPFS | 0 | Sets up a tmpfs for the whole testsuite to run on (fast but needs memory)

lxd-2.0.2/test/backends/btrfs.sh #!/bin/sh btrfs_setup() { local LXD_DIR LXD_DIR=$1 echo "==> Setting up btrfs backend in 
${LXD_DIR}" if ! which btrfs >/dev/null 2>&1; then echo "Couldn't find the btrfs binary"; false fi truncate -s 100G "${TEST_DIR}/$(basename "${LXD_DIR}").btrfs" mkfs.btrfs "${TEST_DIR}/$(basename "${LXD_DIR}").btrfs" mount -o loop "${TEST_DIR}/$(basename "${LXD_DIR}").btrfs" "${LXD_DIR}" } btrfs_configure() { local LXD_DIR LXD_DIR=$1 echo "==> Configuring btrfs backend in ${LXD_DIR}" } btrfs_teardown() { local LXD_DIR LXD_DIR=$1 echo "==> Tearing down btrfs backend in ${LXD_DIR}" umount -l "${LXD_DIR}" rm -f "${TEST_DIR}/$(basename "${LXD_DIR}").btrfs" } lxd-2.0.2/test/backends/dir.sh000066400000000000000000000011421272140510300161360ustar00rootroot00000000000000#!/bin/sh # Nothing need be done for the dir backed, but we still need some functions. # This file can also serve as a skel file for what needs to be done to # implement a new backend. # Any necessary backend-specific setup dir_setup() { local LXD_DIR LXD_DIR=$1 echo "==> Setting up directory backend in ${LXD_DIR}" } # Do the API voodoo necessary to configure LXD to use this backend dir_configure() { local LXD_DIR LXD_DIR=$1 echo "==> Configuring directory backend in ${LXD_DIR}" } dir_teardown() { local LXD_DIR LXD_DIR=$1 echo "==> Tearing down directory backend in ${LXD_DIR}" } lxd-2.0.2/test/backends/lvm.sh000066400000000000000000000026731272140510300161700ustar00rootroot00000000000000#!/bin/sh lvm_setup() { local LXD_DIR LXD_DIR=$1 echo "==> Setting up lvm backend in ${LXD_DIR}" if ! which lvm >/dev/null 2>&1; then echo "Couldn't find the lvm binary"; false fi truncate -s 4G "${TEST_DIR}/$(basename "${LXD_DIR}").lvm" pvloopdev=$(losetup --show -f "${TEST_DIR}/$(basename "${LXD_DIR}").lvm") if [ ! 
-e "${pvloopdev}" ]; then echo "failed to setup loop" false fi echo "${pvloopdev}" > "${TEST_DIR}/$(basename "${LXD_DIR}").lvm.vg" pvcreate "${pvloopdev}" vgcreate "lxdtest-$(basename "${LXD_DIR}")" "${pvloopdev}" } lvm_configure() { local LXD_DIR LXD_DIR=$1 echo "==> Configuring lvm backend in ${LXD_DIR}" lxc config set storage.lvm_volume_size "10Mib" lxc config set storage.lvm_vg_name "lxdtest-$(basename "${LXD_DIR}")" } lvm_teardown() { local LXD_DIR LXD_DIR=$1 echo "==> Tearing down lvm backend in ${LXD_DIR}" SUCCESS=0 # shellcheck disable=SC2034 for i in $(seq 10); do vgremove -f "lxdtest-$(basename "${LXD_DIR}")" >/dev/null 2>&1 || true pvremove -f "$(cat "${TEST_DIR}/$(basename "${LXD_DIR}").lvm.vg")" >/dev/null 2>&1 || true if losetup -d "$(cat "${TEST_DIR}/$(basename "${LXD_DIR}").lvm.vg")"; then SUCCESS=1 break fi sleep 0.5 done if [ "${SUCCESS}" = "0" ]; then echo "Failed to tear down LVM" false fi rm -f "${TEST_DIR}/$(basename "${LXD_DIR}").lvm" rm -f "${TEST_DIR}/$(basename "${LXD_DIR}").lvm.vg" } lxd-2.0.2/test/backends/zfs.sh000066400000000000000000000021661272140510300161710ustar00rootroot00000000000000#!/bin/sh zfs_setup() { local LXD_DIR LXD_DIR=$1 echo "==> Setting up ZFS backend in ${LXD_DIR}" if ! which zfs >/dev/null 2>&1; then echo "Couldn't find zfs binary"; false fi truncate -s 100G "${LXD_DIR}/zfspool" # prefix lxdtest- here, as zfs pools must start with a letter, but tempdir # won't necessarily generate one that does. zpool create "lxdtest-$(basename "${LXD_DIR}")" "${LXD_DIR}/zfspool" -m none } zfs_configure() { local LXD_DIR LXD_DIR=$1 echo "==> Configuring ZFS backend in ${LXD_DIR}" lxc config set storage.zfs_pool_name "lxdtest-$(basename "${LXD_DIR}")" } zfs_teardown() { local LXD_DIR LXD_DIR=$1 echo "==> Tearing down ZFS backend in ${LXD_DIR}" # Wait up to 5s for zpool destroy to succeed SUCCESS=0 # shellcheck disable=SC2034 for i in $(seq 10); do zpool destroy -f "lxdtest-$(basename "${LXD_DIR}")" >/dev/null 2>&1 || true if ! 
zpool list -o name -H | grep -q "^lxdtest-$(basename "${LXD_DIR}")"; then SUCCESS=1 break fi sleep 0.5 done if [ "${SUCCESS}" = "0" ]; then echo "Failed to destroy the zpool" false fi } lxd-2.0.2/test/deps/000077500000000000000000000000001272140510300142075ustar00rootroot00000000000000lxd-2.0.2/test/deps/devlxd-client.go000066400000000000000000000030001272140510300172710ustar00rootroot00000000000000/* * An example of how to use lxd's golang /dev/lxd client. This is intended to * be run from inside a container. */ package main import ( "encoding/json" "fmt" "io/ioutil" "net" "net/http" "os" ) type DevLxdDialer struct { Path string } func (d DevLxdDialer) DevLxdDial(network, path string) (net.Conn, error) { addr, err := net.ResolveUnixAddr("unix", d.Path) if err != nil { return nil, err } conn, err := net.DialUnix("unix", nil, addr) if err != nil { return nil, err } return conn, err } var DevLxdTransport = &http.Transport{ Dial: DevLxdDialer{"/dev/lxd/sock"}.DevLxdDial, } func main() { c := http.Client{Transport: DevLxdTransport} raw, err := c.Get("http://meshuggah-rocks/") if err != nil { fmt.Println(err) os.Exit(1) } if raw.StatusCode != http.StatusOK { fmt.Println("http error", raw.StatusCode) result, err := ioutil.ReadAll(raw.Body) if err == nil { fmt.Println(string(result)) } os.Exit(1) } result := []string{} if err := json.NewDecoder(raw.Body).Decode(&result); err != nil { fmt.Println("err decoding response", err) os.Exit(1) } if result[0] != "/1.0" { fmt.Println("unknown response", result) os.Exit(1) } if len(os.Args) > 1 { raw, err := c.Get(fmt.Sprintf("http://meshuggah-rocks/1.0/config/%s", os.Args[1])) if err != nil { fmt.Println(err) os.Exit(1) } value, err := ioutil.ReadAll(raw.Body) if err != nil { fmt.Println(err) os.Exit(1) } fmt.Println(string(value)) } else { fmt.Println("/dev/lxd ok") } } lxd-2.0.2/test/deps/import-busybox000077500000000000000000000274751272140510300171570ustar00rootroot00000000000000#!/usr/bin/env python3 import argparse import 
atexit import hashlib import http.client import io import json import os import shutil import socket import subprocess import sys import tarfile import tempfile import uuid class FriendlyParser(argparse.ArgumentParser): def error(self, message): sys.stderr.write('\nerror: %s\n' % message) self.print_help() sys.exit(2) def find_on_path(command): """Is command on the executable search path?""" if 'PATH' not in os.environ: return False path = os.environ['PATH'] for element in path.split(os.pathsep): if not element: continue filename = os.path.join(element, command) if os.path.isfile(filename) and os.access(filename, os.X_OK): return True return False class UnixHTTPConnection(http.client.HTTPConnection): def __init__(self, path): http.client.HTTPConnection.__init__(self, 'localhost') self.path = path def connect(self): sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) sock.connect(self.path) self.sock = sock class LXD(object): workdir = None def __init__(self, path): self.lxd = UnixHTTPConnection(path) # Create our workdir self.workdir = tempfile.mkdtemp() atexit.register(self.cleanup) def cleanup(self): if self.workdir: shutil.rmtree(self.workdir) def rest_call(self, path, data=None, method="GET", headers={}): if method == "GET" and data: self.lxd.request( method, "%s?%s" % (path, "&".join(["%s=%s" % (key, value) for key, value in data.items()])), None, headers) else: self.lxd.request(method, path, data, headers) r = self.lxd.getresponse() d = json.loads(r.read().decode("utf-8")) return r.status, d def aliases_create(self, name, target): data = json.dumps({"target": target, "name": name}) status, data = self.rest_call("/1.0/images/aliases", data, "POST") if status != 200: raise Exception("Failed to create alias: %s" % name) def aliases_remove(self, name): status, data = self.rest_call("/1.0/images/aliases/%s" % name, method="DELETE") if status != 200: raise Exception("Failed to remove alias: %s" % name) def aliases_list(self): status, data =
self.rest_call("/1.0/images/aliases") return [alias.split("/1.0/images/aliases/")[-1] for alias in data['metadata']] def images_list(self, recursive=False): if recursive: status, data = self.rest_call("/1.0/images?recursion=1") return data['metadata'] else: status, data = self.rest_call("/1.0/images") return [image.split("/1.0/images/")[-1] for image in data['metadata']] def images_upload(self, path, filename, public): headers = {} if public: headers['X-LXD-public'] = "1" if isinstance(path, str): headers['Content-Type'] = "application/octet-stream" status, data = self.rest_call("/1.0/images", open(path, "rb"), "POST", headers) else: meta_path, rootfs_path = path boundary = str(uuid.uuid1()) upload_path = os.path.join(self.workdir, "upload") body = open(upload_path, "wb+") for name, path in [("metadata", meta_path), ("rootfs", rootfs_path)]: filename = os.path.basename(path) body.write(bytes("--%s\r\n" % boundary, "utf-8")) body.write(bytes("Content-Disposition: form-data; " "name=%s; filename=%s\r\n" % (name, filename), "utf-8")) body.write(b"Content-Type: application/octet-stream\r\n") body.write(b"\r\n") with open(path, "rb") as fd: shutil.copyfileobj(fd, body) body.write(b"\r\n") body.write(bytes("--%s--\r\n" % boundary, "utf-8")) body.write(b"\r\n") body.close() headers['Content-Type'] = "multipart/form-data; boundary=%s" \ % boundary status, data = self.rest_call("/1.0/images", open(upload_path, "rb"), "POST", headers) if status != 202: raise Exception("Failed to upload the image: %s" % status) status, data = self.rest_call(data['operation'] + "/wait", "", "GET", {}) if status != 200: raise Exception("Failed to query the operation: %s" % status) if data['status_code'] != 200: raise Exception("Failed to import the image: %s" % data['metadata']) return data['metadata']['metadata'] class Busybox(object): workdir = None def __init__(self): # Create our workdir self.workdir = tempfile.mkdtemp() atexit.register(self.cleanup) def cleanup(self): if self.workdir: 
shutil.rmtree(self.workdir) def create_tarball(self, split=False): xz = "pxz" if find_on_path("pxz") else "xz" destination_tar = os.path.join(self.workdir, "busybox.tar") target_tarball = tarfile.open(destination_tar, "w:") if split: destination_tar_rootfs = os.path.join(self.workdir, "busybox.rootfs.tar") target_tarball_rootfs = tarfile.open(destination_tar_rootfs, "w:") metadata = {'architecture': os.uname()[4], 'creation_date': int(os.stat("/bin/busybox").st_ctime), 'properties': { 'os': "Busybox", 'architecture': os.uname()[4], 'description': "Busybox %s" % os.uname()[4], 'name': "busybox-%s" % os.uname()[4] }, } # Add busybox with open("/bin/busybox", "rb") as fd: busybox_file = tarfile.TarInfo() busybox_file.size = os.stat("/bin/busybox").st_size busybox_file.mode = 0o755 if split: busybox_file.name = "bin/busybox" target_tarball_rootfs.addfile(busybox_file, fd) else: busybox_file.name = "rootfs/bin/busybox" target_tarball.addfile(busybox_file, fd) # Add symlinks busybox = subprocess.Popen(["/bin/busybox", "--list-full"], stdout=subprocess.PIPE, universal_newlines=True) busybox.wait() for path in busybox.stdout.read().split("\n"): if not path.strip(): continue symlink_file = tarfile.TarInfo() symlink_file.type = tarfile.SYMTYPE symlink_file.linkname = "/bin/busybox" if split: symlink_file.name = "%s" % path.strip() target_tarball_rootfs.addfile(symlink_file) else: symlink_file.name = "rootfs/%s" % path.strip() target_tarball.addfile(symlink_file) # Add directories for path in ("dev", "mnt", "proc", "root", "sys", "tmp"): directory_file = tarfile.TarInfo() directory_file.type = tarfile.DIRTYPE if split: directory_file.name = "%s" % path target_tarball_rootfs.addfile(directory_file) else: directory_file.name = "rootfs/%s" % path target_tarball.addfile(directory_file) # Add the metadata file metadata_yaml = json.dumps(metadata, sort_keys=True, indent=4, separators=(',', ': '), ensure_ascii=False).encode('utf-8') + b"\n" metadata_file = tarfile.TarInfo() 
metadata_file.size = len(metadata_yaml) metadata_file.name = "metadata.yaml" target_tarball.addfile(metadata_file, io.BytesIO(metadata_yaml)) # Add an /etc/inittab; this is to work around: # http://lists.busybox.net/pipermail/busybox/2015-November/083618.html # Basically, since there are some hardcoded defaults that misbehave, we # just pass an empty inittab so those aren't applied, and then busybox # doesn't spin forever. inittab = tarfile.TarInfo() inittab.size = 1 inittab.name = "/rootfs/etc/inittab" target_tarball.addfile(inittab, io.BytesIO(b"\n")) target_tarball.close() if split: target_tarball_rootfs.close() # Compress the tarball r = subprocess.call([xz, "-9", destination_tar]) if r: raise Exception("Failed to compress: %s" % destination_tar) if split: r = subprocess.call([xz, "-9", destination_tar_rootfs]) if r: raise Exception("Failed to compress: %s" % destination_tar_rootfs) return destination_tar + ".xz", destination_tar_rootfs + ".xz" else: return destination_tar + ".xz" if __name__ == "__main__": if "LXD_DIR" in os.environ: lxd_socket = os.path.join(os.environ['LXD_DIR'], "unix.socket") else: lxd_socket = "/var/lib/lxd/unix.socket" if not os.path.exists(lxd_socket): print("LXD isn't running.") sys.exit(1) lxd = LXD(lxd_socket) def setup_alias(aliases, fingerprint): existing = lxd.aliases_list() for alias in aliases: if alias in existing: lxd.aliases_remove(alias) lxd.aliases_create(alias, fingerprint) print("Setup alias: %s" % alias) def import_busybox(parser, args): busybox = Busybox() if args.split: meta_path, rootfs_path = busybox.create_tarball(split=True) with open(meta_path, "rb") as meta_fd: with open(rootfs_path, "rb") as rootfs_fd: fingerprint = hashlib.sha256(meta_fd.read() + rootfs_fd.read()).hexdigest() if fingerprint in lxd.images_list(): parser.exit(1, "This image is already in the store.\n") r = lxd.images_upload((meta_path, rootfs_path), meta_path.split("/")[-1], args.public) print("Image imported as: %s" % r['fingerprint']) else: 
path = busybox.create_tarball() with open(path, "rb") as fd: fingerprint = hashlib.sha256(fd.read()).hexdigest() if fingerprint in lxd.images_list(): parser.exit(1, "This image is already in the store.\n") r = lxd.images_upload(path, path.split("/")[-1], args.public) print("Image imported as: %s" % r['fingerprint']) setup_alias(args.alias, fingerprint) parser = FriendlyParser(description="Import a busybox image") parser.add_argument("--alias", action="append", default=[], help="Aliases for the image") parser.add_argument("--public", action="store_true", default=False, help="Make the image public") parser.add_argument("--split", action="store_true", default=False, help="Whether to create a split image") parser.set_defaults(func=import_busybox) # Call the function args = parser.parse_args() try: args.func(parser, args) except Exception as e: parser.error(e) lxd-2.0.2/test/deps/schema1.sql000066400000000000000000000031001272140510300162430ustar00rootroot00000000000000-- Database schema version 1 as taken from febb96e8164fbd189698da77383c26ce68b9762a CREATE TABLE certificates ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, fingerprint VARCHAR(255) NOT NULL, type INTEGER NOT NULL, name VARCHAR(255) NOT NULL, certificate TEXT NOT NULL, UNIQUE (fingerprint) ); CREATE TABLE containers ( id INTEGER primary key AUTOINCREMENT NOT NULL, name VARCHAR(255) NOT NULL, architecture INTEGER NOT NULL, type INTEGER NOT NULL, UNIQUE (name) ); CREATE TABLE containers_config ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, container_id INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (container_id) REFERENCES containers (id), UNIQUE (container_id, key) ); CREATE TABLE images ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, fingerprint VARCHAR(255) NOT NULL, filename VARCHAR(255) NOT NULL, size INTEGER NOT NULL, public INTEGER NOT NULL DEFAULT 0, architecture INTEGER NOT NULL, creation_date DATETIME, expiry_date DATETIME, upload_date DATETIME NOT NULL, UNIQUE 
(fingerprint) ); CREATE TABLE images_properties ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, image_id INTEGER NOT NULL, type INTEGER NOT NULL, key VARCHAR(255) NOT NULL, value TEXT, FOREIGN KEY (image_id) REFERENCES images (id) ); CREATE TABLE schema ( id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, version INTEGER NOT NULL, updated_at DATETIME NOT NULL, UNIQUE (version) ); INSERT INTO schema (version, updated_at) values (1, "now"); lxd-2.0.2/test/deps/server.crt000066400000000000000000000040321272140510300162260ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIFzjCCA7igAwIBAgIRAKnCQRdpkZ86oXYOd9hGrPgwCwYJKoZIhvcNAQELMB4x HDAaBgNVBAoTE2xpbnV4Y29udGFpbmVycy5vcmcwHhcNMTUwNzE1MDQ1NjQ0WhcN MjUwNzEyMDQ1NjQ0WjAeMRwwGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMIIC IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyViJkCzoxa1NYilXqGJog6xz lSm4xt8KIzayc0JdB9VxEdIVdJqUzBAUtyCS4KZ9MbPmMEOX9NbBASL0tRK58/7K Scq99Kj4XbVMLU1P/y5aW0ymnF0OpKbG6unmgAI2k/duRlbYHvGRdhlswpKl0Yst l8i2kXOK0Rxcz90FewcEXGSnIYW21sz8YpBLfIZqOx6XEV36mOdi3MLrhUSAhXDw Pay33Y7NonCQUBtiO7BT938cqI14FJrWdKon1UnODtzONcVBLTWtoe7D41+mx7EE Taq5OPxBSe0DD6KQcPOZ7ZSJEhIqVKMvzLyiOJpyShmhm4OuGNoAG6jAuSij/9Kc aLU4IitcrvFOuAo8M9OpiY9ZCR7Gb/qaPAXPAxE7Ci3f9DDNKXtPXDjhj3YG01+h fNXMW3kCkMImn0A/+mZUMdCL87GWN2AN3Do5qaIc5XVEt1gp+LVqJeMoZ/lAeZWT IbzcnkneOzE25m+bjw3r3WlR26amhyrWNwjGzRkgfEpw336kniX/GmwaCNgdNk+g 5aIbVxIHO0DbgkDBtdljR3VOic4djW/LtUIYIQ2egnPPyRR3fcFI+x5EQdVQYUXf jpGIwovUDyG0Lkam2tpdeEXvLMZr8+Lhzu+H6vUFSj3cz6gcw/Xepw40FOkYdAI9 LYB6nwpZLTVaOqZCJ2ECAwEAAaOCAQkwggEFMA4GA1UdDwEB/wQEAwIAoDATBgNV HSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMIHPBgNVHREEgccwgcSCCVVi dW50dVByb4IRMTAuMTY3LjE2MC4xODMvMjSCHzIwMDE6MTVjMDo2NzM1OmVlMDA6 OmU6ZTMxMy8xMjiCKWZkNTc6Yzg3ZDpmMWVlOmVlMDA6MjFkOjdkZmY6ZmUwOToz NzUzLzY0gikyMDAxOjE1YzA6NjczNTplZTAwOjIxZDo3ZGZmOmZlMDk6Mzc1My82 NIIbZmU4MDo6MjFkOjdkZmY6ZmUwOTozNzUzLzY0ghAxOTIuMTY4LjEyMi4xLzI0 MAsGCSqGSIb3DQEBCwOCAgEAmcJUSBH7cLw3auEEV1KewtdqY1ARVB/pafAtbe9F 
7ZKBbxUcS7cP3P1hRs5FH1bH44bIJKHxckctNUPqvC+MpXSryKinQ5KvGPNjGdlW 6EPlQr23btizC6hRdQ6RjEkCnQxhyTLmQ9n78nt47hjA96rFAhCUyfPdv9dI4Zux bBTJekhCx5taamQKoxr7tql4Y2TchVlwASZvOfar8I0GxBRFT8w9IjckOSLoT9/s OhlvXpeoxxFT7OHwqXEXdRUvw/8MGBo6JDnw+J/NGDBw3Z0goebG4FMT//xGSHia czl3A0M0flk4/45L7N6vctwSqi+NxVaJRKeiYPZyzOO9K/d+No+WVBPwKmyP8icQ b7FGTelPJOUolC6kmoyM+vyaNUoU4nz6lgOSHAtuqGNDWZWuX/gqzZw77hzDIgkN qisOHZWPVlG/iUh1JBkbglBaPeaa3zf0XwSdgwwf4v8Z+YtEiRqkuFgQY70eQKI/ CIkj1p0iW5IBEsEAGUGklz4ZwqJwH3lQIqDBzIgHe3EP4cXaYsx6oYhPSDdHLPv4 HMZhl05DP75CEkEWRD0AIaL7SHdyuYUmCZ2zdrMI7TEDrAqcUuPbYpHcdJ2wnYmi 2G8XHJibfu4PCpIm1J8kPL8rqpdgW3moKR8Mp0HJQOH4tSBr1Ep7xNLP1wg6PIe+ p7U= -----END CERTIFICATE----- lxd-2.0.2/test/deps/server.key000066400000000000000000000062531272140510300162350ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIIJKAIBAAKCAgEAyViJkCzoxa1NYilXqGJog6xzlSm4xt8KIzayc0JdB9VxEdIV dJqUzBAUtyCS4KZ9MbPmMEOX9NbBASL0tRK58/7KScq99Kj4XbVMLU1P/y5aW0ym nF0OpKbG6unmgAI2k/duRlbYHvGRdhlswpKl0Ystl8i2kXOK0Rxcz90FewcEXGSn IYW21sz8YpBLfIZqOx6XEV36mOdi3MLrhUSAhXDwPay33Y7NonCQUBtiO7BT938c qI14FJrWdKon1UnODtzONcVBLTWtoe7D41+mx7EETaq5OPxBSe0DD6KQcPOZ7ZSJ EhIqVKMvzLyiOJpyShmhm4OuGNoAG6jAuSij/9KcaLU4IitcrvFOuAo8M9OpiY9Z CR7Gb/qaPAXPAxE7Ci3f9DDNKXtPXDjhj3YG01+hfNXMW3kCkMImn0A/+mZUMdCL 87GWN2AN3Do5qaIc5XVEt1gp+LVqJeMoZ/lAeZWTIbzcnkneOzE25m+bjw3r3WlR 26amhyrWNwjGzRkgfEpw336kniX/GmwaCNgdNk+g5aIbVxIHO0DbgkDBtdljR3VO ic4djW/LtUIYIQ2egnPPyRR3fcFI+x5EQdVQYUXfjpGIwovUDyG0Lkam2tpdeEXv LMZr8+Lhzu+H6vUFSj3cz6gcw/Xepw40FOkYdAI9LYB6nwpZLTVaOqZCJ2ECAwEA AQKCAgBCe8GwoaOa4kaTCyOurg/kqqTftA8XW751MjJqbJdbZtcXE0+SWRiY6RZu AYt+MntUVhrEBQ3AAsloHqq+v5g3QQJ6qz9d8g1Qo/SrYMPxdtTPINhC+VdEdu1n 1CQQUKrE4QbAoxxp20o0vOB0vweR0WsUm2ntTUGhGsRqvoh4vzBpcbLeFtDwzG7p /MtwKtIZA1jOm0GMC5tRWet67cuiRFCPjOCJgAXWhWShjuk43FhdeNN1tIDaDOaT Tzwn6V7o+W/9wUxsKTVUKwrzoTno5kKNgrn2XxUP2/sOxpb7NPS2xj0cgnMHz3qR GBhYqGbkoOID/88U1acDew1oFktQL24yd8/cvooh7KLN3k5oSKjpKmGAKaMMwsSv ccRSM9EkTtgTANLpSFiVF738drZw7UXUsvVTCF8WHhMtGD50XOahR02D1kZnpqpe 
SdxJ9qFNEeozk6w56cTerJNz4od18/gQtNADcPI6WE+8NBrqYjN/X4CBNS76IEtp 5ddGbi6+4HgO5B0pU87f2bZH4BwR8XJ07wdMRyXXhmnKcnirkyqUtgHmLF3LZnGX +Fph5KmhBGs/ZovBvnBI2nREsMfNvzffK7x3hyFXv6J+XxILk4i3LkgKLJFC+RY0 sjWNQB5tHuA1dbq3AtsbfJcTK764kSaUsq0JoqPQgiSuiNoCIQKCAQEA1Fk4SR5I H1QHlXeQ/k1sg6B5H0uosPAnAQxjuI8SvYkty+b4diP+CJIS4IphgLIItROORUFE bOi6pj2D2oK04J55fhlJaE8LQs7i90nFXT4B09Ut4oBYGCz5aE/wAUxUanaq1dxj K17y+ejlqh7yKTwupHOvIm4ddDwU1U5H9J/Cyywvp5fznVIGMJynVk7zriXYM6aC tioNCbOTHwQxjYEaG3AwymXaI6sNwdNiAzgq6M7v43GF3IOj8SYK2VhVdLqLJPnL 6G5OqMRxxQtxOcSctFOuicu+Jq/KVWJGDaERQZJloHcBJCtO34ONswGJqC/PGoU+ Ny/BOaZdLQDIpwKCAQEA8rxOKaLuOWEi4MDJuAgQYqpO9JxY0h3yN1YrspBuGezR 4Lzdh0vUh9Jr4npV723gGwA7r8AcqIPZvSk8MmcYVuwoxz9VWYeNP8P6cRc3bDO8 shnSvFxV32gKTEH8fOH3/BlJOnbn62tebSFHnGxyh2WPsRbzAMOKj9Q3Yq6ad3DD 6rJhtopIedC3AWc3aVeO2FHPC+Lza0PhUVsHf5X7Bg+zQlHaaEXB0lysruXkDlU9 WdW+Ajvo0enhOROgEa7QBC74NsKZF4KJGMGTaglydRtVYbqfx4QbfgDU5h2zaUnB lRINZvKNYGRXDN944ymynE9bo4xfOERbWc68GFaItwKCAQBCY+qvIaKW+OSuHIXe nEJTHPcBi9wgBdWMBF2hNEo9rAf/eiUweqxP7autPFajsAX85zJSAMft7Q1+MDlr NfZrS+DcRfenfx8cMibP/eaQ8nQL0NjZuhrQ5C7OKD/3h+/UoWlkF9WBl9wLun8j oy0/KyvCCtE0yIy47Jfu4NyqZNC4SQZVNbLa+uwogrHm0CRrzDU+YM75OUh+QgC7 b8o2XajV70ux3ApJoI9ajEZWj1cLFrf1umaJvTaijKxTq8R8DF64nsjb0LETHugb HSq3TvtXfdpSBrtayRdPfrw8QqFsiOLxOoPG1SuBwlWpI8/wH5J2zjXXdzzIU3VK PrZ9AoIBAQDazTjbuT1pxZCN7donJEW42nHPdvttc4b5sJg1HpHQlrNdFIHPyl/q iperD8FU0MM5M42Zz99FW4yzQW88s8ex2rCrYgCKcnC1cO/YbygLRduq4zIdjlHt zrexo6132K0TtqtWowZNJHx6fIwziWH3gGn1JI2pO5o0KgQ+1MryLVi8v0zrIV1R SP0dq6+8Kivd/GhY+5uWLhr1nct1i3k6Ln7Uojnw0ihzegxCn4FiFh32U4AyPVSR m3PkYjdgmSZzDu+5VNJw6b6w7RT3eUqOGzRsorASRZgOjatbPpyRpOV1fU9NZAhi QjBhrzMl+VlCIxqkowzWCHAb1QmiGqajAoIBAGYKD5h7jTgPFKFlMViTg8LoMcQl 9vbpmWkB+WdY5xXOwO0hO99rFDmLx6elsmYjdpq8zJkOFTnSB2o3IpenxZltNMsI +aDlZWxDxokTxr6gbQPPrjePT1oON0/6sLEYkDOln8H1P9jmLPqTrET0DxCMgE5D NE9TAEuUKVhRTWy6FSdP58hUimyVnlbnvbGOh2tviNO+TK/H7k0WjRg57Sz9XTHO q36ob5TEsQngkTATEoksE9xhXFxtmTm/nu/26wN2Py49LSwu2aAYTfX/KhQKklNX 
P/tP5//z+hGeba8/xv8YhEr7vhbnlBdwp0wHJj5g7nHAbYfo9ELbXSON8wc=
-----END RSA PRIVATE KEY-----
lxd-2.0.2/test/extras/000077500000000000000000000000001272140510300145625ustar00rootroot00000000000000
lxd-2.0.2/test/extras/speedtest_create.sh000077500000000000000000000014101272140510300204400ustar00rootroot00000000000000
#!/bin/bash
MYDIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
CIMAGE="testimage"
CNAME="speedtest"

count=${1}
if [ "x${count}" == "x" ]; then
  echo "USAGE: ${0} 10"
  echo "This creates 10 busybox containers"
  exit 1
fi

if [ "x${2}" != "xnotime" ]; then
  time ${0} ${count} notime
  exit 0
fi

${MYDIR}/deps/import-busybox --alias busybox

PIDS=""
for c in $(seq 1 $count); do
  lxc init busybox "${CNAME}${c}" 2>&1 &
  PIDS="$PIDS $!"
done

for pid in $PIDS; do
  wait $pid
done

echo -e "\nlxc list: All shutdown"
time lxc list 1>/dev/null

PIDS=""
for c in $(seq 1 $count); do
  lxc start "${CNAME}${c}" 2>&1 &
  PIDS="$PIDS $!"
done

for pid in $PIDS; do
  wait $pid
done

echo -e "\nlxc list: All started"
time lxc list 1>/dev/null

echo -e "\nRun completed"
lxd-2.0.2/test/extras/speedtest_delete.sh000077500000000000000000000007231272140510300204450ustar00rootroot00000000000000
#!/bin/bash
MYDIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
CIMAGE="testimage"
CNAME="speedtest"

count=${1}
if [ "x${count}" == "x" ]; then
  echo "USAGE: ${0} 10"
  echo "This deletes 10 busybox containers"
  exit 1
fi

if [ "x${2}" != "xnotime" ]; then
  time ${0} ${count} notime
  exit 0
fi

PIDS=""
for c in $(seq 1 $count); do
  lxc delete "${CNAME}${c}" 2>&1 &
  PIDS="$PIDS $!"
done

for pid in $PIDS; do
  wait $pid
done

echo -e "\nRun completed"
lxd-2.0.2/test/extras/stresstest.sh000077500000000000000000000114631272140510300173470ustar00rootroot00000000000000
#!/bin/bash
export PATH=$GOPATH/bin:$PATH

# /tmp isn't mounted exec on most systems, so we can't actually start
# containers that are created there.
export SRC_DIR=$(pwd) export LXD_DIR=$(mktemp -d -p $(pwd)) chmod 777 "${LXD_DIR}" export LXD_CONF=$(mktemp -d) export LXD_FUIDMAP_DIR=${LXD_DIR}/fuidmap mkdir -p ${LXD_FUIDMAP_DIR} BASEURL=https://127.0.0.1:18443 RESULT=failure set -e if [ -n "$LXD_DEBUG" ]; then set -x debug=--debug fi echo "==> Running the LXD testsuite" BASEURL=https://127.0.0.1:18443 my_curl() { curl -k -s --cert "${LXD_CONF}/client.crt" --key "${LXD_CONF}/client.key" $@ } wait_for() { op=$($@ | jq -r .operation) my_curl $BASEURL$op/wait } lxc() { INJECTED=0 CMD="$(which lxc)" for arg in $@; do if [ "$arg" = "--" ]; then INJECTED=1 CMD="$CMD $debug" CMD="$CMD --" else CMD="$CMD \"$arg\"" fi done if [ "$INJECTED" = "0" ]; then CMD="$CMD $debug" fi eval "$CMD" } cleanup() { read -p "Tests Completed ($RESULT): hit enter to continue" x echo "==> Cleaning up" # Try to stop all the containers my_curl "$BASEURL/1.0/containers" | jq -r .metadata[] 2>/dev/null | while read -r line; do wait_for my_curl -X PUT "$BASEURL$line/state" -d "{\"action\":\"stop\",\"force\":true}" done # kill the lxds which share our pgrp as parent mygrp=`awk '{ print $5 }' /proc/self/stat` for p in `pidof lxd`; do pgrp=`awk '{ print $5 }' /proc/$p/stat` if [ "$pgrp" = "$mygrp" ]; then do_kill_lxd $p fi done # Apparently we need to wait a while for everything to die sleep 3 rm -Rf ${LXD_DIR} rm -Rf ${LXD_CONF} echo "" echo "" echo "==> Test result: $RESULT" } trap cleanup EXIT HUP INT TERM if [ -z "`which lxc`" ]; then echo "==> Couldn't find lxc" && false fi spawn_lxd() { # LXD_DIR is local here because since `lxc` is actually a function, it # overwrites the environment and we would lose LXD_DIR's value otherwise. 
local LXD_DIR addr=$1 lxddir=$2 shift shift echo "==> Spawning lxd on $addr in $lxddir" LXD_DIR=$lxddir lxd ${DEBUG} $extraargs $* 2>&1 > $lxddir/lxd.log & echo "==> Confirming lxd on $addr is responsive" alive=0 while [ $alive -eq 0 ]; do [ -e "${lxddir}/unix.socket" ] && LXD_DIR=$lxddir lxc finger && alive=1 sleep 1s done echo "==> Binding to network" LXD_DIR=$lxddir lxc config set core.https_address $addr echo "==> Setting trust password" LXD_DIR=$lxddir lxc config set core.trust_password foo } spawn_lxd 127.0.0.1:18443 $LXD_DIR ## tests go here if [ ! -e "$LXD_TEST_IMAGE" ]; then echo "Please define LXD_TEST_IMAGE" false fi lxc image import $LXD_TEST_IMAGE --alias busybox lxc image list lxc list NUMCREATES=5 createthread() { echo "createthread: I am $$" for i in `seq 1 $NUMCREATES`; do echo "createthread: starting loop $i out of $NUMCREATES" declare -a pids for j in `seq 1 20`; do lxc launch busybox b.$i.$j & pids[$j]=$! done for j in `seq 1 20`; do # ignore errors if the task has already exited wait ${pids[$j]} 2>/dev/null || true done echo "createthread: deleting..." for j in `seq 1 20`; do lxc delete b.$i.$j & pids[$j]=$! done for j in `seq 1 20`; do # ignore errors if the task has already exited wait ${pids[$j]} 2>/dev/null || true done done exit 0 } listthread() { echo "listthread: I am $$" while [ 1 ]; do lxc list sleep 2s done exit 0 } configthread() { echo "configthread: I am $$" for i in `seq 1 20`; do lxc profile create p$i lxc profile set p$i limits.memory 100MB lxc profile delete p$i done exit 0 } disturbthread() { echo "disturbthread: I am $$" while [ 1 ]; do lxc profile create empty lxc init busybox disturb1 lxc profile apply disturb1 empty lxc start disturb1 lxc exec disturb1 -- ps -ef lxc stop disturb1 --force lxc delete disturb1 lxc profile delete empty done exit 0 } echo "Starting create thread" createthread 2>&1 | tee $LXD_DIR/createthread.out & p1=$! 
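The `createthread` loop above fans out 20 background `lxc launch` jobs per iteration, records their PIDs, and `wait`s on each before deleting the batch. The same fan-out/join pattern can be sketched in Python with a thread pool; the `launch` function here is a hypothetical stand-in for the real `lxc launch` call, not part of this test suite:

```python
from concurrent.futures import ThreadPoolExecutor

def launch(name):
    # Stand-in for "lxc launch busybox NAME"; a real version would shell out.
    return "launched %s" % name

def create_batch(loop, batch_size=20):
    # One createthread iteration: start batch_size jobs, then join them all,
    # mirroring the PIDS array and the "wait" loop in the shell script.
    names = ["b.%d.%d" % (loop, j) for j in range(1, batch_size + 1)]
    with ThreadPoolExecutor(max_workers=batch_size) as pool:
        return list(pool.map(launch, names))

results = create_batch(1)
```

As in the shell version, the join step bounds the amount of concurrent work: no new batch starts until every job from the current one has completed.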
echo "starting the disturb thread" disturbthread 2>&1 | tee $LXD_DIR/disturbthread.out & pdisturb=$! echo "Starting list thread" listthread 2>&1 | tee $LXD_DIR/listthread.out & p2=$! echo "Starting config thread" configthread 2>&1 | tee $LXD_DIR/configthread.out & p3=$! # wait for listthread to finish wait $p1 # and configthread, it should be quick wait $p3 echo "The creation loop is done, killing the list and disturb threads" kill $p2 wait $p2 || true kill $pdisturb wait $pdisturb || true RESULT=success lxd-2.0.2/test/lxd-benchmark/000077500000000000000000000000001272140510300157735ustar00rootroot00000000000000lxd-2.0.2/test/lxd-benchmark/main.go000066400000000000000000000177151272140510300172610ustar00rootroot00000000000000package main import ( "fmt" "io/ioutil" "os" "strings" "sync" "time" "github.com/lxc/lxd" "github.com/lxc/lxd/shared" "github.com/lxc/lxd/shared/gnuflag" ) var argCount = gnuflag.Int("count", 100, "Number of containers to create") var argParallel = gnuflag.Int("parallel", -1, "Number of threads to use") var argImage = gnuflag.String("image", "ubuntu:", "Image to use for the test") var argPrivileged = gnuflag.Bool("privileged", false, "Use privileged containers") var argFreeze = gnuflag.Bool("freeze", false, "Freeze the container right after start") func main() { err := run(os.Args) if err != nil { fmt.Fprintf(os.Stderr, "error: %s\n", err) os.Exit(1) } os.Exit(0) } func run(args []string) error { // Parse command line gnuflag.Parse(true) if len(os.Args) == 1 || !shared.StringInSlice(os.Args[1], []string{"spawn", "delete"}) { fmt.Printf("Usage: %s spawn [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--parallel=COUNT]\n", os.Args[0]) fmt.Printf(" %s delete [--parallel=COUNT]\n\n", os.Args[0]) gnuflag.Usage() fmt.Printf("\n") return fmt.Errorf("An action (spawn or delete) must be passed.") } // Connect to LXD c, err := lxd.NewClient(&lxd.DefaultConfig, "local") if err != nil { return err } switch os.Args[1] { case "spawn": return 
spawnContainers(c, *argCount, *argImage, *argPrivileged) case "delete": return deleteContainers(c) } return nil } func logf(format string, args ...interface{}) { fmt.Printf(fmt.Sprintf("[%s] %s\n", time.Now().Format(time.StampMilli), format), args...) } func spawnContainers(c *lxd.Client, count int, image string, privileged bool) error { batch := *argParallel if batch < 1 { // Detect the number of parallel actions cpus, err := ioutil.ReadDir("/sys/bus/cpu/devices") if err != nil { return err } batch = len(cpus) } batches := count / batch remainder := count % batch // Print the test header st, err := c.ServerStatus() if err != nil { return err } privilegedStr := "unprivileged" if privileged { privilegedStr = "privileged" } mode := "normal startup" if *argFreeze { mode = "start and freeze" } fmt.Printf("Test environment:\n") fmt.Printf(" Server backend: %s\n", st.Environment.Server) fmt.Printf(" Server version: %s\n", st.Environment.ServerVersion) fmt.Printf(" Kernel: %s\n", st.Environment.Kernel) fmt.Printf(" Kernel architecture: %s\n", st.Environment.KernelArchitecture) fmt.Printf(" Kernel version: %s\n", st.Environment.KernelVersion) fmt.Printf(" Storage backend: %s\n", st.Environment.Storage) fmt.Printf(" Storage version: %s\n", st.Environment.StorageVersion) fmt.Printf(" Container backend: %s\n", st.Environment.Driver) fmt.Printf(" Container version: %s\n", st.Environment.DriverVersion) fmt.Printf("\n") fmt.Printf("Test variables:\n") fmt.Printf(" Container count: %d\n", count) fmt.Printf(" Container mode: %s\n", privilegedStr) fmt.Printf(" Startup mode: %s\n", mode) fmt.Printf(" Image: %s\n", image) fmt.Printf(" Batches: %d\n", batches) fmt.Printf(" Batch size: %d\n", batch) fmt.Printf(" Remainder: %d\n", remainder) fmt.Printf("\n") // Pre-load the image var fingerprint string if strings.Contains(image, ":") { var remote string remote, fingerprint = lxd.DefaultConfig.ParseRemoteAndContainer(image) if fingerprint == "" { fingerprint = "default" } d, err := 
lxd.NewClient(&lxd.DefaultConfig, remote) if err != nil { return err } target := d.GetAlias(fingerprint) if target != "" { fingerprint = target } _, err = c.GetImageInfo(fingerprint) if err != nil { logf("Importing image into local store: %s", fingerprint) err := d.CopyImage(fingerprint, c, false, nil, false, false, nil) if err != nil { return err } } else { logf("Found image in local store: %s", fingerprint) } } else { fingerprint = image logf("Found image in local store: %s", fingerprint) } // Start the containers spawnedCount := 0 nameFormat := "benchmark-%." + fmt.Sprintf("%d", len(fmt.Sprintf("%d", count))) + "d" wgBatch := sync.WaitGroup{} nextStat := batch startContainer := func(name string) { defer wgBatch.Done() // Configure config := map[string]string{} if privileged { config["security.privileged"] = "true" } config["user.lxd-benchmark"] = "true" // Create resp, err := c.Init(name, "local", fingerprint, nil, config, false) if err != nil { logf(fmt.Sprintf("Failed to spawn container '%s': %s", name, err)) return } err = c.WaitForSuccess(resp.Operation) if err != nil { logf(fmt.Sprintf("Failed to spawn container '%s': %s", name, err)) return } // Start resp, err = c.Action(name, "start", -1, false, false) if err != nil { logf(fmt.Sprintf("Failed to spawn container '%s': %s", name, err)) return } err = c.WaitForSuccess(resp.Operation) if err != nil { logf(fmt.Sprintf("Failed to spawn container '%s': %s", name, err)) return } // Freeze if *argFreeze { resp, err = c.Action(name, "freeze", -1, false, false) if err != nil { logf(fmt.Sprintf("Failed to spawn container '%s': %s", name, err)) return } err = c.WaitForSuccess(resp.Operation) if err != nil { logf(fmt.Sprintf("Failed to spawn container '%s': %s", name, err)) return } } } logf("Starting the test") timeStart := time.Now() for i := 0; i < batches; i++ { for j := 0; j < batch; j++ { spawnedCount = spawnedCount + 1 name := fmt.Sprintf(nameFormat, spawnedCount) wgBatch.Add(1) go startContainer(name) } 
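The `nameFormat` string built above zero-pads each container index to the width of the total count (so a 100-container run yields `benchmark-007`, via Go's `%.3d` precision flag). The same width derivation in Python, purely for illustration:

```python
def name_format(count):
    # Width of the largest index: count=100 -> 3 digits, count=5 -> 1 digit.
    width = len(str(count))
    return "benchmark-%0" + str(width) + "d"

def container_name(count, index):
    # Zero-padded name, equivalent to Go's fmt.Sprintf with "%.Nd".
    return name_format(count) % index
```

Padding to a fixed width keeps `lxc list` output (and any lexicographic sort of the names) in numeric order.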
wgBatch.Wait() if spawnedCount >= nextStat { interval := time.Since(timeStart).Seconds() logf("Started %d containers in %.3fs (%.3f/s)", spawnedCount, interval, float64(spawnedCount)/interval) nextStat = nextStat * 2 } } for k := 0; k < remainder; k++ { spawnedCount = spawnedCount + 1 name := fmt.Sprintf(nameFormat, spawnedCount) wgBatch.Add(1) go startContainer(name) } wgBatch.Wait() logf("Test completed in %.3fs", time.Since(timeStart).Seconds()) return nil } func deleteContainers(c *lxd.Client) error { batch := *argParallel if batch < 1 { // Detect the number of parallel actions cpus, err := ioutil.ReadDir("/sys/bus/cpu/devices") if err != nil { return err } batch = len(cpus) } // List all the containers allContainers, err := c.ListContainers() if err != nil { return err } containers := []shared.ContainerInfo{} for _, container := range allContainers { if container.Config["user.lxd-benchmark"] != "true" { continue } containers = append(containers, container) } // Delete them all count := len(containers) logf("%d containers to delete", count) batches := count / batch deletedCount := 0 wgBatch := sync.WaitGroup{} nextStat := batch deleteContainer := func(ct shared.ContainerInfo) { defer wgBatch.Done() // Stop if ct.IsActive() { resp, err := c.Action(ct.Name, "stop", -1, true, false) if err != nil { logf("Failed to delete container: %s", ct.Name) return } err = c.WaitForSuccess(resp.Operation) if err != nil { logf("Failed to delete container: %s", ct.Name) return } } // Delete resp, err := c.Delete(ct.Name) if err != nil { logf("Failed to delete container: %s", ct.Name) return } err = c.WaitForSuccess(resp.Operation) if err != nil { logf("Failed to delete container: %s", ct.Name) return } } logf("Starting the cleanup") timeStart := time.Now() for i := 0; i < batches; i++ { for j := 0; j < batch; j++ { wgBatch.Add(1) go deleteContainer(containers[deletedCount]) deletedCount = deletedCount + 1 } wgBatch.Wait() if deletedCount >= nextStat { interval := 
time.Since(timeStart).Seconds() logf("Deleted %d containers in %.3fs (%.3f/s)", deletedCount, interval, float64(deletedCount)/interval) nextStat = nextStat * 2 } } for k := deletedCount; k < count; k++ { wgBatch.Add(1) go deleteContainer(containers[deletedCount]) deletedCount = deletedCount + 1 } wgBatch.Wait() logf("Cleanup completed") return nil } lxd-2.0.2/test/main.sh000077500000000000000000000263051272140510300145450ustar00rootroot00000000000000#!/bin/sh -eu [ -n "${GOPATH:-}" ] && export "PATH=${GOPATH}/bin:${PATH}" # Don't translate lxc output for parsing in it in tests. export "LC_ALL=C" if [ -n "${LXD_DEBUG:-}" ]; then set -x DEBUG="--debug" fi echo "==> Checking for dependencies" for dep in lxd lxc curl jq git xgettext sqlite3 msgmerge msgfmt shuf setfacl uuidgen pyflakes3 pep8 shellcheck; do which "${dep}" >/dev/null 2>&1 || (echo "Missing dependency: ${dep}" >&2 && exit 1) done if [ "${USER:-'root'}" != "root" ]; then echo "The testsuite must be run as root." >&2 exit 1 fi if [ -n "${LXD_LOGS:-}" ] && [ ! -d "${LXD_LOGS}" ]; then echo "Your LXD_LOGS path doesn't exist: ${LXD_LOGS}" exit 1 fi # Helper functions local_tcp_port() { while :; do port=$(shuf -i 10000-32768 -n 1) nc -l 127.0.0.1 "${port}" >/dev/null 2>&1 & pid=$! kill "${pid}" >/dev/null 2>&1 || continue wait "${pid}" || true echo "${port}" return done } # import all the backends for backend in backends/*.sh; do . "${backend}" done if [ -z "${LXD_BACKEND:-}" ]; then LXD_BACKEND=dir fi spawn_lxd() { set +x # LXD_DIR is local here because since $(lxc) is actually a function, it # overwrites the environment and we would lose LXD_DIR's value otherwise. local LXD_DIR lxddir=${1} shift # Copy pre generated Certs cp deps/server.crt "${lxddir}" cp deps/server.key "${lxddir}" # setup storage "$LXD_BACKEND"_setup "${lxddir}" echo "==> Spawning lxd in ${lxddir}" # shellcheck disable=SC2086 LXD_DIR="${lxddir}" lxd --logfile "${lxddir}/lxd.log" ${DEBUG-} "$@" 2>&1 & LXD_PID=$! 
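`local_tcp_port` above finds a free port by probing random numbers in 10000–32768 with a short-lived `nc` listener until one binds. An alternative sketch in Python asks the kernel for an unused ephemeral port by binding to port 0; this is not what main.sh does, just an equivalent technique:

```python
import socket

def local_tcp_port():
    # Bind to port 0: the kernel assigns a currently unused port, which we
    # read back and then release for the caller to use.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = local_tcp_port()
```

Both approaches leave a small race window between releasing the port and the caller binding it; the shell version simply accepts that, retrying on the next iteration if the later bind fails.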
echo "${LXD_PID}" > "${lxddir}/lxd.pid" echo "${lxddir}" >> "${TEST_DIR}/daemons" echo "==> Spawned LXD (PID is ${LXD_PID})" echo "==> Confirming lxd is responsive" LXD_DIR="${lxddir}" lxd waitready --timeout=300 echo "==> Binding to network" # shellcheck disable=SC2034 for i in $(seq 10); do addr="127.0.0.1:$(local_tcp_port)" LXD_DIR="${lxddir}" lxc config set core.https_address "${addr}" || continue echo "${addr}" > "${lxddir}/lxd.addr" echo "==> Bound to ${addr}" break done echo "==> Setting trust password" LXD_DIR="${lxddir}" lxc config set core.trust_password foo if [ -n "${LXD_DEBUG:-}" ]; then set -x fi echo "==> Configuring storage backend" "$LXD_BACKEND"_configure "${lxddir}" } lxc() { LXC_LOCAL=1 lxc_remote "$@" RET=$? unset LXC_LOCAL return ${RET} } lxc_remote() { set +x injected=0 cmd=$(which lxc) # shellcheck disable=SC2048,SC2068 for arg in $@; do if [ "${arg}" = "--" ]; then injected=1 cmd="${cmd} ${DEBUG:-}" [ -n "${LXC_LOCAL}" ] && cmd="${cmd} --force-local" cmd="${cmd} --" elif [ "${arg}" = "--force-local" ]; then continue else cmd="${cmd} \"${arg}\"" fi done if [ "${injected}" = "0" ]; then cmd="${cmd} ${DEBUG-}" fi if [ -n "${LXD_DEBUG:-}" ]; then set -x fi eval "${cmd}" } my_curl() { curl -k -s --cert "${LXD_CONF}/client.crt" --key "${LXD_CONF}/client.key" "$@" } wait_for() { addr=${1} shift op=$("$@" | jq -r .operation) my_curl "https://${addr}${op}/wait" } ensure_has_localhost_remote() { addr=${1} if ! lxc remote list | grep -q "localhost"; then lxc remote add localhost "https://${addr}" --accept-certificate --password foo fi } ensure_import_testimage() { if ! 
lxc image alias list | grep -q "^| testimage\s*|.*$"; then if [ -e "${LXD_TEST_IMAGE:-}" ]; then lxc image import "${LXD_TEST_IMAGE}" --alias testimage else deps/import-busybox --alias testimage fi fi } check_empty() { if [ "$(find "${1}" 2> /dev/null | wc -l)" -gt "1" ]; then echo "${1} is not empty, content:" find "${1}" false fi } check_empty_table() { if [ -n "$(sqlite3 "${1}" "SELECT * FROM ${2};")" ]; then echo "DB table ${2} is not empty, content:" sqlite3 "${1}" "SELECT * FROM ${2};" false fi } kill_lxd() { # LXD_DIR is local here because since $(lxc) is actually a function, it # overwrites the environment and we would lose LXD_DIR's value otherwise. local LXD_DIR daemon_dir=${1} LXD_DIR=${daemon_dir} daemon_pid=$(cat "${daemon_dir}/lxd.pid") echo "==> Killing LXD at ${daemon_dir}" if [ -e "${daemon_dir}/unix.socket" ]; then # Delete all containers echo "==> Deleting all containers" for container in $(lxc list --force-local | tail -n+3 | grep "^| " | cut -d' ' -f2); do lxc delete "${container}" --force-local -f || true done # Delete all images echo "==> Deleting all images" for image in $(lxc image list --force-local | tail -n+3 | grep "^| " | cut -d'|' -f3 | sed "s/^ //g"); do lxc image delete "${image}" --force-local || true done echo "==> Checking for locked DB tables" for table in $(echo .tables | sqlite3 "${daemon_dir}/lxd.db"); do echo "SELECT * FROM ${table};" | sqlite3 "${daemon_dir}/lxd.db" >/dev/null done # Kill the daemon lxd shutdown || kill -9 "${daemon_pid}" 2>/dev/null || true # Cleanup shmounts (needed due to the forceful kill) find "${daemon_dir}" -name shmounts -exec "umount" "-l" "{}" \; >/dev/null 2>&1 || true fi if [ -n "${LXD_LOGS:-}" ]; then echo "==> Copying the logs" mkdir -p "${LXD_LOGS}/${daemon_pid}" cp -R "${daemon_dir}/logs/" "${LXD_LOGS}/${daemon_pid}/" cp "${daemon_dir}/lxd.log" "${LXD_LOGS}/${daemon_pid}/" fi echo "==> Checking for leftover files" rm -f "${daemon_dir}/containers/lxc-monitord.log" rm -f 
"${daemon_dir}/security/apparmor/cache/.features" check_empty "${daemon_dir}/containers/" check_empty "${daemon_dir}/devices/" check_empty "${daemon_dir}/images/" # FIXME: Once container logging rework is done, uncomment # check_empty "${daemon_dir}/logs/" check_empty "${daemon_dir}/security/apparmor/cache/" check_empty "${daemon_dir}/security/apparmor/profiles/" check_empty "${daemon_dir}/security/seccomp/" check_empty "${daemon_dir}/shmounts/" check_empty "${daemon_dir}/snapshots/" echo "==> Checking for leftover DB entries" check_empty_table "${daemon_dir}/lxd.db" "containers" check_empty_table "${daemon_dir}/lxd.db" "containers_config" check_empty_table "${daemon_dir}/lxd.db" "containers_devices" check_empty_table "${daemon_dir}/lxd.db" "containers_devices_config" check_empty_table "${daemon_dir}/lxd.db" "containers_profiles" check_empty_table "${daemon_dir}/lxd.db" "images" check_empty_table "${daemon_dir}/lxd.db" "images_aliases" check_empty_table "${daemon_dir}/lxd.db" "images_properties" # teardown storage "$LXD_BACKEND"_teardown "${daemon_dir}" # Wipe the daemon directory wipe "${daemon_dir}" # Remove the daemon from the list sed "\|^${daemon_dir}|d" -i "${TEST_DIR}/daemons" } cleanup() { set +e # Allow for inspection if [ -n "${LXD_INSPECT:-}" ]; then echo "==> Test result: ${TEST_RESULT}" if [ "${TEST_RESULT}" != "success" ]; then echo "failed test: ${TEST_CURRENT}" fi # shellcheck disable=SC2086 printf "To poke around, use:\n LXD_DIR=%s LXD_CONF=%s sudo -E %s/bin/lxc COMMAND\n" "${LXD_DIR}" "${LXD_CONF}" ${GOPATH:-} echo "Tests Completed (${TEST_RESULT}): hit enter to continue" # shellcheck disable=SC2034 read nothing fi echo "==> Cleaning up" # Kill all the LXD instances while read daemon_dir; do kill_lxd "${daemon_dir}" done < "${TEST_DIR}/daemons" # Wipe the test environment wipe "${TEST_DIR}" echo "" echo "" echo "==> Test result: ${TEST_RESULT}" if [ "${TEST_RESULT}" != "success" ]; then echo "failed test: ${TEST_CURRENT}" fi } wipe() { if which 
btrfs >/dev/null 2>&1; then rm -Rf "${1}" 2>/dev/null || true if [ -d "${1}" ]; then find "${1}" | tac | xargs btrfs subvolume delete >/dev/null 2>&1 || true fi fi # shellcheck disable=SC2009 ps aux | grep lxc-monitord | grep "${1}" | awk '{print $2}' | while read pid; do kill -9 "${pid}" done if [ -f "${TEST_DIR}/loops" ]; then while read line; do losetup -d "${line}" || true done < "${TEST_DIR}/loops" fi if mountpoint -q "${1}"; then umount "${1}" fi rm -Rf "${1}" } # Must be set before cleanup() TEST_CURRENT=setup TEST_RESULT=failure trap cleanup EXIT HUP INT TERM # Import all the testsuites for suite in suites/*.sh; do . "${suite}" done # Setup test directory TEST_DIR=$(mktemp -d -p "$(pwd)" tmp.XXX) chmod +x "${TEST_DIR}" if [ -n "${LXD_TMPFS:-}" ]; then mount -t tmpfs tmpfs "${TEST_DIR}" -o mode=0751 fi LXD_CONF=$(mktemp -d -p "${TEST_DIR}" XXX) export LXD_CONF # Setup the first LXD LXD_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) export LXD_DIR chmod +x "${LXD_DIR}" spawn_lxd "${LXD_DIR}" LXD_ADDR=$(cat "${LXD_DIR}/lxd.addr") export LXD_ADDR # Setup the second LXD LXD2_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) chmod +x "${LXD2_DIR}" spawn_lxd "${LXD2_DIR}" LXD2_ADDR=$(cat "${LXD2_DIR}/lxd.addr") export LXD2_ADDR # allow for running a specific set of tests if [ "$#" -gt 0 ]; then "test_${1}" TEST_RESULT=success exit fi echo "==> TEST: doing static analysis of commits" TEST_CURRENT=test_static_analysis test_static_analysis echo "==> TEST: checking dependencies" TEST_CURRENT=test_check_deps test_check_deps echo "==> TEST: Database schema update" TEST_CURRENT=test_database_update test_database_update echo "==> TEST: lxc remote url" TEST_CURRENT=test_remote_url test_remote_url echo "==> TEST: lxc remote administration" TEST_CURRENT=test_remote_admin test_remote_admin echo "==> TEST: basic usage" TEST_CURRENT=test_basic_usage test_basic_usage echo "==> TEST: security" TEST_CURRENT=test_security test_security echo "==> TEST: images (and cached image expiry)" 
TEST_CURRENT=test_image_expiry test_image_expiry if [ -n "${LXD_CONCURRENT:-}" ]; then echo "==> TEST: concurrent exec" TEST_CURRENT=test_concurrent_exec test_concurrent_exec echo "==> TEST: concurrent startup" TEST_CURRENT=test_concurrent test_concurrent fi echo "==> TEST: lxc remote usage" TEST_CURRENT=test_remote_usage test_remote_usage echo "==> TEST: snapshots" TEST_CURRENT=test_snapshots test_snapshots echo "==> TEST: snapshot restore" TEST_CURRENT=test_snap_restore test_snap_restore echo "==> TEST: profiles, devices and configuration" TEST_CURRENT=test_config_profiles test_config_profiles echo "==> TEST: server config" TEST_CURRENT=test_server_config test_server_config echo "==> TEST: filemanip" TEST_CURRENT=test_filemanip test_filemanip echo "==> TEST: devlxd" TEST_CURRENT=test_devlxd test_devlxd if which fuidshift >/dev/null 2>&1; then echo "==> TEST: uidshift" TEST_CURRENT=test_fuidshift test_fuidshift else echo "==> SKIP: fuidshift (binary missing)" fi echo "==> TEST: migration" TEST_CURRENT=test_migration test_migration curversion=$(dpkg -s lxc | awk '/^Version/ { print $2 }') if dpkg --compare-versions "${curversion}" gt 1.1.2-0ubuntu3; then echo "==> TEST: fdleak" TEST_CURRENT=test_fdleak test_fdleak else # We temporarily skip the fdleak test because a bug in lxc is # known to make it # fail without lxc commit # 858377e: # logs: introduce a thread-local 'current' lxc_config (v2) echo "==> SKIPPING TEST: fdleak" fi echo "==> TEST: cpu profiling" TEST_CURRENT=test_cpu_profiling test_cpu_profiling echo "==> TEST: memory profiling" TEST_CURRENT=test_mem_profiling test_mem_profiling TEST_RESULT=success lxd-2.0.2/test/suites/000077500000000000000000000000001272140510300145705ustar00rootroot00000000000000lxd-2.0.2/test/suites/basic.sh000066400000000000000000000226521272140510300162140ustar00rootroot00000000000000#!/bin/sh gen_third_cert() { [ -f "${LXD_CONF}/client3.crt" ] && return mv "${LXD_CONF}/client.crt" "${LXD_CONF}/client.crt.bak" mv 
"${LXD_CONF}/client.key" "${LXD_CONF}/client.key.bak" lxc_remote list > /dev/null 2>&1 mv "${LXD_CONF}/client.crt" "${LXD_CONF}/client3.crt" mv "${LXD_CONF}/client.key" "${LXD_CONF}/client3.key" mv "${LXD_CONF}/client.crt.bak" "${LXD_CONF}/client.crt" mv "${LXD_CONF}/client.key.bak" "${LXD_CONF}/client.key" } test_basic_usage() { ensure_import_testimage ensure_has_localhost_remote "${LXD_ADDR}" # Test image export sum=$(lxc image info testimage | grep ^Fingerprint | cut -d' ' -f2) lxc image export testimage "${LXD_DIR}/" if [ -e "${LXD_TEST_IMAGE:-}" ]; then name=$(basename "${LXD_TEST_IMAGE}") else name=${sum}.tar.xz fi [ "${sum}" = "$(sha256sum "${LXD_DIR}/${name}" | cut -d' ' -f1)" ] # Test an alias with slashes lxc image show "${sum}" lxc image alias create a/b/ "${sum}" lxc image alias delete a/b/ # Test alias list filtering lxc image alias create foo "${sum}" lxc image alias create bar "${sum}" lxc image alias list local: | grep -q foo lxc image alias list local: | grep -q bar lxc image alias list local: foo | grep -q -v bar lxc image alias list local: "${sum}" | grep -q foo lxc image alias list local: non-existent | grep -q -v non-existent lxc image alias delete foo lxc image alias delete bar # Test image delete lxc image delete testimage # test GET /1.0, since the client always puts to /1.0/ my_curl -f -X GET "https://${LXD_ADDR}/1.0" my_curl -f -X GET "https://${LXD_ADDR}/1.0/containers" # Re-import the image mv "${LXD_DIR}/${name}" "${LXD_DIR}/testimage.tar.xz" lxc image import "${LXD_DIR}/testimage.tar.xz" --alias testimage rm "${LXD_DIR}/testimage.tar.xz" # Test filename for image export (should be "out") lxc image export testimage "${LXD_DIR}/" [ "${sum}" = "$(sha256sum "${LXD_DIR}/testimage.tar.xz" | cut -d' ' -f1)" ] rm "${LXD_DIR}/testimage.tar.xz" # Test container creation lxc init testimage foo lxc list | grep foo | grep STOPPED lxc list fo | grep foo | grep STOPPED # Test list json format lxc list --format json | jq '.[]|select(.name="foo")' | 
grep '"name": "foo"' # Test container rename lxc move foo bar lxc list | grep -v foo lxc list | grep bar # Test container copy lxc copy bar foo lxc delete foo # gen untrusted cert gen_third_cert # don't allow requests without a cert to get trusted data curl -k -s -X GET "https://${LXD_ADDR}/1.0/containers/foo" | grep 403 # Test unprivileged container publish lxc publish bar --alias=foo-image prop1=val1 lxc image show foo-image | grep val1 curl -k -s --cert "${LXD_CONF}/client3.crt" --key "${LXD_CONF}/client3.key" -X GET "https://${LXD_ADDR}/1.0/images" | grep "/1.0/images/" && false lxc image delete foo-image # Test privileged container publish lxc profile create priv lxc profile set priv security.privileged true lxc init testimage barpriv -p default -p priv lxc publish barpriv --alias=foo-image prop1=val1 lxc image show foo-image | grep val1 curl -k -s --cert "${LXD_CONF}/client3.crt" --key "${LXD_CONF}/client3.key" -X GET "https://${LXD_ADDR}/1.0/images" | grep "/1.0/images/" && false lxc image delete foo-image lxc delete barpriv lxc profile delete priv # Test that containers without metadata.yaml are published successfully. # Note that this quick hack won't work for LVM, since it doesn't always mount # the container's filesystem. That's ok though: the logic we're trying to # test here is independent of storage backend, so running it for just one # backend (or all non-lvm backends) is enough. if [ "${LXD_BACKEND}" != "lvm" ]; then lxc init testimage nometadata rm "${LXD_DIR}/containers/nometadata/metadata.yaml" lxc publish nometadata --alias=nometadata-image lxc image delete nometadata-image lxc delete nometadata fi # Test public images lxc publish --public bar --alias=foo-image2 curl -k -s --cert "${LXD_CONF}/client3.crt" --key "${LXD_CONF}/client3.key" -X GET "https://${LXD_ADDR}/1.0/images" | grep "/1.0/images/" lxc image delete foo-image2 # Test invalid container names ! lxc init testimage -abc ! lxc init testimage abc- ! lxc init testimage 1234 ! 
lxc init testimage 12test ! lxc init testimage a_b_c ! lxc init testimage aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa # Test snapshot publish lxc snapshot bar lxc publish bar/snap0 --alias foo lxc init foo bar2 lxc list | grep bar2 lxc delete bar2 lxc image delete foo # test basic alias support printf "aliases:\n ls: list" >> "${LXD_CONF}/config.yml" lxc ls # Delete the bar container we've used for several tests lxc delete bar # lxc delete should also delete all snapshots of bar [ ! -d "${LXD_DIR}/snapshots/bar" ] # Test randomly named container creation lxc init testimage RDNAME=$(lxc list | tail -n2 | grep ^\| | awk '{print $2}') lxc delete "${RDNAME}" # Test "nonetype" container creation wait_for "${LXD_ADDR}" my_curl -X POST "https://${LXD_ADDR}/1.0/containers" \ -d "{\"name\":\"nonetype\",\"source\":{\"type\":\"none\"}}" lxc delete nonetype # Test "nonetype" container creation with an LXC config wait_for "${LXD_ADDR}" my_curl -X POST "https://${LXD_ADDR}/1.0/containers" \ -d "{\"name\":\"configtest\",\"config\":{\"raw.lxc\":\"lxc.hook.clone=/bin/true\"},\"source\":{\"type\":\"none\"}}" [ "$(my_curl "https://${LXD_ADDR}/1.0/containers/configtest" | jq -r .metadata.config[\"raw.lxc\"])" = "lxc.hook.clone=/bin/true" ] lxc delete configtest # Test socket activation LXD_ACTIVATION_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) spawn_lxd "${LXD_ACTIVATION_DIR}" ( set -e # shellcheck disable=SC2030 LXD_DIR=${LXD_ACTIVATION_DIR} ensure_import_testimage lxd activateifneeded --debug 2>&1 | grep -q "Daemon has core.https_address set, activating..." lxc config unset core.https_address --force-local lxd activateifneeded --debug 2>&1 | grep -q -v "activating..." lxc init testimage autostart --force-local lxd activateifneeded --debug 2>&1 | grep -q -v "activating..." lxc config set autostart boot.autostart true --force-local lxd activateifneeded --debug 2>&1 | grep -q "Daemon has auto-started containers, activating..." 
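The activation checks above all share one shape: run a command, capture its combined stdout/stderr, and grep the output for a marker string. A minimal standalone sketch of that pattern follows; the `check_logs` helper and the sample message are illustrative stand-ins, not part of the test suite:

```shell
#!/bin/sh
# Assert that a command's combined output contains a marker string.
# check_logs and the sample message below are illustrative only.
check_logs() {
  marker=${1}
  shift
  output=$("$@" 2>&1)
  if echo "${output}" | grep -q "${marker}"; then
    echo "marker found"
  else
    echo "marker missing"
  fi
}
check_logs "activating" echo "Daemon has auto-started containers, activating..."
# prints "marker found"
```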
lxc delete autostart --force-local ) # shellcheck disable=SC2031 LXD_DIR=${LXD_DIR} kill_lxd "${LXD_ACTIVATION_DIR}" # Create and start a container lxc launch testimage foo lxc list | grep foo | grep RUNNING lxc stop foo --force # stop is hanging # cycle it a few times lxc start foo mac1=$(lxc exec foo cat /sys/class/net/eth0/address) lxc stop foo --force # stop is hanging lxc start foo mac2=$(lxc exec foo cat /sys/class/net/eth0/address) if [ -n "${mac1}" ] && [ -n "${mac2}" ] && [ "${mac1}" != "${mac2}" ]; then echo "==> MAC addresses didn't match across restarts (${mac1} vs ${mac2})" false fi # check that we can set the environment lxc exec foo pwd | grep /root lxc exec --env BEST_BAND=meshuggah foo env | grep meshuggah lxc exec foo ip link show | grep eth0 # test file transfer echo abc > "${LXD_DIR}/in" lxc file push "${LXD_DIR}/in" foo/root/ lxc exec foo /bin/cat /root/in | grep abc lxc exec foo -- /bin/rm -f root/in lxc file push "${LXD_DIR}/in" foo/root/in1 lxc exec foo /bin/cat /root/in1 | grep abc lxc exec foo -- /bin/rm -f root/in1 # make sure stdin is chowned to our container root uid (Issue #590) [ -t 0 ] && [ -t 1 ] && lxc exec foo -- chown 1000:1000 /proc/self/fd/0 echo foo | lxc exec foo tee /tmp/foo # Detect regressions/hangs in exec sum=$(ps aux | tee "${LXD_DIR}/out" | lxc exec foo md5sum | cut -d' ' -f1) [ "${sum}" = "$(md5sum "${LXD_DIR}/out" | cut -d' ' -f1)" ] rm "${LXD_DIR}/out" # FIXME: make this backend agnostic if [ "${LXD_BACKEND}" = "dir" ]; then content=$(cat "${LXD_DIR}/containers/foo/rootfs/tmp/foo") [ "${content}" = "foo" ] fi lxc launch testimage deleterunning my_curl -X DELETE "https://${LXD_ADDR}/1.0/containers/deleterunning" | grep "container is running" lxc delete deleterunning -f # cleanup lxc delete foo -f # check that an apparmor profile is created for this container, that it is # unloaded on stop, and that it is deleted when the container is deleted lxc launch testimage lxd-apparmor-test aa-status | grep 
"lxd-lxd-apparmor-test_<${LXD_DIR}>" lxc stop lxd-apparmor-test --force ! aa-status | grep -q "lxd-lxd-apparmor-test_<${LXD_DIR}>" lxc delete lxd-apparmor-test [ ! -f "${LXD_DIR}/security/apparmor/profiles/lxd-lxd-apparmor-test" ] # make sure that privileged containers are not world-readable lxc profile create unconfined lxc profile set unconfined security.privileged true lxc init testimage foo2 -p unconfined [ "$(stat -L -c "%a" "${LXD_DIR}/containers/foo2")" = "700" ] lxc delete foo2 lxc profile delete unconfined # Ephemeral lxc launch testimage foo -e OLD_INIT=$(lxc info foo | grep ^Pid) lxc exec foo reboot REBOOTED="false" # shellcheck disable=SC2034 for i in $(seq 10); do NEW_INIT=$(lxc info foo | grep ^Pid || true) if [ -n "${NEW_INIT}" ] && [ "${OLD_INIT}" != "${NEW_INIT}" ]; then REBOOTED="true" break fi sleep 0.5 done [ "${REBOOTED}" = "true" ] # Workaround for LXC bug which causes LXD to double-start containers # on reboot sleep 2 lxc stop foo --force || true ! lxc list | grep -q foo } lxd-2.0.2/test/suites/concurrent.sh000066400000000000000000000011001272140510300172760ustar00rootroot00000000000000#!/bin/sh test_concurrent() { ensure_import_testimage spawn_container() { set -e name=concurrent-${1} lxc launch testimage "${name}" lxc info "${name}" | grep Running echo abc | lxc exec "${name}" -- cat | grep abc lxc stop "${name}" --force lxc delete "${name}" } PIDS="" for id in $(seq $(($(find /sys/bus/cpu/devices/ -type l | wc -l)*8))); do spawn_container "${id}" 2>&1 | tee "${LXD_DIR}/lxc-${id}.out" & PIDS="${PIDS} $!" done for pid in ${PIDS}; do wait "${pid}" done ! 
lxc list | grep -q concurrent } lxd-2.0.2/test/suites/config.sh000066400000000000000000000125571272140510300164020ustar00rootroot00000000000000#!/bin/sh ensure_removed() { bad=0 lxc exec foo -- stat /dev/ttyS0 && bad=1 if [ "${bad}" -eq 1 ]; then echo "device should have been removed; $*" false fi } dounixdevtest() { lxc start foo lxc config device add foo tty unix-char "$@" lxc exec foo -- stat /dev/ttyS0 lxc exec foo reboot lxc exec foo -- stat /dev/ttyS0 lxc restart foo --force lxc exec foo -- stat /dev/ttyS0 lxc config device remove foo tty ensure_removed "was not hot-removed" lxc exec foo reboot ensure_removed "removed device re-appeared after container reboot" lxc restart foo --force ensure_removed "removed device re-appeared after lxc reboot" lxc stop foo --force } testunixdevs() { echo "Testing passing char device /dev/ttyS0" dounixdevtest path=/dev/ttyS0 echo "Testing passing char device 4 64" dounixdevtest path=/dev/ttyS0 major=4 minor=64 } ensure_fs_unmounted() { bad=0 lxc exec foo -- stat /mnt/hello && bad=1 if [ "${bad}" -eq 1 ]; then echo "fs should have been unmounted; $*" false fi } testloopmounts() { loopfile=$(mktemp -p "${TEST_DIR}" loop_XXX) dd if=/dev/zero of="${loopfile}" bs=1M seek=200 count=1 mkfs.ext4 -F "${loopfile}" lpath=$(losetup --show -f "${loopfile}") if [ ! -e "${lpath}" ]; then echo "failed to setup loop" false fi echo "${lpath}" >> "${TEST_DIR}/loops" mkdir -p "${TEST_DIR}/mnt" mount "${lpath}" "${TEST_DIR}/mnt" || { echo "loop mount failed"; return; } touch "${TEST_DIR}/mnt/hello" umount -l "${TEST_DIR}/mnt" lxc start foo lxc config device add foo mnt disk source="${lpath}" path=/mnt lxc exec foo stat /mnt/hello # Note - we need to add a set_running_config_item to lxc # or work around its absence somehow. 
Once that's done, we # can run the following two lines: #lxc exec foo reboot #lxc exec foo stat /mnt/hello lxc restart foo --force lxc exec foo stat /mnt/hello lxc config device remove foo mnt ensure_fs_unmounted "fs should have been hot-unmounted" lxc exec foo reboot ensure_fs_unmounted "removed fs re-appeared after reboot" lxc restart foo --force ensure_fs_unmounted "removed fs re-appeared after restart" lxc stop foo --force losetup -d "${lpath}" sed -i "\|^${lpath}|d" "${TEST_DIR}/loops" } test_config_profiles() { ensure_import_testimage lxc init testimage foo lxc profile list | grep default # let's check that 'lxc config profile' still works while it's deprecated lxc config profile list | grep default # setting an invalid config item should error out when setting it, not get # into the database and never let the user edit the container again. ! lxc config set foo raw.lxc "lxc.notaconfigkey = invalid" lxc profile create stdintest echo "BADCONF" | lxc profile set stdintest user.user_data - lxc profile show stdintest | grep BADCONF lxc profile delete stdintest echo "BADCONF" | lxc config set foo user.user_data - lxc config show foo | grep BADCONF lxc config unset foo user.user_data mkdir -p "${TEST_DIR}/mnt1" lxc config device add foo mnt1 disk source="${TEST_DIR}/mnt1" path=/mnt1 readonly=true lxc profile create onenic lxc profile device add onenic eth0 nic nictype=bridged parent=lxdbr0 lxc profile apply foo onenic lxc profile create unconfined lxc profile set unconfined raw.lxc "lxc.aa_profile=unconfined" lxc profile apply foo onenic,unconfined lxc config device list foo | grep mnt1 lxc config device show foo | grep "/mnt1" lxc config show foo | grep "onenic" -A1 | grep "unconfined" lxc profile list | grep onenic lxc profile device list onenic | grep eth0 lxc profile device show onenic | grep lxdbr0 # test live-adding a nic lxc start foo ! lxc config show foo | grep -q "raw.lxc" lxc config show foo --expanded | grep -q "raw.lxc" ! 
lxc config show foo | grep -v "volatile.eth0" | grep -q "eth0" lxc config show foo --expanded | grep -v "volatile.eth0" | grep -q "eth0" lxc config device add foo eth2 nic nictype=bridged parent=lxdbr0 name=eth10 lxc exec foo -- /sbin/ifconfig -a | grep eth0 lxc exec foo -- /sbin/ifconfig -a | grep eth10 lxc config device list foo | grep eth2 lxc config device remove foo eth2 # test live-adding a disk mkdir "${TEST_DIR}/mnt2" touch "${TEST_DIR}/mnt2/hosts" lxc config device add foo mnt2 disk source="${TEST_DIR}/mnt2" path=/mnt2 readonly=true lxc exec foo -- ls /mnt2/hosts lxc stop foo --force lxc start foo lxc exec foo -- ls /mnt2/hosts lxc config device remove foo mnt2 ! lxc exec foo -- ls /mnt2/hosts lxc stop foo --force lxc start foo ! lxc exec foo -- ls /mnt2/hosts lxc stop foo --force lxc config set foo user.prop value lxc list user.prop=value | grep foo lxc config unset foo user.prop # Test for invalid raw.lxc ! lxc config set foo raw.lxc a ! lxc profile set default raw.lxc a bad=0 lxc list user.prop=value | grep foo && bad=1 if [ "${bad}" -eq 1 ]; then echo "property unset failed" false fi bad=0 lxc config set foo user.prop 2>/dev/null && bad=1 if [ "${bad}" -eq 1 ]; then echo "property set succeeded when it shouldn't have" false fi testunixdevs testloopmounts lxc delete foo lxc init testimage foo lxc profile apply foo onenic,unconfined lxc start foo lxc exec foo -- cat /proc/self/attr/current | grep unconfined lxc exec foo -- ls /sys/class/net | grep eth0 lxc stop foo --force lxc delete foo } lxd-2.0.2/test/suites/database_update.sh000066400000000000000000000017041272140510300202340ustar00rootroot00000000000000#!/bin/sh test_database_update(){ LXD_MIGRATE_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) MIGRATE_DB=${LXD_MIGRATE_DIR}/lxd.db # Create the version 1 schema as the database sqlite3 "${MIGRATE_DB}" > /dev/null < deps/schema1.sql # Start an LXD daemon in the tmp directory. This should start the updates. 
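test_database_update asserts on the schema by grepping a sqlite3 dump and comparing counts against an expected value. The counting idiom can be sketched without a real database; the here-doc schema below is a made-up stand-in for real `sqlite3 "${MIGRATE_DB}" ".dump"` output:

```shell
#!/bin/sh
# Count CREATE TABLE statements in a schema dump and assert on the count.
# The dump text is an illustrative stand-in for sqlite3 ".dump" output.
dump=$(cat <<'EOF'
CREATE TABLE containers (id INTEGER PRIMARY KEY);
CREATE TABLE images (id INTEGER PRIMARY KEY);
CREATE INDEX images_idx ON images (id);
EOF
)
expected_tables=2
tables=$(echo "${dump}" | grep -c "CREATE TABLE")
[ "${tables}" -eq "${expected_tables}" ] || { echo "FAIL: found ${tables}, expected ${expected_tables}"; false; }
echo "OK: ${tables} tables"
# prints "OK: 2 tables"
```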
spawn_lxd "${LXD_MIGRATE_DIR}" # Assert there are enough tables. expected_tables=16 tables=$(sqlite3 "${MIGRATE_DB}" ".dump" | grep -c "CREATE TABLE") [ "${tables}" -eq "${expected_tables}" ] || { echo "FAIL: Wrong number of tables after database migration. Found: ${tables}, expected ${expected_tables}"; false; } # There should be 11 "ON DELETE CASCADE" occurrences expected_cascades=11 cascades=$(sqlite3 "${MIGRATE_DB}" ".dump" | grep -c "ON DELETE CASCADE") [ "${cascades}" -eq "${expected_cascades}" ] || { echo "FAIL: Wrong number of ON DELETE CASCADE foreign keys. Found: ${cascades}, expected: ${expected_cascades}"; false; } } lxd-2.0.2/test/suites/deps.sh000066400000000000000000000001111272140510300160500ustar00rootroot00000000000000#!/bin/sh test_check_deps() { ! ldd "$(which lxc)" | grep -q liblxc } lxd-2.0.2/test/suites/devlxd.sh000066400000000000000000000006271272140510300164170ustar00rootroot00000000000000#!/bin/sh test_devlxd() { ensure_import_testimage cd "${TEST_DIR}" go build -tags netgo -a -installsuffix devlxd ../deps/devlxd-client.go cd - lxc launch testimage devlxd lxc file push "${TEST_DIR}/devlxd-client" devlxd/bin/ lxc exec devlxd chmod +x /bin/devlxd-client lxc config set devlxd user.foo bar lxc exec devlxd devlxd-client user.foo | grep bar lxc stop devlxd --force } lxd-2.0.2/test/suites/exec.sh000066400000000000000000000007041272140510300160510ustar00rootroot00000000000000#!/bin/sh test_concurrent_exec() { ensure_import_testimage name=x1 lxc launch testimage x1 lxc list ${name} | grep RUNNING exec_container() { echo "abc${1}" | lxc exec "${name}" -- cat | grep abc } PIDS="" for i in $(seq 1 50); do exec_container "${i}" > "${LXD_DIR}/exec-${i}.out" 2>&1 & PIDS="${PIDS} $!" 
done for pid in ${PIDS}; do wait "${pid}" done lxc stop "${name}" --force lxc delete "${name}" } lxd-2.0.2/test/suites/fdleak.sh000066400000000000000000000017011272140510300163510ustar00rootroot00000000000000#!/bin/sh test_fdleak() { LXD_FDLEAK_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) chmod +x "${LXD_FDLEAK_DIR}" spawn_lxd "${LXD_FDLEAK_DIR}" pid=$(cat "${LXD_FDLEAK_DIR}/lxd.pid") beforefds=$(/bin/ls "/proc/${pid}/fd" | wc -l) ( set -e # shellcheck disable=SC2034 LXD_DIR=${LXD_FDLEAK_DIR} ensure_import_testimage for i in $(seq 5); do lxc init testimage "leaktest${i}" lxc info "leaktest${i}" lxc start "leaktest${i}" lxc exec "leaktest${i}" -- ps -ef lxc stop "leaktest${i}" --force lxc delete "leaktest${i}" done sleep 5 exit 0 ) afterfds=$(/bin/ls "/proc/${pid}/fd" | wc -l) leakedfds=$((afterfds - beforefds)) bad=0 # shellcheck disable=SC2015 [ ${leakedfds} -gt 5 ] && bad=1 || true if [ ${bad} -eq 1 ]; then echo "${leakedfds} FDS leaked" ls "/proc/${pid}/fd" -al netstat -anp 2>&1 | grep "${pid}/" false fi kill_lxd "${LXD_FDLEAK_DIR}" } lxd-2.0.2/test/suites/filemanip.sh000066400000000000000000000004301272140510300170650ustar00rootroot00000000000000#!/bin/sh test_filemanip() { ensure_import_testimage lxc launch testimage filemanip lxc exec filemanip -- ln -s /tmp/ /tmp/outside lxc file push main.sh filemanip/tmp/outside/ [ ! 
-f /tmp/main.sh ] lxc exec filemanip -- ls /tmp/main.sh lxc delete filemanip -f } lxd-2.0.2/test/suites/fuidshift.sh000066400000000000000000000026701272140510300171160ustar00rootroot00000000000000#!/bin/sh test_common_fuidshift() { # test some bad arguments fail=0 fuidshift > /dev/null 2>&1 && fail=1 fuidshift -t > /dev/null 2>&1 && fail=1 fuidshift /tmp -t b:0 > /dev/null 2>&1 && fail=1 fuidshift /tmp -t x:0:0:0 > /dev/null 2>&1 && fail=1 [ "${fail}" -ne 1 ] } test_nonroot_fuidshift() { test_common_fuidshift LXD_FUIDMAP_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) u=$(id -u) g=$(id -g) u1=$((u+1)) g1=$((g+1)) touch "${LXD_FUIDMAP_DIR}/x1" fuidshift "${LXD_FUIDMAP_DIR}/x1" -t "u:${u}:100000:1" "g:${g}:100000:1" | tee /dev/stderr | grep "to 100000 100000" > /dev/null || fail=1 if [ "${fail}" -eq 1 ]; then echo "==> Failed to shift own uid to container root" false fi fuidshift "${LXD_FUIDMAP_DIR}/x1" -t "u:${u1}:10000:1" "g:${g1}:100000:1" | tee /dev/stderr | grep "to -1 -1" > /dev/null || fail=1 if [ "${fail}" -eq 1 ]; then echo "==> Wrongly shifted invalid uid to container root" false fi # unshift it chown 100000:100000 "${LXD_FUIDMAP_DIR}/x1" fuidshift "${LXD_FUIDMAP_DIR}/x1" -r -t "u:${u}:100000:1" "g:${g}:100000:1" | tee /dev/stderr | grep "to 0 0" > /dev/null || fail=1 if [ "${fail}" -eq 1 ]; then echo "==> Failed to unshift container root back to own uid" false fi } test_root_fuidshift() { test_nonroot_fuidshift # Todo - test ranges } test_fuidshift() { if [ "$(id -u)" -ne 0 ]; then test_nonroot_fuidshift else test_root_fuidshift fi } lxd-2.0.2/test/suites/image.sh000066400000000000000000000016561272140510300162160ustar00rootroot00000000000000#!/bin/sh test_image_expiry() { ensure_import_testimage if ! lxc_remote remote list | grep -q l1; then lxc_remote remote add l1 "${LXD_ADDR}" --accept-certificate --password foo fi if ! 
lxc_remote remote list | grep -q l2; then lxc_remote remote add l2 "${LXD2_ADDR}" --accept-certificate --password foo fi lxc_remote init l1:testimage l2:c1 fp=$(lxc_remote image info testimage | awk -F: '/^Fingerprint/ { print $2 }' | awk '{ print $1 }') [ ! -z "${fp}" ] fpbrief=$(echo "${fp}" | cut -c 1-10) lxc_remote image list l2: | grep -q "${fpbrief}" lxc_remote remote set-default l2 lxc_remote config set images.remote_cache_expiry 0 lxc_remote remote set-default local ! lxc_remote image list l2: | grep -q "${fpbrief}" lxc_remote delete l2:c1 # reset the default expiry lxc_remote remote set-default l2 lxc_remote config set images.remote_cache_expiry 10 lxc_remote remote set-default local } lxd-2.0.2/test/suites/migration.sh000066400000000000000000000045661272140510300171260ustar00rootroot00000000000000#!/bin/sh test_migration() { ensure_import_testimage if ! lxc_remote remote list | grep -q l1; then lxc_remote remote add l1 "${LXD_ADDR}" --accept-certificate --password foo fi if ! lxc_remote remote list | grep -q l2; then lxc_remote remote add l2 "${LXD2_ADDR}" --accept-certificate --password foo fi lxc_remote init testimage nonlive # test moving snapshots lxc_remote snapshot l1:nonlive lxc_remote move l1:nonlive l2: # FIXME: make this backend agnostic if [ "${LXD_BACKEND}" != "lvm" ]; then [ -d "${LXD2_DIR}/containers/nonlive/rootfs" ] fi [ ! 
-d "${LXD_DIR}/containers/nonlive" ] # FIXME: make this backend agnostic if [ "${LXD_BACKEND}" = "dir" ]; then [ -d "${LXD2_DIR}/snapshots/nonlive/snap0/rootfs/bin" ] fi lxc_remote copy l2:nonlive l1:nonlive2 [ -d "${LXD_DIR}/containers/nonlive2" ] # FIXME: make this backend agnostic if [ "${LXD_BACKEND}" != "lvm" ]; then [ -d "${LXD2_DIR}/containers/nonlive/rootfs/bin" ] fi # FIXME: make this backend agnostic if [ "${LXD_BACKEND}" = "dir" ]; then [ -d "${LXD_DIR}/snapshots/nonlive2/snap0/rootfs/bin" ] fi lxc_remote copy l1:nonlive2/snap0 l2:nonlive3 # FIXME: make this backend agnostic if [ "${LXD_BACKEND}" != "lvm" ]; then [ -d "${LXD2_DIR}/containers/nonlive3/rootfs/bin" ] fi lxc_remote copy l2:nonlive l2:nonlive2 # should have the same base image tag [ "$(lxc_remote config get l2:nonlive volatile.base_image)" = "$(lxc_remote config get l2:nonlive2 volatile.base_image)" ] # check that nonlive2 has a new addr in volatile [ "$(lxc_remote config get l2:nonlive volatile.eth0.hwaddr)" != "$(lxc_remote config get l2:nonlive2 volatile.eth0.hwaddr)" ] lxc_remote config unset l2:nonlive volatile.base_image lxc_remote copy l2:nonlive l1:nobase lxc_remote delete l1:nobase lxc_remote start l1:nonlive2 lxc_remote list l1: | grep RUNNING | grep nonlive2 lxc_remote stop l1:nonlive2 --force lxc_remote start l2:nonlive lxc_remote list l2: | grep RUNNING | grep nonlive lxc_remote stop l2:nonlive --force if ! 
which criu >/dev/null 2>&1; then echo "==> SKIP: live migration with CRIU (missing binary)" return fi lxc_remote launch testimage l1:migratee # let the container do some interesting things sleep 1s lxc_remote stop --stateful l1:migratee lxc_remote start l1:migratee lxc_remote stop --force l1:migratee } lxd-2.0.2/test/suites/profiling.sh000066400000000000000000000016611272140510300171210ustar00rootroot00000000000000#!/bin/sh test_cpu_profiling() { LXD3_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) chmod +x "${LXD3_DIR}" spawn_lxd "${LXD3_DIR}" --cpuprofile "${LXD3_DIR}/cpu.out" lxdpid=$(cat "${LXD3_DIR}/lxd.pid") kill -TERM "${lxdpid}" wait "${lxdpid}" || true export PPROF_TMPDIR="${TEST_DIR}/pprof" echo top5 | go tool pprof "$(which lxd)" "${LXD3_DIR}/cpu.out" echo "" kill_lxd "${LXD3_DIR}" } test_mem_profiling() { LXD4_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) chmod +x "${LXD4_DIR}" spawn_lxd "${LXD4_DIR}" --memprofile "${LXD4_DIR}/mem" lxdpid=$(cat "${LXD4_DIR}/lxd.pid") if [ -e "${LXD4_DIR}/mem" ]; then false fi kill -USR1 "${lxdpid}" timeout=50 while [ "${timeout}" != "0" ]; do [ -e "${LXD4_DIR}/mem" ] && break sleep 0.1 timeout=$((timeout-1)) done export PPROF_TMPDIR="${TEST_DIR}/pprof" echo top5 | go tool pprof "$(which lxd)" "${LXD4_DIR}/mem" echo "" kill_lxd "${LXD4_DIR}" } lxd-2.0.2/test/suites/remote.sh000066400000000000000000000112351272140510300164210ustar00rootroot00000000000000#!/bin/sh gen_second_cert() { [ -f "${LXD_CONF}/client2.crt" ] && return mv "${LXD_CONF}/client.crt" "${LXD_CONF}/client.crt.bak" mv "${LXD_CONF}/client.key" "${LXD_CONF}/client.key.bak" lxc_remote list > /dev/null 2>&1 mv "${LXD_CONF}/client.crt" "${LXD_CONF}/client2.crt" mv "${LXD_CONF}/client.key" "${LXD_CONF}/client2.key" mv "${LXD_CONF}/client.crt.bak" "${LXD_CONF}/client.crt" mv "${LXD_CONF}/client.key.bak" "${LXD_CONF}/client.key" } test_remote_url() { for url in "${LXD_ADDR}" "https://${LXD_ADDR}"; do lxc_remote remote add test "${url}" --accept-certificate --password foo lxc_remote 
finger test: lxc_remote config trust list | grep @ | awk '{print $2}' | while read line ; do lxc_remote config trust remove "\"${line}\"" done lxc_remote remote remove test done urls="${LXD_DIR}/unix.socket unix:${LXD_DIR}/unix.socket unix://${LXD_DIR}/unix.socket" if [ -z "${LXD_OFFLINE:-}" ]; then urls="images.linuxcontainers.org https://images.linuxcontainers.org ${urls}" fi for url in ${urls}; do lxc_remote remote add test "${url}" lxc_remote finger test: lxc_remote remote remove test done } test_remote_admin() { lxc_remote remote add badpass "${LXD_ADDR}" --accept-certificate --password bad || true ! lxc_remote list badpass: lxc_remote remote add localhost "${LXD_ADDR}" --accept-certificate --password foo lxc_remote remote list | grep 'localhost' lxc_remote remote set-default localhost [ "$(lxc_remote remote get-default)" = "localhost" ] lxc_remote remote rename localhost foo lxc_remote remote list | grep 'foo' lxc_remote remote list | grep -v 'localhost' [ "$(lxc_remote remote get-default)" = "foo" ] ! lxc_remote remote remove foo lxc_remote remote set-default local lxc_remote remote remove foo # This is a test for #91, we expect this to hang asking for a password if we # tried to re-add our cert. echo y | lxc_remote remote add localhost "${LXD_ADDR}" # we just re-add our cert under a different name to test the cert # manipulation mechanism. 
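gen_second_cert (like gen_third_cert in basic.sh) obtains an extra client certificate by temporarily moving the current one out of the way so a new one gets generated, then renaming and restoring. The file-swap choreography, isolated from lxc itself, can be sketched as follows; the directory and file contents are illustrative stand-ins:

```shell
#!/bin/sh
# Sketch of the "set aside, regenerate, rename, restore" dance used by
# gen_second_cert/gen_third_cert. CONF and file contents are stand-ins.
CONF=$(mktemp -d)
echo "original cert" > "${CONF}/client.crt"
mv "${CONF}/client.crt" "${CONF}/client.crt.bak"  # set the original aside
echo "fresh cert" > "${CONF}/client.crt"          # stand-in for regeneration
mv "${CONF}/client.crt" "${CONF}/client2.crt"     # keep the new cert under a second name
mv "${CONF}/client.crt.bak" "${CONF}/client.crt"  # restore the original
grep -q "original cert" "${CONF}/client.crt" && grep -q "fresh cert" "${CONF}/client2.crt" && echo "swap ok"
# prints "swap ok"
```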
gen_second_cert # Test for #623 lxc_remote remote add test-623 "${LXD_ADDR}" --accept-certificate --password foo # now re-add under a different alias lxc_remote config trust add "${LXD_CONF}/client2.crt" if [ "$(lxc_remote config trust list | wc -l)" -ne 7 ]; then echo "wrong number of certs" false fi # Check that we can add domains with valid certs without confirmation: # avoid default high port behind some proxies: if [ -z "${LXD_OFFLINE:-}" ]; then lxc_remote remote add images1 images.linuxcontainers.org lxc_remote remote add images2 images.linuxcontainers.org:443 fi } test_remote_usage() { lxc_remote remote add lxd2 "${LXD2_ADDR}" --accept-certificate --password foo # we need a public image on localhost lxc_remote image export localhost:testimage "${LXD_DIR}/foo.img" lxc_remote image delete localhost:testimage sum=$(sha256sum "${LXD_DIR}/foo.img" | cut -d' ' -f1) lxc_remote image import "${LXD_DIR}/foo.img" localhost: --public lxc_remote image alias create localhost:testimage "${sum}" lxc_remote image delete "lxd2:${sum}" || true lxc_remote image copy localhost:testimage lxd2: --copy-aliases --public lxc_remote image delete "localhost:${sum}" lxc_remote image copy "lxd2:${sum}" local: --copy-aliases --public lxc_remote image info localhost:testimage lxc_remote image delete "lxd2:${sum}" lxc_remote image copy "localhost:${sum}" lxd2: lxc_remote image delete "lxd2:${sum}" lxc_remote image copy "localhost:$(echo "${sum}" | colrm 3)" lxd2: lxc_remote image delete "lxd2:${sum}" # test a private image lxc_remote image copy "localhost:${sum}" lxd2: lxc_remote image delete "localhost:${sum}" lxc_remote init "lxd2:${sum}" localhost:c1 lxc_remote delete localhost:c1 lxc_remote image alias create localhost:testimage "${sum}" # test remote publish lxc_remote init testimage pub lxc_remote publish pub lxd2: --alias bar --public a=b lxc_remote image show lxd2:bar | grep -q "a: b" lxc_remote image show lxd2:bar | grep -q "public: true" ! 
lxc_remote image show bar lxc_remote delete pub lxc_remote image delete lxd2:bar # Double launch to test if the image downloads only once. lxc_remote init localhost:testimage lxd2:c1 & C1PID=$! lxc_remote init localhost:testimage lxd2:c2 lxc_remote delete lxd2:c2 wait "${C1PID}" lxc_remote delete lxd2:c1 # launch testimage stored on localhost as container c1 on lxd2 lxc_remote launch localhost:testimage lxd2:c1 # make sure it is running lxc_remote list lxd2: | grep c1 | grep RUNNING lxc_remote info lxd2:c1 lxc_remote stop lxd2:c1 --force lxc_remote delete lxd2:c1 } lxd-2.0.2/test/suites/security.sh000066400000000000000000000031051272140510300167720ustar00rootroot00000000000000#!/bin/sh test_security() { ensure_import_testimage ensure_has_localhost_remote "${LXD_ADDR}" # CVE-2016-1581 if [ "${LXD_BACKEND}" = "zfs" ]; then LXD_INIT_DIR=$(mktemp -d -p "${TEST_DIR}" XXX) chmod +x "${LXD_INIT_DIR}" spawn_lxd "${LXD_INIT_DIR}" ZFS_POOL="lxdtest-$(basename "${LXD_DIR}")-init" LXD_DIR=${LXD_INIT_DIR} lxd init --storage-backend zfs --storage-create-loop 1 --storage-pool "${ZFS_POOL}" --auto PERM=$(stat -c %a "${LXD_INIT_DIR}/zfs.img") if [ "${PERM}" != "600" ]; then echo "Bad zfs.img permissions: ${PERM}" zpool destroy "${ZFS_POOL}" false fi zpool destroy "${ZFS_POOL}" kill_lxd "${LXD_INIT_DIR}" fi # CVE-2016-1582 lxc launch testimage test-priv -c security.privileged=true PERM=$(stat -L -c %a "${LXD_DIR}/containers/test-priv") if [ "${PERM}" != "700" ]; then echo "Bad container permissions: ${PERM}" false fi lxc config set test-priv security.privileged false lxc restart test-priv --force lxc config set test-priv security.privileged true lxc restart test-priv --force PERM=$(stat -L -c %a "${LXD_DIR}/containers/test-priv") if [ "${PERM}" != "700" ]; then echo "Bad container permissions: ${PERM}" false fi lxc delete test-priv --force lxc launch testimage test-unpriv lxc config set test-unpriv security.privileged true lxc restart test-unpriv --force PERM=$(stat -L -c %a 
"${LXD_DIR}/containers/test-unpriv")
  if [ "${PERM}" != "700" ]; then
    echo "Bad container permissions: ${PERM}"
    false
  fi

  lxc delete test-unpriv --force
}
lxd-2.0.2/test/suites/serverconfig.sh
#!/bin/sh

test_server_config() {
  LXD_SERVERCONFIG_DIR=$(mktemp -d -p "${TEST_DIR}" XXX)
  spawn_lxd "${LXD_SERVERCONFIG_DIR}"

  lxc config set core.trust_password 123456

  config=$(lxc config show)
  echo "${config}" | grep -q "trust_password"
  echo "${config}" | grep -q -v "123456"

  lxc config unset core.trust_password
  lxc config show | grep -q -v "trust_password"

  # test untrusted server GET
  my_curl -X GET "https://$(cat "${LXD_SERVERCONFIG_DIR}/lxd.addr")/1.0" | grep -v -q environment
}
lxd-2.0.2/test/suites/snapshots.sh
#!/bin/sh

test_snapshots() {
  ensure_import_testimage
  ensure_has_localhost_remote "${LXD_ADDR}"

  lxc init testimage foo

  lxc snapshot foo
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    [ -d "${LXD_DIR}/snapshots/foo/snap0" ]
  fi

  lxc snapshot foo
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    [ -d "${LXD_DIR}/snapshots/foo/snap1" ]
  fi

  lxc snapshot foo tester
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    [ -d "${LXD_DIR}/snapshots/foo/tester" ]
  fi

  lxc copy foo/tester foosnap1
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" != "lvm" ]; then
    [ -d "${LXD_DIR}/containers/foosnap1/rootfs" ]
  fi

  lxc delete foo/snap0
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    [ ! -d "${LXD_DIR}/snapshots/foo/snap0" ]
  fi

  # no CLI for this, so we use the API directly (rename a snapshot)
  wait_for "${LXD_ADDR}" my_curl -X POST "https://${LXD_ADDR}/1.0/containers/foo/snapshots/tester" -d "{\"name\":\"tester2\"}"
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    [ ! -d "${LXD_DIR}/snapshots/foo/tester" ]
  fi

  lxc move foo/tester2 foo/tester-two
  lxc delete foo/tester-two
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    [ ! -d "${LXD_DIR}/snapshots/foo/tester-two" ]
  fi

  lxc snapshot foo namechange
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    [ -d "${LXD_DIR}/snapshots/foo/namechange" ]
  fi
  lxc move foo foople
  [ ! -d "${LXD_DIR}/containers/foo" ]
  [ -d "${LXD_DIR}/containers/foople" ]
  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    [ -d "${LXD_DIR}/snapshots/foople/namechange" ]
    [ -d "${LXD_DIR}/snapshots/foople/namechange" ]
  fi

  lxc delete foople
  lxc delete foosnap1
  [ ! -d "${LXD_DIR}/containers/foople" ]
  [ ! -d "${LXD_DIR}/containers/foosnap1" ]
}

test_snap_restore() {
  ensure_import_testimage
  ensure_has_localhost_remote "${LXD_ADDR}"

  ##########################################################
  # PREPARATION
  ##########################################################

  ## create some state we will check for when snapshot is restored

  ## prepare snap0
  lxc launch testimage bar
  echo snap0 > state
  lxc file push state bar/root/state
  lxc file push state bar/root/file_only_in_snap0

  lxc exec bar -- mkdir /root/dir_only_in_snap0
  lxc exec bar -- ln -s file_only_in_snap0 /root/statelink
  lxc stop bar --force

  lxc snapshot bar snap0

  ## prepare snap1
  lxc start bar
  echo snap1 > state
  lxc file push state bar/root/state
  lxc file push state bar/root/file_only_in_snap1

  lxc exec bar -- rmdir /root/dir_only_in_snap0
  lxc exec bar -- rm /root/file_only_in_snap0
  lxc exec bar -- rm /root/statelink
  lxc exec bar -- ln -s file_only_in_snap1 /root/statelink
  lxc exec bar -- mkdir /root/dir_only_in_snap1
  lxc stop bar --force

  # Delete the state file we created to prevent leaking.
  rm state

  lxc config set bar limits.cpu 1

  lxc snapshot bar snap1

  ##########################################################

  if [ "${LXD_BACKEND}" != "zfs" ]; then
    # The problem here is that you can't `zfs rollback` to a snapshot with a
    # parent, which snap0 has (snap1).
    restore_and_compare_fs snap0

    # Check container config has been restored (limits.cpu is unset)
    cpus=$(lxc config get bar limits.cpu)
    if [ -n "${cpus}" ]; then
      echo "==> config didn't match expected value after restore (${cpus})"
      false
    fi
  fi

  ##########################################################
  # test restore using full snapshot name
  restore_and_compare_fs snap1

  # Check config value in snapshot has been restored
  cpus=$(lxc config get bar limits.cpu)
  if [ "${cpus}" != "1" ]; then
    echo "==> config didn't match expected value after restore (${cpus})"
    false
  fi

  ##########################################################
  # Start container and then restore snapshot to verify the running state after restore.
  lxc start bar

  if [ "${LXD_BACKEND}" != "zfs" ]; then
    # see comment above about snap0
    restore_and_compare_fs snap0

    # check container is running after restore
    lxc list | grep bar | grep RUNNING
  fi

  lxc stop --force bar

  lxc delete bar
}

restore_and_compare_fs() {
  snap=${1}
  echo "==> Restoring ${snap}"

  lxc restore bar "${snap}"

  # FIXME: make this backend agnostic
  if [ "${LXD_BACKEND}" = "dir" ]; then
    # Recursive diff of container FS
    diff -r "${LXD_DIR}/containers/bar/rootfs" "${LXD_DIR}/snapshots/bar/${snap}/rootfs"
  fi
}
lxd-2.0.2/test/suites/static_analysis.sh
#!/bin/sh

safe_pot_hash() {
  sed -e "/Project-Id-Version/,/Content-Transfer-Encoding/d" -e "/^#/d" "po/lxd.pot" | tee /tmp/foo | md5sum | cut -f1 -d" "
}

test_static_analysis() {
  (
    set -e

    cd ../

    # Python3 static analysis
    pep8 test/deps/import-busybox scripts/lxd-setup-lvm-storage
    pyflakes3 test/deps/import-busybox scripts/lxd-setup-lvm-storage

    # Shell static analysis
shellcheck lxd-bridge/lxd-bridge test/main.sh test/suites/* test/backends/* # Go static analysis ## Functions starting by empty line OUT=$(grep -r "^$" -B1 . | grep "func " | grep -v "}$" || true) if [ -n "${OUT}" ]; then echo "${OUT}" false fi ## go vet, if it exists have_go_vet=1 go help vet > /dev/null 2>&1 || have_go_vet=0 if [ "${have_go_vet}" -eq 1 ]; then go vet ./... fi ## vet if which vet >/dev/null 2>&1; then vet --all . fi ## deadcode if which deadcode >/dev/null 2>&1; then for path in . lxc/ lxd/ shared/ shared/i18n shared/termios fuidshift/ lxd-bridge/lxd-bridge-proxy/; do OUT=$(deadcode ${path} 2>&1 | grep -v lxd/migrate.pb.go || true) if [ -n "${OUT}" ]; then echo "${OUT}" >&2 false fi done fi # Skip the tests which require git if ! git status; then return fi # go fmt git add -u :/ go fmt ./... git diff --exit-code # make sure the .pot is updated cp --preserve "po/lxd.pot" "po/lxd.pot.bak" hash1=$(safe_pot_hash) make i18n -s hash2=$(safe_pot_hash) mv "po/lxd.pot.bak" "po/lxd.pot" if [ "${hash1}" != "${hash2}" ]; then echo "==> Please update the .pot file in your commit (make i18n)" && false fi ) } lxd-2.0.2/dist/0000755061062106075000000000000012721405174015656 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/0000755061062106075000000000000012721405203016436 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/0000755061062106075000000000000012721405203020473 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/0000755061062106075000000000000012721405230020742 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/0000755061062106075000000000000012721405224021533 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/trace/0000755061062106075000000000000012721405224022631 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/trace/trace_test.go0000644061062106075000000000327212721405224025321 0ustar00stgraberdomain 
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package trace

import (
	"net/http"
	"reflect"
	"testing"
)

type s struct{}

func (s) String() string { return "lazy string" }

// TestReset checks whether all the fields are zeroed after reset.
func TestReset(t *testing.T) {
	tr := New("foo", "bar")
	tr.LazyLog(s{}, false)
	tr.LazyPrintf("%d", 1)
	tr.SetRecycler(func(_ interface{}) {})
	tr.SetTraceInfo(3, 4)
	tr.SetMaxEvents(100)
	tr.SetError()
	tr.Finish()

	tr.(*trace).reset()

	if !reflect.DeepEqual(tr, new(trace)) {
		t.Errorf("reset didn't clear all fields: %+v", tr)
	}
}

// TestResetLog checks whether all the fields are zeroed after reset.
func TestResetLog(t *testing.T) {
	el := NewEventLog("foo", "bar")
	el.Printf("message")
	el.Errorf("error")
	el.Finish()

	el.(*eventLog).reset()

	if !reflect.DeepEqual(el, new(eventLog)) {
		t.Errorf("reset didn't clear all fields: %+v", el)
	}
}

func TestAuthRequest(t *testing.T) {
	testCases := []struct {
		host string
		want bool
	}{
		{host: "192.168.23.1", want: false},
		{host: "192.168.23.1:8080", want: false},
		{host: "malformed remote addr", want: false},
		{host: "localhost", want: true},
		{host: "localhost:8080", want: true},
		{host: "127.0.0.1", want: true},
		{host: "127.0.0.1:8080", want: true},
		{host: "::1", want: true},
		{host: "[::1]:8080", want: true},
	}
	for _, tt := range testCases {
		req := &http.Request{RemoteAddr: tt.host}
		any, sensitive := AuthRequest(req)
		if any != tt.want || sensitive != tt.want {
			t.Errorf("AuthRequest(%q) = %t, %t; want %t, %t", tt.host, any, sensitive, tt.want, tt.want)
		}
	}
}
lxd-2.0.2/dist/src/golang.org/x/net/trace/histogram.go
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
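The histogram below keeps only two accumulators, sum and sumOfSquares, and derives mean and variance in O(1) from them via the identity Var(x) = E[x^2] - E[x]^2. A minimal standalone sketch of that bookkeeping (the type and method names here are illustrative, not from the package):

```go
package main

import (
	"fmt"
	"math"
)

// runningStats mirrors the sum / sumOfSquares bookkeeping used by the
// histogram: mean and variance come from two accumulators instead of
// storing every observation.
type runningStats struct {
	count        int64
	sum          int64
	sumOfSquares float64
}

func (r *runningStats) add(v int64) {
	r.count++
	r.sum += v
	r.sumOfSquares += float64(v) * float64(v)
}

func (r *runningStats) mean() float64 {
	if r.count == 0 {
		return 0
	}
	return float64(r.sum) / float64(r.count)
}

// variance uses E[x^2] - E[x]^2, the same identity as histogram.variance.
func (r *runningStats) variance() float64 {
	if r.count == 0 {
		return 0
	}
	m := r.mean()
	return r.sumOfSquares/float64(r.count) - m*m
}

func main() {
	var r runningStats
	for _, v := range []int64{2, 4, 4, 4, 5, 5, 7, 9} {
		r.add(v)
	}
	fmt.Println(r.mean())                // 5
	fmt.Println(r.variance())            // 4
	fmt.Println(math.Sqrt(r.variance())) // 2 (standard deviation)
}
```

Note the usual caveat with this identity: for large values with small spread, the subtraction can lose precision, which is acceptable for the coarse RPC statistics collected here.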
package trace // This file implements histogramming for RPC statistics collection. import ( "bytes" "fmt" "html/template" "log" "math" "golang.org/x/net/internal/timeseries" ) const ( bucketCount = 38 ) // histogram keeps counts of values in buckets that are spaced // out in powers of 2: 0-1, 2-3, 4-7... // histogram implements timeseries.Observable type histogram struct { sum int64 // running total of measurements sumOfSquares float64 // square of running total buckets []int64 // bucketed values for histogram value int // holds a single value as an optimization valueCount int64 // number of values recorded for single value } // AddMeasurement records a value measurement observation to the histogram. func (h *histogram) addMeasurement(value int64) { // TODO: assert invariant h.sum += value h.sumOfSquares += float64(value) * float64(value) bucketIndex := getBucket(value) if h.valueCount == 0 || (h.valueCount > 0 && h.value == bucketIndex) { h.value = bucketIndex h.valueCount++ } else { h.allocateBuckets() h.buckets[bucketIndex]++ } } func (h *histogram) allocateBuckets() { if h.buckets == nil { h.buckets = make([]int64, bucketCount) h.buckets[h.value] = h.valueCount h.value = 0 h.valueCount = -1 } } func log2(i int64) int { n := 0 for ; i >= 0x100; i >>= 8 { n += 8 } for ; i > 0; i >>= 1 { n += 1 } return n } func getBucket(i int64) (index int) { index = log2(i) - 1 if index < 0 { index = 0 } if index >= bucketCount { index = bucketCount - 1 } return } // Total returns the number of recorded observations. func (h *histogram) total() (total int64) { if h.valueCount >= 0 { total = h.valueCount } for _, val := range h.buckets { total += int64(val) } return } // Average returns the average value of recorded observations. func (h *histogram) average() float64 { t := h.total() if t == 0 { return 0 } return float64(h.sum) / float64(t) } // Variance returns the variance of recorded observations. 
func (h *histogram) variance() float64 { t := float64(h.total()) if t == 0 { return 0 } s := float64(h.sum) / t return h.sumOfSquares/t - s*s } // StandardDeviation returns the standard deviation of recorded observations. func (h *histogram) standardDeviation() float64 { return math.Sqrt(h.variance()) } // PercentileBoundary estimates the value that the given fraction of recorded // observations are less than. func (h *histogram) percentileBoundary(percentile float64) int64 { total := h.total() // Corner cases (make sure result is strictly less than Total()) if total == 0 { return 0 } else if total == 1 { return int64(h.average()) } percentOfTotal := round(float64(total) * percentile) var runningTotal int64 for i := range h.buckets { value := h.buckets[i] runningTotal += value if runningTotal == percentOfTotal { // We hit an exact bucket boundary. If the next bucket has data, it is a // good estimate of the value. If the bucket is empty, we interpolate the // midpoint between the next bucket's boundary and the next non-zero // bucket. If the remaining buckets are all empty, then we use the // boundary for the next bucket as the estimate. j := uint8(i + 1) min := bucketBoundary(j) if runningTotal < total { for h.buckets[j] == 0 { j++ } } max := bucketBoundary(j) return min + round(float64(max-min)/2) } else if runningTotal > percentOfTotal { // The value is in this bucket. Interpolate the value. delta := runningTotal - percentOfTotal percentBucket := float64(value-delta) / float64(value) bucketMin := bucketBoundary(uint8(i)) nextBucketMin := bucketBoundary(uint8(i + 1)) bucketSize := nextBucketMin - bucketMin return bucketMin + round(percentBucket*float64(bucketSize)) } } return bucketBoundary(bucketCount - 1) } // Median returns the estimated median of the observed values. func (h *histogram) median() int64 { return h.percentileBoundary(0.5) } // Add adds other to h. 
func (h *histogram) Add(other timeseries.Observable) { o := other.(*histogram) if o.valueCount == 0 { // Other histogram is empty } else if h.valueCount >= 0 && o.valueCount > 0 && h.value == o.value { // Both have a single bucketed value, aggregate them h.valueCount += o.valueCount } else { // Two different values necessitate buckets in this histogram h.allocateBuckets() if o.valueCount >= 0 { h.buckets[o.value] += o.valueCount } else { for i := range h.buckets { h.buckets[i] += o.buckets[i] } } } h.sumOfSquares += o.sumOfSquares h.sum += o.sum } // Clear resets the histogram to an empty state, removing all observed values. func (h *histogram) Clear() { h.buckets = nil h.value = 0 h.valueCount = 0 h.sum = 0 h.sumOfSquares = 0 } // CopyFrom copies from other, which must be a *histogram, into h. func (h *histogram) CopyFrom(other timeseries.Observable) { o := other.(*histogram) if o.valueCount == -1 { h.allocateBuckets() copy(h.buckets, o.buckets) } h.sum = o.sum h.sumOfSquares = o.sumOfSquares h.value = o.value h.valueCount = o.valueCount } // Multiply scales the histogram by the specified ratio. func (h *histogram) Multiply(ratio float64) { if h.valueCount == -1 { for i := range h.buckets { h.buckets[i] = int64(float64(h.buckets[i]) * ratio) } } else { h.valueCount = int64(float64(h.valueCount) * ratio) } h.sum = int64(float64(h.sum) * ratio) h.sumOfSquares = h.sumOfSquares * ratio } // New creates a new histogram. func (h *histogram) New() timeseries.Observable { r := new(histogram) r.Clear() return r } func (h *histogram) String() string { return fmt.Sprintf("%d, %f, %d, %d, %v", h.sum, h.sumOfSquares, h.value, h.valueCount, h.buckets) } // round returns the closest int64 to the argument func round(in float64) int64 { return int64(math.Floor(in + 0.5)) } // bucketBoundary returns the first value in the bucket. 
func bucketBoundary(bucket uint8) int64 { if bucket == 0 { return 0 } return 1 << bucket } // bucketData holds data about a specific bucket for use in distTmpl. type bucketData struct { Lower, Upper int64 N int64 Pct, CumulativePct float64 GraphWidth int } // data holds data about a Distribution for use in distTmpl. type data struct { Buckets []*bucketData Count, Median int64 Mean, StandardDeviation float64 } // maxHTMLBarWidth is the maximum width of the HTML bar for visualizing buckets. const maxHTMLBarWidth = 350.0 // newData returns data representing h for use in distTmpl. func (h *histogram) newData() *data { // Force the allocation of buckets to simplify the rendering implementation h.allocateBuckets() // We scale the bars on the right so that the largest bar is // maxHTMLBarWidth pixels in width. maxBucket := int64(0) for _, n := range h.buckets { if n > maxBucket { maxBucket = n } } total := h.total() barsizeMult := maxHTMLBarWidth / float64(maxBucket) var pctMult float64 if total == 0 { pctMult = 1.0 } else { pctMult = 100.0 / float64(total) } buckets := make([]*bucketData, len(h.buckets)) runningTotal := int64(0) for i, n := range h.buckets { if n == 0 { continue } runningTotal += n var upperBound int64 if i < bucketCount-1 { upperBound = bucketBoundary(uint8(i + 1)) } else { upperBound = math.MaxInt64 } buckets[i] = &bucketData{ Lower: bucketBoundary(uint8(i)), Upper: upperBound, N: n, Pct: float64(n) * pctMult, CumulativePct: float64(runningTotal) * pctMult, GraphWidth: int(float64(n) * barsizeMult), } } return &data{ Buckets: buckets, Count: total, Median: h.median(), Mean: h.average(), StandardDeviation: h.standardDeviation(), } } func (h *histogram) html() template.HTML { buf := new(bytes.Buffer) if err := distTmpl.Execute(buf, h.newData()); err != nil { buf.Reset() log.Printf("net/trace: couldn't execute template: %v", err) } return template.HTML(buf.String()) } // Input: data var distTmpl = template.Must(template.New("distTmpl").Parse(`
Count: {{.Count}} Mean: {{printf "%.0f" .Mean}} StdDev: {{printf "%.0f" .StandardDeviation}} Median: {{.Median}}

{{range $b := .Buckets}} {{if $b}} {{end}} {{end}}
[ {{.Lower}}, {{.Upper}}) {{.N}} {{printf "%#.3f" .Pct}}% {{printf "%#.3f" .CumulativePct}}%
`)) lxd-2.0.2/dist/src/golang.org/x/net/trace/events.go0000644061062106075000000003020712721405224024466 0ustar00stgraberdomain admins00000000000000// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package trace import ( "bytes" "fmt" "html/template" "io" "log" "net/http" "runtime" "sort" "strconv" "strings" "sync" "sync/atomic" "text/tabwriter" "time" ) var eventsTmpl = template.Must(template.New("events").Funcs(template.FuncMap{ "elapsed": elapsed, "trimSpace": strings.TrimSpace, }).Parse(eventsHTML)) const maxEventsPerLog = 100 type bucket struct { MaxErrAge time.Duration String string } var buckets = []bucket{ {0, "total"}, {10 * time.Second, "errs<10s"}, {1 * time.Minute, "errs<1m"}, {10 * time.Minute, "errs<10m"}, {1 * time.Hour, "errs<1h"}, {10 * time.Hour, "errs<10h"}, {24000 * time.Hour, "errors"}, } // RenderEvents renders the HTML page typically served at /debug/events. // It does not do any auth checking; see AuthRequest for the default auth check // used by the handler registered on http.DefaultServeMux. // req may be nil. func RenderEvents(w http.ResponseWriter, req *http.Request, sensitive bool) { now := time.Now() data := &struct { Families []string // family names Buckets []bucket Counts [][]int // eventLog count per family/bucket // Set when a bucket has been selected. Family string Bucket int EventLogs eventLogs Expanded bool }{ Buckets: buckets, } data.Families = make([]string, 0, len(families)) famMu.RLock() for name := range families { data.Families = append(data.Families, name) } famMu.RUnlock() sort.Strings(data.Families) // Count the number of eventLogs in each family for each error age. data.Counts = make([][]int, len(data.Families)) for i, name := range data.Families { // TODO(sameer): move this loop under the family lock. 
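The getBucket/bucketBoundary pair in histogram.go places each observation in one of 38 buckets spaced in powers of two (0-1, 2-3, 4-7, ...). A self-contained sketch of the same scheme, reusing the helper logic shown above:

```go
package main

import "fmt"

const bucketCount = 38

// log2 returns floor(log2(i)) + 1 for i >= 1 and 0 for i <= 0,
// matching the helper in histogram.go (bytes first, then bits).
func log2(i int64) int {
	n := 0
	for ; i >= 0x100; i >>= 8 {
		n += 8
	}
	for ; i > 0; i >>= 1 {
		n++
	}
	return n
}

// getBucket maps a value to its bucket index: 0-1 -> 0, 2-3 -> 1, 4-7 -> 2, ...
func getBucket(i int64) int {
	index := log2(i) - 1
	if index < 0 {
		index = 0
	}
	if index >= bucketCount {
		index = bucketCount - 1
	}
	return index
}

// bucketBoundary returns the smallest value stored in a bucket.
func bucketBoundary(bucket uint8) int64 {
	if bucket == 0 {
		return 0
	}
	return 1 << bucket
}

func main() {
	for _, v := range []int64{0, 1, 2, 3, 4, 7, 8, 1000} {
		b := getBucket(v)
		fmt.Printf("value %4d -> bucket %d, lower bound %d\n", v, b, bucketBoundary(uint8(b)))
	}
}
```

With 38 buckets the top bucket starts at 2^37, so microsecond-scale latencies fit comfortably; everything larger is clamped into the last bucket.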
f := getEventFamily(name) data.Counts[i] = make([]int, len(data.Buckets)) for j, b := range data.Buckets { data.Counts[i][j] = f.Count(now, b.MaxErrAge) } } if req != nil { var ok bool data.Family, data.Bucket, ok = parseEventsArgs(req) if !ok { // No-op } else { data.EventLogs = getEventFamily(data.Family).Copy(now, buckets[data.Bucket].MaxErrAge) } if data.EventLogs != nil { defer data.EventLogs.Free() sort.Sort(data.EventLogs) } if exp, err := strconv.ParseBool(req.FormValue("exp")); err == nil { data.Expanded = exp } } famMu.RLock() defer famMu.RUnlock() if err := eventsTmpl.Execute(w, data); err != nil { log.Printf("net/trace: Failed executing template: %v", err) } } func parseEventsArgs(req *http.Request) (fam string, b int, ok bool) { fam, bStr := req.FormValue("fam"), req.FormValue("b") if fam == "" || bStr == "" { return "", 0, false } b, err := strconv.Atoi(bStr) if err != nil || b < 0 || b >= len(buckets) { return "", 0, false } return fam, b, true } // An EventLog provides a log of events associated with a specific object. type EventLog interface { // Printf formats its arguments with fmt.Sprintf and adds the // result to the event log. Printf(format string, a ...interface{}) // Errorf is like Printf, but it marks this event as an error. Errorf(format string, a ...interface{}) // Finish declares that this event log is complete. // The event log should not be used after calling this method. Finish() } // NewEventLog returns a new EventLog with the specified family name // and title. 
func NewEventLog(family, title string) EventLog { el := newEventLog() el.ref() el.Family, el.Title = family, title el.Start = time.Now() el.events = make([]logEntry, 0, maxEventsPerLog) el.stack = make([]uintptr, 32) n := runtime.Callers(2, el.stack) el.stack = el.stack[:n] getEventFamily(family).add(el) return el } func (el *eventLog) Finish() { getEventFamily(el.Family).remove(el) el.unref() // matches ref in New } var ( famMu sync.RWMutex families = make(map[string]*eventFamily) // family name => family ) func getEventFamily(fam string) *eventFamily { famMu.Lock() defer famMu.Unlock() f := families[fam] if f == nil { f = &eventFamily{} families[fam] = f } return f } type eventFamily struct { mu sync.RWMutex eventLogs eventLogs } func (f *eventFamily) add(el *eventLog) { f.mu.Lock() f.eventLogs = append(f.eventLogs, el) f.mu.Unlock() } func (f *eventFamily) remove(el *eventLog) { f.mu.Lock() defer f.mu.Unlock() for i, el0 := range f.eventLogs { if el == el0 { copy(f.eventLogs[i:], f.eventLogs[i+1:]) f.eventLogs = f.eventLogs[:len(f.eventLogs)-1] return } } } func (f *eventFamily) Count(now time.Time, maxErrAge time.Duration) (n int) { f.mu.RLock() defer f.mu.RUnlock() for _, el := range f.eventLogs { if el.hasRecentError(now, maxErrAge) { n++ } } return } func (f *eventFamily) Copy(now time.Time, maxErrAge time.Duration) (els eventLogs) { f.mu.RLock() defer f.mu.RUnlock() els = make(eventLogs, 0, len(f.eventLogs)) for _, el := range f.eventLogs { if el.hasRecentError(now, maxErrAge) { el.ref() els = append(els, el) } } return } type eventLogs []*eventLog // Free calls unref on each element of the list. func (els eventLogs) Free() { for _, el := range els { el.unref() } } // eventLogs may be sorted in reverse chronological order. 
func (els eventLogs) Len() int { return len(els) } func (els eventLogs) Less(i, j int) bool { return els[i].Start.After(els[j].Start) } func (els eventLogs) Swap(i, j int) { els[i], els[j] = els[j], els[i] } // A logEntry is a timestamped log entry in an event log. type logEntry struct { When time.Time Elapsed time.Duration // since previous event in log NewDay bool // whether this event is on a different day to the previous event What string IsErr bool } // WhenString returns a string representation of the elapsed time of the event. // It will include the date if midnight was crossed. func (e logEntry) WhenString() string { if e.NewDay { return e.When.Format("2006/01/02 15:04:05.000000") } return e.When.Format("15:04:05.000000") } // An eventLog represents an active event log. type eventLog struct { // Family is the top-level grouping of event logs to which this belongs. Family string // Title is the title of this event log. Title string // Timing information. Start time.Time // Call stack where this event log was created. stack []uintptr // Append-only sequence of events. // // TODO(sameer): change this to a ring buffer to avoid the array copy // when we hit maxEventsPerLog. mu sync.RWMutex events []logEntry LastErrorTime time.Time discarded int refs int32 // how many buckets this is in } func (el *eventLog) reset() { // Clear all but the mutex. Mutexes may not be copied, even when unlocked. el.Family = "" el.Title = "" el.Start = time.Time{} el.stack = nil el.events = nil el.LastErrorTime = time.Time{} el.discarded = 0 el.refs = 0 } func (el *eventLog) hasRecentError(now time.Time, maxErrAge time.Duration) bool { if maxErrAge == 0 { return true } el.mu.RLock() defer el.mu.RUnlock() return now.Sub(el.LastErrorTime) < maxErrAge } // delta returns the elapsed time since the last event or the log start, // and whether it spans midnight. 
// L >= el.mu func (el *eventLog) delta(t time.Time) (time.Duration, bool) { if len(el.events) == 0 { return t.Sub(el.Start), false } prev := el.events[len(el.events)-1].When return t.Sub(prev), prev.Day() != t.Day() } func (el *eventLog) Printf(format string, a ...interface{}) { el.printf(false, format, a...) } func (el *eventLog) Errorf(format string, a ...interface{}) { el.printf(true, format, a...) } func (el *eventLog) printf(isErr bool, format string, a ...interface{}) { e := logEntry{When: time.Now(), IsErr: isErr, What: fmt.Sprintf(format, a...)} el.mu.Lock() e.Elapsed, e.NewDay = el.delta(e.When) if len(el.events) < maxEventsPerLog { el.events = append(el.events, e) } else { // Discard the oldest event. if el.discarded == 0 { // el.discarded starts at two to count for the event it // is replacing, plus the next one that we are about to // drop. el.discarded = 2 } else { el.discarded++ } // TODO(sameer): if this causes allocations on a critical path, // change eventLog.What to be a fmt.Stringer, as in trace.go. el.events[0].What = fmt.Sprintf("(%d events discarded)", el.discarded) // The timestamp of the discarded meta-event should be // the time of the last event it is representing. 
el.events[0].When = el.events[1].When copy(el.events[1:], el.events[2:]) el.events[maxEventsPerLog-1] = e } if e.IsErr { el.LastErrorTime = e.When } el.mu.Unlock() } func (el *eventLog) ref() { atomic.AddInt32(&el.refs, 1) } func (el *eventLog) unref() { if atomic.AddInt32(&el.refs, -1) == 0 { freeEventLog(el) } } func (el *eventLog) When() string { return el.Start.Format("2006/01/02 15:04:05.000000") } func (el *eventLog) ElapsedTime() string { elapsed := time.Since(el.Start) return fmt.Sprintf("%.6f", elapsed.Seconds()) } func (el *eventLog) Stack() string { buf := new(bytes.Buffer) tw := tabwriter.NewWriter(buf, 1, 8, 1, '\t', 0) printStackRecord(tw, el.stack) tw.Flush() return buf.String() } // printStackRecord prints the function + source line information // for a single stack trace. // Adapted from runtime/pprof/pprof.go. func printStackRecord(w io.Writer, stk []uintptr) { for _, pc := range stk { f := runtime.FuncForPC(pc) if f == nil { continue } file, line := f.FileLine(pc) name := f.Name() // Hide runtime.goexit and any runtime functions at the beginning. if strings.HasPrefix(name, "runtime.") { continue } fmt.Fprintf(w, "# %s\t%s:%d\n", name, file, line) } } func (el *eventLog) Events() []logEntry { el.mu.RLock() defer el.mu.RUnlock() return el.events } // freeEventLogs is a freelist of *eventLog var freeEventLogs = make(chan *eventLog, 1000) // newEventLog returns a event log ready to use. func newEventLog() *eventLog { select { case el := <-freeEventLogs: return el default: return new(eventLog) } } // freeEventLog adds el to freeEventLogs if there's room. // This is non-blocking. func freeEventLog(el *eventLog) { el.reset() select { case freeEventLogs <- el: default: } } const eventsHTML = ` events

/debug/events

{{range $i, $fam := .Families}} {{range $j, $bucket := $.Buckets}} {{$n := index $.Counts $i $j}} {{end}} {{end}}
{{$fam}} {{if $n}}{{end}} [{{$n}} {{$bucket.String}}] {{if $n}}{{end}}
{{if $.EventLogs}}

Family: {{$.Family}}

{{if $.Expanded}}{{end}} [Summary]{{if $.Expanded}}{{end}} {{if not $.Expanded}}{{end}} [Expanded]{{if not $.Expanded}}{{end}} {{range $el := $.EventLogs}} {{if $.Expanded}} {{range $el.Events}} {{end}} {{end}} {{end}}
WhenElapsed
{{$el.When}} {{$el.ElapsedTime}} {{$el.Title}}
{{$el.Stack|trimSpace}}
{{.WhenString}} {{elapsed .Elapsed}} .{{if .IsErr}}E{{else}}.{{end}}. {{.What}}
{{end}} ` lxd-2.0.2/dist/src/golang.org/x/net/trace/trace.go0000644061062106075000000006471212721405224024270 0ustar00stgraberdomain admins00000000000000// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. /* Package trace implements tracing of requests and long-lived objects. It exports HTTP interfaces on /debug/requests and /debug/events. A trace.Trace provides tracing for short-lived objects, usually requests. A request handler might be implemented like this: func fooHandler(w http.ResponseWriter, req *http.Request) { tr := trace.New("mypkg.Foo", req.URL.Path) defer tr.Finish() ... tr.LazyPrintf("some event %q happened", str) ... if err := somethingImportant(); err != nil { tr.LazyPrintf("somethingImportant failed: %v", err) tr.SetError() } } The /debug/requests HTTP endpoint organizes the traces by family, errors, and duration. It also provides histogram of request duration for each family. A trace.EventLog provides tracing for long-lived objects, such as RPC connections. // A Fetcher fetches URL paths for a single domain. type Fetcher struct { domain string events trace.EventLog } func NewFetcher(domain string) *Fetcher { return &Fetcher{ domain, trace.NewEventLog("mypkg.Fetcher", domain), } } func (f *Fetcher) Fetch(path string) (string, error) { resp, err := http.Get("http://" + f.domain + "/" + path) if err != nil { f.events.Errorf("Get(%q) = %v", path, err) return "", err } f.events.Printf("Get(%q) = %s", path, resp.Status) ... } func (f *Fetcher) Close() error { f.events.Finish() return nil } The /debug/events HTTP endpoint organizes the event logs by family and by time since the last error. The expanded view displays recent log entries and the log's call stack. 
*/ package trace // import "golang.org/x/net/trace" import ( "bytes" "fmt" "html/template" "io" "log" "net" "net/http" "runtime" "sort" "strconv" "sync" "sync/atomic" "time" "golang.org/x/net/context" "golang.org/x/net/internal/timeseries" ) // DebugUseAfterFinish controls whether to debug uses of Trace values after finishing. // FOR DEBUGGING ONLY. This will slow down the program. var DebugUseAfterFinish = false // AuthRequest determines whether a specific request is permitted to load the // /debug/requests or /debug/events pages. // // It returns two bools; the first indicates whether the page may be viewed at all, // and the second indicates whether sensitive events will be shown. // // AuthRequest may be replaced by a program to customise its authorisation requirements. // // The default AuthRequest function returns (true, true) if and only if the request // comes from localhost/127.0.0.1/[::1]. var AuthRequest = func(req *http.Request) (any, sensitive bool) { // RemoteAddr is commonly in the form "IP" or "IP:port". // If it is in the form "IP:port", split off the port. host, _, err := net.SplitHostPort(req.RemoteAddr) if err != nil { host = req.RemoteAddr } switch host { case "localhost", "127.0.0.1", "::1": return true, true default: return false, false } } func init() { http.HandleFunc("/debug/requests", func(w http.ResponseWriter, req *http.Request) { any, sensitive := AuthRequest(req) if !any { http.Error(w, "not allowed", http.StatusUnauthorized) return } w.Header().Set("Content-Type", "text/html; charset=utf-8") Render(w, req, sensitive) }) http.HandleFunc("/debug/events", func(w http.ResponseWriter, req *http.Request) { any, sensitive := AuthRequest(req) if !any { http.Error(w, "not allowed", http.StatusUnauthorized) return } w.Header().Set("Content-Type", "text/html; charset=utf-8") RenderEvents(w, req, sensitive) }) } // Render renders the HTML page typically served at /debug/requests. 
// It does not do any auth checking; see AuthRequest for the default auth check // used by the handler registered on http.DefaultServeMux. // req may be nil. func Render(w io.Writer, req *http.Request, sensitive bool) { data := &struct { Families []string ActiveTraceCount map[string]int CompletedTraces map[string]*family // Set when a bucket has been selected. Traces traceList Family string Bucket int Expanded bool Traced bool Active bool ShowSensitive bool // whether to show sensitive events Histogram template.HTML HistogramWindow string // e.g. "last minute", "last hour", "all time" // If non-zero, the set of traces is a partial set, // and this is the total number. Total int }{ CompletedTraces: completedTraces, } data.ShowSensitive = sensitive if req != nil { // Allow show_sensitive=0 to force hiding of sensitive data for testing. // This only goes one way; you can't use show_sensitive=1 to see things. if req.FormValue("show_sensitive") == "0" { data.ShowSensitive = false } if exp, err := strconv.ParseBool(req.FormValue("exp")); err == nil { data.Expanded = exp } if exp, err := strconv.ParseBool(req.FormValue("rtraced")); err == nil { data.Traced = exp } } completedMu.RLock() data.Families = make([]string, 0, len(completedTraces)) for fam := range completedTraces { data.Families = append(data.Families, fam) } completedMu.RUnlock() sort.Strings(data.Families) // We are careful here to minimize the time spent locking activeMu, // since that lock is required every time an RPC starts and finishes. 
data.ActiveTraceCount = make(map[string]int, len(data.Families)) activeMu.RLock() for fam, s := range activeTraces { data.ActiveTraceCount[fam] = s.Len() } activeMu.RUnlock() var ok bool data.Family, data.Bucket, ok = parseArgs(req) switch { case !ok: // No-op case data.Bucket == -1: data.Active = true n := data.ActiveTraceCount[data.Family] data.Traces = getActiveTraces(data.Family) if len(data.Traces) < n { data.Total = n } case data.Bucket < bucketsPerFamily: if b := lookupBucket(data.Family, data.Bucket); b != nil { data.Traces = b.Copy(data.Traced) } default: if f := getFamily(data.Family, false); f != nil { var obs timeseries.Observable f.LatencyMu.RLock() switch o := data.Bucket - bucketsPerFamily; o { case 0: obs = f.Latency.Minute() data.HistogramWindow = "last minute" case 1: obs = f.Latency.Hour() data.HistogramWindow = "last hour" case 2: obs = f.Latency.Total() data.HistogramWindow = "all time" } f.LatencyMu.RUnlock() if obs != nil { data.Histogram = obs.(*histogram).html() } } } if data.Traces != nil { defer data.Traces.Free() sort.Sort(data.Traces) } completedMu.RLock() defer completedMu.RUnlock() if err := pageTmpl.ExecuteTemplate(w, "Page", data); err != nil { log.Printf("net/trace: Failed executing template: %v", err) } } func parseArgs(req *http.Request) (fam string, b int, ok bool) { if req == nil { return "", 0, false } fam, bStr := req.FormValue("fam"), req.FormValue("b") if fam == "" || bStr == "" { return "", 0, false } b, err := strconv.Atoi(bStr) if err != nil || b < -1 { return "", 0, false } return fam, b, true } func lookupBucket(fam string, b int) *traceBucket { f := getFamily(fam, false) if f == nil || b < 0 || b >= len(f.Buckets) { return nil } return f.Buckets[b] } type contextKeyT string var contextKey = contextKeyT("golang.org/x/net/trace.Trace") // NewContext returns a copy of the parent context // and associates it with a Trace. 
func NewContext(ctx context.Context, tr Trace) context.Context { return context.WithValue(ctx, contextKey, tr) } // FromContext returns the Trace bound to the context, if any. func FromContext(ctx context.Context) (tr Trace, ok bool) { tr, ok = ctx.Value(contextKey).(Trace) return } // Trace represents an active request. type Trace interface { // LazyLog adds x to the event log. It will be evaluated each time the // /debug/requests page is rendered. Any memory referenced by x will be // pinned until the trace is finished and later discarded. LazyLog(x fmt.Stringer, sensitive bool) // LazyPrintf evaluates its arguments with fmt.Sprintf each time the // /debug/requests page is rendered. Any memory referenced by a will be // pinned until the trace is finished and later discarded. LazyPrintf(format string, a ...interface{}) // SetError declares that this trace resulted in an error. SetError() // SetRecycler sets a recycler for the trace. // f will be called for each event passed to LazyLog at a time when // it is no longer required, whether while the trace is still active // and the event is discarded, or when a completed trace is discarded. SetRecycler(f func(interface{})) // SetTraceInfo sets the trace info for the trace. // This is currently unused. SetTraceInfo(traceID, spanID uint64) // SetMaxEvents sets the maximum number of events that will be stored // in the trace. This has no effect if any events have already been // added to the trace. SetMaxEvents(m int) // Finish declares that this trace is complete. // The trace should not be used after calling this method. Finish() } type lazySprintf struct { format string a []interface{} } func (l *lazySprintf) String() string { return fmt.Sprintf(l.format, l.a...) } // New returns a new Trace with the specified family and title. 
func New(family, title string) Trace { tr := newTrace() tr.ref() tr.Family, tr.Title = family, title tr.Start = time.Now() tr.events = make([]event, 0, maxEventsPerTrace) activeMu.RLock() s := activeTraces[tr.Family] activeMu.RUnlock() if s == nil { activeMu.Lock() s = activeTraces[tr.Family] // check again if s == nil { s = new(traceSet) activeTraces[tr.Family] = s } activeMu.Unlock() } s.Add(tr) // Trigger allocation of the completed trace structure for this family. // This will cause the family to be present in the request page during // the first trace of this family. We don't care about the return value, // nor is there any need for this to run inline, so we execute it in its // own goroutine, but only if the family isn't allocated yet. completedMu.RLock() if _, ok := completedTraces[tr.Family]; !ok { go allocFamily(tr.Family) } completedMu.RUnlock() return tr } func (tr *trace) Finish() { tr.Elapsed = time.Now().Sub(tr.Start) if DebugUseAfterFinish { buf := make([]byte, 4<<10) // 4 KB should be enough n := runtime.Stack(buf, false) tr.finishStack = buf[:n] } activeMu.RLock() m := activeTraces[tr.Family] activeMu.RUnlock() m.Remove(tr) f := getFamily(tr.Family, true) for _, b := range f.Buckets { if b.Cond.match(tr) { b.Add(tr) } } // Add a sample of elapsed time as microseconds to the family's timeseries h := new(histogram) h.addMeasurement(tr.Elapsed.Nanoseconds() / 1e3) f.LatencyMu.Lock() f.Latency.Add(h) f.LatencyMu.Unlock() tr.unref() // matches ref in New } const ( bucketsPerFamily = 9 tracesPerBucket = 10 maxActiveTraces = 20 // Maximum number of active traces to show. maxEventsPerTrace = 10 numHistogramBuckets = 38 ) var ( // The active traces. activeMu sync.RWMutex activeTraces = make(map[string]*traceSet) // family -> traces // Families of completed traces. 
completedMu sync.RWMutex completedTraces = make(map[string]*family) // family -> traces ) type traceSet struct { mu sync.RWMutex m map[*trace]bool // We could avoid the entire map scan in FirstN by having a slice of all the traces // ordered by start time, and an index into that from the trace struct, with a periodic // repack of the slice after enough traces finish; we could also use a skip list or similar. // However, that would shift some of the expense from /debug/requests time to RPC time, // which is probably the wrong trade-off. } func (ts *traceSet) Len() int { ts.mu.RLock() defer ts.mu.RUnlock() return len(ts.m) } func (ts *traceSet) Add(tr *trace) { ts.mu.Lock() if ts.m == nil { ts.m = make(map[*trace]bool) } ts.m[tr] = true ts.mu.Unlock() } func (ts *traceSet) Remove(tr *trace) { ts.mu.Lock() delete(ts.m, tr) ts.mu.Unlock() } // FirstN returns the first n traces ordered by time. func (ts *traceSet) FirstN(n int) traceList { ts.mu.RLock() defer ts.mu.RUnlock() if n > len(ts.m) { n = len(ts.m) } trl := make(traceList, 0, n) // Fast path for when no selectivity is needed. if n == len(ts.m) { for tr := range ts.m { tr.ref() trl = append(trl, tr) } sort.Sort(trl) return trl } // Pick the oldest n traces. // This is inefficient. See the comment in the traceSet struct. for tr := range ts.m { // Put the first n traces into trl in the order they occur. // When we have n, sort trl, and thereafter maintain its order. if len(trl) < n { tr.ref() trl = append(trl, tr) if len(trl) == n { // This is guaranteed to happen exactly once during this loop. sort.Sort(trl) } continue } if tr.Start.After(trl[n-1].Start) { continue } // Find where to insert this one. 
tr.ref() i := sort.Search(n, func(i int) bool { return trl[i].Start.After(tr.Start) }) trl[n-1].unref() copy(trl[i+1:], trl[i:]) trl[i] = tr } return trl } func getActiveTraces(fam string) traceList { activeMu.RLock() s := activeTraces[fam] activeMu.RUnlock() if s == nil { return nil } return s.FirstN(maxActiveTraces) } func getFamily(fam string, allocNew bool) *family { completedMu.RLock() f := completedTraces[fam] completedMu.RUnlock() if f == nil && allocNew { f = allocFamily(fam) } return f } func allocFamily(fam string) *family { completedMu.Lock() defer completedMu.Unlock() f := completedTraces[fam] if f == nil { f = newFamily() completedTraces[fam] = f } return f } // family represents a set of trace buckets and associated latency information. type family struct { // traces may occur in multiple buckets. Buckets [bucketsPerFamily]*traceBucket // latency time series LatencyMu sync.RWMutex Latency *timeseries.MinuteHourSeries } func newFamily() *family { return &family{ Buckets: [bucketsPerFamily]*traceBucket{ {Cond: minCond(0)}, {Cond: minCond(50 * time.Millisecond)}, {Cond: minCond(100 * time.Millisecond)}, {Cond: minCond(200 * time.Millisecond)}, {Cond: minCond(500 * time.Millisecond)}, {Cond: minCond(1 * time.Second)}, {Cond: minCond(10 * time.Second)}, {Cond: minCond(100 * time.Second)}, {Cond: errorCond{}}, }, Latency: timeseries.NewMinuteHourSeries(func() timeseries.Observable { return new(histogram) }), } } // traceBucket represents a size-capped bucket of historic traces, // along with a condition for a trace to belong to the bucket. type traceBucket struct { Cond cond // Ring buffer implementation of a fixed-size FIFO queue. mu sync.RWMutex buf [tracesPerBucket]*trace start int // < tracesPerBucket length int // <= tracesPerBucket } func (b *traceBucket) Add(tr *trace) { b.mu.Lock() defer b.mu.Unlock() i := b.start + b.length if i >= tracesPerBucket { i -= tracesPerBucket } if b.length == tracesPerBucket { // "Remove" an element from the bucket. 
b.buf[i].unref() b.start++ if b.start == tracesPerBucket { b.start = 0 } } b.buf[i] = tr if b.length < tracesPerBucket { b.length++ } tr.ref() } // Copy returns a copy of the traces in the bucket. // If tracedOnly is true, only the traces with trace information will be returned. // The logs will be ref'd before returning; the caller should call // the Free method when it is done with them. // TODO(dsymonds): keep track of traced requests in separate buckets. func (b *traceBucket) Copy(tracedOnly bool) traceList { b.mu.RLock() defer b.mu.RUnlock() trl := make(traceList, 0, b.length) for i, x := 0, b.start; i < b.length; i++ { tr := b.buf[x] if !tracedOnly || tr.spanID != 0 { tr.ref() trl = append(trl, tr) } x++ if x == b.length { x = 0 } } return trl } func (b *traceBucket) Empty() bool { b.mu.RLock() defer b.mu.RUnlock() return b.length == 0 } // cond represents a condition on a trace. type cond interface { match(t *trace) bool String() string } type minCond time.Duration func (m minCond) match(t *trace) bool { return t.Elapsed >= time.Duration(m) } func (m minCond) String() string { return fmt.Sprintf("โ‰ฅ%gs", time.Duration(m).Seconds()) } type errorCond struct{} func (e errorCond) match(t *trace) bool { return t.IsError } func (e errorCond) String() string { return "errors" } type traceList []*trace // Free calls unref on each element of the list. func (trl traceList) Free() { for _, t := range trl { t.unref() } } // traceList may be sorted in reverse chronological order. func (trl traceList) Len() int { return len(trl) } func (trl traceList) Less(i, j int) bool { return trl[i].Start.After(trl[j].Start) } func (trl traceList) Swap(i, j int) { trl[i], trl[j] = trl[j], trl[i] } // An event is a timestamped log entry in a trace. 
type event struct { When time.Time Elapsed time.Duration // since previous event in trace NewDay bool // whether this event is on a different day to the previous event Recyclable bool // whether this event was passed via LazyLog What interface{} // string or fmt.Stringer Sensitive bool // whether this event contains sensitive information } // WhenString returns a string representation of the elapsed time of the event. // It will include the date if midnight was crossed. func (e event) WhenString() string { if e.NewDay { return e.When.Format("2006/01/02 15:04:05.000000") } return e.When.Format("15:04:05.000000") } // discarded represents a number of discarded events. // It is stored as *discarded to make it easier to update in-place. type discarded int func (d *discarded) String() string { return fmt.Sprintf("(%d events discarded)", int(*d)) } // trace represents an active or complete request, // either sent or received by this program. type trace struct { // Family is the top-level grouping of traces to which this belongs. Family string // Title is the title of this trace. Title string // Timing information. Start time.Time Elapsed time.Duration // zero while active // Trace information if non-zero. traceID uint64 spanID uint64 // Whether this trace resulted in an error. IsError bool // Append-only sequence of events (modulo discards). mu sync.RWMutex events []event refs int32 // how many buckets this is in recycler func(interface{}) disc discarded // scratch space to avoid allocation finishStack []byte // where finish was called, if DebugUseAfterFinish is set } func (tr *trace) reset() { // Clear all but the mutex. Mutexes may not be copied, even when unlocked. tr.Family = "" tr.Title = "" tr.Start = time.Time{} tr.Elapsed = 0 tr.traceID = 0 tr.spanID = 0 tr.IsError = false tr.events = nil tr.refs = 0 tr.recycler = nil tr.disc = 0 tr.finishStack = nil } // delta returns the elapsed time since the last event or the trace start, // and whether it spans midnight. 
// L >= tr.mu func (tr *trace) delta(t time.Time) (time.Duration, bool) { if len(tr.events) == 0 { return t.Sub(tr.Start), false } prev := tr.events[len(tr.events)-1].When return t.Sub(prev), prev.Day() != t.Day() } func (tr *trace) addEvent(x interface{}, recyclable, sensitive bool) { if DebugUseAfterFinish && tr.finishStack != nil { buf := make([]byte, 4<<10) // 4 KB should be enough n := runtime.Stack(buf, false) log.Printf("net/trace: trace used after finish:\nFinished at:\n%s\nUsed at:\n%s", tr.finishStack, buf[:n]) } /* NOTE TO DEBUGGERS If you are here because your program panicked in this code, it is almost definitely the fault of code using this package, and very unlikely to be the fault of this code. The most likely scenario is that some code elsewhere is using a requestz.Trace after its Finish method is called. You can temporarily set the DebugUseAfterFinish var to help discover where that is; do not leave that var set, since it makes this package much less efficient. */ e := event{When: time.Now(), What: x, Recyclable: recyclable, Sensitive: sensitive} tr.mu.Lock() e.Elapsed, e.NewDay = tr.delta(e.When) if len(tr.events) < cap(tr.events) { tr.events = append(tr.events, e) } else { // Discard the middle events. di := int((cap(tr.events) - 1) / 2) if d, ok := tr.events[di].What.(*discarded); ok { (*d)++ } else { // disc starts at two to count for the event it is replacing, // plus the next one that we are about to drop. tr.disc = 2 if tr.recycler != nil && tr.events[di].Recyclable { go tr.recycler(tr.events[di].What) } tr.events[di].What = &tr.disc } // The timestamp of the discarded meta-event should be // the time of the last event it is representing. 
tr.events[di].When = tr.events[di+1].When if tr.recycler != nil && tr.events[di+1].Recyclable { go tr.recycler(tr.events[di+1].What) } copy(tr.events[di+1:], tr.events[di+2:]) tr.events[cap(tr.events)-1] = e } tr.mu.Unlock() } func (tr *trace) LazyLog(x fmt.Stringer, sensitive bool) { tr.addEvent(x, true, sensitive) } func (tr *trace) LazyPrintf(format string, a ...interface{}) { tr.addEvent(&lazySprintf{format, a}, false, false) } func (tr *trace) SetError() { tr.IsError = true } func (tr *trace) SetRecycler(f func(interface{})) { tr.recycler = f } func (tr *trace) SetTraceInfo(traceID, spanID uint64) { tr.traceID, tr.spanID = traceID, spanID } func (tr *trace) SetMaxEvents(m int) { // Always keep at least three events: first, discarded count, last. if len(tr.events) == 0 && m > 3 { tr.events = make([]event, 0, m) } } func (tr *trace) ref() { atomic.AddInt32(&tr.refs, 1) } func (tr *trace) unref() { if atomic.AddInt32(&tr.refs, -1) == 0 { if tr.recycler != nil { // freeTrace clears tr, so we hold tr.recycler and tr.events here. go func(f func(interface{}), es []event) { for _, e := range es { if e.Recyclable { f(e.What) } } }(tr.recycler, tr.events) } freeTrace(tr) } } func (tr *trace) When() string { return tr.Start.Format("2006/01/02 15:04:05.000000") } func (tr *trace) ElapsedTime() string { t := tr.Elapsed if t == 0 { // Active trace. t = time.Since(tr.Start) } return fmt.Sprintf("%.6f", t.Seconds()) } func (tr *trace) Events() []event { tr.mu.RLock() defer tr.mu.RUnlock() return tr.events } var traceFreeList = make(chan *trace, 1000) // TODO(dsymonds): Use sync.Pool? // newTrace returns a trace ready to use. func newTrace() *trace { select { case tr := <-traceFreeList: return tr default: return new(trace) } } // freeTrace adds tr to traceFreeList if there's room. // This is non-blocking. 
func freeTrace(tr *trace) { if DebugUseAfterFinish { return // never reuse } tr.reset() select { case traceFreeList <- tr: default: } } func elapsed(d time.Duration) string { b := []byte(fmt.Sprintf("%.6f", d.Seconds())) // For subsecond durations, blank all zeros before decimal point, // and all zeros between the decimal point and the first non-zero digit. if d < time.Second { dot := bytes.IndexByte(b, '.') for i := 0; i < dot; i++ { b[i] = ' ' } for i := dot + 1; i < len(b); i++ { if b[i] == '0' { b[i] = ' ' } else { break } } } return string(b) } var pageTmpl = template.Must(template.New("Page").Funcs(template.FuncMap{ "elapsed": elapsed, "add": func(a, b int) int { return a + b }, }).Parse(pageHTML)) const pageHTML = ` {{template "Prolog" .}} {{template "StatusTable" .}} {{template "Epilog" .}} {{define "Prolog"}} /debug/requests

/debug/requests

{{end}} {{/* end of Prolog */}} {{define "StatusTable"}} {{range $fam := .Families}} {{$n := index $.ActiveTraceCount $fam}} {{$f := index $.CompletedTraces $fam}} {{range $i, $b := $f.Buckets}} {{$empty := $b.Empty}} {{end}} {{$nb := len $f.Buckets}} {{end}}
{{$fam}} {{if $n}}{{end}} [{{$n}} active] {{if $n}}{{end}} {{if not $empty}}{{end}} [{{.Cond}}] {{if not $empty}}{{end}} [minute] [hour] [total]
{{end}} {{/* end of StatusTable */}} {{define "Epilog"}} {{if $.Traces}}

Family: {{$.Family}}

{{if or $.Expanded $.Traced}} [Normal/Summary] {{else}} [Normal/Summary] {{end}} {{if or (not $.Expanded) $.Traced}} [Normal/Expanded] {{else}} [Normal/Expanded] {{end}} {{if not $.Active}} {{if or $.Expanded (not $.Traced)}} [Traced/Summary] {{else}} [Traced/Summary] {{end}} {{if or (not $.Expanded) (not $.Traced)}} [Traced/Expanded] {{else}} [Traced/Expanded] {{end}} {{end}} {{if $.Total}}

Showing {{len $.Traces}} of {{$.Total}} traces.

{{end}} {{range $tr := $.Traces}} {{/* TODO: include traceID/spanID */}} {{if $.Expanded}} {{range $tr.Events}} {{end}} {{end}} {{end}}
{{if $.Active}}Active{{else}}Completed{{end}} Requests
When Elapsed (s)
{{$tr.When}} {{$tr.ElapsedTime}} {{$tr.Title}}
{{.WhenString}} {{elapsed .Elapsed}} {{if or $.ShowSensitive (not .Sensitive)}}... {{.What}}{{else}}[redacted]{{end}}
{{end}} {{/* if $.Traces */}} {{if $.Histogram}}

Latency (µs) of {{$.Family}} over {{$.HistogramWindow}}

{{$.Histogram}} {{end}} {{/* if $.Histogram */}} {{end}} {{/* end of Epilog */}} ` lxd-2.0.2/dist/src/golang.org/x/net/trace/histogram_test.go0000644061062106075000000001655712721405224026232 0ustar00stgraberdomain admins00000000000000// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package trace import ( "math" "testing" ) type sumTest struct { value int64 sum int64 sumOfSquares float64 total int64 } var sumTests = []sumTest{ {100, 100, 10000, 1}, {50, 150, 12500, 2}, {50, 200, 15000, 3}, {50, 250, 17500, 4}, } type bucketingTest struct { in int64 log int bucket int } var bucketingTests = []bucketingTest{ {0, 0, 0}, {1, 1, 0}, {2, 2, 1}, {3, 2, 1}, {4, 3, 2}, {1000, 10, 9}, {1023, 10, 9}, {1024, 11, 10}, {1000000, 20, 19}, } type multiplyTest struct { in int64 ratio float64 expectedSum int64 expectedTotal int64 expectedSumOfSquares float64 } var multiplyTests = []multiplyTest{ {15, 2.5, 37, 2, 562.5}, {128, 4.6, 758, 13, 77953.9}, } type percentileTest struct { fraction float64 expected int64 } var percentileTests = []percentileTest{ {0.25, 48}, {0.5, 96}, {0.6, 109}, {0.75, 128}, {0.90, 205}, {0.95, 230}, {0.99, 256}, } func TestSum(t *testing.T) { var h histogram for _, test := range sumTests { h.addMeasurement(test.value) sum := h.sum if sum != test.sum { t.Errorf("h.Sum = %v WANT: %v", sum, test.sum) } sumOfSquares := h.sumOfSquares if sumOfSquares != test.sumOfSquares { t.Errorf("h.SumOfSquares = %v WANT: %v", sumOfSquares, test.sumOfSquares) } total := h.total() if total != test.total { t.Errorf("h.Total = %v WANT: %v", total, test.total) } } } func TestMultiply(t *testing.T) { var h histogram for i, test := range multiplyTests { h.addMeasurement(test.in) h.Multiply(test.ratio) if h.sum != test.expectedSum { t.Errorf("#%v: h.sum = %v WANT: %v", i, h.sum, test.expectedSum) } if h.total() != test.expectedTotal { t.Errorf("#%v: h.total = %v WANT: %v", 
i, h.total(), test.expectedTotal) } if h.sumOfSquares != test.expectedSumOfSquares { t.Errorf("#%v: h.SumOfSquares = %v WANT: %v", i, test.expectedSumOfSquares, h.sumOfSquares) } } } func TestBucketingFunctions(t *testing.T) { for _, test := range bucketingTests { log := log2(test.in) if log != test.log { t.Errorf("log2 = %v WANT: %v", log, test.log) } bucket := getBucket(test.in) if bucket != test.bucket { t.Errorf("getBucket = %v WANT: %v", bucket, test.bucket) } } } func TestAverage(t *testing.T) { a := new(histogram) average := a.average() if average != 0 { t.Errorf("Average of empty histogram was %v WANT: 0", average) } a.addMeasurement(1) a.addMeasurement(1) a.addMeasurement(3) const expected = float64(5) / float64(3) average = a.average() if !isApproximate(average, expected) { t.Errorf("Average = %g WANT: %v", average, expected) } } func TestStandardDeviation(t *testing.T) { a := new(histogram) add(a, 10, 1<<4) add(a, 10, 1<<5) add(a, 10, 1<<6) stdDev := a.standardDeviation() const expected = 19.95 if !isApproximate(stdDev, expected) { t.Errorf("StandardDeviation = %v WANT: %v", stdDev, expected) } // No values a = new(histogram) stdDev = a.standardDeviation() if !isApproximate(stdDev, 0) { t.Errorf("StandardDeviation = %v WANT: 0", stdDev) } add(a, 1, 1<<4) if !isApproximate(stdDev, 0) { t.Errorf("StandardDeviation = %v WANT: 0", stdDev) } add(a, 10, 1<<4) if !isApproximate(stdDev, 0) { t.Errorf("StandardDeviation = %v WANT: 0", stdDev) } } func TestPercentileBoundary(t *testing.T) { a := new(histogram) add(a, 5, 1<<4) add(a, 10, 1<<6) add(a, 5, 1<<7) for _, test := range percentileTests { percentile := a.percentileBoundary(test.fraction) if percentile != test.expected { t.Errorf("h.PercentileBoundary (fraction=%v) = %v WANT: %v", test.fraction, percentile, test.expected) } } } func TestCopyFrom(t *testing.T) { a := histogram{5, 25, []int64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 
33, 34, 35, 36, 37, 38}, 4, -1} b := histogram{6, 36, []int64{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39}, 5, -1} a.CopyFrom(&b) if a.String() != b.String() { t.Errorf("a.String = %s WANT: %s", a.String(), b.String()) } } func TestClear(t *testing.T) { a := histogram{5, 25, []int64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38}, 4, -1} a.Clear() expected := "0, 0.000000, 0, 0, []" if a.String() != expected { t.Errorf("a.String = %s WANT %s", a.String(), expected) } } func TestNew(t *testing.T) { a := histogram{5, 25, []int64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38}, 4, -1} b := a.New() expected := "0, 0.000000, 0, 0, []" if b.(*histogram).String() != expected { t.Errorf("b.(*histogram).String = %s WANT: %s", b.(*histogram).String(), expected) } } func TestAdd(t *testing.T) { // The tests here depend on the associativity of addMeasurement and Add. 
// Add empty observation a := histogram{5, 25, []int64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38}, 4, -1} b := a.New() expected := a.String() a.Add(b) if a.String() != expected { t.Errorf("a.String = %s WANT: %s", a.String(), expected) } // Add same bucketed value, no new buckets c := new(histogram) d := new(histogram) e := new(histogram) c.addMeasurement(12) d.addMeasurement(11) e.addMeasurement(12) e.addMeasurement(11) c.Add(d) if c.String() != e.String() { t.Errorf("c.String = %s WANT: %s", c.String(), e.String()) } // Add bucketed values f := new(histogram) g := new(histogram) h := new(histogram) f.addMeasurement(4) f.addMeasurement(12) f.addMeasurement(100) g.addMeasurement(18) g.addMeasurement(36) g.addMeasurement(255) h.addMeasurement(4) h.addMeasurement(12) h.addMeasurement(100) h.addMeasurement(18) h.addMeasurement(36) h.addMeasurement(255) f.Add(g) if f.String() != h.String() { t.Errorf("f.String = %q WANT: %q", f.String(), h.String()) } // add buckets to no buckets i := new(histogram) j := new(histogram) k := new(histogram) j.addMeasurement(18) j.addMeasurement(36) j.addMeasurement(255) k.addMeasurement(18) k.addMeasurement(36) k.addMeasurement(255) i.Add(j) if i.String() != k.String() { t.Errorf("i.String = %q WANT: %q", i.String(), k.String()) } // add buckets to single value (no overlap) l := new(histogram) m := new(histogram) n := new(histogram) l.addMeasurement(0) m.addMeasurement(18) m.addMeasurement(36) m.addMeasurement(255) n.addMeasurement(0) n.addMeasurement(18) n.addMeasurement(36) n.addMeasurement(255) l.Add(m) if l.String() != n.String() { t.Errorf("l.String = %q WANT: %q", l.String(), n.String()) } // mixed order o := new(histogram) p := new(histogram) o.addMeasurement(0) o.addMeasurement(2) o.addMeasurement(0) p.addMeasurement(0) p.addMeasurement(0) p.addMeasurement(2) if o.String() != p.String() { t.Errorf("o.String = %q WANT: %q", 
o.String(), p.String()) } } func add(h *histogram, times int, val int64) { for i := 0; i < times; i++ { h.addMeasurement(val) } } func isApproximate(x, y float64) bool { return math.Abs(x-y) < 1e-2 } lxd-2.0.2/dist/src/golang.org/x/net/proxy/0000755061062106075000000000000012721405224022714 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/proxy/proxy_test.go0000644061062106075000000000647012721405224025472 0ustar00stgraberdomain admins00000000000000// Copyright 2011 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package proxy import ( "io" "net" "net/url" "strconv" "sync" "testing" ) func TestFromURL(t *testing.T) { endSystem, err := net.Listen("tcp", "127.0.0.1:0") if err != nil { t.Fatalf("net.Listen failed: %v", err) } defer endSystem.Close() gateway, err := net.Listen("tcp", "127.0.0.1:0") if err != nil { t.Fatalf("net.Listen failed: %v", err) } defer gateway.Close() var wg sync.WaitGroup wg.Add(1) go socks5Gateway(t, gateway, endSystem, socks5Domain, &wg) url, err := url.Parse("socks5://user:password@" + gateway.Addr().String()) if err != nil { t.Fatalf("url.Parse failed: %v", err) } proxy, err := FromURL(url, Direct) if err != nil { t.Fatalf("FromURL failed: %v", err) } _, port, err := net.SplitHostPort(endSystem.Addr().String()) if err != nil { t.Fatalf("net.SplitHostPort failed: %v", err) } if c, err := proxy.Dial("tcp", "localhost:"+port); err != nil { t.Fatalf("FromURL.Dial failed: %v", err) } else { c.Close() } wg.Wait() } func TestSOCKS5(t *testing.T) { endSystem, err := net.Listen("tcp", "127.0.0.1:0") if err != nil { t.Fatalf("net.Listen failed: %v", err) } defer endSystem.Close() gateway, err := net.Listen("tcp", "127.0.0.1:0") if err != nil { t.Fatalf("net.Listen failed: %v", err) } defer gateway.Close() var wg sync.WaitGroup wg.Add(1) go socks5Gateway(t, gateway, endSystem, socks5IP4, &wg) proxy, err := SOCKS5("tcp", 
gateway.Addr().String(), nil, Direct) if err != nil { t.Fatalf("SOCKS5 failed: %v", err) } if c, err := proxy.Dial("tcp", endSystem.Addr().String()); err != nil { t.Fatalf("SOCKS5.Dial failed: %v", err) } else { c.Close() } wg.Wait() } func socks5Gateway(t *testing.T, gateway, endSystem net.Listener, typ byte, wg *sync.WaitGroup) { defer wg.Done() c, err := gateway.Accept() if err != nil { t.Errorf("net.Listener.Accept failed: %v", err) return } defer c.Close() b := make([]byte, 32) var n int if typ == socks5Domain { n = 4 } else { n = 3 } if _, err := io.ReadFull(c, b[:n]); err != nil { t.Errorf("io.ReadFull failed: %v", err) return } if _, err := c.Write([]byte{socks5Version, socks5AuthNone}); err != nil { t.Errorf("net.Conn.Write failed: %v", err) return } if typ == socks5Domain { n = 16 } else { n = 10 } if _, err := io.ReadFull(c, b[:n]); err != nil { t.Errorf("io.ReadFull failed: %v", err) return } if b[0] != socks5Version || b[1] != socks5Connect || b[2] != 0x00 || b[3] != typ { t.Errorf("got an unexpected packet: %#02x %#02x %#02x %#02x", b[0], b[1], b[2], b[3]) return } if typ == socks5Domain { copy(b[:5], []byte{socks5Version, 0x00, 0x00, socks5Domain, 9}) b = append(b, []byte("localhost")...) } else { copy(b[:4], []byte{socks5Version, 0x00, 0x00, socks5IP4}) } host, port, err := net.SplitHostPort(endSystem.Addr().String()) if err != nil { t.Errorf("net.SplitHostPort failed: %v", err) return } b = append(b, []byte(net.ParseIP(host).To4())...) p, err := strconv.Atoi(port) if err != nil { t.Errorf("strconv.Atoi failed: %v", err) return } b = append(b, []byte{byte(p >> 8), byte(p)}...) if _, err := c.Write(b); err != nil { t.Errorf("net.Conn.Write failed: %v", err) return } } lxd-2.0.2/dist/src/golang.org/x/net/proxy/per_host.go0000644061062106075000000000717412721405224025077 0ustar00stgraberdomain admins00000000000000// Copyright 2011 The Go Authors. All rights reserved. 
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proxy

import (
	"net"
	"strings"
)

// A PerHost directs connections to a default Dialer unless the hostname
// requested matches one of a number of exceptions.
type PerHost struct {
	def, bypass Dialer

	bypassNetworks []*net.IPNet
	bypassIPs      []net.IP
	bypassZones    []string
	bypassHosts    []string
}

// NewPerHost returns a PerHost Dialer that directs connections to either
// defaultDialer or bypass, depending on whether the connection matches one of
// the configured rules.
func NewPerHost(defaultDialer, bypass Dialer) *PerHost {
	return &PerHost{
		def:    defaultDialer,
		bypass: bypass,
	}
}

// Dial connects to the address addr on the given network through either
// defaultDialer or bypass.
func (p *PerHost) Dial(network, addr string) (c net.Conn, err error) {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return nil, err
	}

	return p.dialerForRequest(host).Dial(network, addr)
}

func (p *PerHost) dialerForRequest(host string) Dialer {
	if ip := net.ParseIP(host); ip != nil {
		for _, net := range p.bypassNetworks {
			if net.Contains(ip) {
				return p.bypass
			}
		}
		for _, bypassIP := range p.bypassIPs {
			if bypassIP.Equal(ip) {
				return p.bypass
			}
		}
		return p.def
	}

	for _, zone := range p.bypassZones {
		if strings.HasSuffix(host, zone) {
			return p.bypass
		}
		if host == zone[1:] {
			// For a zone "example.com", we match "example.com"
			// too.
			return p.bypass
		}
	}
	for _, bypassHost := range p.bypassHosts {
		if bypassHost == host {
			return p.bypass
		}
	}
	return p.def
}

// AddFromString parses a string that contains comma-separated values
// specifying hosts that should use the bypass proxy. Each value is either an
// IP address, a CIDR range, a zone (*.example.com) or a hostname
// (localhost). A best effort is made to parse the string and errors are
// ignored.
func (p *PerHost) AddFromString(s string) {
	hosts := strings.Split(s, ",")
	for _, host := range hosts {
		host = strings.TrimSpace(host)
		if len(host) == 0 {
			continue
		}
		if strings.Contains(host, "/") {
			// We assume that it's a CIDR address like 127.0.0.0/8
			if _, net, err := net.ParseCIDR(host); err == nil {
				p.AddNetwork(net)
			}
			continue
		}
		if ip := net.ParseIP(host); ip != nil {
			p.AddIP(ip)
			continue
		}
		if strings.HasPrefix(host, "*.") {
			p.AddZone(host[1:])
			continue
		}
		p.AddHost(host)
	}
}

// AddIP specifies an IP address that will use the bypass proxy. Note that
// this will only take effect if a literal IP address is dialed. A connection
// to a named host will never match an IP.
func (p *PerHost) AddIP(ip net.IP) {
	p.bypassIPs = append(p.bypassIPs, ip)
}

// AddNetwork specifies an IP range that will use the bypass proxy. Note that
// this will only take effect if a literal IP address is dialed. A connection
// to a named host will never match.
func (p *PerHost) AddNetwork(net *net.IPNet) {
	p.bypassNetworks = append(p.bypassNetworks, net)
}

// AddZone specifies a DNS suffix that will use the bypass proxy. A zone of
// "example.com" matches "example.com" and all of its subdomains.
func (p *PerHost) AddZone(zone string) {
	if strings.HasSuffix(zone, ".") {
		zone = zone[:len(zone)-1]
	}
	if !strings.HasPrefix(zone, ".") {
		zone = "." + zone
	}
	p.bypassZones = append(p.bypassZones, zone)
}

// AddHost specifies a hostname that will use the bypass proxy.
func (p *PerHost) AddHost(host string) {
	if strings.HasSuffix(host, ".") {
		host = host[:len(host)-1]
	}
	p.bypassHosts = append(p.bypassHosts, host)
}

lxd-2.0.2/dist/src/golang.org/x/net/proxy/per_host_test.go

// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proxy

import (
	"errors"
	"net"
	"reflect"
	"testing"
)

type recordingProxy struct {
	addrs []string
}

func (r *recordingProxy) Dial(network, addr string) (net.Conn, error) {
	r.addrs = append(r.addrs, addr)
	return nil, errors.New("recordingProxy")
}

func TestPerHost(t *testing.T) {
	var def, bypass recordingProxy
	perHost := NewPerHost(&def, &bypass)
	perHost.AddFromString("localhost,*.zone,127.0.0.1,10.0.0.1/8,1000::/16")

	expectedDef := []string{
		"example.com:123",
		"1.2.3.4:123",
		"[1001::]:123",
	}
	expectedBypass := []string{
		"localhost:123",
		"zone:123",
		"foo.zone:123",
		"127.0.0.1:123",
		"10.1.2.3:123",
		"[1000::]:123",
	}

	for _, addr := range expectedDef {
		perHost.Dial("tcp", addr)
	}
	for _, addr := range expectedBypass {
		perHost.Dial("tcp", addr)
	}

	if !reflect.DeepEqual(expectedDef, def.addrs) {
		t.Errorf("Hosts which went to the default proxy didn't match. Got %v, want %v", def.addrs, expectedDef)
	}
	if !reflect.DeepEqual(expectedBypass, bypass.addrs) {
		t.Errorf("Hosts which went to the bypass proxy didn't match. Got %v, want %v", bypass.addrs, expectedBypass)
	}
}

lxd-2.0.2/dist/src/golang.org/x/net/proxy/direct.go

// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proxy

import (
	"net"
)

type direct struct{}

// Direct is a direct proxy: one that makes network connections directly.
var Direct = direct{}

func (direct) Dial(network, addr string) (net.Conn, error) {
	return net.Dial(network, addr)
}

lxd-2.0.2/dist/src/golang.org/x/net/proxy/proxy.go

// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package proxy provides support for a variety of protocols to proxy network
// data.
package proxy // import "golang.org/x/net/proxy"

import (
	"errors"
	"net"
	"net/url"
	"os"
)

// A Dialer is a means to establish a connection.
type Dialer interface {
	// Dial connects to the given address via the proxy.
	Dial(network, addr string) (c net.Conn, err error)
}

// Auth contains authentication parameters that specific Dialers may require.
type Auth struct {
	User, Password string
}

// FromEnvironment returns the dialer specified by the proxy related variables in
// the environment.
func FromEnvironment() Dialer {
	allProxy := os.Getenv("all_proxy")
	if len(allProxy) == 0 {
		return Direct
	}

	proxyURL, err := url.Parse(allProxy)
	if err != nil {
		return Direct
	}
	proxy, err := FromURL(proxyURL, Direct)
	if err != nil {
		return Direct
	}

	noProxy := os.Getenv("no_proxy")
	if len(noProxy) == 0 {
		return proxy
	}

	perHost := NewPerHost(proxy, Direct)
	perHost.AddFromString(noProxy)
	return perHost
}

// proxySchemes is a map from URL schemes to a function that creates a Dialer
// from a URL with such a scheme.
var proxySchemes map[string]func(*url.URL, Dialer) (Dialer, error)

// RegisterDialerType takes a URL scheme and a function to generate Dialers from
// a URL with that scheme and a forwarding Dialer. Registered schemes are used
// by FromURL.
func RegisterDialerType(scheme string, f func(*url.URL, Dialer) (Dialer, error)) {
	if proxySchemes == nil {
		proxySchemes = make(map[string]func(*url.URL, Dialer) (Dialer, error))
	}
	proxySchemes[scheme] = f
}

// FromURL returns a Dialer given a URL specification and an underlying
// Dialer for it to make network requests.
func FromURL(u *url.URL, forward Dialer) (Dialer, error) {
	var auth *Auth
	if u.User != nil {
		auth = new(Auth)
		auth.User = u.User.Username()
		if p, ok := u.User.Password(); ok {
			auth.Password = p
		}
	}

	switch u.Scheme {
	case "socks5":
		return SOCKS5("tcp", u.Host, auth, forward)
	}

	// If the scheme doesn't match any of the built-in schemes, see if it
	// was registered by another package.
	if proxySchemes != nil {
		if f, ok := proxySchemes[u.Scheme]; ok {
			return f(u, forward)
		}
	}

	return nil, errors.New("proxy: unknown scheme: " + u.Scheme)
}

lxd-2.0.2/dist/src/golang.org/x/net/proxy/socks5.go

// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package proxy

import (
	"errors"
	"io"
	"net"
	"strconv"
)

// SOCKS5 returns a Dialer that makes SOCKSv5 connections to the given address
// with an optional username and password. See RFC 1928.
func SOCKS5(network, addr string, auth *Auth, forward Dialer) (Dialer, error) {
	s := &socks5{
		network: network,
		addr:    addr,
		forward: forward,
	}
	if auth != nil {
		s.user = auth.User
		s.password = auth.Password
	}

	return s, nil
}

type socks5 struct {
	user, password string
	network, addr  string
	forward        Dialer
}

const socks5Version = 5

const (
	socks5AuthNone     = 0
	socks5AuthPassword = 2
)

const socks5Connect = 1

const (
	socks5IP4    = 1
	socks5Domain = 3
	socks5IP6    = 4
)

var socks5Errors = []string{
	"",
	"general failure",
	"connection forbidden",
	"network unreachable",
	"host unreachable",
	"connection refused",
	"TTL expired",
	"command not supported",
	"address type not supported",
}

// Dial connects to the address addr on the network net via the SOCKS5 proxy.
func (s *socks5) Dial(network, addr string) (net.Conn, error) {
	switch network {
	case "tcp", "tcp6", "tcp4":
	default:
		return nil, errors.New("proxy: no support for SOCKS5 proxy connections of type " + network)
	}

	conn, err := s.forward.Dial(s.network, s.addr)
	if err != nil {
		return nil, err
	}
	closeConn := &conn
	defer func() {
		if closeConn != nil {
			(*closeConn).Close()
		}
	}()

	host, portStr, err := net.SplitHostPort(addr)
	if err != nil {
		return nil, err
	}

	port, err := strconv.Atoi(portStr)
	if err != nil {
		return nil, errors.New("proxy: failed to parse port number: " + portStr)
	}
	if port < 1 || port > 0xffff {
		return nil, errors.New("proxy: port number out of range: " + portStr)
	}

	// the size here is just an estimate
	buf := make([]byte, 0, 6+len(host))

	buf = append(buf, socks5Version)
	if len(s.user) > 0 && len(s.user) < 256 && len(s.password) < 256 {
		buf = append(buf, 2 /* num auth methods */, socks5AuthNone, socks5AuthPassword)
	} else {
		buf = append(buf, 1 /* num auth methods */, socks5AuthNone)
	}

	if _, err := conn.Write(buf); err != nil {
		return nil, errors.New("proxy: failed to write greeting to SOCKS5 proxy at " + s.addr + ": " + err.Error())
	}

	if _, err := io.ReadFull(conn, buf[:2]); err != nil {
		return nil, errors.New("proxy: failed to read greeting from SOCKS5 proxy at " + s.addr + ": " + err.Error())
	}
	if buf[0] != 5 {
		return nil, errors.New("proxy: SOCKS5 proxy at " + s.addr + " has unexpected version " + strconv.Itoa(int(buf[0])))
	}
	if buf[1] == 0xff {
		return nil, errors.New("proxy: SOCKS5 proxy at " + s.addr + " requires authentication")
	}

	if buf[1] == socks5AuthPassword {
		buf = buf[:0]
		buf = append(buf, 1 /* password protocol version */)
		buf = append(buf, uint8(len(s.user)))
		buf = append(buf, s.user...)
		buf = append(buf, uint8(len(s.password)))
		buf = append(buf, s.password...)

		if _, err := conn.Write(buf); err != nil {
			return nil, errors.New("proxy: failed to write authentication request to SOCKS5 proxy at " + s.addr + ": " + err.Error())
		}

		if _, err := io.ReadFull(conn, buf[:2]); err != nil {
			return nil, errors.New("proxy: failed to read authentication reply from SOCKS5 proxy at " + s.addr + ": " + err.Error())
		}

		if buf[1] != 0 {
			return nil, errors.New("proxy: SOCKS5 proxy at " + s.addr + " rejected username/password")
		}
	}

	buf = buf[:0]
	buf = append(buf, socks5Version, socks5Connect, 0 /* reserved */)

	if ip := net.ParseIP(host); ip != nil {
		if ip4 := ip.To4(); ip4 != nil {
			buf = append(buf, socks5IP4)
			ip = ip4
		} else {
			buf = append(buf, socks5IP6)
		}
		buf = append(buf, ip...)
	} else {
		if len(host) > 255 {
			return nil, errors.New("proxy: destination hostname too long: " + host)
		}
		buf = append(buf, socks5Domain)
		buf = append(buf, byte(len(host)))
		buf = append(buf, host...)
	}
	buf = append(buf, byte(port>>8), byte(port))

	if _, err := conn.Write(buf); err != nil {
		return nil, errors.New("proxy: failed to write connect request to SOCKS5 proxy at " + s.addr + ": " + err.Error())
	}

	if _, err := io.ReadFull(conn, buf[:4]); err != nil {
		return nil, errors.New("proxy: failed to read connect reply from SOCKS5 proxy at " + s.addr + ": " + err.Error())
	}

	failure := "unknown error"
	if int(buf[1]) < len(socks5Errors) {
		failure = socks5Errors[buf[1]]
	}

	if len(failure) > 0 {
		return nil, errors.New("proxy: SOCKS5 proxy at " + s.addr + " failed to connect: " + failure)
	}

	bytesToDiscard := 0
	switch buf[3] {
	case socks5IP4:
		bytesToDiscard = net.IPv4len
	case socks5IP6:
		bytesToDiscard = net.IPv6len
	case socks5Domain:
		_, err := io.ReadFull(conn, buf[:1])
		if err != nil {
			return nil, errors.New("proxy: failed to read domain length from SOCKS5 proxy at " + s.addr + ": " + err.Error())
		}
		bytesToDiscard = int(buf[0])
	default:
		return nil, errors.New("proxy: got unknown address type " + strconv.Itoa(int(buf[3])) + " from SOCKS5 proxy at " + s.addr)
	}

	if cap(buf) < bytesToDiscard {
		buf = make([]byte, bytesToDiscard)
	} else {
		buf = buf[:bytesToDiscard]
	}
	if _, err := io.ReadFull(conn, buf); err != nil {
		return nil, errors.New("proxy: failed to read address from SOCKS5 proxy at " + s.addr + ": " + err.Error())
	}

	// Also need to discard the port number
	if _, err := io.ReadFull(conn, buf[:2]); err != nil {
		return nil, errors.New("proxy: failed to read port from SOCKS5 proxy at " + s.addr + ": " + err.Error())
	}

	closeConn = nil
	return conn, nil
}

lxd-2.0.2/dist/src/golang.org/x/net/.gitattributes

# Treat all files in this repo as binary, with no git magic updating
# line endings. Windows users contributing to Go will need to use a
# modern version of git and editors capable of LF line endings.
#
# We'll prevent accidental CRLF line endings from entering the repo
# via the git-review gofmt checks.
#
# See golang.org/issue/9281

* -text

lxd-2.0.2/dist/src/golang.org/x/net/PATENTS

Additional IP Rights Grant (Patents)

"This implementation" means the copyrightable works distributed by
Google as part of the Go project.

Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation.
If you or your agent or exclusive licensee institute or order or agree to
the institution of patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that this
implementation of Go or any code incorporated within this implementation
of Go constitutes direct or contributory patent infringement, or
inducement of patent infringement, then any patent rights granted to you
under this License for this implementation of Go shall terminate as of
the date such litigation is filed.

lxd-2.0.2/dist/src/golang.org/x/net/context/context_test.go

// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build !go1.7

package context

import (
	"fmt"
	"math/rand"
	"runtime"
	"strings"
	"sync"
	"testing"
	"time"
)

// otherContext is a Context that's not one of the types defined in context.go.
// This lets us test code paths that differ based on the underlying type of the
// Context.
type otherContext struct {
	Context
}

func TestBackground(t *testing.T) {
	c := Background()
	if c == nil {
		t.Fatalf("Background returned nil")
	}
	select {
	case x := <-c.Done():
		t.Errorf("<-c.Done() == %v want nothing (it should block)", x)
	default:
	}
	if got, want := fmt.Sprint(c), "context.Background"; got != want {
		t.Errorf("Background().String() = %q want %q", got, want)
	}
}

func TestTODO(t *testing.T) {
	c := TODO()
	if c == nil {
		t.Fatalf("TODO returned nil")
	}
	select {
	case x := <-c.Done():
		t.Errorf("<-c.Done() == %v want nothing (it should block)", x)
	default:
	}
	if got, want := fmt.Sprint(c), "context.TODO"; got != want {
		t.Errorf("TODO().String() = %q want %q", got, want)
	}
}

func TestWithCancel(t *testing.T) {
	c1, cancel := WithCancel(Background())

	if got, want := fmt.Sprint(c1), "context.Background.WithCancel"; got != want {
		t.Errorf("c1.String() = %q want %q", got, want)
	}

	o := otherContext{c1}
	c2, _ := WithCancel(o)
	contexts := []Context{c1, o, c2}

	for i, c := range contexts {
		if d := c.Done(); d == nil {
			t.Errorf("c[%d].Done() == %v want non-nil", i, d)
		}
		if e := c.Err(); e != nil {
			t.Errorf("c[%d].Err() == %v want nil", i, e)
		}

		select {
		case x := <-c.Done():
			t.Errorf("<-c.Done() == %v want nothing (it should block)", x)
		default:
		}
	}

	cancel()
	time.Sleep(100 * time.Millisecond) // let cancelation propagate

	for i, c := range contexts {
		select {
		case <-c.Done():
		default:
			t.Errorf("<-c[%d].Done() blocked, but shouldn't have", i)
		}
		if e := c.Err(); e != Canceled {
			t.Errorf("c[%d].Err() == %v want %v", i, e, Canceled)
		}
	}
}

func TestParentFinishesChild(t *testing.T) {
	// Context tree:
	// parent -> cancelChild
	// parent -> valueChild -> timerChild
	parent, cancel := WithCancel(Background())
	cancelChild, stop := WithCancel(parent)
	defer stop()
	valueChild := WithValue(parent, "key", "value")
	timerChild, stop := WithTimeout(valueChild, 10000*time.Hour)
	defer stop()

	select {
	case x := <-parent.Done():
		t.Errorf("<-parent.Done() == %v want nothing (it should block)", x)
	case x := <-cancelChild.Done():
		t.Errorf("<-cancelChild.Done() == %v want nothing (it should block)", x)
	case x := <-timerChild.Done():
		t.Errorf("<-timerChild.Done() == %v want nothing (it should block)", x)
	case x := <-valueChild.Done():
		t.Errorf("<-valueChild.Done() == %v want nothing (it should block)", x)
	default:
	}

	// The parent's children should contain the two cancelable children.
	pc := parent.(*cancelCtx)
	cc := cancelChild.(*cancelCtx)
	tc := timerChild.(*timerCtx)
	pc.mu.Lock()
	if len(pc.children) != 2 || !pc.children[cc] || !pc.children[tc] {
		t.Errorf("bad linkage: pc.children = %v, want %v and %v", pc.children, cc, tc)
	}
	pc.mu.Unlock()

	if p, ok := parentCancelCtx(cc.Context); !ok || p != pc {
		t.Errorf("bad linkage: parentCancelCtx(cancelChild.Context) = %v, %v want %v, true", p, ok, pc)
	}
	if p, ok := parentCancelCtx(tc.Context); !ok || p != pc {
		t.Errorf("bad linkage: parentCancelCtx(timerChild.Context) = %v, %v want %v, true", p, ok, pc)
	}

	cancel()

	pc.mu.Lock()
	if len(pc.children) != 0 {
		t.Errorf("pc.cancel didn't clear pc.children = %v", pc.children)
	}
	pc.mu.Unlock()

	// parent and children should all be finished.
	check := func(ctx Context, name string) {
		select {
		case <-ctx.Done():
		default:
			t.Errorf("<-%s.Done() blocked, but shouldn't have", name)
		}
		if e := ctx.Err(); e != Canceled {
			t.Errorf("%s.Err() == %v want %v", name, e, Canceled)
		}
	}
	check(parent, "parent")
	check(cancelChild, "cancelChild")
	check(valueChild, "valueChild")
	check(timerChild, "timerChild")

	// WithCancel should return a canceled context on a canceled parent.
	precanceledChild := WithValue(parent, "key", "value")
	select {
	case <-precanceledChild.Done():
	default:
		t.Errorf("<-precanceledChild.Done() blocked, but shouldn't have")
	}
	if e := precanceledChild.Err(); e != Canceled {
		t.Errorf("precanceledChild.Err() == %v want %v", e, Canceled)
	}
}

func TestChildFinishesFirst(t *testing.T) {
	cancelable, stop := WithCancel(Background())
	defer stop()
	for _, parent := range []Context{Background(), cancelable} {
		child, cancel := WithCancel(parent)

		select {
		case x := <-parent.Done():
			t.Errorf("<-parent.Done() == %v want nothing (it should block)", x)
		case x := <-child.Done():
			t.Errorf("<-child.Done() == %v want nothing (it should block)", x)
		default:
		}

		cc := child.(*cancelCtx)
		pc, pcok := parent.(*cancelCtx) // pcok == false when parent == Background()
		if p, ok := parentCancelCtx(cc.Context); ok != pcok || (ok && pc != p) {
			t.Errorf("bad linkage: parentCancelCtx(cc.Context) = %v, %v want %v, %v", p, ok, pc, pcok)
		}

		if pcok {
			pc.mu.Lock()
			if len(pc.children) != 1 || !pc.children[cc] {
				t.Errorf("bad linkage: pc.children = %v, cc = %v", pc.children, cc)
			}
			pc.mu.Unlock()
		}

		cancel()

		if pcok {
			pc.mu.Lock()
			if len(pc.children) != 0 {
				t.Errorf("child's cancel didn't remove self from pc.children = %v", pc.children)
			}
			pc.mu.Unlock()
		}

		// child should be finished.
		select {
		case <-child.Done():
		default:
			t.Errorf("<-child.Done() blocked, but shouldn't have")
		}
		if e := child.Err(); e != Canceled {
			t.Errorf("child.Err() == %v want %v", e, Canceled)
		}

		// parent should not be finished.
		select {
		case x := <-parent.Done():
			t.Errorf("<-parent.Done() == %v want nothing (it should block)", x)
		default:
		}
		if e := parent.Err(); e != nil {
			t.Errorf("parent.Err() == %v want nil", e)
		}
	}
}

func testDeadline(c Context, wait time.Duration, t *testing.T) {
	select {
	case <-time.After(wait):
		t.Fatalf("context should have timed out")
	case <-c.Done():
	}
	if e := c.Err(); e != DeadlineExceeded {
		t.Errorf("c.Err() == %v want %v", e, DeadlineExceeded)
	}
}

func TestDeadline(t *testing.T) {
	c, _ := WithDeadline(Background(), time.Now().Add(100*time.Millisecond))
	if got, prefix := fmt.Sprint(c), "context.Background.WithDeadline("; !strings.HasPrefix(got, prefix) {
		t.Errorf("c.String() = %q want prefix %q", got, prefix)
	}
	testDeadline(c, 200*time.Millisecond, t)

	c, _ = WithDeadline(Background(), time.Now().Add(100*time.Millisecond))
	o := otherContext{c}
	testDeadline(o, 200*time.Millisecond, t)

	c, _ = WithDeadline(Background(), time.Now().Add(100*time.Millisecond))
	o = otherContext{c}
	c, _ = WithDeadline(o, time.Now().Add(300*time.Millisecond))
	testDeadline(c, 200*time.Millisecond, t)
}

func TestTimeout(t *testing.T) {
	c, _ := WithTimeout(Background(), 100*time.Millisecond)
	if got, prefix := fmt.Sprint(c), "context.Background.WithDeadline("; !strings.HasPrefix(got, prefix) {
		t.Errorf("c.String() = %q want prefix %q", got, prefix)
	}
	testDeadline(c, 200*time.Millisecond, t)

	c, _ = WithTimeout(Background(), 100*time.Millisecond)
	o := otherContext{c}
	testDeadline(o, 200*time.Millisecond, t)

	c, _ = WithTimeout(Background(), 100*time.Millisecond)
	o = otherContext{c}
	c, _ = WithTimeout(o, 300*time.Millisecond)
	testDeadline(c, 200*time.Millisecond, t)
}

func TestCanceledTimeout(t *testing.T) {
	c, _ := WithTimeout(Background(), 200*time.Millisecond)
	o := otherContext{c}
	c, cancel := WithTimeout(o, 400*time.Millisecond)
	cancel()
	time.Sleep(100 * time.Millisecond) // let cancelation propagate
	select {
	case <-c.Done():
	default:
		t.Errorf("<-c.Done() blocked, but shouldn't have")
	}
	if e := c.Err(); e != Canceled {
		t.Errorf("c.Err() == %v want %v", e, Canceled)
	}
}

type key1 int
type key2 int

var k1 = key1(1)
var k2 = key2(1) // same int as k1, different type
var k3 = key2(3) // same type as k2, different int

func TestValues(t *testing.T) {
	check := func(c Context, nm, v1, v2, v3 string) {
		if v, ok := c.Value(k1).(string); ok == (len(v1) == 0) || v != v1 {
			t.Errorf(`%s.Value(k1).(string) = %q, %t want %q, %t`, nm, v, ok, v1, len(v1) != 0)
		}
		if v, ok := c.Value(k2).(string); ok == (len(v2) == 0) || v != v2 {
			t.Errorf(`%s.Value(k2).(string) = %q, %t want %q, %t`, nm, v, ok, v2, len(v2) != 0)
		}
		if v, ok := c.Value(k3).(string); ok == (len(v3) == 0) || v != v3 {
			t.Errorf(`%s.Value(k3).(string) = %q, %t want %q, %t`, nm, v, ok, v3, len(v3) != 0)
		}
	}

	c0 := Background()
	check(c0, "c0", "", "", "")

	c1 := WithValue(Background(), k1, "c1k1")
	check(c1, "c1", "c1k1", "", "")

	if got, want := fmt.Sprint(c1), `context.Background.WithValue(1, "c1k1")`; got != want {
		t.Errorf("c.String() = %q want %q", got, want)
	}

	c2 := WithValue(c1, k2, "c2k2")
	check(c2, "c2", "c1k1", "c2k2", "")

	c3 := WithValue(c2, k3, "c3k3")
	check(c3, "c2", "c1k1", "c2k2", "c3k3")

	c4 := WithValue(c3, k1, nil)
	check(c4, "c4", "", "c2k2", "c3k3")

	o0 := otherContext{Background()}
	check(o0, "o0", "", "", "")

	o1 := otherContext{WithValue(Background(), k1, "c1k1")}
	check(o1, "o1", "c1k1", "", "")

	o2 := WithValue(o1, k2, "o2k2")
	check(o2, "o2", "c1k1", "o2k2", "")

	o3 := otherContext{c4}
	check(o3, "o3", "", "c2k2", "c3k3")

	o4 := WithValue(o3, k3, nil)
	check(o4, "o4", "", "c2k2", "")
}

func TestAllocs(t *testing.T) {
	bg := Background()
	for _, test := range []struct {
		desc       string
		f          func()
		limit      float64
		gccgoLimit float64
	}{
		{
			desc:       "Background()",
			f:          func() { Background() },
			limit:      0,
			gccgoLimit: 0,
		},
		{
			desc: fmt.Sprintf("WithValue(bg, %v, nil)", k1),
			f: func() {
				c := WithValue(bg, k1, nil)
				c.Value(k1)
			},
			limit:      3,
			gccgoLimit: 3,
		},
		{
			desc: "WithTimeout(bg, 15*time.Millisecond)",
			f: func() {
				c, _ := WithTimeout(bg, 15*time.Millisecond)
				<-c.Done()
			},
			limit:      8,
			gccgoLimit: 16,
		},
		{
			desc: "WithCancel(bg)",
			f: func() {
				c, cancel := WithCancel(bg)
				cancel()
				<-c.Done()
			},
			limit:      5,
			gccgoLimit: 8,
		},
		{
			desc: "WithTimeout(bg, 100*time.Millisecond)",
			f: func() {
				c, cancel := WithTimeout(bg, 100*time.Millisecond)
				cancel()
				<-c.Done()
			},
			limit:      8,
			gccgoLimit: 25,
		},
	} {
		limit := test.limit
		if runtime.Compiler == "gccgo" {
			// gccgo does not yet do escape analysis.
			// TODO(iant): Remove this when gccgo does do escape analysis.
			limit = test.gccgoLimit
		}
		if n := testing.AllocsPerRun(100, test.f); n > limit {
			t.Errorf("%s allocs = %f want %d", test.desc, n, int(limit))
		}
	}
}

func TestSimultaneousCancels(t *testing.T) {
	root, cancel := WithCancel(Background())
	m := map[Context]CancelFunc{root: cancel}
	q := []Context{root}
	// Create a tree of contexts.
	for len(q) != 0 && len(m) < 100 {
		parent := q[0]
		q = q[1:]
		for i := 0; i < 4; i++ {
			ctx, cancel := WithCancel(parent)
			m[ctx] = cancel
			q = append(q, ctx)
		}
	}
	// Start all the cancels in a random order.
	var wg sync.WaitGroup
	wg.Add(len(m))
	for _, cancel := range m {
		go func(cancel CancelFunc) {
			cancel()
			wg.Done()
		}(cancel)
	}
	// Wait on all the contexts in a random order.
	for ctx := range m {
		select {
		case <-ctx.Done():
		case <-time.After(1 * time.Second):
			buf := make([]byte, 10<<10)
			n := runtime.Stack(buf, true)
			t.Fatalf("timed out waiting for <-ctx.Done(); stacks:\n%s", buf[:n])
		}
	}
	// Wait for all the cancel functions to return.
	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()
	select {
	case <-done:
	case <-time.After(1 * time.Second):
		buf := make([]byte, 10<<10)
		n := runtime.Stack(buf, true)
		t.Fatalf("timed out waiting for cancel functions; stacks:\n%s", buf[:n])
	}
}

func TestInterlockedCancels(t *testing.T) {
	parent, cancelParent := WithCancel(Background())
	child, cancelChild := WithCancel(parent)
	go func() {
		parent.Done()
		cancelChild()
	}()
	cancelParent()
	select {
	case <-child.Done():
	case <-time.After(1 * time.Second):
		buf := make([]byte, 10<<10)
		n := runtime.Stack(buf, true)
		t.Fatalf("timed out waiting for child.Done(); stacks:\n%s", buf[:n])
	}
}

func TestLayersCancel(t *testing.T) {
	testLayers(t, time.Now().UnixNano(), false)
}

func TestLayersTimeout(t *testing.T) {
	testLayers(t, time.Now().UnixNano(), true)
}

func testLayers(t *testing.T, seed int64, testTimeout bool) {
	rand.Seed(seed)
	errorf := func(format string, a ...interface{}) {
		t.Errorf(fmt.Sprintf("seed=%d: %s", seed, format), a...)
	}
	const (
		timeout   = 200 * time.Millisecond
		minLayers = 30
	)
	type value int
	var (
		vals      []*value
		cancels   []CancelFunc
		numTimers int
		ctx       = Background()
	)
	for i := 0; i < minLayers || numTimers == 0 || len(cancels) == 0 || len(vals) == 0; i++ {
		switch rand.Intn(3) {
		case 0:
			v := new(value)
			ctx = WithValue(ctx, v, v)
			vals = append(vals, v)
		case 1:
			var cancel CancelFunc
			ctx, cancel = WithCancel(ctx)
			cancels = append(cancels, cancel)
		case 2:
			var cancel CancelFunc
			ctx, cancel = WithTimeout(ctx, timeout)
			cancels = append(cancels, cancel)
			numTimers++
		}
	}
	checkValues := func(when string) {
		for _, key := range vals {
			if val := ctx.Value(key).(*value); key != val {
				errorf("%s: ctx.Value(%p) = %p want %p", when, key, val, key)
			}
		}
	}
	select {
	case <-ctx.Done():
		errorf("ctx should not be canceled yet")
	default:
	}
	if s, prefix := fmt.Sprint(ctx), "context.Background."; !strings.HasPrefix(s, prefix) {
		t.Errorf("ctx.String() = %q want prefix %q", s, prefix)
	}
	t.Log(ctx)
	checkValues("before cancel")
	if testTimeout {
		select {
		case <-ctx.Done():
		case <-time.After(timeout + 100*time.Millisecond):
			errorf("ctx should have timed out")
		}
		checkValues("after timeout")
	} else {
		cancel := cancels[rand.Intn(len(cancels))]
		cancel()
		select {
		case <-ctx.Done():
		default:
			errorf("ctx should be canceled")
		}
		checkValues("after cancel")
	}
}

func TestCancelRemoves(t *testing.T) {
	checkChildren := func(when string, ctx Context, want int) {
		if got := len(ctx.(*cancelCtx).children); got != want {
			t.Errorf("%s: context has %d children, want %d", when, got, want)
		}
	}

	ctx, _ := WithCancel(Background())
	checkChildren("after creation", ctx, 0)
	_, cancel := WithCancel(ctx)
	checkChildren("with WithCancel child ", ctx, 1)
	cancel()
	checkChildren("after cancelling WithCancel child", ctx, 0)

	ctx, _ = WithCancel(Background())
	checkChildren("after creation", ctx, 0)
	_, cancel = WithTimeout(ctx, 60*time.Minute)
	checkChildren("with WithTimeout child ", ctx, 1)
	cancel()
	checkChildren("after cancelling WithTimeout child", ctx, 0)
}

lxd-2.0.2/dist/src/golang.org/x/net/context/withtimeout_test.go

// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package context_test

import (
	"fmt"
	"time"

	"golang.org/x/net/context"
)

func ExampleWithTimeout() {
	// Pass a context with a timeout to tell a blocking function that it
	// should abandon its work after the timeout elapses.
ctx, _ := context.WithTimeout(context.Background(), 100*time.Millisecond) select { case <-time.After(200 * time.Millisecond): fmt.Println("overslept") case <-ctx.Done(): fmt.Println(ctx.Err()) // prints "context deadline exceeded" } // Output: // context deadline exceeded } lxd-2.0.2/dist/src/golang.org/x/net/context/context.go0000644061062106075000000001407612721405224025242 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // Package context defines the Context type, which carries deadlines, // cancelation signals, and other request-scoped values across API boundaries // and between processes. // // Incoming requests to a server should create a Context, and outgoing calls to // servers should accept a Context. The chain of function calls between must // propagate the Context, optionally replacing it with a modified copy created // using WithDeadline, WithTimeout, WithCancel, or WithValue. // // Programs that use Contexts should follow these rules to keep interfaces // consistent across packages and enable static analysis tools to check context // propagation: // // Do not store Contexts inside a struct type; instead, pass a Context // explicitly to each function that needs it. The Context should be the first // parameter, typically named ctx: // // func DoSomething(ctx context.Context, arg Arg) error { // // ... use ctx ... // } // // Do not pass a nil Context, even if a function permits it. Pass context.TODO // if you are unsure about which Context to use. // // Use context Values only for request-scoped data that transits processes and // APIs, not for passing optional parameters to functions. // // The same Context may be passed to functions running in different goroutines; // Contexts are safe for simultaneous use by multiple goroutines. 
// // See http://blog.golang.org/context for example code for a server that uses // Contexts. package context // import "golang.org/x/net/context" import "time" // A Context carries a deadline, a cancelation signal, and other values across // API boundaries. // // Context's methods may be called by multiple goroutines simultaneously. type Context interface { // Deadline returns the time when work done on behalf of this context // should be canceled. Deadline returns ok==false when no deadline is // set. Successive calls to Deadline return the same results. Deadline() (deadline time.Time, ok bool) // Done returns a channel that's closed when work done on behalf of this // context should be canceled. Done may return nil if this context can // never be canceled. Successive calls to Done return the same value. // // WithCancel arranges for Done to be closed when cancel is called; // WithDeadline arranges for Done to be closed when the deadline // expires; WithTimeout arranges for Done to be closed when the timeout // elapses. // // Done is provided for use in select statements: // // // Stream generates values with DoSomething and sends them to out // // until DoSomething returns an error or ctx.Done is closed. // func Stream(ctx context.Context, out chan<- Value) error { // for { // v, err := DoSomething(ctx) // if err != nil { // return err // } // select { // case <-ctx.Done(): // return ctx.Err() // case out <- v: // } // } // } // // See http://blog.golang.org/pipelines for more examples of how to use // a Done channel for cancelation. Done() <-chan struct{} // Err returns a non-nil error value after Done is closed. Err returns // Canceled if the context was canceled or DeadlineExceeded if the // context's deadline passed. No other values for Err are defined. // After Done is closed, successive calls to Err return the same value. Err() error // Value returns the value associated with this context for key, or nil // if no value is associated with key. 
Successive calls to Value with
	// the same key return the same result.
	//
	// Use context values only for request-scoped data that transits
	// processes and API boundaries, not for passing optional parameters to
	// functions.
	//
	// A key identifies a specific value in a Context. Functions that wish
	// to store values in Context typically allocate a key in a global
	// variable then use that key as the argument to context.WithValue and
	// Context.Value. A key can be any type that supports equality;
	// packages should define keys as an unexported type to avoid
	// collisions.
	//
	// Packages that define a Context key should provide type-safe accessors
	// for the values stored using that key:
	//
	// 	// Package user defines a User type that's stored in Contexts.
	// 	package user
	//
	// 	import "golang.org/x/net/context"
	//
	// 	// User is the type of value stored in the Contexts.
	// 	type User struct {...}
	//
	// 	// key is an unexported type for keys defined in this package.
	// 	// This prevents collisions with keys defined in other packages.
	// 	type key int
	//
	// 	// userKey is the key for user.User values in Contexts. It is
	// 	// unexported; clients use user.NewContext and user.FromContext
	// 	// instead of using this key directly.
	// 	var userKey key = 0
	//
	// 	// NewContext returns a new Context that carries value u.
	// 	func NewContext(ctx context.Context, u *User) context.Context {
	// 		return context.WithValue(ctx, userKey, u)
	// 	}
	//
	// 	// FromContext returns the User value stored in ctx, if any.
	// 	func FromContext(ctx context.Context) (*User, bool) {
	// 		u, ok := ctx.Value(userKey).(*User)
	// 		return u, ok
	// 	}
	Value(key interface{}) interface{}
}

// Background returns a non-nil, empty Context. It is never canceled, has no
// values, and has no deadline. It is typically used by the main function,
// initialization, and tests, and as the top-level Context for incoming
// requests.
func Background() Context {
	return background
}

// TODO returns a non-nil, empty Context.
Code should use context.TODO when // it's unclear which Context to use or it is not yet available (because the // surrounding function has not yet been extended to accept a Context // parameter). TODO is recognized by static analysis tools that determine // whether Contexts are propagated correctly in a program. func TODO() Context { return todo } // A CancelFunc tells an operation to abandon its work. // A CancelFunc does not wait for the work to stop. // After the first call, subsequent calls to a CancelFunc do nothing. type CancelFunc func() lxd-2.0.2/dist/src/golang.org/x/net/context/go17.go0000644061062106075000000000546012721405224024330 0ustar00stgraberdomain admins00000000000000// Copyright 2016 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build go1.7 package context import ( "context" // standard library's context, as of Go 1.7 "time" ) var ( todo = context.TODO() background = context.Background() ) // Canceled is the error returned by Context.Err when the context is canceled. var Canceled = context.Canceled // DeadlineExceeded is the error returned by Context.Err when the context's // deadline passes. var DeadlineExceeded = context.DeadlineExceeded // WithCancel returns a copy of parent with a new Done channel. The returned // context's Done channel is closed when the returned cancel function is called // or when the parent context's Done channel is closed, whichever happens first. // // Canceling this context releases resources associated with it, so code should // call cancel as soon as the operations running in this Context complete. func WithCancel(parent Context) (ctx Context, cancel CancelFunc) { ctx, f := context.WithCancel(parent) return ctx, CancelFunc(f) } // WithDeadline returns a copy of the parent context with the deadline adjusted // to be no later than d. 
If the parent's deadline is already earlier than d, // WithDeadline(parent, d) is semantically equivalent to parent. The returned // context's Done channel is closed when the deadline expires, when the returned // cancel function is called, or when the parent context's Done channel is // closed, whichever happens first. // // Canceling this context releases resources associated with it, so code should // call cancel as soon as the operations running in this Context complete. func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) { ctx, f := context.WithDeadline(parent, deadline) return ctx, CancelFunc(f) } // WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)). // // Canceling this context releases resources associated with it, so code should // call cancel as soon as the operations running in this Context complete: // // func slowOperationWithTimeout(ctx context.Context) (Result, error) { // ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond) // defer cancel() // releases resources if slowOperation completes before timeout elapses // return slowOperation(ctx) // } func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { return WithDeadline(parent, time.Now().Add(timeout)) } // WithValue returns a copy of parent in which the value associated with key is // val. // // Use context Values only for request-scoped data that transits processes and // APIs, not for passing optional parameters to functions. func WithValue(parent Context, key interface{}, val interface{}) Context { return context.WithValue(parent, key, val) } lxd-2.0.2/dist/src/golang.org/x/net/context/pre_go17.go0000644061062106075000000001770412721405224025202 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
// +build !go1.7 package context import ( "errors" "fmt" "sync" "time" ) // An emptyCtx is never canceled, has no values, and has no deadline. It is not // struct{}, since vars of this type must have distinct addresses. type emptyCtx int func (*emptyCtx) Deadline() (deadline time.Time, ok bool) { return } func (*emptyCtx) Done() <-chan struct{} { return nil } func (*emptyCtx) Err() error { return nil } func (*emptyCtx) Value(key interface{}) interface{} { return nil } func (e *emptyCtx) String() string { switch e { case background: return "context.Background" case todo: return "context.TODO" } return "unknown empty Context" } var ( background = new(emptyCtx) todo = new(emptyCtx) ) // Canceled is the error returned by Context.Err when the context is canceled. var Canceled = errors.New("context canceled") // DeadlineExceeded is the error returned by Context.Err when the context's // deadline passes. var DeadlineExceeded = errors.New("context deadline exceeded") // WithCancel returns a copy of parent with a new Done channel. The returned // context's Done channel is closed when the returned cancel function is called // or when the parent context's Done channel is closed, whichever happens first. // // Canceling this context releases resources associated with it, so code should // call cancel as soon as the operations running in this Context complete. func WithCancel(parent Context) (ctx Context, cancel CancelFunc) { c := newCancelCtx(parent) propagateCancel(parent, c) return c, func() { c.cancel(true, Canceled) } } // newCancelCtx returns an initialized cancelCtx. func newCancelCtx(parent Context) *cancelCtx { return &cancelCtx{ Context: parent, done: make(chan struct{}), } } // propagateCancel arranges for child to be canceled when parent is. 
func propagateCancel(parent Context, child canceler) { if parent.Done() == nil { return // parent is never canceled } if p, ok := parentCancelCtx(parent); ok { p.mu.Lock() if p.err != nil { // parent has already been canceled child.cancel(false, p.err) } else { if p.children == nil { p.children = make(map[canceler]bool) } p.children[child] = true } p.mu.Unlock() } else { go func() { select { case <-parent.Done(): child.cancel(false, parent.Err()) case <-child.Done(): } }() } } // parentCancelCtx follows a chain of parent references until it finds a // *cancelCtx. This function understands how each of the concrete types in this // package represents its parent. func parentCancelCtx(parent Context) (*cancelCtx, bool) { for { switch c := parent.(type) { case *cancelCtx: return c, true case *timerCtx: return c.cancelCtx, true case *valueCtx: parent = c.Context default: return nil, false } } } // removeChild removes a context from its parent. func removeChild(parent Context, child canceler) { p, ok := parentCancelCtx(parent) if !ok { return } p.mu.Lock() if p.children != nil { delete(p.children, child) } p.mu.Unlock() } // A canceler is a context type that can be canceled directly. The // implementations are *cancelCtx and *timerCtx. type canceler interface { cancel(removeFromParent bool, err error) Done() <-chan struct{} } // A cancelCtx can be canceled. When canceled, it also cancels any children // that implement canceler. type cancelCtx struct { Context done chan struct{} // closed by the first cancel call. 
mu sync.Mutex children map[canceler]bool // set to nil by the first cancel call err error // set to non-nil by the first cancel call } func (c *cancelCtx) Done() <-chan struct{} { return c.done } func (c *cancelCtx) Err() error { c.mu.Lock() defer c.mu.Unlock() return c.err } func (c *cancelCtx) String() string { return fmt.Sprintf("%v.WithCancel", c.Context) } // cancel closes c.done, cancels each of c's children, and, if // removeFromParent is true, removes c from its parent's children. func (c *cancelCtx) cancel(removeFromParent bool, err error) { if err == nil { panic("context: internal error: missing cancel error") } c.mu.Lock() if c.err != nil { c.mu.Unlock() return // already canceled } c.err = err close(c.done) for child := range c.children { // NOTE: acquiring the child's lock while holding parent's lock. child.cancel(false, err) } c.children = nil c.mu.Unlock() if removeFromParent { removeChild(c.Context, c) } } // WithDeadline returns a copy of the parent context with the deadline adjusted // to be no later than d. If the parent's deadline is already earlier than d, // WithDeadline(parent, d) is semantically equivalent to parent. The returned // context's Done channel is closed when the deadline expires, when the returned // cancel function is called, or when the parent context's Done channel is // closed, whichever happens first. // // Canceling this context releases resources associated with it, so code should // call cancel as soon as the operations running in this Context complete. func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) { if cur, ok := parent.Deadline(); ok && cur.Before(deadline) { // The current deadline is already sooner than the new one. 
return WithCancel(parent) } c := &timerCtx{ cancelCtx: newCancelCtx(parent), deadline: deadline, } propagateCancel(parent, c) d := deadline.Sub(time.Now()) if d <= 0 { c.cancel(true, DeadlineExceeded) // deadline has already passed return c, func() { c.cancel(true, Canceled) } } c.mu.Lock() defer c.mu.Unlock() if c.err == nil { c.timer = time.AfterFunc(d, func() { c.cancel(true, DeadlineExceeded) }) } return c, func() { c.cancel(true, Canceled) } } // A timerCtx carries a timer and a deadline. It embeds a cancelCtx to // implement Done and Err. It implements cancel by stopping its timer then // delegating to cancelCtx.cancel. type timerCtx struct { *cancelCtx timer *time.Timer // Under cancelCtx.mu. deadline time.Time } func (c *timerCtx) Deadline() (deadline time.Time, ok bool) { return c.deadline, true } func (c *timerCtx) String() string { return fmt.Sprintf("%v.WithDeadline(%s [%s])", c.cancelCtx.Context, c.deadline, c.deadline.Sub(time.Now())) } func (c *timerCtx) cancel(removeFromParent bool, err error) { c.cancelCtx.cancel(false, err) if removeFromParent { // Remove this timerCtx from its parent cancelCtx's children. removeChild(c.cancelCtx.Context, c) } c.mu.Lock() if c.timer != nil { c.timer.Stop() c.timer = nil } c.mu.Unlock() } // WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)). // // Canceling this context releases resources associated with it, so code should // call cancel as soon as the operations running in this Context complete: // // func slowOperationWithTimeout(ctx context.Context) (Result, error) { // ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond) // defer cancel() // releases resources if slowOperation completes before timeout elapses // return slowOperation(ctx) // } func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { return WithDeadline(parent, time.Now().Add(timeout)) } // WithValue returns a copy of parent in which the value associated with key is // val. 
// // Use context Values only for request-scoped data that transits processes and // APIs, not for passing optional parameters to functions. func WithValue(parent Context, key interface{}, val interface{}) Context { return &valueCtx{parent, key, val} } // A valueCtx carries a key-value pair. It implements Value for that key and // delegates all other calls to the embedded Context. type valueCtx struct { Context key, val interface{} } func (c *valueCtx) String() string { return fmt.Sprintf("%v.WithValue(%#v, %#v)", c.Context, c.key, c.val) } func (c *valueCtx) Value(key interface{}) interface{} { if c.key == key { return c.val } return c.Context.Value(key) } lxd-2.0.2/dist/src/golang.org/x/net/context/ctxhttp/0000755061062106075000000000000012721405224024715 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/context/ctxhttp/ctxhttp_test.go0000644061062106075000000000777212721405224030016 0ustar00stgraberdomain admins00000000000000// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build !plan9 package ctxhttp import ( "io/ioutil" "net" "net/http" "net/http/httptest" "sync" "testing" "time" "golang.org/x/net/context" ) const ( requestDuration = 100 * time.Millisecond requestBody = "ok" ) func TestNoTimeout(t *testing.T) { ctx := context.Background() resp, err := doRequest(ctx) if resp == nil || err != nil { t.Fatalf("error received from client: %v %v", err, resp) } } func TestCancel(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) go func() { time.Sleep(requestDuration / 2) cancel() }() resp, err := doRequest(ctx) if resp != nil || err == nil { t.Fatalf("expected error, didn't get one. 
resp: %v", resp)
	}
	if err != ctx.Err() {
		t.Fatalf("expected error from context but got: %v", err)
	}
}

func TestCancelAfterRequest(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())

	resp, err := doRequest(ctx)

	// Cancel before reading the body.
	// Request.Body should still be readable after the context is canceled.
	cancel()

	b, err := ioutil.ReadAll(resp.Body)
	if err != nil || string(b) != requestBody {
		t.Fatalf("could not read body: %q %v", b, err)
	}
}

func TestCancelAfterHangingRequest(t *testing.T) {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.(http.Flusher).Flush()
		<-w.(http.CloseNotifier).CloseNotify()
	})

	serv := httptest.NewServer(handler)
	defer serv.Close()

	ctx, cancel := context.WithCancel(context.Background())
	resp, err := Get(ctx, nil, serv.URL)
	if err != nil {
		t.Fatalf("unexpected error in Get: %v", err)
	}

	// Cancel before reading the body.
	// Reading Request.Body should fail, since the request was
	// canceled before anything was written.
cancel() done := make(chan struct{}) go func() { b, err := ioutil.ReadAll(resp.Body) if len(b) != 0 || err == nil { t.Errorf(`Read got (%q, %v); want ("", error)`, b, err) } close(done) }() select { case <-time.After(1 * time.Second): t.Errorf("Test timed out") case <-done: } } func doRequest(ctx context.Context) (*http.Response, error) { var okHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { time.Sleep(requestDuration) w.Write([]byte(requestBody)) }) serv := httptest.NewServer(okHandler) defer serv.Close() return Get(ctx, nil, serv.URL) } // golang.org/issue/14065 func TestClosesResponseBodyOnCancel(t *testing.T) { defer func() { testHookContextDoneBeforeHeaders = nop }() defer func() { testHookDoReturned = nop }() defer func() { testHookDidBodyClose = nop }() ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {})) defer ts.Close() ctx, cancel := context.WithCancel(context.Background()) // closed when Do enters select case <-ctx.Done() enteredDonePath := make(chan struct{}) testHookContextDoneBeforeHeaders = func() { close(enteredDonePath) } testHookDoReturned = func() { // We now have the result (the Flush'd headers) at least, // so we can cancel the request. cancel() // But block the client.Do goroutine from sending // until Do enters into the <-ctx.Done() path, since // otherwise if both channels are readable, select // picks a random one. 
<-enteredDonePath } sawBodyClose := make(chan struct{}) testHookDidBodyClose = func() { close(sawBodyClose) } tr := &http.Transport{} defer tr.CloseIdleConnections() c := &http.Client{Transport: tr} req, _ := http.NewRequest("GET", ts.URL, nil) _, doErr := Do(ctx, c, req) select { case <-sawBodyClose: case <-time.After(5 * time.Second): t.Fatal("timeout waiting for body to close") } if doErr != ctx.Err() { t.Errorf("Do error = %v; want %v", doErr, ctx.Err()) } } type noteCloseConn struct { net.Conn onceClose sync.Once closefn func() } func (c *noteCloseConn) Close() error { c.onceClose.Do(c.closefn) return c.Conn.Close() } lxd-2.0.2/dist/src/golang.org/x/net/context/ctxhttp/ctxhttp.go0000644061062106075000000000716612721405224026754 0ustar00stgraberdomain admins00000000000000// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // Package ctxhttp provides helper functions for performing context-aware HTTP requests. package ctxhttp // import "golang.org/x/net/context/ctxhttp" import ( "io" "net/http" "net/url" "strings" "golang.org/x/net/context" ) func nop() {} var ( testHookContextDoneBeforeHeaders = nop testHookDoReturned = nop testHookDidBodyClose = nop ) // Do sends an HTTP request with the provided http.Client and returns an HTTP response. // If the client is nil, http.DefaultClient is used. // If the context is canceled or times out, ctx.Err() will be returned. func Do(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error) { if client == nil { client = http.DefaultClient } // TODO(djd): Respect any existing value of req.Cancel. cancel := make(chan struct{}) req.Cancel = cancel type responseAndError struct { resp *http.Response err error } result := make(chan responseAndError, 1) // Make local copies of test hooks closed over by goroutines below. // Prevents data races in tests. 
testHookDoReturned := testHookDoReturned
	testHookDidBodyClose := testHookDidBodyClose

	go func() {
		resp, err := client.Do(req)
		testHookDoReturned()
		result <- responseAndError{resp, err}
	}()

	var resp *http.Response

	select {
	case <-ctx.Done():
		testHookContextDoneBeforeHeaders()
		close(cancel)
		// Clean up after the goroutine calling client.Do:
		go func() {
			if r := <-result; r.resp != nil {
				testHookDidBodyClose()
				r.resp.Body.Close()
			}
		}()
		return nil, ctx.Err()
	case r := <-result:
		var err error
		resp, err = r.resp, r.err
		if err != nil {
			return resp, err
		}
	}

	c := make(chan struct{})
	go func() {
		select {
		case <-ctx.Done():
			close(cancel)
		case <-c:
			// The response's Body is closed.
		}
	}()
	resp.Body = &notifyingReader{resp.Body, c}
	return resp, nil
}

// Get issues a GET request via the Do function.
func Get(ctx context.Context, client *http.Client, url string) (*http.Response, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	return Do(ctx, client, req)
}

// Head issues a HEAD request via the Do function.
func Head(ctx context.Context, client *http.Client, url string) (*http.Response, error) {
	req, err := http.NewRequest("HEAD", url, nil)
	if err != nil {
		return nil, err
	}
	return Do(ctx, client, req)
}

// Post issues a POST request via the Do function.
func Post(ctx context.Context, client *http.Client, url string, bodyType string, body io.Reader) (*http.Response, error) {
	req, err := http.NewRequest("POST", url, body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", bodyType)
	return Do(ctx, client, req)
}

// PostForm issues a POST request via the Do function.
func PostForm(ctx context.Context, client *http.Client, url string, data url.Values) (*http.Response, error) {
	return Post(ctx, client, url, "application/x-www-form-urlencoded", strings.NewReader(data.Encode()))
}

// notifyingReader is an io.ReadCloser that closes the notify channel after
// Close is called or a Read fails on the underlying ReadCloser.
type notifyingReader struct { io.ReadCloser notify chan<- struct{} } func (r *notifyingReader) Read(p []byte) (int, error) { n, err := r.ReadCloser.Read(p) if err != nil && r.notify != nil { close(r.notify) r.notify = nil } return n, err } func (r *notifyingReader) Close() error { err := r.ReadCloser.Close() if r.notify != nil { close(r.notify) r.notify = nil } return err } lxd-2.0.2/dist/src/golang.org/x/net/websocket/0000755061062106075000000000000012721405224023521 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/websocket/websocket_test.go0000644061062106075000000003044212721405224027100 0ustar00stgraberdomain admins00000000000000// Copyright 2009 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package websocket import ( "bytes" "fmt" "io" "log" "net" "net/http" "net/http/httptest" "net/url" "reflect" "runtime" "strings" "sync" "testing" "time" ) var serverAddr string var once sync.Once func echoServer(ws *Conn) { defer ws.Close() io.Copy(ws, ws) } type Count struct { S string N int } func countServer(ws *Conn) { defer ws.Close() for { var count Count err := JSON.Receive(ws, &count) if err != nil { return } count.N++ count.S = strings.Repeat(count.S, count.N) err = JSON.Send(ws, count) if err != nil { return } } } type testCtrlAndDataHandler struct { hybiFrameHandler } func (h *testCtrlAndDataHandler) WritePing(b []byte) (int, error) { h.hybiFrameHandler.conn.wio.Lock() defer h.hybiFrameHandler.conn.wio.Unlock() w, err := h.hybiFrameHandler.conn.frameWriterFactory.NewFrameWriter(PingFrame) if err != nil { return 0, err } n, err := w.Write(b) w.Close() return n, err } func ctrlAndDataServer(ws *Conn) { defer ws.Close() h := &testCtrlAndDataHandler{hybiFrameHandler: hybiFrameHandler{conn: ws}} ws.frameHandler = h go func() { for i := 0; ; i++ { var b []byte if i%2 != 0 { // with or without payload b = 
[]byte(fmt.Sprintf("#%d-CONTROL-FRAME-FROM-SERVER", i)) } if _, err := h.WritePing(b); err != nil { break } if _, err := h.WritePong(b); err != nil { // unsolicited pong break } time.Sleep(10 * time.Millisecond) } }() b := make([]byte, 128) for { n, err := ws.Read(b) if err != nil { break } if _, err := ws.Write(b[:n]); err != nil { break } } } func subProtocolHandshake(config *Config, req *http.Request) error { for _, proto := range config.Protocol { if proto == "chat" { config.Protocol = []string{proto} return nil } } return ErrBadWebSocketProtocol } func subProtoServer(ws *Conn) { for _, proto := range ws.Config().Protocol { io.WriteString(ws, proto) } } func startServer() { http.Handle("/echo", Handler(echoServer)) http.Handle("/count", Handler(countServer)) http.Handle("/ctrldata", Handler(ctrlAndDataServer)) subproto := Server{ Handshake: subProtocolHandshake, Handler: Handler(subProtoServer), } http.Handle("/subproto", subproto) server := httptest.NewServer(nil) serverAddr = server.Listener.Addr().String() log.Print("Test WebSocket server listening on ", serverAddr) } func newConfig(t *testing.T, path string) *Config { config, _ := NewConfig(fmt.Sprintf("ws://%s%s", serverAddr, path), "http://localhost") return config } func TestEcho(t *testing.T) { once.Do(startServer) // websocket.Dial() client, err := net.Dial("tcp", serverAddr) if err != nil { t.Fatal("dialing", err) } conn, err := NewClient(newConfig(t, "/echo"), client) if err != nil { t.Errorf("WebSocket handshake error: %v", err) return } msg := []byte("hello, world\n") if _, err := conn.Write(msg); err != nil { t.Errorf("Write: %v", err) } var actual_msg = make([]byte, 512) n, err := conn.Read(actual_msg) if err != nil { t.Errorf("Read: %v", err) } actual_msg = actual_msg[0:n] if !bytes.Equal(msg, actual_msg) { t.Errorf("Echo: expected %q got %q", msg, actual_msg) } conn.Close() } func TestAddr(t *testing.T) { once.Do(startServer) // websocket.Dial() client, err := net.Dial("tcp", serverAddr) if err 
!= nil { t.Fatal("dialing", err) } conn, err := NewClient(newConfig(t, "/echo"), client) if err != nil { t.Errorf("WebSocket handshake error: %v", err) return } ra := conn.RemoteAddr().String() if !strings.HasPrefix(ra, "ws://") || !strings.HasSuffix(ra, "/echo") { t.Errorf("Bad remote addr: %v", ra) } la := conn.LocalAddr().String() if !strings.HasPrefix(la, "http://") { t.Errorf("Bad local addr: %v", la) } conn.Close() } func TestCount(t *testing.T) { once.Do(startServer) // websocket.Dial() client, err := net.Dial("tcp", serverAddr) if err != nil { t.Fatal("dialing", err) } conn, err := NewClient(newConfig(t, "/count"), client) if err != nil { t.Errorf("WebSocket handshake error: %v", err) return } var count Count count.S = "hello" if err := JSON.Send(conn, count); err != nil { t.Errorf("Write: %v", err) } if err := JSON.Receive(conn, &count); err != nil { t.Errorf("Read: %v", err) } if count.N != 1 { t.Errorf("count: expected %d got %d", 1, count.N) } if count.S != "hello" { t.Errorf("count: expected %q got %q", "hello", count.S) } if err := JSON.Send(conn, count); err != nil { t.Errorf("Write: %v", err) } if err := JSON.Receive(conn, &count); err != nil { t.Errorf("Read: %v", err) } if count.N != 2 { t.Errorf("count: expected %d got %d", 2, count.N) } if count.S != "hellohello" { t.Errorf("count: expected %q got %q", "hellohello", count.S) } conn.Close() } func TestWithQuery(t *testing.T) { once.Do(startServer) client, err := net.Dial("tcp", serverAddr) if err != nil { t.Fatal("dialing", err) } config := newConfig(t, "/echo") config.Location, err = url.ParseRequestURI(fmt.Sprintf("ws://%s/echo?q=v", serverAddr)) if err != nil { t.Fatal("location url", err) } ws, err := NewClient(config, client) if err != nil { t.Errorf("WebSocket handshake: %v", err) return } ws.Close() } func testWithProtocol(t *testing.T, subproto []string) (string, error) { once.Do(startServer) client, err := net.Dial("tcp", serverAddr) if err != nil { t.Fatal("dialing", err) } config := 
newConfig(t, "/subproto") config.Protocol = subproto ws, err := NewClient(config, client) if err != nil { return "", err } msg := make([]byte, 16) n, err := ws.Read(msg) if err != nil { return "", err } ws.Close() return string(msg[:n]), nil } func TestWithProtocol(t *testing.T) { proto, err := testWithProtocol(t, []string{"chat"}) if err != nil { t.Errorf("SubProto: unexpected error: %v", err) } if proto != "chat" { t.Errorf("SubProto: expected %q, got %q", "chat", proto) } } func TestWithTwoProtocol(t *testing.T) { proto, err := testWithProtocol(t, []string{"test", "chat"}) if err != nil { t.Errorf("SubProto: unexpected error: %v", err) } if proto != "chat" { t.Errorf("SubProto: expected %q, got %q", "chat", proto) } } func TestWithBadProtocol(t *testing.T) { _, err := testWithProtocol(t, []string{"test"}) if err != ErrBadStatus { t.Errorf("SubProto: expected %v, got %v", ErrBadStatus, err) } } func TestHTTP(t *testing.T) { once.Do(startServer) // If the client did not send a handshake that matches the protocol // specification, the server MUST return an HTTP response with an // appropriate error code (such as 400 Bad Request) resp, err := http.Get(fmt.Sprintf("http://%s/echo", serverAddr)) if err != nil { t.Errorf("Get: error %#v", err) return } if resp == nil { t.Error("Get: resp is null") return } if resp.StatusCode != http.StatusBadRequest { t.Errorf("Get: expected %q got %q", http.StatusBadRequest, resp.StatusCode) } } func TestTrailingSpaces(t *testing.T) { // http://code.google.com/p/go/issues/detail?id=955 // The last runs of this create keys with trailing spaces that should not be // generated by the client. 
once.Do(startServer) config := newConfig(t, "/echo") for i := 0; i < 30; i++ { // body ws, err := DialConfig(config) if err != nil { t.Errorf("Dial #%d failed: %v", i, err) break } ws.Close() } } func TestDialConfigBadVersion(t *testing.T) { once.Do(startServer) config := newConfig(t, "/echo") config.Version = 1234 _, err := DialConfig(config) if dialerr, ok := err.(*DialError); ok { if dialerr.Err != ErrBadProtocolVersion { t.Errorf("dial expected err %q but got %q", ErrBadProtocolVersion, dialerr.Err) } } } func TestSmallBuffer(t *testing.T) { // http://code.google.com/p/go/issues/detail?id=1145 // Read should be able to handle reading a fragment of a frame. once.Do(startServer) // websocket.Dial() client, err := net.Dial("tcp", serverAddr) if err != nil { t.Fatal("dialing", err) } conn, err := NewClient(newConfig(t, "/echo"), client) if err != nil { t.Errorf("WebSocket handshake error: %v", err) return } msg := []byte("hello, world\n") if _, err := conn.Write(msg); err != nil { t.Errorf("Write: %v", err) } var small_msg = make([]byte, 8) n, err := conn.Read(small_msg) if err != nil { t.Errorf("Read: %v", err) } if !bytes.Equal(msg[:len(small_msg)], small_msg) { t.Errorf("Echo: expected %q got %q", msg[:len(small_msg)], small_msg) } var second_msg = make([]byte, len(msg)) n, err = conn.Read(second_msg) if err != nil { t.Errorf("Read: %v", err) } second_msg = second_msg[0:n] if !bytes.Equal(msg[len(small_msg):], second_msg) { t.Errorf("Echo: expected %q got %q", msg[len(small_msg):], second_msg) } conn.Close() } var parseAuthorityTests = []struct { in *url.URL out string }{ { &url.URL{ Scheme: "ws", Host: "www.google.com", }, "www.google.com:80", }, { &url.URL{ Scheme: "wss", Host: "www.google.com", }, "www.google.com:443", }, { &url.URL{ Scheme: "ws", Host: "www.google.com:80", }, "www.google.com:80", }, { &url.URL{ Scheme: "wss", Host: "www.google.com:443", }, "www.google.com:443", }, // some invalid ones for parseAuthority. 
parseAuthority doesn't // concern itself with the scheme unless it actually knows about it { &url.URL{ Scheme: "http", Host: "www.google.com", }, "www.google.com", }, { &url.URL{ Scheme: "http", Host: "www.google.com:80", }, "www.google.com:80", }, { &url.URL{ Scheme: "asdf", Host: "127.0.0.1", }, "127.0.0.1", }, { &url.URL{ Scheme: "asdf", Host: "www.google.com", }, "www.google.com", }, } func TestParseAuthority(t *testing.T) { for _, tt := range parseAuthorityTests { out := parseAuthority(tt.in) if out != tt.out { t.Errorf("got %v; want %v", out, tt.out) } } } type closerConn struct { net.Conn closed int // count of the number of times Close was called } func (c *closerConn) Close() error { c.closed++ return c.Conn.Close() } func TestClose(t *testing.T) { if runtime.GOOS == "plan9" { t.Skip("see golang.org/issue/11454") } once.Do(startServer) conn, err := net.Dial("tcp", serverAddr) if err != nil { t.Fatal("dialing", err) } cc := closerConn{Conn: conn} client, err := NewClient(newConfig(t, "/echo"), &cc) if err != nil { t.Fatalf("WebSocket handshake: %v", err) } // set the deadline to ten minutes ago, which will have expired by the time // client.Close sends the close status frame. 
conn.SetDeadline(time.Now().Add(-10 * time.Minute)) if err := client.Close(); err == nil { t.Errorf("ws.Close(): expected error, got %v", err) } if cc.closed < 1 { t.Fatalf("ws.Close(): expected underlying ws.rwc.Close to be called > 0 times, got: %v", cc.closed) } } var originTests = []struct { req *http.Request origin *url.URL }{ { req: &http.Request{ Header: http.Header{ "Origin": []string{"http://www.example.com"}, }, }, origin: &url.URL{ Scheme: "http", Host: "www.example.com", }, }, { req: &http.Request{}, }, } func TestOrigin(t *testing.T) { conf := newConfig(t, "/echo") conf.Version = ProtocolVersionHybi13 for i, tt := range originTests { origin, err := Origin(conf, tt.req) if err != nil { t.Error(err) continue } if !reflect.DeepEqual(origin, tt.origin) { t.Errorf("#%d: got origin %v; want %v", i, origin, tt.origin) continue } } } func TestCtrlAndData(t *testing.T) { once.Do(startServer) c, err := net.Dial("tcp", serverAddr) if err != nil { t.Fatal(err) } ws, err := NewClient(newConfig(t, "/ctrldata"), c) if err != nil { t.Fatal(err) } defer ws.Close() h := &testCtrlAndDataHandler{hybiFrameHandler: hybiFrameHandler{conn: ws}} ws.frameHandler = h b := make([]byte, 128) for i := 0; i < 2; i++ { data := []byte(fmt.Sprintf("#%d-DATA-FRAME-FROM-CLIENT", i)) if _, err := ws.Write(data); err != nil { t.Fatalf("#%d: %v", i, err) } var ctrl []byte if i%2 != 0 { // with or without payload ctrl = []byte(fmt.Sprintf("#%d-CONTROL-FRAME-FROM-CLIENT", i)) } if _, err := h.WritePing(ctrl); err != nil { t.Fatalf("#%d: %v", i, err) } n, err := ws.Read(b) if err != nil { t.Fatalf("#%d: %v", i, err) } if !bytes.Equal(b[:n], data) { t.Fatalf("#%d: got %v; want %v", i, b[:n], data) } } }

lxd-2.0.2/dist/src/golang.org/x/net/websocket/client.go

// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package websocket import ( "bufio" "crypto/tls" "io" "net" "net/http" "net/url" ) // DialError is an error that occurs while dialling a websocket server. type DialError struct { *Config Err error } func (e *DialError) Error() string { return "websocket.Dial " + e.Config.Location.String() + ": " + e.Err.Error() } // NewConfig creates a new WebSocket config for client connection. func NewConfig(server, origin string) (config *Config, err error) { config = new(Config) config.Version = ProtocolVersionHybi13 config.Location, err = url.ParseRequestURI(server) if err != nil { return } config.Origin, err = url.ParseRequestURI(origin) if err != nil { return } config.Header = http.Header(make(map[string][]string)) return } // NewClient creates a new WebSocket client connection over rwc. func NewClient(config *Config, rwc io.ReadWriteCloser) (ws *Conn, err error) { br := bufio.NewReader(rwc) bw := bufio.NewWriter(rwc) err = hybiClientHandshake(config, br, bw) if err != nil { return } buf := bufio.NewReadWriter(br, bw) ws = newHybiClientConn(config, buf, rwc) return } // Dial opens a new client connection to a WebSocket. func Dial(url_, protocol, origin string) (ws *Conn, err error) { config, err := NewConfig(url_, origin) if err != nil { return nil, err } if protocol != "" { config.Protocol = []string{protocol} } return DialConfig(config) } var portMap = map[string]string{ "ws": "80", "wss": "443", } func parseAuthority(location *url.URL) string { if _, ok := portMap[location.Scheme]; ok { if _, _, err := net.SplitHostPort(location.Host); err != nil { return net.JoinHostPort(location.Host, portMap[location.Scheme]) } } return location.Host } // DialConfig opens a new client connection to a WebSocket with a config. 
func DialConfig(config *Config) (ws *Conn, err error) { var client net.Conn if config.Location == nil { return nil, &DialError{config, ErrBadWebSocketLocation} } if config.Origin == nil { return nil, &DialError{config, ErrBadWebSocketOrigin} } switch config.Location.Scheme { case "ws": client, err = net.Dial("tcp", parseAuthority(config.Location)) case "wss": client, err = tls.Dial("tcp", parseAuthority(config.Location), config.TlsConfig) default: err = ErrBadScheme } if err != nil { goto Error } ws, err = NewClient(config, client) if err != nil { client.Close() goto Error } return Error: return nil, &DialError{config, err} }

lxd-2.0.2/dist/src/golang.org/x/net/websocket/server.go

// Copyright 2009 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package websocket import ( "bufio" "fmt" "io" "net/http" ) func newServerConn(rwc io.ReadWriteCloser, buf *bufio.ReadWriter, req *http.Request, config *Config, handshake func(*Config, *http.Request) error) (conn *Conn, err error) { var hs serverHandshaker = &hybiServerHandshaker{Config: config} code, err := hs.ReadHandshake(buf.Reader, req) if err == ErrBadWebSocketVersion { fmt.Fprintf(buf, "HTTP/1.1 %03d %s\r\n", code, http.StatusText(code)) fmt.Fprintf(buf, "Sec-WebSocket-Version: %s\r\n", SupportedProtocolVersion) buf.WriteString("\r\n") buf.WriteString(err.Error()) buf.Flush() return } if err != nil { fmt.Fprintf(buf, "HTTP/1.1 %03d %s\r\n", code, http.StatusText(code)) buf.WriteString("\r\n") buf.WriteString(err.Error()) buf.Flush() return } if handshake != nil { err = handshake(config, req) if err != nil { code = http.StatusForbidden fmt.Fprintf(buf, "HTTP/1.1 %03d %s\r\n", code, http.StatusText(code)) buf.WriteString("\r\n") buf.Flush() return } } err = hs.AcceptHandshake(buf.Writer) if err != nil { code = http.StatusBadRequest
fmt.Fprintf(buf, "HTTP/1.1 %03d %s\r\n", code, http.StatusText(code)) buf.WriteString("\r\n") buf.Flush() return } conn = hs.NewServerConn(buf, rwc, req) return } // Server represents a server of a WebSocket. type Server struct { // Config is a WebSocket configuration for new WebSocket connection. Config // Handshake is an optional function in WebSocket handshake. // For example, you can check, or don't check Origin header. // Another example, you can select config.Protocol. Handshake func(*Config, *http.Request) error // Handler handles a WebSocket connection. Handler } // ServeHTTP implements the http.Handler interface for a WebSocket func (s Server) ServeHTTP(w http.ResponseWriter, req *http.Request) { s.serveWebSocket(w, req) } func (s Server) serveWebSocket(w http.ResponseWriter, req *http.Request) { rwc, buf, err := w.(http.Hijacker).Hijack() if err != nil { panic("Hijack failed: " + err.Error()) } // The server should abort the WebSocket connection if it finds // the client did not send a handshake that matches with protocol // specification. defer rwc.Close() conn, err := newServerConn(rwc, buf, req, &s.Config, s.Handshake) if err != nil { return } if conn == nil { panic("unexpected nil conn") } s.Handler(conn) } // Handler is a simple interface to a WebSocket browser client. // It checks if Origin header is valid URL by default. // You might want to verify websocket.Conn.Config().Origin in the func. // If you use Server instead of Handler, you could call websocket.Origin and // check the origin in your Handshake func. So, if you want to accept // non-browser clients, which do not send an Origin header, set a // Server.Handshake that does not check the origin. 
type Handler func(*Conn) func checkOrigin(config *Config, req *http.Request) (err error) { config.Origin, err = Origin(config, req) if err == nil && config.Origin == nil { return fmt.Errorf("null origin") } return err } // ServeHTTP implements the http.Handler interface for a WebSocket func (h Handler) ServeHTTP(w http.ResponseWriter, req *http.Request) { s := Server{Handler: h, Handshake: checkOrigin} s.serveWebSocket(w, req) }

lxd-2.0.2/dist/src/golang.org/x/net/websocket/hybi_test.go

// Copyright 2011 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package websocket import ( "bufio" "bytes" "fmt" "io" "net/http" "net/url" "strings" "testing" ) // Test the getNonceAccept function with values in // http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-17 func TestSecWebSocketAccept(t *testing.T) { nonce := []byte("dGhlIHNhbXBsZSBub25jZQ==") expected := []byte("s3pPLMBiTxaQ9kYGzzhZRbK+xOo=") accept, err := getNonceAccept(nonce) if err != nil { t.Errorf("getNonceAccept: returned error %v", err) return } if !bytes.Equal(expected, accept) { t.Errorf("getNonceAccept: expected %q got %q", expected, accept) } } func TestHybiClientHandshake(t *testing.T) { type test struct { url, host string } tests := []test{ {"ws://server.example.com/chat", "server.example.com"}, {"ws://127.0.0.1/chat", "127.0.0.1"}, } if _, err := url.ParseRequestURI("http://[fe80::1%25lo0]"); err == nil { tests = append(tests, test{"ws://[fe80::1%25lo0]/chat", "[fe80::1]"}) } for _, tt := range tests { var b bytes.Buffer bw := bufio.NewWriter(&b) br := bufio.NewReader(strings.NewReader(`HTTP/1.1 101 Switching Protocols Upgrade: websocket Connection: Upgrade Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo= Sec-WebSocket-Protocol: chat `)) var err error var config Config config.Location, err =
url.ParseRequestURI(tt.url) if err != nil { t.Fatal("location url", err) } config.Origin, err = url.ParseRequestURI("http://example.com") if err != nil { t.Fatal("origin url", err) } config.Protocol = append(config.Protocol, "chat") config.Protocol = append(config.Protocol, "superchat") config.Version = ProtocolVersionHybi13 config.handshakeData = map[string]string{ "key": "dGhlIHNhbXBsZSBub25jZQ==", } if err := hybiClientHandshake(&config, br, bw); err != nil { t.Fatal("handshake", err) } req, err := http.ReadRequest(bufio.NewReader(&b)) if err != nil { t.Fatal("read request", err) } if req.Method != "GET" { t.Errorf("request method expected GET, but got %s", req.Method) } if req.URL.Path != "/chat" { t.Errorf("request path expected /chat, but got %s", req.URL.Path) } if req.Proto != "HTTP/1.1" { t.Errorf("request proto expected HTTP/1.1, but got %s", req.Proto) } if req.Host != tt.host { t.Errorf("request host expected %s, but got %s", tt.host, req.Host) } var expectedHeader = map[string]string{ "Connection": "Upgrade", "Upgrade": "websocket", "Sec-Websocket-Key": config.handshakeData["key"], "Origin": config.Origin.String(), "Sec-Websocket-Protocol": "chat, superchat", "Sec-Websocket-Version": fmt.Sprintf("%d", ProtocolVersionHybi13), } for k, v := range expectedHeader { if req.Header.Get(k) != v { t.Errorf("%s expected %s, but got %v", k, v, req.Header.Get(k)) } } } } func TestHybiClientHandshakeWithHeader(t *testing.T) { b := bytes.NewBuffer([]byte{}) bw := bufio.NewWriter(b) br := bufio.NewReader(strings.NewReader(`HTTP/1.1 101 Switching Protocols Upgrade: websocket Connection: Upgrade Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo= Sec-WebSocket-Protocol: chat `)) var err error config := new(Config) config.Location, err = url.ParseRequestURI("ws://server.example.com/chat") if err != nil { t.Fatal("location url", err) } config.Origin, err = url.ParseRequestURI("http://example.com") if err != nil { t.Fatal("origin url", err) } config.Protocol = 
append(config.Protocol, "chat") config.Protocol = append(config.Protocol, "superchat") config.Version = ProtocolVersionHybi13 config.Header = http.Header(make(map[string][]string)) config.Header.Add("User-Agent", "test") config.handshakeData = map[string]string{ "key": "dGhlIHNhbXBsZSBub25jZQ==", } err = hybiClientHandshake(config, br, bw) if err != nil { t.Errorf("handshake failed: %v", err) } req, err := http.ReadRequest(bufio.NewReader(b)) if err != nil { t.Fatalf("read request: %v", err) } if req.Method != "GET" { t.Errorf("request method expected GET, but got %q", req.Method) } if req.URL.Path != "/chat" { t.Errorf("request path expected /chat, but got %q", req.URL.Path) } if req.Proto != "HTTP/1.1" { t.Errorf("request proto expected HTTP/1.1, but got %q", req.Proto) } if req.Host != "server.example.com" { t.Errorf("request Host expected server.example.com, but got %v", req.Host) } var expectedHeader = map[string]string{ "Connection": "Upgrade", "Upgrade": "websocket", "Sec-Websocket-Key": config.handshakeData["key"], "Origin": config.Origin.String(), "Sec-Websocket-Protocol": "chat, superchat", "Sec-Websocket-Version": fmt.Sprintf("%d", ProtocolVersionHybi13), "User-Agent": "test", } for k, v := range expectedHeader { if req.Header.Get(k) != v { t.Errorf(fmt.Sprintf("%s expected %q but got %q", k, v, req.Header.Get(k))) } } } func TestHybiServerHandshake(t *testing.T) { config := new(Config) handshaker := &hybiServerHandshaker{Config: config} br := bufio.NewReader(strings.NewReader(`GET /chat HTTP/1.1 Host: server.example.com Upgrade: websocket Connection: Upgrade Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ== Origin: http://example.com Sec-WebSocket-Protocol: chat, superchat Sec-WebSocket-Version: 13 `)) req, err := http.ReadRequest(br) if err != nil { t.Fatal("request", err) } code, err := handshaker.ReadHandshake(br, req) if err != nil { t.Errorf("handshake failed: %v", err) } if code != http.StatusSwitchingProtocols { t.Errorf("status expected %q but got 
%q", http.StatusSwitchingProtocols, code) } expectedProtocols := []string{"chat", "superchat"} if fmt.Sprintf("%v", config.Protocol) != fmt.Sprintf("%v", expectedProtocols) { t.Errorf("protocol expected %q but got %q", expectedProtocols, config.Protocol) } b := bytes.NewBuffer([]byte{}) bw := bufio.NewWriter(b) config.Protocol = config.Protocol[:1] err = handshaker.AcceptHandshake(bw) if err != nil { t.Errorf("handshake response failed: %v", err) } expectedResponse := strings.Join([]string{ "HTTP/1.1 101 Switching Protocols", "Upgrade: websocket", "Connection: Upgrade", "Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=", "Sec-WebSocket-Protocol: chat", "", ""}, "\r\n") if b.String() != expectedResponse { t.Errorf("handshake expected %q but got %q", expectedResponse, b.String()) } } func TestHybiServerHandshakeNoSubProtocol(t *testing.T) { config := new(Config) handshaker := &hybiServerHandshaker{Config: config} br := bufio.NewReader(strings.NewReader(`GET /chat HTTP/1.1 Host: server.example.com Upgrade: websocket Connection: Upgrade Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ== Origin: http://example.com Sec-WebSocket-Version: 13 `)) req, err := http.ReadRequest(br) if err != nil { t.Fatal("request", err) } code, err := handshaker.ReadHandshake(br, req) if err != nil { t.Errorf("handshake failed: %v", err) } if code != http.StatusSwitchingProtocols { t.Errorf("status expected %q but got %q", http.StatusSwitchingProtocols, code) } if len(config.Protocol) != 0 { t.Errorf("len(config.Protocol) expected 0, but got %q", len(config.Protocol)) } b := bytes.NewBuffer([]byte{}) bw := bufio.NewWriter(b) err = handshaker.AcceptHandshake(bw) if err != nil { t.Errorf("handshake response failed: %v", err) } expectedResponse := strings.Join([]string{ "HTTP/1.1 101 Switching Protocols", "Upgrade: websocket", "Connection: Upgrade", "Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=", "", ""}, "\r\n") if b.String() != expectedResponse { t.Errorf("handshake expected %q but got %q", 
expectedResponse, b.String()) } } func TestHybiServerHandshakeHybiBadVersion(t *testing.T) { config := new(Config) handshaker := &hybiServerHandshaker{Config: config} br := bufio.NewReader(strings.NewReader(`GET /chat HTTP/1.1 Host: server.example.com Upgrade: websocket Connection: Upgrade Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ== Sec-WebSocket-Origin: http://example.com Sec-WebSocket-Protocol: chat, superchat Sec-WebSocket-Version: 9 `)) req, err := http.ReadRequest(br) if err != nil { t.Fatal("request", err) } code, err := handshaker.ReadHandshake(br, req) if err != ErrBadWebSocketVersion { t.Errorf("handshake expected err %q but got %q", ErrBadWebSocketVersion, err) } if code != http.StatusBadRequest { t.Errorf("status expected %q but got %q", http.StatusBadRequest, code) } } func testHybiFrame(t *testing.T, testHeader, testPayload, testMaskedPayload []byte, frameHeader *hybiFrameHeader) { b := bytes.NewBuffer([]byte{}) frameWriterFactory := &hybiFrameWriterFactory{bufio.NewWriter(b), false} w, _ := frameWriterFactory.NewFrameWriter(TextFrame) w.(*hybiFrameWriter).header = frameHeader _, err := w.Write(testPayload) w.Close() if err != nil { t.Errorf("Write error %q", err) } var expectedFrame []byte expectedFrame = append(expectedFrame, testHeader...) expectedFrame = append(expectedFrame, testMaskedPayload...) 
if !bytes.Equal(expectedFrame, b.Bytes()) { t.Errorf("frame expected %q got %q", expectedFrame, b.Bytes()) } frameReaderFactory := &hybiFrameReaderFactory{bufio.NewReader(b)} r, err := frameReaderFactory.NewFrameReader() if err != nil { t.Errorf("Read error %q", err) } if header := r.HeaderReader(); header == nil { t.Errorf("no header") } else { actualHeader := make([]byte, r.Len()) n, err := header.Read(actualHeader) if err != nil { t.Errorf("Read header error %q", err) } else { if n < len(testHeader) { t.Errorf("header too short %q got %q", testHeader, actualHeader[:n]) } if !bytes.Equal(testHeader, actualHeader[:n]) { t.Errorf("header expected %q got %q", testHeader, actualHeader[:n]) } } } if trailer := r.TrailerReader(); trailer != nil { t.Errorf("unexpected trailer %q", trailer) } frame := r.(*hybiFrameReader) if frameHeader.Fin != frame.header.Fin || frameHeader.OpCode != frame.header.OpCode || len(testPayload) != int(frame.header.Length) { t.Errorf("mismatch %v (%d) vs %v", frameHeader, len(testPayload), frame) } payload := make([]byte, len(testPayload)) _, err = r.Read(payload) if err != nil && err != io.EOF { t.Errorf("read %v", err) } if !bytes.Equal(testPayload, payload) { t.Errorf("payload %q vs %q", testPayload, payload) } } func TestHybiShortTextFrame(t *testing.T) { frameHeader := &hybiFrameHeader{Fin: true, OpCode: TextFrame} payload := []byte("hello") testHybiFrame(t, []byte{0x81, 0x05}, payload, payload, frameHeader) payload = make([]byte, 125) testHybiFrame(t, []byte{0x81, 125}, payload, payload, frameHeader) } func TestHybiShortMaskedTextFrame(t *testing.T) { frameHeader := &hybiFrameHeader{Fin: true, OpCode: TextFrame, MaskingKey: []byte{0xcc, 0x55, 0x80, 0x20}} payload := []byte("hello") maskedPayload := []byte{0xa4, 0x30, 0xec, 0x4c, 0xa3} header := []byte{0x81, 0x85} header = append(header, frameHeader.MaskingKey...) 
testHybiFrame(t, header, payload, maskedPayload, frameHeader) } func TestHybiShortBinaryFrame(t *testing.T) { frameHeader := &hybiFrameHeader{Fin: true, OpCode: BinaryFrame} payload := []byte("hello") testHybiFrame(t, []byte{0x82, 0x05}, payload, payload, frameHeader) payload = make([]byte, 125) testHybiFrame(t, []byte{0x82, 125}, payload, payload, frameHeader) } func TestHybiControlFrame(t *testing.T) { payload := []byte("hello") frameHeader := &hybiFrameHeader{Fin: true, OpCode: PingFrame} testHybiFrame(t, []byte{0x89, 0x05}, payload, payload, frameHeader) frameHeader = &hybiFrameHeader{Fin: true, OpCode: PingFrame} testHybiFrame(t, []byte{0x89, 0x00}, nil, nil, frameHeader) frameHeader = &hybiFrameHeader{Fin: true, OpCode: PongFrame} testHybiFrame(t, []byte{0x8A, 0x05}, payload, payload, frameHeader) frameHeader = &hybiFrameHeader{Fin: true, OpCode: PongFrame} testHybiFrame(t, []byte{0x8A, 0x00}, nil, nil, frameHeader) frameHeader = &hybiFrameHeader{Fin: true, OpCode: CloseFrame} payload = []byte{0x03, 0xe8} // 1000 testHybiFrame(t, []byte{0x88, 0x02}, payload, payload, frameHeader) } func TestHybiLongFrame(t *testing.T) { frameHeader := &hybiFrameHeader{Fin: true, OpCode: TextFrame} payload := make([]byte, 126) testHybiFrame(t, []byte{0x81, 126, 0x00, 126}, payload, payload, frameHeader) payload = make([]byte, 65535) testHybiFrame(t, []byte{0x81, 126, 0xff, 0xff}, payload, payload, frameHeader) payload = make([]byte, 65536) testHybiFrame(t, []byte{0x81, 127, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00}, payload, payload, frameHeader) } func TestHybiClientRead(t *testing.T) { wireData := []byte{0x81, 0x05, 'h', 'e', 'l', 'l', 'o', 0x89, 0x05, 'h', 'e', 'l', 'l', 'o', // ping 0x81, 0x05, 'w', 'o', 'r', 'l', 'd'} br := bufio.NewReader(bytes.NewBuffer(wireData)) bw := bufio.NewWriter(bytes.NewBuffer([]byte{})) conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, nil) msg := make([]byte, 512) n, err := conn.Read(msg) if err != nil { 
t.Errorf("read 1st frame, error %q", err) } if n != 5 { t.Errorf("read 1st frame, expect 5, got %d", n) } if !bytes.Equal(wireData[2:7], msg[:n]) { t.Errorf("read 1st frame %v, got %v", wireData[2:7], msg[:n]) } n, err = conn.Read(msg) if err != nil { t.Errorf("read 2nd frame, error %q", err) } if n != 5 { t.Errorf("read 2nd frame, expect 5, got %d", n) } if !bytes.Equal(wireData[16:21], msg[:n]) { t.Errorf("read 2nd frame %v, got %v", wireData[16:21], msg[:n]) } n, err = conn.Read(msg) if err == nil { t.Errorf("read not EOF") } if n != 0 { t.Errorf("expect read 0, got %d", n) } } func TestHybiShortRead(t *testing.T) { wireData := []byte{0x81, 0x05, 'h', 'e', 'l', 'l', 'o', 0x89, 0x05, 'h', 'e', 'l', 'l', 'o', // ping 0x81, 0x05, 'w', 'o', 'r', 'l', 'd'} br := bufio.NewReader(bytes.NewBuffer(wireData)) bw := bufio.NewWriter(bytes.NewBuffer([]byte{})) conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, nil) step := 0 pos := 0 expectedPos := []int{2, 5, 16, 19} expectedLen := []int{3, 2, 3, 2} for { msg := make([]byte, 3) n, err := conn.Read(msg) if step >= len(expectedPos) { if err == nil { t.Errorf("read not EOF") } if n != 0 { t.Errorf("expect read 0, got %d", n) } return } pos = expectedPos[step] endPos := pos + expectedLen[step] if err != nil { t.Errorf("read from %d, got error %q", pos, err) return } if n != endPos-pos { t.Errorf("read from %d, expect %d, got %d", pos, endPos-pos, n) } if !bytes.Equal(wireData[pos:endPos], msg[:n]) { t.Errorf("read from %d, frame %v, got %v", pos, wireData[pos:endPos], msg[:n]) } step++ } } func TestHybiServerRead(t *testing.T) { wireData := []byte{0x81, 0x85, 0xcc, 0x55, 0x80, 0x20, 0xa4, 0x30, 0xec, 0x4c, 0xa3, // hello 0x89, 0x85, 0xcc, 0x55, 0x80, 0x20, 0xa4, 0x30, 0xec, 0x4c, 0xa3, // ping: hello 0x81, 0x85, 0xed, 0x83, 0xb4, 0x24, 0x9a, 0xec, 0xc6, 0x48, 0x89, // world } br := bufio.NewReader(bytes.NewBuffer(wireData)) bw := bufio.NewWriter(bytes.NewBuffer([]byte{})) conn := newHybiConn(newConfig(t, 
"/"), bufio.NewReadWriter(br, bw), nil, new(http.Request)) expected := [][]byte{[]byte("hello"), []byte("world")} msg := make([]byte, 512) n, err := conn.Read(msg) if err != nil { t.Errorf("read 1st frame, error %q", err) } if n != 5 { t.Errorf("read 1st frame, expect 5, got %d", n) } if !bytes.Equal(expected[0], msg[:n]) { t.Errorf("read 1st frame %q, got %q", expected[0], msg[:n]) } n, err = conn.Read(msg) if err != nil { t.Errorf("read 2nd frame, error %q", err) } if n != 5 { t.Errorf("read 2nd frame, expect 5, got %d", n) } if !bytes.Equal(expected[1], msg[:n]) { t.Errorf("read 2nd frame %q, got %q", expected[1], msg[:n]) } n, err = conn.Read(msg) if err == nil { t.Errorf("read not EOF") } if n != 0 { t.Errorf("expect read 0, got %d", n) } } func TestHybiServerReadWithoutMasking(t *testing.T) { wireData := []byte{0x81, 0x05, 'h', 'e', 'l', 'l', 'o'} br := bufio.NewReader(bytes.NewBuffer(wireData)) bw := bufio.NewWriter(bytes.NewBuffer([]byte{})) conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, new(http.Request)) // server MUST close the connection upon receiving a non-masked frame. msg := make([]byte, 512) _, err := conn.Read(msg) if err != io.EOF { t.Errorf("read 1st frame, expect %q, but got %q", io.EOF, err) } } func TestHybiClientReadWithMasking(t *testing.T) { wireData := []byte{0x81, 0x85, 0xcc, 0x55, 0x80, 0x20, 0xa4, 0x30, 0xec, 0x4c, 0xa3, // hello } br := bufio.NewReader(bytes.NewBuffer(wireData)) bw := bufio.NewWriter(bytes.NewBuffer([]byte{})) conn := newHybiConn(newConfig(t, "/"), bufio.NewReadWriter(br, bw), nil, nil) // client MUST close the connection upon receiving a masked frame. 
msg := make([]byte, 512) _, err := conn.Read(msg) if err != io.EOF { t.Errorf("read 1st frame, expect %q, but got %q", io.EOF, err) } } // Test that hybiServerHandshaker supports the Firefox implementation and // checks that the Connection request header includes (but need not be // equal to) "upgrade" func TestHybiServerFirefoxHandshake(t *testing.T) { config := new(Config) handshaker := &hybiServerHandshaker{Config: config} br := bufio.NewReader(strings.NewReader(`GET /chat HTTP/1.1 Host: server.example.com Upgrade: websocket Connection: keep-alive, upgrade Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ== Origin: http://example.com Sec-WebSocket-Protocol: chat, superchat Sec-WebSocket-Version: 13 `)) req, err := http.ReadRequest(br) if err != nil { t.Fatal("request", err) } code, err := handshaker.ReadHandshake(br, req) if err != nil { t.Errorf("handshake failed: %v", err) } if code != http.StatusSwitchingProtocols { t.Errorf("status expected %q but got %q", http.StatusSwitchingProtocols, code) } b := bytes.NewBuffer([]byte{}) bw := bufio.NewWriter(b) config.Protocol = []string{"chat"} err = handshaker.AcceptHandshake(bw) if err != nil { t.Errorf("handshake response failed: %v", err) } expectedResponse := strings.Join([]string{ "HTTP/1.1 101 Switching Protocols", "Upgrade: websocket", "Connection: Upgrade", "Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=", "Sec-WebSocket-Protocol: chat", "", ""}, "\r\n") if b.String() != expectedResponse { t.Errorf("handshake expected %q but got %q", expectedResponse, b.String()) } }

lxd-2.0.2/dist/src/golang.org/x/net/websocket/websocket.go

// Copyright 2009 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // Package websocket implements a client and server for the WebSocket protocol // as specified in RFC 6455.
package websocket // import "golang.org/x/net/websocket" import ( "bufio" "crypto/tls" "encoding/json" "errors" "io" "io/ioutil" "net" "net/http" "net/url" "sync" "time" ) const ( ProtocolVersionHybi13 = 13 ProtocolVersionHybi = ProtocolVersionHybi13 SupportedProtocolVersion = "13" ContinuationFrame = 0 TextFrame = 1 BinaryFrame = 2 CloseFrame = 8 PingFrame = 9 PongFrame = 10 UnknownFrame = 255 ) // ProtocolError represents WebSocket protocol errors. type ProtocolError struct { ErrorString string } func (err *ProtocolError) Error() string { return err.ErrorString } var ( ErrBadProtocolVersion = &ProtocolError{"bad protocol version"} ErrBadScheme = &ProtocolError{"bad scheme"} ErrBadStatus = &ProtocolError{"bad status"} ErrBadUpgrade = &ProtocolError{"missing or bad upgrade"} ErrBadWebSocketOrigin = &ProtocolError{"missing or bad WebSocket-Origin"} ErrBadWebSocketLocation = &ProtocolError{"missing or bad WebSocket-Location"} ErrBadWebSocketProtocol = &ProtocolError{"missing or bad WebSocket-Protocol"} ErrBadWebSocketVersion = &ProtocolError{"missing or bad WebSocket Version"} ErrChallengeResponse = &ProtocolError{"mismatch challenge/response"} ErrBadFrame = &ProtocolError{"bad frame"} ErrBadFrameBoundary = &ProtocolError{"not on frame boundary"} ErrNotWebSocket = &ProtocolError{"not websocket protocol"} ErrBadRequestMethod = &ProtocolError{"bad method"} ErrNotSupported = &ProtocolError{"not supported"} ) // Addr is an implementation of net.Addr for WebSocket. type Addr struct { *url.URL } // Network returns the network type for a WebSocket, "websocket". func (addr *Addr) Network() string { return "websocket" } // Config is a WebSocket configuration type Config struct { // A WebSocket server address. Location *url.URL // A Websocket client origin. Origin *url.URL // WebSocket subprotocols. Protocol []string // WebSocket protocol version. Version int // TLS config for secure WebSocket (wss). 
TlsConfig *tls.Config // Additional header fields to be sent in WebSocket opening handshake. Header http.Header handshakeData map[string]string } // serverHandshaker is an interface to handle WebSocket server side handshake. type serverHandshaker interface { // ReadHandshake reads handshake request message from client. // Returns http response code and error if any. ReadHandshake(buf *bufio.Reader, req *http.Request) (code int, err error) // AcceptHandshake accepts the client handshake request and sends // handshake response back to client. AcceptHandshake(buf *bufio.Writer) (err error) // NewServerConn creates a new WebSocket connection. NewServerConn(buf *bufio.ReadWriter, rwc io.ReadWriteCloser, request *http.Request) (conn *Conn) } // frameReader is an interface to read a WebSocket frame. type frameReader interface { // Reader is to read payload of the frame. io.Reader // PayloadType returns payload type. PayloadType() byte // HeaderReader returns a reader to read header of the frame. HeaderReader() io.Reader // TrailerReader returns a reader to read trailer of the frame. // If it returns nil, there is no trailer in the frame. TrailerReader() io.Reader // Len returns total length of the frame, including header and trailer. Len() int } // frameReaderFactory is an interface to creates new frame reader. type frameReaderFactory interface { NewFrameReader() (r frameReader, err error) } // frameWriter is an interface to write a WebSocket frame. type frameWriter interface { // Writer is to write payload of the frame. io.WriteCloser } // frameWriterFactory is an interface to create new frame writer. type frameWriterFactory interface { NewFrameWriter(payloadType byte) (w frameWriter, err error) } type frameHandler interface { HandleFrame(frame frameReader) (r frameReader, err error) WriteClose(status int) (err error) } // Conn represents a WebSocket connection. // // Multiple goroutines may invoke methods on a Conn simultaneously. 
type Conn struct { config *Config request *http.Request buf *bufio.ReadWriter rwc io.ReadWriteCloser rio sync.Mutex frameReaderFactory frameReader wio sync.Mutex frameWriterFactory frameHandler PayloadType byte defaultCloseStatus int } // Read implements the io.Reader interface: // it reads data of a frame from the WebSocket connection. // if msg is not large enough for the frame data, it fills the msg and next Read // will read the rest of the frame data. // it reads Text frame or Binary frame. func (ws *Conn) Read(msg []byte) (n int, err error) { ws.rio.Lock() defer ws.rio.Unlock() again: if ws.frameReader == nil { frame, err := ws.frameReaderFactory.NewFrameReader() if err != nil { return 0, err } ws.frameReader, err = ws.frameHandler.HandleFrame(frame) if err != nil { return 0, err } if ws.frameReader == nil { goto again } } n, err = ws.frameReader.Read(msg) if err == io.EOF { if trailer := ws.frameReader.TrailerReader(); trailer != nil { io.Copy(ioutil.Discard, trailer) } ws.frameReader = nil goto again } return n, err } // Write implements the io.Writer interface: // it writes data as a frame to the WebSocket connection. func (ws *Conn) Write(msg []byte) (n int, err error) { ws.wio.Lock() defer ws.wio.Unlock() w, err := ws.frameWriterFactory.NewFrameWriter(ws.PayloadType) if err != nil { return 0, err } n, err = w.Write(msg) w.Close() return n, err } // Close implements the io.Closer interface. func (ws *Conn) Close() error { err := ws.frameHandler.WriteClose(ws.defaultCloseStatus) err1 := ws.rwc.Close() if err != nil { return err } return err1 } func (ws *Conn) IsClientConn() bool { return ws.request == nil } func (ws *Conn) IsServerConn() bool { return ws.request != nil } // LocalAddr returns the WebSocket Origin for the connection for client, or // the WebSocket location for server. 
func (ws *Conn) LocalAddr() net.Addr { if ws.IsClientConn() { return &Addr{ws.config.Origin} } return &Addr{ws.config.Location} } // RemoteAddr returns the WebSocket location for the connection for client, or // the Websocket Origin for server. func (ws *Conn) RemoteAddr() net.Addr { if ws.IsClientConn() { return &Addr{ws.config.Location} } return &Addr{ws.config.Origin} } var errSetDeadline = errors.New("websocket: cannot set deadline: not using a net.Conn") // SetDeadline sets the connection's network read & write deadlines. func (ws *Conn) SetDeadline(t time.Time) error { if conn, ok := ws.rwc.(net.Conn); ok { return conn.SetDeadline(t) } return errSetDeadline } // SetReadDeadline sets the connection's network read deadline. func (ws *Conn) SetReadDeadline(t time.Time) error { if conn, ok := ws.rwc.(net.Conn); ok { return conn.SetReadDeadline(t) } return errSetDeadline } // SetWriteDeadline sets the connection's network write deadline. func (ws *Conn) SetWriteDeadline(t time.Time) error { if conn, ok := ws.rwc.(net.Conn); ok { return conn.SetWriteDeadline(t) } return errSetDeadline } // Config returns the WebSocket config. func (ws *Conn) Config() *Config { return ws.config } // Request returns the http request upgraded to the WebSocket. // It is nil for client side. func (ws *Conn) Request() *http.Request { return ws.request } // Codec represents a symmetric pair of functions that implement a codec. type Codec struct { Marshal func(v interface{}) (data []byte, payloadType byte, err error) Unmarshal func(data []byte, payloadType byte, v interface{}) (err error) } // Send sends v marshaled by cd.Marshal as single frame to ws. 
func (cd Codec) Send(ws *Conn, v interface{}) (err error) { data, payloadType, err := cd.Marshal(v) if err != nil { return err } ws.wio.Lock() defer ws.wio.Unlock() w, err := ws.frameWriterFactory.NewFrameWriter(payloadType) if err != nil { return err } _, err = w.Write(data) w.Close() return err } // Receive receives single frame from ws, unmarshaled by cd.Unmarshal and stores in v. func (cd Codec) Receive(ws *Conn, v interface{}) (err error) { ws.rio.Lock() defer ws.rio.Unlock() if ws.frameReader != nil { _, err = io.Copy(ioutil.Discard, ws.frameReader) if err != nil { return err } ws.frameReader = nil } again: frame, err := ws.frameReaderFactory.NewFrameReader() if err != nil { return err } frame, err = ws.frameHandler.HandleFrame(frame) if err != nil { return err } if frame == nil { goto again } payloadType := frame.PayloadType() data, err := ioutil.ReadAll(frame) if err != nil { return err } return cd.Unmarshal(data, payloadType, v) } func marshal(v interface{}) (msg []byte, payloadType byte, err error) { switch data := v.(type) { case string: return []byte(data), TextFrame, nil case []byte: return data, BinaryFrame, nil } return nil, UnknownFrame, ErrNotSupported } func unmarshal(msg []byte, payloadType byte, v interface{}) (err error) { switch data := v.(type) { case *string: *data = string(msg) return nil case *[]byte: *data = msg return nil } return ErrNotSupported } /* Message is a codec to send/receive text/binary data in a frame on WebSocket connection. To send/receive text frame, use string type. To send/receive binary frame, use []byte type. 
Trivial usage: import "websocket" // receive text frame var message string websocket.Message.Receive(ws, &message) // send text frame message = "hello" websocket.Message.Send(ws, message) // receive binary frame var data []byte websocket.Message.Receive(ws, &data) // send binary frame data = []byte{0, 1, 2} websocket.Message.Send(ws, data) */ var Message = Codec{marshal, unmarshal} func jsonMarshal(v interface{}) (msg []byte, payloadType byte, err error) { msg, err = json.Marshal(v) return msg, TextFrame, err } func jsonUnmarshal(msg []byte, payloadType byte, v interface{}) (err error) { return json.Unmarshal(msg, v) } /* JSON is a codec to send/receive JSON data in a frame from a WebSocket connection. Trivial usage: import "websocket" type T struct { Msg string Count int } // receive JSON type T var data T websocket.JSON.Receive(ws, &data) // send JSON type T websocket.JSON.Send(ws, data) */ var JSON = Codec{jsonMarshal, jsonUnmarshal} lxd-2.0.2/dist/src/golang.org/x/net/websocket/hybi.go0000644061062106075000000003720412721405224025011 0ustar00stgraberdomain admins00000000000000// Copyright 2011 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package websocket // This file implements a protocol of hybi draft. 
// http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-17 import ( "bufio" "bytes" "crypto/rand" "crypto/sha1" "encoding/base64" "encoding/binary" "fmt" "io" "io/ioutil" "net/http" "net/url" "strings" ) const ( websocketGUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" closeStatusNormal = 1000 closeStatusGoingAway = 1001 closeStatusProtocolError = 1002 closeStatusUnsupportedData = 1003 closeStatusFrameTooLarge = 1004 closeStatusNoStatusRcvd = 1005 closeStatusAbnormalClosure = 1006 closeStatusBadMessageData = 1007 closeStatusPolicyViolation = 1008 closeStatusTooBigData = 1009 closeStatusExtensionMismatch = 1010 maxControlFramePayloadLength = 125 ) var ( ErrBadMaskingKey = &ProtocolError{"bad masking key"} ErrBadPongMessage = &ProtocolError{"bad pong message"} ErrBadClosingStatus = &ProtocolError{"bad closing status"} ErrUnsupportedExtensions = &ProtocolError{"unsupported extensions"} ErrNotImplemented = &ProtocolError{"not implemented"} handshakeHeader = map[string]bool{ "Host": true, "Upgrade": true, "Connection": true, "Sec-Websocket-Key": true, "Sec-Websocket-Origin": true, "Sec-Websocket-Version": true, "Sec-Websocket-Protocol": true, "Sec-Websocket-Accept": true, } ) // A hybiFrameHeader is a frame header as defined in hybi draft. type hybiFrameHeader struct { Fin bool Rsv [3]bool OpCode byte Length int64 MaskingKey []byte data *bytes.Buffer } // A hybiFrameReader is a reader for hybi frame. 
type hybiFrameReader struct { reader io.Reader header hybiFrameHeader pos int64 length int } func (frame *hybiFrameReader) Read(msg []byte) (n int, err error) { n, err = frame.reader.Read(msg) if err != nil { return 0, err } if frame.header.MaskingKey != nil { for i := 0; i < n; i++ { msg[i] = msg[i] ^ frame.header.MaskingKey[frame.pos%4] frame.pos++ } } return n, err } func (frame *hybiFrameReader) PayloadType() byte { return frame.header.OpCode } func (frame *hybiFrameReader) HeaderReader() io.Reader { if frame.header.data == nil { return nil } if frame.header.data.Len() == 0 { return nil } return frame.header.data } func (frame *hybiFrameReader) TrailerReader() io.Reader { return nil } func (frame *hybiFrameReader) Len() (n int) { return frame.length } // A hybiFrameReaderFactory creates new frame reader based on its frame type. type hybiFrameReaderFactory struct { *bufio.Reader } // NewFrameReader reads a frame header from the connection, and creates new reader for the frame. // See Section 5.2 Base Framing protocol for detail. // http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-17#section-5.2 func (buf hybiFrameReaderFactory) NewFrameReader() (frame frameReader, err error) { hybiFrame := new(hybiFrameReader) frame = hybiFrame var header []byte var b byte // First byte. FIN/RSV1/RSV2/RSV3/OpCode(4bits) b, err = buf.ReadByte() if err != nil { return } header = append(header, b) hybiFrame.header.Fin = ((header[0] >> 7) & 1) != 0 for i := 0; i < 3; i++ { j := uint(6 - i) hybiFrame.header.Rsv[i] = ((header[0] >> j) & 1) != 0 } hybiFrame.header.OpCode = header[0] & 0x0f // Second byte. Mask/Payload len(7bits) b, err = buf.ReadByte() if err != nil { return } header = append(header, b) mask := (b & 0x80) != 0 b &= 0x7f lengthFields := 0 switch { case b <= 125: // Payload length 7bits. 
hybiFrame.header.Length = int64(b) case b == 126: // Payload length 7+16bits lengthFields = 2 case b == 127: // Payload length 7+64bits lengthFields = 8 } for i := 0; i < lengthFields; i++ { b, err = buf.ReadByte() if err != nil { return } if lengthFields == 8 && i == 0 { // MSB must be zero when 7+64 bits b &= 0x7f } header = append(header, b) hybiFrame.header.Length = hybiFrame.header.Length*256 + int64(b) } if mask { // Masking key. 4 bytes. for i := 0; i < 4; i++ { b, err = buf.ReadByte() if err != nil { return } header = append(header, b) hybiFrame.header.MaskingKey = append(hybiFrame.header.MaskingKey, b) } } hybiFrame.reader = io.LimitReader(buf.Reader, hybiFrame.header.Length) hybiFrame.header.data = bytes.NewBuffer(header) hybiFrame.length = len(header) + int(hybiFrame.header.Length) return } // A HybiFrameWriter is a writer for hybi frame. type hybiFrameWriter struct { writer *bufio.Writer header *hybiFrameHeader } func (frame *hybiFrameWriter) Write(msg []byte) (n int, err error) { var header []byte var b byte if frame.header.Fin { b |= 0x80 } for i := 0; i < 3; i++ { if frame.header.Rsv[i] { j := uint(6 - i) b |= 1 << j } } b |= frame.header.OpCode header = append(header, b) if frame.header.MaskingKey != nil { b = 0x80 } else { b = 0 } lengthFields := 0 length := len(msg) switch { case length <= 125: b |= byte(length) case length < 65536: b |= 126 lengthFields = 2 default: b |= 127 lengthFields = 8 } header = append(header, b) for i := 0; i < lengthFields; i++ { j := uint((lengthFields - i - 1) * 8) b = byte((length >> j) & 0xff) header = append(header, b) } if frame.header.MaskingKey != nil { if len(frame.header.MaskingKey) != 4 { return 0, ErrBadMaskingKey } header = append(header, frame.header.MaskingKey...) 
frame.writer.Write(header) data := make([]byte, length) for i := range data { data[i] = msg[i] ^ frame.header.MaskingKey[i%4] } frame.writer.Write(data) err = frame.writer.Flush() return length, err } frame.writer.Write(header) frame.writer.Write(msg) err = frame.writer.Flush() return length, err } func (frame *hybiFrameWriter) Close() error { return nil } type hybiFrameWriterFactory struct { *bufio.Writer needMaskingKey bool } func (buf hybiFrameWriterFactory) NewFrameWriter(payloadType byte) (frame frameWriter, err error) { frameHeader := &hybiFrameHeader{Fin: true, OpCode: payloadType} if buf.needMaskingKey { frameHeader.MaskingKey, err = generateMaskingKey() if err != nil { return nil, err } } return &hybiFrameWriter{writer: buf.Writer, header: frameHeader}, nil } type hybiFrameHandler struct { conn *Conn payloadType byte } func (handler *hybiFrameHandler) HandleFrame(frame frameReader) (frameReader, error) { if handler.conn.IsServerConn() { // The client MUST mask all frames sent to the server. if frame.(*hybiFrameReader).header.MaskingKey == nil { handler.WriteClose(closeStatusProtocolError) return nil, io.EOF } } else { // The server MUST NOT mask any frames sent to the client.
if frame.(*hybiFrameReader).header.MaskingKey != nil { handler.WriteClose(closeStatusProtocolError) return nil, io.EOF } } if header := frame.HeaderReader(); header != nil { io.Copy(ioutil.Discard, header) } switch frame.PayloadType() { case ContinuationFrame: frame.(*hybiFrameReader).header.OpCode = handler.payloadType case TextFrame, BinaryFrame: handler.payloadType = frame.PayloadType() case CloseFrame: return nil, io.EOF case PingFrame, PongFrame: b := make([]byte, maxControlFramePayloadLength) n, err := io.ReadFull(frame, b) if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF { return nil, err } io.Copy(ioutil.Discard, frame) if frame.PayloadType() == PingFrame { if _, err := handler.WritePong(b[:n]); err != nil { return nil, err } } return nil, nil } return frame, nil } func (handler *hybiFrameHandler) WriteClose(status int) (err error) { handler.conn.wio.Lock() defer handler.conn.wio.Unlock() w, err := handler.conn.frameWriterFactory.NewFrameWriter(CloseFrame) if err != nil { return err } msg := make([]byte, 2) binary.BigEndian.PutUint16(msg, uint16(status)) _, err = w.Write(msg) w.Close() return err } func (handler *hybiFrameHandler) WritePong(msg []byte) (n int, err error) { handler.conn.wio.Lock() defer handler.conn.wio.Unlock() w, err := handler.conn.frameWriterFactory.NewFrameWriter(PongFrame) if err != nil { return 0, err } n, err = w.Write(msg) w.Close() return n, err } // newHybiConn creates a new WebSocket connection speaking hybi draft protocol. 
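The masking rule enforced by `HandleFrame` above (and applied byte-by-byte in `hybiFrameReader.Read` and `hybiFrameWriter.Write`) can be exercised in isolation. `maskBytes` below is an illustrative helper, not part of this package; the key and payload come from the masking example in RFC 6455, section 5.7.

```go
package main

import "fmt"

// maskBytes applies the hybi masking rule used by hybiFrameReader.Read and
// hybiFrameWriter.Write: each payload byte is XORed with one byte of the
// 4-byte masking key, cycling through the key. Since XOR is its own
// inverse, the same call both masks and unmasks.
func maskBytes(key [4]byte, b []byte) {
	for i := range b {
		b[i] ^= key[i%4]
	}
}

func main() {
	// Key and payload from the single-frame masked text example in
	// RFC 6455, section 5.7.
	key := [4]byte{0x37, 0xfa, 0x21, 0x3d}
	payload := []byte("Hello")
	maskBytes(key, payload)
	fmt.Printf("masked:   %x\n", payload) // 7f9f4d5158
	maskBytes(key, payload)
	fmt.Printf("unmasked: %s\n", payload) // Hello
}
```

Applying the same key a second time restores the original payload, which is why the reader and writer can share one XOR loop.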
func newHybiConn(config *Config, buf *bufio.ReadWriter, rwc io.ReadWriteCloser, request *http.Request) *Conn { if buf == nil { br := bufio.NewReader(rwc) bw := bufio.NewWriter(rwc) buf = bufio.NewReadWriter(br, bw) } ws := &Conn{config: config, request: request, buf: buf, rwc: rwc, frameReaderFactory: hybiFrameReaderFactory{buf.Reader}, frameWriterFactory: hybiFrameWriterFactory{ buf.Writer, request == nil}, PayloadType: TextFrame, defaultCloseStatus: closeStatusNormal} ws.frameHandler = &hybiFrameHandler{conn: ws} return ws } // generateMaskingKey generates a masking key for a frame. func generateMaskingKey() (maskingKey []byte, err error) { maskingKey = make([]byte, 4) if _, err = io.ReadFull(rand.Reader, maskingKey); err != nil { return } return } // generateNonce generates a nonce consisting of a randomly selected 16-byte // value that has been base64-encoded. func generateNonce() (nonce []byte) { key := make([]byte, 16) if _, err := io.ReadFull(rand.Reader, key); err != nil { panic(err) } nonce = make([]byte, 24) base64.StdEncoding.Encode(nonce, key) return } // removeZone removes the IPv6 zone identifier from host. // E.g., "[fe80::1%en0]:8080" to "[fe80::1]:8080" func removeZone(host string) string { if !strings.HasPrefix(host, "[") { return host } i := strings.LastIndex(host, "]") if i < 0 { return host } j := strings.LastIndex(host[:i], "%") if j < 0 { return host } return host[:j] + host[i:] } // getNonceAccept computes the base64-encoded SHA-1 of the concatenation of // the nonce ("Sec-WebSocket-Key" value) with the websocket GUID string.
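The zone-stripping behavior of removeZone above can be demonstrated with a self-contained copy. `zoneStrip` below re-implements the same logic under a different name purely so the example compiles on its own.

```go
package main

import (
	"fmt"
	"strings"
)

// zoneStrip is an illustrative copy of removeZone: per RFC 6874, a client
// must strip an IPv6 zone identifier (e.g. "%en0") from the host before
// sending it on the wire. Hosts without a bracketed IPv6 literal, or
// without a zone, pass through unchanged.
func zoneStrip(host string) string {
	if !strings.HasPrefix(host, "[") {
		return host
	}
	i := strings.LastIndex(host, "]")
	if i < 0 {
		return host
	}
	j := strings.LastIndex(host[:i], "%")
	if j < 0 {
		return host
	}
	return host[:j] + host[i:]
}

func main() {
	fmt.Println(zoneStrip("[fe80::1%en0]:8080")) // [fe80::1]:8080
	fmt.Println(zoneStrip("example.com:8080"))   // example.com:8080
}
```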
func getNonceAccept(nonce []byte) (expected []byte, err error) { h := sha1.New() if _, err = h.Write(nonce); err != nil { return } if _, err = h.Write([]byte(websocketGUID)); err != nil { return } expected = make([]byte, 28) base64.StdEncoding.Encode(expected, h.Sum(nil)) return } // Client handshake described in draft-ietf-hybi-thewebsocket-protocol-17 func hybiClientHandshake(config *Config, br *bufio.Reader, bw *bufio.Writer) (err error) { bw.WriteString("GET " + config.Location.RequestURI() + " HTTP/1.1\r\n") // According to RFC 6874, an HTTP client, proxy, or other // intermediary must remove any IPv6 zone identifier attached // to an outgoing URI. bw.WriteString("Host: " + removeZone(config.Location.Host) + "\r\n") bw.WriteString("Upgrade: websocket\r\n") bw.WriteString("Connection: Upgrade\r\n") nonce := generateNonce() if config.handshakeData != nil { nonce = []byte(config.handshakeData["key"]) } bw.WriteString("Sec-WebSocket-Key: " + string(nonce) + "\r\n") bw.WriteString("Origin: " + strings.ToLower(config.Origin.String()) + "\r\n") if config.Version != ProtocolVersionHybi13 { return ErrBadProtocolVersion } bw.WriteString("Sec-WebSocket-Version: " + fmt.Sprintf("%d", config.Version) + "\r\n") if len(config.Protocol) > 0 { bw.WriteString("Sec-WebSocket-Protocol: " + strings.Join(config.Protocol, ", ") + "\r\n") } // TODO(ukai): send Sec-WebSocket-Extensions. 
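The accept-key computation performed by getNonceAccept above can be checked against the opening-handshake example in RFC 6455, section 1.3. `nonceAccept` below is an illustrative re-implementation, not part of this package.

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// nonceAccept mirrors getNonceAccept: the client's Sec-WebSocket-Key is
// concatenated with the fixed websocket GUID, hashed with SHA-1, and the
// digest is base64-encoded to produce the Sec-WebSocket-Accept value that
// the client verifies in hybiClientHandshake.
func nonceAccept(key string) string {
	const guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
	sum := sha1.Sum([]byte(key + guid))
	return base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	// Sample key from the handshake example in RFC 6455, section 1.3.
	fmt.Println(nonceAccept("dGhlIHNhbXBsZSBub25jZQ==")) // s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
}
```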
err = config.Header.WriteSubset(bw, handshakeHeader) if err != nil { return err } bw.WriteString("\r\n") if err = bw.Flush(); err != nil { return err } resp, err := http.ReadResponse(br, &http.Request{Method: "GET"}) if err != nil { return err } if resp.StatusCode != 101 { return ErrBadStatus } if strings.ToLower(resp.Header.Get("Upgrade")) != "websocket" || strings.ToLower(resp.Header.Get("Connection")) != "upgrade" { return ErrBadUpgrade } expectedAccept, err := getNonceAccept(nonce) if err != nil { return err } if resp.Header.Get("Sec-WebSocket-Accept") != string(expectedAccept) { return ErrChallengeResponse } if resp.Header.Get("Sec-WebSocket-Extensions") != "" { return ErrUnsupportedExtensions } offeredProtocol := resp.Header.Get("Sec-WebSocket-Protocol") if offeredProtocol != "" { protocolMatched := false for i := 0; i < len(config.Protocol); i++ { if config.Protocol[i] == offeredProtocol { protocolMatched = true break } } if !protocolMatched { return ErrBadWebSocketProtocol } config.Protocol = []string{offeredProtocol} } return nil } // newHybiClientConn creates a client WebSocket connection after handshake. func newHybiClientConn(config *Config, buf *bufio.ReadWriter, rwc io.ReadWriteCloser) *Conn { return newHybiConn(config, buf, rwc, nil) } // A HybiServerHandshaker performs a server handshake using hybi draft protocol. type hybiServerHandshaker struct { *Config accept []byte } func (c *hybiServerHandshaker) ReadHandshake(buf *bufio.Reader, req *http.Request) (code int, err error) { c.Version = ProtocolVersionHybi13 if req.Method != "GET" { return http.StatusMethodNotAllowed, ErrBadRequestMethod } // HTTP version can be safely ignored. 
if strings.ToLower(req.Header.Get("Upgrade")) != "websocket" || !strings.Contains(strings.ToLower(req.Header.Get("Connection")), "upgrade") { return http.StatusBadRequest, ErrNotWebSocket } key := req.Header.Get("Sec-Websocket-Key") if key == "" { return http.StatusBadRequest, ErrChallengeResponse } version := req.Header.Get("Sec-Websocket-Version") switch version { case "13": c.Version = ProtocolVersionHybi13 default: return http.StatusBadRequest, ErrBadWebSocketVersion } var scheme string if req.TLS != nil { scheme = "wss" } else { scheme = "ws" } c.Location, err = url.ParseRequestURI(scheme + "://" + req.Host + req.URL.RequestURI()) if err != nil { return http.StatusBadRequest, err } protocol := strings.TrimSpace(req.Header.Get("Sec-Websocket-Protocol")) if protocol != "" { protocols := strings.Split(protocol, ",") for i := 0; i < len(protocols); i++ { c.Protocol = append(c.Protocol, strings.TrimSpace(protocols[i])) } } c.accept, err = getNonceAccept([]byte(key)) if err != nil { return http.StatusInternalServerError, err } return http.StatusSwitchingProtocols, nil } // Origin parses the Origin header in req. // If the Origin header is not set, it returns nil and nil. func Origin(config *Config, req *http.Request) (*url.URL, error) { var origin string switch config.Version { case ProtocolVersionHybi13: origin = req.Header.Get("Origin") } if origin == "" { return nil, nil } return url.ParseRequestURI(origin) } func (c *hybiServerHandshaker) AcceptHandshake(buf *bufio.Writer) (err error) { if len(c.Protocol) > 0 { if len(c.Protocol) != 1 { // You need choose a Protocol in Handshake func in Server. 
return ErrBadWebSocketProtocol } } buf.WriteString("HTTP/1.1 101 Switching Protocols\r\n") buf.WriteString("Upgrade: websocket\r\n") buf.WriteString("Connection: Upgrade\r\n") buf.WriteString("Sec-WebSocket-Accept: " + string(c.accept) + "\r\n") if len(c.Protocol) > 0 { buf.WriteString("Sec-WebSocket-Protocol: " + c.Protocol[0] + "\r\n") } // TODO(ukai): send Sec-WebSocket-Extensions. if c.Header != nil { err := c.Header.WriteSubset(buf, handshakeHeader) if err != nil { return err } } buf.WriteString("\r\n") return buf.Flush() } func (c *hybiServerHandshaker) NewServerConn(buf *bufio.ReadWriter, rwc io.ReadWriteCloser, request *http.Request) *Conn { return newHybiServerConn(c.Config, buf, rwc, request) } // newHybiServerConn returns a new WebSocket connection speaking hybi draft protocol. func newHybiServerConn(config *Config, buf *bufio.ReadWriter, rwc io.ReadWriteCloser, request *http.Request) *Conn { return newHybiConn(config, buf, rwc, request) } lxd-2.0.2/dist/src/golang.org/x/net/websocket/exampledial_test.go0000644061062106075000000000125112721405224027373 0ustar00stgraberdomain admins00000000000000// Copyright 2012 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package websocket_test import ( "fmt" "log" "golang.org/x/net/websocket" ) // This example demonstrates a trivial client. func ExampleDial() { origin := "http://localhost/" url := "ws://localhost:12345/ws" ws, err := websocket.Dial(url, "", origin) if err != nil { log.Fatal(err) } if _, err := ws.Write([]byte("hello, world!\n")); err != nil { log.Fatal(err) } var msg = make([]byte, 512) var n int if n, err = ws.Read(msg); err != nil { log.Fatal(err) } fmt.Printf("Received: %s.\n", msg[:n]) } lxd-2.0.2/dist/src/golang.org/x/net/websocket/examplehandler_test.go0000644061062106075000000000110712721405224030077 0ustar00stgraberdomain admins00000000000000// Copyright 2012 The Go Authors. All rights reserved. 
// Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package websocket_test import ( "io" "net/http" "golang.org/x/net/websocket" ) // Echo the data received on the WebSocket. func EchoServer(ws *websocket.Conn) { io.Copy(ws, ws) } // This example demonstrates a trivial echo server. func ExampleHandler() { http.Handle("/echo", websocket.Handler(EchoServer)) err := http.ListenAndServe(":12345", nil) if err != nil { panic("ListenAndServe: " + err.Error()) } } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/0000755061062106075000000000000012721405224022417 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sys_windows.go0000644061062106075000000000300112721405224025330 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6 import ( "net" "syscall" "golang.org/x/net/internal/iana" ) const ( // See ws2tcpip.h. 
sysIPV6_UNICAST_HOPS = 0x4 sysIPV6_MULTICAST_IF = 0x9 sysIPV6_MULTICAST_HOPS = 0xa sysIPV6_MULTICAST_LOOP = 0xb sysIPV6_JOIN_GROUP = 0xc sysIPV6_LEAVE_GROUP = 0xd sysIPV6_PKTINFO = 0x13 sysSizeofSockaddrInet6 = 0x1c sysSizeofIPv6Mreq = 0x14 ) type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Interface uint32 } var ( ctlOpts = [ctlMax]ctlOpt{} sockOpts = [ssoMax]sockOpt{ ssoHopLimit: {iana.ProtocolIPv6, sysIPV6_UNICAST_HOPS, ssoTypeInt}, ssoMulticastInterface: {iana.ProtocolIPv6, sysIPV6_MULTICAST_IF, ssoTypeInterface}, ssoMulticastHopLimit: {iana.ProtocolIPv6, sysIPV6_MULTICAST_HOPS, ssoTypeInt}, ssoMulticastLoopback: {iana.ProtocolIPv6, sysIPV6_MULTICAST_LOOP, ssoTypeInt}, ssoJoinGroup: {iana.ProtocolIPv6, sysIPV6_JOIN_GROUP, ssoTypeIPMreq}, ssoLeaveGroup: {iana.ProtocolIPv6, sysIPV6_LEAVE_GROUP, ssoTypeIPMreq}, } ) func (sa *sysSockaddrInet6) setSockaddr(ip net.IP, i int) { sa.Family = syscall.AF_INET6 copy(sa.Addr[:], ip) sa.Scope_id = uint32(i) } func (mreq *sysIPv6Mreq) setIfindex(i int) { mreq.Interface = uint32(i) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/defs_darwin.go0000644061062106075000000000634712721405224025245 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
// +build ignore // +godefs map struct_in6_addr [16]byte /* in6_addr */ package ipv6 /* #define __APPLE_USE_RFC_3542 #include #include */ import "C" const ( sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP sysIPV6_PORTRANGE = C.IPV6_PORTRANGE sysICMP6_FILTER = C.ICMP6_FILTER sysIPV6_2292PKTINFO = C.IPV6_2292PKTINFO sysIPV6_2292HOPLIMIT = C.IPV6_2292HOPLIMIT sysIPV6_2292NEXTHOP = C.IPV6_2292NEXTHOP sysIPV6_2292HOPOPTS = C.IPV6_2292HOPOPTS sysIPV6_2292DSTOPTS = C.IPV6_2292DSTOPTS sysIPV6_2292RTHDR = C.IPV6_2292RTHDR sysIPV6_2292PKTOPTIONS = C.IPV6_2292PKTOPTIONS sysIPV6_CHECKSUM = C.IPV6_CHECKSUM sysIPV6_V6ONLY = C.IPV6_V6ONLY sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS sysIPV6_TCLASS = C.IPV6_TCLASS sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU sysIPV6_PATHMTU = C.IPV6_PATHMTU sysIPV6_PKTINFO = C.IPV6_PKTINFO sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT sysIPV6_NEXTHOP = C.IPV6_NEXTHOP sysIPV6_HOPOPTS = C.IPV6_HOPOPTS sysIPV6_DSTOPTS = C.IPV6_DSTOPTS sysIPV6_RTHDR = C.IPV6_RTHDR sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL sysIPV6_DONTFRAG = C.IPV6_DONTFRAG sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR sysIPV6_MSFILTER = C.IPV6_MSFILTER sysMCAST_JOIN_GROUP = C.MCAST_JOIN_GROUP sysMCAST_LEAVE_GROUP = C.MCAST_LEAVE_GROUP sysMCAST_JOIN_SOURCE_GROUP = C.MCAST_JOIN_SOURCE_GROUP sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP sysMCAST_BLOCK_SOURCE = C.MCAST_BLOCK_SOURCE sysMCAST_UNBLOCK_SOURCE = C.MCAST_UNBLOCK_SOURCE sysIPV6_BOUND_IF = 
C.IPV6_BOUND_IF sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW sysSizeofSockaddrStorage = C.sizeof_struct_sockaddr_storage sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq sysSizeofGroupReq = C.sizeof_struct_group_req sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter ) type sysSockaddrStorage C.struct_sockaddr_storage type sysSockaddrInet6 C.struct_sockaddr_in6 type sysInet6Pktinfo C.struct_in6_pktinfo type sysIPv6Mtuinfo C.struct_ip6_mtuinfo type sysIPv6Mreq C.struct_ipv6_mreq type sysICMPv6Filter C.struct_icmp6_filter type sysGroupReq C.struct_group_req type sysGroupSourceReq C.struct_group_source_req lxd-2.0.2/dist/src/golang.org/x/net/ipv6/control.go0000644061062106075000000000601212721405224024425 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6 import ( "fmt" "net" "sync" ) // Note that RFC 3542 obsoletes RFC 2292 but OS X Snow Leopard and the // former still support RFC 2292 only. Please be aware that almost // all protocol implementations prohibit using a combination of RFC // 2292 and RFC 3542 for some practical reasons. type rawOpt struct { sync.RWMutex cflags ControlFlags } func (c *rawOpt) set(f ControlFlags) { c.cflags |= f } func (c *rawOpt) clear(f ControlFlags) { c.cflags &^= f } func (c *rawOpt) isset(f ControlFlags) bool { return c.cflags&f != 0 } // A ControlFlags represents per packet basis IP-level socket option // control flags. 
type ControlFlags uint const ( FlagTrafficClass ControlFlags = 1 << iota // pass the traffic class on the received packet FlagHopLimit // pass the hop limit on the received packet FlagSrc // pass the source address on the received packet FlagDst // pass the destination address on the received packet FlagInterface // pass the interface index on the received packet FlagPathMTU // pass the path MTU on the received packet path ) const flagPacketInfo = FlagDst | FlagInterface // A ControlMessage represents per packet basis IP-level socket // options. type ControlMessage struct { // Receiving socket options: SetControlMessage allows to // receive the options from the protocol stack using ReadFrom // method of PacketConn. // // Specifying socket options: ControlMessage for WriteTo // method of PacketConn allows to send the options to the // protocol stack. // TrafficClass int // traffic class, must be 1 <= value <= 255 when specifying HopLimit int // hop limit, must be 1 <= value <= 255 when specifying Src net.IP // source address, specifying only Dst net.IP // destination address, receiving only IfIndex int // interface index, must be 1 <= value when specifying NextHop net.IP // next hop address, specifying only MTU int // path MTU, receiving only } func (cm *ControlMessage) String() string { if cm == nil { return "" } return fmt.Sprintf("tclass=%#x hoplim=%d src=%v dst=%v ifindex=%d nexthop=%v mtu=%d", cm.TrafficClass, cm.HopLimit, cm.Src, cm.Dst, cm.IfIndex, cm.NextHop, cm.MTU) } // Ancillary data socket options const ( ctlTrafficClass = iota // header field ctlHopLimit // header field ctlPacketInfo // inbound or outbound packet path ctlNextHop // nexthop ctlPathMTU // path mtu ctlMax ) // A ctlOpt represents a binding for ancillary data socket option. 
type ctlOpt struct { name int // option name, must be equal or greater than 1 length int // option length marshal func([]byte, *ControlMessage) []byte parse func(*ControlMessage, []byte) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_s390x.go0000644061062106075000000000744112721405224026151 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_linux.go // +build linux,s390x package ipv6 const ( sysIPV6_ADDRFORM = 0x1 sysIPV6_2292PKTINFO = 0x2 sysIPV6_2292HOPOPTS = 0x3 sysIPV6_2292DSTOPTS = 0x4 sysIPV6_2292RTHDR = 0x5 sysIPV6_2292PKTOPTIONS = 0x6 sysIPV6_CHECKSUM = 0x7 sysIPV6_2292HOPLIMIT = 0x8 sysIPV6_NEXTHOP = 0x9 sysIPV6_FLOWINFO = 0xb sysIPV6_UNICAST_HOPS = 0x10 sysIPV6_MULTICAST_IF = 0x11 sysIPV6_MULTICAST_HOPS = 0x12 sysIPV6_MULTICAST_LOOP = 0x13 sysIPV6_ADD_MEMBERSHIP = 0x14 sysIPV6_DROP_MEMBERSHIP = 0x15 sysMCAST_JOIN_GROUP = 0x2a sysMCAST_LEAVE_GROUP = 0x2d sysMCAST_JOIN_SOURCE_GROUP = 0x2e sysMCAST_LEAVE_SOURCE_GROUP = 0x2f sysMCAST_BLOCK_SOURCE = 0x2b sysMCAST_UNBLOCK_SOURCE = 0x2c sysMCAST_MSFILTER = 0x30 sysIPV6_ROUTER_ALERT = 0x16 sysIPV6_MTU_DISCOVER = 0x17 sysIPV6_MTU = 0x18 sysIPV6_RECVERR = 0x19 sysIPV6_V6ONLY = 0x1a sysIPV6_JOIN_ANYCAST = 0x1b sysIPV6_LEAVE_ANYCAST = 0x1c sysIPV6_FLOWLABEL_MGR = 0x20 sysIPV6_FLOWINFO_SEND = 0x21 sysIPV6_IPSEC_POLICY = 0x22 sysIPV6_XFRM_POLICY = 0x23 sysIPV6_RECVPKTINFO = 0x31 sysIPV6_PKTINFO = 0x32 sysIPV6_RECVHOPLIMIT = 0x33 sysIPV6_HOPLIMIT = 0x34 sysIPV6_RECVHOPOPTS = 0x35 sysIPV6_HOPOPTS = 0x36 sysIPV6_RTHDRDSTOPTS = 0x37 sysIPV6_RECVRTHDR = 0x38 sysIPV6_RTHDR = 0x39 sysIPV6_RECVDSTOPTS = 0x3a sysIPV6_DSTOPTS = 0x3b sysIPV6_RECVPATHMTU = 0x3c sysIPV6_PATHMTU = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_RECVTCLASS = 0x42 sysIPV6_TCLASS = 0x43 sysIPV6_ADDR_PREFERENCES = 0x48 sysIPV6_PREFER_SRC_TMP = 0x1 sysIPV6_PREFER_SRC_PUBLIC = 0x2 sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100 sysIPV6_PREFER_SRC_COA = 0x4 sysIPV6_PREFER_SRC_HOME = 0x400 sysIPV6_PREFER_SRC_CGA = 
0x8 sysIPV6_PREFER_SRC_NONCGA = 0x800 sysIPV6_MINHOPCOUNT = 0x49 sysIPV6_ORIGDSTADDR = 0x4a sysIPV6_RECVORIGDSTADDR = 0x4a sysIPV6_TRANSPARENT = 0x4b sysIPV6_UNICAST_IF = 0x4c sysICMPV6_FILTER = 0x1 sysICMPV6_FILTER_BLOCK = 0x1 sysICMPV6_FILTER_PASS = 0x2 sysICMPV6_FILTER_BLOCKOTHERS = 0x3 sysICMPV6_FILTER_PASSONLY = 0x4 sysSOL_SOCKET = 0x1 sysSO_ATTACH_FILTER = 0x1a sysSizeofKernelSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6FlowlabelReq = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x88 sysSizeofGroupSourceReq = 0x108 sysSizeofICMPv6Filter = 0x20 ) type sysKernelSockaddrStorage struct { Family uint16 X__data [126]int8 } type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex int32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6FlowlabelReq struct { Dst [16]byte /* in6_addr */ Label uint32 Action uint8 Share uint8 Flags uint16 Expires uint16 Linger uint16 X__flr_pad uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Ifindex int32 } type sysGroupReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage Source sysKernelSockaddrStorage } type sysICMPv6Filter struct { Data [8]uint32 } type sysSockFProg struct { Len uint16 Pad_cgo_0 [6]byte Filter *sysSockFilter } type sysSockFilter struct { Code uint16 Jt uint8 Jf uint8 K uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/thunk_linux_386.s0000644061062106075000000000035412721405224025555 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
// +build go1.2 TEXT ·socketcall(SB),4,$0-36 JMP syscall·socketcall(SB) lxd-2.0.2/dist/src/golang.org/x/net/ipv6/header.go0000644061062106075000000000273212721405224024202 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6 import ( "encoding/binary" "fmt" "net" ) const ( Version = 6 // protocol version HeaderLen = 40 // header length ) // A Header represents an IPv6 base header. type Header struct { Version int // protocol version TrafficClass int // traffic class FlowLabel int // flow label PayloadLen int // payload length NextHeader int // next header HopLimit int // hop limit Src net.IP // source address Dst net.IP // destination address } func (h *Header) String() string { if h == nil { return "" } return fmt.Sprintf("ver=%d tclass=%#x flowlbl=%#x payloadlen=%d nxthdr=%d hoplim=%d src=%v dst=%v", h.Version, h.TrafficClass, h.FlowLabel, h.PayloadLen, h.NextHeader, h.HopLimit, h.Src, h.Dst) } // ParseHeader parses b as an IPv6 base header. func ParseHeader(b []byte) (*Header, error) { if len(b) < HeaderLen { return nil, errHeaderTooShort } h := &Header{ Version: int(b[0]) >> 4, TrafficClass: int(b[0]&0x0f)<<4 | int(b[1])>>4, FlowLabel: int(b[1]&0x0f)<<16 | int(b[2])<<8 | int(b[3]), PayloadLen: int(binary.BigEndian.Uint16(b[4:6])), NextHeader: int(b[6]), HopLimit: int(b[7]), } h.Src = make(net.IP, net.IPv6len) copy(h.Src, b[8:24]) h.Dst = make(net.IP, net.IPv6len) copy(h.Dst, b[24:40]) return h, nil } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/defs_solaris.go0000644061062106075000000000560312721405224025427 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
// +build ignore // +godefs map struct_in6_addr [16]byte /* in6_addr */ package ipv6 /* #include #include */ import "C" const ( sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP sysIPV6_PKTINFO = C.IPV6_PKTINFO sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT sysIPV6_NEXTHOP = C.IPV6_NEXTHOP sysIPV6_HOPOPTS = C.IPV6_HOPOPTS sysIPV6_DSTOPTS = C.IPV6_DSTOPTS sysIPV6_RTHDR = C.IPV6_RTHDR sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR sysIPV6_RECVRTHDRDSTOPTS = C.IPV6_RECVRTHDRDSTOPTS sysIPV6_CHECKSUM = C.IPV6_CHECKSUM sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU sysIPV6_DONTFRAG = C.IPV6_DONTFRAG sysIPV6_SEC_OPT = C.IPV6_SEC_OPT sysIPV6_SRC_PREFERENCES = C.IPV6_SRC_PREFERENCES sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU sysIPV6_PATHMTU = C.IPV6_PATHMTU sysIPV6_TCLASS = C.IPV6_TCLASS sysIPV6_V6ONLY = C.IPV6_V6ONLY sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS sysIPV6_PREFER_SRC_HOME = C.IPV6_PREFER_SRC_HOME sysIPV6_PREFER_SRC_COA = C.IPV6_PREFER_SRC_COA sysIPV6_PREFER_SRC_PUBLIC = C.IPV6_PREFER_SRC_PUBLIC sysIPV6_PREFER_SRC_TMP = C.IPV6_PREFER_SRC_TMP sysIPV6_PREFER_SRC_NONCGA = C.IPV6_PREFER_SRC_NONCGA sysIPV6_PREFER_SRC_CGA = C.IPV6_PREFER_SRC_CGA sysIPV6_PREFER_SRC_MIPMASK = C.IPV6_PREFER_SRC_MIPMASK sysIPV6_PREFER_SRC_MIPDEFAULT = C.IPV6_PREFER_SRC_MIPDEFAULT sysIPV6_PREFER_SRC_TMPMASK = C.IPV6_PREFER_SRC_TMPMASK sysIPV6_PREFER_SRC_TMPDEFAULT = C.IPV6_PREFER_SRC_TMPDEFAULT sysIPV6_PREFER_SRC_CGAMASK = C.IPV6_PREFER_SRC_CGAMASK sysIPV6_PREFER_SRC_CGADEFAULT = C.IPV6_PREFER_SRC_CGADEFAULT sysIPV6_PREFER_SRC_MASK = C.IPV6_PREFER_SRC_MASK sysIPV6_PREFER_SRC_DEFAULT = C.IPV6_PREFER_SRC_DEFAULT 
sysIPV6_BOUND_IF = C.IPV6_BOUND_IF sysIPV6_UNSPEC_SRC = C.IPV6_UNSPEC_SRC sysICMP6_FILTER = C.ICMP6_FILTER sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter ) type sysSockaddrInet6 C.struct_sockaddr_in6 type sysInet6Pktinfo C.struct_in6_pktinfo type sysIPv6Mtuinfo C.struct_ip6_mtuinfo type sysIPv6Mreq C.struct_ipv6_mreq type sysICMPv6Filter C.struct_icmp6_filter lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_mips64.go0000644061062106075000000000744212721405224026406 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_linux.go // +build linux,mips64 package ipv6 const ( sysIPV6_ADDRFORM = 0x1 sysIPV6_2292PKTINFO = 0x2 sysIPV6_2292HOPOPTS = 0x3 sysIPV6_2292DSTOPTS = 0x4 sysIPV6_2292RTHDR = 0x5 sysIPV6_2292PKTOPTIONS = 0x6 sysIPV6_CHECKSUM = 0x7 sysIPV6_2292HOPLIMIT = 0x8 sysIPV6_NEXTHOP = 0x9 sysIPV6_FLOWINFO = 0xb sysIPV6_UNICAST_HOPS = 0x10 sysIPV6_MULTICAST_IF = 0x11 sysIPV6_MULTICAST_HOPS = 0x12 sysIPV6_MULTICAST_LOOP = 0x13 sysIPV6_ADD_MEMBERSHIP = 0x14 sysIPV6_DROP_MEMBERSHIP = 0x15 sysMCAST_JOIN_GROUP = 0x2a sysMCAST_LEAVE_GROUP = 0x2d sysMCAST_JOIN_SOURCE_GROUP = 0x2e sysMCAST_LEAVE_SOURCE_GROUP = 0x2f sysMCAST_BLOCK_SOURCE = 0x2b sysMCAST_UNBLOCK_SOURCE = 0x2c sysMCAST_MSFILTER = 0x30 sysIPV6_ROUTER_ALERT = 0x16 sysIPV6_MTU_DISCOVER = 0x17 sysIPV6_MTU = 0x18 sysIPV6_RECVERR = 0x19 sysIPV6_V6ONLY = 0x1a sysIPV6_JOIN_ANYCAST = 0x1b sysIPV6_LEAVE_ANYCAST = 0x1c sysIPV6_FLOWLABEL_MGR = 0x20 sysIPV6_FLOWINFO_SEND = 0x21 sysIPV6_IPSEC_POLICY = 0x22 sysIPV6_XFRM_POLICY = 0x23 sysIPV6_RECVPKTINFO = 0x31 sysIPV6_PKTINFO = 0x32 sysIPV6_RECVHOPLIMIT = 0x33 sysIPV6_HOPLIMIT = 0x34 sysIPV6_RECVHOPOPTS = 0x35 sysIPV6_HOPOPTS = 0x36 sysIPV6_RTHDRDSTOPTS = 0x37 sysIPV6_RECVRTHDR = 0x38 sysIPV6_RTHDR = 0x39 
sysIPV6_RECVDSTOPTS = 0x3a sysIPV6_DSTOPTS = 0x3b sysIPV6_RECVPATHMTU = 0x3c sysIPV6_PATHMTU = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_RECVTCLASS = 0x42 sysIPV6_TCLASS = 0x43 sysIPV6_ADDR_PREFERENCES = 0x48 sysIPV6_PREFER_SRC_TMP = 0x1 sysIPV6_PREFER_SRC_PUBLIC = 0x2 sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100 sysIPV6_PREFER_SRC_COA = 0x4 sysIPV6_PREFER_SRC_HOME = 0x400 sysIPV6_PREFER_SRC_CGA = 0x8 sysIPV6_PREFER_SRC_NONCGA = 0x800 sysIPV6_MINHOPCOUNT = 0x49 sysIPV6_ORIGDSTADDR = 0x4a sysIPV6_RECVORIGDSTADDR = 0x4a sysIPV6_TRANSPARENT = 0x4b sysIPV6_UNICAST_IF = 0x4c sysICMPV6_FILTER = 0x1 sysICMPV6_FILTER_BLOCK = 0x1 sysICMPV6_FILTER_PASS = 0x2 sysICMPV6_FILTER_BLOCKOTHERS = 0x3 sysICMPV6_FILTER_PASSONLY = 0x4 sysSOL_SOCKET = 0x1 sysSO_ATTACH_FILTER = 0x1a sysSizeofKernelSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6FlowlabelReq = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x88 sysSizeofGroupSourceReq = 0x108 sysSizeofICMPv6Filter = 0x20 ) type sysKernelSockaddrStorage struct { Family uint16 X__data [126]int8 } type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex int32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6FlowlabelReq struct { Dst [16]byte /* in6_addr */ Label uint32 Action uint8 Share uint8 Flags uint16 Expires uint16 Linger uint16 X__flr_pad uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Ifindex int32 } type sysGroupReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage Source sysKernelSockaddrStorage } type sysICMPv6Filter struct { Data [8]uint32 } type sysSockFProg struct { Len uint16 Pad_cgo_0 [6]byte Filter *sysSockFilter } type sysSockFilter struct { Code uint16 Jt 
uint8 Jf uint8 K uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/dgramopt_stub.go0000644061062106075000000001023212721405224025616 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build nacl plan9 solaris package ipv6 import "net" // MulticastHopLimit returns the hop limit field value for outgoing // multicast packets. func (c *dgramOpt) MulticastHopLimit() (int, error) { return 0, errOpNoSupport } // SetMulticastHopLimit sets the hop limit field value for future // outgoing multicast packets. func (c *dgramOpt) SetMulticastHopLimit(hoplim int) error { return errOpNoSupport } // MulticastInterface returns the default interface for multicast // packet transmissions. func (c *dgramOpt) MulticastInterface() (*net.Interface, error) { return nil, errOpNoSupport } // SetMulticastInterface sets the default interface for future // multicast packet transmissions. func (c *dgramOpt) SetMulticastInterface(ifi *net.Interface) error { return errOpNoSupport } // MulticastLoopback reports whether transmitted multicast packets // should be copied and sent back to the originator. func (c *dgramOpt) MulticastLoopback() (bool, error) { return false, errOpNoSupport } // SetMulticastLoopback sets whether transmitted multicast packets // should be copied and sent back to the originator. func (c *dgramOpt) SetMulticastLoopback(on bool) error { return errOpNoSupport } // JoinGroup joins the group address group on the interface ifi. // By default all sources that can cast data to group are accepted. // It's possible to mute and unmute data transmission from a specific // source by using ExcludeSourceSpecificGroup and // IncludeSourceSpecificGroup.
// JoinGroup uses the system assigned multicast interface when ifi is // nil, although this is not recommended because the assignment // depends on platforms and sometimes it might require routing // configuration. func (c *dgramOpt) JoinGroup(ifi *net.Interface, group net.Addr) error { return errOpNoSupport } // LeaveGroup leaves the group address group on the interface ifi // regardless of whether the group is any-source group or // source-specific group. func (c *dgramOpt) LeaveGroup(ifi *net.Interface, group net.Addr) error { return errOpNoSupport } // JoinSourceSpecificGroup joins the source-specific group comprising // group and source on the interface ifi. // JoinSourceSpecificGroup uses the system assigned multicast // interface when ifi is nil, although this is not recommended because // the assignment depends on platforms and sometimes it might require // routing configuration. func (c *dgramOpt) JoinSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error { return errOpNoSupport } // LeaveSourceSpecificGroup leaves the source-specific group on the // interface ifi. func (c *dgramOpt) LeaveSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error { return errOpNoSupport } // ExcludeSourceSpecificGroup excludes the source-specific group from // the already joined any-source groups by JoinGroup on the interface // ifi. func (c *dgramOpt) ExcludeSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error { return errOpNoSupport } // IncludeSourceSpecificGroup includes the excluded source-specific // group by ExcludeSourceSpecificGroup again on the interface ifi. func (c *dgramOpt) IncludeSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error { return errOpNoSupport } // Checksum reports whether the kernel will compute, store or verify a // checksum for both incoming and outgoing packets. If on is true, it // returns an offset in bytes into the data of where the checksum // field is located. 
func (c *dgramOpt) Checksum() (on bool, offset int, err error) { return false, 0, errOpNoSupport } // SetChecksum enables the kernel checksum processing. If on is true, // the offset should be an offset in bytes into the data of where the // checksum field is located. func (c *dgramOpt) SetChecksum(on bool, offset int) error { return errOpNoSupport } // ICMPFilter returns an ICMP filter. func (c *dgramOpt) ICMPFilter() (*ICMPFilter, error) { return nil, errOpNoSupport } // SetICMPFilter deploys the ICMP filter. func (c *dgramOpt) SetICMPFilter(f *ICMPFilter) error { return errOpNoSupport } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/defs_linux.go0000644061062106075000000001203412721405224025106 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build ignore // +godefs map struct_in6_addr [16]byte /* in6_addr */ package ipv6 /* #include #include #include #include #include #include */ import "C" const ( sysIPV6_ADDRFORM = C.IPV6_ADDRFORM sysIPV6_2292PKTINFO = C.IPV6_2292PKTINFO sysIPV6_2292HOPOPTS = C.IPV6_2292HOPOPTS sysIPV6_2292DSTOPTS = C.IPV6_2292DSTOPTS sysIPV6_2292RTHDR = C.IPV6_2292RTHDR sysIPV6_2292PKTOPTIONS = C.IPV6_2292PKTOPTIONS sysIPV6_CHECKSUM = C.IPV6_CHECKSUM sysIPV6_2292HOPLIMIT = C.IPV6_2292HOPLIMIT sysIPV6_NEXTHOP = C.IPV6_NEXTHOP sysIPV6_FLOWINFO = C.IPV6_FLOWINFO sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP sysIPV6_ADD_MEMBERSHIP = C.IPV6_ADD_MEMBERSHIP sysIPV6_DROP_MEMBERSHIP = C.IPV6_DROP_MEMBERSHIP sysMCAST_JOIN_GROUP = C.MCAST_JOIN_GROUP sysMCAST_LEAVE_GROUP = C.MCAST_LEAVE_GROUP sysMCAST_JOIN_SOURCE_GROUP = C.MCAST_JOIN_SOURCE_GROUP sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP sysMCAST_BLOCK_SOURCE = C.MCAST_BLOCK_SOURCE sysMCAST_UNBLOCK_SOURCE = 
C.MCAST_UNBLOCK_SOURCE sysMCAST_MSFILTER = C.MCAST_MSFILTER sysIPV6_ROUTER_ALERT = C.IPV6_ROUTER_ALERT sysIPV6_MTU_DISCOVER = C.IPV6_MTU_DISCOVER sysIPV6_MTU = C.IPV6_MTU sysIPV6_RECVERR = C.IPV6_RECVERR sysIPV6_V6ONLY = C.IPV6_V6ONLY sysIPV6_JOIN_ANYCAST = C.IPV6_JOIN_ANYCAST sysIPV6_LEAVE_ANYCAST = C.IPV6_LEAVE_ANYCAST //sysIPV6_PMTUDISC_DONT = C.IPV6_PMTUDISC_DONT //sysIPV6_PMTUDISC_WANT = C.IPV6_PMTUDISC_WANT //sysIPV6_PMTUDISC_DO = C.IPV6_PMTUDISC_DO //sysIPV6_PMTUDISC_PROBE = C.IPV6_PMTUDISC_PROBE //sysIPV6_PMTUDISC_INTERFACE = C.IPV6_PMTUDISC_INTERFACE //sysIPV6_PMTUDISC_OMIT = C.IPV6_PMTUDISC_OMIT sysIPV6_FLOWLABEL_MGR = C.IPV6_FLOWLABEL_MGR sysIPV6_FLOWINFO_SEND = C.IPV6_FLOWINFO_SEND sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY sysIPV6_XFRM_POLICY = C.IPV6_XFRM_POLICY sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO sysIPV6_PKTINFO = C.IPV6_PKTINFO sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS sysIPV6_HOPOPTS = C.IPV6_HOPOPTS sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR sysIPV6_RTHDR = C.IPV6_RTHDR sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS sysIPV6_DSTOPTS = C.IPV6_DSTOPTS sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU sysIPV6_PATHMTU = C.IPV6_PATHMTU sysIPV6_DONTFRAG = C.IPV6_DONTFRAG sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS sysIPV6_TCLASS = C.IPV6_TCLASS sysIPV6_ADDR_PREFERENCES = C.IPV6_ADDR_PREFERENCES sysIPV6_PREFER_SRC_TMP = C.IPV6_PREFER_SRC_TMP sysIPV6_PREFER_SRC_PUBLIC = C.IPV6_PREFER_SRC_PUBLIC sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = C.IPV6_PREFER_SRC_PUBTMP_DEFAULT sysIPV6_PREFER_SRC_COA = C.IPV6_PREFER_SRC_COA sysIPV6_PREFER_SRC_HOME = C.IPV6_PREFER_SRC_HOME sysIPV6_PREFER_SRC_CGA = C.IPV6_PREFER_SRC_CGA sysIPV6_PREFER_SRC_NONCGA = C.IPV6_PREFER_SRC_NONCGA sysIPV6_MINHOPCOUNT = C.IPV6_MINHOPCOUNT sysIPV6_ORIGDSTADDR = C.IPV6_ORIGDSTADDR sysIPV6_RECVORIGDSTADDR = C.IPV6_RECVORIGDSTADDR sysIPV6_TRANSPARENT = C.IPV6_TRANSPARENT sysIPV6_UNICAST_IF = 
C.IPV6_UNICAST_IF sysICMPV6_FILTER = C.ICMPV6_FILTER sysICMPV6_FILTER_BLOCK = C.ICMPV6_FILTER_BLOCK sysICMPV6_FILTER_PASS = C.ICMPV6_FILTER_PASS sysICMPV6_FILTER_BLOCKOTHERS = C.ICMPV6_FILTER_BLOCKOTHERS sysICMPV6_FILTER_PASSONLY = C.ICMPV6_FILTER_PASSONLY sysSOL_SOCKET = C.SOL_SOCKET sysSO_ATTACH_FILTER = C.SO_ATTACH_FILTER sysSizeofKernelSockaddrStorage = C.sizeof_struct___kernel_sockaddr_storage sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo sysSizeofIPv6FlowlabelReq = C.sizeof_struct_in6_flowlabel_req sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq sysSizeofGroupReq = C.sizeof_struct_group_req sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter ) type sysKernelSockaddrStorage C.struct___kernel_sockaddr_storage type sysSockaddrInet6 C.struct_sockaddr_in6 type sysInet6Pktinfo C.struct_in6_pktinfo type sysIPv6Mtuinfo C.struct_ip6_mtuinfo type sysIPv6FlowlabelReq C.struct_in6_flowlabel_req type sysIPv6Mreq C.struct_ipv6_mreq type sysGroupReq C.struct_group_req type sysGroupSourceReq C.struct_group_source_req type sysICMPv6Filter C.struct_icmp6_filter type sysSockFProg C.struct_sock_fprog type sysSockFilter C.struct_sock_filter lxd-2.0.2/dist/src/golang.org/x/net/ipv6/dgramopt_posix.go0000644061062106075000000001641512721405224026014 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build darwin dragonfly freebsd linux netbsd openbsd windows package ipv6 import ( "net" "syscall" ) // MulticastHopLimit returns the hop limit field value for outgoing // multicast packets. 
func (c *dgramOpt) MulticastHopLimit() (int, error) { if !c.ok() { return 0, syscall.EINVAL } fd, err := c.sysfd() if err != nil { return 0, err } return getInt(fd, &sockOpts[ssoMulticastHopLimit]) } // SetMulticastHopLimit sets the hop limit field value for future // outgoing multicast packets. func (c *dgramOpt) SetMulticastHopLimit(hoplim int) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } return setInt(fd, &sockOpts[ssoMulticastHopLimit], hoplim) } // MulticastInterface returns the default interface for multicast // packet transmissions. func (c *dgramOpt) MulticastInterface() (*net.Interface, error) { if !c.ok() { return nil, syscall.EINVAL } fd, err := c.sysfd() if err != nil { return nil, err } return getInterface(fd, &sockOpts[ssoMulticastInterface]) } // SetMulticastInterface sets the default interface for future // multicast packet transmissions. func (c *dgramOpt) SetMulticastInterface(ifi *net.Interface) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } return setInterface(fd, &sockOpts[ssoMulticastInterface], ifi) } // MulticastLoopback reports whether transmitted multicast packets // should be copied and sent back to the originator. func (c *dgramOpt) MulticastLoopback() (bool, error) { if !c.ok() { return false, syscall.EINVAL } fd, err := c.sysfd() if err != nil { return false, err } on, err := getInt(fd, &sockOpts[ssoMulticastLoopback]) if err != nil { return false, err } return on == 1, nil } // SetMulticastLoopback sets whether transmitted multicast packets // should be copied and sent back to the originator. func (c *dgramOpt) SetMulticastLoopback(on bool) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } return setInt(fd, &sockOpts[ssoMulticastLoopback], boolint(on)) } // JoinGroup joins the group address group on the interface ifi. // By default all sources that can cast data to group are accepted.
// It's possible to mute and unmute data transmission from a specific // source by using ExcludeSourceSpecificGroup and // IncludeSourceSpecificGroup. // JoinGroup uses the system assigned multicast interface when ifi is // nil, although this is not recommended because the assignment // depends on platforms and sometimes it might require routing // configuration. func (c *dgramOpt) JoinGroup(ifi *net.Interface, group net.Addr) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } grp := netAddrToIP16(group) if grp == nil { return errMissingAddress } return setGroup(fd, &sockOpts[ssoJoinGroup], ifi, grp) } // LeaveGroup leaves the group address group on the interface ifi // regardless of whether the group is any-source group or // source-specific group. func (c *dgramOpt) LeaveGroup(ifi *net.Interface, group net.Addr) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } grp := netAddrToIP16(group) if grp == nil { return errMissingAddress } return setGroup(fd, &sockOpts[ssoLeaveGroup], ifi, grp) } // JoinSourceSpecificGroup joins the source-specific group comprising // group and source on the interface ifi. // JoinSourceSpecificGroup uses the system assigned multicast // interface when ifi is nil, although this is not recommended because // the assignment depends on platforms and sometimes it might require // routing configuration. func (c *dgramOpt) JoinSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } grp := netAddrToIP16(group) if grp == nil { return errMissingAddress } src := netAddrToIP16(source) if src == nil { return errMissingAddress } return setSourceGroup(fd, &sockOpts[ssoJoinSourceGroup], ifi, grp, src) } // LeaveSourceSpecificGroup leaves the source-specific group on the // interface ifi. 
func (c *dgramOpt) LeaveSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } grp := netAddrToIP16(group) if grp == nil { return errMissingAddress } src := netAddrToIP16(source) if src == nil { return errMissingAddress } return setSourceGroup(fd, &sockOpts[ssoLeaveSourceGroup], ifi, grp, src) } // ExcludeSourceSpecificGroup excludes the source-specific group from // the already joined any-source groups by JoinGroup on the interface // ifi. func (c *dgramOpt) ExcludeSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } grp := netAddrToIP16(group) if grp == nil { return errMissingAddress } src := netAddrToIP16(source) if src == nil { return errMissingAddress } return setSourceGroup(fd, &sockOpts[ssoBlockSourceGroup], ifi, grp, src) } // IncludeSourceSpecificGroup includes the excluded source-specific // group by ExcludeSourceSpecificGroup again on the interface ifi. func (c *dgramOpt) IncludeSourceSpecificGroup(ifi *net.Interface, group, source net.Addr) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } grp := netAddrToIP16(group) if grp == nil { return errMissingAddress } src := netAddrToIP16(source) if src == nil { return errMissingAddress } return setSourceGroup(fd, &sockOpts[ssoUnblockSourceGroup], ifi, grp, src) } // Checksum reports whether the kernel will compute, store or verify a // checksum for both incoming and outgoing packets. If on is true, it // returns an offset in bytes into the data of where the checksum // field is located. 
func (c *dgramOpt) Checksum() (on bool, offset int, err error) { if !c.ok() { return false, 0, syscall.EINVAL } fd, err := c.sysfd() if err != nil { return false, 0, err } offset, err = getInt(fd, &sockOpts[ssoChecksum]) if err != nil { return false, 0, err } if offset < 0 { return false, 0, nil } return true, offset, nil } // SetChecksum enables the kernel checksum processing. If on is true, // the offset should be an offset in bytes into the data of where the // checksum field is located. func (c *dgramOpt) SetChecksum(on bool, offset int) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } if !on { offset = -1 } return setInt(fd, &sockOpts[ssoChecksum], offset) } // ICMPFilter returns an ICMP filter. func (c *dgramOpt) ICMPFilter() (*ICMPFilter, error) { if !c.ok() { return nil, syscall.EINVAL } fd, err := c.sysfd() if err != nil { return nil, err } return getICMPFilter(fd, &sockOpts[ssoICMPFilter]) } // SetICMPFilter deploys the ICMP filter. func (c *dgramOpt) SetICMPFilter(f *ICMPFilter) error { if !c.ok() { return syscall.EINVAL } fd, err := c.sysfd() if err != nil { return err } return setICMPFilter(fd, &sockOpts[ssoICMPFilter], f) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt_unix.go0000644061062106075000000000637412721405224025501 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux netbsd openbsd package ipv6 import ( "net" "os" "unsafe" ) func getInt(fd int, opt *sockOpt) (int, error) { if opt.name < 1 || opt.typ != ssoTypeInt { return 0, errOpNoSupport } var i int32 l := uint32(4) if err := getsockopt(fd, opt.level, opt.name, unsafe.Pointer(&i), &l); err != nil { return 0, os.NewSyscallError("getsockopt", err) } return int(i), nil } func setInt(fd int, opt *sockOpt, v int) error { if opt.name < 1 || opt.typ != ssoTypeInt { return errOpNoSupport } i := int32(v) return os.NewSyscallError("setsockopt", setsockopt(fd, opt.level, opt.name, unsafe.Pointer(&i), 4)) } func getInterface(fd int, opt *sockOpt) (*net.Interface, error) { if opt.name < 1 || opt.typ != ssoTypeInterface { return nil, errOpNoSupport } var i int32 l := uint32(4) if err := getsockopt(fd, opt.level, opt.name, unsafe.Pointer(&i), &l); err != nil { return nil, os.NewSyscallError("getsockopt", err) } if i == 0 { return nil, nil } ifi, err := net.InterfaceByIndex(int(i)) if err != nil { return nil, err } return ifi, nil } func setInterface(fd int, opt *sockOpt, ifi *net.Interface) error { if opt.name < 1 || opt.typ != ssoTypeInterface { return errOpNoSupport } var i int32 if ifi != nil { i = int32(ifi.Index) } return os.NewSyscallError("setsockopt", setsockopt(fd, opt.level, opt.name, unsafe.Pointer(&i), 4)) } func getICMPFilter(fd int, opt *sockOpt) (*ICMPFilter, error) { if opt.name < 1 || opt.typ != ssoTypeICMPFilter { return nil, errOpNoSupport } var f ICMPFilter l := uint32(sysSizeofICMPv6Filter) if err := getsockopt(fd, opt.level, opt.name, unsafe.Pointer(&f.sysICMPv6Filter), &l); err != nil { return nil, os.NewSyscallError("getsockopt", err) } return &f, nil } func setICMPFilter(fd int, opt *sockOpt, f *ICMPFilter) error { if opt.name < 1 || opt.typ != ssoTypeICMPFilter { return errOpNoSupport } return os.NewSyscallError("setsockopt", setsockopt(fd, opt.level, opt.name, unsafe.Pointer(&f.sysICMPv6Filter), sysSizeofICMPv6Filter)) } 
func getMTUInfo(fd int, opt *sockOpt) (*net.Interface, int, error) { if opt.name < 1 || opt.typ != ssoTypeMTUInfo { return nil, 0, errOpNoSupport } var mi sysIPv6Mtuinfo l := uint32(sysSizeofIPv6Mtuinfo) if err := getsockopt(fd, opt.level, opt.name, unsafe.Pointer(&mi), &l); err != nil { return nil, 0, os.NewSyscallError("getsockopt", err) } if mi.Addr.Scope_id == 0 { return nil, int(mi.Mtu), nil } ifi, err := net.InterfaceByIndex(int(mi.Addr.Scope_id)) if err != nil { return nil, 0, err } return ifi, int(mi.Mtu), nil } func setGroup(fd int, opt *sockOpt, ifi *net.Interface, grp net.IP) error { if opt.name < 1 { return errOpNoSupport } switch opt.typ { case ssoTypeIPMreq: return setsockoptIPMreq(fd, opt, ifi, grp) case ssoTypeGroupReq: return setsockoptGroupReq(fd, opt, ifi, grp) default: return errOpNoSupport } } func setSourceGroup(fd int, opt *sockOpt, ifi *net.Interface, grp, src net.IP) error { if opt.name < 1 || opt.typ != ssoTypeGroupSourceReq { return errOpNoSupport } return setsockoptGroupSourceReq(fd, opt, ifi, grp, src) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/doc.go0000644061062106075000000001725112721405224023521 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // Package ipv6 implements IP-level socket options for the Internet // Protocol version 6. // // The package provides IP-level socket options that allow // manipulation of IPv6 facilities. // // The IPv6 protocol is defined in RFC 2460. // Basic and advanced socket interface extensions are defined in RFC // 3493 and RFC 3542. // Socket interface extensions for multicast source filters are // defined in RFC 3678. // MLDv1 and MLDv2 are defined in RFC 2710 and RFC 3810. // Source-specific multicast is defined in RFC 4607. 
// // // Unicasting // // The options for unicasting are available for net.TCPConn, // net.UDPConn and net.IPConn which are created as network connections // that use the IPv6 transport. When a single TCP connection carrying // a data flow of multiple packets needs to indicate the flow is // important, ipv6.Conn is used to set the traffic class field on the // IPv6 header for each packet. // // ln, err := net.Listen("tcp6", "[::]:1024") // if err != nil { // // error handling // } // defer ln.Close() // for { // c, err := ln.Accept() // if err != nil { // // error handling // } // go func(c net.Conn) { // defer c.Close() // // The outgoing packets will be labeled DiffServ assured forwarding // class 1 low drop precedence, known as AF11 packets. // // if err := ipv6.NewConn(c).SetTrafficClass(0x28); err != nil { // // error handling // } // if _, err := c.Write(data); err != nil { // // error handling // } // }(c) // } // // // Multicasting // // The options for multicasting are available for net.UDPConn and // net.IPConn which are created as network connections that use the // IPv6 transport. A few network facilities must be prepared before // you begin multicasting, at a minimum joining network interfaces and // multicast groups. // // en0, err := net.InterfaceByName("en0") // if err != nil { // // error handling // } // en1, err := net.InterfaceByIndex(911) // if err != nil { // // error handling // } // group := net.ParseIP("ff02::114") // // First, an application listens to an appropriate address with an // appropriate service port. // // c, err := net.ListenPacket("udp6", "[::]:1024") // if err != nil { // // error handling // } // defer c.Close() // // Second, the application joins multicast groups, starts listening to // the groups on the specified network interfaces. Note that the // service port for transport layer protocol does not matter with this // operation as joining groups affects only network and link layer // protocols, such as IPv6 and Ethernet.
// // p := ipv6.NewPacketConn(c) // if err := p.JoinGroup(en0, &net.UDPAddr{IP: group}); err != nil { // // error handling // } // if err := p.JoinGroup(en1, &net.UDPAddr{IP: group}); err != nil { // // error handling // } // // The application might set per packet control message transmissions // between the protocol stack within the kernel. When the application // needs a destination address on an incoming packet, // SetControlMessage of ipv6.PacketConn is used to enable control // message transmissions. // // if err := p.SetControlMessage(ipv6.FlagDst, true); err != nil { // // error handling // } // // The application could identify whether the received packets are // of interest by using the control message that contains the // destination address of the received packet. // // b := make([]byte, 1500) // for { // n, rcm, src, err := p.ReadFrom(b) // if err != nil { // // error handling // } // if rcm.Dst.IsMulticast() { // if rcm.Dst.Equal(group) { // // joined group, do something // } else { // // unknown group, discard // continue // } // } // // The application can also send both unicast and multicast packets. // // p.SetTrafficClass(0x0) // p.SetHopLimit(16) // if _, err := p.WriteTo(data[:n], nil, src); err != nil { // // error handling // } // dst := &net.UDPAddr{IP: group, Port: 1024} // wcm := ipv6.ControlMessage{TrafficClass: 0xe0, HopLimit: 1} // for _, ifi := range []*net.Interface{en0, en1} { // wcm.IfIndex = ifi.Index // if _, err := p.WriteTo(data[:n], &wcm, dst); err != nil { // // error handling // } // } // } // // // More multicasting // // An application that uses PacketConn may join multiple multicast // groups.
// For example, a UDP listener with port 1024 might join two
// different groups across two different network interfaces by using:
//
//    c, err := net.ListenPacket("udp6", "[::]:1024")
//    if err != nil {
//        // error handling
//    }
//    defer c.Close()
//    p := ipv6.NewPacketConn(c)
//    if err := p.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::1:114")}); err != nil {
//        // error handling
//    }
//    if err := p.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::2:114")}); err != nil {
//        // error handling
//    }
//    if err := p.JoinGroup(en1, &net.UDPAddr{IP: net.ParseIP("ff02::2:114")}); err != nil {
//        // error handling
//    }
//
// It is possible for multiple UDP listeners that listen on the same
// UDP port to join the same multicast group. The net package will
// provide a socket that listens to a wildcard address with a reusable
// UDP port when an appropriate multicast address prefix is passed to
// net.ListenPacket or net.ListenUDP.
//
//    c1, err := net.ListenPacket("udp6", "[ff02::]:1024")
//    if err != nil {
//        // error handling
//    }
//    defer c1.Close()
//    c2, err := net.ListenPacket("udp6", "[ff02::]:1024")
//    if err != nil {
//        // error handling
//    }
//    defer c2.Close()
//    p1 := ipv6.NewPacketConn(c1)
//    if err := p1.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::114")}); err != nil {
//        // error handling
//    }
//    p2 := ipv6.NewPacketConn(c2)
//    if err := p2.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::114")}); err != nil {
//        // error handling
//    }
//
// It is also possible for the application to leave or rejoin a
// multicast group on the network interface.
//
//    if err := p.LeaveGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff02::114")}); err != nil {
//        // error handling
//    }
//    if err := p.JoinGroup(en0, &net.UDPAddr{IP: net.ParseIP("ff01::114")}); err != nil {
//        // error handling
//    }
//
//
// Source-specific multicasting
//
// An application that uses PacketConn on an MLDv2-capable platform is
// able to join source-specific multicast groups.
// The application may use JoinSourceSpecificGroup and
// LeaveSourceSpecificGroup for the operation known as "include" mode,
//
//    ssmgroup := net.UDPAddr{IP: net.ParseIP("ff32::8000:9")}
//    ssmsource := net.UDPAddr{IP: net.ParseIP("fe80::cafe")}
//    if err := p.JoinSourceSpecificGroup(en0, &ssmgroup, &ssmsource); err != nil {
//        // error handling
//    }
//    if err := p.LeaveSourceSpecificGroup(en0, &ssmgroup, &ssmsource); err != nil {
//        // error handling
//    }
//
// or JoinGroup, ExcludeSourceSpecificGroup,
// IncludeSourceSpecificGroup and LeaveGroup for the operation known
// as "exclude" mode.
//
//    exclsource := net.UDPAddr{IP: net.ParseIP("fe80::dead")}
//    if err := p.JoinGroup(en0, &ssmgroup); err != nil {
//        // error handling
//    }
//    if err := p.ExcludeSourceSpecificGroup(en0, &ssmgroup, &exclsource); err != nil {
//        // error handling
//    }
//    if err := p.LeaveGroup(en0, &ssmgroup); err != nil {
//        // error handling
//    }
//
// Note that it depends on each platform implementation what happens
// when an application running on a platform without MLDv2 support
// uses JoinSourceSpecificGroup and LeaveSourceSpecificGroup.
// In general the platform tries to fall back to conversations using
// MLDv1 and starts to listen to multicast traffic.
// In the fallback case, ExcludeSourceSpecificGroup and
// IncludeSourceSpecificGroup may return an error.
package ipv6 // import "golang.org/x/net/ipv6"
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/defs_openbsd.go0000644061062106075000000000471112721405224025404 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore

// +godefs map struct_in6_addr [16]byte /* in6_addr */

package ipv6

/*
#include <sys/param.h>
#include <sys/socket.h>

#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"

const (
    sysIPV6_UNICAST_HOPS   = C.IPV6_UNICAST_HOPS
    sysIPV6_MULTICAST_IF   = C.IPV6_MULTICAST_IF
    sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
    sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
    sysIPV6_JOIN_GROUP     = C.IPV6_JOIN_GROUP
    sysIPV6_LEAVE_GROUP    = C.IPV6_LEAVE_GROUP
    sysIPV6_PORTRANGE      = C.IPV6_PORTRANGE
    sysICMP6_FILTER        = C.ICMP6_FILTER

    sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
    sysIPV6_V6ONLY   = C.IPV6_V6ONLY

    sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS

    sysIPV6_RECVPKTINFO  = C.IPV6_RECVPKTINFO
    sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
    sysIPV6_RECVRTHDR    = C.IPV6_RECVRTHDR
    sysIPV6_RECVHOPOPTS  = C.IPV6_RECVHOPOPTS
    sysIPV6_RECVDSTOPTS  = C.IPV6_RECVDSTOPTS

    sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
    sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
    sysIPV6_PATHMTU     = C.IPV6_PATHMTU

    sysIPV6_PKTINFO  = C.IPV6_PKTINFO
    sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
    sysIPV6_NEXTHOP  = C.IPV6_NEXTHOP
    sysIPV6_HOPOPTS  = C.IPV6_HOPOPTS
    sysIPV6_DSTOPTS  = C.IPV6_DSTOPTS
    sysIPV6_RTHDR    = C.IPV6_RTHDR

    sysIPV6_AUTH_LEVEL        = C.IPV6_AUTH_LEVEL
    sysIPV6_ESP_TRANS_LEVEL   = C.IPV6_ESP_TRANS_LEVEL
    sysIPV6_ESP_NETWORK_LEVEL = C.IPV6_ESP_NETWORK_LEVEL
    sysIPSEC6_OUTSA           = C.IPSEC6_OUTSA
    sysIPV6_RECVTCLASS        = C.IPV6_RECVTCLASS

    sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL

    sysIPV6_IPCOMP_LEVEL = C.IPV6_IPCOMP_LEVEL

    sysIPV6_TCLASS   = C.IPV6_TCLASS
    sysIPV6_DONTFRAG = C.IPV6_DONTFRAG
    sysIPV6_PIPEX    = C.IPV6_PIPEX

    sysIPV6_RTABLE = C.IPV6_RTABLE

    sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
    sysIPV6_PORTRANGE_HIGH    = C.IPV6_PORTRANGE_HIGH
    sysIPV6_PORTRANGE_LOW     = C.IPV6_PORTRANGE_LOW

    sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
    sysSizeofInet6Pktinfo  = C.sizeof_struct_in6_pktinfo
    sysSizeofIPv6Mtuinfo   = C.sizeof_struct_ip6_mtuinfo

    sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq

    sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)

type sysSockaddrInet6 C.struct_sockaddr_in6

type sysInet6Pktinfo
C.struct_in6_pktinfo

type sysIPv6Mtuinfo C.struct_ip6_mtuinfo

type sysIPv6Mreq C.struct_ipv6_mreq

type sysICMPv6Filter C.struct_icmp6_filter
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt_ssmreq_unix.go0000644061062106075000000000267212721405224027074 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build darwin freebsd linux

package ipv6

import (
    "net"
    "os"
    "unsafe"
)

var freebsd32o64 bool

func setsockoptGroupReq(fd int, opt *sockOpt, ifi *net.Interface, grp net.IP) error {
    var gr sysGroupReq
    if ifi != nil {
        gr.Interface = uint32(ifi.Index)
    }
    gr.setGroup(grp)
    var p unsafe.Pointer
    var l uint32
    if freebsd32o64 {
        var d [sysSizeofGroupReq + 4]byte
        s := (*[sysSizeofGroupReq]byte)(unsafe.Pointer(&gr))
        copy(d[:4], s[:4])
        copy(d[8:], s[4:])
        p = unsafe.Pointer(&d[0])
        l = sysSizeofGroupReq + 4
    } else {
        p = unsafe.Pointer(&gr)
        l = sysSizeofGroupReq
    }
    return os.NewSyscallError("setsockopt", setsockopt(fd, opt.level, opt.name, p, l))
}

func setsockoptGroupSourceReq(fd int, opt *sockOpt, ifi *net.Interface, grp, src net.IP) error {
    var gsr sysGroupSourceReq
    if ifi != nil {
        gsr.Interface = uint32(ifi.Index)
    }
    gsr.setSourceGroup(grp, src)
    var p unsafe.Pointer
    var l uint32
    if freebsd32o64 {
        var d [sysSizeofGroupSourceReq + 4]byte
        s := (*[sysSizeofGroupSourceReq]byte)(unsafe.Pointer(&gsr))
        copy(d[:4], s[:4])
        copy(d[8:], s[4:])
        p = unsafe.Pointer(&d[0])
        l = sysSizeofGroupSourceReq + 4
    } else {
        p = unsafe.Pointer(&gsr)
        l = sysSizeofGroupSourceReq
    }
    return os.NewSyscallError("setsockopt", setsockopt(fd, opt.level, opt.name, p, l))
}
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_amd64.go0000644061062106075000000000741212721405224026174 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs defs_linux.go

package ipv6

const (
    sysIPV6_ADDRFORM    = 0x1
    sysIPV6_2292PKTINFO = 0x2
sysIPV6_2292HOPOPTS = 0x3 sysIPV6_2292DSTOPTS = 0x4 sysIPV6_2292RTHDR = 0x5 sysIPV6_2292PKTOPTIONS = 0x6 sysIPV6_CHECKSUM = 0x7 sysIPV6_2292HOPLIMIT = 0x8 sysIPV6_NEXTHOP = 0x9 sysIPV6_FLOWINFO = 0xb sysIPV6_UNICAST_HOPS = 0x10 sysIPV6_MULTICAST_IF = 0x11 sysIPV6_MULTICAST_HOPS = 0x12 sysIPV6_MULTICAST_LOOP = 0x13 sysIPV6_ADD_MEMBERSHIP = 0x14 sysIPV6_DROP_MEMBERSHIP = 0x15 sysMCAST_JOIN_GROUP = 0x2a sysMCAST_LEAVE_GROUP = 0x2d sysMCAST_JOIN_SOURCE_GROUP = 0x2e sysMCAST_LEAVE_SOURCE_GROUP = 0x2f sysMCAST_BLOCK_SOURCE = 0x2b sysMCAST_UNBLOCK_SOURCE = 0x2c sysMCAST_MSFILTER = 0x30 sysIPV6_ROUTER_ALERT = 0x16 sysIPV6_MTU_DISCOVER = 0x17 sysIPV6_MTU = 0x18 sysIPV6_RECVERR = 0x19 sysIPV6_V6ONLY = 0x1a sysIPV6_JOIN_ANYCAST = 0x1b sysIPV6_LEAVE_ANYCAST = 0x1c sysIPV6_FLOWLABEL_MGR = 0x20 sysIPV6_FLOWINFO_SEND = 0x21 sysIPV6_IPSEC_POLICY = 0x22 sysIPV6_XFRM_POLICY = 0x23 sysIPV6_RECVPKTINFO = 0x31 sysIPV6_PKTINFO = 0x32 sysIPV6_RECVHOPLIMIT = 0x33 sysIPV6_HOPLIMIT = 0x34 sysIPV6_RECVHOPOPTS = 0x35 sysIPV6_HOPOPTS = 0x36 sysIPV6_RTHDRDSTOPTS = 0x37 sysIPV6_RECVRTHDR = 0x38 sysIPV6_RTHDR = 0x39 sysIPV6_RECVDSTOPTS = 0x3a sysIPV6_DSTOPTS = 0x3b sysIPV6_RECVPATHMTU = 0x3c sysIPV6_PATHMTU = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_RECVTCLASS = 0x42 sysIPV6_TCLASS = 0x43 sysIPV6_ADDR_PREFERENCES = 0x48 sysIPV6_PREFER_SRC_TMP = 0x1 sysIPV6_PREFER_SRC_PUBLIC = 0x2 sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100 sysIPV6_PREFER_SRC_COA = 0x4 sysIPV6_PREFER_SRC_HOME = 0x400 sysIPV6_PREFER_SRC_CGA = 0x8 sysIPV6_PREFER_SRC_NONCGA = 0x800 sysIPV6_MINHOPCOUNT = 0x49 sysIPV6_ORIGDSTADDR = 0x4a sysIPV6_RECVORIGDSTADDR = 0x4a sysIPV6_TRANSPARENT = 0x4b sysIPV6_UNICAST_IF = 0x4c sysICMPV6_FILTER = 0x1 sysICMPV6_FILTER_BLOCK = 0x1 sysICMPV6_FILTER_PASS = 0x2 sysICMPV6_FILTER_BLOCKOTHERS = 0x3 sysICMPV6_FILTER_PASSONLY = 0x4 sysSOL_SOCKET = 0x1 sysSO_ATTACH_FILTER = 0x1a sysSizeofKernelSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 
sysSizeofIPv6FlowlabelReq = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x88 sysSizeofGroupSourceReq = 0x108 sysSizeofICMPv6Filter = 0x20 ) type sysKernelSockaddrStorage struct { Family uint16 X__data [126]int8 } type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex int32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6FlowlabelReq struct { Dst [16]byte /* in6_addr */ Label uint32 Action uint8 Share uint8 Flags uint16 Expires uint16 Linger uint16 X__flr_pad uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Ifindex int32 } type sysGroupReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage Source sysKernelSockaddrStorage } type sysICMPv6Filter struct { Data [8]uint32 } type sysSockFProg struct { Len uint16 Pad_cgo_0 [6]byte Filter *sysSockFilter } type sysSockFilter struct { Code uint16 Jt uint8 Jf uint8 K uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/iana.go0000644061062106075000000001125612721405224023663 0ustar00stgraberdomain admins00000000000000// go generate gen.go // GENERATED BY THE COMMAND ABOVE; DO NOT EDIT package ipv6 // Internet Control Message Protocol version 6 (ICMPv6) Parameters, Updated: 2015-07-07 const ( ICMPTypeDestinationUnreachable ICMPType = 1 // Destination Unreachable ICMPTypePacketTooBig ICMPType = 2 // Packet Too Big ICMPTypeTimeExceeded ICMPType = 3 // Time Exceeded ICMPTypeParameterProblem ICMPType = 4 // Parameter Problem ICMPTypeEchoRequest ICMPType = 128 // Echo Request ICMPTypeEchoReply ICMPType = 129 // Echo Reply ICMPTypeMulticastListenerQuery ICMPType = 130 // Multicast Listener Query ICMPTypeMulticastListenerReport ICMPType = 131 // Multicast Listener Report ICMPTypeMulticastListenerDone ICMPType = 132 // Multicast 
Listener Done ICMPTypeRouterSolicitation ICMPType = 133 // Router Solicitation ICMPTypeRouterAdvertisement ICMPType = 134 // Router Advertisement ICMPTypeNeighborSolicitation ICMPType = 135 // Neighbor Solicitation ICMPTypeNeighborAdvertisement ICMPType = 136 // Neighbor Advertisement ICMPTypeRedirect ICMPType = 137 // Redirect Message ICMPTypeRouterRenumbering ICMPType = 138 // Router Renumbering ICMPTypeNodeInformationQuery ICMPType = 139 // ICMP Node Information Query ICMPTypeNodeInformationResponse ICMPType = 140 // ICMP Node Information Response ICMPTypeInverseNeighborDiscoverySolicitation ICMPType = 141 // Inverse Neighbor Discovery Solicitation Message ICMPTypeInverseNeighborDiscoveryAdvertisement ICMPType = 142 // Inverse Neighbor Discovery Advertisement Message ICMPTypeVersion2MulticastListenerReport ICMPType = 143 // Version 2 Multicast Listener Report ICMPTypeHomeAgentAddressDiscoveryRequest ICMPType = 144 // Home Agent Address Discovery Request Message ICMPTypeHomeAgentAddressDiscoveryReply ICMPType = 145 // Home Agent Address Discovery Reply Message ICMPTypeMobilePrefixSolicitation ICMPType = 146 // Mobile Prefix Solicitation ICMPTypeMobilePrefixAdvertisement ICMPType = 147 // Mobile Prefix Advertisement ICMPTypeCertificationPathSolicitation ICMPType = 148 // Certification Path Solicitation Message ICMPTypeCertificationPathAdvertisement ICMPType = 149 // Certification Path Advertisement Message ICMPTypeMulticastRouterAdvertisement ICMPType = 151 // Multicast Router Advertisement ICMPTypeMulticastRouterSolicitation ICMPType = 152 // Multicast Router Solicitation ICMPTypeMulticastRouterTermination ICMPType = 153 // Multicast Router Termination ICMPTypeFMIPv6 ICMPType = 154 // FMIPv6 Messages ICMPTypeRPLControl ICMPType = 155 // RPL Control Message ICMPTypeILNPv6LocatorUpdate ICMPType = 156 // ILNPv6 Locator Update Message ICMPTypeDuplicateAddressRequest ICMPType = 157 // Duplicate Address Request ICMPTypeDuplicateAddressConfirmation ICMPType = 158 // 
Duplicate Address Confirmation
    ICMPTypeMPLControl                            ICMPType = 159 // MPL Control Message
)

// Internet Control Message Protocol version 6 (ICMPv6) Parameters, Updated: 2015-07-07
var icmpTypes = map[ICMPType]string{
    1:   "destination unreachable",
    2:   "packet too big",
    3:   "time exceeded",
    4:   "parameter problem",
    128: "echo request",
    129: "echo reply",
    130: "multicast listener query",
    131: "multicast listener report",
    132: "multicast listener done",
    133: "router solicitation",
    134: "router advertisement",
    135: "neighbor solicitation",
    136: "neighbor advertisement",
    137: "redirect message",
    138: "router renumbering",
    139: "icmp node information query",
    140: "icmp node information response",
    141: "inverse neighbor discovery solicitation message",
    142: "inverse neighbor discovery advertisement message",
    143: "version 2 multicast listener report",
    144: "home agent address discovery request message",
    145: "home agent address discovery reply message",
    146: "mobile prefix solicitation",
    147: "mobile prefix advertisement",
    148: "certification path solicitation message",
    149: "certification path advertisement message",
    151: "multicast router advertisement",
    152: "multicast router solicitation",
    153: "multicast router termination",
    154: "fmipv6 messages",
    155: "rpl control message",
    156: "ilnpv6 locator update message",
    157: "duplicate address request",
    158: "duplicate address confirmation",
    159: "mpl control message",
}
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/readwrite_test.go0000644061062106075000000001033012721405224025770 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
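The generated icmpTypes table above backs the package's ICMPType-to-name lookup. A minimal self-contained sketch of the same lookup-with-fallback pattern; the table here is trimmed and the "unknown" sentinel is this sketch's own choice, not the package's:

```go
package main

import "fmt"

// icmpTypeName mirrors the table-lookup pattern used by the package:
// map the ICMPv6 type number to its registered name, falling back to
// a sentinel for unregistered types. The table is trimmed to a few
// entries and the "unknown" sentinel is this sketch's own choice.
func icmpTypeName(typ int) string {
	names := map[int]string{
		1:   "destination unreachable",
		2:   "packet too big",
		3:   "time exceeded",
		4:   "parameter problem",
		128: "echo request",
		129: "echo reply",
	}
	if s, ok := names[typ]; ok {
		return s
	}
	return "unknown"
}

func main() {
	fmt.Println(icmpTypeName(128)) // echo request
	fmt.Println(icmpTypeName(200)) // unknown
}
```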
package ipv6_test import ( "bytes" "net" "runtime" "strings" "sync" "testing" "golang.org/x/net/internal/iana" "golang.org/x/net/internal/nettest" "golang.org/x/net/ipv6" ) func benchmarkUDPListener() (net.PacketConn, net.Addr, error) { c, err := net.ListenPacket("udp6", "[::1]:0") if err != nil { return nil, nil, err } dst, err := net.ResolveUDPAddr("udp6", c.LocalAddr().String()) if err != nil { c.Close() return nil, nil, err } return c, dst, nil } func BenchmarkReadWriteNetUDP(b *testing.B) { if !supportsIPv6 { b.Skip("ipv6 is not supported") } c, dst, err := benchmarkUDPListener() if err != nil { b.Fatal(err) } defer c.Close() wb, rb := []byte("HELLO-R-U-THERE"), make([]byte, 128) b.ResetTimer() for i := 0; i < b.N; i++ { benchmarkReadWriteNetUDP(b, c, wb, rb, dst) } } func benchmarkReadWriteNetUDP(b *testing.B, c net.PacketConn, wb, rb []byte, dst net.Addr) { if _, err := c.WriteTo(wb, dst); err != nil { b.Fatal(err) } if _, _, err := c.ReadFrom(rb); err != nil { b.Fatal(err) } } func BenchmarkReadWriteIPv6UDP(b *testing.B) { if !supportsIPv6 { b.Skip("ipv6 is not supported") } c, dst, err := benchmarkUDPListener() if err != nil { b.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU if err := p.SetControlMessage(cf, true); err != nil { b.Fatal(err) } ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagLoopback) wb, rb := []byte("HELLO-R-U-THERE"), make([]byte, 128) b.ResetTimer() for i := 0; i < b.N; i++ { benchmarkReadWriteIPv6UDP(b, p, wb, rb, dst, ifi) } } func benchmarkReadWriteIPv6UDP(b *testing.B, p *ipv6.PacketConn, wb, rb []byte, dst net.Addr, ifi *net.Interface) { cm := ipv6.ControlMessage{ TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced, HopLimit: 1, } if ifi != nil { cm.IfIndex = ifi.Index } if n, err := p.WriteTo(wb, &cm, dst); err != nil { b.Fatal(err) } else if n != len(wb) { b.Fatalf("got %v; want %v", n, 
len(wb)) } if _, _, _, err := p.ReadFrom(rb); err != nil { b.Fatal(err) } } func TestPacketConnConcurrentReadWriteUnicastUDP(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } c, err := net.ListenPacket("udp6", "[::1]:0") if err != nil { t.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) defer p.Close() dst, err := net.ResolveUDPAddr("udp6", c.LocalAddr().String()) if err != nil { t.Fatal(err) } ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagLoopback) cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU wb := []byte("HELLO-R-U-THERE") if err := p.SetControlMessage(cf, true); err != nil { // probe before test if nettest.ProtocolNotSupported(err) { t.Skipf("not supported on %s", runtime.GOOS) } t.Fatal(err) } var wg sync.WaitGroup reader := func() { defer wg.Done() rb := make([]byte, 128) if n, cm, _, err := p.ReadFrom(rb); err != nil { t.Error(err) return } else if !bytes.Equal(rb[:n], wb) { t.Errorf("got %v; want %v", rb[:n], wb) return } else { s := cm.String() if strings.Contains(s, ",") { t.Errorf("should be space-separated values: %s", s) } } } writer := func(toggle bool) { defer wg.Done() cm := ipv6.ControlMessage{ TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced, Src: net.IPv6loopback, } if ifi != nil { cm.IfIndex = ifi.Index } if err := p.SetControlMessage(cf, toggle); err != nil { t.Error(err) return } if n, err := p.WriteTo(wb, &cm, dst); err != nil { t.Error(err) return } else if n != len(wb) { t.Errorf("got %v; want %v", n, len(wb)) return } } const N = 10 wg.Add(N) for i := 0; i < N; i++ { go reader() } wg.Add(2 * N) for i := 0; i < 2*N; i++ { go writer(i%2 != 0) } wg.Add(N) for i := 0; i < N; i++ { go reader() } wg.Wait() } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/icmp_bsd.go0000644061062106075000000000124112721405224024524 
0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build darwin dragonfly freebsd netbsd openbsd

package ipv6

func (f *sysICMPv6Filter) accept(typ ICMPType) {
    f.Filt[typ>>5] |= 1 << (uint32(typ) & 31)
}

func (f *sysICMPv6Filter) block(typ ICMPType) {
    f.Filt[typ>>5] &^= 1 << (uint32(typ) & 31)
}

func (f *sysICMPv6Filter) setAll(block bool) {
    for i := range f.Filt {
        if block {
            f.Filt[i] = 0
        } else {
            f.Filt[i] = 1<<32 - 1
        }
    }
}

func (f *sysICMPv6Filter) willBlock(typ ICMPType) bool {
    return f.Filt[typ>>5]&(1<<(uint32(typ)&31)) == 0
}
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/bpf_test.go0000644061062106075000000000354712721405224024563 0ustar00stgraberdomain admins00000000000000// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6_test

import (
    "net"
    "runtime"
    "testing"
    "time"

    "golang.org/x/net/bpf"
    "golang.org/x/net/ipv6"
)

func TestBPF(t *testing.T) {
    if runtime.GOOS != "linux" {
        t.Skipf("not supported on %s", runtime.GOOS)
    }

    l, err := net.ListenPacket("udp6", "[::1]:0")
    if err != nil {
        t.Fatal(err)
    }
    defer l.Close()

    p := ipv6.NewPacketConn(l)

    // This filter accepts UDP packets whose first payload byte is
    // even.
    prog, err := bpf.Assemble([]bpf.Instruction{
        // Load the first byte of the payload (skipping UDP header).
        bpf.LoadAbsolute{Off: 8, Size: 1},
        // Select LSB of the byte.
        bpf.ALUOpConstant{Op: bpf.ALUOpAnd, Val: 1},
        // Byte is even?
        bpf.JumpIf{Cond: bpf.JumpEqual, Val: 0, SkipFalse: 1},
        // Accept.
        bpf.RetConstant{Val: 4096},
        // Ignore.
        bpf.RetConstant{Val: 0},
    })
    if err != nil {
        t.Fatalf("compiling BPF: %s", err)
    }

    if err = p.SetBPF(prog); err != nil {
        t.Fatalf("attaching filter to Conn: %s", err)
    }

    s, err := net.Dial("udp6", l.LocalAddr().String())
    if err != nil {
        t.Fatal(err)
    }
    defer s.Close()
    go func() {
        for i := byte(0); i < 10; i++ {
            s.Write([]byte{i})
        }
    }()

    l.SetDeadline(time.Now().Add(2 * time.Second))
    seen := make([]bool, 5)
    for {
        var b [512]byte
        n, _, err := l.ReadFrom(b[:])
        if err != nil {
            t.Fatalf("reading from listener: %s", err)
        }
        if n != 1 {
            t.Fatalf("unexpected packet length, want 1, got %d", n)
        }
        if b[0] >= 10 {
            t.Fatalf("unexpected byte, want 0-9, got %d", b[0])
        }
        if b[0]%2 != 0 {
            t.Fatalf("got odd byte %d, wanted only even bytes", b[0])
        }
        seen[b[0]/2] = true

        seenAll := true
        for _, v := range seen {
            if !v {
                seenAll = false
                break
            }
        }
        if seenAll {
            break
        }
    }
}
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sys_stub.go0000644061062106075000000000041712721405224024623 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build nacl plan9 solaris

package ipv6

var (
    ctlOpts = [ctlMax]ctlOpt{}

    sockOpts = [ssoMax]sockOpt{}
)
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/multicastsockopt_test.go0000644061062106075000000001024712721405224027421 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
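TestBPF above attaches a filter that accepts only UDP packets whose first payload byte is even. The predicate that BPF program computes can be restated in plain Go; this is a sketch of the filter's semantics, not the BPF virtual machine:

```go
package main

import "fmt"

// accepts restates the BPF program's decision: load the byte at
// offset 8 (just past the fixed 8-byte UDP header), mask its low
// bit, and accept the datagram when the byte is even. A load past
// the end of the packet terminates the program without accepting,
// so datagrams with no payload are dropped too.
func accepts(datagram []byte) bool {
	const udpHeaderLen = 8
	if len(datagram) <= udpHeaderLen {
		return false
	}
	return datagram[udpHeaderLen]&1 == 0
}

func main() {
	hdr := make([]byte, 8) // stand-in for the UDP header bytes
	fmt.Println(accepts(append(hdr, 0x04))) // true: even first payload byte
	fmt.Println(accepts(append(hdr, 0x07))) // false: odd first payload byte
}
```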
package ipv6_test import ( "net" "runtime" "testing" "golang.org/x/net/internal/nettest" "golang.org/x/net/ipv6" ) var packetConnMulticastSocketOptionTests = []struct { net, proto, addr string grp, src net.Addr }{ {"udp6", "", "[ff02::]:0", &net.UDPAddr{IP: net.ParseIP("ff02::114")}, nil}, // see RFC 4727 {"ip6", ":ipv6-icmp", "::", &net.IPAddr{IP: net.ParseIP("ff02::115")}, nil}, // see RFC 4727 {"udp6", "", "[ff30::8000:0]:0", &net.UDPAddr{IP: net.ParseIP("ff30::8000:1")}, &net.UDPAddr{IP: net.IPv6loopback}}, // see RFC 5771 {"ip6", ":ipv6-icmp", "::", &net.IPAddr{IP: net.ParseIP("ff30::8000:2")}, &net.IPAddr{IP: net.IPv6loopback}}, // see RFC 5771 } func TestPacketConnMulticastSocketOptions(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagMulticast|net.FlagLoopback) if ifi == nil { t.Skipf("not available on %s", runtime.GOOS) } m, ok := nettest.SupportsRawIPSocket() for _, tt := range packetConnMulticastSocketOptionTests { if tt.net == "ip6" && !ok { t.Log(m) continue } c, err := net.ListenPacket(tt.net+tt.proto, tt.addr) if err != nil { t.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) defer p.Close() if tt.src == nil { testMulticastSocketOptions(t, p, ifi, tt.grp) } else { testSourceSpecificMulticastSocketOptions(t, p, ifi, tt.grp, tt.src) } } } type testIPv6MulticastConn interface { MulticastHopLimit() (int, error) SetMulticastHopLimit(ttl int) error MulticastLoopback() (bool, error) SetMulticastLoopback(bool) error JoinGroup(*net.Interface, net.Addr) error LeaveGroup(*net.Interface, net.Addr) error JoinSourceSpecificGroup(*net.Interface, net.Addr, net.Addr) error LeaveSourceSpecificGroup(*net.Interface, net.Addr, net.Addr) error ExcludeSourceSpecificGroup(*net.Interface, net.Addr, net.Addr) error IncludeSourceSpecificGroup(*net.Interface, net.Addr, net.Addr) error 
} func testMulticastSocketOptions(t *testing.T, c testIPv6MulticastConn, ifi *net.Interface, grp net.Addr) { const hoplim = 255 if err := c.SetMulticastHopLimit(hoplim); err != nil { t.Error(err) return } if v, err := c.MulticastHopLimit(); err != nil { t.Error(err) return } else if v != hoplim { t.Errorf("got %v; want %v", v, hoplim) return } for _, toggle := range []bool{true, false} { if err := c.SetMulticastLoopback(toggle); err != nil { t.Error(err) return } if v, err := c.MulticastLoopback(); err != nil { t.Error(err) return } else if v != toggle { t.Errorf("got %v; want %v", v, toggle) return } } if err := c.JoinGroup(ifi, grp); err != nil { t.Error(err) return } if err := c.LeaveGroup(ifi, grp); err != nil { t.Error(err) return } } func testSourceSpecificMulticastSocketOptions(t *testing.T, c testIPv6MulticastConn, ifi *net.Interface, grp, src net.Addr) { // MCAST_JOIN_GROUP -> MCAST_BLOCK_SOURCE -> MCAST_UNBLOCK_SOURCE -> MCAST_LEAVE_GROUP if err := c.JoinGroup(ifi, grp); err != nil { t.Error(err) return } if err := c.ExcludeSourceSpecificGroup(ifi, grp, src); err != nil { switch runtime.GOOS { case "freebsd", "linux": default: // platforms that don't support MLDv2 fail here t.Logf("not supported on %s", runtime.GOOS) return } t.Error(err) return } if err := c.IncludeSourceSpecificGroup(ifi, grp, src); err != nil { t.Error(err) return } if err := c.LeaveGroup(ifi, grp); err != nil { t.Error(err) return } // MCAST_JOIN_SOURCE_GROUP -> MCAST_LEAVE_SOURCE_GROUP if err := c.JoinSourceSpecificGroup(ifi, grp, src); err != nil { t.Error(err) return } if err := c.LeaveSourceSpecificGroup(ifi, grp, src); err != nil { t.Error(err) return } // MCAST_JOIN_SOURCE_GROUP -> MCAST_LEAVE_GROUP if err := c.JoinSourceSpecificGroup(ifi, grp, src); err != nil { t.Error(err) return } if err := c.LeaveGroup(ifi, grp); err != nil { t.Error(err) return } } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sys_freebsd.go0000644061062106075000000000651612721405224025266 
0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6 import ( "net" "runtime" "strings" "syscall" "unsafe" "golang.org/x/net/internal/iana" ) var ( ctlOpts = [ctlMax]ctlOpt{ ctlTrafficClass: {sysIPV6_TCLASS, 4, marshalTrafficClass, parseTrafficClass}, ctlHopLimit: {sysIPV6_HOPLIMIT, 4, marshalHopLimit, parseHopLimit}, ctlPacketInfo: {sysIPV6_PKTINFO, sysSizeofInet6Pktinfo, marshalPacketInfo, parsePacketInfo}, ctlNextHop: {sysIPV6_NEXTHOP, sysSizeofSockaddrInet6, marshalNextHop, parseNextHop}, ctlPathMTU: {sysIPV6_PATHMTU, sysSizeofIPv6Mtuinfo, marshalPathMTU, parsePathMTU}, } sockOpts = [ssoMax]sockOpt{ ssoTrafficClass: {iana.ProtocolIPv6, sysIPV6_TCLASS, ssoTypeInt}, ssoHopLimit: {iana.ProtocolIPv6, sysIPV6_UNICAST_HOPS, ssoTypeInt}, ssoMulticastInterface: {iana.ProtocolIPv6, sysIPV6_MULTICAST_IF, ssoTypeInterface}, ssoMulticastHopLimit: {iana.ProtocolIPv6, sysIPV6_MULTICAST_HOPS, ssoTypeInt}, ssoMulticastLoopback: {iana.ProtocolIPv6, sysIPV6_MULTICAST_LOOP, ssoTypeInt}, ssoReceiveTrafficClass: {iana.ProtocolIPv6, sysIPV6_RECVTCLASS, ssoTypeInt}, ssoReceiveHopLimit: {iana.ProtocolIPv6, sysIPV6_RECVHOPLIMIT, ssoTypeInt}, ssoReceivePacketInfo: {iana.ProtocolIPv6, sysIPV6_RECVPKTINFO, ssoTypeInt}, ssoReceivePathMTU: {iana.ProtocolIPv6, sysIPV6_RECVPATHMTU, ssoTypeInt}, ssoPathMTU: {iana.ProtocolIPv6, sysIPV6_PATHMTU, ssoTypeMTUInfo}, ssoChecksum: {iana.ProtocolIPv6, sysIPV6_CHECKSUM, ssoTypeInt}, ssoICMPFilter: {iana.ProtocolIPv6ICMP, sysICMP6_FILTER, ssoTypeICMPFilter}, ssoJoinGroup: {iana.ProtocolIPv6, sysMCAST_JOIN_GROUP, ssoTypeGroupReq}, ssoLeaveGroup: {iana.ProtocolIPv6, sysMCAST_LEAVE_GROUP, ssoTypeGroupReq}, ssoJoinSourceGroup: {iana.ProtocolIPv6, sysMCAST_JOIN_SOURCE_GROUP, ssoTypeGroupSourceReq}, ssoLeaveSourceGroup: {iana.ProtocolIPv6, sysMCAST_LEAVE_SOURCE_GROUP, ssoTypeGroupSourceReq}, 
        ssoBlockSourceGroup:   {iana.ProtocolIPv6, sysMCAST_BLOCK_SOURCE, ssoTypeGroupSourceReq},
        ssoUnblockSourceGroup: {iana.ProtocolIPv6, sysMCAST_UNBLOCK_SOURCE, ssoTypeGroupSourceReq},
    }
)

func init() {
    if runtime.GOOS == "freebsd" && runtime.GOARCH == "386" {
        archs, _ := syscall.Sysctl("kern.supported_archs")
        for _, s := range strings.Fields(archs) {
            if s == "amd64" {
                freebsd32o64 = true
                break
            }
        }
    }
}

func (sa *sysSockaddrInet6) setSockaddr(ip net.IP, i int) {
    sa.Len = sysSizeofSockaddrInet6
    sa.Family = syscall.AF_INET6
    copy(sa.Addr[:], ip)
    sa.Scope_id = uint32(i)
}

func (pi *sysInet6Pktinfo) setIfindex(i int) {
    pi.Ifindex = uint32(i)
}

func (mreq *sysIPv6Mreq) setIfindex(i int) {
    mreq.Interface = uint32(i)
}

func (gr *sysGroupReq) setGroup(grp net.IP) {
    sa := (*sysSockaddrInet6)(unsafe.Pointer(&gr.Group))
    sa.Len = sysSizeofSockaddrInet6
    sa.Family = syscall.AF_INET6
    copy(sa.Addr[:], grp)
}

func (gsr *sysGroupSourceReq) setSourceGroup(grp, src net.IP) {
    sa := (*sysSockaddrInet6)(unsafe.Pointer(&gsr.Group))
    sa.Len = sysSizeofSockaddrInet6
    sa.Family = syscall.AF_INET6
    copy(sa.Addr[:], grp)
    sa = (*sysSockaddrInet6)(unsafe.Pointer(&gsr.Source))
    sa.Len = sysSizeofSockaddrInet6
    sa.Family = syscall.AF_INET6
    copy(sa.Addr[:], src)
}
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/icmp.go0000644061062106075000000000276112721405224023704 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6

import "golang.org/x/net/internal/iana"

// An ICMPType represents a type of ICMP message.
type ICMPType int

func (typ ICMPType) String() string {
    s, ok := icmpTypes[typ]
    if !ok {
        return "<nil>"
    }
    return s
}

// Protocol returns the ICMPv6 protocol number.
func (typ ICMPType) Protocol() int {
    return iana.ProtocolIPv6ICMP
}

// An ICMPFilter represents an ICMP message filter for incoming
// packets.
The filter belongs to a packet delivery path on a host and // it cannot interact with forwarding packets or tunnel-outer packets. // // Note: RFC 2460 defines a reasonable role model. A node means a // device that implements IP. A router means a node that forwards IP // packets not explicitly addressed to itself, and a host means a node // that is not a router. type ICMPFilter struct { sysICMPv6Filter } // Accept accepts incoming ICMP packets including the type field value // typ. func (f *ICMPFilter) Accept(typ ICMPType) { f.accept(typ) } // Block blocks incoming ICMP packets including the type field value // typ. func (f *ICMPFilter) Block(typ ICMPType) { f.block(typ) } // SetAll sets the filter action to the filter. func (f *ICMPFilter) SetAll(block bool) { f.setAll(block) } // WillBlock reports whether the ICMP type will be blocked. func (f *ICMPFilter) WillBlock(typ ICMPType) bool { return f.willBlock(typ) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_ppc64.go0000644061062106075000000000744112721405224026217 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_linux.go // +build linux,ppc64 package ipv6 const ( sysIPV6_ADDRFORM = 0x1 sysIPV6_2292PKTINFO = 0x2 sysIPV6_2292HOPOPTS = 0x3 sysIPV6_2292DSTOPTS = 0x4 sysIPV6_2292RTHDR = 0x5 sysIPV6_2292PKTOPTIONS = 0x6 sysIPV6_CHECKSUM = 0x7 sysIPV6_2292HOPLIMIT = 0x8 sysIPV6_NEXTHOP = 0x9 sysIPV6_FLOWINFO = 0xb sysIPV6_UNICAST_HOPS = 0x10 sysIPV6_MULTICAST_IF = 0x11 sysIPV6_MULTICAST_HOPS = 0x12 sysIPV6_MULTICAST_LOOP = 0x13 sysIPV6_ADD_MEMBERSHIP = 0x14 sysIPV6_DROP_MEMBERSHIP = 0x15 sysMCAST_JOIN_GROUP = 0x2a sysMCAST_LEAVE_GROUP = 0x2d sysMCAST_JOIN_SOURCE_GROUP = 0x2e sysMCAST_LEAVE_SOURCE_GROUP = 0x2f sysMCAST_BLOCK_SOURCE = 0x2b sysMCAST_UNBLOCK_SOURCE = 0x2c sysMCAST_MSFILTER = 0x30 sysIPV6_ROUTER_ALERT = 0x16 sysIPV6_MTU_DISCOVER = 0x17 sysIPV6_MTU = 0x18 sysIPV6_RECVERR = 0x19 sysIPV6_V6ONLY = 0x1a sysIPV6_JOIN_ANYCAST = 0x1b sysIPV6_LEAVE_ANYCAST = 
0x1c sysIPV6_FLOWLABEL_MGR = 0x20 sysIPV6_FLOWINFO_SEND = 0x21 sysIPV6_IPSEC_POLICY = 0x22 sysIPV6_XFRM_POLICY = 0x23 sysIPV6_RECVPKTINFO = 0x31 sysIPV6_PKTINFO = 0x32 sysIPV6_RECVHOPLIMIT = 0x33 sysIPV6_HOPLIMIT = 0x34 sysIPV6_RECVHOPOPTS = 0x35 sysIPV6_HOPOPTS = 0x36 sysIPV6_RTHDRDSTOPTS = 0x37 sysIPV6_RECVRTHDR = 0x38 sysIPV6_RTHDR = 0x39 sysIPV6_RECVDSTOPTS = 0x3a sysIPV6_DSTOPTS = 0x3b sysIPV6_RECVPATHMTU = 0x3c sysIPV6_PATHMTU = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_RECVTCLASS = 0x42 sysIPV6_TCLASS = 0x43 sysIPV6_ADDR_PREFERENCES = 0x48 sysIPV6_PREFER_SRC_TMP = 0x1 sysIPV6_PREFER_SRC_PUBLIC = 0x2 sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100 sysIPV6_PREFER_SRC_COA = 0x4 sysIPV6_PREFER_SRC_HOME = 0x400 sysIPV6_PREFER_SRC_CGA = 0x8 sysIPV6_PREFER_SRC_NONCGA = 0x800 sysIPV6_MINHOPCOUNT = 0x49 sysIPV6_ORIGDSTADDR = 0x4a sysIPV6_RECVORIGDSTADDR = 0x4a sysIPV6_TRANSPARENT = 0x4b sysIPV6_UNICAST_IF = 0x4c sysICMPV6_FILTER = 0x1 sysICMPV6_FILTER_BLOCK = 0x1 sysICMPV6_FILTER_PASS = 0x2 sysICMPV6_FILTER_BLOCKOTHERS = 0x3 sysICMPV6_FILTER_PASSONLY = 0x4 sysSOL_SOCKET = 0x1 sysSO_ATTACH_FILTER = 0x1a sysSizeofKernelSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6FlowlabelReq = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x88 sysSizeofGroupSourceReq = 0x108 sysSizeofICMPv6Filter = 0x20 ) type sysKernelSockaddrStorage struct { Family uint16 X__data [126]int8 } type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex int32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6FlowlabelReq struct { Dst [16]byte /* in6_addr */ Label uint32 Action uint8 Share uint8 Flags uint16 Expires uint16 Linger uint16 X__flr_pad uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Ifindex int32 } type sysGroupReq struct { Interface uint32 
Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage Source sysKernelSockaddrStorage } type sysICMPv6Filter struct { Data [8]uint32 } type sysSockFProg struct { Len uint16 Pad_cgo_0 [6]byte Filter *sysSockFilter } type sysSockFilter struct { Code uint16 Jt uint8 Jf uint8 K uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/mocktransponder_test.go0000644061062106075000000000111312721405224027212 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6_test import ( "net" "testing" ) func connector(t *testing.T, network, addr string, done chan<- bool) { defer func() { done <- true }() c, err := net.Dial(network, addr) if err != nil { t.Error(err) return } c.Close() } func acceptor(t *testing.T, ln net.Listener, done chan<- bool) { defer func() { done <- true }() c, err := ln.Accept() if err != nil { t.Error(err) return } c.Close() } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_solaris.go0000644061062106075000000000420312721405224025511 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_solaris.go // +build solaris package ipv6 const ( sysIPV6_UNICAST_HOPS = 0x5 sysIPV6_MULTICAST_IF = 0x6 sysIPV6_MULTICAST_HOPS = 0x7 sysIPV6_MULTICAST_LOOP = 0x8 sysIPV6_JOIN_GROUP = 0x9 sysIPV6_LEAVE_GROUP = 0xa sysIPV6_PKTINFO = 0xb sysIPV6_HOPLIMIT = 0xc sysIPV6_NEXTHOP = 0xd sysIPV6_HOPOPTS = 0xe sysIPV6_DSTOPTS = 0xf sysIPV6_RTHDR = 0x10 sysIPV6_RTHDRDSTOPTS = 0x11 sysIPV6_RECVPKTINFO = 0x12 sysIPV6_RECVHOPLIMIT = 0x13 sysIPV6_RECVHOPOPTS = 0x14 sysIPV6_RECVRTHDR = 0x16 sysIPV6_RECVRTHDRDSTOPTS = 0x17 sysIPV6_CHECKSUM = 0x18 sysIPV6_RECVTCLASS = 0x19 sysIPV6_USE_MIN_MTU = 0x20 sysIPV6_DONTFRAG = 0x21 sysIPV6_SEC_OPT = 0x22 sysIPV6_SRC_PREFERENCES = 0x23 sysIPV6_RECVPATHMTU = 0x24 
sysIPV6_PATHMTU = 0x25 sysIPV6_TCLASS = 0x26 sysIPV6_V6ONLY = 0x27 sysIPV6_RECVDSTOPTS = 0x28 sysIPV6_PREFER_SRC_HOME = 0x1 sysIPV6_PREFER_SRC_COA = 0x2 sysIPV6_PREFER_SRC_PUBLIC = 0x4 sysIPV6_PREFER_SRC_TMP = 0x8 sysIPV6_PREFER_SRC_NONCGA = 0x10 sysIPV6_PREFER_SRC_CGA = 0x20 sysIPV6_PREFER_SRC_MIPMASK = 0x3 sysIPV6_PREFER_SRC_MIPDEFAULT = 0x1 sysIPV6_PREFER_SRC_TMPMASK = 0xc sysIPV6_PREFER_SRC_TMPDEFAULT = 0x4 sysIPV6_PREFER_SRC_CGAMASK = 0x30 sysIPV6_PREFER_SRC_CGADEFAULT = 0x10 sysIPV6_PREFER_SRC_MASK = 0x3f sysIPV6_PREFER_SRC_DEFAULT = 0x15 sysIPV6_BOUND_IF = 0x41 sysIPV6_UNSPEC_SRC = 0x42 sysICMP6_FILTER = 0x1 sysSizeofSockaddrInet6 = 0x20 sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x24 sysSizeofIPv6Mreq = 0x14 sysSizeofICMPv6Filter = 0x20 ) type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 X__sin6_src_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex uint32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Interface uint32 } type sysICMPv6Filter struct { X__icmp6_filt [8]uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/example_test.go0000644061062106075000000001231412721405224025441 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
package ipv6_test import ( "fmt" "log" "net" "os" "time" "golang.org/x/net/icmp" "golang.org/x/net/ipv6" ) func ExampleConn_markingTCP() { ln, err := net.Listen("tcp", "[::]:1024") if err != nil { log.Fatal(err) } defer ln.Close() for { c, err := ln.Accept() if err != nil { log.Fatal(err) } go func(c net.Conn) { defer c.Close() if c.RemoteAddr().(*net.TCPAddr).IP.To16() != nil && c.RemoteAddr().(*net.TCPAddr).IP.To4() == nil { p := ipv6.NewConn(c) if err := p.SetTrafficClass(0x28); err != nil { // DSCP AF11 log.Fatal(err) } if err := p.SetHopLimit(128); err != nil { log.Fatal(err) } } if _, err := c.Write([]byte("HELLO-R-U-THERE-ACK")); err != nil { log.Fatal(err) } }(c) } } func ExamplePacketConn_servingOneShotMulticastDNS() { c, err := net.ListenPacket("udp6", "[::]:5353") // mDNS over UDP if err != nil { log.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) en0, err := net.InterfaceByName("en0") if err != nil { log.Fatal(err) } mDNSLinkLocal := net.UDPAddr{IP: net.ParseIP("ff02::fb")} if err := p.JoinGroup(en0, &mDNSLinkLocal); err != nil { log.Fatal(err) } defer p.LeaveGroup(en0, &mDNSLinkLocal) if err := p.SetControlMessage(ipv6.FlagDst|ipv6.FlagInterface, true); err != nil { log.Fatal(err) } var wcm ipv6.ControlMessage b := make([]byte, 1500) for { _, rcm, peer, err := p.ReadFrom(b) if err != nil { log.Fatal(err) } if !rcm.Dst.IsMulticast() || !rcm.Dst.Equal(mDNSLinkLocal.IP) { continue } wcm.IfIndex = rcm.IfIndex answers := []byte("FAKE-MDNS-ANSWERS") // fake mDNS answers, you need to implement this if _, err := p.WriteTo(answers, &wcm, peer); err != nil { log.Fatal(err) } } } func ExamplePacketConn_tracingIPPacketRoute() { // Tracing an IP packet route to www.google.com. 
const host = "www.google.com" ips, err := net.LookupIP(host) if err != nil { log.Fatal(err) } var dst net.IPAddr for _, ip := range ips { if ip.To16() != nil && ip.To4() == nil { dst.IP = ip fmt.Printf("using %v for tracing an IP packet route to %s\n", dst.IP, host) break } } if dst.IP == nil { log.Fatal("no AAAA record found") } c, err := net.ListenPacket("ip6:58", "::") // ICMP for IPv6 if err != nil { log.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) if err := p.SetControlMessage(ipv6.FlagHopLimit|ipv6.FlagSrc|ipv6.FlagDst|ipv6.FlagInterface, true); err != nil { log.Fatal(err) } wm := icmp.Message{ Type: ipv6.ICMPTypeEchoRequest, Code: 0, Body: &icmp.Echo{ ID: os.Getpid() & 0xffff, Data: []byte("HELLO-R-U-THERE"), }, } var f ipv6.ICMPFilter f.SetAll(true) f.Accept(ipv6.ICMPTypeTimeExceeded) f.Accept(ipv6.ICMPTypeEchoReply) if err := p.SetICMPFilter(&f); err != nil { log.Fatal(err) } var wcm ipv6.ControlMessage rb := make([]byte, 1500) for i := 1; i <= 64; i++ { // up to 64 hops wm.Body.(*icmp.Echo).Seq = i wb, err := wm.Marshal(nil) if err != nil { log.Fatal(err) } // In the real world usually there are several // multiple traffic-engineered paths for each hop. // You may need to probe a few times to each hop. begin := time.Now() wcm.HopLimit = i if _, err := p.WriteTo(wb, &wcm, &dst); err != nil { log.Fatal(err) } if err := p.SetReadDeadline(time.Now().Add(3 * time.Second)); err != nil { log.Fatal(err) } n, rcm, peer, err := p.ReadFrom(rb) if err != nil { if err, ok := err.(net.Error); ok && err.Timeout() { fmt.Printf("%v\t*\n", i) continue } log.Fatal(err) } rm, err := icmp.ParseMessage(58, rb[:n]) if err != nil { log.Fatal(err) } rtt := time.Since(begin) // In the real world you need to determine whether the // received message is yours using ControlMessage.Src, // ControlMesage.Dst, icmp.Echo.ID and icmp.Echo.Seq. 
switch rm.Type { case ipv6.ICMPTypeTimeExceeded: names, _ := net.LookupAddr(peer.String()) fmt.Printf("%d\t%v %+v %v\n\t%+v\n", i, peer, names, rtt, rcm) case ipv6.ICMPTypeEchoReply: names, _ := net.LookupAddr(peer.String()) fmt.Printf("%d\t%v %+v %v\n\t%+v\n", i, peer, names, rtt, rcm) return } } } func ExamplePacketConn_advertisingOSPFHello() { c, err := net.ListenPacket("ip6:89", "::") // OSPF for IPv6 if err != nil { log.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) en0, err := net.InterfaceByName("en0") if err != nil { log.Fatal(err) } allSPFRouters := net.IPAddr{IP: net.ParseIP("ff02::5")} if err := p.JoinGroup(en0, &allSPFRouters); err != nil { log.Fatal(err) } defer p.LeaveGroup(en0, &allSPFRouters) hello := make([]byte, 24) // fake hello data, you need to implement this ospf := make([]byte, 16) // fake ospf header, you need to implement this ospf[0] = 3 // version 3 ospf[1] = 1 // hello packet ospf = append(ospf, hello...) if err := p.SetChecksum(true, 12); err != nil { log.Fatal(err) } cm := ipv6.ControlMessage{ TrafficClass: 0xc0, // DSCP CS6 HopLimit: 1, IfIndex: en0.Index, } if _, err := p.WriteTo(ospf, &cm, &allSPFRouters); err != nil { log.Fatal(err) } } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_darwin.go0000644061062106075000000000476312721405224025334 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_darwin.go package ipv6 const ( sysIPV6_UNICAST_HOPS = 0x4 sysIPV6_MULTICAST_IF = 0x9 sysIPV6_MULTICAST_HOPS = 0xa sysIPV6_MULTICAST_LOOP = 0xb sysIPV6_JOIN_GROUP = 0xc sysIPV6_LEAVE_GROUP = 0xd sysIPV6_PORTRANGE = 0xe sysICMP6_FILTER = 0x12 sysIPV6_2292PKTINFO = 0x13 sysIPV6_2292HOPLIMIT = 0x14 sysIPV6_2292NEXTHOP = 0x15 sysIPV6_2292HOPOPTS = 0x16 sysIPV6_2292DSTOPTS = 0x17 sysIPV6_2292RTHDR = 0x18 sysIPV6_2292PKTOPTIONS = 0x19 sysIPV6_CHECKSUM = 0x1a sysIPV6_V6ONLY = 0x1b sysIPV6_IPSEC_POLICY = 0x1c sysIPV6_RECVTCLASS = 0x23 sysIPV6_TCLASS = 0x24 sysIPV6_RTHDRDSTOPTS = 0x39 
sysIPV6_RECVPKTINFO = 0x3d sysIPV6_RECVHOPLIMIT = 0x25 sysIPV6_RECVRTHDR = 0x26 sysIPV6_RECVHOPOPTS = 0x27 sysIPV6_RECVDSTOPTS = 0x28 sysIPV6_USE_MIN_MTU = 0x2a sysIPV6_RECVPATHMTU = 0x2b sysIPV6_PATHMTU = 0x2c sysIPV6_PKTINFO = 0x2e sysIPV6_HOPLIMIT = 0x2f sysIPV6_NEXTHOP = 0x30 sysIPV6_HOPOPTS = 0x31 sysIPV6_DSTOPTS = 0x32 sysIPV6_RTHDR = 0x33 sysIPV6_AUTOFLOWLABEL = 0x3b sysIPV6_DONTFRAG = 0x3e sysIPV6_PREFER_TEMPADDR = 0x3f sysIPV6_MSFILTER = 0x4a sysMCAST_JOIN_GROUP = 0x50 sysMCAST_LEAVE_GROUP = 0x51 sysMCAST_JOIN_SOURCE_GROUP = 0x52 sysMCAST_LEAVE_SOURCE_GROUP = 0x53 sysMCAST_BLOCK_SOURCE = 0x54 sysMCAST_UNBLOCK_SOURCE = 0x55 sysIPV6_BOUND_IF = 0x7d sysIPV6_PORTRANGE_DEFAULT = 0x0 sysIPV6_PORTRANGE_HIGH = 0x1 sysIPV6_PORTRANGE_LOW = 0x2 sysSizeofSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x84 sysSizeofGroupSourceReq = 0x104 sysSizeofICMPv6Filter = 0x20 ) type sysSockaddrStorage struct { Len uint8 Family uint8 X__ss_pad1 [6]int8 X__ss_align int64 X__ss_pad2 [112]int8 } type sysSockaddrInet6 struct { Len uint8 Family uint8 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex uint32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Interface uint32 } type sysICMPv6Filter struct { Filt [8]uint32 } type sysGroupReq struct { Interface uint32 Pad_cgo_0 [128]byte } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [128]byte Pad_cgo_1 [128]byte } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/control_windows.go0000644061062106075000000000124312721405224026200 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
package ipv6

import "syscall"

func setControlMessage(fd syscall.Handle, opt *rawOpt, cf ControlFlags, on bool) error {
	// TODO(mikio): implement this
	return syscall.EWINDOWS
}

func newControlMessage(opt *rawOpt) (oob []byte) {
	// TODO(mikio): implement this
	return nil
}

func parseControlMessage(b []byte) (*ControlMessage, error) {
	// TODO(mikio): implement this
	return nil, syscall.EWINDOWS
}

func marshalControlMessage(cm *ControlMessage) (oob []byte) {
	// TODO(mikio): implement this
	return nil
}
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/endpoint.go0000644061062106075000000000570212721405224024572 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6

import (
	"net"
	"syscall"
	"time"
)

// A Conn represents a network endpoint that uses IPv6 transport.
// It allows to set basic IP-level socket options such as traffic
// class and hop limit.
type Conn struct {
	genericOpt
}

type genericOpt struct {
	net.Conn
}

func (c *genericOpt) ok() bool { return c != nil && c.Conn != nil }

// PathMTU returns a path MTU value for the destination associated
// with the endpoint.
func (c *Conn) PathMTU() (int, error) {
	if !c.genericOpt.ok() {
		return 0, syscall.EINVAL
	}
	fd, err := c.genericOpt.sysfd()
	if err != nil {
		return 0, err
	}
	_, mtu, err := getMTUInfo(fd, &sockOpts[ssoPathMTU])
	if err != nil {
		return 0, err
	}
	return mtu, nil
}

// NewConn returns a new Conn.
func NewConn(c net.Conn) *Conn {
	return &Conn{
		genericOpt: genericOpt{Conn: c},
	}
}

// A PacketConn represents a packet network endpoint that uses IPv6
// transport. It is used to control several IP-level socket options
// including IPv6 header manipulation. It also provides datagram
// based network I/O methods specific to the IPv6 and higher layer
// protocols such as OSPF, GRE, and UDP.
type PacketConn struct { genericOpt dgramOpt payloadHandler } type dgramOpt struct { net.PacketConn } func (c *dgramOpt) ok() bool { return c != nil && c.PacketConn != nil } // SetControlMessage allows to receive the per packet basis IP-level // socket options. func (c *PacketConn) SetControlMessage(cf ControlFlags, on bool) error { if !c.payloadHandler.ok() { return syscall.EINVAL } fd, err := c.payloadHandler.sysfd() if err != nil { return err } return setControlMessage(fd, &c.payloadHandler.rawOpt, cf, on) } // SetDeadline sets the read and write deadlines associated with the // endpoint. func (c *PacketConn) SetDeadline(t time.Time) error { if !c.payloadHandler.ok() { return syscall.EINVAL } return c.payloadHandler.SetDeadline(t) } // SetReadDeadline sets the read deadline associated with the // endpoint. func (c *PacketConn) SetReadDeadline(t time.Time) error { if !c.payloadHandler.ok() { return syscall.EINVAL } return c.payloadHandler.SetReadDeadline(t) } // SetWriteDeadline sets the write deadline associated with the // endpoint. func (c *PacketConn) SetWriteDeadline(t time.Time) error { if !c.payloadHandler.ok() { return syscall.EINVAL } return c.payloadHandler.SetWriteDeadline(t) } // Close closes the endpoint. func (c *PacketConn) Close() error { if !c.payloadHandler.ok() { return syscall.EINVAL } return c.payloadHandler.Close() } // NewPacketConn returns a new PacketConn using c as its underlying // transport. func NewPacketConn(c net.PacketConn) *PacketConn { return &PacketConn{ genericOpt: genericOpt{Conn: c.(net.Conn)}, dgramOpt: dgramOpt{PacketConn: c}, payloadHandler: payloadHandler{PacketConn: c}, } } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt_test.go0000644061062106075000000000564712721405224025503 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
package ipv6_test import ( "fmt" "net" "runtime" "testing" "golang.org/x/net/internal/iana" "golang.org/x/net/internal/nettest" "golang.org/x/net/ipv6" ) var supportsIPv6 bool = nettest.SupportsIPv6() func TestConnInitiatorPathMTU(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } ln, err := net.Listen("tcp6", "[::1]:0") if err != nil { t.Fatal(err) } defer ln.Close() done := make(chan bool) go acceptor(t, ln, done) c, err := net.Dial("tcp6", ln.Addr().String()) if err != nil { t.Fatal(err) } defer c.Close() if pmtu, err := ipv6.NewConn(c).PathMTU(); err != nil { switch runtime.GOOS { case "darwin": // older darwin kernels don't support IPV6_PATHMTU option t.Logf("not supported on %s", runtime.GOOS) default: t.Fatal(err) } } else { t.Logf("path mtu for %v: %v", c.RemoteAddr(), pmtu) } <-done } func TestConnResponderPathMTU(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } ln, err := net.Listen("tcp6", "[::1]:0") if err != nil { t.Fatal(err) } defer ln.Close() done := make(chan bool) go connector(t, "tcp6", ln.Addr().String(), done) c, err := ln.Accept() if err != nil { t.Fatal(err) } defer c.Close() if pmtu, err := ipv6.NewConn(c).PathMTU(); err != nil { switch runtime.GOOS { case "darwin": // older darwin kernels don't support IPV6_PATHMTU option t.Logf("not supported on %s", runtime.GOOS) default: t.Fatal(err) } } else { t.Logf("path mtu for %v: %v", c.RemoteAddr(), pmtu) } <-done } func TestPacketConnChecksum(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } if m, ok := nettest.SupportsRawIPSocket(); !ok { t.Skip(m) } c, err := net.ListenPacket(fmt.Sprintf("ip6:%d", 
iana.ProtocolOSPFIGP), "::") // OSPF for IPv6 if err != nil { t.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) offset := 12 // see RFC 5340 for _, toggle := range []bool{false, true} { if err := p.SetChecksum(toggle, offset); err != nil { if toggle { t.Fatalf("ipv6.PacketConn.SetChecksum(%v, %v) failed: %v", toggle, offset, err) } else { // Some platforms never allow to disable the kernel // checksum processing. t.Logf("ipv6.PacketConn.SetChecksum(%v, %v) failed: %v", toggle, offset, err) } } if on, offset, err := p.Checksum(); err != nil { t.Fatal(err) } else { t.Logf("kernel checksum processing enabled=%v, offset=%v", on, offset) } } } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/helper_windows.go0000644061062106075000000000212712721405224026001 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6 import ( "net" "reflect" "syscall" ) func (c *genericOpt) sysfd() (syscall.Handle, error) { switch p := c.Conn.(type) { case *net.TCPConn, *net.UDPConn, *net.IPConn: return sysfd(p) } return syscall.InvalidHandle, errInvalidConnType } func (c *dgramOpt) sysfd() (syscall.Handle, error) { switch p := c.PacketConn.(type) { case *net.UDPConn, *net.IPConn: return sysfd(p.(net.Conn)) } return syscall.InvalidHandle, errInvalidConnType } func (c *payloadHandler) sysfd() (syscall.Handle, error) { return sysfd(c.PacketConn.(net.Conn)) } func sysfd(c net.Conn) (syscall.Handle, error) { cv := reflect.ValueOf(c) switch ce := cv.Elem(); ce.Kind() { case reflect.Struct: netfd := ce.FieldByName("conn").FieldByName("fd") switch fe := netfd.Elem(); fe.Kind() { case reflect.Struct: fd := fe.FieldByName("sysfd") return syscall.Handle(fd.Uint()), nil } } return syscall.InvalidHandle, errInvalidConnType } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/icmp_test.go0000644061062106075000000000372212721405224024741 
0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6_test import ( "net" "reflect" "runtime" "testing" "golang.org/x/net/internal/nettest" "golang.org/x/net/ipv6" ) var icmpStringTests = []struct { in ipv6.ICMPType out string }{ {ipv6.ICMPTypeDestinationUnreachable, "destination unreachable"}, {256, ""}, } func TestICMPString(t *testing.T) { for _, tt := range icmpStringTests { s := tt.in.String() if s != tt.out { t.Errorf("got %s; want %s", s, tt.out) } } } func TestICMPFilter(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } var f ipv6.ICMPFilter for _, toggle := range []bool{false, true} { f.SetAll(toggle) for _, typ := range []ipv6.ICMPType{ ipv6.ICMPTypeDestinationUnreachable, ipv6.ICMPTypeEchoReply, ipv6.ICMPTypeNeighborSolicitation, ipv6.ICMPTypeDuplicateAddressConfirmation, } { f.Accept(typ) if f.WillBlock(typ) { t.Errorf("ipv6.ICMPFilter.Set(%v, false) failed", typ) } f.Block(typ) if !f.WillBlock(typ) { t.Errorf("ipv6.ICMPFilter.Set(%v, true) failed", typ) } } } } func TestSetICMPFilter(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } if m, ok := nettest.SupportsRawIPSocket(); !ok { t.Skip(m) } c, err := net.ListenPacket("ip6:ipv6-icmp", "::1") if err != nil { t.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) var f ipv6.ICMPFilter f.SetAll(true) f.Accept(ipv6.ICMPTypeEchoRequest) f.Accept(ipv6.ICMPTypeEchoReply) if err := p.SetICMPFilter(&f); err != nil { t.Fatal(err) } kf, err := p.ICMPFilter() if err != nil { t.Fatal(err) } if !reflect.DeepEqual(kf, &f) { t.Fatalf("got %#v; want %#v", kf, f) } } 
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/header_test.go0000644061062106075000000000231312721405224025234 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6_test import ( "net" "reflect" "strings" "testing" "golang.org/x/net/internal/iana" "golang.org/x/net/ipv6" ) var ( wireHeaderFromKernel = [ipv6.HeaderLen]byte{ 0x69, 0x8b, 0xee, 0xf1, 0xca, 0xfe, 0x2c, 0x01, 0x20, 0x01, 0x0d, 0xb8, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x20, 0x01, 0x0d, 0xb8, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, } testHeader = &ipv6.Header{ Version: ipv6.Version, TrafficClass: iana.DiffServAF43, FlowLabel: 0xbeef1, PayloadLen: 0xcafe, NextHeader: iana.ProtocolIPv6Frag, HopLimit: 1, Src: net.ParseIP("2001:db8:1::1"), Dst: net.ParseIP("2001:db8:2::1"), } ) func TestParseHeader(t *testing.T) { h, err := ipv6.ParseHeader(wireHeaderFromKernel[:]) if err != nil { t.Fatal(err) } if !reflect.DeepEqual(h, testHeader) { t.Fatalf("got %#v; want %#v", h, testHeader) } s := h.String() if strings.Contains(s, ",") { t.Fatalf("should be space-separated values: %s", s) } } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sys_linux.go0000644061062106075000000000555112721405224025011 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
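// The option tables below are the core of this file: each sockOpt entry
// pairs a protocol level (from the iana package) with a socket-option name
// and a value type, so the package's exported setters reduce to a table
// lookup followed by one setsockopt(2)/getsockopt(2) call. A rough,
// illustrative sketch of that dispatch (helper name hypothetical, not code
// from this package):
//
//	// so := sockOpts[ssoTrafficClass] // {iana.ProtocolIPv6, sysIPV6_TCLASS, ssoTypeInt}
//	// setInt(fd, &so, 0x28)           // i.e. setsockopt(fd, so.level, so.name, &v, 4)
//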
package ipv6 import ( "net" "syscall" "unsafe" "golang.org/x/net/internal/iana" ) var ( ctlOpts = [ctlMax]ctlOpt{ ctlTrafficClass: {sysIPV6_TCLASS, 4, marshalTrafficClass, parseTrafficClass}, ctlHopLimit: {sysIPV6_HOPLIMIT, 4, marshalHopLimit, parseHopLimit}, ctlPacketInfo: {sysIPV6_PKTINFO, sysSizeofInet6Pktinfo, marshalPacketInfo, parsePacketInfo}, ctlPathMTU: {sysIPV6_PATHMTU, sysSizeofIPv6Mtuinfo, marshalPathMTU, parsePathMTU}, } sockOpts = [ssoMax]sockOpt{ ssoTrafficClass: {iana.ProtocolIPv6, sysIPV6_TCLASS, ssoTypeInt}, ssoHopLimit: {iana.ProtocolIPv6, sysIPV6_UNICAST_HOPS, ssoTypeInt}, ssoMulticastInterface: {iana.ProtocolIPv6, sysIPV6_MULTICAST_IF, ssoTypeInterface}, ssoMulticastHopLimit: {iana.ProtocolIPv6, sysIPV6_MULTICAST_HOPS, ssoTypeInt}, ssoMulticastLoopback: {iana.ProtocolIPv6, sysIPV6_MULTICAST_LOOP, ssoTypeInt}, ssoReceiveTrafficClass: {iana.ProtocolIPv6, sysIPV6_RECVTCLASS, ssoTypeInt}, ssoReceiveHopLimit: {iana.ProtocolIPv6, sysIPV6_RECVHOPLIMIT, ssoTypeInt}, ssoReceivePacketInfo: {iana.ProtocolIPv6, sysIPV6_RECVPKTINFO, ssoTypeInt}, ssoReceivePathMTU: {iana.ProtocolIPv6, sysIPV6_RECVPATHMTU, ssoTypeInt}, ssoPathMTU: {iana.ProtocolIPv6, sysIPV6_PATHMTU, ssoTypeMTUInfo}, ssoChecksum: {iana.ProtocolReserved, sysIPV6_CHECKSUM, ssoTypeInt}, ssoICMPFilter: {iana.ProtocolIPv6ICMP, sysICMPV6_FILTER, ssoTypeICMPFilter}, ssoJoinGroup: {iana.ProtocolIPv6, sysMCAST_JOIN_GROUP, ssoTypeGroupReq}, ssoLeaveGroup: {iana.ProtocolIPv6, sysMCAST_LEAVE_GROUP, ssoTypeGroupReq}, ssoJoinSourceGroup: {iana.ProtocolIPv6, sysMCAST_JOIN_SOURCE_GROUP, ssoTypeGroupSourceReq}, ssoLeaveSourceGroup: {iana.ProtocolIPv6, sysMCAST_LEAVE_SOURCE_GROUP, ssoTypeGroupSourceReq}, ssoBlockSourceGroup: {iana.ProtocolIPv6, sysMCAST_BLOCK_SOURCE, ssoTypeGroupSourceReq}, ssoUnblockSourceGroup: {iana.ProtocolIPv6, sysMCAST_UNBLOCK_SOURCE, ssoTypeGroupSourceReq}, } ) func (sa *sysSockaddrInet6) setSockaddr(ip net.IP, i int) { sa.Family = syscall.AF_INET6 copy(sa.Addr[:], ip) sa.Scope_id = 
uint32(i) } func (pi *sysInet6Pktinfo) setIfindex(i int) { pi.Ifindex = int32(i) } func (mreq *sysIPv6Mreq) setIfindex(i int) { mreq.Ifindex = int32(i) } func (gr *sysGroupReq) setGroup(grp net.IP) { sa := (*sysSockaddrInet6)(unsafe.Pointer(&gr.Group)) sa.Family = syscall.AF_INET6 copy(sa.Addr[:], grp) } func (gsr *sysGroupSourceReq) setSourceGroup(grp, src net.IP) { sa := (*sysSockaddrInet6)(unsafe.Pointer(&gsr.Group)) sa.Family = syscall.AF_INET6 copy(sa.Addr[:], grp) sa = (*sysSockaddrInet6)(unsafe.Pointer(&gsr.Source)) sa.Family = syscall.AF_INET6 copy(sa.Addr[:], src) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_freebsd_386.go0000644061062106075000000000446712721405224026063 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_freebsd.go package ipv6 const ( sysIPV6_UNICAST_HOPS = 0x4 sysIPV6_MULTICAST_IF = 0x9 sysIPV6_MULTICAST_HOPS = 0xa sysIPV6_MULTICAST_LOOP = 0xb sysIPV6_JOIN_GROUP = 0xc sysIPV6_LEAVE_GROUP = 0xd sysIPV6_PORTRANGE = 0xe sysICMP6_FILTER = 0x12 sysIPV6_CHECKSUM = 0x1a sysIPV6_V6ONLY = 0x1b sysIPV6_IPSEC_POLICY = 0x1c sysIPV6_RTHDRDSTOPTS = 0x23 sysIPV6_RECVPKTINFO = 0x24 sysIPV6_RECVHOPLIMIT = 0x25 sysIPV6_RECVRTHDR = 0x26 sysIPV6_RECVHOPOPTS = 0x27 sysIPV6_RECVDSTOPTS = 0x28 sysIPV6_USE_MIN_MTU = 0x2a sysIPV6_RECVPATHMTU = 0x2b sysIPV6_PATHMTU = 0x2c sysIPV6_PKTINFO = 0x2e sysIPV6_HOPLIMIT = 0x2f sysIPV6_NEXTHOP = 0x30 sysIPV6_HOPOPTS = 0x31 sysIPV6_DSTOPTS = 0x32 sysIPV6_RTHDR = 0x33 sysIPV6_RECVTCLASS = 0x39 sysIPV6_AUTOFLOWLABEL = 0x3b sysIPV6_TCLASS = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_PREFER_TEMPADDR = 0x3f sysIPV6_BINDANY = 0x40 sysIPV6_MSFILTER = 0x4a sysMCAST_JOIN_GROUP = 0x50 sysMCAST_LEAVE_GROUP = 0x51 sysMCAST_JOIN_SOURCE_GROUP = 0x52 sysMCAST_LEAVE_SOURCE_GROUP = 0x53 sysMCAST_BLOCK_SOURCE = 0x54 sysMCAST_UNBLOCK_SOURCE = 0x55 sysIPV6_PORTRANGE_DEFAULT = 0x0 sysIPV6_PORTRANGE_HIGH = 0x1 sysIPV6_PORTRANGE_LOW = 0x2 sysSizeofSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 
0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x84 sysSizeofGroupSourceReq = 0x104 sysSizeofICMPv6Filter = 0x20 ) type sysSockaddrStorage struct { Len uint8 Family uint8 X__ss_pad1 [6]int8 X__ss_align int64 X__ss_pad2 [112]int8 } type sysSockaddrInet6 struct { Len uint8 Family uint8 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex uint32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Interface uint32 } type sysGroupReq struct { Interface uint32 Group sysSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Group sysSockaddrStorage Source sysSockaddrStorage } type sysICMPv6Filter struct { Filt [8]uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/defs_freebsd.go0000644061062106075000000000567512721405224025376 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
// +build ignore

// +godefs map struct_in6_addr [16]byte /* in6_addr */

package ipv6

/*
#include <sys/param.h>
#include <sys/socket.h>

#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"

const (
	sysIPV6_UNICAST_HOPS   = C.IPV6_UNICAST_HOPS
	sysIPV6_MULTICAST_IF   = C.IPV6_MULTICAST_IF
	sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
	sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
	sysIPV6_JOIN_GROUP     = C.IPV6_JOIN_GROUP
	sysIPV6_LEAVE_GROUP    = C.IPV6_LEAVE_GROUP
	sysIPV6_PORTRANGE      = C.IPV6_PORTRANGE
	sysICMP6_FILTER        = C.ICMP6_FILTER

	sysIPV6_CHECKSUM     = C.IPV6_CHECKSUM
	sysIPV6_V6ONLY       = C.IPV6_V6ONLY
	sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY

	sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS

	sysIPV6_RECVPKTINFO  = C.IPV6_RECVPKTINFO
	sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
	sysIPV6_RECVRTHDR    = C.IPV6_RECVRTHDR
	sysIPV6_RECVHOPOPTS  = C.IPV6_RECVHOPOPTS
	sysIPV6_RECVDSTOPTS  = C.IPV6_RECVDSTOPTS

	sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
	sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU
	sysIPV6_PATHMTU     = C.IPV6_PATHMTU

	sysIPV6_PKTINFO  = C.IPV6_PKTINFO
	sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
	sysIPV6_NEXTHOP  = C.IPV6_NEXTHOP
	sysIPV6_HOPOPTS  = C.IPV6_HOPOPTS
	sysIPV6_DSTOPTS  = C.IPV6_DSTOPTS
	sysIPV6_RTHDR    = C.IPV6_RTHDR

	sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS

	sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL

	sysIPV6_TCLASS   = C.IPV6_TCLASS
	sysIPV6_DONTFRAG = C.IPV6_DONTFRAG

	sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR

	sysIPV6_BINDANY  = C.IPV6_BINDANY
	sysIPV6_MSFILTER = C.IPV6_MSFILTER

	sysMCAST_JOIN_GROUP         = C.MCAST_JOIN_GROUP
	sysMCAST_LEAVE_GROUP        = C.MCAST_LEAVE_GROUP
	sysMCAST_JOIN_SOURCE_GROUP  = C.MCAST_JOIN_SOURCE_GROUP
	sysMCAST_LEAVE_SOURCE_GROUP = C.MCAST_LEAVE_SOURCE_GROUP
	sysMCAST_BLOCK_SOURCE       = C.MCAST_BLOCK_SOURCE
	sysMCAST_UNBLOCK_SOURCE     = C.MCAST_UNBLOCK_SOURCE

	sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
	sysIPV6_PORTRANGE_HIGH    = C.IPV6_PORTRANGE_HIGH
	sysIPV6_PORTRANGE_LOW     = C.IPV6_PORTRANGE_LOW

	sysSizeofSockaddrStorage = C.sizeof_struct_sockaddr_storage
	sysSizeofSockaddrInet6   = C.sizeof_struct_sockaddr_in6
	sysSizeofInet6Pktinfo    =
C.sizeof_struct_in6_pktinfo sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq sysSizeofGroupReq = C.sizeof_struct_group_req sysSizeofGroupSourceReq = C.sizeof_struct_group_source_req sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter ) type sysSockaddrStorage C.struct_sockaddr_storage type sysSockaddrInet6 C.struct_sockaddr_in6 type sysInet6Pktinfo C.struct_in6_pktinfo type sysIPv6Mtuinfo C.struct_ip6_mtuinfo type sysIPv6Mreq C.struct_ipv6_mreq type sysGroupReq C.struct_group_req type sysGroupSourceReq C.struct_group_source_req type sysICMPv6Filter C.struct_icmp6_filter lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_arm.go0000644061062106075000000000734412721405224026044 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_linux.go package ipv6 const ( sysIPV6_ADDRFORM = 0x1 sysIPV6_2292PKTINFO = 0x2 sysIPV6_2292HOPOPTS = 0x3 sysIPV6_2292DSTOPTS = 0x4 sysIPV6_2292RTHDR = 0x5 sysIPV6_2292PKTOPTIONS = 0x6 sysIPV6_CHECKSUM = 0x7 sysIPV6_2292HOPLIMIT = 0x8 sysIPV6_NEXTHOP = 0x9 sysIPV6_FLOWINFO = 0xb sysIPV6_UNICAST_HOPS = 0x10 sysIPV6_MULTICAST_IF = 0x11 sysIPV6_MULTICAST_HOPS = 0x12 sysIPV6_MULTICAST_LOOP = 0x13 sysIPV6_ADD_MEMBERSHIP = 0x14 sysIPV6_DROP_MEMBERSHIP = 0x15 sysMCAST_JOIN_GROUP = 0x2a sysMCAST_LEAVE_GROUP = 0x2d sysMCAST_JOIN_SOURCE_GROUP = 0x2e sysMCAST_LEAVE_SOURCE_GROUP = 0x2f sysMCAST_BLOCK_SOURCE = 0x2b sysMCAST_UNBLOCK_SOURCE = 0x2c sysMCAST_MSFILTER = 0x30 sysIPV6_ROUTER_ALERT = 0x16 sysIPV6_MTU_DISCOVER = 0x17 sysIPV6_MTU = 0x18 sysIPV6_RECVERR = 0x19 sysIPV6_V6ONLY = 0x1a sysIPV6_JOIN_ANYCAST = 0x1b sysIPV6_LEAVE_ANYCAST = 0x1c sysIPV6_FLOWLABEL_MGR = 0x20 sysIPV6_FLOWINFO_SEND = 0x21 sysIPV6_IPSEC_POLICY = 0x22 sysIPV6_XFRM_POLICY = 0x23 sysIPV6_RECVPKTINFO = 0x31 sysIPV6_PKTINFO = 0x32 sysIPV6_RECVHOPLIMIT = 0x33 sysIPV6_HOPLIMIT = 0x34 sysIPV6_RECVHOPOPTS = 0x35 sysIPV6_HOPOPTS = 0x36 sysIPV6_RTHDRDSTOPTS = 0x37 sysIPV6_RECVRTHDR = 0x38 
sysIPV6_RTHDR = 0x39 sysIPV6_RECVDSTOPTS = 0x3a sysIPV6_DSTOPTS = 0x3b sysIPV6_RECVPATHMTU = 0x3c sysIPV6_PATHMTU = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_RECVTCLASS = 0x42 sysIPV6_TCLASS = 0x43 sysIPV6_ADDR_PREFERENCES = 0x48 sysIPV6_PREFER_SRC_TMP = 0x1 sysIPV6_PREFER_SRC_PUBLIC = 0x2 sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100 sysIPV6_PREFER_SRC_COA = 0x4 sysIPV6_PREFER_SRC_HOME = 0x400 sysIPV6_PREFER_SRC_CGA = 0x8 sysIPV6_PREFER_SRC_NONCGA = 0x800 sysIPV6_MINHOPCOUNT = 0x49 sysIPV6_ORIGDSTADDR = 0x4a sysIPV6_RECVORIGDSTADDR = 0x4a sysIPV6_TRANSPARENT = 0x4b sysIPV6_UNICAST_IF = 0x4c sysICMPV6_FILTER = 0x1 sysICMPV6_FILTER_BLOCK = 0x1 sysICMPV6_FILTER_PASS = 0x2 sysICMPV6_FILTER_BLOCKOTHERS = 0x3 sysICMPV6_FILTER_PASSONLY = 0x4 sysSOL_SOCKET = 0x1 sysSO_ATTACH_FILTER = 0x1a sysSizeofKernelSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6FlowlabelReq = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x84 sysSizeofGroupSourceReq = 0x104 sysSizeofICMPv6Filter = 0x20 ) type sysKernelSockaddrStorage struct { Family uint16 X__data [126]int8 } type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex int32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6FlowlabelReq struct { Dst [16]byte /* in6_addr */ Label uint32 Action uint8 Share uint8 Flags uint16 Expires uint16 Linger uint16 X__flr_pad uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Ifindex int32 } type sysGroupReq struct { Interface uint32 Group sysKernelSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Group sysKernelSockaddrStorage Source sysKernelSockaddrStorage } type sysICMPv6Filter struct { Data [8]uint32 } type sysSockFProg struct { Len uint16 Pad_cgo_0 [2]byte Filter *sysSockFilter } type sysSockFilter struct { Code uint16 Jt uint8 Jf uint8 
	K uint32
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/helper_unix.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build darwin dragonfly freebsd linux netbsd openbsd

package ipv6

import (
	"net"
	"reflect"
)

func (c *genericOpt) sysfd() (int, error) {
	switch p := c.Conn.(type) {
	case *net.TCPConn, *net.UDPConn, *net.IPConn:
		return sysfd(p)
	}
	return 0, errInvalidConnType
}

func (c *dgramOpt) sysfd() (int, error) {
	switch p := c.PacketConn.(type) {
	case *net.UDPConn, *net.IPConn:
		return sysfd(p.(net.Conn))
	}
	return 0, errInvalidConnType
}

func (c *payloadHandler) sysfd() (int, error) {
	return sysfd(c.PacketConn.(net.Conn))
}

func sysfd(c net.Conn) (int, error) {
	cv := reflect.ValueOf(c)
	switch ce := cv.Elem(); ce.Kind() {
	case reflect.Struct:
		nfd := ce.FieldByName("conn").FieldByName("fd")
		switch fe := nfd.Elem(); fe.Kind() {
		case reflect.Struct:
			fd := fe.FieldByName("sysfd")
			return int(fd.Int()), nil
		}
	}
	return 0, errInvalidConnType
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_freebsd_amd64.go

// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs defs_freebsd.go

package ipv6

const (
	sysIPV6_UNICAST_HOPS   = 0x4
	sysIPV6_MULTICAST_IF   = 0x9
	sysIPV6_MULTICAST_HOPS = 0xa
	sysIPV6_MULTICAST_LOOP = 0xb
	sysIPV6_JOIN_GROUP     = 0xc
	sysIPV6_LEAVE_GROUP    = 0xd
	sysIPV6_PORTRANGE      = 0xe
	sysICMP6_FILTER        = 0x12

	sysIPV6_CHECKSUM = 0x1a
	sysIPV6_V6ONLY   = 0x1b

	sysIPV6_IPSEC_POLICY = 0x1c

	sysIPV6_RTHDRDSTOPTS = 0x23

	sysIPV6_RECVPKTINFO  = 0x24
	sysIPV6_RECVHOPLIMIT = 0x25
	sysIPV6_RECVRTHDR    = 0x26
	sysIPV6_RECVHOPOPTS  = 0x27
	sysIPV6_RECVDSTOPTS  = 0x28

	sysIPV6_USE_MIN_MTU = 0x2a
	sysIPV6_RECVPATHMTU = 0x2b
	sysIPV6_PATHMTU     = 0x2c

	sysIPV6_PKTINFO  = 0x2e
	sysIPV6_HOPLIMIT = 0x2f
	sysIPV6_NEXTHOP  = 0x30
sysIPV6_HOPOPTS = 0x31 sysIPV6_DSTOPTS = 0x32 sysIPV6_RTHDR = 0x33 sysIPV6_RECVTCLASS = 0x39 sysIPV6_AUTOFLOWLABEL = 0x3b sysIPV6_TCLASS = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_PREFER_TEMPADDR = 0x3f sysIPV6_BINDANY = 0x40 sysIPV6_MSFILTER = 0x4a sysMCAST_JOIN_GROUP = 0x50 sysMCAST_LEAVE_GROUP = 0x51 sysMCAST_JOIN_SOURCE_GROUP = 0x52 sysMCAST_LEAVE_SOURCE_GROUP = 0x53 sysMCAST_BLOCK_SOURCE = 0x54 sysMCAST_UNBLOCK_SOURCE = 0x55 sysIPV6_PORTRANGE_DEFAULT = 0x0 sysIPV6_PORTRANGE_HIGH = 0x1 sysIPV6_PORTRANGE_LOW = 0x2 sysSizeofSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x88 sysSizeofGroupSourceReq = 0x108 sysSizeofICMPv6Filter = 0x20 ) type sysSockaddrStorage struct { Len uint8 Family uint8 X__ss_pad1 [6]int8 X__ss_align int64 X__ss_pad2 [112]int8 } type sysSockaddrInet6 struct { Len uint8 Family uint8 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex uint32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Interface uint32 } type sysGroupReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysSockaddrStorage Source sysSockaddrStorage } type sysICMPv6Filter struct { Filt [8]uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_openbsd.go0000644061062106075000000000342712721405224025476 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_openbsd.go package ipv6 const ( sysIPV6_UNICAST_HOPS = 0x4 sysIPV6_MULTICAST_IF = 0x9 sysIPV6_MULTICAST_HOPS = 0xa sysIPV6_MULTICAST_LOOP = 0xb sysIPV6_JOIN_GROUP = 0xc sysIPV6_LEAVE_GROUP = 0xd sysIPV6_PORTRANGE = 0xe sysICMP6_FILTER = 0x12 sysIPV6_CHECKSUM = 0x1a sysIPV6_V6ONLY = 0x1b sysIPV6_RTHDRDSTOPTS = 0x23 
sysIPV6_RECVPKTINFO = 0x24 sysIPV6_RECVHOPLIMIT = 0x25 sysIPV6_RECVRTHDR = 0x26 sysIPV6_RECVHOPOPTS = 0x27 sysIPV6_RECVDSTOPTS = 0x28 sysIPV6_USE_MIN_MTU = 0x2a sysIPV6_RECVPATHMTU = 0x2b sysIPV6_PATHMTU = 0x2c sysIPV6_PKTINFO = 0x2e sysIPV6_HOPLIMIT = 0x2f sysIPV6_NEXTHOP = 0x30 sysIPV6_HOPOPTS = 0x31 sysIPV6_DSTOPTS = 0x32 sysIPV6_RTHDR = 0x33 sysIPV6_AUTH_LEVEL = 0x35 sysIPV6_ESP_TRANS_LEVEL = 0x36 sysIPV6_ESP_NETWORK_LEVEL = 0x37 sysIPSEC6_OUTSA = 0x38 sysIPV6_RECVTCLASS = 0x39 sysIPV6_AUTOFLOWLABEL = 0x3b sysIPV6_IPCOMP_LEVEL = 0x3c sysIPV6_TCLASS = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_PIPEX = 0x3f sysIPV6_RTABLE = 0x1021 sysIPV6_PORTRANGE_DEFAULT = 0x0 sysIPV6_PORTRANGE_HIGH = 0x1 sysIPV6_PORTRANGE_LOW = 0x2 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofICMPv6Filter = 0x20 ) type sysSockaddrInet6 struct { Len uint8 Family uint8 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex uint32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Interface uint32 } type sysICMPv6Filter struct { Filt [8]uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/syscall_unix.go0000644061062106075000000000144212721405224025464 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
// +build darwin dragonfly freebsd linux,!386 netbsd openbsd

package ipv6

import (
	"syscall"
	"unsafe"
)

func getsockopt(fd, level, name int, v unsafe.Pointer, l *uint32) error {
	if _, _, errno := syscall.Syscall6(syscall.SYS_GETSOCKOPT, uintptr(fd), uintptr(level), uintptr(name), uintptr(v), uintptr(unsafe.Pointer(l)), 0); errno != 0 {
		return error(errno)
	}
	return nil
}

func setsockopt(fd, level, name int, v unsafe.Pointer, l uint32) error {
	if _, _, errno := syscall.Syscall6(syscall.SYS_SETSOCKOPT, uintptr(fd), uintptr(level), uintptr(name), uintptr(v), uintptr(l), 0); errno != 0 {
		return error(errno)
	}
	return nil
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/icmp_linux.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6

func (f *sysICMPv6Filter) accept(typ ICMPType) {
	f.Data[typ>>5] &^= 1 << (uint32(typ) & 31)
}

func (f *sysICMPv6Filter) block(typ ICMPType) {
	f.Data[typ>>5] |= 1 << (uint32(typ) & 31)
}

func (f *sysICMPv6Filter) setAll(block bool) {
	for i := range f.Data {
		if block {
			f.Data[i] = 1<<32 - 1
		} else {
			f.Data[i] = 0
		}
	}
}

func (f *sysICMPv6Filter) willBlock(typ ICMPType) bool {
	return f.Data[typ>>5]&(1<<(uint32(typ)&31)) != 0
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/syscall_linux_386.go

// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ipv6

import (
	"syscall"
	"unsafe"
)

const (
	sysGETSOCKOPT = 0xf
	sysSETSOCKOPT = 0xe
)

func socketcall(call int, a0, a1, a2, a3, a4, a5 uintptr) (int, syscall.Errno)

func getsockopt(fd, level, name int, v unsafe.Pointer, l *uint32) error {
	if _, errno := socketcall(sysGETSOCKOPT, uintptr(fd), uintptr(level), uintptr(name), uintptr(v), uintptr(unsafe.Pointer(l)), 0); errno != 0 {
		return error(errno)
	}
	return nil
}

func setsockopt(fd, level, name int, v unsafe.Pointer, l uint32) error {
	if _, errno := socketcall(sysSETSOCKOPT, uintptr(fd), uintptr(level), uintptr(name), uintptr(v), uintptr(l), 0); errno != 0 {
		return error(errno)
	}
	return nil
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/bpfopt_linux.go

// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6

import (
	"os"
	"unsafe"

	"golang.org/x/net/bpf"
)

// SetBPF attaches a BPF program to the connection.
//
// Only supported on Linux.
func (c *dgramOpt) SetBPF(filter []bpf.RawInstruction) error {
	fd, err := c.sysfd()
	if err != nil {
		return err
	}
	prog := sysSockFProg{
		Len:    uint16(len(filter)),
		Filter: (*sysSockFilter)(unsafe.Pointer(&filter[0])),
	}
	return os.NewSyscallError("setsockopt", setsockopt(fd, sysSOL_SOCKET, sysSO_ATTACH_FILTER, unsafe.Pointer(&prog), uint32(unsafe.Sizeof(prog))))
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/control_unix.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux netbsd openbsd package ipv6 import ( "os" "syscall" "golang.org/x/net/internal/iana" ) func setControlMessage(fd int, opt *rawOpt, cf ControlFlags, on bool) error { opt.Lock() defer opt.Unlock() if cf&FlagTrafficClass != 0 && sockOpts[ssoReceiveTrafficClass].name > 0 { if err := setInt(fd, &sockOpts[ssoReceiveTrafficClass], boolint(on)); err != nil { return err } if on { opt.set(FlagTrafficClass) } else { opt.clear(FlagTrafficClass) } } if cf&FlagHopLimit != 0 && sockOpts[ssoReceiveHopLimit].name > 0 { if err := setInt(fd, &sockOpts[ssoReceiveHopLimit], boolint(on)); err != nil { return err } if on { opt.set(FlagHopLimit) } else { opt.clear(FlagHopLimit) } } if cf&flagPacketInfo != 0 && sockOpts[ssoReceivePacketInfo].name > 0 { if err := setInt(fd, &sockOpts[ssoReceivePacketInfo], boolint(on)); err != nil { return err } if on { opt.set(cf & flagPacketInfo) } else { opt.clear(cf & flagPacketInfo) } } if cf&FlagPathMTU != 0 && sockOpts[ssoReceivePathMTU].name > 0 { if err := setInt(fd, &sockOpts[ssoReceivePathMTU], boolint(on)); err != nil { return err } if on { opt.set(FlagPathMTU) } else { opt.clear(FlagPathMTU) } } return nil } func newControlMessage(opt *rawOpt) (oob []byte) { opt.RLock() var l int if opt.isset(FlagTrafficClass) && ctlOpts[ctlTrafficClass].name > 0 { l += syscall.CmsgSpace(ctlOpts[ctlTrafficClass].length) } if opt.isset(FlagHopLimit) && ctlOpts[ctlHopLimit].name > 0 { l += syscall.CmsgSpace(ctlOpts[ctlHopLimit].length) } if opt.isset(flagPacketInfo) && ctlOpts[ctlPacketInfo].name > 0 { l += syscall.CmsgSpace(ctlOpts[ctlPacketInfo].length) } if opt.isset(FlagPathMTU) && ctlOpts[ctlPathMTU].name > 0 { l += syscall.CmsgSpace(ctlOpts[ctlPathMTU].length) } if l > 0 { oob = make([]byte, l) b := oob if opt.isset(FlagTrafficClass) && ctlOpts[ctlTrafficClass].name > 0 { b = ctlOpts[ctlTrafficClass].marshal(b, nil) } if opt.isset(FlagHopLimit) && ctlOpts[ctlHopLimit].name > 0 { b = ctlOpts[ctlHopLimit].marshal(b, 
nil) } if opt.isset(flagPacketInfo) && ctlOpts[ctlPacketInfo].name > 0 { b = ctlOpts[ctlPacketInfo].marshal(b, nil) } if opt.isset(FlagPathMTU) && ctlOpts[ctlPathMTU].name > 0 { b = ctlOpts[ctlPathMTU].marshal(b, nil) } } opt.RUnlock() return } func parseControlMessage(b []byte) (*ControlMessage, error) { if len(b) == 0 { return nil, nil } cmsgs, err := syscall.ParseSocketControlMessage(b) if err != nil { return nil, os.NewSyscallError("parse socket control message", err) } cm := &ControlMessage{} for _, m := range cmsgs { if m.Header.Level != iana.ProtocolIPv6 { continue } switch int(m.Header.Type) { case ctlOpts[ctlTrafficClass].name: ctlOpts[ctlTrafficClass].parse(cm, m.Data[:]) case ctlOpts[ctlHopLimit].name: ctlOpts[ctlHopLimit].parse(cm, m.Data[:]) case ctlOpts[ctlPacketInfo].name: ctlOpts[ctlPacketInfo].parse(cm, m.Data[:]) case ctlOpts[ctlPathMTU].name: ctlOpts[ctlPathMTU].parse(cm, m.Data[:]) } } return cm, nil } func marshalControlMessage(cm *ControlMessage) (oob []byte) { if cm == nil { return } var l int tclass := false if ctlOpts[ctlTrafficClass].name > 0 && cm.TrafficClass > 0 { tclass = true l += syscall.CmsgSpace(ctlOpts[ctlTrafficClass].length) } hoplimit := false if ctlOpts[ctlHopLimit].name > 0 && cm.HopLimit > 0 { hoplimit = true l += syscall.CmsgSpace(ctlOpts[ctlHopLimit].length) } pktinfo := false if ctlOpts[ctlPacketInfo].name > 0 && (cm.Src.To16() != nil && cm.Src.To4() == nil || cm.IfIndex > 0) { pktinfo = true l += syscall.CmsgSpace(ctlOpts[ctlPacketInfo].length) } nexthop := false if ctlOpts[ctlNextHop].name > 0 && cm.NextHop.To16() != nil && cm.NextHop.To4() == nil { nexthop = true l += syscall.CmsgSpace(ctlOpts[ctlNextHop].length) } if l > 0 { oob = make([]byte, l) b := oob if tclass { b = ctlOpts[ctlTrafficClass].marshal(b, cm) } if hoplimit { b = ctlOpts[ctlHopLimit].marshal(b, cm) } if pktinfo { b = ctlOpts[ctlPacketInfo].marshal(b, cm) } if nexthop { b = ctlOpts[ctlNextHop].marshal(b, cm) } } return } 
lxd-2.0.2/dist/src/golang.org/x/net/ipv6/helper.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6

import (
	"encoding/binary"
	"errors"
	"net"
	"unsafe"
)

var (
	errMissingAddress  = errors.New("missing address")
	errHeaderTooShort  = errors.New("header too short")
	errInvalidConnType = errors.New("invalid conn type")
	errOpNoSupport     = errors.New("operation not supported")
	errNoSuchInterface = errors.New("no such interface")

	nativeEndian binary.ByteOrder
)

func init() {
	i := uint32(1)
	b := (*[4]byte)(unsafe.Pointer(&i))
	if b[0] == 1 {
		nativeEndian = binary.LittleEndian
	} else {
		nativeEndian = binary.BigEndian
	}
}

func boolint(b bool) int {
	if b {
		return 1
	}
	return 0
}

func netAddrToIP16(a net.Addr) net.IP {
	switch v := a.(type) {
	case *net.UDPAddr:
		if ip := v.IP.To16(); ip != nil && ip.To4() == nil {
			return ip
		}
	case *net.IPAddr:
		if ip := v.IP.To16(); ip != nil && ip.To4() == nil {
			return ip
		}
	}
	return nil
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_ppc64le.go

// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs defs_linux.go

// +build linux,ppc64le

package ipv6

const (
	sysIPV6_ADDRFORM       = 0x1
	sysIPV6_2292PKTINFO    = 0x2
	sysIPV6_2292HOPOPTS    = 0x3
	sysIPV6_2292DSTOPTS    = 0x4
	sysIPV6_2292RTHDR      = 0x5
	sysIPV6_2292PKTOPTIONS = 0x6
	sysIPV6_CHECKSUM       = 0x7
	sysIPV6_2292HOPLIMIT   = 0x8
	sysIPV6_NEXTHOP        = 0x9
	sysIPV6_FLOWINFO       = 0xb

	sysIPV6_UNICAST_HOPS    = 0x10
	sysIPV6_MULTICAST_IF    = 0x11
	sysIPV6_MULTICAST_HOPS  = 0x12
	sysIPV6_MULTICAST_LOOP  = 0x13
	sysIPV6_ADD_MEMBERSHIP  = 0x14
	sysIPV6_DROP_MEMBERSHIP = 0x15

	sysMCAST_JOIN_GROUP         = 0x2a
	sysMCAST_LEAVE_GROUP        = 0x2d
	sysMCAST_JOIN_SOURCE_GROUP  = 0x2e
	sysMCAST_LEAVE_SOURCE_GROUP = 0x2f
	sysMCAST_BLOCK_SOURCE       = 0x2b
sysMCAST_UNBLOCK_SOURCE = 0x2c sysMCAST_MSFILTER = 0x30 sysIPV6_ROUTER_ALERT = 0x16 sysIPV6_MTU_DISCOVER = 0x17 sysIPV6_MTU = 0x18 sysIPV6_RECVERR = 0x19 sysIPV6_V6ONLY = 0x1a sysIPV6_JOIN_ANYCAST = 0x1b sysIPV6_LEAVE_ANYCAST = 0x1c sysIPV6_FLOWLABEL_MGR = 0x20 sysIPV6_FLOWINFO_SEND = 0x21 sysIPV6_IPSEC_POLICY = 0x22 sysIPV6_XFRM_POLICY = 0x23 sysIPV6_RECVPKTINFO = 0x31 sysIPV6_PKTINFO = 0x32 sysIPV6_RECVHOPLIMIT = 0x33 sysIPV6_HOPLIMIT = 0x34 sysIPV6_RECVHOPOPTS = 0x35 sysIPV6_HOPOPTS = 0x36 sysIPV6_RTHDRDSTOPTS = 0x37 sysIPV6_RECVRTHDR = 0x38 sysIPV6_RTHDR = 0x39 sysIPV6_RECVDSTOPTS = 0x3a sysIPV6_DSTOPTS = 0x3b sysIPV6_RECVPATHMTU = 0x3c sysIPV6_PATHMTU = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_RECVTCLASS = 0x42 sysIPV6_TCLASS = 0x43 sysIPV6_ADDR_PREFERENCES = 0x48 sysIPV6_PREFER_SRC_TMP = 0x1 sysIPV6_PREFER_SRC_PUBLIC = 0x2 sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100 sysIPV6_PREFER_SRC_COA = 0x4 sysIPV6_PREFER_SRC_HOME = 0x400 sysIPV6_PREFER_SRC_CGA = 0x8 sysIPV6_PREFER_SRC_NONCGA = 0x800 sysIPV6_MINHOPCOUNT = 0x49 sysIPV6_ORIGDSTADDR = 0x4a sysIPV6_RECVORIGDSTADDR = 0x4a sysIPV6_TRANSPARENT = 0x4b sysIPV6_UNICAST_IF = 0x4c sysICMPV6_FILTER = 0x1 sysICMPV6_FILTER_BLOCK = 0x1 sysICMPV6_FILTER_PASS = 0x2 sysICMPV6_FILTER_BLOCKOTHERS = 0x3 sysICMPV6_FILTER_PASSONLY = 0x4 sysSOL_SOCKET = 0x1 sysSO_ATTACH_FILTER = 0x1a sysSizeofKernelSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6FlowlabelReq = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x88 sysSizeofGroupSourceReq = 0x108 sysSizeofICMPv6Filter = 0x20 ) type sysKernelSockaddrStorage struct { Family uint16 X__data [126]int8 } type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex int32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6FlowlabelReq struct { Dst [16]byte /* in6_addr */ 
Label uint32 Action uint8 Share uint8 Flags uint16 Expires uint16 Linger uint16 X__flr_pad uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Ifindex int32 } type sysGroupReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage Source sysKernelSockaddrStorage } type sysICMPv6Filter struct { Data [8]uint32 } type sysSockFProg struct { Len uint16 Pad_cgo_0 [6]byte Filter *sysSockFilter } type sysSockFilter struct { Code uint16 Jt uint8 Jf uint8 K uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/unicast_test.go0000644061062106075000000001126712721405224025462 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6_test import ( "bytes" "net" "os" "runtime" "testing" "time" "golang.org/x/net/icmp" "golang.org/x/net/internal/iana" "golang.org/x/net/internal/nettest" "golang.org/x/net/ipv6" ) func TestPacketConnReadWriteUnicastUDP(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } c, err := net.ListenPacket("udp6", "[::1]:0") if err != nil { t.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) defer p.Close() dst, err := net.ResolveUDPAddr("udp6", c.LocalAddr().String()) if err != nil { t.Fatal(err) } cm := ipv6.ControlMessage{ TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced, Src: net.IPv6loopback, } cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagLoopback) if ifi != nil { cm.IfIndex = ifi.Index } wb := []byte("HELLO-R-U-THERE") for i, toggle := range []bool{true, false, true} { if err := 
p.SetControlMessage(cf, toggle); err != nil { if nettest.ProtocolNotSupported(err) { t.Skipf("not supported on %s", runtime.GOOS) } t.Fatal(err) } cm.HopLimit = i + 1 if err := p.SetWriteDeadline(time.Now().Add(100 * time.Millisecond)); err != nil { t.Fatal(err) } if n, err := p.WriteTo(wb, &cm, dst); err != nil { t.Fatal(err) } else if n != len(wb) { t.Fatalf("got %v; want %v", n, len(wb)) } rb := make([]byte, 128) if err := p.SetReadDeadline(time.Now().Add(100 * time.Millisecond)); err != nil { t.Fatal(err) } if n, _, _, err := p.ReadFrom(rb); err != nil { t.Fatal(err) } else if !bytes.Equal(rb[:n], wb) { t.Fatalf("got %v; want %v", rb[:n], wb) } } } func TestPacketConnReadWriteUnicastICMP(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } if m, ok := nettest.SupportsRawIPSocket(); !ok { t.Skip(m) } c, err := net.ListenPacket("ip6:ipv6-icmp", "::1") if err != nil { t.Fatal(err) } defer c.Close() p := ipv6.NewPacketConn(c) defer p.Close() dst, err := net.ResolveIPAddr("ip6", "::1") if err != nil { t.Fatal(err) } pshicmp := icmp.IPv6PseudoHeader(c.LocalAddr().(*net.IPAddr).IP, dst.IP) cm := ipv6.ControlMessage{ TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced, Src: net.IPv6loopback, } cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagLoopback) if ifi != nil { cm.IfIndex = ifi.Index } var f ipv6.ICMPFilter f.SetAll(true) f.Accept(ipv6.ICMPTypeEchoReply) if err := p.SetICMPFilter(&f); err != nil { t.Fatal(err) } var psh []byte for i, toggle := range []bool{true, false, true} { if toggle { psh = nil if err := p.SetChecksum(true, 2); err != nil { t.Fatal(err) } } else { psh = pshicmp // Some platforms never allow to disable the // kernel checksum processing. 
p.SetChecksum(false, -1) } wb, err := (&icmp.Message{ Type: ipv6.ICMPTypeEchoRequest, Code: 0, Body: &icmp.Echo{ ID: os.Getpid() & 0xffff, Seq: i + 1, Data: []byte("HELLO-R-U-THERE"), }, }).Marshal(psh) if err != nil { t.Fatal(err) } if err := p.SetControlMessage(cf, toggle); err != nil { if nettest.ProtocolNotSupported(err) { t.Skipf("not supported on %s", runtime.GOOS) } t.Fatal(err) } cm.HopLimit = i + 1 if err := p.SetWriteDeadline(time.Now().Add(100 * time.Millisecond)); err != nil { t.Fatal(err) } if n, err := p.WriteTo(wb, &cm, dst); err != nil { t.Fatal(err) } else if n != len(wb) { t.Fatalf("got %v; want %v", n, len(wb)) } rb := make([]byte, 128) if err := p.SetReadDeadline(time.Now().Add(100 * time.Millisecond)); err != nil { t.Fatal(err) } if n, _, _, err := p.ReadFrom(rb); err != nil { switch runtime.GOOS { case "darwin": // older darwin kernels have some limitation on receiving icmp packet through raw socket t.Logf("not supported on %s", runtime.GOOS) continue } t.Fatal(err) } else { if m, err := icmp.ParseMessage(iana.ProtocolIPv6ICMP, rb[:n]); err != nil { t.Fatal(err) } else if m.Type != ipv6.ICMPTypeEchoReply || m.Code != 0 { t.Fatalf("got type=%v, code=%v; want type=%v, code=%v", m.Type, m.Code, ipv6.ICMPTypeEchoReply, 0) } } } } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt_windows.go0000644061062106075000000000451312721405224026205 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
package ipv6

import (
	"net"
	"os"
	"syscall"
	"unsafe"
)

func getInt(fd syscall.Handle, opt *sockOpt) (int, error) {
	if opt.name < 1 || opt.typ != ssoTypeInt {
		return 0, errOpNoSupport
	}
	var i int32
	l := int32(4)
	if err := syscall.Getsockopt(fd, int32(opt.level), int32(opt.name), (*byte)(unsafe.Pointer(&i)), &l); err != nil {
		return 0, os.NewSyscallError("getsockopt", err)
	}
	return int(i), nil
}

func setInt(fd syscall.Handle, opt *sockOpt, v int) error {
	if opt.name < 1 || opt.typ != ssoTypeInt {
		return errOpNoSupport
	}
	i := int32(v)
	return os.NewSyscallError("setsockopt", syscall.Setsockopt(fd, int32(opt.level), int32(opt.name), (*byte)(unsafe.Pointer(&i)), 4))
}

func getInterface(fd syscall.Handle, opt *sockOpt) (*net.Interface, error) {
	if opt.name < 1 || opt.typ != ssoTypeInterface {
		return nil, errOpNoSupport
	}
	var i int32
	l := int32(4)
	if err := syscall.Getsockopt(fd, int32(opt.level), int32(opt.name), (*byte)(unsafe.Pointer(&i)), &l); err != nil {
		return nil, os.NewSyscallError("getsockopt", err)
	}
	if i == 0 {
		return nil, nil
	}
	ifi, err := net.InterfaceByIndex(int(i))
	if err != nil {
		return nil, err
	}
	return ifi, nil
}

func setInterface(fd syscall.Handle, opt *sockOpt, ifi *net.Interface) error {
	if opt.name < 1 || opt.typ != ssoTypeInterface {
		return errOpNoSupport
	}
	var i int32
	if ifi != nil {
		i = int32(ifi.Index)
	}
	return os.NewSyscallError("setsockopt", syscall.Setsockopt(fd, int32(opt.level), int32(opt.name), (*byte)(unsafe.Pointer(&i)), 4))
}

func getICMPFilter(fd syscall.Handle, opt *sockOpt) (*ICMPFilter, error) {
	return nil, errOpNoSupport
}

func setICMPFilter(fd syscall.Handle, opt *sockOpt, f *ICMPFilter) error {
	return errOpNoSupport
}

func getMTUInfo(fd syscall.Handle, opt *sockOpt) (*net.Interface, int, error) {
	return nil, 0, errOpNoSupport
}

func setGroup(fd syscall.Handle, opt *sockOpt, ifi *net.Interface, grp net.IP) error {
	if opt.name < 1 || opt.typ != ssoTypeIPMreq {
		return errOpNoSupport
	}
	return setsockoptIPMreq(fd, opt, ifi, grp)
}

func
setSourceGroup(fd syscall.Handle, opt *sockOpt, ifi *net.Interface, grp, src net.IP) error { // TODO(mikio): implement this return errOpNoSupport } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_arm64.go0000644061062106075000000000744112721405224026214 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_linux.go // +build linux,arm64 package ipv6 const ( sysIPV6_ADDRFORM = 0x1 sysIPV6_2292PKTINFO = 0x2 sysIPV6_2292HOPOPTS = 0x3 sysIPV6_2292DSTOPTS = 0x4 sysIPV6_2292RTHDR = 0x5 sysIPV6_2292PKTOPTIONS = 0x6 sysIPV6_CHECKSUM = 0x7 sysIPV6_2292HOPLIMIT = 0x8 sysIPV6_NEXTHOP = 0x9 sysIPV6_FLOWINFO = 0xb sysIPV6_UNICAST_HOPS = 0x10 sysIPV6_MULTICAST_IF = 0x11 sysIPV6_MULTICAST_HOPS = 0x12 sysIPV6_MULTICAST_LOOP = 0x13 sysIPV6_ADD_MEMBERSHIP = 0x14 sysIPV6_DROP_MEMBERSHIP = 0x15 sysMCAST_JOIN_GROUP = 0x2a sysMCAST_LEAVE_GROUP = 0x2d sysMCAST_JOIN_SOURCE_GROUP = 0x2e sysMCAST_LEAVE_SOURCE_GROUP = 0x2f sysMCAST_BLOCK_SOURCE = 0x2b sysMCAST_UNBLOCK_SOURCE = 0x2c sysMCAST_MSFILTER = 0x30 sysIPV6_ROUTER_ALERT = 0x16 sysIPV6_MTU_DISCOVER = 0x17 sysIPV6_MTU = 0x18 sysIPV6_RECVERR = 0x19 sysIPV6_V6ONLY = 0x1a sysIPV6_JOIN_ANYCAST = 0x1b sysIPV6_LEAVE_ANYCAST = 0x1c sysIPV6_FLOWLABEL_MGR = 0x20 sysIPV6_FLOWINFO_SEND = 0x21 sysIPV6_IPSEC_POLICY = 0x22 sysIPV6_XFRM_POLICY = 0x23 sysIPV6_RECVPKTINFO = 0x31 sysIPV6_PKTINFO = 0x32 sysIPV6_RECVHOPLIMIT = 0x33 sysIPV6_HOPLIMIT = 0x34 sysIPV6_RECVHOPOPTS = 0x35 sysIPV6_HOPOPTS = 0x36 sysIPV6_RTHDRDSTOPTS = 0x37 sysIPV6_RECVRTHDR = 0x38 sysIPV6_RTHDR = 0x39 sysIPV6_RECVDSTOPTS = 0x3a sysIPV6_DSTOPTS = 0x3b sysIPV6_RECVPATHMTU = 0x3c sysIPV6_PATHMTU = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_RECVTCLASS = 0x42 sysIPV6_TCLASS = 0x43 sysIPV6_ADDR_PREFERENCES = 0x48 sysIPV6_PREFER_SRC_TMP = 0x1 sysIPV6_PREFER_SRC_PUBLIC = 0x2 sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100 sysIPV6_PREFER_SRC_COA = 0x4 sysIPV6_PREFER_SRC_HOME = 0x400 sysIPV6_PREFER_SRC_CGA = 0x8 sysIPV6_PREFER_SRC_NONCGA = 0x800 
sysIPV6_MINHOPCOUNT = 0x49 sysIPV6_ORIGDSTADDR = 0x4a sysIPV6_RECVORIGDSTADDR = 0x4a sysIPV6_TRANSPARENT = 0x4b sysIPV6_UNICAST_IF = 0x4c sysICMPV6_FILTER = 0x1 sysICMPV6_FILTER_BLOCK = 0x1 sysICMPV6_FILTER_PASS = 0x2 sysICMPV6_FILTER_BLOCKOTHERS = 0x3 sysICMPV6_FILTER_PASSONLY = 0x4 sysSOL_SOCKET = 0x1 sysSO_ATTACH_FILTER = 0x1a sysSizeofKernelSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6FlowlabelReq = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x88 sysSizeofGroupSourceReq = 0x108 sysSizeofICMPv6Filter = 0x20 ) type sysKernelSockaddrStorage struct { Family uint16 X__data [126]int8 } type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex int32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6FlowlabelReq struct { Dst [16]byte /* in6_addr */ Label uint32 Action uint8 Share uint8 Flags uint16 Expires uint16 Linger uint16 X__flr_pad uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Ifindex int32 } type sysGroupReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysKernelSockaddrStorage Source sysKernelSockaddrStorage } type sysICMPv6Filter struct { Data [8]uint32 } type sysSockFProg struct { Len uint16 Pad_cgo_0 [6]byte Filter *sysSockFilter } type sysSockFilter struct { Code uint16 Jt uint8 Jf uint8 K uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/payload.go0000644061062106075000000000060512721405224024400 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6 import "net" // A payloadHandler represents the IPv6 datagram payload handler. 
type payloadHandler struct {
	net.PacketConn
	rawOpt
}

func (c *payloadHandler) ok() bool { return c != nil && c.PacketConn != nil }

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/multicastlistener_test.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6_test

import (
	"fmt"
	"net"
	"runtime"
	"testing"

	"golang.org/x/net/internal/nettest"
	"golang.org/x/net/ipv6"
)

var udpMultipleGroupListenerTests = []net.Addr{
	&net.UDPAddr{IP: net.ParseIP("ff02::114")}, // see RFC 4727
	&net.UDPAddr{IP: net.ParseIP("ff02::1:114")},
	&net.UDPAddr{IP: net.ParseIP("ff02::2:114")},
}

func TestUDPSinglePacketConnWithMultipleGroupListeners(t *testing.T) {
	switch runtime.GOOS {
	case "nacl", "plan9", "solaris", "windows":
		t.Skipf("not supported on %s", runtime.GOOS)
	}
	if !supportsIPv6 {
		t.Skip("ipv6 is not supported")
	}

	for _, gaddr := range udpMultipleGroupListenerTests {
		c, err := net.ListenPacket("udp6", "[::]:0") // wildcard address with non-reusable port
		if err != nil {
			t.Fatal(err)
		}
		defer c.Close()

		p := ipv6.NewPacketConn(c)
		var mift []*net.Interface

		ift, err := net.Interfaces()
		if err != nil {
			t.Fatal(err)
		}
		for i, ifi := range ift {
			if _, ok := nettest.IsMulticastCapable("ip6", &ifi); !ok {
				continue
			}
			if err := p.JoinGroup(&ifi, gaddr); err != nil {
				t.Fatal(err)
			}
			mift = append(mift, &ift[i])
		}
		for _, ifi := range mift {
			if err := p.LeaveGroup(ifi, gaddr); err != nil {
				t.Fatal(err)
			}
		}
	}
}

func TestUDPMultiplePacketConnWithMultipleGroupListeners(t *testing.T) {
	switch runtime.GOOS {
	case "nacl", "plan9", "solaris", "windows":
		t.Skipf("not supported on %s", runtime.GOOS)
	}
	if !supportsIPv6 {
		t.Skip("ipv6 is not supported")
	}

	for _, gaddr := range udpMultipleGroupListenerTests {
		c1, err := net.ListenPacket("udp6", "[ff02::]:1024") // wildcard address with reusable port
		if err != nil {
			t.Fatal(err)
		}
		defer c1.Close()

		c2, err := net.ListenPacket("udp6", "[ff02::]:1024") // wildcard address with reusable port
		if err != nil {
			t.Fatal(err)
		}
		defer c2.Close()

		var ps [2]*ipv6.PacketConn
		ps[0] = ipv6.NewPacketConn(c1)
		ps[1] = ipv6.NewPacketConn(c2)
		var mift []*net.Interface

		ift, err := net.Interfaces()
		if err != nil {
			t.Fatal(err)
		}
		for i, ifi := range ift {
			if _, ok := nettest.IsMulticastCapable("ip6", &ifi); !ok {
				continue
			}
			for _, p := range ps {
				if err := p.JoinGroup(&ifi, gaddr); err != nil {
					t.Fatal(err)
				}
			}
			mift = append(mift, &ift[i])
		}
		for _, ifi := range mift {
			for _, p := range ps {
				if err := p.LeaveGroup(ifi, gaddr); err != nil {
					t.Fatal(err)
				}
			}
		}
	}
}

func TestUDPPerInterfaceSinglePacketConnWithSingleGroupListener(t *testing.T) {
	switch runtime.GOOS {
	case "nacl", "plan9", "solaris", "windows":
		t.Skipf("not supported on %s", runtime.GOOS)
	}
	if !supportsIPv6 {
		t.Skip("ipv6 is not supported")
	}

	gaddr := net.IPAddr{IP: net.ParseIP("ff02::114")} // see RFC 4727
	type ml struct {
		c   *ipv6.PacketConn
		ifi *net.Interface
	}
	var mlt []*ml

	ift, err := net.Interfaces()
	if err != nil {
		t.Fatal(err)
	}
	for i, ifi := range ift {
		ip, ok := nettest.IsMulticastCapable("ip6", &ifi)
		if !ok {
			continue
		}
		c, err := net.ListenPacket("udp6", fmt.Sprintf("[%s%%%s]:1024", ip.String(), ifi.Name)) // unicast address with non-reusable port
		if err != nil {
			t.Fatal(err)
		}
		defer c.Close()
		p := ipv6.NewPacketConn(c)
		if err := p.JoinGroup(&ifi, &gaddr); err != nil {
			t.Fatal(err)
		}
		mlt = append(mlt, &ml{p, &ift[i]})
	}
	for _, m := range mlt {
		if err := m.c.LeaveGroup(m.ifi, &gaddr); err != nil {
			t.Fatal(err)
		}
	}
}

func TestIPSinglePacketConnWithSingleGroupListener(t *testing.T) {
	switch runtime.GOOS {
	case "nacl", "plan9", "solaris", "windows":
		t.Skipf("not supported on %s", runtime.GOOS)
	}
	if !supportsIPv6 {
		t.Skip("ipv6 is not supported")
	}
	if m, ok := nettest.SupportsRawIPSocket(); !ok {
		t.Skip(m)
	}

	c, err := net.ListenPacket("ip6:ipv6-icmp", "::") // wildcard address
	if err != nil {
		t.Fatal(err)
	}
	defer c.Close()

	p := ipv6.NewPacketConn(c)
	gaddr := net.IPAddr{IP: net.ParseIP("ff02::114")} // see RFC 4727
	var mift []*net.Interface

	ift, err := net.Interfaces()
	if err != nil {
		t.Fatal(err)
	}
	for i, ifi := range ift {
		if _, ok := nettest.IsMulticastCapable("ip6", &ifi); !ok {
			continue
		}
		if err := p.JoinGroup(&ifi, &gaddr); err != nil {
			t.Fatal(err)
		}
		mift = append(mift, &ift[i])
	}
	for _, ifi := range mift {
		if err := p.LeaveGroup(ifi, &gaddr); err != nil {
			t.Fatal(err)
		}
	}
}

func TestIPPerInterfaceSinglePacketConnWithSingleGroupListener(t *testing.T) {
	switch runtime.GOOS {
	case "darwin", "dragonfly", "openbsd": // platforms that return fe80::1%lo0: bind: can't assign requested address
		t.Skipf("not supported on %s", runtime.GOOS)
	case "nacl", "plan9", "solaris", "windows":
		t.Skipf("not supported on %s", runtime.GOOS)
	}
	if !supportsIPv6 {
		t.Skip("ipv6 is not supported")
	}
	if m, ok := nettest.SupportsRawIPSocket(); !ok {
		t.Skip(m)
	}

	gaddr := net.IPAddr{IP: net.ParseIP("ff02::114")} // see RFC 4727
	type ml struct {
		c   *ipv6.PacketConn
		ifi *net.Interface
	}
	var mlt []*ml

	ift, err := net.Interfaces()
	if err != nil {
		t.Fatal(err)
	}
	for i, ifi := range ift {
		ip, ok := nettest.IsMulticastCapable("ip6", &ifi)
		if !ok {
			continue
		}
		c, err := net.ListenPacket("ip6:ipv6-icmp", fmt.Sprintf("%s%%%s", ip.String(), ifi.Name)) // unicast address
		if err != nil {
			t.Fatal(err)
		}
		defer c.Close()
		p := ipv6.NewPacketConn(c)
		if err := p.JoinGroup(&ifi, &gaddr); err != nil {
			t.Fatal(err)
		}
		mlt = append(mlt, &ml{p, &ift[i]})
	}
	for _, m := range mlt {
		if err := m.c.LeaveGroup(m.ifi, &gaddr); err != nil {
			t.Fatal(err)
		}
	}
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/control_stub.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build nacl plan9 solaris

package ipv6

func setControlMessage(fd int, opt *rawOpt, cf ControlFlags, on bool) error {
	return errOpNoSupport
}

func newControlMessage(opt *rawOpt) (oob []byte) {
	return nil
}

func parseControlMessage(b []byte) (*ControlMessage, error) {
	return nil, errOpNoSupport
}

func marshalControlMessage(cm *ControlMessage) (oob []byte) {
	return nil
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/icmp_solaris.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build solaris

package ipv6

func (f *sysICMPv6Filter) accept(typ ICMPType) {
	// TODO(mikio): implement this
}

func (f *sysICMPv6Filter) block(typ ICMPType) {
	// TODO(mikio): implement this
}

func (f *sysICMPv6Filter) setAll(block bool) {
	// TODO(mikio): implement this
}

func (f *sysICMPv6Filter) willBlock(typ ICMPType) bool {
	// TODO(mikio): implement this
	return false
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/genericopt_stub.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build nacl plan9 solaris

package ipv6

// TrafficClass returns the traffic class field value for outgoing
// packets.
func (c *genericOpt) TrafficClass() (int, error) {
	return 0, errOpNoSupport
}

// SetTrafficClass sets the traffic class field value for future
// outgoing packets.
func (c *genericOpt) SetTrafficClass(tclass int) error {
	return errOpNoSupport
}

// HopLimit returns the hop limit field value for outgoing packets.
func (c *genericOpt) HopLimit() (int, error) {
	return 0, errOpNoSupport
}

// SetHopLimit sets the hop limit field value for future outgoing
// packets.
func (c *genericOpt) SetHopLimit(hoplim int) error {
	return errOpNoSupport
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/helper_stub.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build nacl plan9 solaris

package ipv6

func (c *genericOpt) sysfd() (int, error) {
	return 0, errOpNoSupport
}

func (c *dgramOpt) sysfd() (int, error) {
	return 0, errOpNoSupport
}

func (c *payloadHandler) sysfd() (int, error) {
	return 0, errOpNoSupport
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/multicast_test.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6_test

import (
	"bytes"
	"net"
	"os"
	"runtime"
	"testing"
	"time"

	"golang.org/x/net/icmp"
	"golang.org/x/net/internal/iana"
	"golang.org/x/net/internal/nettest"
	"golang.org/x/net/ipv6"
)

var packetConnReadWriteMulticastUDPTests = []struct {
	addr     string
	grp, src *net.UDPAddr
}{
	{"[ff02::]:0", &net.UDPAddr{IP: net.ParseIP("ff02::114")}, nil}, // see RFC 4727

	{"[ff30::8000:0]:0", &net.UDPAddr{IP: net.ParseIP("ff30::8000:1")}, &net.UDPAddr{IP: net.IPv6loopback}}, // see RFC 5771
}

func TestPacketConnReadWriteMulticastUDP(t *testing.T) {
	switch runtime.GOOS {
	case "freebsd": // due to a bug on loopback marking
		// See http://www.freebsd.org/cgi/query-pr.cgi?pr=180065.
		t.Skipf("not supported on %s", runtime.GOOS)
	case "nacl", "plan9", "solaris", "windows":
		t.Skipf("not supported on %s", runtime.GOOS)
	}
	if !supportsIPv6 {
		t.Skip("ipv6 is not supported")
	}
	ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagMulticast|net.FlagLoopback)
	if ifi == nil {
		t.Skipf("not available on %s", runtime.GOOS)
	}

	for _, tt := range packetConnReadWriteMulticastUDPTests {
		c, err := net.ListenPacket("udp6", tt.addr)
		if err != nil {
			t.Fatal(err)
		}
		defer c.Close()

		grp := *tt.grp
		grp.Port = c.LocalAddr().(*net.UDPAddr).Port
		p := ipv6.NewPacketConn(c)
		defer p.Close()
		if tt.src == nil {
			if err := p.JoinGroup(ifi, &grp); err != nil {
				t.Fatal(err)
			}
			defer p.LeaveGroup(ifi, &grp)
		} else {
			if err := p.JoinSourceSpecificGroup(ifi, &grp, tt.src); err != nil {
				switch runtime.GOOS {
				case "freebsd", "linux":
				default: // platforms that don't support MLDv2 fail here
					t.Logf("not supported on %s", runtime.GOOS)
					continue
				}
				t.Fatal(err)
			}
			defer p.LeaveSourceSpecificGroup(ifi, &grp, tt.src)
		}
		if err := p.SetMulticastInterface(ifi); err != nil {
			t.Fatal(err)
		}
		if _, err := p.MulticastInterface(); err != nil {
			t.Fatal(err)
		}
		if err := p.SetMulticastLoopback(true); err != nil {
			t.Fatal(err)
		}
		if _, err := p.MulticastLoopback(); err != nil {
			t.Fatal(err)
		}
		cm := ipv6.ControlMessage{
			TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced,
			Src:          net.IPv6loopback,
			IfIndex:      ifi.Index,
		}
		cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU
		wb := []byte("HELLO-R-U-THERE")

		for i, toggle := range []bool{true, false, true} {
			if err := p.SetControlMessage(cf, toggle); err != nil {
				if nettest.ProtocolNotSupported(err) {
					t.Logf("not supported on %s", runtime.GOOS)
					continue
				}
				t.Fatal(err)
			}
			if err := p.SetDeadline(time.Now().Add(200 * time.Millisecond)); err != nil {
				t.Fatal(err)
			}
			cm.HopLimit = i + 1
			if n, err := p.WriteTo(wb, &cm, &grp); err != nil {
				t.Fatal(err)
			} else if n != len(wb) {
				t.Fatal(err)
			}
			rb := make([]byte, 128)
			if n, _, _, err := p.ReadFrom(rb); err != nil {
				t.Fatal(err)
			} else if !bytes.Equal(rb[:n], wb) {
				t.Fatalf("got %v; want %v", rb[:n], wb)
			}
		}
	}
}

var packetConnReadWriteMulticastICMPTests = []struct {
	grp, src *net.IPAddr
}{
	{&net.IPAddr{IP: net.ParseIP("ff02::114")}, nil}, // see RFC 4727

	{&net.IPAddr{IP: net.ParseIP("ff30::8000:1")}, &net.IPAddr{IP: net.IPv6loopback}}, // see RFC 5771
}

func TestPacketConnReadWriteMulticastICMP(t *testing.T) {
	switch runtime.GOOS {
	case "freebsd": // due to a bug on loopback marking
		// See http://www.freebsd.org/cgi/query-pr.cgi?pr=180065.
		t.Skipf("not supported on %s", runtime.GOOS)
	case "nacl", "plan9", "solaris", "windows":
		t.Skipf("not supported on %s", runtime.GOOS)
	}
	if !supportsIPv6 {
		t.Skip("ipv6 is not supported")
	}
	if m, ok := nettest.SupportsRawIPSocket(); !ok {
		t.Skip(m)
	}
	ifi := nettest.RoutedInterface("ip6", net.FlagUp|net.FlagMulticast|net.FlagLoopback)
	if ifi == nil {
		t.Skipf("not available on %s", runtime.GOOS)
	}

	for _, tt := range packetConnReadWriteMulticastICMPTests {
		c, err := net.ListenPacket("ip6:ipv6-icmp", "::")
		if err != nil {
			t.Fatal(err)
		}
		defer c.Close()

		pshicmp := icmp.IPv6PseudoHeader(c.LocalAddr().(*net.IPAddr).IP, tt.grp.IP)
		p := ipv6.NewPacketConn(c)
		defer p.Close()
		if tt.src == nil {
			if err := p.JoinGroup(ifi, tt.grp); err != nil {
				t.Fatal(err)
			}
			defer p.LeaveGroup(ifi, tt.grp)
		} else {
			if err := p.JoinSourceSpecificGroup(ifi, tt.grp, tt.src); err != nil {
				switch runtime.GOOS {
				case "freebsd", "linux":
				default: // platforms that don't support MLDv2 fail here
					t.Logf("not supported on %s", runtime.GOOS)
					continue
				}
				t.Fatal(err)
			}
			defer p.LeaveSourceSpecificGroup(ifi, tt.grp, tt.src)
		}
		if err := p.SetMulticastInterface(ifi); err != nil {
			t.Fatal(err)
		}
		if _, err := p.MulticastInterface(); err != nil {
			t.Fatal(err)
		}
		if err := p.SetMulticastLoopback(true); err != nil {
			t.Fatal(err)
		}
		if _, err := p.MulticastLoopback(); err != nil {
			t.Fatal(err)
		}
		cm := ipv6.ControlMessage{
			TrafficClass: iana.DiffServAF11 | iana.CongestionExperienced,
			Src:          net.IPv6loopback,
			IfIndex:      ifi.Index,
		}
		cf := ipv6.FlagTrafficClass | ipv6.FlagHopLimit | ipv6.FlagSrc | ipv6.FlagDst | ipv6.FlagInterface | ipv6.FlagPathMTU

		var f ipv6.ICMPFilter
		f.SetAll(true)
		f.Accept(ipv6.ICMPTypeEchoReply)
		if err := p.SetICMPFilter(&f); err != nil {
			t.Fatal(err)
		}

		var psh []byte
		for i, toggle := range []bool{true, false, true} {
			if toggle {
				psh = nil
				if err := p.SetChecksum(true, 2); err != nil {
					t.Fatal(err)
				}
			} else {
				psh = pshicmp
				// Some platforms never allow to
				// disable the kernel checksum
				// processing.
				p.SetChecksum(false, -1)
			}
			wb, err := (&icmp.Message{
				Type: ipv6.ICMPTypeEchoRequest, Code: 0,
				Body: &icmp.Echo{
					ID: os.Getpid() & 0xffff, Seq: i + 1,
					Data: []byte("HELLO-R-U-THERE"),
				},
			}).Marshal(psh)
			if err != nil {
				t.Fatal(err)
			}
			if err := p.SetControlMessage(cf, toggle); err != nil {
				if nettest.ProtocolNotSupported(err) {
					t.Logf("not supported on %s", runtime.GOOS)
					continue
				}
				t.Fatal(err)
			}
			if err := p.SetDeadline(time.Now().Add(200 * time.Millisecond)); err != nil {
				t.Fatal(err)
			}
			cm.HopLimit = i + 1
			if n, err := p.WriteTo(wb, &cm, tt.grp); err != nil {
				t.Fatal(err)
			} else if n != len(wb) {
				t.Fatalf("got %v; want %v", n, len(wb))
			}
			rb := make([]byte, 128)
			if n, _, _, err := p.ReadFrom(rb); err != nil {
				switch runtime.GOOS {
				case "darwin": // older darwin kernels have some limitation on receiving icmp packet through raw socket
					t.Logf("not supported on %s", runtime.GOOS)
					continue
				}
				t.Fatal(err)
			} else {
				if m, err := icmp.ParseMessage(iana.ProtocolIPv6ICMP, rb[:n]); err != nil {
					t.Fatal(err)
				} else if m.Type != ipv6.ICMPTypeEchoReply || m.Code != 0 {
					t.Fatalf("got type=%v, code=%v; want type=%v, code=%v", m.Type, m.Code, ipv6.ICMPTypeEchoReply, 0)
				}
			}
		}
	}
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/control_rfc2292_unix.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build darwin

package ipv6

import (
	"syscall"
	"unsafe"

	"golang.org/x/net/internal/iana"
)

func marshal2292HopLimit(b []byte, cm *ControlMessage) []byte {
	m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
	m.Level = iana.ProtocolIPv6
	m.Type = sysIPV6_2292HOPLIMIT
	m.SetLen(syscall.CmsgLen(4))
	if cm != nil {
		data := b[syscall.CmsgLen(0):]
		nativeEndian.PutUint32(data[:4], uint32(cm.HopLimit))
	}
	return b[syscall.CmsgSpace(4):]
}

func marshal2292PacketInfo(b []byte, cm *ControlMessage) []byte {
	m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
	m.Level = iana.ProtocolIPv6
	m.Type = sysIPV6_2292PKTINFO
	m.SetLen(syscall.CmsgLen(sysSizeofInet6Pktinfo))
	if cm != nil {
		pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
		if ip := cm.Src.To16(); ip != nil && ip.To4() == nil {
			copy(pi.Addr[:], ip)
		}
		if cm.IfIndex > 0 {
			pi.setIfindex(cm.IfIndex)
		}
	}
	return b[syscall.CmsgSpace(sysSizeofInet6Pktinfo):]
}

func marshal2292NextHop(b []byte, cm *ControlMessage) []byte {
	m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0]))
	m.Level = iana.ProtocolIPv6
	m.Type = sysIPV6_2292NEXTHOP
	m.SetLen(syscall.CmsgLen(sysSizeofSockaddrInet6))
	if cm != nil {
		sa := (*sysSockaddrInet6)(unsafe.Pointer(&b[syscall.CmsgLen(0)]))
		sa.setSockaddr(cm.NextHop, cm.IfIndex)
	}
	return b[syscall.CmsgSpace(sysSizeofSockaddrInet6):]
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_netbsd.go

// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs defs_netbsd.go

package ipv6

const (
	sysIPV6_UNICAST_HOPS   = 0x4
	sysIPV6_MULTICAST_IF   = 0x9
	sysIPV6_MULTICAST_HOPS = 0xa
	sysIPV6_MULTICAST_LOOP = 0xb
	sysIPV6_JOIN_GROUP     = 0xc
	sysIPV6_LEAVE_GROUP    = 0xd
	sysIPV6_PORTRANGE      = 0xe
	sysICMP6_FILTER        = 0x12

	sysIPV6_CHECKSUM = 0x1a
	sysIPV6_V6ONLY   = 0x1b

	sysIPV6_IPSEC_POLICY = 0x1c

	sysIPV6_RTHDRDSTOPTS = 0x23

	sysIPV6_RECVPKTINFO  = 0x24
	sysIPV6_RECVHOPLIMIT = 0x25
	sysIPV6_RECVRTHDR    = 0x26
	sysIPV6_RECVHOPOPTS  = 0x27
	sysIPV6_RECVDSTOPTS  = 0x28

	sysIPV6_USE_MIN_MTU = 0x2a
	sysIPV6_RECVPATHMTU = 0x2b
	sysIPV6_PATHMTU     = 0x2c

	sysIPV6_PKTINFO  = 0x2e
	sysIPV6_HOPLIMIT = 0x2f
	sysIPV6_NEXTHOP  = 0x30
	sysIPV6_HOPOPTS  = 0x31
	sysIPV6_DSTOPTS  = 0x32
	sysIPV6_RTHDR    = 0x33

	sysIPV6_RECVTCLASS = 0x39

	sysIPV6_TCLASS   = 0x3d
	sysIPV6_DONTFRAG = 0x3e

	sysIPV6_PORTRANGE_DEFAULT = 0x0
	sysIPV6_PORTRANGE_HIGH    = 0x1
	sysIPV6_PORTRANGE_LOW     = 0x2

	sysSizeofSockaddrInet6 = 0x1c
	sysSizeofInet6Pktinfo  = 0x14
	sysSizeofIPv6Mtuinfo   = 0x20

	sysSizeofIPv6Mreq = 0x14

	sysSizeofICMPv6Filter = 0x20
)

type sysSockaddrInet6 struct {
	Len      uint8
	Family   uint8
	Port     uint16
	Flowinfo uint32
	Addr     [16]byte /* in6_addr */
	Scope_id uint32
}

type sysInet6Pktinfo struct {
	Addr    [16]byte /* in6_addr */
	Ifindex uint32
}

type sysIPv6Mtuinfo struct {
	Addr sysSockaddrInet6
	Mtu  uint32
}

type sysIPv6Mreq struct {
	Multiaddr [16]byte /* in6_addr */
	Interface uint32
}

type sysICMPv6Filter struct {
	Filt [8]uint32
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_mips64le.go

// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs defs_linux.go

// +build linux,mips64le

package ipv6

const (
	sysIPV6_ADDRFORM       = 0x1
	sysIPV6_2292PKTINFO    = 0x2
	sysIPV6_2292HOPOPTS    = 0x3
	sysIPV6_2292DSTOPTS    = 0x4
	sysIPV6_2292RTHDR      = 0x5
	sysIPV6_2292PKTOPTIONS = 0x6
	sysIPV6_CHECKSUM       = 0x7
	sysIPV6_2292HOPLIMIT   = 0x8
	sysIPV6_NEXTHOP        = 0x9
	sysIPV6_FLOWINFO       = 0xb

	sysIPV6_UNICAST_HOPS        = 0x10
	sysIPV6_MULTICAST_IF        = 0x11
	sysIPV6_MULTICAST_HOPS      = 0x12
	sysIPV6_MULTICAST_LOOP      = 0x13
	sysIPV6_ADD_MEMBERSHIP      = 0x14
	sysIPV6_DROP_MEMBERSHIP     = 0x15
	sysMCAST_JOIN_GROUP         = 0x2a
	sysMCAST_LEAVE_GROUP        = 0x2d
	sysMCAST_JOIN_SOURCE_GROUP  = 0x2e
	sysMCAST_LEAVE_SOURCE_GROUP = 0x2f
	sysMCAST_BLOCK_SOURCE       = 0x2b
	sysMCAST_UNBLOCK_SOURCE     = 0x2c
	sysMCAST_MSFILTER           = 0x30

	sysIPV6_ROUTER_ALERT  = 0x16
	sysIPV6_MTU_DISCOVER  = 0x17
	sysIPV6_MTU           = 0x18
	sysIPV6_RECVERR       = 0x19
	sysIPV6_V6ONLY        = 0x1a
	sysIPV6_JOIN_ANYCAST  = 0x1b
	sysIPV6_LEAVE_ANYCAST = 0x1c

	sysIPV6_FLOWLABEL_MGR = 0x20
	sysIPV6_FLOWINFO_SEND = 0x21

	sysIPV6_IPSEC_POLICY = 0x22
	sysIPV6_XFRM_POLICY  = 0x23

	sysIPV6_RECVPKTINFO  = 0x31
	sysIPV6_PKTINFO      = 0x32
	sysIPV6_RECVHOPLIMIT = 0x33
	sysIPV6_HOPLIMIT     = 0x34
	sysIPV6_RECVHOPOPTS  = 0x35
	sysIPV6_HOPOPTS      = 0x36
	sysIPV6_RTHDRDSTOPTS = 0x37
	sysIPV6_RECVRTHDR    = 0x38
	sysIPV6_RTHDR        = 0x39
	sysIPV6_RECVDSTOPTS  = 0x3a
	sysIPV6_DSTOPTS      = 0x3b
	sysIPV6_RECVPATHMTU  = 0x3c
	sysIPV6_PATHMTU      = 0x3d
	sysIPV6_DONTFRAG     = 0x3e

	sysIPV6_RECVTCLASS = 0x42
	sysIPV6_TCLASS     = 0x43

	sysIPV6_ADDR_PREFERENCES = 0x48

	sysIPV6_PREFER_SRC_TMP            = 0x1
	sysIPV6_PREFER_SRC_PUBLIC         = 0x2
	sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100
	sysIPV6_PREFER_SRC_COA            = 0x4
	sysIPV6_PREFER_SRC_HOME           = 0x400
	sysIPV6_PREFER_SRC_CGA            = 0x8
	sysIPV6_PREFER_SRC_NONCGA         = 0x800

	sysIPV6_MINHOPCOUNT = 0x49

	sysIPV6_ORIGDSTADDR     = 0x4a
	sysIPV6_RECVORIGDSTADDR = 0x4a
	sysIPV6_TRANSPARENT     = 0x4b
	sysIPV6_UNICAST_IF      = 0x4c

	sysICMPV6_FILTER = 0x1

	sysICMPV6_FILTER_BLOCK       = 0x1
	sysICMPV6_FILTER_PASS        = 0x2
	sysICMPV6_FILTER_BLOCKOTHERS = 0x3
	sysICMPV6_FILTER_PASSONLY    = 0x4

	sysSOL_SOCKET       = 0x1
	sysSO_ATTACH_FILTER = 0x1a

	sysSizeofKernelSockaddrStorage = 0x80
	sysSizeofSockaddrInet6         = 0x1c
	sysSizeofInet6Pktinfo          = 0x14
	sysSizeofIPv6Mtuinfo           = 0x20
	sysSizeofIPv6FlowlabelReq      = 0x20

	sysSizeofIPv6Mreq       = 0x14
	sysSizeofGroupReq       = 0x88
	sysSizeofGroupSourceReq = 0x108

	sysSizeofICMPv6Filter = 0x20
)

type sysKernelSockaddrStorage struct {
	Family  uint16
	X__data [126]int8
}

type sysSockaddrInet6 struct {
	Family   uint16
	Port     uint16
	Flowinfo uint32
	Addr     [16]byte /* in6_addr */
	Scope_id uint32
}

type sysInet6Pktinfo struct {
	Addr    [16]byte /* in6_addr */
	Ifindex int32
}

type sysIPv6Mtuinfo struct {
	Addr sysSockaddrInet6
	Mtu  uint32
}

type sysIPv6FlowlabelReq struct {
	Dst        [16]byte /* in6_addr */
	Label      uint32
	Action     uint8
	Share      uint8
	Flags      uint16
	Expires    uint16
	Linger     uint16
	X__flr_pad uint32
}

type sysIPv6Mreq struct {
	Multiaddr [16]byte /* in6_addr */
	Ifindex   int32
}

type sysGroupReq struct {
	Interface uint32
	Pad_cgo_0 [4]byte
	Group     sysKernelSockaddrStorage
}

type sysGroupSourceReq struct {
	Interface uint32
	Pad_cgo_0 [4]byte
	Group     sysKernelSockaddrStorage
	Source    sysKernelSockaddrStorage
}

type sysICMPv6Filter struct {
	Data [8]uint32
}

type sysSockFProg struct {
	Len       uint16
	Pad_cgo_0 [6]byte
	Filter    *sysSockFilter
}

type sysSockFilter struct {
	Code uint16
	Jt   uint8
	Jf   uint8
	K    uint32
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/bpfopt_stub.go

// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build !linux

package ipv6

import "golang.org/x/net/bpf"

// SetBPF attaches a BPF program to the connection.
//
// Only supported on Linux.
func (c *dgramOpt) SetBPF(filter []bpf.RawInstruction) error {
	return errOpNoSupport
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/icmp_windows.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6

type sysICMPv6Filter struct {
	// TODO(mikio): implement this
}

func (f *sysICMPv6Filter) accept(typ ICMPType) {
	// TODO(mikio): implement this
}

func (f *sysICMPv6Filter) block(typ ICMPType) {
	// TODO(mikio): implement this
}

func (f *sysICMPv6Filter) setAll(block bool) {
	// TODO(mikio): implement this
}

func (f *sysICMPv6Filter) willBlock(typ ICMPType) bool {
	// TODO(mikio): implement this
	return false
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sys_darwin.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ipv6

import (
	"net"
	"syscall"
	"unsafe"

	"golang.org/x/net/internal/iana"
)

var (
	ctlOpts = [ctlMax]ctlOpt{
		ctlHopLimit:   {sysIPV6_2292HOPLIMIT, 4, marshal2292HopLimit, parseHopLimit},
		ctlPacketInfo: {sysIPV6_2292PKTINFO, sysSizeofInet6Pktinfo, marshal2292PacketInfo, parsePacketInfo},
	}

	sockOpts = [ssoMax]sockOpt{
		ssoHopLimit:           {iana.ProtocolIPv6, sysIPV6_UNICAST_HOPS, ssoTypeInt},
		ssoMulticastInterface: {iana.ProtocolIPv6, sysIPV6_MULTICAST_IF, ssoTypeInterface},
		ssoMulticastHopLimit:  {iana.ProtocolIPv6, sysIPV6_MULTICAST_HOPS, ssoTypeInt},
		ssoMulticastLoopback:  {iana.ProtocolIPv6, sysIPV6_MULTICAST_LOOP, ssoTypeInt},
		ssoReceiveHopLimit:    {iana.ProtocolIPv6, sysIPV6_2292HOPLIMIT, ssoTypeInt},
		ssoReceivePacketInfo:  {iana.ProtocolIPv6, sysIPV6_2292PKTINFO, ssoTypeInt},
		ssoChecksum:           {iana.ProtocolIPv6, sysIPV6_CHECKSUM, ssoTypeInt},
		ssoICMPFilter:         {iana.ProtocolIPv6ICMP, sysICMP6_FILTER, ssoTypeICMPFilter},
		ssoJoinGroup:          {iana.ProtocolIPv6, sysIPV6_JOIN_GROUP, ssoTypeIPMreq},
		ssoLeaveGroup:         {iana.ProtocolIPv6, sysIPV6_LEAVE_GROUP, ssoTypeIPMreq},
	}
)

func init() {
	// Seems like kern.osreldate is veiled on latest OS X. We use
	// kern.osrelease instead.
	osver, err := syscall.Sysctl("kern.osrelease")
	if err != nil {
		return
	}
	var i int
	for i = range osver {
		if osver[i] == '.' {
			break
		}
	}
	// The IP_PKTINFO and protocol-independent multicast API were
	// introduced in OS X 10.7 (Darwin 11.0.0). But it looks like
	// those features require OS X 10.8 (Darwin 12.0.0) and above.
	// See http://support.apple.com/kb/HT1633.
	if i > 2 || i == 2 && osver[0] >= '1' && osver[1] >= '2' {
		ctlOpts[ctlTrafficClass].name = sysIPV6_TCLASS
		ctlOpts[ctlTrafficClass].length = 4
		ctlOpts[ctlTrafficClass].marshal = marshalTrafficClass
		ctlOpts[ctlTrafficClass].parse = parseTrafficClass
		ctlOpts[ctlHopLimit].name = sysIPV6_HOPLIMIT
		ctlOpts[ctlHopLimit].marshal = marshalHopLimit
		ctlOpts[ctlPacketInfo].name = sysIPV6_PKTINFO
		ctlOpts[ctlPacketInfo].marshal = marshalPacketInfo
		ctlOpts[ctlNextHop].name = sysIPV6_NEXTHOP
		ctlOpts[ctlNextHop].length = sysSizeofSockaddrInet6
		ctlOpts[ctlNextHop].marshal = marshalNextHop
		ctlOpts[ctlNextHop].parse = parseNextHop
		ctlOpts[ctlPathMTU].name = sysIPV6_PATHMTU
		ctlOpts[ctlPathMTU].length = sysSizeofIPv6Mtuinfo
		ctlOpts[ctlPathMTU].marshal = marshalPathMTU
		ctlOpts[ctlPathMTU].parse = parsePathMTU
		sockOpts[ssoTrafficClass].level = iana.ProtocolIPv6
		sockOpts[ssoTrafficClass].name = sysIPV6_TCLASS
		sockOpts[ssoTrafficClass].typ = ssoTypeInt
		sockOpts[ssoReceiveTrafficClass].level = iana.ProtocolIPv6
		sockOpts[ssoReceiveTrafficClass].name = sysIPV6_RECVTCLASS
		sockOpts[ssoReceiveTrafficClass].typ = ssoTypeInt
		sockOpts[ssoReceiveHopLimit].name = sysIPV6_RECVHOPLIMIT
		sockOpts[ssoReceivePacketInfo].name = sysIPV6_RECVPKTINFO
		sockOpts[ssoReceivePathMTU].level = iana.ProtocolIPv6
		sockOpts[ssoReceivePathMTU].name = sysIPV6_RECVPATHMTU
		sockOpts[ssoReceivePathMTU].typ = ssoTypeInt
		sockOpts[ssoPathMTU].level = iana.ProtocolIPv6
		sockOpts[ssoPathMTU].name = sysIPV6_PATHMTU
		sockOpts[ssoPathMTU].typ = ssoTypeMTUInfo
		sockOpts[ssoJoinGroup].name = sysMCAST_JOIN_GROUP
		sockOpts[ssoJoinGroup].typ = ssoTypeGroupReq
		sockOpts[ssoLeaveGroup].name = sysMCAST_LEAVE_GROUP
		sockOpts[ssoLeaveGroup].typ = ssoTypeGroupReq
		sockOpts[ssoJoinSourceGroup].level = iana.ProtocolIPv6
		sockOpts[ssoJoinSourceGroup].name = sysMCAST_JOIN_SOURCE_GROUP
		sockOpts[ssoJoinSourceGroup].typ = ssoTypeGroupSourceReq
		sockOpts[ssoLeaveSourceGroup].level = iana.ProtocolIPv6
		sockOpts[ssoLeaveSourceGroup].name = sysMCAST_LEAVE_SOURCE_GROUP
		sockOpts[ssoLeaveSourceGroup].typ = ssoTypeGroupSourceReq
		sockOpts[ssoBlockSourceGroup].level = iana.ProtocolIPv6
		sockOpts[ssoBlockSourceGroup].name = sysMCAST_BLOCK_SOURCE
		sockOpts[ssoBlockSourceGroup].typ = ssoTypeGroupSourceReq
		sockOpts[ssoUnblockSourceGroup].level = iana.ProtocolIPv6
		sockOpts[ssoUnblockSourceGroup].name = sysMCAST_UNBLOCK_SOURCE
		sockOpts[ssoUnblockSourceGroup].typ = ssoTypeGroupSourceReq
	}
}

func (sa *sysSockaddrInet6) setSockaddr(ip net.IP, i int) {
	sa.Len = sysSizeofSockaddrInet6
	sa.Family = syscall.AF_INET6
	copy(sa.Addr[:], ip)
	sa.Scope_id = uint32(i)
}

func (pi *sysInet6Pktinfo) setIfindex(i int) {
	pi.Ifindex = uint32(i)
}

func (mreq *sysIPv6Mreq) setIfindex(i int) {
	mreq.Interface = uint32(i)
}

func (gr *sysGroupReq) setGroup(grp net.IP) {
	sa := (*sysSockaddrInet6)(unsafe.Pointer(&gr.Pad_cgo_0[0]))
	sa.Len = sysSizeofSockaddrInet6
	sa.Family = syscall.AF_INET6
	copy(sa.Addr[:], grp)
}

func (gsr *sysGroupSourceReq) setSourceGroup(grp, src net.IP) {
	sa := (*sysSockaddrInet6)(unsafe.Pointer(&gsr.Pad_cgo_0[0]))
	sa.Len = sysSizeofSockaddrInet6
	sa.Family = syscall.AF_INET6
	copy(sa.Addr[:], grp)
	sa = (*sysSockaddrInet6)(unsafe.Pointer(&gsr.Pad_cgo_1[0]))
	sa.Len = sysSizeofSockaddrInet6
	sa.Family = syscall.AF_INET6
	copy(sa.Addr[:], src)
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/payload_cmsg.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build !nacl,!plan9,!windows

package ipv6

import (
	"net"
	"syscall"
)

// ReadFrom reads a payload of the received IPv6 datagram, from the
// endpoint c, copying the payload into b. It returns the number of
// bytes copied into b, the control message cm and the source address
// src of the received datagram.
func (c *payloadHandler) ReadFrom(b []byte) (n int, cm *ControlMessage, src net.Addr, err error) {
	if !c.ok() {
		return 0, nil, nil, syscall.EINVAL
	}
	oob := newControlMessage(&c.rawOpt)
	var oobn int
	switch c := c.PacketConn.(type) {
	case *net.UDPConn:
		if n, oobn, _, src, err = c.ReadMsgUDP(b, oob); err != nil {
			return 0, nil, nil, err
		}
	case *net.IPConn:
		if n, oobn, _, src, err = c.ReadMsgIP(b, oob); err != nil {
			return 0, nil, nil, err
		}
	default:
		return 0, nil, nil, errInvalidConnType
	}
	if cm, err = parseControlMessage(oob[:oobn]); err != nil {
		return 0, nil, nil, err
	}
	if cm != nil {
		cm.Src = netAddrToIP16(src)
	}
	return
}

// WriteTo writes a payload of the IPv6 datagram, to the destination
// address dst through the endpoint c, copying the payload from b. It
// returns the number of bytes written. The control message cm allows
// the IPv6 header fields and the datagram path to be specified. The
// cm may be nil if control of the outgoing datagram is not required.
func (c *payloadHandler) WriteTo(b []byte, cm *ControlMessage, dst net.Addr) (n int, err error) {
	if !c.ok() {
		return 0, syscall.EINVAL
	}
	oob := marshalControlMessage(cm)
	if dst == nil {
		return 0, errMissingAddress
	}
	switch c := c.PacketConn.(type) {
	case *net.UDPConn:
		n, _, err = c.WriteMsgUDP(b, oob, dst.(*net.UDPAddr))
	case *net.IPConn:
		n, _, err = c.WriteMsgIP(b, oob, dst.(*net.IPAddr))
	default:
		return 0, errInvalidConnType
	}
	if err != nil {
		return 0, err
	}
	return
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt_ssmreq_stub.go

// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build !darwin,!freebsd,!linux

package ipv6

import "net"

func setsockoptGroupReq(fd int, opt *sockOpt, ifi *net.Interface, grp net.IP) error {
	return errOpNoSupport
}

func setsockoptGroupSourceReq(fd int, opt *sockOpt, ifi *net.Interface, grp, src net.IP) error {
	return errOpNoSupport
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/genericopt_posix.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build darwin dragonfly freebsd linux netbsd openbsd windows

package ipv6

import "syscall"

// TrafficClass returns the traffic class field value for outgoing
// packets.
func (c *genericOpt) TrafficClass() (int, error) {
	if !c.ok() {
		return 0, syscall.EINVAL
	}
	fd, err := c.sysfd()
	if err != nil {
		return 0, err
	}
	return getInt(fd, &sockOpts[ssoTrafficClass])
}

// SetTrafficClass sets the traffic class field value for future
// outgoing packets.
func (c *genericOpt) SetTrafficClass(tclass int) error {
	if !c.ok() {
		return syscall.EINVAL
	}
	fd, err := c.sysfd()
	if err != nil {
		return err
	}
	return setInt(fd, &sockOpts[ssoTrafficClass], tclass)
}

// HopLimit returns the hop limit field value for outgoing packets.
func (c *genericOpt) HopLimit() (int, error) {
	if !c.ok() {
		return 0, syscall.EINVAL
	}
	fd, err := c.sysfd()
	if err != nil {
		return 0, err
	}
	return getInt(fd, &sockOpts[ssoHopLimit])
}

// SetHopLimit sets the hop limit field value for future outgoing
// packets.
func (c *genericOpt) SetHopLimit(hoplim int) error {
	if !c.ok() {
		return syscall.EINVAL
	}
	fd, err := c.sysfd()
	if err != nil {
		return err
	}
	return setInt(fd, &sockOpts[ssoHopLimit], hoplim)
}

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/defs_dragonfly.go

// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build ignore

// +godefs map struct_in6_addr [16]byte /* in6_addr */

package ipv6

/*
#include <sys/param.h>
#include <sys/socket.h>

#include <netinet/in.h>
#include <netinet/icmp6.h>
*/
import "C"

const (
	sysIPV6_UNICAST_HOPS   = C.IPV6_UNICAST_HOPS
	sysIPV6_MULTICAST_IF   = C.IPV6_MULTICAST_IF
	sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS
	sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP
	sysIPV6_JOIN_GROUP     = C.IPV6_JOIN_GROUP
	sysIPV6_LEAVE_GROUP    = C.IPV6_LEAVE_GROUP
	sysIPV6_PORTRANGE      = C.IPV6_PORTRANGE
	sysICMP6_FILTER        = C.ICMP6_FILTER

	sysIPV6_CHECKSUM = C.IPV6_CHECKSUM
	sysIPV6_V6ONLY   = C.IPV6_V6ONLY

	sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY

	sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS

	sysIPV6_RECVPKTINFO  = C.IPV6_RECVPKTINFO
	sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT
	sysIPV6_RECVRTHDR    = C.IPV6_RECVRTHDR
	sysIPV6_RECVHOPOPTS  = C.IPV6_RECVHOPOPTS
	sysIPV6_RECVDSTOPTS  = C.IPV6_RECVDSTOPTS

	sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU
	sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU

	sysIPV6_PATHMTU = C.IPV6_PATHMTU

	sysIPV6_PKTINFO  = C.IPV6_PKTINFO
	sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT
	sysIPV6_NEXTHOP  = C.IPV6_NEXTHOP
	sysIPV6_HOPOPTS  = C.IPV6_HOPOPTS
	sysIPV6_DSTOPTS  = C.IPV6_DSTOPTS
	sysIPV6_RTHDR    = C.IPV6_RTHDR

	sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS

	sysIPV6_AUTOFLOWLABEL = C.IPV6_AUTOFLOWLABEL

	sysIPV6_TCLASS   = C.IPV6_TCLASS
	sysIPV6_DONTFRAG = C.IPV6_DONTFRAG

	sysIPV6_PREFER_TEMPADDR = C.IPV6_PREFER_TEMPADDR

	sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT
	sysIPV6_PORTRANGE_HIGH    = C.IPV6_PORTRANGE_HIGH
	sysIPV6_PORTRANGE_LOW     = C.IPV6_PORTRANGE_LOW

	sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6
	sysSizeofInet6Pktinfo  = C.sizeof_struct_in6_pktinfo
	sysSizeofIPv6Mtuinfo   = C.sizeof_struct_ip6_mtuinfo

	sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq

	sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter
)

type sysSockaddrInet6 C.struct_sockaddr_in6

type sysInet6Pktinfo C.struct_in6_pktinfo

type sysIPv6Mtuinfo C.struct_ip6_mtuinfo

type sysIPv6Mreq C.struct_ipv6_mreq

type sysICMPv6Filter C.struct_icmp6_filter

lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sys_bsd.go

// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build dragonfly netbsd openbsd

package ipv6

import (
	"net"
	"syscall"

	"golang.org/x/net/internal/iana"
)

var (
	ctlOpts = [ctlMax]ctlOpt{
		ctlTrafficClass: {sysIPV6_TCLASS, 4, marshalTrafficClass, parseTrafficClass},
		ctlHopLimit:     {sysIPV6_HOPLIMIT, 4, marshalHopLimit, parseHopLimit},
		ctlPacketInfo:   {sysIPV6_PKTINFO, sysSizeofInet6Pktinfo, marshalPacketInfo, parsePacketInfo},
		ctlNextHop:      {sysIPV6_NEXTHOP, sysSizeofSockaddrInet6, marshalNextHop, parseNextHop},
		ctlPathMTU:      {sysIPV6_PATHMTU, sysSizeofIPv6Mtuinfo, marshalPathMTU, parsePathMTU},
	}

	sockOpts = [ssoMax]sockOpt{
		ssoTrafficClass:        {iana.ProtocolIPv6, sysIPV6_TCLASS, ssoTypeInt},
		ssoHopLimit:            {iana.ProtocolIPv6, sysIPV6_UNICAST_HOPS, ssoTypeInt},
		ssoMulticastInterface:  {iana.ProtocolIPv6, sysIPV6_MULTICAST_IF, ssoTypeInterface},
		ssoMulticastHopLimit:   {iana.ProtocolIPv6, sysIPV6_MULTICAST_HOPS, ssoTypeInt},
		ssoMulticastLoopback:   {iana.ProtocolIPv6, sysIPV6_MULTICAST_LOOP, ssoTypeInt},
		ssoReceiveTrafficClass: {iana.ProtocolIPv6, sysIPV6_RECVTCLASS, ssoTypeInt},
		ssoReceiveHopLimit:     {iana.ProtocolIPv6, sysIPV6_RECVHOPLIMIT, ssoTypeInt},
		ssoReceivePacketInfo:   {iana.ProtocolIPv6, sysIPV6_RECVPKTINFO, ssoTypeInt},
		ssoReceivePathMTU:      {iana.ProtocolIPv6, sysIPV6_RECVPATHMTU, ssoTypeInt},
		ssoPathMTU:             {iana.ProtocolIPv6, sysIPV6_PATHMTU, ssoTypeMTUInfo},
		ssoChecksum:            {iana.ProtocolIPv6, sysIPV6_CHECKSUM, ssoTypeInt},
		ssoICMPFilter:          {iana.ProtocolIPv6ICMP, sysICMP6_FILTER, ssoTypeICMPFilter},
		ssoJoinGroup:           {iana.ProtocolIPv6, sysIPV6_JOIN_GROUP, ssoTypeIPMreq},
		ssoLeaveGroup:          {iana.ProtocolIPv6, sysIPV6_LEAVE_GROUP, ssoTypeIPMreq},
	}
)

func (sa *sysSockaddrInet6)
setSockaddr(ip net.IP, i int) { sa.Len = sysSizeofSockaddrInet6 sa.Family = syscall.AF_INET6 copy(sa.Addr[:], ip) sa.Scope_id = uint32(i) } func (pi *sysInet6Pktinfo) setIfindex(i int) { pi.Ifindex = uint32(i) } func (mreq *sysIPv6Mreq) setIfindex(i int) { mreq.Interface = uint32(i) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt_stub.go0000644061062106075000000000050112721405224025461 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build nacl plan9 solaris package ipv6 import "net" func getMTUInfo(fd int, opt *sockOpt) (*net.Interface, int, error) { return nil, 0, errOpNoSupport } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/control_rfc3542_unix.go0000644061062106075000000000527212721405224026647 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
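// The marshal/parse helpers in control_rfc3542_unix.go below back the
// package's public ControlMessage API. A hedged usage sketch (illustrative
// only, not part of the vendored file; assumes a platform where these
// RFC 3542 options are supported) of enabling and reading the ancillary
// data these helpers encode:
//
//	c, err := net.ListenPacket("udp6", "[::1]:0")
//	if err != nil {
//		// handle error
//	}
//	p := ipv6.NewPacketConn(c)
//	// Ask the kernel to deliver hop limit and destination per datagram.
//	_ = p.SetControlMessage(ipv6.FlagHopLimit|ipv6.FlagDst, true)
//	b := make([]byte, 1500)
//	_, cm, _, _ := p.ReadFrom(b) // cm.HopLimit and cm.Dst are filled in when supported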
// +build darwin dragonfly freebsd linux netbsd openbsd package ipv6 import ( "syscall" "unsafe" "golang.org/x/net/internal/iana" ) func marshalTrafficClass(b []byte, cm *ControlMessage) []byte { m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0])) m.Level = iana.ProtocolIPv6 m.Type = sysIPV6_TCLASS m.SetLen(syscall.CmsgLen(4)) if cm != nil { data := b[syscall.CmsgLen(0):] nativeEndian.PutUint32(data[:4], uint32(cm.TrafficClass)) } return b[syscall.CmsgSpace(4):] } func parseTrafficClass(cm *ControlMessage, b []byte) { cm.TrafficClass = int(nativeEndian.Uint32(b[:4])) } func marshalHopLimit(b []byte, cm *ControlMessage) []byte { m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0])) m.Level = iana.ProtocolIPv6 m.Type = sysIPV6_HOPLIMIT m.SetLen(syscall.CmsgLen(4)) if cm != nil { data := b[syscall.CmsgLen(0):] nativeEndian.PutUint32(data[:4], uint32(cm.HopLimit)) } return b[syscall.CmsgSpace(4):] } func parseHopLimit(cm *ControlMessage, b []byte) { cm.HopLimit = int(nativeEndian.Uint32(b[:4])) } func marshalPacketInfo(b []byte, cm *ControlMessage) []byte { m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0])) m.Level = iana.ProtocolIPv6 m.Type = sysIPV6_PKTINFO m.SetLen(syscall.CmsgLen(sysSizeofInet6Pktinfo)) if cm != nil { pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[syscall.CmsgLen(0)])) if ip := cm.Src.To16(); ip != nil && ip.To4() == nil { copy(pi.Addr[:], ip) } if cm.IfIndex > 0 { pi.setIfindex(cm.IfIndex) } } return b[syscall.CmsgSpace(sysSizeofInet6Pktinfo):] } func parsePacketInfo(cm *ControlMessage, b []byte) { pi := (*sysInet6Pktinfo)(unsafe.Pointer(&b[0])) cm.Dst = pi.Addr[:] cm.IfIndex = int(pi.Ifindex) } func marshalNextHop(b []byte, cm *ControlMessage) []byte { m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0])) m.Level = iana.ProtocolIPv6 m.Type = sysIPV6_NEXTHOP m.SetLen(syscall.CmsgLen(sysSizeofSockaddrInet6)) if cm != nil { sa := (*sysSockaddrInet6)(unsafe.Pointer(&b[syscall.CmsgLen(0)])) sa.setSockaddr(cm.NextHop, cm.IfIndex) } return 
b[syscall.CmsgSpace(sysSizeofSockaddrInet6):] } func parseNextHop(cm *ControlMessage, b []byte) { } func marshalPathMTU(b []byte, cm *ControlMessage) []byte { m := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0])) m.Level = iana.ProtocolIPv6 m.Type = sysIPV6_PATHMTU m.SetLen(syscall.CmsgLen(sysSizeofIPv6Mtuinfo)) return b[syscall.CmsgSpace(sysSizeofIPv6Mtuinfo):] } func parsePathMTU(cm *ControlMessage, b []byte) { mi := (*sysIPv6Mtuinfo)(unsafe.Pointer(&b[0])) cm.Dst = mi.Addr.Addr[:] cm.IfIndex = int(mi.Addr.Scope_id) cm.MTU = int(mi.Mtu) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/payload_nocmsg.go0000644061062106075000000000243312721405224025747 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build nacl plan9 windows package ipv6 import ( "net" "syscall" ) // ReadFrom reads a payload of the received IPv6 datagram, from the // endpoint c, copying the payload into b. It returns the number of // bytes copied into b, the control message cm and the source address // src of the received datagram. func (c *payloadHandler) ReadFrom(b []byte) (n int, cm *ControlMessage, src net.Addr, err error) { if !c.ok() { return 0, nil, nil, syscall.EINVAL } if n, src, err = c.PacketConn.ReadFrom(b); err != nil { return 0, nil, nil, err } return } // WriteTo writes a payload of the IPv6 datagram, to the destination // address dst through the endpoint c, copying the payload from b. It // returns the number of bytes written. The control message cm allows // the IPv6 header fields and the datagram path to be specified. The // cm may be nil if control of the outgoing datagram is not required. 
func (c *payloadHandler) WriteTo(b []byte, cm *ControlMessage, dst net.Addr) (n int, err error) { if !c.ok() { return 0, syscall.EINVAL } if dst == nil { return 0, errMissingAddress } return c.PacketConn.WriteTo(b, dst) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_freebsd_arm.go0000644061062106075000000000453512721405224026316 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_freebsd.go package ipv6 const ( sysIPV6_UNICAST_HOPS = 0x4 sysIPV6_MULTICAST_IF = 0x9 sysIPV6_MULTICAST_HOPS = 0xa sysIPV6_MULTICAST_LOOP = 0xb sysIPV6_JOIN_GROUP = 0xc sysIPV6_LEAVE_GROUP = 0xd sysIPV6_PORTRANGE = 0xe sysICMP6_FILTER = 0x12 sysIPV6_CHECKSUM = 0x1a sysIPV6_V6ONLY = 0x1b sysIPV6_IPSEC_POLICY = 0x1c sysIPV6_RTHDRDSTOPTS = 0x23 sysIPV6_RECVPKTINFO = 0x24 sysIPV6_RECVHOPLIMIT = 0x25 sysIPV6_RECVRTHDR = 0x26 sysIPV6_RECVHOPOPTS = 0x27 sysIPV6_RECVDSTOPTS = 0x28 sysIPV6_USE_MIN_MTU = 0x2a sysIPV6_RECVPATHMTU = 0x2b sysIPV6_PATHMTU = 0x2c sysIPV6_PKTINFO = 0x2e sysIPV6_HOPLIMIT = 0x2f sysIPV6_NEXTHOP = 0x30 sysIPV6_HOPOPTS = 0x31 sysIPV6_DSTOPTS = 0x32 sysIPV6_RTHDR = 0x33 sysIPV6_RECVTCLASS = 0x39 sysIPV6_AUTOFLOWLABEL = 0x3b sysIPV6_TCLASS = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_PREFER_TEMPADDR = 0x3f sysIPV6_BINDANY = 0x40 sysIPV6_MSFILTER = 0x4a sysMCAST_JOIN_GROUP = 0x50 sysMCAST_LEAVE_GROUP = 0x51 sysMCAST_JOIN_SOURCE_GROUP = 0x52 sysMCAST_LEAVE_SOURCE_GROUP = 0x53 sysMCAST_BLOCK_SOURCE = 0x54 sysMCAST_UNBLOCK_SOURCE = 0x55 sysIPV6_PORTRANGE_DEFAULT = 0x0 sysIPV6_PORTRANGE_HIGH = 0x1 sysIPV6_PORTRANGE_LOW = 0x2 sysSizeofSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x88 sysSizeofGroupSourceReq = 0x108 sysSizeofICMPv6Filter = 0x20 ) type sysSockaddrStorage struct { Len uint8 Family uint8 X__ss_pad1 [6]int8 X__ss_align int64 X__ss_pad2 [112]int8 } type sysSockaddrInet6 struct { Len uint8 Family uint8 Port uint16 
Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex uint32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Interface uint32 } type sysGroupReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Pad_cgo_0 [4]byte Group sysSockaddrStorage Source sysSockaddrStorage } type sysICMPv6Filter struct { Filt [8]uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/icmp_stub.go0000644061062106075000000000071012721405224024731 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build nacl plan9 package ipv6 type sysICMPv6Filter struct { } func (f *sysICMPv6Filter) accept(typ ICMPType) { } func (f *sysICMPv6Filter) block(typ ICMPType) { } func (f *sysICMPv6Filter) setAll(block bool) { } func (f *sysICMPv6Filter) willBlock(typ ICMPType) bool { return false } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_linux_386.go0000644061062106075000000000734412721405224025605 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_linux.go package ipv6 const ( sysIPV6_ADDRFORM = 0x1 sysIPV6_2292PKTINFO = 0x2 sysIPV6_2292HOPOPTS = 0x3 sysIPV6_2292DSTOPTS = 0x4 sysIPV6_2292RTHDR = 0x5 sysIPV6_2292PKTOPTIONS = 0x6 sysIPV6_CHECKSUM = 0x7 sysIPV6_2292HOPLIMIT = 0x8 sysIPV6_NEXTHOP = 0x9 sysIPV6_FLOWINFO = 0xb sysIPV6_UNICAST_HOPS = 0x10 sysIPV6_MULTICAST_IF = 0x11 sysIPV6_MULTICAST_HOPS = 0x12 sysIPV6_MULTICAST_LOOP = 0x13 sysIPV6_ADD_MEMBERSHIP = 0x14 sysIPV6_DROP_MEMBERSHIP = 0x15 sysMCAST_JOIN_GROUP = 0x2a sysMCAST_LEAVE_GROUP = 0x2d sysMCAST_JOIN_SOURCE_GROUP = 0x2e sysMCAST_LEAVE_SOURCE_GROUP = 0x2f sysMCAST_BLOCK_SOURCE = 0x2b sysMCAST_UNBLOCK_SOURCE = 0x2c sysMCAST_MSFILTER 
= 0x30 sysIPV6_ROUTER_ALERT = 0x16 sysIPV6_MTU_DISCOVER = 0x17 sysIPV6_MTU = 0x18 sysIPV6_RECVERR = 0x19 sysIPV6_V6ONLY = 0x1a sysIPV6_JOIN_ANYCAST = 0x1b sysIPV6_LEAVE_ANYCAST = 0x1c sysIPV6_FLOWLABEL_MGR = 0x20 sysIPV6_FLOWINFO_SEND = 0x21 sysIPV6_IPSEC_POLICY = 0x22 sysIPV6_XFRM_POLICY = 0x23 sysIPV6_RECVPKTINFO = 0x31 sysIPV6_PKTINFO = 0x32 sysIPV6_RECVHOPLIMIT = 0x33 sysIPV6_HOPLIMIT = 0x34 sysIPV6_RECVHOPOPTS = 0x35 sysIPV6_HOPOPTS = 0x36 sysIPV6_RTHDRDSTOPTS = 0x37 sysIPV6_RECVRTHDR = 0x38 sysIPV6_RTHDR = 0x39 sysIPV6_RECVDSTOPTS = 0x3a sysIPV6_DSTOPTS = 0x3b sysIPV6_RECVPATHMTU = 0x3c sysIPV6_PATHMTU = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_RECVTCLASS = 0x42 sysIPV6_TCLASS = 0x43 sysIPV6_ADDR_PREFERENCES = 0x48 sysIPV6_PREFER_SRC_TMP = 0x1 sysIPV6_PREFER_SRC_PUBLIC = 0x2 sysIPV6_PREFER_SRC_PUBTMP_DEFAULT = 0x100 sysIPV6_PREFER_SRC_COA = 0x4 sysIPV6_PREFER_SRC_HOME = 0x400 sysIPV6_PREFER_SRC_CGA = 0x8 sysIPV6_PREFER_SRC_NONCGA = 0x800 sysIPV6_MINHOPCOUNT = 0x49 sysIPV6_ORIGDSTADDR = 0x4a sysIPV6_RECVORIGDSTADDR = 0x4a sysIPV6_TRANSPARENT = 0x4b sysIPV6_UNICAST_IF = 0x4c sysICMPV6_FILTER = 0x1 sysICMPV6_FILTER_BLOCK = 0x1 sysICMPV6_FILTER_PASS = 0x2 sysICMPV6_FILTER_BLOCKOTHERS = 0x3 sysICMPV6_FILTER_PASSONLY = 0x4 sysSOL_SOCKET = 0x1 sysSO_ATTACH_FILTER = 0x1a sysSizeofKernelSockaddrStorage = 0x80 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6FlowlabelReq = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofGroupReq = 0x84 sysSizeofGroupSourceReq = 0x104 sysSizeofICMPv6Filter = 0x20 ) type sysKernelSockaddrStorage struct { Family uint16 X__data [126]int8 } type sysSockaddrInet6 struct { Family uint16 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex int32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } type sysIPv6FlowlabelReq struct { Dst [16]byte /* in6_addr */ Label uint32 Action uint8 Share uint8 Flags 
uint16 Expires uint16 Linger uint16 X__flr_pad uint32 } type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Ifindex int32 } type sysGroupReq struct { Interface uint32 Group sysKernelSockaddrStorage } type sysGroupSourceReq struct { Interface uint32 Group sysKernelSockaddrStorage Source sysKernelSockaddrStorage } type sysICMPv6Filter struct { Data [8]uint32 } type sysSockFProg struct { Len uint16 Pad_cgo_0 [2]byte Filter *sysSockFilter } type sysSockFilter struct { Code uint16 Jt uint8 Jf uint8 K uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/unicastsockopt_test.go0000644061062106075000000000452512721405224027064 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6_test import ( "net" "runtime" "testing" "golang.org/x/net/internal/iana" "golang.org/x/net/internal/nettest" "golang.org/x/net/ipv6" ) func TestConnUnicastSocketOptions(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } ln, err := net.Listen("tcp6", "[::1]:0") if err != nil { t.Fatal(err) } defer ln.Close() done := make(chan bool) go acceptor(t, ln, done) c, err := net.Dial("tcp6", ln.Addr().String()) if err != nil { t.Fatal(err) } defer c.Close() testUnicastSocketOptions(t, ipv6.NewConn(c)) <-done } var packetConnUnicastSocketOptionTests = []struct { net, proto, addr string }{ {"udp6", "", "[::1]:0"}, {"ip6", ":ipv6-icmp", "::1"}, } func TestPacketConnUnicastSocketOptions(t *testing.T) { switch runtime.GOOS { case "nacl", "plan9", "solaris", "windows": t.Skipf("not supported on %s", runtime.GOOS) } if !supportsIPv6 { t.Skip("ipv6 is not supported") } m, ok := nettest.SupportsRawIPSocket() for _, tt := range packetConnUnicastSocketOptionTests { if tt.net == "ip6" && !ok { t.Log(m) continue } c, err := 
net.ListenPacket(tt.net+tt.proto, tt.addr) if err != nil { t.Fatal(err) } defer c.Close() testUnicastSocketOptions(t, ipv6.NewPacketConn(c)) } } type testIPv6UnicastConn interface { TrafficClass() (int, error) SetTrafficClass(int) error HopLimit() (int, error) SetHopLimit(int) error } func testUnicastSocketOptions(t *testing.T, c testIPv6UnicastConn) { tclass := iana.DiffServCS0 | iana.NotECNTransport if err := c.SetTrafficClass(tclass); err != nil { switch runtime.GOOS { case "darwin": // older darwin kernels don't support IPV6_TCLASS option t.Logf("not supported on %s", runtime.GOOS) goto next } t.Fatal(err) } if v, err := c.TrafficClass(); err != nil { t.Fatal(err) } else if v != tclass { t.Fatalf("got %v; want %v", v, tclass) } next: hoplim := 255 if err := c.SetHopLimit(hoplim); err != nil { t.Fatal(err) } if v, err := c.HopLimit(); err != nil { t.Fatal(err) } else if v != hoplim { t.Fatalf("got %v; want %v", v, hoplim) } } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/gen.go0000644061062106075000000001174112721405224023523 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build ignore //go:generate go run gen.go // This program generates system adaptation constants and types, // internet protocol constants and tables by reading template files // and IANA protocol registries. 
package main import ( "bytes" "encoding/xml" "fmt" "go/format" "io" "io/ioutil" "net/http" "os" "os/exec" "runtime" "strconv" "strings" ) func main() { if err := genzsys(); err != nil { fmt.Fprintln(os.Stderr, err) os.Exit(1) } if err := geniana(); err != nil { fmt.Fprintln(os.Stderr, err) os.Exit(1) } } func genzsys() error { defs := "defs_" + runtime.GOOS + ".go" f, err := os.Open(defs) if err != nil { if os.IsNotExist(err) { return nil } return err } f.Close() cmd := exec.Command("go", "tool", "cgo", "-godefs", defs) b, err := cmd.Output() if err != nil { return err } // The ipv6 package still supports go1.2, and so we need to // take care of additional platforms in go1.3 and above for // working with go1.2. switch { case runtime.GOOS == "dragonfly" || runtime.GOOS == "solaris": b = bytes.Replace(b, []byte("package ipv6\n"), []byte("// +build "+runtime.GOOS+"\n\npackage ipv6\n"), 1) case runtime.GOOS == "linux" && (runtime.GOARCH == "arm64" || runtime.GOARCH == "mips64" || runtime.GOARCH == "mips64le" || runtime.GOARCH == "ppc64" || runtime.GOARCH == "ppc64le" || runtime.GOARCH == "s390x"): b = bytes.Replace(b, []byte("package ipv6\n"), []byte("// +build "+runtime.GOOS+","+runtime.GOARCH+"\n\npackage ipv6\n"), 1) } b, err = format.Source(b) if err != nil { return err } zsys := "zsys_" + runtime.GOOS + ".go" switch runtime.GOOS { case "freebsd", "linux": zsys = "zsys_" + runtime.GOOS + "_" + runtime.GOARCH + ".go" } if err := ioutil.WriteFile(zsys, b, 0644); err != nil { return err } return nil } var registries = []struct { url string parse func(io.Writer, io.Reader) error }{ { "http://www.iana.org/assignments/icmpv6-parameters/icmpv6-parameters.xml", parseICMPv6Parameters, }, } func geniana() error { var bb bytes.Buffer fmt.Fprintf(&bb, "// go generate gen.go\n") fmt.Fprintf(&bb, "// GENERATED BY THE COMMAND ABOVE; DO NOT EDIT\n\n") fmt.Fprintf(&bb, "package ipv6\n\n") for _, r := range registries { resp, err := http.Get(r.url) if err != nil { return err } defer 
resp.Body.Close() if resp.StatusCode != http.StatusOK { return fmt.Errorf("got HTTP status code %v for %v\n", resp.StatusCode, r.url) } if err := r.parse(&bb, resp.Body); err != nil { return err } fmt.Fprintf(&bb, "\n") } b, err := format.Source(bb.Bytes()) if err != nil { return err } if err := ioutil.WriteFile("iana.go", b, 0644); err != nil { return err } return nil } func parseICMPv6Parameters(w io.Writer, r io.Reader) error { dec := xml.NewDecoder(r) var icp icmpv6Parameters if err := dec.Decode(&icp); err != nil { return err } prs := icp.escape() fmt.Fprintf(w, "// %s, Updated: %s\n", icp.Title, icp.Updated) fmt.Fprintf(w, "const (\n") for _, pr := range prs { if pr.Name == "" { continue } fmt.Fprintf(w, "ICMPType%s ICMPType = %d", pr.Name, pr.Value) fmt.Fprintf(w, "// %s\n", pr.OrigName) } fmt.Fprintf(w, ")\n\n") fmt.Fprintf(w, "// %s, Updated: %s\n", icp.Title, icp.Updated) fmt.Fprintf(w, "var icmpTypes = map[ICMPType]string{\n") for _, pr := range prs { if pr.Name == "" { continue } fmt.Fprintf(w, "%d: %q,\n", pr.Value, strings.ToLower(pr.OrigName)) } fmt.Fprintf(w, "}\n") return nil } type icmpv6Parameters struct { XMLName xml.Name `xml:"registry"` Title string `xml:"title"` Updated string `xml:"updated"` Registries []struct { Title string `xml:"title"` Records []struct { Value string `xml:"value"` Name string `xml:"name"` } `xml:"record"` } `xml:"registry"` } type canonICMPv6ParamRecord struct { OrigName string Name string Value int } func (icp *icmpv6Parameters) escape() []canonICMPv6ParamRecord { id := -1 for i, r := range icp.Registries { if strings.Contains(r.Title, "Type") || strings.Contains(r.Title, "type") { id = i break } } if id < 0 { return nil } prs := make([]canonICMPv6ParamRecord, len(icp.Registries[id].Records)) sr := strings.NewReplacer( "Messages", "", "Message", "", "ICMP", "", "+", "P", "-", "", "/", "", ".", "", " ", "", ) for i, pr := range icp.Registries[id].Records { if strings.Contains(pr.Name, "Reserved") || 
strings.Contains(pr.Name, "Unassigned") || strings.Contains(pr.Name, "Deprecated") || strings.Contains(pr.Name, "Experiment") || strings.Contains(pr.Name, "experiment") { continue } ss := strings.Split(pr.Name, "\n") if len(ss) > 1 { prs[i].Name = strings.Join(ss, " ") } else { prs[i].Name = ss[0] } s := strings.TrimSpace(prs[i].Name) prs[i].OrigName = s prs[i].Name = sr.Replace(s) prs[i].Value, _ = strconv.Atoi(pr.Value) } return prs } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/zsys_dragonfly.go0000644061062106075000000000321412721405224026023 0ustar00stgraberdomain admins00000000000000// Created by cgo -godefs - DO NOT EDIT // cgo -godefs defs_dragonfly.go // +build dragonfly package ipv6 const ( sysIPV6_UNICAST_HOPS = 0x4 sysIPV6_MULTICAST_IF = 0x9 sysIPV6_MULTICAST_HOPS = 0xa sysIPV6_MULTICAST_LOOP = 0xb sysIPV6_JOIN_GROUP = 0xc sysIPV6_LEAVE_GROUP = 0xd sysIPV6_PORTRANGE = 0xe sysICMP6_FILTER = 0x12 sysIPV6_CHECKSUM = 0x1a sysIPV6_V6ONLY = 0x1b sysIPV6_IPSEC_POLICY = 0x1c sysIPV6_RTHDRDSTOPTS = 0x23 sysIPV6_RECVPKTINFO = 0x24 sysIPV6_RECVHOPLIMIT = 0x25 sysIPV6_RECVRTHDR = 0x26 sysIPV6_RECVHOPOPTS = 0x27 sysIPV6_RECVDSTOPTS = 0x28 sysIPV6_USE_MIN_MTU = 0x2a sysIPV6_RECVPATHMTU = 0x2b sysIPV6_PATHMTU = 0x2c sysIPV6_PKTINFO = 0x2e sysIPV6_HOPLIMIT = 0x2f sysIPV6_NEXTHOP = 0x30 sysIPV6_HOPOPTS = 0x31 sysIPV6_DSTOPTS = 0x32 sysIPV6_RTHDR = 0x33 sysIPV6_RECVTCLASS = 0x39 sysIPV6_AUTOFLOWLABEL = 0x3b sysIPV6_TCLASS = 0x3d sysIPV6_DONTFRAG = 0x3e sysIPV6_PREFER_TEMPADDR = 0x3f sysIPV6_PORTRANGE_DEFAULT = 0x0 sysIPV6_PORTRANGE_HIGH = 0x1 sysIPV6_PORTRANGE_LOW = 0x2 sysSizeofSockaddrInet6 = 0x1c sysSizeofInet6Pktinfo = 0x14 sysSizeofIPv6Mtuinfo = 0x20 sysSizeofIPv6Mreq = 0x14 sysSizeofICMPv6Filter = 0x20 ) type sysSockaddrInet6 struct { Len uint8 Family uint8 Port uint16 Flowinfo uint32 Addr [16]byte /* in6_addr */ Scope_id uint32 } type sysInet6Pktinfo struct { Addr [16]byte /* in6_addr */ Ifindex uint32 } type sysIPv6Mtuinfo struct { Addr sysSockaddrInet6 Mtu uint32 } 
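// The generated dragonfly constants in this file are wired into the
// portable API through the sockOpts table in sys_bsd.go. A hedged sketch
// (illustrative only, not part of the generated file) of the call path
// that ends at sysIPV6_UNICAST_HOPS:
//
//	c, err := net.Dial("udp6", "[::1]:9999")
//	if err != nil {
//		// handle error
//	}
//	p := ipv6.NewConn(c)
//	_ = p.SetHopLimit(64) // setsockopt(IPPROTO_IPV6, IPV6_UNICAST_HOPS, 64)
//	hl, _ := p.HopLimit()
//	_ = hl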
type sysIPv6Mreq struct { Multiaddr [16]byte /* in6_addr */ Interface uint32 } type sysICMPv6Filter struct { Filt [8]uint32 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt.go0000644061062106075000000000355412721405224024437 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6 // Sticky socket options const ( ssoTrafficClass = iota // header field for unicast packet, RFC 3542 ssoHopLimit // header field for unicast packet, RFC 3493 ssoMulticastInterface // outbound interface for multicast packet, RFC 3493 ssoMulticastHopLimit // header field for multicast packet, RFC 3493 ssoMulticastLoopback // loopback for multicast packet, RFC 3493 ssoReceiveTrafficClass // header field on received packet, RFC 3542 ssoReceiveHopLimit // header field on received packet, RFC 2292 or 3542 ssoReceivePacketInfo // inbound or outbound packet path, RFC 2292 or 3542 ssoReceivePathMTU // path mtu, RFC 3542 ssoPathMTU // path mtu, RFC 3542 ssoChecksum // packet checksum, RFC 2292 or 3542 ssoICMPFilter // icmp filter, RFC 2292 or 3542 ssoJoinGroup // any-source multicast, RFC 3493 ssoLeaveGroup // any-source multicast, RFC 3493 ssoJoinSourceGroup // source-specific multicast ssoLeaveSourceGroup // source-specific multicast ssoBlockSourceGroup // any-source or source-specific multicast ssoUnblockSourceGroup // any-source or source-specific multicast ssoMax ) // Sticky socket option value types const ( ssoTypeInt = iota + 1 ssoTypeInterface ssoTypeICMPFilter ssoTypeMTUInfo ssoTypeIPMreq ssoTypeGroupReq ssoTypeGroupSourceReq ) // A sockOpt represents a binding for a sticky socket option.
type sockOpt struct { level int // option level name int // option name, must be equal or greater than 1 typ int // option value type, must be equal or greater than 1 } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt_asmreq_windows.go0000644061062106075000000000107212721405224027552 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package ipv6 import ( "net" "os" "syscall" "unsafe" ) func setsockoptIPMreq(fd syscall.Handle, opt *sockOpt, ifi *net.Interface, grp net.IP) error { var mreq sysIPv6Mreq copy(mreq.Multiaddr[:], grp) if ifi != nil { mreq.setIfindex(ifi.Index) } return os.NewSyscallError("setsockopt", syscall.Setsockopt(fd, int32(opt.level), int32(opt.name), (*byte)(unsafe.Pointer(&mreq)), sysSizeofIPv6Mreq)) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/sockopt_asmreq_unix.go0000644061062106075000000000107612721405224027047 0ustar00stgraberdomain admins00000000000000// Copyright 2013 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // +build darwin dragonfly freebsd linux netbsd openbsd package ipv6 import ( "net" "os" "unsafe" ) func setsockoptIPMreq(fd int, opt *sockOpt, ifi *net.Interface, grp net.IP) error { var mreq sysIPv6Mreq copy(mreq.Multiaddr[:], grp) if ifi != nil { mreq.setIfindex(ifi.Index) } return os.NewSyscallError("setsockopt", setsockopt(fd, opt.level, opt.name, unsafe.Pointer(&mreq), sysSizeofIPv6Mreq)) } lxd-2.0.2/dist/src/golang.org/x/net/ipv6/defs_netbsd.go0000644061062106075000000000421712721405224025232 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
// +build ignore // +godefs map struct_in6_addr [16]byte /* in6_addr */ package ipv6 /* #include #include #include #include */ import "C" const ( sysIPV6_UNICAST_HOPS = C.IPV6_UNICAST_HOPS sysIPV6_MULTICAST_IF = C.IPV6_MULTICAST_IF sysIPV6_MULTICAST_HOPS = C.IPV6_MULTICAST_HOPS sysIPV6_MULTICAST_LOOP = C.IPV6_MULTICAST_LOOP sysIPV6_JOIN_GROUP = C.IPV6_JOIN_GROUP sysIPV6_LEAVE_GROUP = C.IPV6_LEAVE_GROUP sysIPV6_PORTRANGE = C.IPV6_PORTRANGE sysICMP6_FILTER = C.ICMP6_FILTER sysIPV6_CHECKSUM = C.IPV6_CHECKSUM sysIPV6_V6ONLY = C.IPV6_V6ONLY sysIPV6_IPSEC_POLICY = C.IPV6_IPSEC_POLICY sysIPV6_RTHDRDSTOPTS = C.IPV6_RTHDRDSTOPTS sysIPV6_RECVPKTINFO = C.IPV6_RECVPKTINFO sysIPV6_RECVHOPLIMIT = C.IPV6_RECVHOPLIMIT sysIPV6_RECVRTHDR = C.IPV6_RECVRTHDR sysIPV6_RECVHOPOPTS = C.IPV6_RECVHOPOPTS sysIPV6_RECVDSTOPTS = C.IPV6_RECVDSTOPTS sysIPV6_USE_MIN_MTU = C.IPV6_USE_MIN_MTU sysIPV6_RECVPATHMTU = C.IPV6_RECVPATHMTU sysIPV6_PATHMTU = C.IPV6_PATHMTU sysIPV6_PKTINFO = C.IPV6_PKTINFO sysIPV6_HOPLIMIT = C.IPV6_HOPLIMIT sysIPV6_NEXTHOP = C.IPV6_NEXTHOP sysIPV6_HOPOPTS = C.IPV6_HOPOPTS sysIPV6_DSTOPTS = C.IPV6_DSTOPTS sysIPV6_RTHDR = C.IPV6_RTHDR sysIPV6_RECVTCLASS = C.IPV6_RECVTCLASS sysIPV6_TCLASS = C.IPV6_TCLASS sysIPV6_DONTFRAG = C.IPV6_DONTFRAG sysIPV6_PORTRANGE_DEFAULT = C.IPV6_PORTRANGE_DEFAULT sysIPV6_PORTRANGE_HIGH = C.IPV6_PORTRANGE_HIGH sysIPV6_PORTRANGE_LOW = C.IPV6_PORTRANGE_LOW sysSizeofSockaddrInet6 = C.sizeof_struct_sockaddr_in6 sysSizeofInet6Pktinfo = C.sizeof_struct_in6_pktinfo sysSizeofIPv6Mtuinfo = C.sizeof_struct_ip6_mtuinfo sysSizeofIPv6Mreq = C.sizeof_struct_ipv6_mreq sysSizeofICMPv6Filter = C.sizeof_struct_icmp6_filter ) type sysSockaddrInet6 C.struct_sockaddr_in6 type sysInet6Pktinfo C.struct_in6_pktinfo type sysIPv6Mtuinfo C.struct_ip6_mtuinfo type sysIPv6Mreq C.struct_ipv6_mreq type sysICMPv6Filter C.struct_icmp6_filter lxd-2.0.2/dist/src/golang.org/x/net/webdav/0000755061062106075000000000000012721405224023003 5ustar00stgraberdomain 
admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/webdav/lock.go0000644061062106075000000002752512721405224024275 0ustar00stgraberdomain admins00000000000000// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package webdav import ( "container/heap" "errors" "strconv" "strings" "sync" "time" ) var ( // ErrConfirmationFailed is returned by a LockSystem's Confirm method. ErrConfirmationFailed = errors.New("webdav: confirmation failed") // ErrForbidden is returned by a LockSystem's Unlock method. ErrForbidden = errors.New("webdav: forbidden") // ErrLocked is returned by a LockSystem's Create, Refresh and Unlock methods. ErrLocked = errors.New("webdav: locked") // ErrNoSuchLock is returned by a LockSystem's Refresh and Unlock methods. ErrNoSuchLock = errors.New("webdav: no such lock") ) // Condition can match a WebDAV resource, based on a token or ETag. // Exactly one of Token and ETag should be non-empty. type Condition struct { Not bool Token string ETag string } // LockSystem manages access to a collection of named resources. The elements // in a lock name are separated by slash ('/', U+002F) characters, regardless // of host operating system convention. type LockSystem interface { // Confirm confirms that the caller can claim all of the locks specified by // the given conditions, and that holding the union of all of those locks // gives exclusive access to all of the named resources. Up to two resources // can be named. Empty names are ignored. // // Exactly one of release and err will be non-nil. If release is non-nil, // all of the requested locks are held until release is called. Calling // release does not unlock the lock, in the WebDAV UNLOCK sense, but once // Confirm has confirmed that a lock claim is valid, that lock cannot be // Confirmed again until it has been released. 
// // If Confirm returns ErrConfirmationFailed then the Handler will continue // to try any other set of locks presented (a WebDAV HTTP request can // present more than one set of locks). If it returns any other non-nil // error, the Handler will write a "500 Internal Server Error" HTTP status. Confirm(now time.Time, name0, name1 string, conditions ...Condition) (release func(), err error) // Create creates a lock with the given depth, duration, owner and root // (name). The depth will either be negative (meaning infinite) or zero. // // If Create returns ErrLocked then the Handler will write a "423 Locked" // HTTP status. If it returns any other non-nil error, the Handler will // write a "500 Internal Server Error" HTTP status. // // See http://www.webdav.org/specs/rfc4918.html#rfc.section.9.10.6 for // when to use each error. // // The token returned identifies the created lock. It should be an absolute // URI as defined by RFC 3986, Section 4.3. In particular, it should not // contain whitespace. Create(now time.Time, details LockDetails) (token string, err error) // Refresh refreshes the lock with the given token. // // If Refresh returns ErrLocked then the Handler will write a "423 Locked" // HTTP Status. If Refresh returns ErrNoSuchLock then the Handler will write // a "412 Precondition Failed" HTTP Status. If it returns any other non-nil // error, the Handler will write a "500 Internal Server Error" HTTP status. // // See http://www.webdav.org/specs/rfc4918.html#rfc.section.9.10.6 for // when to use each error. Refresh(now time.Time, token string, duration time.Duration) (LockDetails, error) // Unlock unlocks the lock with the given token. // // If Unlock returns ErrForbidden then the Handler will write a "403 // Forbidden" HTTP Status. If Unlock returns ErrLocked then the Handler // will write a "423 Locked" HTTP status. If Unlock returns ErrNoSuchLock // then the Handler will write a "409 Conflict" HTTP Status. 
If it returns // any other non-nil error, the Handler will write a "500 Internal Server // Error" HTTP status. // // See http://www.webdav.org/specs/rfc4918.html#rfc.section.9.11.1 for // when to use each error. Unlock(now time.Time, token string) error } // LockDetails are a lock's metadata. type LockDetails struct { // Root is the root resource name being locked. For a zero-depth lock, the // root is the only resource being locked. Root string // Duration is the lock timeout. A negative duration means infinite. Duration time.Duration // OwnerXML is the verbatim XML given in a LOCK HTTP request. // // TODO: does the "verbatim" nature play well with XML namespaces? // Does the OwnerXML field need to have more structure? See // https://codereview.appspot.com/175140043/#msg2 OwnerXML string // ZeroDepth is whether the lock has zero depth. If it does not have zero // depth, it has infinite depth. ZeroDepth bool } // NewMemLS returns a new in-memory LockSystem. func NewMemLS() LockSystem { return &memLS{ byName: make(map[string]*memLSNode), byToken: make(map[string]*memLSNode), gen: uint64(time.Now().Unix()), } } type memLS struct { mu sync.Mutex byName map[string]*memLSNode byToken map[string]*memLSNode gen uint64 // byExpiry only contains those nodes whose LockDetails have a finite // Duration and are yet to expire. 
byExpiry byExpiry } func (m *memLS) nextToken() string { m.gen++ return strconv.FormatUint(m.gen, 10) } func (m *memLS) collectExpiredNodes(now time.Time) { for len(m.byExpiry) > 0 { if now.Before(m.byExpiry[0].expiry) { break } m.remove(m.byExpiry[0]) } } func (m *memLS) Confirm(now time.Time, name0, name1 string, conditions ...Condition) (func(), error) { m.mu.Lock() defer m.mu.Unlock() m.collectExpiredNodes(now) var n0, n1 *memLSNode if name0 != "" { if n0 = m.lookup(slashClean(name0), conditions...); n0 == nil { return nil, ErrConfirmationFailed } } if name1 != "" { if n1 = m.lookup(slashClean(name1), conditions...); n1 == nil { return nil, ErrConfirmationFailed } } // Don't hold the same node twice. if n1 == n0 { n1 = nil } if n0 != nil { m.hold(n0) } if n1 != nil { m.hold(n1) } return func() { m.mu.Lock() defer m.mu.Unlock() if n1 != nil { m.unhold(n1) } if n0 != nil { m.unhold(n0) } }, nil } // lookup returns the node n that locks the named resource, provided that n // matches at least one of the given conditions and that lock isn't held by // another party. Otherwise, it returns nil. // // n may be a parent of the named resource, if n is an infinite depth lock. func (m *memLS) lookup(name string, conditions ...Condition) (n *memLSNode) { // TODO: support Condition.Not and Condition.ETag. 
for _, c := range conditions { n = m.byToken[c.Token] if n == nil || n.held { continue } if name == n.details.Root { return n } if n.details.ZeroDepth { continue } if n.details.Root == "/" || strings.HasPrefix(name, n.details.Root+"/") { return n } } return nil } func (m *memLS) hold(n *memLSNode) { if n.held { panic("webdav: memLS inconsistent held state") } n.held = true if n.details.Duration >= 0 && n.byExpiryIndex >= 0 { heap.Remove(&m.byExpiry, n.byExpiryIndex) } } func (m *memLS) unhold(n *memLSNode) { if !n.held { panic("webdav: memLS inconsistent held state") } n.held = false if n.details.Duration >= 0 { heap.Push(&m.byExpiry, n) } } func (m *memLS) Create(now time.Time, details LockDetails) (string, error) { m.mu.Lock() defer m.mu.Unlock() m.collectExpiredNodes(now) details.Root = slashClean(details.Root) if !m.canCreate(details.Root, details.ZeroDepth) { return "", ErrLocked } n := m.create(details.Root) n.token = m.nextToken() m.byToken[n.token] = n n.details = details if n.details.Duration >= 0 { n.expiry = now.Add(n.details.Duration) heap.Push(&m.byExpiry, n) } return n.token, nil } func (m *memLS) Refresh(now time.Time, token string, duration time.Duration) (LockDetails, error) { m.mu.Lock() defer m.mu.Unlock() m.collectExpiredNodes(now) n := m.byToken[token] if n == nil { return LockDetails{}, ErrNoSuchLock } if n.held { return LockDetails{}, ErrLocked } if n.byExpiryIndex >= 0 { heap.Remove(&m.byExpiry, n.byExpiryIndex) } n.details.Duration = duration if n.details.Duration >= 0 { n.expiry = now.Add(n.details.Duration) heap.Push(&m.byExpiry, n) } return n.details, nil } func (m *memLS) Unlock(now time.Time, token string) error { m.mu.Lock() defer m.mu.Unlock() m.collectExpiredNodes(now) n := m.byToken[token] if n == nil { return ErrNoSuchLock } if n.held { return ErrLocked } m.remove(n) return nil } func (m *memLS) canCreate(name string, zeroDepth bool) bool { return walkToRoot(name, func(name0 string, first bool) bool { n := m.byName[name0] if n == 
nil { return true } if first { if n.token != "" { // The target node is already locked. return false } if !zeroDepth { // The requested lock depth is infinite, and the fact that n exists // (n != nil) means that a descendent of the target node is locked. return false } } else if n.token != "" && !n.details.ZeroDepth { // An ancestor of the target node is locked with infinite depth. return false } return true }) } func (m *memLS) create(name string) (ret *memLSNode) { walkToRoot(name, func(name0 string, first bool) bool { n := m.byName[name0] if n == nil { n = &memLSNode{ details: LockDetails{ Root: name0, }, byExpiryIndex: -1, } m.byName[name0] = n } n.refCount++ if first { ret = n } return true }) return ret } func (m *memLS) remove(n *memLSNode) { delete(m.byToken, n.token) n.token = "" walkToRoot(n.details.Root, func(name0 string, first bool) bool { x := m.byName[name0] x.refCount-- if x.refCount == 0 { delete(m.byName, name0) } return true }) if n.byExpiryIndex >= 0 { heap.Remove(&m.byExpiry, n.byExpiryIndex) } } func walkToRoot(name string, f func(name0 string, first bool) bool) bool { for first := true; ; first = false { if !f(name, first) { return false } if name == "/" { break } name = name[:strings.LastIndex(name, "/")] if name == "" { name = "/" } } return true } type memLSNode struct { // details are the lock metadata. Even if this node's name is not explicitly locked, // details.Root will still equal the node's name. details LockDetails // token is the unique identifier for this node's lock. An empty token means that // this node is not explicitly locked. token string // refCount is the number of self-or-descendent nodes that are explicitly locked. refCount int // expiry is when this node's lock expires. expiry time.Time // byExpiryIndex is the index of this node in memLS.byExpiry. It is -1 // if this node does not expire, or has expired. byExpiryIndex int // held is whether this node's lock is actively held by a Confirm call. 
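The ancestor walk that `create`, `remove` and `canCreate` all share is `walkToRoot`, which visits the name itself and then every parent up to "/". A self-contained sketch of the same loop (a hypothetical mirror for illustration, not the package's API) shows the visit order:

```go
package main

import (
	"fmt"
	"strings"
)

// walkToRootSketch mirrors memLS's walkToRoot: it calls f on name and then on
// each ancestor up to "/", with first==true only for name itself. It stops
// early (and returns false) as soon as f returns false.
func walkToRootSketch(name string, f func(name0 string, first bool) bool) bool {
	for first := true; ; first = false {
		if !f(name, first) {
			return false
		}
		if name == "/" {
			break
		}
		name = name[:strings.LastIndex(name, "/")]
		if name == "" {
			name = "/"
		}
	}
	return true
}

func main() {
	var visited []string
	walkToRootSketch("/a/b/c", func(name0 string, first bool) bool {
		visited = append(visited, name0)
		return true
	})
	fmt.Println(visited) // [/a/b/c /a/b /a /]
}
```

This bottom-up order is what lets `canCreate` check the target node first (`first == true`) and then every ancestor for a conflicting infinite-depth lock.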
held bool
}

type byExpiry []*memLSNode

func (b *byExpiry) Len() int {
	return len(*b)
}

func (b *byExpiry) Less(i, j int) bool {
	return (*b)[i].expiry.Before((*b)[j].expiry)
}

func (b *byExpiry) Swap(i, j int) {
	(*b)[i], (*b)[j] = (*b)[j], (*b)[i]
	(*b)[i].byExpiryIndex = i
	(*b)[j].byExpiryIndex = j
}

func (b *byExpiry) Push(x interface{}) {
	n := x.(*memLSNode)
	n.byExpiryIndex = len(*b)
	*b = append(*b, n)
}

func (b *byExpiry) Pop() interface{} {
	i := len(*b) - 1
	n := (*b)[i]
	(*b)[i] = nil
	n.byExpiryIndex = -1
	*b = (*b)[:i]
	return n
}

const infiniteTimeout = -1

// parseTimeout parses the Timeout HTTP header, as per section 10.7. If s is
// empty, an infiniteTimeout is returned.
func parseTimeout(s string) (time.Duration, error) {
	if s == "" {
		return infiniteTimeout, nil
	}
	if i := strings.IndexByte(s, ','); i >= 0 {
		s = s[:i]
	}
	s = strings.TrimSpace(s)
	if s == "Infinite" {
		return infiniteTimeout, nil
	}
	const pre = "Second-"
	if !strings.HasPrefix(s, pre) {
		return 0, errInvalidTimeout
	}
	s = s[len(pre):]
	if s == "" || s[0] < '0' || '9' < s[0] {
		return 0, errInvalidTimeout
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil || 1<<32-1 < n {
		return 0, errInvalidTimeout
	}
	return time.Duration(n) * time.Second, nil
}
lxd-2.0.2/dist/src/golang.org/x/net/webdav/webdav.go
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package webdav provides a WebDAV server implementation.
package webdav // import "golang.org/x/net/webdav"

import (
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"path"
	"strings"
	"time"
)

type Handler struct {
	// Prefix is the URL path prefix to strip from WebDAV resource paths.
	Prefix string
	// FileSystem is the virtual file system.
	FileSystem FileSystem
	// LockSystem is the lock management system.
	LockSystem LockSystem
	// Logger is an optional error logger.
If non-nil, it will be called // for all HTTP requests. Logger func(*http.Request, error) } func (h *Handler) stripPrefix(p string) (string, int, error) { if h.Prefix == "" { return p, http.StatusOK, nil } if r := strings.TrimPrefix(p, h.Prefix); len(r) < len(p) { return r, http.StatusOK, nil } return p, http.StatusNotFound, errPrefixMismatch } func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) { status, err := http.StatusBadRequest, errUnsupportedMethod if h.FileSystem == nil { status, err = http.StatusInternalServerError, errNoFileSystem } else if h.LockSystem == nil { status, err = http.StatusInternalServerError, errNoLockSystem } else { switch r.Method { case "OPTIONS": status, err = h.handleOptions(w, r) case "GET", "HEAD", "POST": status, err = h.handleGetHeadPost(w, r) case "DELETE": status, err = h.handleDelete(w, r) case "PUT": status, err = h.handlePut(w, r) case "MKCOL": status, err = h.handleMkcol(w, r) case "COPY", "MOVE": status, err = h.handleCopyMove(w, r) case "LOCK": status, err = h.handleLock(w, r) case "UNLOCK": status, err = h.handleUnlock(w, r) case "PROPFIND": status, err = h.handlePropfind(w, r) case "PROPPATCH": status, err = h.handleProppatch(w, r) } } if status != 0 { w.WriteHeader(status) if status != http.StatusNoContent { w.Write([]byte(StatusText(status))) } } if h.Logger != nil { h.Logger(r, err) } } func (h *Handler) lock(now time.Time, root string) (token string, status int, err error) { token, err = h.LockSystem.Create(now, LockDetails{ Root: root, Duration: infiniteTimeout, ZeroDepth: true, }) if err != nil { if err == ErrLocked { return "", StatusLocked, err } return "", http.StatusInternalServerError, err } return token, 0, nil } func (h *Handler) confirmLocks(r *http.Request, src, dst string) (release func(), status int, err error) { hdr := r.Header.Get("If") if hdr == "" { // An empty If header means that the client hasn't previously created locks. 
// Even if this client doesn't care about locks, we still need to check that // the resources aren't locked by another client, so we create temporary // locks that would conflict with another client's locks. These temporary // locks are unlocked at the end of the HTTP request. now, srcToken, dstToken := time.Now(), "", "" if src != "" { srcToken, status, err = h.lock(now, src) if err != nil { return nil, status, err } } if dst != "" { dstToken, status, err = h.lock(now, dst) if err != nil { if srcToken != "" { h.LockSystem.Unlock(now, srcToken) } return nil, status, err } } return func() { if dstToken != "" { h.LockSystem.Unlock(now, dstToken) } if srcToken != "" { h.LockSystem.Unlock(now, srcToken) } }, 0, nil } ih, ok := parseIfHeader(hdr) if !ok { return nil, http.StatusBadRequest, errInvalidIfHeader } // ih is a disjunction (OR) of ifLists, so any ifList will do. for _, l := range ih.lists { lsrc := l.resourceTag if lsrc == "" { lsrc = src } else { u, err := url.Parse(lsrc) if err != nil { continue } if u.Host != r.Host { continue } lsrc = u.Path } release, err = h.LockSystem.Confirm(time.Now(), lsrc, dst, l.conditions...) if err == ErrConfirmationFailed { continue } if err != nil { return nil, http.StatusInternalServerError, err } return release, 0, nil } // Section 10.4.1 says that "If this header is evaluated and all state lists // fail, then the request must fail with a 412 (Precondition Failed) status." // We follow the spec even though the cond_put_corrupt_token test case from // the litmus test warns on seeing a 412 instead of a 423 (Locked). 
return nil, http.StatusPreconditionFailed, ErrLocked } func (h *Handler) handleOptions(w http.ResponseWriter, r *http.Request) (status int, err error) { reqPath, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } allow := "OPTIONS, LOCK, PUT, MKCOL" if fi, err := h.FileSystem.Stat(reqPath); err == nil { if fi.IsDir() { allow = "OPTIONS, LOCK, DELETE, PROPPATCH, COPY, MOVE, UNLOCK, PROPFIND" } else { allow = "OPTIONS, LOCK, GET, HEAD, POST, DELETE, PROPPATCH, COPY, MOVE, UNLOCK, PROPFIND, PUT" } } w.Header().Set("Allow", allow) // http://www.webdav.org/specs/rfc4918.html#dav.compliance.classes w.Header().Set("DAV", "1, 2") // http://msdn.microsoft.com/en-au/library/cc250217.aspx w.Header().Set("MS-Author-Via", "DAV") return 0, nil } func (h *Handler) handleGetHeadPost(w http.ResponseWriter, r *http.Request) (status int, err error) { reqPath, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } // TODO: check locks for read-only access?? f, err := h.FileSystem.OpenFile(reqPath, os.O_RDONLY, 0) if err != nil { return http.StatusNotFound, err } defer f.Close() fi, err := f.Stat() if err != nil { return http.StatusNotFound, err } if fi.IsDir() { return http.StatusMethodNotAllowed, nil } etag, err := findETag(h.FileSystem, h.LockSystem, reqPath, fi) if err != nil { return http.StatusInternalServerError, err } w.Header().Set("ETag", etag) // Let ServeContent determine the Content-Type header. http.ServeContent(w, r, reqPath, fi.ModTime(), f) return 0, nil } func (h *Handler) handleDelete(w http.ResponseWriter, r *http.Request) (status int, err error) { reqPath, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } release, status, err := h.confirmLocks(r, reqPath, "") if err != nil { return status, err } defer release() // TODO: return MultiStatus where appropriate. // "godoc os RemoveAll" says that "If the path does not exist, RemoveAll // returns nil (no error)." 
WebDAV semantics are that it should return a // "404 Not Found". We therefore have to Stat before we RemoveAll. if _, err := h.FileSystem.Stat(reqPath); err != nil { if os.IsNotExist(err) { return http.StatusNotFound, err } return http.StatusMethodNotAllowed, err } if err := h.FileSystem.RemoveAll(reqPath); err != nil { return http.StatusMethodNotAllowed, err } return http.StatusNoContent, nil } func (h *Handler) handlePut(w http.ResponseWriter, r *http.Request) (status int, err error) { reqPath, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } release, status, err := h.confirmLocks(r, reqPath, "") if err != nil { return status, err } defer release() // TODO(rost): Support the If-Match, If-None-Match headers? See bradfitz' // comments in http.checkEtag. f, err := h.FileSystem.OpenFile(reqPath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { return http.StatusNotFound, err } _, copyErr := io.Copy(f, r.Body) fi, statErr := f.Stat() closeErr := f.Close() // TODO(rost): Returning 405 Method Not Allowed might not be appropriate. 
if copyErr != nil { return http.StatusMethodNotAllowed, copyErr } if statErr != nil { return http.StatusMethodNotAllowed, statErr } if closeErr != nil { return http.StatusMethodNotAllowed, closeErr } etag, err := findETag(h.FileSystem, h.LockSystem, reqPath, fi) if err != nil { return http.StatusInternalServerError, err } w.Header().Set("ETag", etag) return http.StatusCreated, nil } func (h *Handler) handleMkcol(w http.ResponseWriter, r *http.Request) (status int, err error) { reqPath, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } release, status, err := h.confirmLocks(r, reqPath, "") if err != nil { return status, err } defer release() if r.ContentLength > 0 { return http.StatusUnsupportedMediaType, nil } if err := h.FileSystem.Mkdir(reqPath, 0777); err != nil { if os.IsNotExist(err) { return http.StatusConflict, err } return http.StatusMethodNotAllowed, err } return http.StatusCreated, nil } func (h *Handler) handleCopyMove(w http.ResponseWriter, r *http.Request) (status int, err error) { hdr := r.Header.Get("Destination") if hdr == "" { return http.StatusBadRequest, errInvalidDestination } u, err := url.Parse(hdr) if err != nil { return http.StatusBadRequest, errInvalidDestination } if u.Host != r.Host { return http.StatusBadGateway, errInvalidDestination } src, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } dst, status, err := h.stripPrefix(u.Path) if err != nil { return status, err } if dst == "" { return http.StatusBadGateway, errInvalidDestination } if dst == src { return http.StatusForbidden, errDestinationEqualsSource } if r.Method == "COPY" { // Section 7.5.1 says that a COPY only needs to lock the destination, // not both destination and source. Strictly speaking, this is racy, // even though a COPY doesn't modify the source, if a concurrent // operation modifies the source. However, the litmus test explicitly // checks that COPYing a locked-by-another source is OK. 
release, status, err := h.confirmLocks(r, "", dst) if err != nil { return status, err } defer release() // Section 9.8.3 says that "The COPY method on a collection without a Depth // header must act as if a Depth header with value "infinity" was included". depth := infiniteDepth if hdr := r.Header.Get("Depth"); hdr != "" { depth = parseDepth(hdr) if depth != 0 && depth != infiniteDepth { // Section 9.8.3 says that "A client may submit a Depth header on a // COPY on a collection with a value of "0" or "infinity"." return http.StatusBadRequest, errInvalidDepth } } return copyFiles(h.FileSystem, src, dst, r.Header.Get("Overwrite") != "F", depth, 0) } release, status, err := h.confirmLocks(r, src, dst) if err != nil { return status, err } defer release() // Section 9.9.2 says that "The MOVE method on a collection must act as if // a "Depth: infinity" header was used on it. A client must not submit a // Depth header on a MOVE on a collection with any value but "infinity"." if hdr := r.Header.Get("Depth"); hdr != "" { if parseDepth(hdr) != infiniteDepth { return http.StatusBadRequest, errInvalidDepth } } return moveFiles(h.FileSystem, src, dst, r.Header.Get("Overwrite") == "T") } func (h *Handler) handleLock(w http.ResponseWriter, r *http.Request) (retStatus int, retErr error) { duration, err := parseTimeout(r.Header.Get("Timeout")) if err != nil { return http.StatusBadRequest, err } li, status, err := readLockInfo(r.Body) if err != nil { return status, err } token, ld, now, created := "", LockDetails{}, time.Now(), false if li == (lockInfo{}) { // An empty lockInfo means to refresh the lock. 
ih, ok := parseIfHeader(r.Header.Get("If")) if !ok { return http.StatusBadRequest, errInvalidIfHeader } if len(ih.lists) == 1 && len(ih.lists[0].conditions) == 1 { token = ih.lists[0].conditions[0].Token } if token == "" { return http.StatusBadRequest, errInvalidLockToken } ld, err = h.LockSystem.Refresh(now, token, duration) if err != nil { if err == ErrNoSuchLock { return http.StatusPreconditionFailed, err } return http.StatusInternalServerError, err } } else { // Section 9.10.3 says that "If no Depth header is submitted on a LOCK request, // then the request MUST act as if a "Depth:infinity" had been submitted." depth := infiniteDepth if hdr := r.Header.Get("Depth"); hdr != "" { depth = parseDepth(hdr) if depth != 0 && depth != infiniteDepth { // Section 9.10.3 says that "Values other than 0 or infinity must not be // used with the Depth header on a LOCK method". return http.StatusBadRequest, errInvalidDepth } } reqPath, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } ld = LockDetails{ Root: reqPath, Duration: duration, OwnerXML: li.Owner.InnerXML, ZeroDepth: depth == 0, } token, err = h.LockSystem.Create(now, ld) if err != nil { if err == ErrLocked { return StatusLocked, err } return http.StatusInternalServerError, err } defer func() { if retErr != nil { h.LockSystem.Unlock(now, token) } }() // Create the resource if it didn't previously exist. if _, err := h.FileSystem.Stat(reqPath); err != nil { f, err := h.FileSystem.OpenFile(reqPath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { // TODO: detect missing intermediate dirs and return http.StatusConflict? return http.StatusInternalServerError, err } f.Close() created = true } // http://www.webdav.org/specs/rfc4918.html#HEADER_Lock-Token says that the // Lock-Token value is a Coded-URL. We add angle brackets. 
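The Lock-Token round-trip is symmetric: `handleLock` adds the angle brackets of a Coded-URL, and `handleUnlock` strips them again with a length-and-bracket check. A minimal sketch of the stripping side (hypothetical helper name, mirroring the check in `handleUnlock`):

```go
package main

import (
	"errors"
	"fmt"
)

var errInvalidLockToken = errors.New("webdav: invalid lock token")

// parseCodedURL strips the angle brackets from a Lock-Token Coded-URL, as
// handleUnlock does. "<token>" must be at least two bytes and bracketed.
func parseCodedURL(t string) (string, error) {
	if len(t) < 2 || t[0] != '<' || t[len(t)-1] != '>' {
		return "", errInvalidLockToken
	}
	return t[1 : len(t)-1], nil
}

func main() {
	tok, err := parseCodedURL("<opaquelocktoken:1234>")
	fmt.Println(tok, err) // opaquelocktoken:1234 <nil>
}
```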
w.Header().Set("Lock-Token", "<"+token+">") } w.Header().Set("Content-Type", "application/xml; charset=utf-8") if created { // This is "w.WriteHeader(http.StatusCreated)" and not "return // http.StatusCreated, nil" because we write our own (XML) response to w // and Handler.ServeHTTP would otherwise write "Created". w.WriteHeader(http.StatusCreated) } writeLockInfo(w, token, ld) return 0, nil } func (h *Handler) handleUnlock(w http.ResponseWriter, r *http.Request) (status int, err error) { // http://www.webdav.org/specs/rfc4918.html#HEADER_Lock-Token says that the // Lock-Token value is a Coded-URL. We strip its angle brackets. t := r.Header.Get("Lock-Token") if len(t) < 2 || t[0] != '<' || t[len(t)-1] != '>' { return http.StatusBadRequest, errInvalidLockToken } t = t[1 : len(t)-1] switch err = h.LockSystem.Unlock(time.Now(), t); err { case nil: return http.StatusNoContent, err case ErrForbidden: return http.StatusForbidden, err case ErrLocked: return StatusLocked, err case ErrNoSuchLock: return http.StatusConflict, err default: return http.StatusInternalServerError, err } } func (h *Handler) handlePropfind(w http.ResponseWriter, r *http.Request) (status int, err error) { reqPath, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } fi, err := h.FileSystem.Stat(reqPath) if err != nil { if os.IsNotExist(err) { return http.StatusNotFound, err } return http.StatusMethodNotAllowed, err } depth := infiniteDepth if hdr := r.Header.Get("Depth"); hdr != "" { depth = parseDepth(hdr) if depth == invalidDepth { return http.StatusBadRequest, errInvalidDepth } } pf, status, err := readPropfind(r.Body) if err != nil { return status, err } mw := multistatusWriter{w: w} walkFn := func(reqPath string, info os.FileInfo, err error) error { if err != nil { return err } var pstats []Propstat if pf.Propname != nil { pnames, err := propnames(h.FileSystem, h.LockSystem, reqPath) if err != nil { return err } pstat := Propstat{Status: http.StatusOK} for _, xmlname 
:= range pnames { pstat.Props = append(pstat.Props, Property{XMLName: xmlname}) } pstats = append(pstats, pstat) } else if pf.Allprop != nil { pstats, err = allprop(h.FileSystem, h.LockSystem, reqPath, pf.Prop) } else { pstats, err = props(h.FileSystem, h.LockSystem, reqPath, pf.Prop) } if err != nil { return err } return mw.write(makePropstatResponse(path.Join(h.Prefix, reqPath), pstats)) } walkErr := walkFS(h.FileSystem, depth, reqPath, fi, walkFn) closeErr := mw.close() if walkErr != nil { return http.StatusInternalServerError, walkErr } if closeErr != nil { return http.StatusInternalServerError, closeErr } return 0, nil } func (h *Handler) handleProppatch(w http.ResponseWriter, r *http.Request) (status int, err error) { reqPath, status, err := h.stripPrefix(r.URL.Path) if err != nil { return status, err } release, status, err := h.confirmLocks(r, reqPath, "") if err != nil { return status, err } defer release() if _, err := h.FileSystem.Stat(reqPath); err != nil { if os.IsNotExist(err) { return http.StatusNotFound, err } return http.StatusMethodNotAllowed, err } patches, status, err := readProppatch(r.Body) if err != nil { return status, err } pstats, err := patch(h.FileSystem, h.LockSystem, reqPath, patches) if err != nil { return http.StatusInternalServerError, err } mw := multistatusWriter{w: w} writeErr := mw.write(makePropstatResponse(r.URL.Path, pstats)) closeErr := mw.close() if writeErr != nil { return http.StatusInternalServerError, writeErr } if closeErr != nil { return http.StatusInternalServerError, closeErr } return 0, nil } func makePropstatResponse(href string, pstats []Propstat) *response { resp := response{ Href: []string{(&url.URL{Path: href}).EscapedPath()}, Propstat: make([]propstat, 0, len(pstats)), } for _, p := range pstats { var xmlErr *xmlError if p.XMLError != "" { xmlErr = &xmlError{InnerXML: []byte(p.XMLError)} } resp.Propstat = append(resp.Propstat, propstat{ Status: fmt.Sprintf("HTTP/1.1 %d %s", p.Status, StatusText(p.Status)), 
Prop: p.Props, ResponseDescription: p.ResponseDescription, Error: xmlErr, }) } return &resp } const ( infiniteDepth = -1 invalidDepth = -2 ) // parseDepth maps the strings "0", "1" and "infinity" to 0, 1 and // infiniteDepth. Parsing any other string returns invalidDepth. // // Different WebDAV methods have further constraints on valid depths: // - PROPFIND has no further restrictions, as per section 9.1. // - COPY accepts only "0" or "infinity", as per section 9.8.3. // - MOVE accepts only "infinity", as per section 9.9.2. // - LOCK accepts only "0" or "infinity", as per section 9.10.3. // These constraints are enforced by the handleXxx methods. func parseDepth(s string) int { switch s { case "0": return 0 case "1": return 1 case "infinity": return infiniteDepth } return invalidDepth } // http://www.webdav.org/specs/rfc4918.html#status.code.extensions.to.http11 const ( StatusMulti = 207 StatusUnprocessableEntity = 422 StatusLocked = 423 StatusFailedDependency = 424 StatusInsufficientStorage = 507 ) func StatusText(code int) string { switch code { case StatusMulti: return "Multi-Status" case StatusUnprocessableEntity: return "Unprocessable Entity" case StatusLocked: return "Locked" case StatusFailedDependency: return "Failed Dependency" case StatusInsufficientStorage: return "Insufficient Storage" } return http.StatusText(code) } var ( errDestinationEqualsSource = errors.New("webdav: destination equals source") errDirectoryNotEmpty = errors.New("webdav: directory not empty") errInvalidDepth = errors.New("webdav: invalid depth") errInvalidDestination = errors.New("webdav: invalid destination") errInvalidIfHeader = errors.New("webdav: invalid If header") errInvalidLockInfo = errors.New("webdav: invalid lock info") errInvalidLockToken = errors.New("webdav: invalid lock token") errInvalidPropfind = errors.New("webdav: invalid propfind") errInvalidProppatch = errors.New("webdav: invalid proppatch") errInvalidResponse = errors.New("webdav: invalid response") 
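The `parseDepth` function above accepts only the three legal `Depth` header values and maps everything else to `invalidDepth`; note the matching is case-sensitive, so "Infinity" is rejected. A self-contained sketch with the same mapping (hypothetical names, stdlib only):

```go
package main

import "fmt"

const (
	infiniteDepth = -1
	invalidDepth  = -2
)

// parseDepthSketch mirrors the package's parseDepth: only "0", "1" and
// "infinity" are valid Depth header values; anything else is invalid.
func parseDepthSketch(s string) int {
	switch s {
	case "0":
		return 0
	case "1":
		return 1
	case "infinity":
		return infiniteDepth
	}
	return invalidDepth
}

func main() {
	for _, s := range []string{"0", "1", "infinity", "Infinity", ""} {
		fmt.Printf("%q -> %d\n", s, parseDepthSketch(s))
	}
}
```

The per-method restrictions (COPY allows "0"/"infinity", MOVE only "infinity", LOCK "0"/"infinity") are enforced by the handlers, not here.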
errInvalidTimeout      = errors.New("webdav: invalid timeout")
	errNoFileSystem        = errors.New("webdav: no file system")
	errNoLockSystem        = errors.New("webdav: no lock system")
	errNotADirectory       = errors.New("webdav: not a directory")
	errPrefixMismatch      = errors.New("webdav: prefix mismatch")
	errRecursionTooDeep    = errors.New("webdav: recursion too deep")
	errUnsupportedLockInfo = errors.New("webdav: unsupported lock info")
	errUnsupportedMethod   = errors.New("webdav: unsupported method")
)
lxd-2.0.2/dist/src/golang.org/x/net/webdav/litmus_test_server.go
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build ignore

/*
This program is a server for the WebDAV 'litmus' compliance test at
http://www.webdav.org/neon/litmus/

To run the test:

	go run litmus_test_server.go

and separately, from the downloaded litmus-xxx directory:

	make URL=http://localhost:9999/ check
*/
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	"net/url"

	"golang.org/x/net/webdav"
)

var port = flag.Int("port", 9999, "server port")

func main() {
	flag.Parse()
	log.SetFlags(0)
	h := &webdav.Handler{
		FileSystem: webdav.NewMemFS(),
		LockSystem: webdav.NewMemLS(),
		Logger: func(r *http.Request, err error) {
			litmus := r.Header.Get("X-Litmus")
			if len(litmus) > 19 {
				litmus = litmus[:16] + "..."
			}
			switch r.Method {
			case "COPY", "MOVE":
				dst := ""
				if u, err := url.Parse(r.Header.Get("Destination")); err == nil {
					dst = u.Path
				}
				o := r.Header.Get("Overwrite")
				log.Printf("%-20s%-10s%-30s%-30so=%-2s%v", litmus, r.Method, r.URL.Path, dst, o, err)
			default:
				log.Printf("%-20s%-10s%-30s%v", litmus, r.Method, r.URL.Path, err)
			}
		},
	}

	// The next line would normally be:
	//	http.Handle("/", h)
	// but we wrap that HTTP handler h to cater for a special case.
	//
	// The propfind_invalid2 litmus test case expects an empty namespace prefix
	// declaration to be an error. The FAQ in the webdav litmus test says:
	//
	// "What does the "propfind_invalid2" test check for?...
	//
	// If a request was sent with an XML body which included an empty namespace
	// prefix declaration (xmlns:ns1=""), then the server must reject that with
	// a "400 Bad Request" response, as it is invalid according to the XML
	// Namespace specification."
	//
	// On the other hand, the Go standard library's encoding/xml package
	// accepts an empty xmlns namespace, as per the discussion at
	// https://github.com/golang/go/issues/8068
	//
	// Empty namespaces seem disallowed in the second (2006) edition of the XML
	// standard, but allowed in a later edition. The grammar differs between
	// http://www.w3.org/TR/2006/REC-xml-names-20060816/#ns-decl and
	// http://www.w3.org/TR/REC-xml-names/#dt-prefix
	//
	// Thus, we assume that the propfind_invalid2 test is obsolete, and
	// hard-code the 400 Bad Request response that the test expects.
	http.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("X-Litmus") == "props: 3 (propfind_invalid2)" {
			http.Error(w, "400 Bad Request", http.StatusBadRequest)
			return
		}
		h.ServeHTTP(w, r)
	}))

	addr := fmt.Sprintf(":%d", *port)
	log.Printf("Serving %v", addr)
	log.Fatal(http.ListenAndServe(addr, nil))
}
lxd-2.0.2/dist/src/golang.org/x/net/webdav/if.go
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package webdav

// The If header is covered by Section 10.4.
// http://www.webdav.org/specs/rfc4918.html#HEADER_If

import (
	"strings"
)

// ifHeader is a disjunction (OR) of ifLists.
type ifHeader struct {
	lists []ifList
}

// ifList is a conjunction (AND) of Conditions, and an optional resource tag.
type ifList struct { resourceTag string conditions []Condition } // parseIfHeader parses the "If: foo bar" HTTP header. The httpHeader string // should omit the "If:" prefix and have any "\r\n"s collapsed to a " ", as is // returned by req.Header.Get("If") for a http.Request req. func parseIfHeader(httpHeader string) (h ifHeader, ok bool) { s := strings.TrimSpace(httpHeader) switch tokenType, _, _ := lex(s); tokenType { case '(': return parseNoTagLists(s) case angleTokenType: return parseTaggedLists(s) default: return ifHeader{}, false } } func parseNoTagLists(s string) (h ifHeader, ok bool) { for { l, remaining, ok := parseList(s) if !ok { return ifHeader{}, false } h.lists = append(h.lists, l) if remaining == "" { return h, true } s = remaining } } func parseTaggedLists(s string) (h ifHeader, ok bool) { resourceTag, n := "", 0 for first := true; ; first = false { tokenType, tokenStr, remaining := lex(s) switch tokenType { case angleTokenType: if !first && n == 0 { return ifHeader{}, false } resourceTag, n = tokenStr, 0 s = remaining case '(': n++ l, remaining, ok := parseList(s) if !ok { return ifHeader{}, false } l.resourceTag = resourceTag h.lists = append(h.lists, l) if remaining == "" { return h, true } s = remaining default: return ifHeader{}, false } } } func parseList(s string) (l ifList, remaining string, ok bool) { tokenType, _, s := lex(s) if tokenType != '(' { return ifList{}, "", false } for { tokenType, _, remaining = lex(s) if tokenType == ')' { if len(l.conditions) == 0 { return ifList{}, "", false } return l, remaining, true } c, remaining, ok := parseCondition(s) if !ok { return ifList{}, "", false } l.conditions = append(l.conditions, c) s = remaining } } func parseCondition(s string) (c Condition, remaining string, ok bool) { tokenType, tokenStr, s := lex(s) if tokenType == notTokenType { c.Not = true tokenType, tokenStr, s = lex(s) } switch tokenType { case strTokenType, angleTokenType: c.Token = tokenStr case squareTokenType: c.ETag = 
tokenStr
	default:
		return Condition{}, "", false
	}
	return c, s, true
}

// Single-rune tokens like '(' or ')' have a token type equal to their rune.
// All other tokens have a negative token type.
const (
	errTokenType    = rune(-1)
	eofTokenType    = rune(-2)
	strTokenType    = rune(-3)
	notTokenType    = rune(-4)
	angleTokenType  = rune(-5)
	squareTokenType = rune(-6)
)

func lex(s string) (tokenType rune, tokenStr string, remaining string) {
	// The net/textproto Reader that parses the HTTP header will collapse
	// Linear White Space that spans multiple "\r\n" lines to a single " ",
	// so we don't need to look for '\r' or '\n'.
	for len(s) > 0 && (s[0] == '\t' || s[0] == ' ') {
		s = s[1:]
	}
	if len(s) == 0 {
		return eofTokenType, "", ""
	}
	i := 0
loop:
	for ; i < len(s); i++ {
		switch s[i] {
		case '\t', ' ', '(', ')', '<', '>', '[', ']':
			break loop
		}
	}
	if i != 0 {
		tokenStr, remaining = s[:i], s[i:]
		if tokenStr == "Not" {
			return notTokenType, "", remaining
		}
		return strTokenType, tokenStr, remaining
	}
	j := 0
	switch s[0] {
	case '<':
		j, tokenType = strings.IndexByte(s, '>'), angleTokenType
	case '[':
		j, tokenType = strings.IndexByte(s, ']'), squareTokenType
	default:
		return rune(s[0]), "", s[1:]
	}
	if j < 0 {
		return errTokenType, "", ""
	}
	return tokenType, s[1:j], s[j+1:]
}
lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/atom_test.go
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package xml import "time" var atomValue = &Feed{ XMLName: Name{"http://www.w3.org/2005/Atom", "feed"}, Title: "Example Feed", Link: []Link{{Href: "http://example.org/"}}, Updated: ParseTime("2003-12-13T18:30:02Z"), Author: Person{Name: "John Doe"}, Id: "urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6", Entry: []Entry{ { Title: "Atom-Powered Robots Run Amok", Link: []Link{{Href: "http://example.org/2003/12/13/atom03"}}, Id: "urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a", Updated: ParseTime("2003-12-13T18:30:02Z"), Summary: NewText("Some text."), }, }, } var atomXml = `` + `` + `Example Feed` + `urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6` + `` + `John Doe` + `` + `Atom-Powered Robots Run Amok` + `urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a` + `` + `2003-12-13T18:30:02Z` + `` + `Some text.` + `` + `` func ParseTime(str string) time.Time { t, err := time.Parse(time.RFC3339, str) if err != nil { panic(err) } return t } func NewText(text string) Text { return Text{ Body: text, } } lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/xml_test.go0000644061062106075000000004654112721405224027617 0ustar00stgraberdomain admins00000000000000// Copyright 2009 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
package xml import ( "bytes" "fmt" "io" "reflect" "strings" "testing" "unicode/utf8" ) const testInput = ` World <>'" 白鵬็ฟ” &ไฝ•; &is-it; ` var testEntity = map[string]string{"ไฝ•": "What", "is-it": "is it?"} var rawTokens = []Token{ CharData("\n"), ProcInst{"xml", []byte(`version="1.0" encoding="UTF-8"`)}, CharData("\n"), Directive(`DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"`), CharData("\n"), StartElement{Name{"", "body"}, []Attr{{Name{"xmlns", "foo"}, "ns1"}, {Name{"", "xmlns"}, "ns2"}, {Name{"xmlns", "tag"}, "ns3"}}}, CharData("\n "), StartElement{Name{"", "hello"}, []Attr{{Name{"", "lang"}, "en"}}}, CharData("World <>'\" ็™ฝ้ตฌ็ฟ”"), EndElement{Name{"", "hello"}}, CharData("\n "), StartElement{Name{"", "query"}, []Attr{}}, CharData("What is it?"), EndElement{Name{"", "query"}}, CharData("\n "), StartElement{Name{"", "goodbye"}, []Attr{}}, EndElement{Name{"", "goodbye"}}, CharData("\n "), StartElement{Name{"", "outer"}, []Attr{{Name{"foo", "attr"}, "value"}, {Name{"xmlns", "tag"}, "ns4"}}}, CharData("\n "), StartElement{Name{"", "inner"}, []Attr{}}, EndElement{Name{"", "inner"}}, CharData("\n "), EndElement{Name{"", "outer"}}, CharData("\n "), StartElement{Name{"tag", "name"}, []Attr{}}, CharData("\n "), CharData("Some text here."), CharData("\n "), EndElement{Name{"tag", "name"}}, CharData("\n"), EndElement{Name{"", "body"}}, Comment(" missing final newline "), } var cookedTokens = []Token{ CharData("\n"), ProcInst{"xml", []byte(`version="1.0" encoding="UTF-8"`)}, CharData("\n"), Directive(`DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"`), CharData("\n"), StartElement{Name{"ns2", "body"}, []Attr{{Name{"xmlns", "foo"}, "ns1"}, {Name{"", "xmlns"}, "ns2"}, {Name{"xmlns", "tag"}, "ns3"}}}, CharData("\n "), StartElement{Name{"ns2", "hello"}, []Attr{{Name{"", "lang"}, "en"}}}, CharData("World <>'\" ็™ฝ้ตฌ็ฟ”"), 
EndElement{Name{"ns2", "hello"}}, CharData("\n "), StartElement{Name{"ns2", "query"}, []Attr{}}, CharData("What is it?"), EndElement{Name{"ns2", "query"}}, CharData("\n "), StartElement{Name{"ns2", "goodbye"}, []Attr{}}, EndElement{Name{"ns2", "goodbye"}}, CharData("\n "), StartElement{Name{"ns2", "outer"}, []Attr{{Name{"ns1", "attr"}, "value"}, {Name{"xmlns", "tag"}, "ns4"}}}, CharData("\n "), StartElement{Name{"ns2", "inner"}, []Attr{}}, EndElement{Name{"ns2", "inner"}}, CharData("\n "), EndElement{Name{"ns2", "outer"}}, CharData("\n "), StartElement{Name{"ns3", "name"}, []Attr{}}, CharData("\n "), CharData("Some text here."), CharData("\n "), EndElement{Name{"ns3", "name"}}, CharData("\n"), EndElement{Name{"ns2", "body"}}, Comment(" missing final newline "), } const testInputAltEncoding = ` VALUE` var rawTokensAltEncoding = []Token{ CharData("\n"), ProcInst{"xml", []byte(`version="1.0" encoding="x-testing-uppercase"`)}, CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("value"), EndElement{Name{"", "tag"}}, } var xmlInput = []string{ // unexpected EOF cases "<", "", "", "", // "", // let the Token() caller handle "", "", "", "", " c;", "", "", "", // "", // let the Token() caller handle "", "", "cdata]]>", } func TestRawToken(t *testing.T) { d := NewDecoder(strings.NewReader(testInput)) d.Entity = testEntity testRawToken(t, d, testInput, rawTokens) } const nonStrictInput = ` non&entity &unknown;entity { &#zzz; &ใชใพใˆ3; <-gt; &; &0a; ` var nonStringEntity = map[string]string{"": "oops!", "0a": "oops!"} var nonStrictTokens = []Token{ CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("non&entity"), EndElement{Name{"", "tag"}}, CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("&unknown;entity"), EndElement{Name{"", "tag"}}, CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("{"), EndElement{Name{"", "tag"}}, CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("&#zzz;"), 
EndElement{Name{"", "tag"}}, CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("&ใชใพใˆ3;"), EndElement{Name{"", "tag"}}, CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("<-gt;"), EndElement{Name{"", "tag"}}, CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("&;"), EndElement{Name{"", "tag"}}, CharData("\n"), StartElement{Name{"", "tag"}, []Attr{}}, CharData("&0a;"), EndElement{Name{"", "tag"}}, CharData("\n"), } func TestNonStrictRawToken(t *testing.T) { d := NewDecoder(strings.NewReader(nonStrictInput)) d.Strict = false testRawToken(t, d, nonStrictInput, nonStrictTokens) } type downCaser struct { t *testing.T r io.ByteReader } func (d *downCaser) ReadByte() (c byte, err error) { c, err = d.r.ReadByte() if c >= 'A' && c <= 'Z' { c += 'a' - 'A' } return } func (d *downCaser) Read(p []byte) (int, error) { d.t.Fatalf("unexpected Read call on downCaser reader") panic("unreachable") } func TestRawTokenAltEncoding(t *testing.T) { d := NewDecoder(strings.NewReader(testInputAltEncoding)) d.CharsetReader = func(charset string, input io.Reader) (io.Reader, error) { if charset != "x-testing-uppercase" { t.Fatalf("unexpected charset %q", charset) } return &downCaser{t, input.(io.ByteReader)}, nil } testRawToken(t, d, testInputAltEncoding, rawTokensAltEncoding) } func TestRawTokenAltEncodingNoConverter(t *testing.T) { d := NewDecoder(strings.NewReader(testInputAltEncoding)) token, err := d.RawToken() if token == nil { t.Fatalf("expected a token on first RawToken call") } if err != nil { t.Fatal(err) } token, err = d.RawToken() if token != nil { t.Errorf("expected a nil token; got %#v", token) } if err == nil { t.Fatalf("expected an error on second RawToken call") } const encoding = "x-testing-uppercase" if !strings.Contains(err.Error(), encoding) { t.Errorf("expected error to contain %q; got error: %v", encoding, err) } } func testRawToken(t *testing.T, d *Decoder, raw string, rawTokens []Token) { lastEnd := int64(0) for i, 
want := range rawTokens { start := d.InputOffset() have, err := d.RawToken() end := d.InputOffset() if err != nil { t.Fatalf("token %d: unexpected error: %s", i, err) } if !reflect.DeepEqual(have, want) { var shave, swant string if _, ok := have.(CharData); ok { shave = fmt.Sprintf("CharData(%q)", have) } else { shave = fmt.Sprintf("%#v", have) } if _, ok := want.(CharData); ok { swant = fmt.Sprintf("CharData(%q)", want) } else { swant = fmt.Sprintf("%#v", want) } t.Errorf("token %d = %s, want %s", i, shave, swant) } // Check that InputOffset returned actual token. switch { case start < lastEnd: t.Errorf("token %d: position [%d,%d) for %T is before previous token", i, start, end, have) case start >= end: // Special case: EndElement can be synthesized. if start == end && end == lastEnd { break } t.Errorf("token %d: position [%d,%d) for %T is empty", i, start, end, have) case end > int64(len(raw)): t.Errorf("token %d: position [%d,%d) for %T extends beyond input", i, start, end, have) default: text := raw[start:end] if strings.ContainsAny(text, "<>") && (!strings.HasPrefix(text, "<") || !strings.HasSuffix(text, ">")) { t.Errorf("token %d: misaligned raw token %#q for %T", i, text, have) } } lastEnd = end } } // Ensure that directives (specifically !DOCTYPE) include the complete // text of any nested directives, noting that < and > do not change // nesting depth if they are in single or double quotes. 
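`testRawToken` above cross-checks each token's `[start, end)` byte span against `Decoder.InputOffset`. A standalone sketch of that pattern using the standard library; the `tokenSpans` helper is illustrative, not part of the test file.

```go
// Collect the [start, end) input spans of every raw token in a document.
package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"strings"
)

func tokenSpans(doc string) ([][2]int64, error) {
	d := xml.NewDecoder(strings.NewReader(doc))
	var spans [][2]int64
	for {
		start := d.InputOffset()
		_, err := d.RawToken()
		if err == io.EOF {
			return spans, nil
		}
		if err != nil {
			return nil, err
		}
		spans = append(spans, [2]int64{start, d.InputOffset()})
	}
}

func main() {
	spans, err := tokenSpans("<a>hi</a>")
	if err != nil {
		panic(err)
	}
	fmt.Println(spans) // one span per token: start tag, chardata, end tag
}
```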
var nestedDirectivesInput = ` ]> ">]> ]> '>]> ]> '>]> ]> ` var nestedDirectivesTokens = []Token{ CharData("\n"), Directive(`DOCTYPE []`), CharData("\n"), Directive(`DOCTYPE [">]`), CharData("\n"), Directive(`DOCTYPE []`), CharData("\n"), Directive(`DOCTYPE ['>]`), CharData("\n"), Directive(`DOCTYPE []`), CharData("\n"), Directive(`DOCTYPE ['>]`), CharData("\n"), Directive(`DOCTYPE []`), CharData("\n"), } func TestNestedDirectives(t *testing.T) { d := NewDecoder(strings.NewReader(nestedDirectivesInput)) for i, want := range nestedDirectivesTokens { have, err := d.Token() if err != nil { t.Fatalf("token %d: unexpected error: %s", i, err) } if !reflect.DeepEqual(have, want) { t.Errorf("token %d = %#v want %#v", i, have, want) } } } func TestToken(t *testing.T) { d := NewDecoder(strings.NewReader(testInput)) d.Entity = testEntity for i, want := range cookedTokens { have, err := d.Token() if err != nil { t.Fatalf("token %d: unexpected error: %s", i, err) } if !reflect.DeepEqual(have, want) { t.Errorf("token %d = %#v want %#v", i, have, want) } } } func TestSyntax(t *testing.T) { for i := range xmlInput { d := NewDecoder(strings.NewReader(xmlInput[i])) var err error for _, err = d.Token(); err == nil; _, err = d.Token() { } if _, ok := err.(*SyntaxError); !ok { t.Fatalf(`xmlInput "%s": expected SyntaxError not received`, xmlInput[i]) } } } type allScalars struct { True1 bool True2 bool False1 bool False2 bool Int int Int8 int8 Int16 int16 Int32 int32 Int64 int64 Uint int Uint8 uint8 Uint16 uint16 Uint32 uint32 Uint64 uint64 Uintptr uintptr Float32 float32 Float64 float64 String string PtrString *string } var all = allScalars{ True1: true, True2: true, False1: false, False2: false, Int: 1, Int8: -2, Int16: 3, Int32: -4, Int64: 5, Uint: 6, Uint8: 7, Uint16: 8, Uint32: 9, Uint64: 10, Uintptr: 11, Float32: 13.0, Float64: 14.0, String: "15", PtrString: &sixteen, } var sixteen = "16" const testScalarsInput = ` true 1 false 0 1 -2 3 -4 5 6 7 8 9 10 11 12.0 13.0 14.0 15 16 ` 
func TestAllScalars(t *testing.T) { var a allScalars err := Unmarshal([]byte(testScalarsInput), &a) if err != nil { t.Fatal(err) } if !reflect.DeepEqual(a, all) { t.Errorf("have %+v want %+v", a, all) } } type item struct { Field_a string } func TestIssue569(t *testing.T) { data := `abcd` var i item err := Unmarshal([]byte(data), &i) if err != nil || i.Field_a != "abcd" { t.Fatal("Expecting abcd") } } func TestUnquotedAttrs(t *testing.T) { data := "" d := NewDecoder(strings.NewReader(data)) d.Strict = false token, err := d.Token() if _, ok := err.(*SyntaxError); ok { t.Errorf("Unexpected error: %v", err) } if token.(StartElement).Name.Local != "tag" { t.Errorf("Unexpected tag name: %v", token.(StartElement).Name.Local) } attr := token.(StartElement).Attr[0] if attr.Value != "azAZ09:-_" { t.Errorf("Unexpected attribute value: %v", attr.Value) } if attr.Name.Local != "attr" { t.Errorf("Unexpected attribute name: %v", attr.Name.Local) } } func TestValuelessAttrs(t *testing.T) { tests := [][3]string{ {"
<p nowrap>", "p", "nowrap"}, {"<p nowrap >
", "p", "nowrap"}, {"", "input", "checked"}, {"", "input", "checked"}, } for _, test := range tests { d := NewDecoder(strings.NewReader(test[0])) d.Strict = false token, err := d.Token() if _, ok := err.(*SyntaxError); ok { t.Errorf("Unexpected error: %v", err) } if token.(StartElement).Name.Local != test[1] { t.Errorf("Unexpected tag name: %v", token.(StartElement).Name.Local) } attr := token.(StartElement).Attr[0] if attr.Value != test[2] { t.Errorf("Unexpected attribute value: %v", attr.Value) } if attr.Name.Local != test[2] { t.Errorf("Unexpected attribute name: %v", attr.Name.Local) } } } func TestCopyTokenCharData(t *testing.T) { data := []byte("same data") var tok1 Token = CharData(data) tok2 := CopyToken(tok1) if !reflect.DeepEqual(tok1, tok2) { t.Error("CopyToken(CharData) != CharData") } data[1] = 'o' if reflect.DeepEqual(tok1, tok2) { t.Error("CopyToken(CharData) uses same buffer.") } } func TestCopyTokenStartElement(t *testing.T) { elt := StartElement{Name{"", "hello"}, []Attr{{Name{"", "lang"}, "en"}}} var tok1 Token = elt tok2 := CopyToken(tok1) if tok1.(StartElement).Attr[0].Value != "en" { t.Error("CopyToken overwrote Attr[0]") } if !reflect.DeepEqual(tok1, tok2) { t.Error("CopyToken(StartElement) != StartElement") } tok1.(StartElement).Attr[0] = Attr{Name{"", "lang"}, "de"} if reflect.DeepEqual(tok1, tok2) { t.Error("CopyToken(CharData) uses same buffer.") } } func TestSyntaxErrorLineNum(t *testing.T) { testInput := "
<P>Foo<P>\n\n<P>
Bar\n" d := NewDecoder(strings.NewReader(testInput)) var err error for _, err = d.Token(); err == nil; _, err = d.Token() { } synerr, ok := err.(*SyntaxError) if !ok { t.Error("Expected SyntaxError.") } if synerr.Line != 3 { t.Error("SyntaxError didn't have correct line number.") } } func TestTrailingRawToken(t *testing.T) { input := ` ` d := NewDecoder(strings.NewReader(input)) var err error for _, err = d.RawToken(); err == nil; _, err = d.RawToken() { } if err != io.EOF { t.Fatalf("d.RawToken() = _, %v, want _, io.EOF", err) } } func TestTrailingToken(t *testing.T) { input := ` ` d := NewDecoder(strings.NewReader(input)) var err error for _, err = d.Token(); err == nil; _, err = d.Token() { } if err != io.EOF { t.Fatalf("d.Token() = _, %v, want _, io.EOF", err) } } func TestEntityInsideCDATA(t *testing.T) { input := `` d := NewDecoder(strings.NewReader(input)) var err error for _, err = d.Token(); err == nil; _, err = d.Token() { } if err != io.EOF { t.Fatalf("d.Token() = _, %v, want _, io.EOF", err) } } var characterTests = []struct { in string err string }{ {"\x12", "illegal character code U+0012"}, {"\x0b", "illegal character code U+000B"}, {"\xef\xbf\xbe", "illegal character code U+FFFE"}, {"\r\n\x07", "illegal character code U+0007"}, {"what's up", "expected attribute name in element"}, {"&abc\x01;", "invalid character entity &abc (no semicolon)"}, {"&\x01;", "invalid character entity & (no semicolon)"}, {"&\xef\xbf\xbe;", "invalid character entity &\uFFFE;"}, {"&hello;", "invalid character entity &hello;"}, } func TestDisallowedCharacters(t *testing.T) { for i, tt := range characterTests { d := NewDecoder(strings.NewReader(tt.in)) var err error for err == nil { _, err = d.Token() } synerr, ok := err.(*SyntaxError) if !ok { t.Fatalf("input %d d.Token() = _, %v, want _, *SyntaxError", i, err) } if synerr.Msg != tt.err { t.Fatalf("input %d synerr.Msg wrong: want %q, got %q", i, tt.err, synerr.Msg) } } } type procInstEncodingTest struct { expect, got string } 
var procInstTests = []struct { input string expect [2]string }{ {`version="1.0" encoding="utf-8"`, [2]string{"1.0", "utf-8"}}, {`version="1.0" encoding='utf-8'`, [2]string{"1.0", "utf-8"}}, {`version="1.0" encoding='utf-8' `, [2]string{"1.0", "utf-8"}}, {`version="1.0" encoding=utf-8`, [2]string{"1.0", ""}}, {`encoding="FOO" `, [2]string{"", "FOO"}}, } func TestProcInstEncoding(t *testing.T) { for _, test := range procInstTests { if got := procInst("version", test.input); got != test.expect[0] { t.Errorf("procInst(version, %q) = %q; want %q", test.input, got, test.expect[0]) } if got := procInst("encoding", test.input); got != test.expect[1] { t.Errorf("procInst(encoding, %q) = %q; want %q", test.input, got, test.expect[1]) } } } // Ensure that directives with comments include the complete // text of any nested directives. var directivesWithCommentsInput = ` ]> ]> --> --> []> ` var directivesWithCommentsTokens = []Token{ CharData("\n"), Directive(`DOCTYPE []`), CharData("\n"), Directive(`DOCTYPE []`), CharData("\n"), Directive(`DOCTYPE []`), CharData("\n"), } func TestDirectivesWithComments(t *testing.T) { d := NewDecoder(strings.NewReader(directivesWithCommentsInput)) for i, want := range directivesWithCommentsTokens { have, err := d.Token() if err != nil { t.Fatalf("token %d: unexpected error: %s", i, err) } if !reflect.DeepEqual(have, want) { t.Errorf("token %d = %#v want %#v", i, have, want) } } } // Writer whose Write method always returns an error. type errWriter struct{} func (errWriter) Write(p []byte) (n int, err error) { return 0, fmt.Errorf("unwritable") } func TestEscapeTextIOErrors(t *testing.T) { expectErr := "unwritable" err := EscapeText(errWriter{}, []byte{'A'}) if err == nil || err.Error() != expectErr { t.Errorf("have %v, want %v", err, expectErr) } } func TestEscapeTextInvalidChar(t *testing.T) { input := []byte("A \x00 terminated string.") expected := "A \uFFFD terminated string." 
buff := new(bytes.Buffer) if err := EscapeText(buff, input); err != nil { t.Fatalf("have %v, want nil", err) } text := buff.String() if text != expected { t.Errorf("have %v, want %v", text, expected) } } func TestIssue5880(t *testing.T) { type T []byte data, err := Marshal(T{192, 168, 0, 1}) if err != nil { t.Errorf("Marshal error: %v", err) } if !utf8.Valid(data) { t.Errorf("Marshal generated invalid UTF-8: %x", data) } } lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/example_test.go0000644061062106075000000000733712721405224030452 0ustar00stgraberdomain admins00000000000000// Copyright 2012 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package xml_test import ( "encoding/xml" "fmt" "os" ) func ExampleMarshalIndent() { type Address struct { City, State string } type Person struct { XMLName xml.Name `xml:"person"` Id int `xml:"id,attr"` FirstName string `xml:"name>first"` LastName string `xml:"name>last"` Age int `xml:"age"` Height float32 `xml:"height,omitempty"` Married bool Address Comment string `xml:",comment"` } v := &Person{Id: 13, FirstName: "John", LastName: "Doe", Age: 42} v.Comment = " Need more details. " v.Address = Address{"Hanga Roa", "Easter Island"} output, err := xml.MarshalIndent(v, " ", " ") if err != nil { fmt.Printf("error: %v\n", err) } os.Stdout.Write(output) // Output: // // // John // Doe // // 42 // false // Hanga Roa // Easter Island // // } func ExampleEncoder() { type Address struct { City, State string } type Person struct { XMLName xml.Name `xml:"person"` Id int `xml:"id,attr"` FirstName string `xml:"name>first"` LastName string `xml:"name>last"` Age int `xml:"age"` Height float32 `xml:"height,omitempty"` Married bool Address Comment string `xml:",comment"` } v := &Person{Id: 13, FirstName: "John", LastName: "Doe", Age: 42} v.Comment = " Need more details. 
" v.Address = Address{"Hanga Roa", "Easter Island"} enc := xml.NewEncoder(os.Stdout) enc.Indent(" ", " ") if err := enc.Encode(v); err != nil { fmt.Printf("error: %v\n", err) } // Output: // // // John // Doe // // 42 // false // Hanga Roa // Easter Island // // } // This example demonstrates unmarshaling an XML excerpt into a value with // some preset fields. Note that the Phone field isn't modified and that // the XML element is ignored. Also, the Groups field is assigned // considering the element path provided in its tag. func ExampleUnmarshal() { type Email struct { Where string `xml:"where,attr"` Addr string } type Address struct { City, State string } type Result struct { XMLName xml.Name `xml:"Person"` Name string `xml:"FullName"` Phone string Email []Email Groups []string `xml:"Group>Value"` Address } v := Result{Name: "none", Phone: "none"} data := ` Grace R. Emlin Example Inc. gre@example.com gre@work.com Friends Squash Hanga Roa Easter Island ` err := xml.Unmarshal([]byte(data), &v) if err != nil { fmt.Printf("error: %v", err) return } fmt.Printf("XMLName: %#v\n", v.XMLName) fmt.Printf("Name: %q\n", v.Name) fmt.Printf("Phone: %q\n", v.Phone) fmt.Printf("Email: %v\n", v.Email) fmt.Printf("Groups: %v\n", v.Groups) fmt.Printf("Address: %v\n", v.Address) // Output: // XMLName: xml.Name{Space:"", Local:"Person"} // Name: "Grace R. Emlin" // Phone: "none" // Email: [{home gre@example.com} {work gre@work.com}] // Groups: [Friends Squash] // Address: {Hanga Roa Easter Island} } lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/README0000644061062106075000000000067112721405224026303 0ustar00stgraberdomain admins00000000000000This is a fork of the encoding/xml package at ca1d6c4, the last commit before https://go.googlesource.com/go/+/c0d6d33 "encoding/xml: restore Go 1.4 name space behavior" made late in the lead-up to the Go 1.5 release. 
The list of encoding/xml changes is at https://go.googlesource.com/go/+log/master/src/encoding/xml This fork is temporary, and I (nigeltao) expect to revert it after Go 1.6 is released. See http://golang.org/issue/11841 lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/marshal_test.go0000644061062106075000000013743412721405224030450 0ustar00stgraberdomain admins00000000000000// Copyright 2011 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package xml import ( "bytes" "errors" "fmt" "io" "reflect" "strconv" "strings" "sync" "testing" "time" ) type DriveType int const ( HyperDrive DriveType = iota ImprobabilityDrive ) type Passenger struct { Name []string `xml:"name"` Weight float32 `xml:"weight"` } type Ship struct { XMLName struct{} `xml:"spaceship"` Name string `xml:"name,attr"` Pilot string `xml:"pilot,attr"` Drive DriveType `xml:"drive"` Age uint `xml:"age"` Passenger []*Passenger `xml:"passenger"` secret string } type NamedType string type Port struct { XMLName struct{} `xml:"port"` Type string `xml:"type,attr,omitempty"` Comment string `xml:",comment"` Number string `xml:",chardata"` } type Domain struct { XMLName struct{} `xml:"domain"` Country string `xml:",attr,omitempty"` Name []byte `xml:",chardata"` Comment []byte `xml:",comment"` } type Book struct { XMLName struct{} `xml:"book"` Title string `xml:",chardata"` } type Event struct { XMLName struct{} `xml:"event"` Year int `xml:",chardata"` } type Movie struct { XMLName struct{} `xml:"movie"` Length uint `xml:",chardata"` } type Pi struct { XMLName struct{} `xml:"pi"` Approximation float32 `xml:",chardata"` } type Universe struct { XMLName struct{} `xml:"universe"` Visible float64 `xml:",chardata"` } type Particle struct { XMLName struct{} `xml:"particle"` HasMass bool `xml:",chardata"` } type Departure struct { XMLName struct{} `xml:"departure"` When time.Time `xml:",chardata"` } type SecretAgent struct { 
XMLName struct{} `xml:"agent"` Handle string `xml:"handle,attr"` Identity string Obfuscate string `xml:",innerxml"` } type NestedItems struct { XMLName struct{} `xml:"result"` Items []string `xml:">item"` Item1 []string `xml:"Items>item1"` } type NestedOrder struct { XMLName struct{} `xml:"result"` Field1 string `xml:"parent>c"` Field2 string `xml:"parent>b"` Field3 string `xml:"parent>a"` } type MixedNested struct { XMLName struct{} `xml:"result"` A string `xml:"parent1>a"` B string `xml:"b"` C string `xml:"parent1>parent2>c"` D string `xml:"parent1>d"` } type NilTest struct { A interface{} `xml:"parent1>parent2>a"` B interface{} `xml:"parent1>b"` C interface{} `xml:"parent1>parent2>c"` } type Service struct { XMLName struct{} `xml:"service"` Domain *Domain `xml:"host>domain"` Port *Port `xml:"host>port"` Extra1 interface{} Extra2 interface{} `xml:"host>extra2"` } var nilStruct *Ship type EmbedA struct { EmbedC EmbedB EmbedB FieldA string } type EmbedB struct { FieldB string *EmbedC } type EmbedC struct { FieldA1 string `xml:"FieldA>A1"` FieldA2 string `xml:"FieldA>A2"` FieldB string FieldC string } type NameCasing struct { XMLName struct{} `xml:"casing"` Xy string XY string XyA string `xml:"Xy,attr"` XYA string `xml:"XY,attr"` } type NamePrecedence struct { XMLName Name `xml:"Parent"` FromTag XMLNameWithoutTag `xml:"InTag"` FromNameVal XMLNameWithoutTag FromNameTag XMLNameWithTag InFieldName string } type XMLNameWithTag struct { XMLName Name `xml:"InXMLNameTag"` Value string `xml:",chardata"` } type XMLNameWithNSTag struct { XMLName Name `xml:"ns InXMLNameWithNSTag"` Value string `xml:",chardata"` } type XMLNameWithoutTag struct { XMLName Name Value string `xml:",chardata"` } type NameInField struct { Foo Name `xml:"ns foo"` } type AttrTest struct { Int int `xml:",attr"` Named int `xml:"int,attr"` Float float64 `xml:",attr"` Uint8 uint8 `xml:",attr"` Bool bool `xml:",attr"` Str string `xml:",attr"` Bytes []byte `xml:",attr"` } type OmitAttrTest struct { Int int 
`xml:",attr,omitempty"` Named int `xml:"int,attr,omitempty"` Float float64 `xml:",attr,omitempty"` Uint8 uint8 `xml:",attr,omitempty"` Bool bool `xml:",attr,omitempty"` Str string `xml:",attr,omitempty"` Bytes []byte `xml:",attr,omitempty"` } type OmitFieldTest struct { Int int `xml:",omitempty"` Named int `xml:"int,omitempty"` Float float64 `xml:",omitempty"` Uint8 uint8 `xml:",omitempty"` Bool bool `xml:",omitempty"` Str string `xml:",omitempty"` Bytes []byte `xml:",omitempty"` Ptr *PresenceTest `xml:",omitempty"` } type AnyTest struct { XMLName struct{} `xml:"a"` Nested string `xml:"nested>value"` AnyField AnyHolder `xml:",any"` } type AnyOmitTest struct { XMLName struct{} `xml:"a"` Nested string `xml:"nested>value"` AnyField *AnyHolder `xml:",any,omitempty"` } type AnySliceTest struct { XMLName struct{} `xml:"a"` Nested string `xml:"nested>value"` AnyField []AnyHolder `xml:",any"` } type AnyHolder struct { XMLName Name XML string `xml:",innerxml"` } type RecurseA struct { A string B *RecurseB } type RecurseB struct { A *RecurseA B string } type PresenceTest struct { Exists *struct{} } type IgnoreTest struct { PublicSecret string `xml:"-"` } type MyBytes []byte type Data struct { Bytes []byte Attr []byte `xml:",attr"` Custom MyBytes } type Plain struct { V interface{} } type MyInt int type EmbedInt struct { MyInt } type Strings struct { X []string `xml:"A>B,omitempty"` } type PointerFieldsTest struct { XMLName Name `xml:"dummy"` Name *string `xml:"name,attr"` Age *uint `xml:"age,attr"` Empty *string `xml:"empty,attr"` Contents *string `xml:",chardata"` } type ChardataEmptyTest struct { XMLName Name `xml:"test"` Contents *string `xml:",chardata"` } type MyMarshalerTest struct { } var _ Marshaler = (*MyMarshalerTest)(nil) func (m *MyMarshalerTest) MarshalXML(e *Encoder, start StartElement) error { e.EncodeToken(start) e.EncodeToken(CharData([]byte("hello world"))) e.EncodeToken(EndElement{start.Name}) return nil } type MyMarshalerAttrTest struct{} var _ 
MarshalerAttr = (*MyMarshalerAttrTest)(nil) func (m *MyMarshalerAttrTest) MarshalXMLAttr(name Name) (Attr, error) { return Attr{name, "hello world"}, nil } type MyMarshalerValueAttrTest struct{} var _ MarshalerAttr = MyMarshalerValueAttrTest{} func (m MyMarshalerValueAttrTest) MarshalXMLAttr(name Name) (Attr, error) { return Attr{name, "hello world"}, nil } type MarshalerStruct struct { Foo MyMarshalerAttrTest `xml:",attr"` } type MarshalerValueStruct struct { Foo MyMarshalerValueAttrTest `xml:",attr"` } type InnerStruct struct { XMLName Name `xml:"testns outer"` } type OuterStruct struct { InnerStruct IntAttr int `xml:"int,attr"` } type OuterNamedStruct struct { InnerStruct XMLName Name `xml:"outerns test"` IntAttr int `xml:"int,attr"` } type OuterNamedOrderedStruct struct { XMLName Name `xml:"outerns test"` InnerStruct IntAttr int `xml:"int,attr"` } type OuterOuterStruct struct { OuterStruct } type NestedAndChardata struct { AB []string `xml:"A>B"` Chardata string `xml:",chardata"` } type NestedAndComment struct { AB []string `xml:"A>B"` Comment string `xml:",comment"` } type XMLNSFieldStruct struct { Ns string `xml:"xmlns,attr"` Body string } type NamedXMLNSFieldStruct struct { XMLName struct{} `xml:"testns test"` Ns string `xml:"xmlns,attr"` Body string } type XMLNSFieldStructWithOmitEmpty struct { Ns string `xml:"xmlns,attr,omitempty"` Body string } type NamedXMLNSFieldStructWithEmptyNamespace struct { XMLName struct{} `xml:"test"` Ns string `xml:"xmlns,attr"` Body string } type RecursiveXMLNSFieldStruct struct { Ns string `xml:"xmlns,attr"` Body *RecursiveXMLNSFieldStruct `xml:",omitempty"` Text string `xml:",omitempty"` } func ifaceptr(x interface{}) interface{} { return &x } var ( nameAttr = "Sarah" ageAttr = uint(12) contentsAttr = "lorem ipsum" ) // Unless explicitly stated as such (or *Plain), all of the // tests below are two-way tests. 
When introducing new tests, // please try to make them two-way as well to ensure that // marshalling and unmarshalling are as symmetrical as feasible. var marshalTests = []struct { Value interface{} ExpectXML string MarshalOnly bool UnmarshalOnly bool }{ // Test nil marshals to nothing {Value: nil, ExpectXML: ``, MarshalOnly: true}, {Value: nilStruct, ExpectXML: ``, MarshalOnly: true}, // Test value types {Value: &Plain{true}, ExpectXML: `true`}, {Value: &Plain{false}, ExpectXML: `false`}, {Value: &Plain{int(42)}, ExpectXML: `42`}, {Value: &Plain{int8(42)}, ExpectXML: `42`}, {Value: &Plain{int16(42)}, ExpectXML: `42`}, {Value: &Plain{int32(42)}, ExpectXML: `42`}, {Value: &Plain{uint(42)}, ExpectXML: `42`}, {Value: &Plain{uint8(42)}, ExpectXML: `42`}, {Value: &Plain{uint16(42)}, ExpectXML: `42`}, {Value: &Plain{uint32(42)}, ExpectXML: `42`}, {Value: &Plain{float32(1.25)}, ExpectXML: `1.25`}, {Value: &Plain{float64(1.25)}, ExpectXML: `1.25`}, {Value: &Plain{uintptr(0xFFDD)}, ExpectXML: `65501`}, {Value: &Plain{"gopher"}, ExpectXML: `gopher`}, {Value: &Plain{[]byte("gopher")}, ExpectXML: `gopher`}, {Value: &Plain{""}, ExpectXML: `</>`}, {Value: &Plain{[]byte("")}, ExpectXML: `</>`}, {Value: &Plain{[3]byte{'<', '/', '>'}}, ExpectXML: `</>`}, {Value: &Plain{NamedType("potato")}, ExpectXML: `potato`}, {Value: &Plain{[]int{1, 2, 3}}, ExpectXML: `123`}, {Value: &Plain{[3]int{1, 2, 3}}, ExpectXML: `123`}, {Value: ifaceptr(true), MarshalOnly: true, ExpectXML: `true`}, // Test time. { Value: &Plain{time.Unix(1e9, 123456789).UTC()}, ExpectXML: `2001-09-09T01:46:40.123456789Z`, }, // A pointer to struct{} may be used to test for an element's presence. { Value: &PresenceTest{new(struct{})}, ExpectXML: ``, }, { Value: &PresenceTest{}, ExpectXML: ``, }, // A pointer to struct{} may be used to test for an element's presence. 
{ Value: &PresenceTest{new(struct{})}, ExpectXML: ``, }, { Value: &PresenceTest{}, ExpectXML: ``, }, // A []byte field is only nil if the element was not found. { Value: &Data{}, ExpectXML: ``, UnmarshalOnly: true, }, { Value: &Data{Bytes: []byte{}, Custom: MyBytes{}, Attr: []byte{}}, ExpectXML: ``, UnmarshalOnly: true, }, // Check that []byte works, including named []byte types. { Value: &Data{Bytes: []byte("ab"), Custom: MyBytes("cd"), Attr: []byte{'v'}}, ExpectXML: `abcd`, }, // Test innerxml { Value: &SecretAgent{ Handle: "007", Identity: "James Bond", Obfuscate: "", }, ExpectXML: `James Bond`, MarshalOnly: true, }, { Value: &SecretAgent{ Handle: "007", Identity: "James Bond", Obfuscate: "James Bond", }, ExpectXML: `James Bond`, UnmarshalOnly: true, }, // Test structs {Value: &Port{Type: "ssl", Number: "443"}, ExpectXML: `443`}, {Value: &Port{Number: "443"}, ExpectXML: `443`}, {Value: &Port{Type: ""}, ExpectXML: ``}, {Value: &Port{Number: "443", Comment: "https"}, ExpectXML: `443`}, {Value: &Port{Number: "443", Comment: "add space-"}, ExpectXML: `443`, MarshalOnly: true}, {Value: &Domain{Name: []byte("google.com&friends")}, ExpectXML: `google.com&friends`}, {Value: &Domain{Name: []byte("google.com"), Comment: []byte(" &friends ")}, ExpectXML: `google.com`}, {Value: &Book{Title: "Pride & Prejudice"}, ExpectXML: `Pride & Prejudice`}, {Value: &Event{Year: -3114}, ExpectXML: `-3114`}, {Value: &Movie{Length: 13440}, ExpectXML: `13440`}, {Value: &Pi{Approximation: 3.14159265}, ExpectXML: `3.1415927`}, {Value: &Universe{Visible: 9.3e13}, ExpectXML: `9.3e+13`}, {Value: &Particle{HasMass: true}, ExpectXML: `true`}, {Value: &Departure{When: ParseTime("2013-01-09T00:15:00-09:00")}, ExpectXML: `2013-01-09T00:15:00-09:00`}, {Value: atomValue, ExpectXML: atomXml}, { Value: &Ship{ Name: "Heart of Gold", Pilot: "Computer", Age: 1, Drive: ImprobabilityDrive, Passenger: []*Passenger{ { Name: []string{"Zaphod", "Beeblebrox"}, Weight: 7.25, }, { Name: []string{"Trisha", 
"McMillen"}, Weight: 5.5, }, { Name: []string{"Ford", "Prefect"}, Weight: 7, }, { Name: []string{"Arthur", "Dent"}, Weight: 6.75, }, }, }, ExpectXML: `` + `` + strconv.Itoa(int(ImprobabilityDrive)) + `` + `1` + `` + `Zaphod` + `Beeblebrox` + `7.25` + `` + `` + `Trisha` + `McMillen` + `5.5` + `` + `` + `Ford` + `Prefect` + `7` + `` + `` + `Arthur` + `Dent` + `6.75` + `` + ``, }, // Test a>b { Value: &NestedItems{Items: nil, Item1: nil}, ExpectXML: `` + `` + `` + ``, }, { Value: &NestedItems{Items: []string{}, Item1: []string{}}, ExpectXML: `` + `` + `` + ``, MarshalOnly: true, }, { Value: &NestedItems{Items: nil, Item1: []string{"A"}}, ExpectXML: `` + `` + `A` + `` + ``, }, { Value: &NestedItems{Items: []string{"A", "B"}, Item1: nil}, ExpectXML: `` + `` + `A` + `B` + `` + ``, }, { Value: &NestedItems{Items: []string{"A", "B"}, Item1: []string{"C"}}, ExpectXML: `` + `` + `A` + `B` + `C` + `` + ``, }, { Value: &NestedOrder{Field1: "C", Field2: "B", Field3: "A"}, ExpectXML: `` + `` + `C` + `B` + `A` + `` + ``, }, { Value: &NilTest{A: "A", B: nil, C: "C"}, ExpectXML: `` + `` + `A` + `C` + `` + ``, MarshalOnly: true, // Uses interface{} }, { Value: &MixedNested{A: "A", B: "B", C: "C", D: "D"}, ExpectXML: `` + `A` + `B` + `` + `C` + `D` + `` + ``, }, { Value: &Service{Port: &Port{Number: "80"}}, ExpectXML: `80`, }, { Value: &Service{}, ExpectXML: ``, }, { Value: &Service{Port: &Port{Number: "80"}, Extra1: "A", Extra2: "B"}, ExpectXML: `` + `80` + `A` + `B` + ``, MarshalOnly: true, }, { Value: &Service{Port: &Port{Number: "80"}, Extra2: "example"}, ExpectXML: `` + `80` + `example` + ``, MarshalOnly: true, }, { Value: &struct { XMLName struct{} `xml:"space top"` A string `xml:"x>a"` B string `xml:"x>b"` C string `xml:"space x>c"` C1 string `xml:"space1 x>c"` D1 string `xml:"space1 x>d"` E1 string `xml:"x>e"` }{ A: "a", B: "b", C: "c", C1: "c1", D1: "d1", E1: "e1", }, ExpectXML: `` + `abc` + `` + `c1` + `d1` + `` + `` + `e1` + `` + ``, }, { Value: &struct { XMLName Name A 
string `xml:"x>a"` B string `xml:"x>b"` C string `xml:"space x>c"` C1 string `xml:"space1 x>c"` D1 string `xml:"space1 x>d"` }{ XMLName: Name{ Space: "space0", Local: "top", }, A: "a", B: "b", C: "c", C1: "c1", D1: "d1", }, ExpectXML: `` + `ab` + `c` + `` + `c1` + `d1` + `` + ``, }, { Value: &struct { XMLName struct{} `xml:"top"` B string `xml:"space x>b"` B1 string `xml:"space1 x>b"` }{ B: "b", B1: "b1", }, ExpectXML: `` + `b` + `b1` + ``, }, // Test struct embedding { Value: &EmbedA{ EmbedC: EmbedC{ FieldA1: "", // Shadowed by A.A FieldA2: "", // Shadowed by A.A FieldB: "A.C.B", FieldC: "A.C.C", }, EmbedB: EmbedB{ FieldB: "A.B.B", EmbedC: &EmbedC{ FieldA1: "A.B.C.A1", FieldA2: "A.B.C.A2", FieldB: "", // Shadowed by A.B.B FieldC: "A.B.C.C", }, }, FieldA: "A.A", }, ExpectXML: `` + `A.C.B` + `A.C.C` + `` + `A.B.B` + `` + `A.B.C.A1` + `A.B.C.A2` + `` + `A.B.C.C` + `` + `A.A` + ``, }, // Test that name casing matters { Value: &NameCasing{Xy: "mixed", XY: "upper", XyA: "mixedA", XYA: "upperA"}, ExpectXML: `mixedupper`, }, // Test the order in which the XML element name is chosen { Value: &NamePrecedence{ FromTag: XMLNameWithoutTag{Value: "A"}, FromNameVal: XMLNameWithoutTag{XMLName: Name{Local: "InXMLName"}, Value: "B"}, FromNameTag: XMLNameWithTag{Value: "C"}, InFieldName: "D", }, ExpectXML: `` + `A` + `B` + `C` + `D` + ``, MarshalOnly: true, }, { Value: &NamePrecedence{ XMLName: Name{Local: "Parent"}, FromTag: XMLNameWithoutTag{XMLName: Name{Local: "InTag"}, Value: "A"}, FromNameVal: XMLNameWithoutTag{XMLName: Name{Local: "FromNameVal"}, Value: "B"}, FromNameTag: XMLNameWithTag{XMLName: Name{Local: "InXMLNameTag"}, Value: "C"}, InFieldName: "D", }, ExpectXML: `` + `A` + `B` + `C` + `D` + ``, UnmarshalOnly: true, }, // xml.Name works in a plain field as well. 
{ Value: &NameInField{Name{Space: "ns", Local: "foo"}}, ExpectXML: ``, }, { Value: &NameInField{Name{Space: "ns", Local: "foo"}}, ExpectXML: ``, UnmarshalOnly: true, }, // Marshaling zero xml.Name uses the tag or field name. { Value: &NameInField{}, ExpectXML: ``, MarshalOnly: true, }, // Test attributes { Value: &AttrTest{ Int: 8, Named: 9, Float: 23.5, Uint8: 255, Bool: true, Str: "str", Bytes: []byte("byt"), }, ExpectXML: ``, }, { Value: &AttrTest{Bytes: []byte{}}, ExpectXML: ``, }, { Value: &OmitAttrTest{ Int: 8, Named: 9, Float: 23.5, Uint8: 255, Bool: true, Str: "str", Bytes: []byte("byt"), }, ExpectXML: ``, }, { Value: &OmitAttrTest{}, ExpectXML: ``, }, // pointer fields { Value: &PointerFieldsTest{Name: &nameAttr, Age: &ageAttr, Contents: &contentsAttr}, ExpectXML: `lorem ipsum`, MarshalOnly: true, }, // empty chardata pointer field { Value: &ChardataEmptyTest{}, ExpectXML: ``, MarshalOnly: true, }, // omitempty on fields { Value: &OmitFieldTest{ Int: 8, Named: 9, Float: 23.5, Uint8: 255, Bool: true, Str: "str", Bytes: []byte("byt"), Ptr: &PresenceTest{}, }, ExpectXML: `` + `8` + `9` + `23.5` + `255` + `true` + `str` + `byt` + `` + ``, }, { Value: &OmitFieldTest{}, ExpectXML: ``, }, // Test ",any" { ExpectXML: `knownunknown`, Value: &AnyTest{ Nested: "known", AnyField: AnyHolder{ XMLName: Name{Local: "other"}, XML: "unknown", }, }, }, { Value: &AnyTest{Nested: "known", AnyField: AnyHolder{ XML: "", XMLName: Name{Local: "AnyField"}, }, }, ExpectXML: `known`, }, { ExpectXML: `b`, Value: &AnyOmitTest{ Nested: "b", }, }, { ExpectXML: `bei`, Value: &AnySliceTest{ Nested: "b", AnyField: []AnyHolder{ { XMLName: Name{Local: "c"}, XML: "e", }, { XMLName: Name{Space: "f", Local: "g"}, XML: "i", }, }, }, }, { ExpectXML: `b`, Value: &AnySliceTest{ Nested: "b", }, }, // Test recursive types. 
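The attribute tests above exercise `,attr` together with `omitempty`. As a minimal standalone sketch of that behavior (using the standard `encoding/xml` package, of which this file is a vendored fork; the `Item` type is invented for illustration), an `omitempty` attribute disappears entirely when its field holds the zero value:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Item is an illustrative type: the n attribute carries ",attr,omitempty",
// so Marshal drops the attribute entirely when the field is zero.
type Item struct {
	XMLName struct{} `xml:"item"`
	N       int      `xml:"n,attr,omitempty"`
}

// marshalItem returns the XML encoding of an Item with the given n.
func marshalItem(n int) string {
	out, err := xml.Marshal(Item{N: n})
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	fmt.Println(marshalItem(0)) // attribute omitted
	fmt.Println(marshalItem(5)) // attribute present
}
```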
{ Value: &RecurseA{ A: "a1", B: &RecurseB{ A: &RecurseA{"a2", nil}, B: "b1", }, }, ExpectXML: `a1a2b1`, }, // Test ignoring fields via "-" tag { ExpectXML: ``, Value: &IgnoreTest{}, }, { ExpectXML: ``, Value: &IgnoreTest{PublicSecret: "can't tell"}, MarshalOnly: true, }, { ExpectXML: `ignore me`, Value: &IgnoreTest{}, UnmarshalOnly: true, }, // Test escaping. { ExpectXML: `dquote: "; squote: '; ampersand: &; less: <; greater: >;`, Value: &AnyTest{ Nested: `dquote: "; squote: '; ampersand: &; less: <; greater: >;`, AnyField: AnyHolder{XMLName: Name{Local: "empty"}}, }, }, { ExpectXML: `newline: ; cr: ; tab: ;`, Value: &AnyTest{ Nested: "newline: \n; cr: \r; tab: \t;", AnyField: AnyHolder{XMLName: Name{Local: "AnyField"}}, }, }, { ExpectXML: "1\r2\r\n3\n\r4\n5", Value: &AnyTest{ Nested: "1\n2\n3\n\n4\n5", }, UnmarshalOnly: true, }, { ExpectXML: `42`, Value: &EmbedInt{ MyInt: 42, }, }, // Test omitempty with parent chain; see golang.org/issue/4168. { ExpectXML: ``, Value: &Strings{}, }, // Custom marshalers. 
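The custom-marshaler entries in the table rely on types implementing the `Marshaler` interface. A minimal sketch of that mechanism with the standard `encoding/xml` package (the `Animal` type and element name are invented for illustration): `MarshalXML` receives the start element chosen by the encoder and may rename it before emitting its own content.

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

// Animal overrides the default struct encoding: MarshalXML renames the
// element and encodes only the Name field as its character data.
type Animal struct{ Name string }

func (a Animal) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
	start.Name = xml.Name{Local: "animal"}
	return e.EncodeElement(a.Name, start)
}

// encodeAnimal returns the XML produced for one Animal value.
func encodeAnimal(name string) string {
	var buf bytes.Buffer
	if err := xml.NewEncoder(&buf).Encode(Animal{Name: name}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(encodeAnimal("gopher"))
}
```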
{ ExpectXML: `hello world`, Value: &MyMarshalerTest{}, }, { ExpectXML: ``, Value: &MarshalerStruct{}, }, { ExpectXML: ``, Value: &MarshalerValueStruct{}, }, { ExpectXML: ``, Value: &OuterStruct{IntAttr: 10}, }, { ExpectXML: ``, Value: &OuterNamedStruct{XMLName: Name{Space: "outerns", Local: "test"}, IntAttr: 10}, }, { ExpectXML: ``, Value: &OuterNamedOrderedStruct{XMLName: Name{Space: "outerns", Local: "test"}, IntAttr: 10}, }, { ExpectXML: ``, Value: &OuterOuterStruct{OuterStruct{IntAttr: 10}}, }, { ExpectXML: `test`, Value: &NestedAndChardata{AB: make([]string, 2), Chardata: "test"}, }, { ExpectXML: ``, Value: &NestedAndComment{AB: make([]string, 2), Comment: "test"}, }, { ExpectXML: `hello world`, Value: &XMLNSFieldStruct{Ns: "http://example.com/ns", Body: "hello world"}, }, { ExpectXML: `hello world`, Value: &NamedXMLNSFieldStruct{Ns: "http://example.com/ns", Body: "hello world"}, }, { ExpectXML: `hello world`, Value: &NamedXMLNSFieldStruct{Ns: "", Body: "hello world"}, }, { ExpectXML: `hello world`, Value: &XMLNSFieldStructWithOmitEmpty{Body: "hello world"}, }, { // The xmlns attribute must be ignored because the // element is in the empty namespace, so it's not possible // to set the default namespace to something non-empty. 
ExpectXML: `hello world`, Value: &NamedXMLNSFieldStructWithEmptyNamespace{Ns: "foo", Body: "hello world"}, MarshalOnly: true, }, { ExpectXML: `hello world`, Value: &RecursiveXMLNSFieldStruct{ Ns: "foo", Body: &RecursiveXMLNSFieldStruct{ Text: "hello world", }, }, }, } func TestMarshal(t *testing.T) { for idx, test := range marshalTests { if test.UnmarshalOnly { continue } data, err := Marshal(test.Value) if err != nil { t.Errorf("#%d: marshal(%#v): %s", idx, test.Value, err) continue } if got, want := string(data), test.ExpectXML; got != want { if strings.Contains(want, "\n") { t.Errorf("#%d: marshal(%#v):\nHAVE:\n%s\nWANT:\n%s", idx, test.Value, got, want) } else { t.Errorf("#%d: marshal(%#v):\nhave %#q\nwant %#q", idx, test.Value, got, want) } } } } type AttrParent struct { X string `xml:"X>Y,attr"` } type BadAttr struct { Name []string `xml:"name,attr"` } var marshalErrorTests = []struct { Value interface{} Err string Kind reflect.Kind }{ { Value: make(chan bool), Err: "xml: unsupported type: chan bool", Kind: reflect.Chan, }, { Value: map[string]string{ "question": "What do you get when you multiply six by nine?", "answer": "42", }, Err: "xml: unsupported type: map[string]string", Kind: reflect.Map, }, { Value: map[*Ship]bool{nil: false}, Err: "xml: unsupported type: map[*xml.Ship]bool", Kind: reflect.Map, }, { Value: &Domain{Comment: []byte("f--bar")}, Err: `xml: comments must not contain "--"`, }, // Reject parent chain with attr, never worked; see golang.org/issue/5033. 
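The error table above checks that types with no XML representation are rejected. A short sketch of that behavior against the standard `encoding/xml` package: `Marshal` returns an `*xml.UnsupportedTypeError` for channels and maps, with the error strings asserted in the table.

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// marshalErr returns the error text Marshal produces for v,
// or "" if marshalling unexpectedly succeeds.
func marshalErr(v interface{}) string {
	_, err := xml.Marshal(v)
	if err == nil {
		return ""
	}
	return err.Error()
}

func main() {
	fmt.Println(marshalErr(map[string]string{"answer": "42"}))
	fmt.Println(marshalErr(make(chan bool)))
}
```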
{ Value: &AttrParent{}, Err: `xml: X>Y chain not valid with attr flag`, }, { Value: BadAttr{[]string{"X", "Y"}}, Err: `xml: unsupported type: []string`, }, } var marshalIndentTests = []struct { Value interface{} Prefix string Indent string ExpectXML string }{ { Value: &SecretAgent{ Handle: "007", Identity: "James Bond", Obfuscate: "", }, Prefix: "", Indent: "\t", ExpectXML: fmt.Sprintf("\n\tJames Bond\n"), }, } func TestMarshalErrors(t *testing.T) { for idx, test := range marshalErrorTests { data, err := Marshal(test.Value) if err == nil { t.Errorf("#%d: marshal(%#v) = [success] %q, want error %v", idx, test.Value, data, test.Err) continue } if err.Error() != test.Err { t.Errorf("#%d: marshal(%#v) = [error] %v, want %v", idx, test.Value, err, test.Err) } if test.Kind != reflect.Invalid { if kind := err.(*UnsupportedTypeError).Type.Kind(); kind != test.Kind { t.Errorf("#%d: marshal(%#v) = [error kind] %s, want %s", idx, test.Value, kind, test.Kind) } } } } // Do invertibility testing on the various structures that we test func TestUnmarshal(t *testing.T) { for i, test := range marshalTests { if test.MarshalOnly { continue } if _, ok := test.Value.(*Plain); ok { continue } vt := reflect.TypeOf(test.Value) dest := reflect.New(vt.Elem()).Interface() err := Unmarshal([]byte(test.ExpectXML), dest) switch fix := dest.(type) { case *Feed: fix.Author.InnerXML = "" for i := range fix.Entry { fix.Entry[i].Author.InnerXML = "" } } if err != nil { t.Errorf("#%d: unexpected error: %#v", i, err) } else if got, want := dest, test.Value; !reflect.DeepEqual(got, want) { t.Errorf("#%d: unmarshal(%q):\nhave %#v\nwant %#v", i, test.ExpectXML, got, want) } } } func TestMarshalIndent(t *testing.T) { for i, test := range marshalIndentTests { data, err := MarshalIndent(test.Value, test.Prefix, test.Indent) if err != nil { t.Errorf("#%d: Error: %s", i, err) continue } if got, want := string(data), test.ExpectXML; got != want { t.Errorf("#%d: MarshalIndent:\nGot:%s\nWant:\n%s", i, got, want) 
} } } type limitedBytesWriter struct { w io.Writer remain int // until writes fail } func (lw *limitedBytesWriter) Write(p []byte) (n int, err error) { if lw.remain <= 0 { println("error") return 0, errors.New("write limit hit") } if len(p) > lw.remain { p = p[:lw.remain] n, _ = lw.w.Write(p) lw.remain = 0 return n, errors.New("write limit hit") } n, err = lw.w.Write(p) lw.remain -= n return n, err } func TestMarshalWriteErrors(t *testing.T) { var buf bytes.Buffer const writeCap = 1024 w := &limitedBytesWriter{&buf, writeCap} enc := NewEncoder(w) var err error var i int const n = 4000 for i = 1; i <= n; i++ { err = enc.Encode(&Passenger{ Name: []string{"Alice", "Bob"}, Weight: 5, }) if err != nil { break } } if err == nil { t.Error("expected an error") } if i == n { t.Errorf("expected to fail before the end") } if buf.Len() != writeCap { t.Errorf("buf.Len() = %d; want %d", buf.Len(), writeCap) } } func TestMarshalWriteIOErrors(t *testing.T) { enc := NewEncoder(errWriter{}) expectErr := "unwritable" err := enc.Encode(&Passenger{}) if err == nil || err.Error() != expectErr { t.Errorf("EscapeTest = [error] %v, want %v", err, expectErr) } } func TestMarshalFlush(t *testing.T) { var buf bytes.Buffer enc := NewEncoder(&buf) if err := enc.EncodeToken(CharData("hello world")); err != nil { t.Fatalf("enc.EncodeToken: %v", err) } if buf.Len() > 0 { t.Fatalf("enc.EncodeToken caused actual write: %q", buf.Bytes()) } if err := enc.Flush(); err != nil { t.Fatalf("enc.Flush: %v", err) } if buf.String() != "hello world" { t.Fatalf("after enc.Flush, buf.String() = %q, want %q", buf.String(), "hello world") } } var encodeElementTests = []struct { desc string value interface{} start StartElement expectXML string }{{ desc: "simple string", value: "hello", start: StartElement{ Name: Name{Local: "a"}, }, expectXML: `hello`, }, { desc: "string with added attributes", value: "hello", start: StartElement{ Name: Name{Local: "a"}, Attr: []Attr{{ Name: Name{Local: "x"}, Value: "y", }, { Name: 
Name{Local: "foo"}, Value: "bar", }}, }, expectXML: `hello`, }, { desc: "start element with default name space", value: struct { Foo XMLNameWithNSTag }{ Foo: XMLNameWithNSTag{ Value: "hello", }, }, start: StartElement{ Name: Name{Space: "ns", Local: "a"}, Attr: []Attr{{ Name: Name{Local: "xmlns"}, // "ns" is the name space defined in XMLNameWithNSTag Value: "ns", }}, }, expectXML: `hello`, }, { desc: "start element in name space with different default name space", value: struct { Foo XMLNameWithNSTag }{ Foo: XMLNameWithNSTag{ Value: "hello", }, }, start: StartElement{ Name: Name{Space: "ns2", Local: "a"}, Attr: []Attr{{ Name: Name{Local: "xmlns"}, // "ns" is the name space defined in XMLNameWithNSTag Value: "ns", }}, }, expectXML: `hello`, }, { desc: "XMLMarshaler with start element with default name space", value: &MyMarshalerTest{}, start: StartElement{ Name: Name{Space: "ns2", Local: "a"}, Attr: []Attr{{ Name: Name{Local: "xmlns"}, // "ns" is the name space defined in XMLNameWithNSTag Value: "ns", }}, }, expectXML: `hello world`, }} func TestEncodeElement(t *testing.T) { for idx, test := range encodeElementTests { var buf bytes.Buffer enc := NewEncoder(&buf) err := enc.EncodeElement(test.value, test.start) if err != nil { t.Fatalf("enc.EncodeElement: %v", err) } err = enc.Flush() if err != nil { t.Fatalf("enc.Flush: %v", err) } if got, want := buf.String(), test.expectXML; got != want { t.Errorf("#%d(%s): EncodeElement(%#v, %#v):\nhave %#q\nwant %#q", idx, test.desc, test.value, test.start, got, want) } } } func BenchmarkMarshal(b *testing.B) { b.ReportAllocs() for i := 0; i < b.N; i++ { Marshal(atomValue) } } func BenchmarkUnmarshal(b *testing.B) { b.ReportAllocs() xml := []byte(atomXml) for i := 0; i < b.N; i++ { Unmarshal(xml, &Feed{}) } } // golang.org/issue/6556 func TestStructPointerMarshal(t *testing.T) { type A struct { XMLName string `xml:"a"` B []interface{} } type C struct { XMLName Name Value string `xml:"value"` } a := new(A) a.B = append(a.B, &C{ 
XMLName: Name{Local: "c"}, Value: "x", }) b, err := Marshal(a) if err != nil { t.Fatal(err) } if x := string(b); x != "x" { t.Fatal(x) } var v A err = Unmarshal(b, &v) if err != nil { t.Fatal(err) } } var encodeTokenTests = []struct { desc string toks []Token want string err string }{{ desc: "start element with name space", toks: []Token{ StartElement{Name{"space", "local"}, nil}, }, want: ``, }, { desc: "start element with no name", toks: []Token{ StartElement{Name{"space", ""}, nil}, }, err: "xml: start tag with no name", }, { desc: "end element with no name", toks: []Token{ EndElement{Name{"space", ""}}, }, err: "xml: end tag with no name", }, { desc: "char data", toks: []Token{ CharData("foo"), }, want: `foo`, }, { desc: "char data with escaped chars", toks: []Token{ CharData(" \t\n"), }, want: " \n", }, { desc: "comment", toks: []Token{ Comment("foo"), }, want: ``, }, { desc: "comment with invalid content", toks: []Token{ Comment("foo-->"), }, err: "xml: EncodeToken of Comment containing --> marker", }, { desc: "proc instruction", toks: []Token{ ProcInst{"Target", []byte("Instruction")}, }, want: ``, }, { desc: "proc instruction with empty target", toks: []Token{ ProcInst{"", []byte("Instruction")}, }, err: "xml: EncodeToken of ProcInst with invalid Target", }, { desc: "proc instruction with bad content", toks: []Token{ ProcInst{"", []byte("Instruction?>")}, }, err: "xml: EncodeToken of ProcInst with invalid Target", }, { desc: "directive", toks: []Token{ Directive("foo"), }, want: ``, }, { desc: "more complex directive", toks: []Token{ Directive("DOCTYPE doc [ '> ]"), }, want: `'> ]>`, }, { desc: "directive instruction with bad name", toks: []Token{ Directive("foo>"), }, err: "xml: EncodeToken of Directive containing wrong < or > markers", }, { desc: "end tag without start tag", toks: []Token{ EndElement{Name{"foo", "bar"}}, }, err: "xml: end tag without start tag", }, { desc: "mismatching end tag local name", toks: []Token{ StartElement{Name{"", "foo"}, 
nil}, EndElement{Name{"", "bar"}}, }, err: "xml: end tag does not match start tag ", want: ``, }, { desc: "mismatching end tag namespace", toks: []Token{ StartElement{Name{"space", "foo"}, nil}, EndElement{Name{"another", "foo"}}, }, err: "xml: end tag in namespace another does not match start tag in namespace space", want: ``, }, { desc: "start element with explicit namespace", toks: []Token{ StartElement{Name{"space", "local"}, []Attr{ {Name{"xmlns", "x"}, "space"}, {Name{"space", "foo"}, "value"}, }}, }, want: ``, }, { desc: "start element with explicit namespace and colliding prefix", toks: []Token{ StartElement{Name{"space", "local"}, []Attr{ {Name{"xmlns", "x"}, "space"}, {Name{"space", "foo"}, "value"}, {Name{"x", "bar"}, "other"}, }}, }, want: ``, }, { desc: "start element using previously defined namespace", toks: []Token{ StartElement{Name{"", "local"}, []Attr{ {Name{"xmlns", "x"}, "space"}, }}, StartElement{Name{"space", "foo"}, []Attr{ {Name{"space", "x"}, "y"}, }}, }, want: ``, }, { desc: "nested name space with same prefix", toks: []Token{ StartElement{Name{"", "foo"}, []Attr{ {Name{"xmlns", "x"}, "space1"}, }}, StartElement{Name{"", "foo"}, []Attr{ {Name{"xmlns", "x"}, "space2"}, }}, StartElement{Name{"", "foo"}, []Attr{ {Name{"space1", "a"}, "space1 value"}, {Name{"space2", "b"}, "space2 value"}, }}, EndElement{Name{"", "foo"}}, EndElement{Name{"", "foo"}}, StartElement{Name{"", "foo"}, []Attr{ {Name{"space1", "a"}, "space1 value"}, {Name{"space2", "b"}, "space2 value"}, }}, }, want: ``, }, { desc: "start element defining several prefixes for the same name space", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{"xmlns", "a"}, "space"}, {Name{"xmlns", "b"}, "space"}, {Name{"space", "x"}, "value"}, }}, }, want: ``, }, { desc: "nested element redefines name space", toks: []Token{ StartElement{Name{"", "foo"}, []Attr{ {Name{"xmlns", "x"}, "space"}, }}, StartElement{Name{"space", "foo"}, []Attr{ {Name{"xmlns", "y"}, "space"}, 
{Name{"space", "a"}, "value"}, }}, }, want: ``, }, { desc: "nested element creates alias for default name space", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{"", "xmlns"}, "space"}, }}, StartElement{Name{"space", "foo"}, []Attr{ {Name{"xmlns", "y"}, "space"}, {Name{"space", "a"}, "value"}, }}, }, want: ``, }, { desc: "nested element defines default name space with existing prefix", toks: []Token{ StartElement{Name{"", "foo"}, []Attr{ {Name{"xmlns", "x"}, "space"}, }}, StartElement{Name{"space", "foo"}, []Attr{ {Name{"", "xmlns"}, "space"}, {Name{"space", "a"}, "value"}, }}, }, want: ``, }, { desc: "nested element uses empty attribute name space when default ns defined", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{"", "xmlns"}, "space"}, }}, StartElement{Name{"space", "foo"}, []Attr{ {Name{"", "attr"}, "value"}, }}, }, want: ``, }, { desc: "redefine xmlns", toks: []Token{ StartElement{Name{"", "foo"}, []Attr{ {Name{"foo", "xmlns"}, "space"}, }}, }, err: `xml: cannot redefine xmlns attribute prefix`, }, { desc: "xmlns with explicit name space #1", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{"xml", "xmlns"}, "space"}, }}, }, want: ``, }, { desc: "xmlns with explicit name space #2", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{xmlURL, "xmlns"}, "space"}, }}, }, want: ``, }, { desc: "empty name space declaration is ignored", toks: []Token{ StartElement{Name{"", "foo"}, []Attr{ {Name{"xmlns", "foo"}, ""}, }}, }, want: ``, }, { desc: "attribute with no name is ignored", toks: []Token{ StartElement{Name{"", "foo"}, []Attr{ {Name{"", ""}, "value"}, }}, }, want: ``, }, { desc: "namespace URL with non-valid name", toks: []Token{ StartElement{Name{"/34", "foo"}, []Attr{ {Name{"/34", "x"}, "value"}, }}, }, want: `<_:foo xmlns:_="/34" _:x="value">`, }, { desc: "nested element resets default namespace to empty", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{"", "xmlns"}, "space"}, }}, 
StartElement{Name{"", "foo"}, []Attr{ {Name{"", "xmlns"}, ""}, {Name{"", "x"}, "value"}, {Name{"space", "x"}, "value"}, }}, }, want: ``, }, { desc: "nested element requires empty default name space", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{"", "xmlns"}, "space"}, }}, StartElement{Name{"", "foo"}, nil}, }, want: ``, }, { desc: "attribute uses name space from xmlns", toks: []Token{ StartElement{Name{"some/space", "foo"}, []Attr{ {Name{"", "attr"}, "value"}, {Name{"some/space", "other"}, "other value"}, }}, }, want: ``, }, { desc: "default name space should not be used by attributes", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{"", "xmlns"}, "space"}, {Name{"xmlns", "bar"}, "space"}, {Name{"space", "baz"}, "foo"}, }}, StartElement{Name{"space", "baz"}, nil}, EndElement{Name{"space", "baz"}}, EndElement{Name{"space", "foo"}}, }, want: ``, }, { desc: "default name space not used by attributes, not explicitly defined", toks: []Token{ StartElement{Name{"space", "foo"}, []Attr{ {Name{"", "xmlns"}, "space"}, {Name{"space", "baz"}, "foo"}, }}, StartElement{Name{"space", "baz"}, nil}, EndElement{Name{"space", "baz"}}, EndElement{Name{"space", "foo"}}, }, want: ``, }, { desc: "impossible xmlns declaration", toks: []Token{ StartElement{Name{"", "foo"}, []Attr{ {Name{"", "xmlns"}, "space"}, }}, StartElement{Name{"space", "bar"}, []Attr{ {Name{"space", "attr"}, "value"}, }}, }, want: ``, }} func TestEncodeToken(t *testing.T) { loop: for i, tt := range encodeTokenTests { var buf bytes.Buffer enc := NewEncoder(&buf) var err error for j, tok := range tt.toks { err = enc.EncodeToken(tok) if err != nil && j < len(tt.toks)-1 { t.Errorf("#%d %s token #%d: %v", i, tt.desc, j, err) continue loop } } errorf := func(f string, a ...interface{}) { t.Errorf("#%d %s token #%d:%s", i, tt.desc, len(tt.toks)-1, fmt.Sprintf(f, a...)) } switch { case tt.err != "" && err == nil: errorf(" expected error; got none") continue case tt.err == "" && err != nil: 
errorf(" got error: %v", err) continue case tt.err != "" && err != nil && tt.err != err.Error(): errorf(" error mismatch; got %v, want %v", err, tt.err) continue } if err := enc.Flush(); err != nil { errorf(" %v", err) continue } if got := buf.String(); got != tt.want { errorf("\ngot %v\nwant %v", got, tt.want) continue } } } func TestProcInstEncodeToken(t *testing.T) { var buf bytes.Buffer enc := NewEncoder(&buf) if err := enc.EncodeToken(ProcInst{"xml", []byte("Instruction")}); err != nil { t.Fatalf("enc.EncodeToken: expected to be able to encode xml target ProcInst as first token, %s", err) } if err := enc.EncodeToken(ProcInst{"Target", []byte("Instruction")}); err != nil { t.Fatalf("enc.EncodeToken: expected to be able to add non-xml target ProcInst") } if err := enc.EncodeToken(ProcInst{"xml", []byte("Instruction")}); err == nil { t.Fatalf("enc.EncodeToken: expected to not be allowed to encode xml target ProcInst when not first token") } } func TestDecodeEncode(t *testing.T) { var in, out bytes.Buffer in.WriteString(` `) dec := NewDecoder(&in) enc := NewEncoder(&out) for tok, err := dec.Token(); err == nil; tok, err = dec.Token() { err = enc.EncodeToken(tok) if err != nil { t.Fatalf("enc.EncodeToken: Unable to encode token (%#v), %v", tok, err) } } } // Issue 9796. Used to fail with GORACE="halt_on_error=1" -race. 
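The token-level tests above drive `Encoder.EncodeToken` directly. As a minimal sketch of the same API with the standard `encoding/xml` package (element name invented): tokens are passed by value, and nothing reaches the underlying writer until `Flush` is called.

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

// buildGreeting emits a single element token by token and returns the
// serialized result. StartElement.End() produces the matching end tag.
func buildGreeting(text string) string {
	var buf bytes.Buffer
	enc := xml.NewEncoder(&buf)
	start := xml.StartElement{Name: xml.Name{Local: "greeting"}}
	for _, tok := range []xml.Token{start, xml.CharData(text), start.End()} {
		if err := enc.EncodeToken(tok); err != nil {
			panic(err)
		}
	}
	// EncodeToken buffers output; Flush writes it out.
	if err := enc.Flush(); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(buildGreeting("hello"))
}
```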
func TestRace9796(t *testing.T) {
	type A struct{}
	type B struct {
		C []A `xml:"X>Y"`
	}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			Marshal(B{[]A{A{}}})
			wg.Done()
		}()
	}
	wg.Wait()
}

func TestIsValidDirective(t *testing.T) {
	testOK := []string{
		"<>",
		"< < > >",
		"' '>' >",
		" ]>",
		" '<' ' doc ANY> ]>",
		">>> a < comment --> [ ] >",
	}
	testKO := []string{
		"<",
		">",
		"",
		"< > > < < >",
		" -->",
		"",
		"'",
		"",
	}
	for _, s := range testOK {
		if !isValidDirective(Directive(s)) {
			t.Errorf("Directive %q is expected to be valid", s)
		}
	}
	for _, s := range testKO {
		if isValidDirective(Directive(s)) {
			t.Errorf("Directive %q is expected to be invalid", s)
		}
	}
}

// Issue 11719. EncodeToken used to silently eat tokens with an invalid type.
func TestSimpleUseOfEncodeToken(t *testing.T) {
	var buf bytes.Buffer
	enc := NewEncoder(&buf)
	if err := enc.EncodeToken(&StartElement{Name: Name{"", "object1"}}); err == nil {
		t.Errorf("enc.EncodeToken: pointer type should be rejected")
	}
	if err := enc.EncodeToken(&EndElement{Name: Name{"", "object1"}}); err == nil {
		t.Errorf("enc.EncodeToken: pointer type should be rejected")
	}
	if err := enc.EncodeToken(StartElement{Name: Name{"", "object2"}}); err != nil {
		t.Errorf("enc.EncodeToken: StartElement %s", err)
	}
	if err := enc.EncodeToken(EndElement{Name: Name{"", "object2"}}); err != nil {
		t.Errorf("enc.EncodeToken: EndElement %s", err)
	}
	if err := enc.EncodeToken(Universe{}); err == nil {
		t.Errorf("enc.EncodeToken: invalid type not caught")
	}
	if err := enc.Flush(); err != nil {
		t.Errorf("enc.Flush: %s", err)
	}
	if buf.Len() == 0 {
		t.Errorf("enc.EncodeToken: empty buffer")
	}
	want := "<object2></object2>"
	if buf.String() != want {
		t.Errorf("enc.EncodeToken: expected %q; got %q", want, buf.String())
	}
}

// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package xml import ( "bytes" "encoding" "errors" "fmt" "reflect" "strconv" "strings" ) // BUG(rsc): Mapping between XML elements and data structures is inherently flawed: // an XML element is an order-dependent collection of anonymous // values, while a data structure is an order-independent collection // of named values. // See package json for a textual representation more suitable // to data structures. // Unmarshal parses the XML-encoded data and stores the result in // the value pointed to by v, which must be an arbitrary struct, // slice, or string. Well-formed data that does not fit into v is // discarded. // // Because Unmarshal uses the reflect package, it can only assign // to exported (upper case) fields. Unmarshal uses a case-sensitive // comparison to match XML element names to tag values and struct // field names. // // Unmarshal maps an XML element to a struct using the following rules. // In the rules, the tag of a field refers to the value associated with the // key 'xml' in the struct field's tag (see the example above). // // * If the struct has a field of type []byte or string with tag // ",innerxml", Unmarshal accumulates the raw XML nested inside the // element in that field. The rest of the rules still apply. // // * If the struct has a field named XMLName of type xml.Name, // Unmarshal records the element name in that field. // // * If the XMLName field has an associated tag of the form // "name" or "namespace-URL name", the XML element must have // the given name (and, optionally, name space) or else Unmarshal // returns an error. // // * If the XML element has an attribute whose name matches a // struct field name with an associated tag containing ",attr" or // the explicit name in a struct field tag of the form "name,attr", // Unmarshal records the attribute value in that field. 
// // * If the XML element contains character data, that data is // accumulated in the first struct field that has tag ",chardata". // The struct field may have type []byte or string. // If there is no such field, the character data is discarded. // // * If the XML element contains comments, they are accumulated in // the first struct field that has tag ",comment". The struct // field may have type []byte or string. If there is no such // field, the comments are discarded. // // * If the XML element contains a sub-element whose name matches // the prefix of a tag formatted as "a" or "a>b>c", unmarshal // will descend into the XML structure looking for elements with the // given names, and will map the innermost elements to that struct // field. A tag starting with ">" is equivalent to one starting // with the field name followed by ">". // // * If the XML element contains a sub-element whose name matches // a struct field's XMLName tag and the struct field has no // explicit name tag as per the previous rule, unmarshal maps // the sub-element to that struct field. // // * If the XML element contains a sub-element whose name matches a // field without any mode flags (",attr", ",chardata", etc), Unmarshal // maps the sub-element to that struct field. // // * If the XML element contains a sub-element that hasn't matched any // of the above rules and the struct has a field with tag ",any", // unmarshal maps the sub-element to that struct field. // // * An anonymous struct field is handled as if the fields of its // value were part of the outer struct. // // * A struct field with tag "-" is never unmarshalled into. // // Unmarshal maps an XML element to a string or []byte by saving the // concatenation of that element's character data in the string or // []byte. The saved []byte is never nil. // // Unmarshal maps an attribute value to a string or []byte by saving // the value in the string or slice. 
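The attribute and character-data rules above can be sketched with the standard `encoding/xml` package (the `port` type and element shape are invented for illustration): a `"name,attr"` tag captures an attribute, and a `",chardata"` tag captures the element's character data.

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// port demonstrates the two tag rules: Type comes from the "type" attribute,
// Number from the element's character data.
type port struct {
	Type   string `xml:"type,attr"`
	Number string `xml:",chardata"`
}

// parsePort unmarshals doc and returns the attribute value and chardata.
func parsePort(doc string) (string, string) {
	var p port
	if err := xml.Unmarshal([]byte(doc), &p); err != nil {
		panic(err)
	}
	return p.Type, p.Number
}

func main() {
	typ, num := parsePort(`<port type="ssl">443</port>`)
	fmt.Println(typ, num)
}
```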
// // Unmarshal maps an XML element to a slice by extending the length of // the slice and mapping the element to the newly created value. // // Unmarshal maps an XML element or attribute value to a bool by // setting it to the boolean value represented by the string. // // Unmarshal maps an XML element or attribute value to an integer or // floating-point field by setting the field to the result of // interpreting the string value in decimal. There is no check for // overflow. // // Unmarshal maps an XML element to an xml.Name by recording the // element name. // // Unmarshal maps an XML element to a pointer by setting the pointer // to a freshly allocated value and then mapping the element to that value. // func Unmarshal(data []byte, v interface{}) error { return NewDecoder(bytes.NewReader(data)).Decode(v) } // Decode works like xml.Unmarshal, except it reads the decoder // stream to find the start element. func (d *Decoder) Decode(v interface{}) error { return d.DecodeElement(v, nil) } // DecodeElement works like xml.Unmarshal except that it takes // a pointer to the start XML element to decode into v. // It is useful when a client reads some raw XML tokens itself // but also wants to defer to Unmarshal for some elements. func (d *Decoder) DecodeElement(v interface{}, start *StartElement) error { val := reflect.ValueOf(v) if val.Kind() != reflect.Ptr { return errors.New("non-pointer passed to Unmarshal") } return d.unmarshal(val.Elem(), start) } // An UnmarshalError represents an error in the unmarshalling process. type UnmarshalError string func (e UnmarshalError) Error() string { return string(e) } // Unmarshaler is the interface implemented by objects that can unmarshal // an XML element description of themselves. // // UnmarshalXML decodes a single XML element // beginning with the given start element. // If it returns an error, the outer call to Unmarshal stops and // returns that error. // UnmarshalXML must consume exactly one XML element. 
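The Unmarshaler contract described here can be sketched with the standard `encoding/xml` package. The `Temperature` type and `<temp><value>…</value></temp>` layout are invented for illustration; the pattern is the documented strategy of decoding into an auxiliary value with `DecodeElement`, which also guarantees exactly one element is consumed.

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Temperature implements Unmarshaler by decoding into an auxiliary struct
// and copying the result out.
type Temperature struct{ Celsius float64 }

func (t *Temperature) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
	var aux struct {
		Value float64 `xml:"value"`
	}
	// DecodeElement consumes exactly the element begun by start,
	// satisfying the "consume exactly one XML element" requirement.
	if err := d.DecodeElement(&aux, &start); err != nil {
		return err
	}
	t.Celsius = aux.Value
	return nil
}

// parseTemp unmarshals doc through the custom UnmarshalXML above.
func parseTemp(doc string) float64 {
	var t Temperature
	if err := xml.Unmarshal([]byte(doc), &t); err != nil {
		panic(err)
	}
	return t.Celsius
}

func main() {
	fmt.Println(parseTemp(`<temp><value>21.5</value></temp>`))
}
```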
// One common implementation strategy is to unmarshal into // a separate value with a layout matching the expected XML // using d.DecodeElement, and then to copy the data from // that value into the receiver. // Another common strategy is to use d.Token to process the // XML object one token at a time. // UnmarshalXML may not use d.RawToken. type Unmarshaler interface { UnmarshalXML(d *Decoder, start StartElement) error } // UnmarshalerAttr is the interface implemented by objects that can unmarshal // an XML attribute description of themselves. // // UnmarshalXMLAttr decodes a single XML attribute. // If it returns an error, the outer call to Unmarshal stops and // returns that error. // UnmarshalXMLAttr is used only for struct fields with the // "attr" option in the field tag. type UnmarshalerAttr interface { UnmarshalXMLAttr(attr Attr) error } // receiverType returns the receiver type to use in an expression like "%s.MethodName". func receiverType(val interface{}) string { t := reflect.TypeOf(val) if t.Name() != "" { return t.String() } return "(" + t.String() + ")" } // unmarshalInterface unmarshals a single XML element into val. // start is the opening tag of the element. func (p *Decoder) unmarshalInterface(val Unmarshaler, start *StartElement) error { // Record that decoder must stop at end tag corresponding to start. p.pushEOF() p.unmarshalDepth++ err := val.UnmarshalXML(p, *start) p.unmarshalDepth-- if err != nil { p.popEOF() return err } if !p.popEOF() { return fmt.Errorf("xml: %s.UnmarshalXML did not consume entire <%s> element", receiverType(val), start.Name.Local) } return nil } // unmarshalTextInterface unmarshals a single XML element into val. // The chardata contained in the element (but not its children) // is passed to the text unmarshaler. 
func (p *Decoder) unmarshalTextInterface(val encoding.TextUnmarshaler, start *StartElement) error { var buf []byte depth := 1 for depth > 0 { t, err := p.Token() if err != nil { return err } switch t := t.(type) { case CharData: if depth == 1 { buf = append(buf, t...) } case StartElement: depth++ case EndElement: depth-- } } return val.UnmarshalText(buf) } // unmarshalAttr unmarshals a single XML attribute into val. func (p *Decoder) unmarshalAttr(val reflect.Value, attr Attr) error { if val.Kind() == reflect.Ptr { if val.IsNil() { val.Set(reflect.New(val.Type().Elem())) } val = val.Elem() } if val.CanInterface() && val.Type().Implements(unmarshalerAttrType) { // This is an unmarshaler with a non-pointer receiver, // so it's likely to be incorrect, but we do what we're told. return val.Interface().(UnmarshalerAttr).UnmarshalXMLAttr(attr) } if val.CanAddr() { pv := val.Addr() if pv.CanInterface() && pv.Type().Implements(unmarshalerAttrType) { return pv.Interface().(UnmarshalerAttr).UnmarshalXMLAttr(attr) } } // Not an UnmarshalerAttr; try encoding.TextUnmarshaler. if val.CanInterface() && val.Type().Implements(textUnmarshalerType) { // This is an unmarshaler with a non-pointer receiver, // so it's likely to be incorrect, but we do what we're told. return val.Interface().(encoding.TextUnmarshaler).UnmarshalText([]byte(attr.Value)) } if val.CanAddr() { pv := val.Addr() if pv.CanInterface() && pv.Type().Implements(textUnmarshalerType) { return pv.Interface().(encoding.TextUnmarshaler).UnmarshalText([]byte(attr.Value)) } } copyValue(val, []byte(attr.Value)) return nil } var ( unmarshalerType = reflect.TypeOf((*Unmarshaler)(nil)).Elem() unmarshalerAttrType = reflect.TypeOf((*UnmarshalerAttr)(nil)).Elem() textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem() ) // Unmarshal a single XML element into val. func (p *Decoder) unmarshal(val reflect.Value, start *StartElement) error { // Find start element if we need it. 
if start == nil { for { tok, err := p.Token() if err != nil { return err } if t, ok := tok.(StartElement); ok { start = &t break } } } // Load value from interface, but only if the result will be // usefully addressable. if val.Kind() == reflect.Interface && !val.IsNil() { e := val.Elem() if e.Kind() == reflect.Ptr && !e.IsNil() { val = e } } if val.Kind() == reflect.Ptr { if val.IsNil() { val.Set(reflect.New(val.Type().Elem())) } val = val.Elem() } if val.CanInterface() && val.Type().Implements(unmarshalerType) { // This is an unmarshaler with a non-pointer receiver, // so it's likely to be incorrect, but we do what we're told. return p.unmarshalInterface(val.Interface().(Unmarshaler), start) } if val.CanAddr() { pv := val.Addr() if pv.CanInterface() && pv.Type().Implements(unmarshalerType) { return p.unmarshalInterface(pv.Interface().(Unmarshaler), start) } } if val.CanInterface() && val.Type().Implements(textUnmarshalerType) { return p.unmarshalTextInterface(val.Interface().(encoding.TextUnmarshaler), start) } if val.CanAddr() { pv := val.Addr() if pv.CanInterface() && pv.Type().Implements(textUnmarshalerType) { return p.unmarshalTextInterface(pv.Interface().(encoding.TextUnmarshaler), start) } } var ( data []byte saveData reflect.Value comment []byte saveComment reflect.Value saveXML reflect.Value saveXMLIndex int saveXMLData []byte saveAny reflect.Value sv reflect.Value tinfo *typeInfo err error ) switch v := val; v.Kind() { default: return errors.New("unknown type " + v.Type().String()) case reflect.Interface: // TODO: For now, simply ignore the field. In the near // future we may choose to unmarshal the start // element on it, if not nil. return p.Skip() case reflect.Slice: typ := v.Type() if typ.Elem().Kind() == reflect.Uint8 { // []byte saveData = v break } // Slice of element values. // Grow slice. 
n := v.Len() if n >= v.Cap() { ncap := 2 * n if ncap < 4 { ncap = 4 } new := reflect.MakeSlice(typ, n, ncap) reflect.Copy(new, v) v.Set(new) } v.SetLen(n + 1) // Recur to read element into slice. if err := p.unmarshal(v.Index(n), start); err != nil { v.SetLen(n) return err } return nil case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr, reflect.String: saveData = v case reflect.Struct: typ := v.Type() if typ == nameType { v.Set(reflect.ValueOf(start.Name)) break } sv = v tinfo, err = getTypeInfo(typ) if err != nil { return err } // Validate and assign element name. if tinfo.xmlname != nil { finfo := tinfo.xmlname if finfo.name != "" && finfo.name != start.Name.Local { return UnmarshalError("expected element type <" + finfo.name + "> but have <" + start.Name.Local + ">") } if finfo.xmlns != "" && finfo.xmlns != start.Name.Space { e := "expected element <" + finfo.name + "> in name space " + finfo.xmlns + " but have " if start.Name.Space == "" { e += "no name space" } else { e += start.Name.Space } return UnmarshalError(e) } fv := finfo.value(sv) if _, ok := fv.Interface().(Name); ok { fv.Set(reflect.ValueOf(start.Name)) } } // Assign attributes. // Also, determine whether we need to save character data or comments. for i := range tinfo.fields { finfo := &tinfo.fields[i] switch finfo.flags & fMode { case fAttr: strv := finfo.value(sv) // Look for attribute. 
for _, a := range start.Attr { if a.Name.Local == finfo.name && (finfo.xmlns == "" || finfo.xmlns == a.Name.Space) { if err := p.unmarshalAttr(strv, a); err != nil { return err } break } } case fCharData: if !saveData.IsValid() { saveData = finfo.value(sv) } case fComment: if !saveComment.IsValid() { saveComment = finfo.value(sv) } case fAny, fAny | fElement: if !saveAny.IsValid() { saveAny = finfo.value(sv) } case fInnerXml: if !saveXML.IsValid() { saveXML = finfo.value(sv) if p.saved == nil { saveXMLIndex = 0 p.saved = new(bytes.Buffer) } else { saveXMLIndex = p.savedOffset() } } } } } // Find end element. // Process sub-elements along the way. Loop: for { var savedOffset int if saveXML.IsValid() { savedOffset = p.savedOffset() } tok, err := p.Token() if err != nil { return err } switch t := tok.(type) { case StartElement: consumed := false if sv.IsValid() { consumed, err = p.unmarshalPath(tinfo, sv, nil, &t) if err != nil { return err } if !consumed && saveAny.IsValid() { consumed = true if err := p.unmarshal(saveAny, &t); err != nil { return err } } } if !consumed { if err := p.Skip(); err != nil { return err } } case EndElement: if saveXML.IsValid() { saveXMLData = p.saved.Bytes()[saveXMLIndex:savedOffset] if saveXMLIndex == 0 { p.saved = nil } } break Loop case CharData: if saveData.IsValid() { data = append(data, t...) } case Comment: if saveComment.IsValid() { comment = append(comment, t...) 
} } } if saveData.IsValid() && saveData.CanInterface() && saveData.Type().Implements(textUnmarshalerType) { if err := saveData.Interface().(encoding.TextUnmarshaler).UnmarshalText(data); err != nil { return err } saveData = reflect.Value{} } if saveData.IsValid() && saveData.CanAddr() { pv := saveData.Addr() if pv.CanInterface() && pv.Type().Implements(textUnmarshalerType) { if err := pv.Interface().(encoding.TextUnmarshaler).UnmarshalText(data); err != nil { return err } saveData = reflect.Value{} } } if err := copyValue(saveData, data); err != nil { return err } switch t := saveComment; t.Kind() { case reflect.String: t.SetString(string(comment)) case reflect.Slice: t.Set(reflect.ValueOf(comment)) } switch t := saveXML; t.Kind() { case reflect.String: t.SetString(string(saveXMLData)) case reflect.Slice: t.Set(reflect.ValueOf(saveXMLData)) } return nil } func copyValue(dst reflect.Value, src []byte) (err error) { dst0 := dst if dst.Kind() == reflect.Ptr { if dst.IsNil() { dst.Set(reflect.New(dst.Type().Elem())) } dst = dst.Elem() } // Save accumulated data. switch dst.Kind() { case reflect.Invalid: // Probably a comment. 
default: return errors.New("cannot unmarshal into " + dst0.Type().String()) case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: itmp, err := strconv.ParseInt(string(src), 10, dst.Type().Bits()) if err != nil { return err } dst.SetInt(itmp) case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: utmp, err := strconv.ParseUint(string(src), 10, dst.Type().Bits()) if err != nil { return err } dst.SetUint(utmp) case reflect.Float32, reflect.Float64: ftmp, err := strconv.ParseFloat(string(src), dst.Type().Bits()) if err != nil { return err } dst.SetFloat(ftmp) case reflect.Bool: value, err := strconv.ParseBool(strings.TrimSpace(string(src))) if err != nil { return err } dst.SetBool(value) case reflect.String: dst.SetString(string(src)) case reflect.Slice: if len(src) == 0 { // non-nil to flag presence src = []byte{} } dst.SetBytes(src) } return nil } // unmarshalPath walks down an XML structure looking for wanted // paths, and calls unmarshal on them. // The consumed result tells whether XML elements have been consumed // from the Decoder until start's matching end element, or if it's // still untouched because start is uninteresting for sv's fields. func (p *Decoder) unmarshalPath(tinfo *typeInfo, sv reflect.Value, parents []string, start *StartElement) (consumed bool, err error) { recurse := false Loop: for i := range tinfo.fields { finfo := &tinfo.fields[i] if finfo.flags&fElement == 0 || len(finfo.parents) < len(parents) || finfo.xmlns != "" && finfo.xmlns != start.Name.Space { continue } for j := range parents { if parents[j] != finfo.parents[j] { continue Loop } } if len(finfo.parents) == len(parents) && finfo.name == start.Name.Local { // It's a perfect match, unmarshal the field. return true, p.unmarshal(finfo.value(sv), start) } if len(finfo.parents) > len(parents) && finfo.parents[len(parents)] == start.Name.Local { // It's a prefix for the field. 
Break and recurse // since it's not ok for one field path to be itself // the prefix for another field path. recurse = true // We can reuse the same slice as long as we // don't try to append to it. parents = finfo.parents[:len(parents)+1] break } } if !recurse { // We have no business with this element. return false, nil } // The element is not a perfect match for any field, but one // or more fields have the path to this element as a parent // prefix. Recurse and attempt to match these. for { var tok Token tok, err = p.Token() if err != nil { return true, err } switch t := tok.(type) { case StartElement: consumed2, err := p.unmarshalPath(tinfo, sv, parents, &t) if err != nil { return true, err } if !consumed2 { if err := p.Skip(); err != nil { return true, err } } case EndElement: return true, nil } } } // Skip reads tokens until it has consumed the end element // matching the most recent start element already consumed. // It recurs if it encounters a start element, so it can be used to // skip nested structures. // It returns nil if it finds an end element matching the start // element; otherwise it returns an error describing the problem. func (d *Decoder) Skip() error { for { tok, err := d.Token() if err != nil { return err } switch tok.(type) { case StartElement: if err := d.Skip(); err != nil { return err } case EndElement: return nil } } } lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/typeinfo.go0000644061062106075000000002300712721405224027605 0ustar00stgraberdomain admins00000000000000// Copyright 2011 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package xml import ( "fmt" "reflect" "strings" "sync" ) // typeInfo holds details for the xml representation of a type. type typeInfo struct { xmlname *fieldInfo fields []fieldInfo } // fieldInfo holds details for the xml representation of a single field. 
type fieldInfo struct { idx []int name string xmlns string flags fieldFlags parents []string } type fieldFlags int const ( fElement fieldFlags = 1 << iota fAttr fCharData fInnerXml fComment fAny fOmitEmpty fMode = fElement | fAttr | fCharData | fInnerXml | fComment | fAny ) var tinfoMap = make(map[reflect.Type]*typeInfo) var tinfoLock sync.RWMutex var nameType = reflect.TypeOf(Name{}) // getTypeInfo returns the typeInfo structure with details necessary // for marshalling and unmarshalling typ. func getTypeInfo(typ reflect.Type) (*typeInfo, error) { tinfoLock.RLock() tinfo, ok := tinfoMap[typ] tinfoLock.RUnlock() if ok { return tinfo, nil } tinfo = &typeInfo{} if typ.Kind() == reflect.Struct && typ != nameType { n := typ.NumField() for i := 0; i < n; i++ { f := typ.Field(i) if f.PkgPath != "" || f.Tag.Get("xml") == "-" { continue // Private field } // For embedded structs, embed its fields. if f.Anonymous { t := f.Type if t.Kind() == reflect.Ptr { t = t.Elem() } if t.Kind() == reflect.Struct { inner, err := getTypeInfo(t) if err != nil { return nil, err } if tinfo.xmlname == nil { tinfo.xmlname = inner.xmlname } for _, finfo := range inner.fields { finfo.idx = append([]int{i}, finfo.idx...) if err := addFieldInfo(typ, tinfo, &finfo); err != nil { return nil, err } } continue } } finfo, err := structFieldInfo(typ, &f) if err != nil { return nil, err } if f.Name == "XMLName" { tinfo.xmlname = finfo continue } // Add the field if it doesn't conflict with other fields. if err := addFieldInfo(typ, tinfo, finfo); err != nil { return nil, err } } } tinfoLock.Lock() tinfoMap[typ] = tinfo tinfoLock.Unlock() return tinfo, nil } // structFieldInfo builds and returns a fieldInfo for f. func structFieldInfo(typ reflect.Type, f *reflect.StructField) (*fieldInfo, error) { finfo := &fieldInfo{idx: f.Index} // Split the tag from the xml namespace if necessary. tag := f.Tag.Get("xml") if i := strings.Index(tag, " "); i >= 0 { finfo.xmlns, tag = tag[:i], tag[i+1:] } // Parse flags. 
tokens := strings.Split(tag, ",") if len(tokens) == 1 { finfo.flags = fElement } else { tag = tokens[0] for _, flag := range tokens[1:] { switch flag { case "attr": finfo.flags |= fAttr case "chardata": finfo.flags |= fCharData case "innerxml": finfo.flags |= fInnerXml case "comment": finfo.flags |= fComment case "any": finfo.flags |= fAny case "omitempty": finfo.flags |= fOmitEmpty } } // Validate the flags used. valid := true switch mode := finfo.flags & fMode; mode { case 0: finfo.flags |= fElement case fAttr, fCharData, fInnerXml, fComment, fAny: if f.Name == "XMLName" || tag != "" && mode != fAttr { valid = false } default: // This will also catch multiple modes in a single field. valid = false } if finfo.flags&fMode == fAny { finfo.flags |= fElement } if finfo.flags&fOmitEmpty != 0 && finfo.flags&(fElement|fAttr) == 0 { valid = false } if !valid { return nil, fmt.Errorf("xml: invalid tag in field %s of type %s: %q", f.Name, typ, f.Tag.Get("xml")) } } // Use of xmlns without a name is not allowed. if finfo.xmlns != "" && tag == "" { return nil, fmt.Errorf("xml: namespace without name in field %s of type %s: %q", f.Name, typ, f.Tag.Get("xml")) } if f.Name == "XMLName" { // The XMLName field records the XML element name. Don't // process it as usual because its name should default to // empty rather than to the field name. finfo.name = tag return finfo, nil } if tag == "" { // If the name part of the tag is completely empty, get // default from XMLName of underlying struct if feasible, // or field name otherwise. if xmlname := lookupXMLName(f.Type); xmlname != nil { finfo.xmlns, finfo.name = xmlname.xmlns, xmlname.name } else { finfo.name = f.Name } return finfo, nil } if finfo.xmlns == "" && finfo.flags&fAttr == 0 { // If it's an element no namespace specified, get the default // from the XMLName of enclosing struct if possible. if xmlname := lookupXMLName(typ); xmlname != nil { finfo.xmlns = xmlname.xmlns } } // Prepare field name and parents. 
parents := strings.Split(tag, ">") if parents[0] == "" { parents[0] = f.Name } if parents[len(parents)-1] == "" { return nil, fmt.Errorf("xml: trailing '>' in field %s of type %s", f.Name, typ) } finfo.name = parents[len(parents)-1] if len(parents) > 1 { if (finfo.flags & fElement) == 0 { return nil, fmt.Errorf("xml: %s chain not valid with %s flag", tag, strings.Join(tokens[1:], ",")) } finfo.parents = parents[:len(parents)-1] } // If the field type has an XMLName field, the names must match // so that the behavior of both marshalling and unmarshalling // is straightforward and unambiguous. if finfo.flags&fElement != 0 { ftyp := f.Type xmlname := lookupXMLName(ftyp) if xmlname != nil && xmlname.name != finfo.name { return nil, fmt.Errorf("xml: name %q in tag of %s.%s conflicts with name %q in %s.XMLName", finfo.name, typ, f.Name, xmlname.name, ftyp) } } return finfo, nil } // lookupXMLName returns the fieldInfo for typ's XMLName field // in case it exists and has a valid xml field tag, otherwise // it returns nil. func lookupXMLName(typ reflect.Type) (xmlname *fieldInfo) { for typ.Kind() == reflect.Ptr { typ = typ.Elem() } if typ.Kind() != reflect.Struct { return nil } for i, n := 0, typ.NumField(); i < n; i++ { f := typ.Field(i) if f.Name != "XMLName" { continue } finfo, err := structFieldInfo(typ, &f) if finfo.name != "" && err == nil { return finfo } // Also consider errors as a non-existent field tag // and let getTypeInfo itself report the error. break } return nil } func min(a, b int) int { if a <= b { return a } return b } // addFieldInfo adds finfo to tinfo.fields if there are no // conflicts, or if conflicts arise from previous fields that were // obtained from deeper embedded structures than finfo. In the latter // case, the conflicting entries are dropped. // A conflict occurs when the path (parent + name) to a field is // itself a prefix of another path, or when two paths match exactly. // It is okay for field paths to share a common, shorter prefix. 
func addFieldInfo(typ reflect.Type, tinfo *typeInfo, newf *fieldInfo) error { var conflicts []int Loop: // First, figure all conflicts. Most working code will have none. for i := range tinfo.fields { oldf := &tinfo.fields[i] if oldf.flags&fMode != newf.flags&fMode { continue } if oldf.xmlns != "" && newf.xmlns != "" && oldf.xmlns != newf.xmlns { continue } minl := min(len(newf.parents), len(oldf.parents)) for p := 0; p < minl; p++ { if oldf.parents[p] != newf.parents[p] { continue Loop } } if len(oldf.parents) > len(newf.parents) { if oldf.parents[len(newf.parents)] == newf.name { conflicts = append(conflicts, i) } } else if len(oldf.parents) < len(newf.parents) { if newf.parents[len(oldf.parents)] == oldf.name { conflicts = append(conflicts, i) } } else { if newf.name == oldf.name { conflicts = append(conflicts, i) } } } // Without conflicts, add the new field and return. if conflicts == nil { tinfo.fields = append(tinfo.fields, *newf) return nil } // If any conflict is shallower, ignore the new field. // This matches the Go field resolution on embedding. for _, i := range conflicts { if len(tinfo.fields[i].idx) < len(newf.idx) { return nil } } // Otherwise, if any of them is at the same depth level, it's an error. for _, i := range conflicts { oldf := &tinfo.fields[i] if len(oldf.idx) == len(newf.idx) { f1 := typ.FieldByIndex(oldf.idx) f2 := typ.FieldByIndex(newf.idx) return &TagPathError{typ, f1.Name, f1.Tag.Get("xml"), f2.Name, f2.Tag.Get("xml")} } } // Otherwise, the new field is shallower, and thus takes precedence, // so drop the conflicting fields from tinfo and append the new one. for c := len(conflicts) - 1; c >= 0; c-- { i := conflicts[c] copy(tinfo.fields[i:], tinfo.fields[i+1:]) tinfo.fields = tinfo.fields[:len(tinfo.fields)-1] } tinfo.fields = append(tinfo.fields, *newf) return nil } // A TagPathError represents an error in the unmarshalling process // caused by the use of field tags with conflicting paths. 
type TagPathError struct {
	Struct       reflect.Type
	Field1, Tag1 string
	Field2, Tag2 string
}

func (e *TagPathError) Error() string {
	return fmt.Sprintf("%s field %q with tag %q conflicts with field %q with tag %q", e.Struct, e.Field1, e.Tag1, e.Field2, e.Tag2)
}

// value returns v's field value corresponding to finfo.
// It's equivalent to v.FieldByIndex(finfo.idx), but initializes
// and dereferences pointers as necessary.
func (finfo *fieldInfo) value(v reflect.Value) reflect.Value {
	for i, x := range finfo.idx {
		if i > 0 {
			t := v.Type()
			if t.Kind() == reflect.Ptr && t.Elem().Kind() == reflect.Struct {
				if v.IsNil() {
					v.Set(reflect.New(v.Type().Elem()))
				}
				v = v.Elem()
			}
		}
		v = v.Field(x)
	}
	return v
}
lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/read_test.go0000644061062106075000000005056612721405224027734 0ustar00stgraberdomain admins00000000000000// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package xml

import (
	"bytes"
	"fmt"
	"io"
	"reflect"
	"strings"
	"testing"
	"time"
)

// Stripped down Atom feed data structures.

func TestUnmarshalFeed(t *testing.T) {
	var f Feed
	if err := Unmarshal([]byte(atomFeedString), &f); err != nil {
		t.Fatalf("Unmarshal: %s", err)
	}
	if !reflect.DeepEqual(f, atomFeed) {
		t.Fatalf("have %#v\nwant %#v", f, atomFeed)
	}
}

// hget http://codereview.appspot.com/rss/mine/rsc
const atomFeedString = `
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-us" updated="2009-10-04T01:35:58+00:00"><title>Code Review - My issues</title><link href="http://codereview.appspot.com/" rel="alternate"></link><link href="http://codereview.appspot.com/rss/mine/rsc" rel="self"></link><id>http://codereview.appspot.com/</id><author><name>rietveld&lt;&gt;</name></author><entry><title>rietveld: an attempt at pubsubhubbub
</title><link href="http://codereview.appspot.com/126085" rel="alternate"></link><updated>2009-10-04T01:35:58+00:00</updated><author><name>email-address-removed</name></author><id>urn:md5:134d9179c41f806be79b3a5f7877d19a</id><summary type="html">
  An attempt at adding pubsubhubbub support to Rietveld.
http://code.google.com/p/pubsubhubbub
http://code.google.com/p/rietveld/issues/detail?id=155

The server side of the protocol is trivial:
  1. add a &lt;link rel=&quot;hub&quot; href=&quot;hub-server&quot;&gt; tag to all
     feeds that will be pubsubhubbubbed.
  2. every time one of those feeds changes, tell the hub
     with a simple POST request.

I have tested this by adding debug prints to a local hub
server and checking that the server got the right publish
requests.

I can&#39;t quite get the server to work, but I think the bug
is not in my code.  I think that the server expects to be
able to grab the feed and see the feed&#39;s actual URL in
the link rel=&quot;self&quot;, but the default value for that drops
the :port from the URL, and I cannot for the life of me
figure out how to get the Atom generator deep inside
django not to do that, or even where it is doing that,
or even what code is running to generate the Atom feed.
(I thought I knew but I added some assert False statements
and it kept running!)

Ignoring that particular problem, I would appreciate
feedback on the right way to get the two values at
the top of feeds.py marked NOTE(rsc).


</summary></entry><entry><title>rietveld: correct tab handling
</title><link href="http://codereview.appspot.com/124106" rel="alternate"></link><updated>2009-10-03T23:02:17+00:00</updated><author><name>email-address-removed</name></author><id>urn:md5:0a2a4f19bb815101f0ba2904aed7c35a</id><summary type="html">
  This fixes the buggy tab rendering that can be seen at
http://codereview.appspot.com/116075/diff/1/2

The fundamental problem was that the tab code was
not being told what column the text began in, so it
didn&#39;t know where to put the tab stops.  Another problem
was that some of the code assumed that string byte
offsets were the same as column offsets, which is only
true if there are no tabs.

In the process of fixing this, I cleaned up the arguments
to Fold and ExpandTabs and renamed them Break and
_ExpandTabs so that I could be sure that I found all the
call sites.  I also wanted to verify that ExpandTabs was
not being used from outside intra_region_diff.py.


</summary></entry></feed>
` type Feed struct { XMLName Name `xml:"http://www.w3.org/2005/Atom feed"` Title string `xml:"title"` Id string `xml:"id"` Link []Link `xml:"link"` Updated time.Time `xml:"updated,attr"` Author Person `xml:"author"` Entry []Entry `xml:"entry"` } type Entry struct { Title string `xml:"title"` Id string `xml:"id"` Link []Link `xml:"link"` Updated time.Time `xml:"updated"` Author Person `xml:"author"` Summary Text `xml:"summary"` } type Link struct { Rel string `xml:"rel,attr,omitempty"` Href string `xml:"href,attr"` } type Person struct { Name string `xml:"name"` URI string `xml:"uri"` Email string `xml:"email"` InnerXML string `xml:",innerxml"` } type Text struct { Type string `xml:"type,attr,omitempty"` Body string `xml:",chardata"` } var atomFeed = Feed{ XMLName: Name{"http://www.w3.org/2005/Atom", "feed"}, Title: "Code Review - My issues", Link: []Link{ {Rel: "alternate", Href: "http://codereview.appspot.com/"}, {Rel: "self", Href: "http://codereview.appspot.com/rss/mine/rsc"}, }, Id: "http://codereview.appspot.com/", Updated: ParseTime("2009-10-04T01:35:58+00:00"), Author: Person{ Name: "rietveld<>", InnerXML: "rietveld<>", }, Entry: []Entry{ { Title: "rietveld: an attempt at pubsubhubbub\n", Link: []Link{ {Rel: "alternate", Href: "http://codereview.appspot.com/126085"}, }, Updated: ParseTime("2009-10-04T01:35:58+00:00"), Author: Person{ Name: "email-address-removed", InnerXML: "email-address-removed", }, Id: "urn:md5:134d9179c41f806be79b3a5f7877d19a", Summary: Text{ Type: "html", Body: ` An attempt at adding pubsubhubbub support to Rietveld. http://code.google.com/p/pubsubhubbub http://code.google.com/p/rietveld/issues/detail?id=155 The server side of the protocol is trivial: 1. add a <link rel="hub" href="hub-server"> tag to all feeds that will be pubsubhubbubbed. 2. every time one of those feeds changes, tell the hub with a simple POST request. 
I have tested this by adding debug prints to a local hub server and checking that the server got the right publish requests. I can't quite get the server to work, but I think the bug is not in my code. I think that the server expects to be able to grab the feed and see the feed's actual URL in the link rel="self", but the default value for that drops the :port from the URL, and I cannot for the life of me figure out how to get the Atom generator deep inside django not to do that, or even where it is doing that, or even what code is running to generate the Atom feed. (I thought I knew but I added some assert False statements and it kept running!) Ignoring that particular problem, I would appreciate feedback on the right way to get the two values at the top of feeds.py marked NOTE(rsc). `, }, }, { Title: "rietveld: correct tab handling\n", Link: []Link{ {Rel: "alternate", Href: "http://codereview.appspot.com/124106"}, }, Updated: ParseTime("2009-10-03T23:02:17+00:00"), Author: Person{ Name: "email-address-removed", InnerXML: "email-address-removed", }, Id: "urn:md5:0a2a4f19bb815101f0ba2904aed7c35a", Summary: Text{ Type: "html", Body: ` This fixes the buggy tab rendering that can be seen at http://codereview.appspot.com/116075/diff/1/2 The fundamental problem was that the tab code was not being told what column the text began in, so it didn't know where to put the tab stops. Another problem was that some of the code assumed that string byte offsets were the same as column offsets, which is only true if there are no tabs. In the process of fixing this, I cleaned up the arguments to Fold and ExpandTabs and renamed them Break and _ExpandTabs so that I could be sure that I found all the call sites. I also wanted to verify that ExpandTabs was not being used from outside intra_region_diff.py. 
`,
			},
		},
	},
}

const pathTestString = `
<Result>
    <Before>1</Before>
    <Items>
        <Item1>
            <Value>A</Value>
        </Item1>
        <Item2>
            <Value>B</Value>
        </Item2>
        <Item1>
            <Value>C</Value>
            <Value>D</Value>
        </Item1>
        <_>
            <Value>E</Value>
        </_>
    </Items>
    <After>2</After>
</Result>
`

type PathTestItem struct {
	Value string
}

type PathTestA struct {
	Items         []PathTestItem `xml:">Item1"`
	Before, After string
}

type PathTestB struct {
	Other         []PathTestItem `xml:"Items>Item1"`
	Before, After string
}

type PathTestC struct {
	Values1       []string `xml:"Items>Item1>Value"`
	Values2       []string `xml:"Items>Item2>Value"`
	Before, After string
}

type PathTestSet struct {
	Item1 []PathTestItem
}

type PathTestD struct {
	Other         PathTestSet `xml:"Items"`
	Before, After string
}

type PathTestE struct {
	Underline     string `xml:"Items>_>Value"`
	Before, After string
}

var pathTests = []interface{}{
	&PathTestA{Items: []PathTestItem{{"A"}, {"D"}}, Before: "1", After: "2"},
	&PathTestB{Other: []PathTestItem{{"A"}, {"D"}}, Before: "1", After: "2"},
	&PathTestC{Values1: []string{"A", "C", "D"}, Values2: []string{"B"}, Before: "1", After: "2"},
	&PathTestD{Other: PathTestSet{Item1: []PathTestItem{{"A"}, {"D"}}}, Before: "1", After: "2"},
	&PathTestE{Underline: "E", Before: "1", After: "2"},
}

func TestUnmarshalPaths(t *testing.T) {
	for _, pt := range pathTests {
		v := reflect.New(reflect.TypeOf(pt).Elem()).Interface()
		if err := Unmarshal([]byte(pathTestString), v); err != nil {
			t.Fatalf("Unmarshal: %s", err)
		}
		if !reflect.DeepEqual(v, pt) {
			t.Fatalf("have %#v\nwant %#v", v, pt)
		}
	}
}

type BadPathTestA struct {
	First  string `xml:"items>item1"`
	Other  string `xml:"items>item2"`
	Second string `xml:"items"`
}

type BadPathTestB struct {
	Other  string `xml:"items>item2>value"`
	First  string `xml:"items>item1"`
	Second string `xml:"items>item1>value"`
}

type BadPathTestC struct {
	First  string
	Second string `xml:"First"`
}

type BadPathTestD struct {
	BadPathEmbeddedA
	BadPathEmbeddedB
}

type BadPathEmbeddedA struct {
	First string
}

type BadPathEmbeddedB struct {
	Second string `xml:"First"`
}

var badPathTests = []struct {
	v, e interface{}
}{
	{&BadPathTestA{}, &TagPathError{reflect.TypeOf(BadPathTestA{}), "First", "items>item1", "Second",
		"items"}},
	{&BadPathTestB{}, &TagPathError{reflect.TypeOf(BadPathTestB{}), "First", "items>item1", "Second", "items>item1>value"}},
	{&BadPathTestC{}, &TagPathError{reflect.TypeOf(BadPathTestC{}), "First", "", "Second", "First"}},
	{&BadPathTestD{}, &TagPathError{reflect.TypeOf(BadPathTestD{}), "First", "", "Second", "First"}},
}

func TestUnmarshalBadPaths(t *testing.T) {
	for _, tt := range badPathTests {
		err := Unmarshal([]byte(pathTestString), tt.v)
		if !reflect.DeepEqual(err, tt.e) {
			t.Fatalf("Unmarshal with %#v didn't fail properly:\nhave %#v,\nwant %#v", tt.v, err, tt.e)
		}
	}
}

const OK = "OK"
const withoutNameTypeData = `
<?xml version="1.0" charset="utf-8"?>
<Test3 Attr="OK" />`

type TestThree struct {
	XMLName Name   `xml:"Test3"`
	Attr    string `xml:",attr"`
}

func TestUnmarshalWithoutNameType(t *testing.T) {
	var x TestThree
	if err := Unmarshal([]byte(withoutNameTypeData), &x); err != nil {
		t.Fatalf("Unmarshal: %s", err)
	}
	if x.Attr != OK {
		t.Fatalf("have %v\nwant %v", x.Attr, OK)
	}
}

func TestUnmarshalAttr(t *testing.T) {
	type ParamVal struct {
		Int int `xml:"int,attr"`
	}

	type ParamPtr struct {
		Int *int `xml:"int,attr"`
	}

	type ParamStringPtr struct {
		Int *string `xml:"int,attr"`
	}

	x := []byte(`<Param int="1" />`)

	p1 := &ParamPtr{}
	if err := Unmarshal(x, p1); err != nil {
		t.Fatalf("Unmarshal: %s", err)
	}
	if p1.Int == nil {
		t.Fatalf("Unmarshal failed in to *int field")
	} else if *p1.Int != 1 {
		t.Fatalf("Unmarshal with %s failed:\nhave %#v,\n want %#v", x, p1.Int, 1)
	}

	p2 := &ParamVal{}
	if err := Unmarshal(x, p2); err != nil {
		t.Fatalf("Unmarshal: %s", err)
	}
	if p2.Int != 1 {
		t.Fatalf("Unmarshal with %s failed:\nhave %#v,\n want %#v", x, p2.Int, 1)
	}

	p3 := &ParamStringPtr{}
	if err := Unmarshal(x, p3); err != nil {
		t.Fatalf("Unmarshal: %s", err)
	}
	if p3.Int == nil {
		t.Fatalf("Unmarshal failed in to *string field")
	} else if *p3.Int != "1" {
		t.Fatalf("Unmarshal with %s failed:\nhave %#v,\n want %#v", x, p3.Int, 1)
	}
}

type Tables struct {
	HTable string `xml:"http://www.w3.org/TR/html4/ table"`
	FTable string
`xml:"http://www.w3schools.com/furniture table"`
}

var tables = []struct {
	xml string
	tab Tables
	ns  string
}{
	{
		xml: `<Tables>` +
			`<table xmlns="http://www.w3.org/TR/html4/">hello</table>` +
			`<table xmlns="http://www.w3schools.com/furniture">world</table>` +
			`</Tables>`,
		tab: Tables{"hello", "world"},
	},
	{
		xml: `<Tables>` +
			`<table xmlns="http://www.w3schools.com/furniture">world</table>` +
			`<table xmlns="http://www.w3.org/TR/html4/">hello</table>` +
			`</Tables>`,
		tab: Tables{"hello", "world"},
	},
	{
		xml: `<Tables xmlns:f="http://www.w3schools.com/furniture" xmlns:h="http://www.w3.org/TR/html4/">` +
			`<f:table>world</f:table>` +
			`<h:table>hello</h:table>` +
			`</Tables>`,
		tab: Tables{"hello", "world"},
	},
	{
		xml: `<Tables>` +
			`<table>bogus</table>` +
			`</Tables>`,
		tab: Tables{},
	},
	{
		xml: `<Tables>` +
			`<table>only</table>` +
			`</Tables>`,
		tab: Tables{HTable: "only"},
		ns:  "http://www.w3.org/TR/html4/",
	},
	{
		xml: `<Tables>` +
			`<table>only</table>` +
			`</Tables>`,
		tab: Tables{FTable: "only"},
		ns:  "http://www.w3schools.com/furniture",
	},
	{
		xml: `<Tables>` +
			`<table>only</table>` +
			`</Tables>`,
		tab: Tables{},
		ns:  "something else entirely",
	},
}

func TestUnmarshalNS(t *testing.T) {
	for i, tt := range tables {
		var dst Tables
		var err error
		if tt.ns != "" {
			d := NewDecoder(strings.NewReader(tt.xml))
			d.DefaultSpace = tt.ns
			err = d.Decode(&dst)
		} else {
			err = Unmarshal([]byte(tt.xml), &dst)
		}
		if err != nil {
			t.Errorf("#%d: Unmarshal: %v", i, err)
			continue
		}
		want := tt.tab
		if dst != want {
			t.Errorf("#%d: dst=%+v, want %+v", i, dst, want)
		}
	}
}

func TestRoundTrip(t *testing.T) {
	// From issue 7535
	const s = `<ex:element xmlns:ex="http://example.com/schema"></ex:element>`
	in := bytes.NewBufferString(s)
	for i := 0; i < 10; i++ {
		out := &bytes.Buffer{}
		d := NewDecoder(in)
		e := NewEncoder(out)

		for {
			t, err := d.Token()
			if err == io.EOF {
				break
			}
			if err != nil {
				fmt.Println("failed:", err)
				return
			}
			e.EncodeToken(t)
		}
		e.Flush()
		in = out
	}
	if got := in.String(); got != s {
		t.Errorf("have: %q\nwant: %q\n", got, s)
	}
}

func TestMarshalNS(t *testing.T) {
	dst := Tables{"hello", "world"}
	data, err := Marshal(&dst)
	if err != nil {
		t.Fatalf("Marshal: %v", err)
	}
	want := `<Tables><table xmlns="http://www.w3.org/TR/html4/">hello</table><table xmlns="http://www.w3schools.com/furniture">world</table></Tables>`
	str := string(data)
	if str != want {
		t.Errorf("have: %q\nwant: %q\n", str, want)
	}
}

type TableAttrs struct {
	TAttr TAttr
}

type TAttr struct {
	HTable string `xml:"http://www.w3.org/TR/html4/ table,attr"`
	FTable string `xml:"http://www.w3schools.com/furniture table,attr"`
	Lang   string `xml:"http://www.w3.org/XML/1998/namespace lang,attr,omitempty"`
	Other1 string `xml:"http://golang.org/xml/ other,attr,omitempty"`
	Other2 string `xml:"http://golang.org/xmlfoo/ other,attr,omitempty"`
	Other3 string `xml:"http://golang.org/json/ other,attr,omitempty"`
	Other4 string `xml:"http://golang.org/2/json/ other,attr,omitempty"`
}

var tableAttrs = []struct {
	xml string
	tab TableAttrs
	ns  string
}{
	{
		xml: ``,
		tab: TableAttrs{TAttr{HTable: "hello", FTable: "world"}},
	},
	{
		xml: ``,
		tab: TableAttrs{TAttr{HTable: "hello", FTable: "world"}},
	},
	{
		xml: ``,
		tab: TableAttrs{TAttr{HTable: "hello", FTable: "world"}},
	},
	{
		// Default space does not apply to attribute names.
		xml: ``,
		tab: TableAttrs{TAttr{HTable: "hello", FTable: ""}},
	},
	{
		// Default space does not apply to attribute names.
		xml: ``,
		tab: TableAttrs{TAttr{HTable: "", FTable: "world"}},
	},
	{
		xml: ``,
		tab: TableAttrs{},
	},
	{
		// Default space does not apply to attribute names.
		xml: ``,
		tab: TableAttrs{TAttr{HTable: "hello", FTable: ""}},
		ns:  "http://www.w3schools.com/furniture",
	},
	{
		// Default space does not apply to attribute names.
xml: ``, tab: TableAttrs{TAttr{HTable: "", FTable: "world"}}, ns: "http://www.w3.org/TR/html4/", }, { xml: ``, tab: TableAttrs{}, ns: "something else entirely", }, } func TestUnmarshalNSAttr(t *testing.T) { for i, tt := range tableAttrs { var dst TableAttrs var err error if tt.ns != "" { d := NewDecoder(strings.NewReader(tt.xml)) d.DefaultSpace = tt.ns err = d.Decode(&dst) } else { err = Unmarshal([]byte(tt.xml), &dst) } if err != nil { t.Errorf("#%d: Unmarshal: %v", i, err) continue } want := tt.tab if dst != want { t.Errorf("#%d: dst=%+v, want %+v", i, dst, want) } } } func TestMarshalNSAttr(t *testing.T) { src := TableAttrs{TAttr{"hello", "world", "en_US", "other1", "other2", "other3", "other4"}} data, err := Marshal(&src) if err != nil { t.Fatalf("Marshal: %v", err) } want := `` str := string(data) if str != want { t.Errorf("Marshal:\nhave: %#q\nwant: %#q\n", str, want) } var dst TableAttrs if err := Unmarshal(data, &dst); err != nil { t.Errorf("Unmarshal: %v", err) } if dst != src { t.Errorf("Unmarshal = %q, want %q", dst, src) } } type MyCharData struct { body string } func (m *MyCharData) UnmarshalXML(d *Decoder, start StartElement) error { for { t, err := d.Token() if err == io.EOF { // found end of element break } if err != nil { return err } if char, ok := t.(CharData); ok { m.body += string(char) } } return nil } var _ Unmarshaler = (*MyCharData)(nil) func (m *MyCharData) UnmarshalXMLAttr(attr Attr) error { panic("must not call") } type MyAttr struct { attr string } func (m *MyAttr) UnmarshalXMLAttr(attr Attr) error { m.attr = attr.Value return nil } var _ UnmarshalerAttr = (*MyAttr)(nil) type MyStruct struct { Data *MyCharData Attr *MyAttr `xml:",attr"` Data2 MyCharData Attr2 MyAttr `xml:",attr"` } func TestUnmarshaler(t *testing.T) { xml := ` hello world howdy world ` var m MyStruct if err := Unmarshal([]byte(xml), &m); err != nil { t.Fatal(err) } if m.Data == nil || m.Attr == nil || m.Data.body != "hello world" || m.Attr.attr != "attr1" || 
m.Data2.body != "howdy world" || m.Attr2.attr != "attr2" {
		t.Errorf("m=%#+v\n", m)
	}
}

type Pea struct {
	Cotelydon string
}

type Pod struct {
	Pea interface{} `xml:"Pea"`
}

// https://golang.org/issue/6836
func TestUnmarshalIntoInterface(t *testing.T) {
	pod := new(Pod)
	pod.Pea = new(Pea)
	xml := `<Pod><Pea><Cotelydon>Green stuff</Cotelydon></Pea></Pod>`
	err := Unmarshal([]byte(xml), pod)
	if err != nil {
		t.Fatalf("failed to unmarshal %q: %v", xml, err)
	}
	pea, ok := pod.Pea.(*Pea)
	if !ok {
		t.Fatalf("unmarshalled into wrong type: have %T want *Pea", pod.Pea)
	}
	have, want := pea.Cotelydon, "Green stuff"
	if have != want {
		t.Errorf("failed to unmarshal into interface, have %q want %q", have, want)
	}
}
lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/marshal.go
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package xml

import (
	"bufio"
	"bytes"
	"encoding"
	"fmt"
	"io"
	"reflect"
	"strconv"
	"strings"
)

const (
	// A generic XML header suitable for use with the output of Marshal.
	// This is not automatically added to any output of this package,
	// it is provided as a convenience.
	Header = `<?xml version="1.0" encoding="UTF-8"?>` + "\n"
)

// Marshal returns the XML encoding of v.
//
// Marshal handles an array or slice by marshalling each of the elements.
// Marshal handles a pointer by marshalling the value it points at or, if the
// pointer is nil, by writing nothing. Marshal handles an interface value by
// marshalling the value it contains or, if the interface value is nil, by
// writing nothing. Marshal handles all other data by writing one or more XML
// elements containing the data.
// // The name for the XML elements is taken from, in order of preference: // - the tag on the XMLName field, if the data is a struct // - the value of the XMLName field of type xml.Name // - the tag of the struct field used to obtain the data // - the name of the struct field used to obtain the data // - the name of the marshalled type // // The XML element for a struct contains marshalled elements for each of the // exported fields of the struct, with these exceptions: // - the XMLName field, described above, is omitted. // - a field with tag "-" is omitted. // - a field with tag "name,attr" becomes an attribute with // the given name in the XML element. // - a field with tag ",attr" becomes an attribute with the // field name in the XML element. // - a field with tag ",chardata" is written as character data, // not as an XML element. // - a field with tag ",innerxml" is written verbatim, not subject // to the usual marshalling procedure. // - a field with tag ",comment" is written as an XML comment, not // subject to the usual marshalling procedure. It must not contain // the "--" string within it. // - a field with a tag including the "omitempty" option is omitted // if the field value is empty. The empty values are false, 0, any // nil pointer or interface value, and any array, slice, map, or // string of length zero. // - an anonymous struct field is handled as if the fields of its // value were part of the outer struct. // // If a field uses a tag "a>b>c", then the element c will be nested inside // parent elements a and b. Fields that appear next to each other that name // the same parent will be enclosed in one XML element. // // See MarshalIndent for an example. // // Marshal will return an error if asked to marshal a channel, function, or map. 
func Marshal(v interface{}) ([]byte, error) { var b bytes.Buffer if err := NewEncoder(&b).Encode(v); err != nil { return nil, err } return b.Bytes(), nil } // Marshaler is the interface implemented by objects that can marshal // themselves into valid XML elements. // // MarshalXML encodes the receiver as zero or more XML elements. // By convention, arrays or slices are typically encoded as a sequence // of elements, one per entry. // Using start as the element tag is not required, but doing so // will enable Unmarshal to match the XML elements to the correct // struct field. // One common implementation strategy is to construct a separate // value with a layout corresponding to the desired XML and then // to encode it using e.EncodeElement. // Another common strategy is to use repeated calls to e.EncodeToken // to generate the XML output one token at a time. // The sequence of encoded tokens must make up zero or more valid // XML elements. type Marshaler interface { MarshalXML(e *Encoder, start StartElement) error } // MarshalerAttr is the interface implemented by objects that can marshal // themselves into valid XML attributes. // // MarshalXMLAttr returns an XML attribute with the encoded value of the receiver. // Using name as the attribute name is not required, but doing so // will enable Unmarshal to match the attribute to the correct // struct field. // If MarshalXMLAttr returns the zero attribute Attr{}, no attribute // will be generated in the output. // MarshalXMLAttr is used only for struct fields with the // "attr" option in the field tag. type MarshalerAttr interface { MarshalXMLAttr(name Name) (Attr, error) } // MarshalIndent works like Marshal, but each XML element begins on a new // indented line that starts with prefix and is followed by one or more // copies of indent according to the nesting depth. 
func MarshalIndent(v interface{}, prefix, indent string) ([]byte, error) {
	var b bytes.Buffer
	enc := NewEncoder(&b)
	enc.Indent(prefix, indent)
	if err := enc.Encode(v); err != nil {
		return nil, err
	}
	return b.Bytes(), nil
}

// An Encoder writes XML data to an output stream.
type Encoder struct {
	p printer
}

// NewEncoder returns a new encoder that writes to w.
func NewEncoder(w io.Writer) *Encoder {
	e := &Encoder{printer{Writer: bufio.NewWriter(w)}}
	e.p.encoder = e
	return e
}

// Indent sets the encoder to generate XML in which each element
// begins on a new indented line that starts with prefix and is followed by
// one or more copies of indent according to the nesting depth.
func (enc *Encoder) Indent(prefix, indent string) {
	enc.p.prefix = prefix
	enc.p.indent = indent
}

// Encode writes the XML encoding of v to the stream.
//
// See the documentation for Marshal for details about the conversion
// of Go values to XML.
//
// Encode calls Flush before returning.
func (enc *Encoder) Encode(v interface{}) error {
	err := enc.p.marshalValue(reflect.ValueOf(v), nil, nil)
	if err != nil {
		return err
	}
	return enc.p.Flush()
}

// EncodeElement writes the XML encoding of v to the stream,
// using start as the outermost tag in the encoding.
//
// See the documentation for Marshal for details about the conversion
// of Go values to XML.
//
// EncodeElement calls Flush before returning.
func (enc *Encoder) EncodeElement(v interface{}, start StartElement) error {
	err := enc.p.marshalValue(reflect.ValueOf(v), nil, &start)
	if err != nil {
		return err
	}
	return enc.p.Flush()
}

var (
	begComment   = []byte("<!--")
	endComment   = []byte("-->")
	endProcInst  = []byte("?>")
	endDirective = []byte(">")
)

// EncodeToken writes the given XML token to the stream.
// It returns an error if StartElement and EndElement tokens are not
// properly matched.
//
// EncodeToken does not call Flush, because usually it is part of a
// larger operation such as Encode or EncodeElement (or a custom
// Marshaler's MarshalXML invoked during those), and those will call
// Flush when finished. Callers that create an Encoder and then invoke
// EncodeToken directly, without using Encode or EncodeElement, need to
// call Flush when finished to ensure that the XML is written to the
// underlying writer.
//
// EncodeToken allows writing a ProcInst with Target set to "xml" only
// as the first token in the stream.
//
// When encoding a StartElement holding an XML namespace prefix
// declaration for a prefix that is not already declared, contained
// elements (including the StartElement itself) will use the declared
// prefix when encoding names with matching namespace URIs.
func (enc *Encoder) EncodeToken(t Token) error {
	p := &enc.p
	switch t := t.(type) {
	case StartElement:
		if err := p.writeStart(&t); err != nil {
			return err
		}
	case EndElement:
		if err := p.writeEnd(t.Name); err != nil {
			return err
		}
	case CharData:
		escapeText(p, t, false)
	case Comment:
		if bytes.Contains(t, endComment) {
			return fmt.Errorf("xml: EncodeToken of Comment containing --> marker")
		}
		p.WriteString("<!--")
		p.Write(t)
		p.WriteString("-->")
		return p.cachedWriteError()
	case ProcInst:
		// First token to be encoded which is also a ProcInst with target of xml
		// is the xml declaration. The only ProcInst where target of xml is allowed.
		if t.Target == "xml" && p.Buffered() != 0 {
			return fmt.Errorf("xml: EncodeToken of ProcInst xml target only valid for xml declaration, first token encoded")
		}
		if !isNameString(t.Target) {
			return fmt.Errorf("xml: EncodeToken of ProcInst with invalid Target")
		}
		if bytes.Contains(t.Inst, endProcInst) {
			return fmt.Errorf("xml: EncodeToken of ProcInst containing ?> marker")
		}
		p.WriteString("<?")
		p.WriteString(t.Target)
		if len(t.Inst) > 0 {
			p.WriteByte(' ')
			p.Write(t.Inst)
		}
		p.WriteString("?>")
	case Directive:
		if !isValidDirective(t) {
			return fmt.Errorf("xml: EncodeToken of Directive containing wrong < or > markers")
		}
		p.WriteString("<!")
		p.Write(t)
		p.WriteString(">")
	default:
		return fmt.Errorf("xml: EncodeToken of invalid token type")
	}
	return p.cachedWriteError()
}

// isValidDirective reports whether dir is a valid directive text,
// meaning angle brackets are matched, ignoring comments and strings.
func isValidDirective(dir Directive) bool {
	var (
		depth     int
		inquote   uint8
		incomment bool
	)
	for i, c := range dir {
		switch {
		case incomment:
			if c == '>' {
				if n := 1 + i - len(endComment); n >= 0 && bytes.Equal(dir[n:i+1], endComment) {
					incomment = false
				}
			}
			// Just ignore anything in comment
		case inquote != 0:
			if c == inquote {
				inquote = 0
			}
			// Just ignore anything within quotes
		case c == '\'' || c == '"':
			inquote = c
		case c == '<':
			if i+len(begComment) < len(dir) && bytes.Equal(dir[i:i+len(begComment)], begComment) {
				incomment = true
			} else {
				depth++
			}
		case c == '>':
			if depth == 0 {
				return false
			}
			depth--
		}
	}
	return depth == 0 && inquote == 0 && !incomment
}

// Flush flushes any buffered XML to the underlying writer.
// See the EncodeToken documentation for details about when it is necessary.
func (enc *Encoder) Flush() error { return enc.p.Flush() } type printer struct { *bufio.Writer encoder *Encoder seq int indent string prefix string depth int indentedIn bool putNewline bool defaultNS string attrNS map[string]string // map prefix -> name space attrPrefix map[string]string // map name space -> prefix prefixes []printerPrefix tags []Name } // printerPrefix holds a namespace undo record. // When an element is popped, the prefix record // is set back to the recorded URL. The empty // prefix records the URL for the default name space. // // The start of an element is recorded with an element // that has mark=true. type printerPrefix struct { prefix string url string mark bool } func (p *printer) prefixForNS(url string, isAttr bool) string { // The "http://www.w3.org/XML/1998/namespace" name space is predefined as "xml" // and must be referred to that way. // (The "http://www.w3.org/2000/xmlns/" name space is also predefined as "xmlns", // but users should not be trying to use that one directly - that's our job.) if url == xmlURL { return "xml" } if !isAttr && url == p.defaultNS { // We can use the default name space. return "" } return p.attrPrefix[url] } // defineNS pushes any namespace definition found in the given attribute. // If ignoreNonEmptyDefault is true, an xmlns="nonempty" // attribute will be ignored. func (p *printer) defineNS(attr Attr, ignoreNonEmptyDefault bool) error { var prefix string if attr.Name.Local == "xmlns" { if attr.Name.Space != "" && attr.Name.Space != "xml" && attr.Name.Space != xmlURL { return fmt.Errorf("xml: cannot redefine xmlns attribute prefix") } } else if attr.Name.Space == "xmlns" && attr.Name.Local != "" { prefix = attr.Name.Local if attr.Value == "" { // Technically, an empty XML namespace is allowed for an attribute. // From http://www.w3.org/TR/xml-names11/#scoping-defaulting: // // The attribute value in a namespace declaration for a prefix may be // empty. 
This has the effect, within the scope of the declaration, of removing // any association of the prefix with a namespace name. // // However our namespace prefixes here are used only as hints. There's // no need to respect the removal of a namespace prefix, so we ignore it. return nil } } else { // Ignore: it's not a namespace definition return nil } if prefix == "" { if attr.Value == p.defaultNS { // No need for redefinition. return nil } if attr.Value != "" && ignoreNonEmptyDefault { // We have an xmlns="..." value but // it can't define a name space in this context, // probably because the element has an empty // name space. In this case, we just ignore // the name space declaration. return nil } } else if _, ok := p.attrPrefix[attr.Value]; ok { // There's already a prefix for the given name space, // so use that. This prevents us from // having two prefixes for the same name space // so attrNS and attrPrefix can remain bijective. return nil } p.pushPrefix(prefix, attr.Value) return nil } // createNSPrefix creates a name space prefix attribute // to use for the given name space, defining a new prefix // if necessary. // If isAttr is true, the prefix is to be created for an attribute // prefix, which means that the default name space cannot // be used. func (p *printer) createNSPrefix(url string, isAttr bool) { if _, ok := p.attrPrefix[url]; ok { // We already have a prefix for the given URL. return } switch { case !isAttr && url == p.defaultNS: // We can use the default name space. return case url == "": // The only way we can encode names in the empty // name space is by using the default name space, // so we must use that. if p.defaultNS != "" { // The default namespace is non-empty, so we // need to set it to empty. p.pushPrefix("", "") } return case url == xmlURL: return } // TODO If the URL is an existing prefix, we could // use it as is. 
That would enable the // marshaling of elements that had been unmarshaled // and with a name space prefix that was not found. // although technically it would be incorrect. // Pick a name. We try to use the final element of the path // but fall back to _. prefix := strings.TrimRight(url, "/") if i := strings.LastIndex(prefix, "/"); i >= 0 { prefix = prefix[i+1:] } if prefix == "" || !isName([]byte(prefix)) || strings.Contains(prefix, ":") { prefix = "_" } if strings.HasPrefix(prefix, "xml") { // xmlanything is reserved. prefix = "_" + prefix } if p.attrNS[prefix] != "" { // Name is taken. Find a better one. for p.seq++; ; p.seq++ { if id := prefix + "_" + strconv.Itoa(p.seq); p.attrNS[id] == "" { prefix = id break } } } p.pushPrefix(prefix, url) } // writeNamespaces writes xmlns attributes for all the // namespace prefixes that have been defined in // the current element. func (p *printer) writeNamespaces() { for i := len(p.prefixes) - 1; i >= 0; i-- { prefix := p.prefixes[i] if prefix.mark { return } p.WriteString(" ") if prefix.prefix == "" { // Default name space. p.WriteString(`xmlns="`) } else { p.WriteString("xmlns:") p.WriteString(prefix.prefix) p.WriteString(`="`) } EscapeText(p, []byte(p.nsForPrefix(prefix.prefix))) p.WriteString(`"`) } } // pushPrefix pushes a new prefix on the prefix stack // without checking to see if it is already defined. func (p *printer) pushPrefix(prefix, url string) { p.prefixes = append(p.prefixes, printerPrefix{ prefix: prefix, url: p.nsForPrefix(prefix), }) p.setAttrPrefix(prefix, url) } // nsForPrefix returns the name space for the given // prefix. Note that this is not valid for the // empty attribute prefix, which always has an empty // name space. func (p *printer) nsForPrefix(prefix string) string { if prefix == "" { return p.defaultNS } return p.attrNS[prefix] } // markPrefix marks the start of an element on the prefix // stack. 
func (p *printer) markPrefix() { p.prefixes = append(p.prefixes, printerPrefix{ mark: true, }) } // popPrefix pops all defined prefixes for the current // element. func (p *printer) popPrefix() { for len(p.prefixes) > 0 { prefix := p.prefixes[len(p.prefixes)-1] p.prefixes = p.prefixes[:len(p.prefixes)-1] if prefix.mark { break } p.setAttrPrefix(prefix.prefix, prefix.url) } } // setAttrPrefix sets an attribute name space prefix. // If url is empty, the attribute is removed. // If prefix is empty, the default name space is set. func (p *printer) setAttrPrefix(prefix, url string) { if prefix == "" { p.defaultNS = url return } if url == "" { delete(p.attrPrefix, p.attrNS[prefix]) delete(p.attrNS, prefix) return } if p.attrPrefix == nil { // Need to define a new name space. p.attrPrefix = make(map[string]string) p.attrNS = make(map[string]string) } // Remove any old prefix value. This is OK because we maintain a // strict one-to-one mapping between prefix and URL (see // defineNS) delete(p.attrPrefix, p.attrNS[prefix]) p.attrPrefix[url] = prefix p.attrNS[prefix] = url } var ( marshalerType = reflect.TypeOf((*Marshaler)(nil)).Elem() marshalerAttrType = reflect.TypeOf((*MarshalerAttr)(nil)).Elem() textMarshalerType = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem() ) // marshalValue writes one or more XML elements representing val. // If val was obtained from a struct field, finfo must have its details. func (p *printer) marshalValue(val reflect.Value, finfo *fieldInfo, startTemplate *StartElement) error { if startTemplate != nil && startTemplate.Name.Local == "" { return fmt.Errorf("xml: EncodeElement of StartElement with missing name") } if !val.IsValid() { return nil } if finfo != nil && finfo.flags&fOmitEmpty != 0 && isEmptyValue(val) { return nil } // Drill into interfaces and pointers. // This can turn into an infinite loop given a cyclic chain, // but it matches the Go 1 behavior. 
for val.Kind() == reflect.Interface || val.Kind() == reflect.Ptr { if val.IsNil() { return nil } val = val.Elem() } kind := val.Kind() typ := val.Type() // Check for marshaler. if val.CanInterface() && typ.Implements(marshalerType) { return p.marshalInterface(val.Interface().(Marshaler), p.defaultStart(typ, finfo, startTemplate)) } if val.CanAddr() { pv := val.Addr() if pv.CanInterface() && pv.Type().Implements(marshalerType) { return p.marshalInterface(pv.Interface().(Marshaler), p.defaultStart(pv.Type(), finfo, startTemplate)) } } // Check for text marshaler. if val.CanInterface() && typ.Implements(textMarshalerType) { return p.marshalTextInterface(val.Interface().(encoding.TextMarshaler), p.defaultStart(typ, finfo, startTemplate)) } if val.CanAddr() { pv := val.Addr() if pv.CanInterface() && pv.Type().Implements(textMarshalerType) { return p.marshalTextInterface(pv.Interface().(encoding.TextMarshaler), p.defaultStart(pv.Type(), finfo, startTemplate)) } } // Slices and arrays iterate over the elements. They do not have an enclosing tag. if (kind == reflect.Slice || kind == reflect.Array) && typ.Elem().Kind() != reflect.Uint8 { for i, n := 0, val.Len(); i < n; i++ { if err := p.marshalValue(val.Index(i), finfo, startTemplate); err != nil { return err } } return nil } tinfo, err := getTypeInfo(typ) if err != nil { return err } // Create start element. // Precedence for the XML element name is: // 0. startTemplate // 1. XMLName field in underlying struct; // 2. field name/tag in the struct field; and // 3. type name var start StartElement // explicitNS records whether the element's name space has been // explicitly set (for example an XMLName field). explicitNS := false if startTemplate != nil { start.Name = startTemplate.Name explicitNS = true start.Attr = append(start.Attr, startTemplate.Attr...) 
} else if tinfo.xmlname != nil { xmlname := tinfo.xmlname if xmlname.name != "" { start.Name.Space, start.Name.Local = xmlname.xmlns, xmlname.name } else if v, ok := xmlname.value(val).Interface().(Name); ok && v.Local != "" { start.Name = v } explicitNS = true } if start.Name.Local == "" && finfo != nil { start.Name.Local = finfo.name if finfo.xmlns != "" { start.Name.Space = finfo.xmlns explicitNS = true } } if start.Name.Local == "" { name := typ.Name() if name == "" { return &UnsupportedTypeError{typ} } start.Name.Local = name } // defaultNS records the default name space as set by a xmlns="..." // attribute. We don't set p.defaultNS because we want to let // the attribute writing code (in p.defineNS) be solely responsible // for maintaining that. defaultNS := p.defaultNS // Attributes for i := range tinfo.fields { finfo := &tinfo.fields[i] if finfo.flags&fAttr == 0 { continue } attr, err := p.fieldAttr(finfo, val) if err != nil { return err } if attr.Name.Local == "" { continue } start.Attr = append(start.Attr, attr) if attr.Name.Space == "" && attr.Name.Local == "xmlns" { defaultNS = attr.Value } } if !explicitNS { // Historic behavior: elements use the default name space // they are contained in by default. start.Name.Space = defaultNS } // Historic behaviour: an element that's in a namespace sets // the default namespace for all elements contained within it. start.setDefaultNamespace() if err := p.writeStart(&start); err != nil { return err } if val.Kind() == reflect.Struct { err = p.marshalStruct(tinfo, val) } else { s, b, err1 := p.marshalSimple(typ, val) if err1 != nil { err = err1 } else if b != nil { EscapeText(p, b) } else { p.EscapeString(s) } } if err != nil { return err } if err := p.writeEnd(start.Name); err != nil { return err } return p.cachedWriteError() } // fieldAttr returns the attribute of the given field. // If the returned attribute has an empty Name.Local, // it should not be used. // The given value holds the value containing the field. 
func (p *printer) fieldAttr(finfo *fieldInfo, val reflect.Value) (Attr, error) { fv := finfo.value(val) name := Name{Space: finfo.xmlns, Local: finfo.name} if finfo.flags&fOmitEmpty != 0 && isEmptyValue(fv) { return Attr{}, nil } if fv.Kind() == reflect.Interface && fv.IsNil() { return Attr{}, nil } if fv.CanInterface() && fv.Type().Implements(marshalerAttrType) { attr, err := fv.Interface().(MarshalerAttr).MarshalXMLAttr(name) return attr, err } if fv.CanAddr() { pv := fv.Addr() if pv.CanInterface() && pv.Type().Implements(marshalerAttrType) { attr, err := pv.Interface().(MarshalerAttr).MarshalXMLAttr(name) return attr, err } } if fv.CanInterface() && fv.Type().Implements(textMarshalerType) { text, err := fv.Interface().(encoding.TextMarshaler).MarshalText() if err != nil { return Attr{}, err } return Attr{name, string(text)}, nil } if fv.CanAddr() { pv := fv.Addr() if pv.CanInterface() && pv.Type().Implements(textMarshalerType) { text, err := pv.Interface().(encoding.TextMarshaler).MarshalText() if err != nil { return Attr{}, err } return Attr{name, string(text)}, nil } } // Dereference or skip nil pointer, interface values. switch fv.Kind() { case reflect.Ptr, reflect.Interface: if fv.IsNil() { return Attr{}, nil } fv = fv.Elem() } s, b, err := p.marshalSimple(fv.Type(), fv) if err != nil { return Attr{}, err } if b != nil { s = string(b) } return Attr{name, s}, nil } // defaultStart returns the default start element to use, // given the reflect type, field info, and start template. func (p *printer) defaultStart(typ reflect.Type, finfo *fieldInfo, startTemplate *StartElement) StartElement { var start StartElement // Precedence for the XML element name is as above, // except that we do not look inside structs for the first field. if startTemplate != nil { start.Name = startTemplate.Name start.Attr = append(start.Attr, startTemplate.Attr...) 
} else if finfo != nil && finfo.name != "" { start.Name.Local = finfo.name start.Name.Space = finfo.xmlns } else if typ.Name() != "" { start.Name.Local = typ.Name() } else { // Must be a pointer to a named type, // since it has the Marshaler methods. start.Name.Local = typ.Elem().Name() } // Historic behaviour: elements use the name space of // the element they are contained in by default. if start.Name.Space == "" { start.Name.Space = p.defaultNS } start.setDefaultNamespace() return start } // marshalInterface marshals a Marshaler interface value. func (p *printer) marshalInterface(val Marshaler, start StartElement) error { // Push a marker onto the tag stack so that MarshalXML // cannot close the XML tags that it did not open. p.tags = append(p.tags, Name{}) n := len(p.tags) err := val.MarshalXML(p.encoder, start) if err != nil { return err } // Make sure MarshalXML closed all its tags. p.tags[n-1] is the mark. if len(p.tags) > n { return fmt.Errorf("xml: %s.MarshalXML wrote invalid XML: <%s> not closed", receiverType(val), p.tags[len(p.tags)-1].Local) } p.tags = p.tags[:n-1] return nil } // marshalTextInterface marshals a TextMarshaler interface value. func (p *printer) marshalTextInterface(val encoding.TextMarshaler, start StartElement) error { if err := p.writeStart(&start); err != nil { return err } text, err := val.MarshalText() if err != nil { return err } EscapeText(p, text) return p.writeEnd(start.Name) } // writeStart writes the given start element. func (p *printer) writeStart(start *StartElement) error { if start.Name.Local == "" { return fmt.Errorf("xml: start tag with no name") } p.tags = append(p.tags, start.Name) p.markPrefix() // Define any name spaces explicitly declared in the attributes. // We do this as a separate pass so that explicitly declared prefixes // will take precedence over implicitly declared prefixes // regardless of the order of the attributes. 
ignoreNonEmptyDefault := start.Name.Space == "" for _, attr := range start.Attr { if err := p.defineNS(attr, ignoreNonEmptyDefault); err != nil { return err } } // Define any new name spaces implied by the attributes. for _, attr := range start.Attr { name := attr.Name // From http://www.w3.org/TR/xml-names11/#defaulting // "Default namespace declarations do not apply directly // to attribute names; the interpretation of unprefixed // attributes is determined by the element on which they // appear." // This means we don't need to create a new namespace // when an attribute name space is empty. if name.Space != "" && !name.isNamespace() { p.createNSPrefix(name.Space, true) } } p.createNSPrefix(start.Name.Space, false) p.writeIndent(1) p.WriteByte('<') p.writeName(start.Name, false) p.writeNamespaces() for _, attr := range start.Attr { name := attr.Name if name.Local == "" || name.isNamespace() { // Namespaces have already been written by writeNamespaces above. continue } p.WriteByte(' ') p.writeName(name, true) p.WriteString(`="`) p.EscapeString(attr.Value) p.WriteByte('"') } p.WriteByte('>') return nil } // writeName writes the given name. It assumes // that p.createNSPrefix(name) has already been called. 
func (p *printer) writeName(name Name, isAttr bool) {
	if prefix := p.prefixForNS(name.Space, isAttr); prefix != "" {
		p.WriteString(prefix)
		p.WriteByte(':')
	}
	p.WriteString(name.Local)
}

func (p *printer) writeEnd(name Name) error {
	if name.Local == "" {
		return fmt.Errorf("xml: end tag with no name")
	}
	if len(p.tags) == 0 || p.tags[len(p.tags)-1].Local == "" {
		return fmt.Errorf("xml: end tag </%s> without start tag", name.Local)
	}
	if top := p.tags[len(p.tags)-1]; top != name {
		if top.Local != name.Local {
			return fmt.Errorf("xml: end tag </%s> does not match start tag <%s>", name.Local, top.Local)
		}
		return fmt.Errorf("xml: end tag </%s> in namespace %s does not match start tag <%s> in namespace %s", name.Local, name.Space, top.Local, top.Space)
	}
	p.tags = p.tags[:len(p.tags)-1]

	p.writeIndent(-1)
	p.WriteByte('<')
	p.WriteByte('/')
	p.writeName(name, false)
	p.WriteByte('>')
	p.popPrefix()
	return nil
}

func (p *printer) marshalSimple(typ reflect.Type, val reflect.Value) (string, []byte, error) {
	switch val.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return strconv.FormatInt(val.Int(), 10), nil, nil
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		return strconv.FormatUint(val.Uint(), 10), nil, nil
	case reflect.Float32, reflect.Float64:
		return strconv.FormatFloat(val.Float(), 'g', -1, val.Type().Bits()), nil, nil
	case reflect.String:
		return val.String(), nil, nil
	case reflect.Bool:
		return strconv.FormatBool(val.Bool()), nil, nil
	case reflect.Array:
		if typ.Elem().Kind() != reflect.Uint8 {
			break
		}
		// [...]byte
		var bytes []byte
		if val.CanAddr() {
			bytes = val.Slice(0, val.Len()).Bytes()
		} else {
			bytes = make([]byte, val.Len())
			reflect.Copy(reflect.ValueOf(bytes), val)
		}
		return "", bytes, nil
	case reflect.Slice:
		if typ.Elem().Kind() != reflect.Uint8 {
			break
		}
		// []byte
		return "", val.Bytes(), nil
	}
	return "", nil, &UnsupportedTypeError{typ}
}

var ddBytes = []byte("--")

func (p *printer) marshalStruct(tinfo
*typeInfo, val reflect.Value) error { s := parentStack{p: p} for i := range tinfo.fields { finfo := &tinfo.fields[i] if finfo.flags&fAttr != 0 { continue } vf := finfo.value(val) // Dereference or skip nil pointer, interface values. switch vf.Kind() { case reflect.Ptr, reflect.Interface: if !vf.IsNil() { vf = vf.Elem() } } switch finfo.flags & fMode { case fCharData: if err := s.setParents(&noField, reflect.Value{}); err != nil { return err } if vf.CanInterface() && vf.Type().Implements(textMarshalerType) { data, err := vf.Interface().(encoding.TextMarshaler).MarshalText() if err != nil { return err } Escape(p, data) continue } if vf.CanAddr() { pv := vf.Addr() if pv.CanInterface() && pv.Type().Implements(textMarshalerType) { data, err := pv.Interface().(encoding.TextMarshaler).MarshalText() if err != nil { return err } Escape(p, data) continue } } var scratch [64]byte switch vf.Kind() { case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: Escape(p, strconv.AppendInt(scratch[:0], vf.Int(), 10)) case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: Escape(p, strconv.AppendUint(scratch[:0], vf.Uint(), 10)) case reflect.Float32, reflect.Float64: Escape(p, strconv.AppendFloat(scratch[:0], vf.Float(), 'g', -1, vf.Type().Bits())) case reflect.Bool: Escape(p, strconv.AppendBool(scratch[:0], vf.Bool())) case reflect.String: if err := EscapeText(p, []byte(vf.String())); err != nil { return err } case reflect.Slice: if elem, ok := vf.Interface().([]byte); ok { if err := EscapeText(p, elem); err != nil { return err } } } continue case fComment: if err := s.setParents(&noField, reflect.Value{}); err != nil { return err } k := vf.Kind() if !(k == reflect.String || k == reflect.Slice && vf.Type().Elem().Kind() == reflect.Uint8) { return fmt.Errorf("xml: bad type for comment field of %s", val.Type()) } if vf.Len() == 0 { continue } p.writeIndent(0) p.WriteString("" is invalid grammar. 
Make it "- -->" p.WriteByte(' ') } p.WriteString("-->") continue case fInnerXml: iface := vf.Interface() switch raw := iface.(type) { case []byte: p.Write(raw) continue case string: p.WriteString(raw) continue } case fElement, fElement | fAny: if err := s.setParents(finfo, vf); err != nil { return err } } if err := p.marshalValue(vf, finfo, nil); err != nil { return err } } if err := s.setParents(&noField, reflect.Value{}); err != nil { return err } return p.cachedWriteError() } var noField fieldInfo // return the bufio Writer's cached write error func (p *printer) cachedWriteError() error { _, err := p.Write(nil) return err } func (p *printer) writeIndent(depthDelta int) { if len(p.prefix) == 0 && len(p.indent) == 0 { return } if depthDelta < 0 { p.depth-- if p.indentedIn { p.indentedIn = false return } p.indentedIn = false } if p.putNewline { p.WriteByte('\n') } else { p.putNewline = true } if len(p.prefix) > 0 { p.WriteString(p.prefix) } if len(p.indent) > 0 { for i := 0; i < p.depth; i++ { p.WriteString(p.indent) } } if depthDelta > 0 { p.depth++ p.indentedIn = true } } type parentStack struct { p *printer xmlns string parents []string } // setParents sets the stack of current parents to those found in finfo. // It only writes the start elements if vf holds a non-nil value. // If finfo is &noField, it pops all elements. func (s *parentStack) setParents(finfo *fieldInfo, vf reflect.Value) error { xmlns := s.p.defaultNS if finfo.xmlns != "" { xmlns = finfo.xmlns } commonParents := 0 if xmlns == s.xmlns { for ; commonParents < len(finfo.parents) && commonParents < len(s.parents); commonParents++ { if finfo.parents[commonParents] != s.parents[commonParents] { break } } } // Pop off any parents that aren't in common with the previous field. 
	for i := len(s.parents) - 1; i >= commonParents; i-- {
		if err := s.p.writeEnd(Name{
			Space: s.xmlns,
			Local: s.parents[i],
		}); err != nil {
			return err
		}
	}
	s.parents = finfo.parents
	s.xmlns = xmlns
	if commonParents >= len(s.parents) {
		// No new elements to push.
		return nil
	}
	if (vf.Kind() == reflect.Ptr || vf.Kind() == reflect.Interface) && vf.IsNil() {
		// The element is nil, so no need for the start elements.
		s.parents = s.parents[:commonParents]
		return nil
	}
	// Push any new parents required.
	for _, name := range s.parents[commonParents:] {
		start := &StartElement{
			Name: Name{
				Space: s.xmlns,
				Local: name,
			},
		}
		// Set the default name space for parent elements
		// to match what we do with other elements.
		if s.xmlns != s.p.defaultNS {
			start.setDefaultNamespace()
		}
		if err := s.p.writeStart(start); err != nil {
			return err
		}
	}
	return nil
}

// A MarshalXMLError is returned when Marshal encounters a type
// that cannot be converted into XML.
type UnsupportedTypeError struct {
	Type reflect.Type
}

func (e *UnsupportedTypeError) Error() string {
	return "xml: unsupported type: " + e.Type.String()
}

func isEmptyValue(v reflect.Value) bool {
	switch v.Kind() {
	case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
		return v.Len() == 0
	case reflect.Bool:
		return !v.Bool()
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return v.Int() == 0
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
		return v.Uint() == 0
	case reflect.Float32, reflect.Float64:
		return v.Float() == 0
	case reflect.Interface, reflect.Ptr:
		return v.IsNil()
	}
	return false
}
lxd-2.0.2/dist/src/golang.org/x/net/webdav/internal/xml/xml.go0000644061062106075000000013172512721405224026557 0ustar00stgraberdomain admins00000000000000
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package xml implements a simple XML 1.0 parser that
// understands XML name spaces.
package xml

// References:
//    Annotated XML spec: http://www.xml.com/axml/testaxml.htm
//    XML name spaces: http://www.w3.org/TR/REC-xml-names/

// TODO(rsc):
//	Test error handling.

import (
	"bufio"
	"bytes"
	"errors"
	"fmt"
	"io"
	"strconv"
	"strings"
	"unicode"
	"unicode/utf8"
)

// A SyntaxError represents a syntax error in the XML input stream.
type SyntaxError struct {
	Msg  string
	Line int
}

func (e *SyntaxError) Error() string {
	return "XML syntax error on line " + strconv.Itoa(e.Line) + ": " + e.Msg
}

// A Name represents an XML name (Local) annotated with a name space
// identifier (Space). In tokens returned by Decoder.Token, the Space
// identifier is given as a canonical URL, not the short prefix used in
// the document being parsed.
//
// As a special case, XML namespace declarations will use the literal
// string "xmlns" for the Space field instead of the fully resolved URL.
// See Encoder.EncodeToken for more information on namespace encoding
// behaviour.
type Name struct {
	Space, Local string
}

// isNamespace reports whether the name is a namespace-defining name.
func (name Name) isNamespace() bool {
	return name.Local == "xmlns" || name.Space == "xmlns"
}

// An Attr represents an attribute in an XML element (Name=Value).
type Attr struct {
	Name  Name
	Value string
}

// A Token is an interface holding one of the token types:
// StartElement, EndElement, CharData, Comment, ProcInst, or Directive.
type Token interface{}

// A StartElement represents an XML start element.
type StartElement struct {
	Name Name
	Attr []Attr
}

func (e StartElement) Copy() StartElement {
	attrs := make([]Attr, len(e.Attr))
	copy(attrs, e.Attr)
	e.Attr = attrs
	return e
}

// End returns the corresponding XML end element.
func (e StartElement) End() EndElement {
	return EndElement{e.Name}
}

// setDefaultNamespace sets the namespace of the element
// as the default for all elements contained within it.
func (e *StartElement) setDefaultNamespace() {
	if e.Name.Space == "" {
		// If there's no namespace on the element, don't
		// set the default. Strictly speaking this might be wrong, as
		// we can't tell if the element had no namespace set
		// or was just using the default namespace.
		return
	}
	// Don't add a default name space if there's already one set.
	for _, attr := range e.Attr {
		if attr.Name.Space == "" && attr.Name.Local == "xmlns" {
			return
		}
	}
	e.Attr = append(e.Attr, Attr{
		Name: Name{
			Local: "xmlns",
		},
		Value: e.Name.Space,
	})
}

// An EndElement represents an XML end element.
type EndElement struct {
	Name Name
}

// A CharData represents XML character data (raw text),
// in which XML escape sequences have been replaced by
// the characters they represent.
type CharData []byte

func makeCopy(b []byte) []byte {
	b1 := make([]byte, len(b))
	copy(b1, b)
	return b1
}

func (c CharData) Copy() CharData { return CharData(makeCopy(c)) }

// A Comment represents an XML comment of the form <!--comment-->.
// The bytes do not include the <!-- and --> comment markers.
type Comment []byte

func (c Comment) Copy() Comment { return Comment(makeCopy(c)) }

// A ProcInst represents an XML processing instruction of the form <?target inst?>
type ProcInst struct {
	Target string
	Inst   []byte
}

func (p ProcInst) Copy() ProcInst {
	p.Inst = makeCopy(p.Inst)
	return p
}

// A Directive represents an XML directive of the form <!text>.
// The bytes do not include the <! and > markers.
type Directive []byte

func (d Directive) Copy() Directive { return Directive(makeCopy(d)) }

// CopyToken returns a copy of a Token.
func CopyToken(t Token) Token {
	switch v := t.(type) {
	case CharData:
		return v.Copy()
	case Comment:
		return v.Copy()
	case Directive:
		return v.Copy()
	case ProcInst:
		return v.Copy()
	case StartElement:
		return v.Copy()
	}
	return t
}

// A Decoder represents an XML parser reading a particular input stream.
// The parser assumes that its input is encoded in UTF-8.
type Decoder struct {
	// Strict defaults to true, enforcing the requirements
	// of the XML specification.
	// If set to false, the parser allows input containing common
	// mistakes:
	//	* If an element is missing an end tag, the parser invents
	//	  end tags as necessary to keep the return values from Token
	//	  properly balanced.
	//	* In attribute values and character data, unknown or malformed
	//	  character entities (sequences beginning with &) are left alone.
	//
	// Setting:
	//
	//	d.Strict = false;
	//	d.AutoClose = HTMLAutoClose;
	//	d.Entity = HTMLEntity
	//
	// creates a parser that can handle typical HTML.
	//
	// Strict mode does not enforce the requirements of the XML name spaces TR.
	// In particular it does not reject name space tags using undefined prefixes.
	// Such tags are recorded with the unknown prefix as the name space URL.
	Strict bool

	// When Strict == false, AutoClose indicates a set of elements to
	// consider closed immediately after they are opened, regardless
	// of whether an end element is present.
	AutoClose []string

	// Entity can be used to map non-standard entity names to string replacements.
	// The parser behaves as if these standard mappings are present in the map,
	// regardless of the actual map content:
	//
	//	"lt": "<",
	//	"gt": ">",
	//	"amp": "&",
	//	"apos": "'",
	//	"quot": `"`,
	Entity map[string]string

	// CharsetReader, if non-nil, defines a function to generate
	// charset-conversion readers, converting from the provided
	// non-UTF-8 charset into UTF-8. If CharsetReader is nil or
	// returns an error, parsing stops with an error. One of
	// the CharsetReader's result values must be non-nil.
	CharsetReader func(charset string, input io.Reader) (io.Reader, error)

	// DefaultSpace sets the default name space used for unadorned tags,
	// as if the entire XML stream were wrapped in an element containing
	// the attribute xmlns="DefaultSpace".
	DefaultSpace string

	r              io.ByteReader
	buf            bytes.Buffer
	saved          *bytes.Buffer
	stk            *stack
	free           *stack
	needClose      bool
	toClose        Name
	nextToken      Token
	nextByte       int
	ns             map[string]string
	err            error
	line           int
	offset         int64
	unmarshalDepth int
}

// NewDecoder creates a new XML parser reading from r.
// If r does not implement io.ByteReader, NewDecoder will
// do its own buffering.
func NewDecoder(r io.Reader) *Decoder {
	d := &Decoder{
		ns:       make(map[string]string),
		nextByte: -1,
		line:     1,
		Strict:   true,
	}
	d.switchToReader(r)
	return d
}

// Token returns the next XML token in the input stream.
// At the end of the input stream, Token returns nil, io.EOF.
//
// Slices of bytes in the returned token data refer to the
// parser's internal buffer and remain valid only until the next
// call to Token. To acquire a copy of the bytes, call CopyToken
// or the token's Copy method.
//
// Token expands self-closing elements such as <br/>
// into separate start and end elements returned by successive calls.
//
// Token guarantees that the StartElement and EndElement
// tokens it returns are properly nested and matched:
// if Token encounters an unexpected end element,
// it will return an error.
//
// Token implements XML name spaces as described by
// http://www.w3.org/TR/REC-xml-names/. Each of the
// Name structures contained in the Token has the Space
// set to the URL identifying its name space when known.
// If Token encounters an unrecognized name space prefix,
// it uses the prefix as the Space rather than report an error.
func (d *Decoder) Token() (t Token, err error) {
	if d.stk != nil && d.stk.kind == stkEOF {
		err = io.EOF
		return
	}
	if d.nextToken != nil {
		t = d.nextToken
		d.nextToken = nil
	} else if t, err = d.rawToken(); err != nil {
		return
	}

	if !d.Strict {
		if t1, ok := d.autoClose(t); ok {
			d.nextToken = t
			t = t1
		}
	}
	switch t1 := t.(type) {
	case StartElement:
		// In XML name spaces, the translations listed in the
		// attributes apply to the element name and
		// to the other attribute names, so process
		// the translations first.
		for _, a := range t1.Attr {
			if a.Name.Space == "xmlns" {
				v, ok := d.ns[a.Name.Local]
				d.pushNs(a.Name.Local, v, ok)
				d.ns[a.Name.Local] = a.Value
			}
			if a.Name.Space == "" && a.Name.Local == "xmlns" {
				// Default space for untagged names
				v, ok := d.ns[""]
				d.pushNs("", v, ok)
				d.ns[""] = a.Value
			}
		}

		d.translate(&t1.Name, true)
		for i := range t1.Attr {
			d.translate(&t1.Attr[i].Name, false)
		}
		d.pushElement(t1.Name)
		t = t1

	case EndElement:
		d.translate(&t1.Name, true)
		if !d.popElement(&t1) {
			return nil, d.err
		}
		t = t1
	}
	return
}

const xmlURL = "http://www.w3.org/XML/1998/namespace"

// Apply name space translation to name n.
// The default name space (for Space=="")
// applies only to element names, not to attribute names.
func (d *Decoder) translate(n *Name, isElementName bool) {
	switch {
	case n.Space == "xmlns":
		return
	case n.Space == "" && !isElementName:
		return
	case n.Space == "xml":
		n.Space = xmlURL
	case n.Space == "" && n.Local == "xmlns":
		return
	}
	if v, ok := d.ns[n.Space]; ok {
		n.Space = v
	} else if n.Space == "" {
		n.Space = d.DefaultSpace
	}
}

func (d *Decoder) switchToReader(r io.Reader) {
	// Get efficient byte at a time reader.
	// Assume that if reader has its own
	// ReadByte, it's efficient enough.
	// Otherwise, use bufio.
	if rb, ok := r.(io.ByteReader); ok {
		d.r = rb
	} else {
		d.r = bufio.NewReader(r)
	}
}

// Parsing state - stack holds old name space translations
// and the current set of open elements. The translations to pop when
// ending a given tag are *below* it on the stack, which is
// more work but forced on us by XML.
type stack struct {
	next *stack
	kind int
	name Name
	ok   bool
}

const (
	stkStart = iota
	stkNs
	stkEOF
)

func (d *Decoder) push(kind int) *stack {
	s := d.free
	if s != nil {
		d.free = s.next
	} else {
		s = new(stack)
	}
	s.next = d.stk
	s.kind = kind
	d.stk = s
	return s
}

func (d *Decoder) pop() *stack {
	s := d.stk
	if s != nil {
		d.stk = s.next
		s.next = d.free
		d.free = s
	}
	return s
}

// Record that after the current element is finished
// (that element is already pushed on the stack)
// Token should return EOF until popEOF is called.
func (d *Decoder) pushEOF() {
	// Walk down stack to find Start.
	// It might not be the top, because there might be stkNs
	// entries above it.
	start := d.stk
	for start.kind != stkStart {
		start = start.next
	}
	// The stkNs entries below a start are associated with that
	// element too; skip over them.
	for start.next != nil && start.next.kind == stkNs {
		start = start.next
	}
	s := d.free
	if s != nil {
		d.free = s.next
	} else {
		s = new(stack)
	}
	s.kind = stkEOF
	s.next = start.next
	start.next = s
}

// Undo a pushEOF.
// The element must have been finished, so the EOF should be at the top of the stack.
func (d *Decoder) popEOF() bool {
	if d.stk == nil || d.stk.kind != stkEOF {
		return false
	}
	d.pop()
	return true
}

// Record that we are starting an element with the given name.
func (d *Decoder) pushElement(name Name) {
	s := d.push(stkStart)
	s.name = name
}

// Record that we are changing the value of ns[local].
// The old value is url, ok.
func (d *Decoder) pushNs(local string, url string, ok bool) {
	s := d.push(stkNs)
	s.name.Local = local
	s.name.Space = url
	s.ok = ok
}

// Creates a SyntaxError with the current line number.
func (d *Decoder) syntaxError(msg string) error {
	return &SyntaxError{Msg: msg, Line: d.line}
}

// Record that we are ending an element with the given name.
// The name must match the record at the top of the stack,
// which must be a pushElement record.
// After popping the element, apply any undo records from
// the stack to restore the name translations that existed
// before we saw this element.
func (d *Decoder) popElement(t *EndElement) bool {
	s := d.pop()
	name := t.Name
	switch {
	case s == nil || s.kind != stkStart:
		d.err = d.syntaxError("unexpected end element </" + name.Local + ">")
		return false
	case s.name.Local != name.Local:
		if !d.Strict {
			d.needClose = true
			d.toClose = t.Name
			t.Name = s.name
			return true
		}
		d.err = d.syntaxError("element <" + s.name.Local + "> closed by </" + name.Local + ">")
		return false
	case s.name.Space != name.Space:
		d.err = d.syntaxError("element <" + s.name.Local + "> in space " + s.name.Space +
			"closed by </" + name.Local + "> in space " + name.Space)
		return false
	}

	// Pop stack until a Start or EOF is on the top, undoing the
	// translations that were associated with the element we just closed.
	for d.stk != nil && d.stk.kind != stkStart && d.stk.kind != stkEOF {
		s := d.pop()
		if s.ok {
			d.ns[s.name.Local] = s.name.Space
		} else {
			delete(d.ns, s.name.Local)
		}
	}

	return true
}

// If the top element on the stack is autoclosing and
// t is not the end tag, invent the end tag.
func (d *Decoder) autoClose(t Token) (Token, bool) {
	if d.stk == nil || d.stk.kind != stkStart {
		return nil, false
	}
	name := strings.ToLower(d.stk.name.Local)
	for _, s := range d.AutoClose {
		if strings.ToLower(s) == name {
			// This one should be auto closed if t doesn't close it.
			et, ok := t.(EndElement)
			if !ok || et.Name.Local != name {
				return EndElement{d.stk.name}, true
			}
			break
		}
	}
	return nil, false
}

var errRawToken = errors.New("xml: cannot use RawToken from UnmarshalXML method")

// RawToken is like Token but does not verify that
// start and end elements match and does not translate
// name space prefixes to their corresponding URLs.
func (d *Decoder) RawToken() (Token, error) {
	if d.unmarshalDepth > 0 {
		return nil, errRawToken
	}
	return d.rawToken()
}

func (d *Decoder) rawToken() (Token, error) {
	if d.err != nil {
		return nil, d.err
	}
	if d.needClose {
		// The last element we read was self-closing and
		// we returned just the StartElement half.
		// Return the EndElement half now.
		d.needClose = false
		return EndElement{d.toClose}, nil
	}
	b, ok := d.getc()
	if !ok {
		return nil, d.err
	}
	if b != '<' {
		// Text section.
		d.ungetc(b)
		data := d.text(-1, false)
		if data == nil {
			return nil, d.err
		}
		return CharData(data), nil
	}

	if b, ok = d.mustgetc(); !ok {
		return nil, d.err
	}
	switch b {
	case '/':
		// </: End element
		var name Name
		if name, ok = d.nsname(); !ok {
			if d.err == nil {
				d.err = d.syntaxError("expected element name after </")
			}
			return nil, d.err
		}
		d.space()
		if b, ok = d.mustgetc(); !ok {
			return nil, d.err
		}
		if b != '>' {
			d.err = d.syntaxError("invalid characters between </" + name.Local + " and >")
			return nil, d.err
		}
		return EndElement{name}, nil

	case '?':
		// <?: Processing instruction.
		var target string
		if target, ok = d.name(); !ok {
			if d.err == nil {
				d.err = d.syntaxError("expected target name after <?")
			}
			return nil, d.err
		}
		d.space()
		d.buf.Reset()
		var b0 byte
		for {
			if b, ok = d.mustgetc(); !ok {
				return nil, d.err
			}
			d.buf.WriteByte(b)
			if b0 == '?' && b == '>' {
				break
			}
			b0 = b
		}
		data := d.buf.Bytes()
		data = data[0 : len(data)-2] // chop ?>

		if target == "xml" {
			content := string(data)
			ver := procInst("version", content)
			if ver != "" && ver != "1.0" {
				d.err = fmt.Errorf("xml: unsupported version %q; only version 1.0 is supported", ver)
				return nil, d.err
			}
			enc := procInst("encoding", content)
			if enc != "" && enc != "utf-8" && enc != "UTF-8" {
				if d.CharsetReader == nil {
					d.err = fmt.Errorf("xml: encoding %q declared but Decoder.CharsetReader is nil", enc)
					return nil, d.err
				}
				newr, err := d.CharsetReader(enc, d.r.(io.Reader))
				if err != nil {
					d.err = fmt.Errorf("xml: opening charset %q: %v", enc, err)
					return nil, d.err
				}
				if newr == nil {
					panic("CharsetReader returned a nil Reader for charset " + enc)
				}
				d.switchToReader(newr)
			}
		}
		return ProcInst{target, data}, nil

	case '!':
		// <!: Maybe comment, maybe CDATA.
		if b, ok = d.mustgetc(); !ok {
			return nil, d.err
		}
		switch b {
		case '-': // <!-
			// Probably <!-- for a comment.
			if b, ok = d.mustgetc(); !ok {
				return nil, d.err
			}
			if b != '-' {
				d.err = d.syntaxError("invalid sequence <!- not part of <!--")
				return nil, d.err
			}
			// Look for terminator.
			d.buf.Reset()
			var b0, b1 byte
			for {
				if b, ok = d.mustgetc(); !ok {
					return nil, d.err
				}
				d.buf.WriteByte(b)
				if b0 == '-' && b1 == '-' && b == '>' {
					break
				}
				b0, b1 = b1, b
			}
			data := d.buf.Bytes()
			data = data[0 : len(data)-3] // chop -->
			return Comment(data), nil

		case '[': // <![
			// Probably <![CDATA[.
			for i := 0; i < 6; i++ {
				if b, ok = d.mustgetc(); !ok {
					return nil, d.err
				}
				if b != "CDATA["[i] {
					d.err = d.syntaxError("invalid <![ sequence")
					return nil, d.err
				}
			}
			// Have <![CDATA[.  Read text until ]]>.
			data := d.text(-1, true)
			if data == nil {
				return nil, d.err
			}
			return CharData(data), nil
		}

		// Probably a directive: <!DOCTYPE ...>, <!ENTITY ...>, etc.
		// We don't care, but accumulate for caller. Quoted angle
		// brackets do not count for nesting.
d.buf.Reset() d.buf.WriteByte(b) inquote := uint8(0) depth := 0 for { if b, ok = d.mustgetc(); !ok { return nil, d.err } if inquote == 0 && b == '>' && depth == 0 { break } HandleB: d.buf.WriteByte(b) switch { case b == inquote: inquote = 0 case inquote != 0: // in quotes, no special action case b == '\'' || b == '"': inquote = b case b == '>' && inquote == 0: depth-- case b == '<' && inquote == 0: // Look for \n" + " \n" + " \n" + "", wantPF: propfind{ XMLName: ixml.Name{Space: "DAV:", Local: "propfind"}, Prop: propfindProps{xml.Name{Space: "DAV:", Local: "displayname"}}, }, }, { desc: "propfind: propfind with ignored whitespace", input: "" + "\n" + " \n" + "", wantPF: propfind{ XMLName: ixml.Name{Space: "DAV:", Local: "propfind"}, Prop: propfindProps{xml.Name{Space: "DAV:", Local: "displayname"}}, }, }, { desc: "propfind: propfind with ignored mixed-content", input: "" + "\n" + " foobar\n" + "", wantPF: propfind{ XMLName: ixml.Name{Space: "DAV:", Local: "propfind"}, Prop: propfindProps{xml.Name{Space: "DAV:", Local: "displayname"}}, }, }, { desc: "propfind: propname with ignored element (section A.4)", input: "" + "\n" + " \n" + " *boss*\n" + "", wantPF: propfind{ XMLName: ixml.Name{Space: "DAV:", Local: "propfind"}, Propname: new(struct{}), }, }, { desc: "propfind: bad: junk", input: "xxx", wantStatus: http.StatusBadRequest, }, { desc: "propfind: bad: propname and allprop (section A.3)", input: "" + "\n" + " " + " " + "", wantStatus: http.StatusBadRequest, }, { desc: "propfind: bad: propname and prop", input: "" + "\n" + " \n" + " \n" + "", wantStatus: http.StatusBadRequest, }, { desc: "propfind: bad: allprop and prop", input: "" + "\n" + " \n" + " \n" + "", wantStatus: http.StatusBadRequest, }, { desc: "propfind: bad: empty propfind with ignored element (section A.4)", input: "" + "\n" + " \n" + "", wantStatus: http.StatusBadRequest, }, { desc: "propfind: bad: empty prop", input: "" + "\n" + " \n" + "", wantStatus: http.StatusBadRequest, }, { desc: "propfind: 
bad: prop with just chardata", input: "" + "\n" + " foo\n" + "", wantStatus: http.StatusBadRequest, }, { desc: "bad: interrupted prop", input: "" + "\n" + " \n", wantStatus: http.StatusBadRequest, }, { desc: "bad: malformed end element prop", input: "" + "\n" + " \n", wantStatus: http.StatusBadRequest, }, { desc: "propfind: bad: property with chardata value", input: "" + "\n" + " bar\n" + "", wantStatus: http.StatusBadRequest, }, { desc: "propfind: bad: property with whitespace value", input: "" + "\n" + " \n" + "", wantStatus: http.StatusBadRequest, }, { desc: "propfind: bad: include without allprop", input: "" + "\n" + " \n" + "", wantStatus: http.StatusBadRequest, }} for _, tc := range testCases { pf, status, err := readPropfind(strings.NewReader(tc.input)) if tc.wantStatus != 0 { if err == nil { t.Errorf("%s: got nil error, want non-nil", tc.desc) continue } } else if err != nil { t.Errorf("%s: %v", tc.desc, err) continue } if !reflect.DeepEqual(pf, tc.wantPF) || status != tc.wantStatus { t.Errorf("%s:\ngot propfind=%v, status=%v\nwant propfind=%v, status=%v", tc.desc, pf, status, tc.wantPF, tc.wantStatus) continue } } } func TestMultistatusWriter(t *testing.T) { ///The "section x.y.z" test cases come from section x.y.z of the spec at // http://www.webdav.org/specs/rfc4918.html testCases := []struct { desc string responses []response respdesc string writeHeader bool wantXML string wantCode int wantErr error }{{ desc: "section 9.2.2 (failed dependency)", responses: []response{{ Href: []string{"http://example.com/foo"}, Propstat: []propstat{{ Prop: []Property{{ XMLName: xml.Name{ Space: "http://ns.example.com/", Local: "Authors", }, }}, Status: "HTTP/1.1 424 Failed Dependency", }, { Prop: []Property{{ XMLName: xml.Name{ Space: "http://ns.example.com/", Local: "Copyright-Owner", }, }}, Status: "HTTP/1.1 409 Conflict", }}, ResponseDescription: "Copyright Owner cannot be deleted or altered.", }}, wantXML: `` + `` + `` + ` ` + ` http://example.com/foo` + ` ` + ` ` + 
` ` + ` ` + ` HTTP/1.1 424 Failed Dependency` + ` ` + ` ` + ` ` + ` ` + ` ` + ` HTTP/1.1 409 Conflict` + ` ` + ` Copyright Owner cannot be deleted or altered.` + `` + ``, wantCode: StatusMulti, }, { desc: "section 9.6.2 (lock-token-submitted)", responses: []response{{ Href: []string{"http://example.com/foo"}, Status: "HTTP/1.1 423 Locked", Error: &xmlError{ InnerXML: []byte(``), }, }}, wantXML: `` + `` + `` + ` ` + ` http://example.com/foo` + ` HTTP/1.1 423 Locked` + ` ` + ` ` + ``, wantCode: StatusMulti, }, { desc: "section 9.1.3", responses: []response{{ Href: []string{"http://example.com/foo"}, Propstat: []propstat{{ Prop: []Property{{ XMLName: xml.Name{Space: "http://ns.example.com/boxschema/", Local: "bigbox"}, InnerXML: []byte(`` + `` + `Box type A` + ``), }, { XMLName: xml.Name{Space: "http://ns.example.com/boxschema/", Local: "author"}, InnerXML: []byte(`` + `` + `J.J. Johnson` + ``), }}, Status: "HTTP/1.1 200 OK", }, { Prop: []Property{{ XMLName: xml.Name{Space: "http://ns.example.com/boxschema/", Local: "DingALing"}, }, { XMLName: xml.Name{Space: "http://ns.example.com/boxschema/", Local: "Random"}, }}, Status: "HTTP/1.1 403 Forbidden", ResponseDescription: "The user does not have access to the DingALing property.", }}, }}, respdesc: "There has been an access violation error.", wantXML: `` + `` + `` + ` ` + ` http://example.com/foo` + ` ` + ` ` + ` Box type A` + ` J.J. 
Johnson` + ` ` + ` HTTP/1.1 200 OK` + ` ` + ` ` + ` ` + ` ` + ` ` + ` ` + ` HTTP/1.1 403 Forbidden` + ` The user does not have access to the DingALing property.` + ` ` + ` ` + ` There has been an access violation error.` + ``, wantCode: StatusMulti, }, { desc: "no response written", // default of http.responseWriter wantCode: http.StatusOK, }, { desc: "no response written (with description)", respdesc: "too bad", // default of http.responseWriter wantCode: http.StatusOK, }, { desc: "empty multistatus with header", writeHeader: true, wantXML: ``, wantCode: StatusMulti, }, { desc: "bad: no href", responses: []response{{ Propstat: []propstat{{ Prop: []Property{{ XMLName: xml.Name{ Space: "http://example.com/", Local: "foo", }, }}, Status: "HTTP/1.1 200 OK", }}, }}, wantErr: errInvalidResponse, // default of http.responseWriter wantCode: http.StatusOK, }, { desc: "bad: multiple hrefs and no status", responses: []response{{ Href: []string{"http://example.com/foo", "http://example.com/bar"}, }}, wantErr: errInvalidResponse, // default of http.responseWriter wantCode: http.StatusOK, }, { desc: "bad: one href and no propstat", responses: []response{{ Href: []string{"http://example.com/foo"}, }}, wantErr: errInvalidResponse, // default of http.responseWriter wantCode: http.StatusOK, }, { desc: "bad: status with one href and propstat", responses: []response{{ Href: []string{"http://example.com/foo"}, Propstat: []propstat{{ Prop: []Property{{ XMLName: xml.Name{ Space: "http://example.com/", Local: "foo", }, }}, Status: "HTTP/1.1 200 OK", }}, Status: "HTTP/1.1 200 OK", }}, wantErr: errInvalidResponse, // default of http.responseWriter wantCode: http.StatusOK, }, { desc: "bad: multiple hrefs and propstat", responses: []response{{ Href: []string{ "http://example.com/foo", "http://example.com/bar", }, Propstat: []propstat{{ Prop: []Property{{ XMLName: xml.Name{ Space: "http://example.com/", Local: "foo", }, }}, Status: "HTTP/1.1 200 OK", }}, }}, wantErr: errInvalidResponse, // 
default of http.responseWriter wantCode: http.StatusOK, }} n := xmlNormalizer{omitWhitespace: true} loop: for _, tc := range testCases { rec := httptest.NewRecorder() w := multistatusWriter{w: rec, responseDescription: tc.respdesc} if tc.writeHeader { if err := w.writeHeader(); err != nil { t.Errorf("%s: got writeHeader error %v, want nil", tc.desc, err) continue } } for _, r := range tc.responses { if err := w.write(&r); err != nil { if err != tc.wantErr { t.Errorf("%s: got write error %v, want %v", tc.desc, err, tc.wantErr) } continue loop } } if err := w.close(); err != tc.wantErr { t.Errorf("%s: got close error %v, want %v", tc.desc, err, tc.wantErr) continue } if rec.Code != tc.wantCode { t.Errorf("%s: got HTTP status code %d, want %d\n", tc.desc, rec.Code, tc.wantCode) continue } gotXML := rec.Body.String() eq, err := n.equalXML(strings.NewReader(gotXML), strings.NewReader(tc.wantXML)) if err != nil { t.Errorf("%s: equalXML: %v", tc.desc, err) continue } if !eq { t.Errorf("%s: XML body\ngot %s\nwant %s", tc.desc, gotXML, tc.wantXML) } } } func TestReadProppatch(t *testing.T) { ppStr := func(pps []Proppatch) string { var outer []string for _, pp := range pps { var inner []string for _, p := range pp.Props { inner = append(inner, fmt.Sprintf("{XMLName: %q, Lang: %q, InnerXML: %q}", p.XMLName, p.Lang, p.InnerXML)) } outer = append(outer, fmt.Sprintf("{Remove: %t, Props: [%s]}", pp.Remove, strings.Join(inner, ", "))) } return "[" + strings.Join(outer, ", ") + "]" } testCases := []struct { desc string input string wantPP []Proppatch wantStatus int }{{ desc: "proppatch: section 9.2 (with simple property value)", input: `` + `` + `` + ` ` + ` somevalue` + ` ` + ` ` + ` ` + ` ` + ``, wantPP: []Proppatch{{ Props: []Property{{ xml.Name{Space: "http://ns.example.com/z/", Local: "Authors"}, "", []byte(`somevalue`), }}, }, { Remove: true, Props: []Property{{ xml.Name{Space: "http://ns.example.com/z/", Local: "Copyright-Owner"}, "", nil, }}, }}, }, { desc: "proppatch: lang 
attribute on prop", input: `` + `` + `` + ` ` + ` ` + ` ` + ` ` + ` ` + ``, wantPP: []Proppatch{{ Props: []Property{{ xml.Name{Space: "http://example.com/ns", Local: "foo"}, "en", nil, }}, }}, }, { desc: "bad: remove with value", input: `` + `` + `` + ` ` + ` ` + ` ` + ` Jim Whitehead` + ` ` + ` ` + ` ` + ``, wantStatus: http.StatusBadRequest, }, { desc: "bad: empty propertyupdate", input: `` + `` + ``, wantStatus: http.StatusBadRequest, }, { desc: "bad: empty prop", input: `` + `` + `` + ` ` + ` ` + ` ` + ``, wantStatus: http.StatusBadRequest, }} for _, tc := range testCases { pp, status, err := readProppatch(strings.NewReader(tc.input)) if tc.wantStatus != 0 { if err == nil { t.Errorf("%s: got nil error, want non-nil", tc.desc) continue } } else if err != nil { t.Errorf("%s: %v", tc.desc, err) continue } if status != tc.wantStatus { t.Errorf("%s: got status %d, want %d", tc.desc, status, tc.wantStatus) continue } if !reflect.DeepEqual(pp, tc.wantPP) || status != tc.wantStatus { t.Errorf("%s: proppatch\ngot %v\nwant %v", tc.desc, ppStr(pp), ppStr(tc.wantPP)) } } } func TestUnmarshalXMLValue(t *testing.T) { testCases := []struct { desc string input string wantVal string }{{ desc: "simple char data", input: "foo", wantVal: "foo", }, { desc: "empty element", input: "", wantVal: "", }, { desc: "preserve namespace", input: ``, wantVal: ``, }, { desc: "preserve root element namespace", input: ``, wantVal: ``, }, { desc: "preserve whitespace", input: " \t ", wantVal: " \t ", }, { desc: "preserve mixed content", input: ` a `, wantVal: ` a `, }, { desc: "section 9.2", input: `` + `` + ` Jim Whitehead` + ` Roy Fielding` + ``, wantVal: `` + ` Jim Whitehead` + ` Roy Fielding`, }, { desc: "section 4.3.1 (mixed content)", input: `` + `` + ` Jane Doe` + ` ` + ` mailto:jane.doe@example.com` + ` http://www.example.com` + ` ` + ` Jane has been working way too long on the` + ` long-awaited revision of ]]>.` + ` ` + ``, wantVal: `` + ` Jane Doe` + ` ` + ` mailto:jane.doe@example.com` 
+ ` http://www.example.com` + ` ` + ` Jane has been working way too long on the` + ` long-awaited revision of <RFC2518>.` + ` `, }} var n xmlNormalizer for _, tc := range testCases { d := ixml.NewDecoder(strings.NewReader(tc.input)) var v xmlValue if err := d.Decode(&v); err != nil { t.Errorf("%s: got error %v, want nil", tc.desc, err) continue } eq, err := n.equalXML(bytes.NewReader(v), strings.NewReader(tc.wantVal)) if err != nil { t.Errorf("%s: equalXML: %v", tc.desc, err) continue } if !eq { t.Errorf("%s:\ngot %s\nwant %s", tc.desc, string(v), tc.wantVal) } } } // xmlNormalizer normalizes XML. type xmlNormalizer struct { // omitWhitespace instructs to ignore whitespace between element tags. omitWhitespace bool // omitComments instructs to ignore XML comments. omitComments bool } // normalize writes the normalized XML content of r to w. It applies the // following rules // // * Rename namespace prefixes according to an internal heuristic. // * Remove unnecessary namespace declarations. // * Sort attributes in XML start elements in lexical order of their // fully qualified name. // * Remove XML directives and processing instructions. // * Remove CDATA between XML tags that only contains whitespace, if // instructed to do so. // * Remove comments, if instructed to do so. 
//
func (n *xmlNormalizer) normalize(w io.Writer, r io.Reader) error {
	d := ixml.NewDecoder(r)
	e := ixml.NewEncoder(w)
	for {
		t, err := d.Token()
		if err != nil {
			if t == nil && err == io.EOF {
				break
			}
			return err
		}
		switch val := t.(type) {
		case ixml.Directive, ixml.ProcInst:
			continue
		case ixml.Comment:
			if n.omitComments {
				continue
			}
		case ixml.CharData:
			if n.omitWhitespace && len(bytes.TrimSpace(val)) == 0 {
				continue
			}
		case ixml.StartElement:
			start, _ := ixml.CopyToken(val).(ixml.StartElement)
			attr := start.Attr[:0]
			for _, a := range start.Attr {
				if a.Name.Space == "xmlns" || a.Name.Local == "xmlns" {
					continue
				}
				attr = append(attr, a)
			}
			sort.Sort(byName(attr))
			start.Attr = attr
			t = start
		}
		err = e.EncodeToken(t)
		if err != nil {
			return err
		}
	}
	return e.Flush()
}

// equalXML tests for equality of the normalized XML contents of a and b.
func (n *xmlNormalizer) equalXML(a, b io.Reader) (bool, error) {
	var buf bytes.Buffer
	if err := n.normalize(&buf, a); err != nil {
		return false, err
	}
	normA := buf.String()
	buf.Reset()
	if err := n.normalize(&buf, b); err != nil {
		return false, err
	}
	normB := buf.String()
	return normA == normB, nil
}

type byName []ixml.Attr

func (a byName) Len() int      { return len(a) }
func (a byName) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a byName) Less(i, j int) bool {
	if a[i].Name.Space != a[j].Name.Space {
		return a[i].Name.Space < a[j].Name.Space
	}
	return a[i].Name.Local < a[j].Name.Local
}

lxd-2.0.2/dist/src/golang.org/x/net/webdav/prop.go

// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package webdav

import (
	"encoding/xml"
	"fmt"
	"io"
	"mime"
	"net/http"
	"os"
	"path/filepath"
	"strconv"
)

// Proppatch describes a property update instruction as defined in RFC 4918.
// See http://www.webdav.org/specs/rfc4918.html#METHOD_PROPPATCH
type Proppatch struct {
	// Remove specifies whether this patch removes properties. If it does not
	// remove them, it sets them.
	Remove bool

	// Props contains the properties to be set or removed.
	Props []Property
}

// Propstat describes an XML propstat element as defined in RFC 4918.
// See http://www.webdav.org/specs/rfc4918.html#ELEMENT_propstat
type Propstat struct {
	// Props contains the properties for which Status applies.
	Props []Property

	// Status defines the HTTP status code of the properties in Prop.
	// Allowed values include, but are not limited to the WebDAV status
	// code extensions for HTTP/1.1.
	// http://www.webdav.org/specs/rfc4918.html#status.code.extensions.to.http11
	Status int

	// XMLError contains the XML representation of the optional error element.
	// XML content within this field must not rely on any predefined
	// namespace declarations or prefixes. If empty, the XML error element
	// is omitted.
	XMLError string

	// ResponseDescription contains the contents of the optional
	// responsedescription field. If empty, the XML element is omitted.
	ResponseDescription string
}

// makePropstats returns a slice containing those of x and y whose Props slice
// is non-empty. If both are empty, it returns a slice containing an otherwise
// zero Propstat whose HTTP status code is 200 OK.
func makePropstats(x, y Propstat) []Propstat {
	pstats := make([]Propstat, 0, 2)
	if len(x.Props) != 0 {
		pstats = append(pstats, x)
	}
	if len(y.Props) != 0 {
		pstats = append(pstats, y)
	}
	if len(pstats) == 0 {
		pstats = append(pstats, Propstat{
			Status: http.StatusOK,
		})
	}
	return pstats
}

// DeadPropsHolder holds the dead properties of a resource.
//
// Dead properties are those properties that are explicitly defined. In
// comparison, live properties, such as DAV:getcontentlength, are implicitly
// defined by the underlying resource, and cannot be explicitly overridden or
// removed. See the Terminology section of
// http://www.webdav.org/specs/rfc4918.html#rfc.section.3
//
// There is a whitelist of the names of live properties. This package handles
// all live properties, and will only pass non-whitelisted names to the Patch
// method of DeadPropsHolder implementations.
type DeadPropsHolder interface {
	// DeadProps returns a copy of the dead properties held.
	DeadProps() (map[xml.Name]Property, error)

	// Patch patches the dead properties held.
	//
	// Patching is atomic; either all or no patches succeed. It returns (nil,
	// non-nil) if an internal server error occurred, otherwise the Propstats
	// collectively contain one Property for each proposed patch Property. If
	// all patches succeed, Patch returns a slice of length one and a Propstat
	// element with a 200 OK HTTP status code. If none succeed, for reasons
	// other than an internal server error, no Propstat has status 200 OK.
	//
	// For more details on when various HTTP status codes apply, see
	// http://www.webdav.org/specs/rfc4918.html#PROPPATCH-status
	Patch([]Proppatch) ([]Propstat, error)
}

// liveProps contains all supported, protected DAV: properties.
var liveProps = map[xml.Name]struct {
	// findFn implements the propfind function of this property. If nil,
	// it indicates a hidden property.
	findFn func(FileSystem, LockSystem, string, os.FileInfo) (string, error)

	// dir is true if the property applies to directories.
	dir bool
}{
	xml.Name{Space: "DAV:", Local: "resourcetype"}: {
		findFn: findResourceType,
		dir:    true,
	},
	xml.Name{Space: "DAV:", Local: "displayname"}: {
		findFn: findDisplayName,
		dir:    true,
	},
	xml.Name{Space: "DAV:", Local: "getcontentlength"}: {
		findFn: findContentLength,
		dir:    false,
	},
	xml.Name{Space: "DAV:", Local: "getlastmodified"}: {
		findFn: findLastModified,
		dir:    false,
	},
	xml.Name{Space: "DAV:", Local: "creationdate"}: {
		findFn: nil,
		dir:    false,
	},
	xml.Name{Space: "DAV:", Local: "getcontentlanguage"}: {
		findFn: nil,
		dir:    false,
	},
	xml.Name{Space: "DAV:", Local: "getcontenttype"}: {
		findFn: findContentType,
		dir:    false,
	},
	xml.Name{Space: "DAV:", Local: "getetag"}: {
		findFn: findETag,
		// findETag implements ETag as the concatenated hex values of a file's
		// modification time and size. This is not a reliable synchronization
		// mechanism for directories, so we do not advertise getetag for DAV
		// collections.
		dir: false,
	},

	// TODO: The lockdiscovery property requires LockSystem to list the
	// active locks on a resource.
	xml.Name{Space: "DAV:", Local: "lockdiscovery"}: {},
	xml.Name{Space: "DAV:", Local: "supportedlock"}: {
		findFn: findSupportedLock,
		dir:    true,
	},
}

// TODO(nigeltao) merge props and allprop?

// Props returns the status of the properties named pnames for resource name.
//
// Each Propstat has a unique status and each property name will only be part
// of one Propstat element.
func props(fs FileSystem, ls LockSystem, name string, pnames []xml.Name) ([]Propstat, error) {
	f, err := fs.OpenFile(name, os.O_RDONLY, 0)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	fi, err := f.Stat()
	if err != nil {
		return nil, err
	}
	isDir := fi.IsDir()

	var deadProps map[xml.Name]Property
	if dph, ok := f.(DeadPropsHolder); ok {
		deadProps, err = dph.DeadProps()
		if err != nil {
			return nil, err
		}
	}

	pstatOK := Propstat{Status: http.StatusOK}
	pstatNotFound := Propstat{Status: http.StatusNotFound}
	for _, pn := range pnames {
		// If this file has dead properties, check if they contain pn.
		if dp, ok := deadProps[pn]; ok {
			pstatOK.Props = append(pstatOK.Props, dp)
			continue
		}
		// Otherwise, it must either be a live property or we don't know it.
		if prop := liveProps[pn]; prop.findFn != nil && (prop.dir || !isDir) {
			innerXML, err := prop.findFn(fs, ls, name, fi)
			if err != nil {
				return nil, err
			}
			pstatOK.Props = append(pstatOK.Props, Property{
				XMLName:  pn,
				InnerXML: []byte(innerXML),
			})
		} else {
			pstatNotFound.Props = append(pstatNotFound.Props, Property{
				XMLName: pn,
			})
		}
	}
	return makePropstats(pstatOK, pstatNotFound), nil
}

// Propnames returns the property names defined for resource name.
func propnames(fs FileSystem, ls LockSystem, name string) ([]xml.Name, error) {
	f, err := fs.OpenFile(name, os.O_RDONLY, 0)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	fi, err := f.Stat()
	if err != nil {
		return nil, err
	}
	isDir := fi.IsDir()

	var deadProps map[xml.Name]Property
	if dph, ok := f.(DeadPropsHolder); ok {
		deadProps, err = dph.DeadProps()
		if err != nil {
			return nil, err
		}
	}

	pnames := make([]xml.Name, 0, len(liveProps)+len(deadProps))
	for pn, prop := range liveProps {
		if prop.findFn != nil && (prop.dir || !isDir) {
			pnames = append(pnames, pn)
		}
	}
	for pn := range deadProps {
		pnames = append(pnames, pn)
	}
	return pnames, nil
}

// Allprop returns the properties defined for resource name and the properties
// named in include.
//
// Note that RFC 4918 defines 'allprop' to return the DAV: properties defined
// within the RFC plus dead properties. Other live properties should only be
// returned if they are named in 'include'.
//
// See http://www.webdav.org/specs/rfc4918.html#METHOD_PROPFIND
func allprop(fs FileSystem, ls LockSystem, name string, include []xml.Name) ([]Propstat, error) {
	pnames, err := propnames(fs, ls, name)
	if err != nil {
		return nil, err
	}
	// Add names from include if they are not already covered in pnames.
	nameset := make(map[xml.Name]bool)
	for _, pn := range pnames {
		nameset[pn] = true
	}
	for _, pn := range include {
		if !nameset[pn] {
			pnames = append(pnames, pn)
		}
	}
	return props(fs, ls, name, pnames)
}

// Patch patches the properties of resource name. The return values are
// constrained in the same manner as DeadPropsHolder.Patch.
func patch(fs FileSystem, ls LockSystem, name string, patches []Proppatch) ([]Propstat, error) {
	conflict := false
loop:
	for _, patch := range patches {
		for _, p := range patch.Props {
			if _, ok := liveProps[p.XMLName]; ok {
				conflict = true
				break loop
			}
		}
	}
	if conflict {
		pstatForbidden := Propstat{
			Status:   http.StatusForbidden,
			XMLError: `<D:cannot-modify-protected-property xmlns:D="DAV:"/>`,
		}
		pstatFailedDep := Propstat{
			Status: StatusFailedDependency,
		}
		for _, patch := range patches {
			for _, p := range patch.Props {
				if _, ok := liveProps[p.XMLName]; ok {
					pstatForbidden.Props = append(pstatForbidden.Props, Property{XMLName: p.XMLName})
				} else {
					pstatFailedDep.Props = append(pstatFailedDep.Props, Property{XMLName: p.XMLName})
				}
			}
		}
		return makePropstats(pstatForbidden, pstatFailedDep), nil
	}

	f, err := fs.OpenFile(name, os.O_RDWR, 0)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	if dph, ok := f.(DeadPropsHolder); ok {
		ret, err := dph.Patch(patches)
		if err != nil {
			return nil, err
		}
		// http://www.webdav.org/specs/rfc4918.html#ELEMENT_propstat says that
		// "The contents of the prop XML element must only list the names of
		// properties to which the result in the status element applies."
		for _, pstat := range ret {
			for i, p := range pstat.Props {
				pstat.Props[i] = Property{XMLName: p.XMLName}
			}
		}
		return ret, nil
	}
	// The file doesn't implement the optional DeadPropsHolder interface, so
	// all patches are forbidden.
	pstat := Propstat{Status: http.StatusForbidden}
	for _, patch := range patches {
		for _, p := range patch.Props {
			pstat.Props = append(pstat.Props, Property{XMLName: p.XMLName})
		}
	}
	return []Propstat{pstat}, nil
}

func findResourceType(fs FileSystem, ls LockSystem, name string, fi os.FileInfo) (string, error) {
	if fi.IsDir() {
		return `<D:collection xmlns:D="DAV:"/>`, nil
	}
	return "", nil
}

func findDisplayName(fs FileSystem, ls LockSystem, name string, fi os.FileInfo) (string, error) {
	if slashClean(name) == "/" {
		// Hide the real name of a possibly prefixed root directory.
		return "", nil
	}
	return fi.Name(), nil
}

func findContentLength(fs FileSystem, ls LockSystem, name string, fi os.FileInfo) (string, error) {
	return strconv.FormatInt(fi.Size(), 10), nil
}

func findLastModified(fs FileSystem, ls LockSystem, name string, fi os.FileInfo) (string, error) {
	return fi.ModTime().Format(http.TimeFormat), nil
}

func findContentType(fs FileSystem, ls LockSystem, name string, fi os.FileInfo) (string, error) {
	f, err := fs.OpenFile(name, os.O_RDONLY, 0)
	if err != nil {
		return "", err
	}
	defer f.Close()
	// This implementation is based on serveContent's code in the standard net/http package.
	ctype := mime.TypeByExtension(filepath.Ext(name))
	if ctype != "" {
		return ctype, nil
	}
	// Read a chunk to decide between utf-8 text and binary.
	var buf [512]byte
	n, err := io.ReadFull(f, buf[:])
	if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
		return "", err
	}
	ctype = http.DetectContentType(buf[:n])
	// Rewind file.
	_, err = f.Seek(0, os.SEEK_SET)
	return ctype, err
}

func findETag(fs FileSystem, ls LockSystem, name string, fi os.FileInfo) (string, error) {
	// The Apache http 2.4 web server by default concatenates the
	// modification time and size of a file. We replicate the heuristic
	// with nanosecond granularity.
	return fmt.Sprintf(`"%x%x"`, fi.ModTime().UnixNano(), fi.Size()), nil
}

func findSupportedLock(fs FileSystem, ls LockSystem, name string, fi os.FileInfo) (string, error) {
	return `` +
		`<D:lockentry xmlns:D="DAV:">` +
		`<D:lockscope><D:exclusive/></D:lockscope>` +
		`<D:locktype><D:write/></D:locktype>` +
		`</D:lockentry>`, nil
}

lxd-2.0.2/dist/src/golang.org/x/net/webdav/file_test.go

// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package webdav

import (
	"encoding/xml"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"path"
	"path/filepath"
	"reflect"
	"runtime"
	"sort"
	"strconv"
	"strings"
	"testing"
)

func TestSlashClean(t *testing.T) {
	testCases := []string{
		"",
		".",
		"/",
		"/./",
		"//",
		"//.",
		"//a",
		"/a",
		"/a/b/c",
		"/a//b/./../c/d/",
		"a",
		"a/b/c",
	}
	for _, tc := range testCases {
		got := slashClean(tc)
		want := path.Clean("/" + tc)
		if got != want {
			t.Errorf("tc=%q: got %q, want %q", tc, got, want)
		}
	}
}

func TestDirResolve(t *testing.T) {
	testCases := []struct {
		dir, name, want string
	}{
		{"/", "", "/"},
		{"/", "/", "/"},
		{"/", ".", "/"},
		{"/", "./a", "/a"},
		{"/", "..", "/"},
		{"/", "..", "/"},
		{"/", "../", "/"},
		{"/", "../.", "/"},
		{"/", "../a", "/a"},
		{"/", "../..", "/"},
		{"/", "../bar/a", "/bar/a"},
		{"/", "../baz/a", "/baz/a"},
		{"/", "...", "/..."},
		{"/", ".../a", "/.../a"},
		{"/", ".../..", "/"},
		{"/", "a", "/a"},
		{"/", "a/./b", "/a/b"},
		{"/", "a/../../b", "/b"},
		{"/", "a/../b", "/b"},
		{"/", "a/b", "/a/b"},
		{"/", "a/b/c/../../d", "/a/d"},
		{"/", "a/b/c/../../../d", "/d"},
		{"/", "a/b/c/../../../../d", "/d"},
		{"/", "a/b/c/d", "/a/b/c/d"},

		{"/foo/bar", "", "/foo/bar"},
		{"/foo/bar", "/", "/foo/bar"},
		{"/foo/bar", ".", "/foo/bar"},
		{"/foo/bar", "./a", "/foo/bar/a"},
		{"/foo/bar", "..", "/foo/bar"},
		{"/foo/bar", "../", "/foo/bar"},
		{"/foo/bar", "../.", "/foo/bar"},
		{"/foo/bar", "../a", "/foo/bar/a"},
		{"/foo/bar", "../..", "/foo/bar"},
		{"/foo/bar", "../bar/a",
"/foo/bar/bar/a"}, {"/foo/bar", "../baz/a", "/foo/bar/baz/a"}, {"/foo/bar", "...", "/foo/bar/..."}, {"/foo/bar", ".../a", "/foo/bar/.../a"}, {"/foo/bar", ".../..", "/foo/bar"}, {"/foo/bar", "a", "/foo/bar/a"}, {"/foo/bar", "a/./b", "/foo/bar/a/b"}, {"/foo/bar", "a/../../b", "/foo/bar/b"}, {"/foo/bar", "a/../b", "/foo/bar/b"}, {"/foo/bar", "a/b", "/foo/bar/a/b"}, {"/foo/bar", "a/b/c/../../d", "/foo/bar/a/d"}, {"/foo/bar", "a/b/c/../../../d", "/foo/bar/d"}, {"/foo/bar", "a/b/c/../../../../d", "/foo/bar/d"}, {"/foo/bar", "a/b/c/d", "/foo/bar/a/b/c/d"}, {"/foo/bar/", "", "/foo/bar"}, {"/foo/bar/", "/", "/foo/bar"}, {"/foo/bar/", ".", "/foo/bar"}, {"/foo/bar/", "./a", "/foo/bar/a"}, {"/foo/bar/", "..", "/foo/bar"}, {"/foo//bar///", "", "/foo/bar"}, {"/foo//bar///", "/", "/foo/bar"}, {"/foo//bar///", ".", "/foo/bar"}, {"/foo//bar///", "./a", "/foo/bar/a"}, {"/foo//bar///", "..", "/foo/bar"}, {"/x/y/z", "ab/c\x00d/ef", ""}, {".", "", "."}, {".", "/", "."}, {".", ".", "."}, {".", "./a", "a"}, {".", "..", "."}, {".", "..", "."}, {".", "../", "."}, {".", "../.", "."}, {".", "../a", "a"}, {".", "../..", "."}, {".", "../bar/a", "bar/a"}, {".", "../baz/a", "baz/a"}, {".", "...", "..."}, {".", ".../a", ".../a"}, {".", ".../..", "."}, {".", "a", "a"}, {".", "a/./b", "a/b"}, {".", "a/../../b", "b"}, {".", "a/../b", "b"}, {".", "a/b", "a/b"}, {".", "a/b/c/../../d", "a/d"}, {".", "a/b/c/../../../d", "d"}, {".", "a/b/c/../../../../d", "d"}, {".", "a/b/c/d", "a/b/c/d"}, {"", "", "."}, {"", "/", "."}, {"", ".", "."}, {"", "./a", "a"}, {"", "..", "."}, } for _, tc := range testCases { d := Dir(filepath.FromSlash(tc.dir)) if got := filepath.ToSlash(d.resolve(tc.name)); got != tc.want { t.Errorf("dir=%q, name=%q: got %q, want %q", tc.dir, tc.name, got, tc.want) } } } func TestWalk(t *testing.T) { type walkStep struct { name, frag string final bool } testCases := []struct { dir string want []walkStep }{ {"", []walkStep{ {"", "", true}, }}, {"/", []walkStep{ {"", "", true}, }}, {"/a", 
[]walkStep{ {"", "a", true}, }}, {"/a/", []walkStep{ {"", "a", true}, }}, {"/a/b", []walkStep{ {"", "a", false}, {"a", "b", true}, }}, {"/a/b/", []walkStep{ {"", "a", false}, {"a", "b", true}, }}, {"/a/b/c", []walkStep{ {"", "a", false}, {"a", "b", false}, {"b", "c", true}, }}, // The following test case is the one mentioned explicitly // in the method description. {"/foo/bar/x", []walkStep{ {"", "foo", false}, {"foo", "bar", false}, {"bar", "x", true}, }}, } for _, tc := range testCases { fs := NewMemFS().(*memFS) parts := strings.Split(tc.dir, "/") for p := 2; p < len(parts); p++ { d := strings.Join(parts[:p], "/") if err := fs.Mkdir(d, 0666); err != nil { t.Errorf("tc.dir=%q: mkdir: %q: %v", tc.dir, d, err) } } i, prevFrag := 0, "" err := fs.walk("test", tc.dir, func(dir *memFSNode, frag string, final bool) error { got := walkStep{ name: prevFrag, frag: frag, final: final, } want := tc.want[i] if got != want { return fmt.Errorf("got %+v, want %+v", got, want) } i, prevFrag = i+1, frag return nil }) if err != nil { t.Errorf("tc.dir=%q: %v", tc.dir, err) } } } // find appends to ss the names of the named file and its children. It is // analogous to the Unix find command. // // The returned strings are not guaranteed to be in any particular order. 
func find(ss []string, fs FileSystem, name string) ([]string, error) { stat, err := fs.Stat(name) if err != nil { return nil, err } ss = append(ss, name) if stat.IsDir() { f, err := fs.OpenFile(name, os.O_RDONLY, 0) if err != nil { return nil, err } defer f.Close() children, err := f.Readdir(-1) if err != nil { return nil, err } for _, c := range children { ss, err = find(ss, fs, path.Join(name, c.Name())) if err != nil { return nil, err } } } return ss, nil } func testFS(t *testing.T, fs FileSystem) { errStr := func(err error) string { switch { case os.IsExist(err): return "errExist" case os.IsNotExist(err): return "errNotExist" case err != nil: return "err" } return "ok" } // The non-"find" non-"stat" test cases should change the file system state. The // indentation of the "find"s and "stat"s helps distinguish such test cases. testCases := []string{ " stat / want dir", " stat /a want errNotExist", " stat /d want errNotExist", " stat /d/e want errNotExist", "create /a A want ok", " stat /a want 1", "create /d/e EEE want errNotExist", "mk-dir /a want errExist", "mk-dir /d/m want errNotExist", "mk-dir /d want ok", " stat /d want dir", "create /d/e EEE want ok", " stat /d/e want 3", " find / /a /d /d/e", "create /d/f FFFF want ok", "create /d/g GGGGGGG want ok", "mk-dir /d/m want ok", "mk-dir /d/m want errExist", "create /d/m/p PPPPP want ok", " stat /d/e want 3", " stat /d/f want 4", " stat /d/g want 7", " stat /d/h want errNotExist", " stat /d/m want dir", " stat /d/m/p want 5", " find / /a /d /d/e /d/f /d/g /d/m /d/m/p", "rm-all /d want ok", " stat /a want 1", " stat /d want errNotExist", " stat /d/e want errNotExist", " stat /d/f want errNotExist", " stat /d/g want errNotExist", " stat /d/m want errNotExist", " stat /d/m/p want errNotExist", " find / /a", "mk-dir /d/m want errNotExist", "mk-dir /d want ok", "create /d/f FFFF want ok", "rm-all /d/f want ok", "mk-dir /d/m want ok", "rm-all /z want ok", "rm-all / want err", "create /b BB want ok", " stat / want 
dir", " stat /a want 1", " stat /b want 2", " stat /c want errNotExist", " stat /d want dir", " stat /d/m want dir", " find / /a /b /d /d/m", "move__ o=F /b /c want ok", " stat /b want errNotExist", " stat /c want 2", " stat /d/m want dir", " stat /d/n want errNotExist", " find / /a /c /d /d/m", "move__ o=F /d/m /d/n want ok", "create /d/n/q QQQQ want ok", " stat /d/m want errNotExist", " stat /d/n want dir", " stat /d/n/q want 4", "move__ o=F /d /d/n/z want err", "move__ o=T /c /d/n/q want ok", " stat /c want errNotExist", " stat /d/n/q want 2", " find / /a /d /d/n /d/n/q", "create /d/n/r RRRRR want ok", "mk-dir /u want ok", "mk-dir /u/v want ok", "move__ o=F /d/n /u want errExist", "create /t TTTTTT want ok", "move__ o=F /d/n /t want errExist", "rm-all /t want ok", "move__ o=F /d/n /t want ok", " stat /d want dir", " stat /d/n want errNotExist", " stat /d/n/r want errNotExist", " stat /t want dir", " stat /t/q want 2", " stat /t/r want 5", " find / /a /d /t /t/q /t/r /u /u/v", "move__ o=F /t / want errExist", "move__ o=T /t /u/v want ok", " stat /u/v/r want 5", "move__ o=F / /z want err", " find / /a /d /u /u/v /u/v/q /u/v/r", " stat /a want 1", " stat /b want errNotExist", " stat /c want errNotExist", " stat /u/v/r want 5", "copy__ o=F d=0 /a /b want ok", "copy__ o=T d=0 /a /c want ok", " stat /a want 1", " stat /b want 1", " stat /c want 1", " stat /u/v/r want 5", "copy__ o=F d=0 /u/v/r /b want errExist", " stat /b want 1", "copy__ o=T d=0 /u/v/r /b want ok", " stat /a want 1", " stat /b want 5", " stat /u/v/r want 5", "rm-all /a want ok", "rm-all /b want ok", "mk-dir /u/v/w want ok", "create /u/v/w/s SSSSSSSS want ok", " stat /d want dir", " stat /d/x want errNotExist", " stat /d/y want errNotExist", " stat /u/v/r want 5", " stat /u/v/w/s want 8", " find / /c /d /u /u/v /u/v/q /u/v/r /u/v/w /u/v/w/s", "copy__ o=T d=0 /u/v /d/x want ok", "copy__ o=T d=โˆž /u/v /d/y want ok", "rm-all /u want ok", " stat /d/x want dir", " stat /d/x/q want errNotExist", " stat 
/d/x/r want errNotExist", " stat /d/x/w want errNotExist", " stat /d/x/w/s want errNotExist", " stat /d/y want dir", " stat /d/y/q want 2", " stat /d/y/r want 5", " stat /d/y/w want dir", " stat /d/y/w/s want 8", " stat /u want errNotExist", " find / /c /d /d/x /d/y /d/y/q /d/y/r /d/y/w /d/y/w/s", "copy__ o=F d=โˆž /d/y /d/x want errExist", } for i, tc := range testCases { tc = strings.TrimSpace(tc) j := strings.IndexByte(tc, ' ') if j < 0 { t.Fatalf("test case #%d %q: invalid command", i, tc) } op, arg := tc[:j], tc[j+1:] switch op { default: t.Fatalf("test case #%d %q: invalid operation %q", i, tc, op) case "create": parts := strings.Split(arg, " ") if len(parts) != 4 || parts[2] != "want" { t.Fatalf("test case #%d %q: invalid write", i, tc) } f, opErr := fs.OpenFile(parts[0], os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666) if got := errStr(opErr); got != parts[3] { t.Fatalf("test case #%d %q: OpenFile: got %q (%v), want %q", i, tc, got, opErr, parts[3]) } if f != nil { if _, err := f.Write([]byte(parts[1])); err != nil { t.Fatalf("test case #%d %q: Write: %v", i, tc, err) } if err := f.Close(); err != nil { t.Fatalf("test case #%d %q: Close: %v", i, tc, err) } } case "find": got, err := find(nil, fs, "/") if err != nil { t.Fatalf("test case #%d %q: find: %v", i, tc, err) } sort.Strings(got) want := strings.Split(arg, " ") if !reflect.DeepEqual(got, want) { t.Fatalf("test case #%d %q:\ngot %s\nwant %s", i, tc, got, want) } case "copy__", "mk-dir", "move__", "rm-all", "stat": nParts := 3 switch op { case "copy__": nParts = 6 case "move__": nParts = 5 } parts := strings.Split(arg, " ") if len(parts) != nParts { t.Fatalf("test case #%d %q: invalid %s", i, tc, op) } got, opErr := "", error(nil) switch op { case "copy__": depth := 0 if parts[1] == "d=โˆž" { depth = infiniteDepth } _, opErr = copyFiles(fs, parts[2], parts[3], parts[0] == "o=T", depth, 0) case "mk-dir": opErr = fs.Mkdir(parts[0], 0777) case "move__": _, opErr = moveFiles(fs, parts[1], parts[2], parts[0] == 
"o=T") case "rm-all": opErr = fs.RemoveAll(parts[0]) case "stat": var stat os.FileInfo fileName := parts[0] if stat, opErr = fs.Stat(fileName); opErr == nil { if stat.IsDir() { got = "dir" } else { got = strconv.Itoa(int(stat.Size())) } if fileName == "/" { // For a Dir FileSystem, the virtual file system root maps to a // real file system name like "/tmp/webdav-test012345", which does // not end with "/". We skip such cases. } else if statName := stat.Name(); path.Base(fileName) != statName { t.Fatalf("test case #%d %q: file name %q inconsistent with stat name %q", i, tc, fileName, statName) } } } if got == "" { got = errStr(opErr) } if parts[len(parts)-2] != "want" { t.Fatalf("test case #%d %q: invalid %s", i, tc, op) } if want := parts[len(parts)-1]; got != want { t.Fatalf("test case #%d %q: got %q (%v), want %q", i, tc, got, opErr, want) } } } } func TestDir(t *testing.T) { switch runtime.GOOS { case "nacl": t.Skip("see golang.org/issue/12004") case "plan9": t.Skip("see golang.org/issue/11453") } td, err := ioutil.TempDir("", "webdav-test") if err != nil { t.Fatal(err) } defer os.RemoveAll(td) testFS(t, Dir(td)) } func TestMemFS(t *testing.T) { testFS(t, NewMemFS()) } func TestMemFSRoot(t *testing.T) { fs := NewMemFS() for i := 0; i < 5; i++ { stat, err := fs.Stat("/") if err != nil { t.Fatalf("i=%d: Stat: %v", i, err) } if !stat.IsDir() { t.Fatalf("i=%d: Stat.IsDir is false, want true", i) } f, err := fs.OpenFile("/", os.O_RDONLY, 0) if err != nil { t.Fatalf("i=%d: OpenFile: %v", i, err) } defer f.Close() children, err := f.Readdir(-1) if err != nil { t.Fatalf("i=%d: Readdir: %v", i, err) } if len(children) != i { t.Fatalf("i=%d: got %d children, want %d", i, len(children), i) } if _, err := f.Write(make([]byte, 1)); err == nil { t.Fatalf("i=%d: Write: got nil error, want non-nil", i) } if err := fs.Mkdir(fmt.Sprintf("/dir%d", i), 0777); err != nil { t.Fatalf("i=%d: Mkdir: %v", i, err) } } } func TestMemFileReaddir(t *testing.T) { fs := NewMemFS() if err := 
fs.Mkdir("/foo", 0777); err != nil { t.Fatalf("Mkdir: %v", err) } readdir := func(count int) ([]os.FileInfo, error) { f, err := fs.OpenFile("/foo", os.O_RDONLY, 0) if err != nil { t.Fatalf("OpenFile: %v", err) } defer f.Close() return f.Readdir(count) } if got, err := readdir(-1); len(got) != 0 || err != nil { t.Fatalf("readdir(-1): got %d fileInfos with err=%v, want 0, ", len(got), err) } if got, err := readdir(+1); len(got) != 0 || err != io.EOF { t.Fatalf("readdir(+1): got %d fileInfos with err=%v, want 0, EOF", len(got), err) } } func TestMemFile(t *testing.T) { testCases := []string{ "wantData ", "wantSize 0", "write abc", "wantData abc", "write de", "wantData abcde", "wantSize 5", "write 5*x", "write 4*y+2*z", "write 3*st", "wantData abcdexxxxxyyyyzzststst", "wantSize 22", "seek set 4 want 4", "write EFG", "wantData abcdEFGxxxyyyyzzststst", "wantSize 22", "seek set 2 want 2", "read cdEF", "read Gx", "seek cur 0 want 8", "seek cur 2 want 10", "seek cur -1 want 9", "write J", "wantData abcdEFGxxJyyyyzzststst", "wantSize 22", "seek cur -4 want 6", "write ghijk", "wantData abcdEFghijkyyyzzststst", "wantSize 22", "read yyyz", "seek cur 0 want 15", "write ", "seek cur 0 want 15", "read ", "seek cur 0 want 15", "seek end -3 want 19", "write ZZ", "wantData abcdEFghijkyyyzzstsZZt", "wantSize 22", "write 4*A", "wantData abcdEFghijkyyyzzstsZZAAAA", "wantSize 25", "seek end 0 want 25", "seek end -5 want 20", "read Z+4*A", "write 5*B", "wantData abcdEFghijkyyyzzstsZZAAAABBBBB", "wantSize 30", "seek end 10 want 40", "write C", "wantData abcdEFghijkyyyzzstsZZAAAABBBBB..........C", "wantSize 41", "write D", "wantData abcdEFghijkyyyzzstsZZAAAABBBBB..........CD", "wantSize 42", "seek set 43 want 43", "write E", "wantData abcdEFghijkyyyzzstsZZAAAABBBBB..........CD.E", "wantSize 44", "seek set 0 want 0", "write 5*123456789_", "wantData 123456789_123456789_123456789_123456789_123456789_", "wantSize 50", "seek cur 0 want 50", "seek cur -99 want err", } const filename = "/foo" fs 
:= NewMemFS() f, err := fs.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { t.Fatalf("OpenFile: %v", err) } defer f.Close() for i, tc := range testCases { j := strings.IndexByte(tc, ' ') if j < 0 { t.Fatalf("test case #%d %q: invalid command", i, tc) } op, arg := tc[:j], tc[j+1:] // Expand an arg like "3*a+2*b" to "aaabb". parts := strings.Split(arg, "+") for j, part := range parts { if k := strings.IndexByte(part, '*'); k >= 0 { repeatCount, repeatStr := part[:k], part[k+1:] n, err := strconv.Atoi(repeatCount) if err != nil { t.Fatalf("test case #%d %q: invalid repeat count %q", i, tc, repeatCount) } parts[j] = strings.Repeat(repeatStr, n) } } arg = strings.Join(parts, "") switch op { default: t.Fatalf("test case #%d %q: invalid operation %q", i, tc, op) case "read": buf := make([]byte, len(arg)) if _, err := io.ReadFull(f, buf); err != nil { t.Fatalf("test case #%d %q: ReadFull: %v", i, tc, err) } if got := string(buf); got != arg { t.Fatalf("test case #%d %q:\ngot %q\nwant %q", i, tc, got, arg) } case "seek": parts := strings.Split(arg, " ") if len(parts) != 4 { t.Fatalf("test case #%d %q: invalid seek", i, tc) } whence := 0 switch parts[0] { default: t.Fatalf("test case #%d %q: invalid seek whence", i, tc) case "set": whence = os.SEEK_SET case "cur": whence = os.SEEK_CUR case "end": whence = os.SEEK_END } offset, err := strconv.Atoi(parts[1]) if err != nil { t.Fatalf("test case #%d %q: invalid offset %q", i, tc, parts[1]) } if parts[2] != "want" { t.Fatalf("test case #%d %q: invalid seek", i, tc) } if parts[3] == "err" { _, err := f.Seek(int64(offset), whence) if err == nil { t.Fatalf("test case #%d %q: Seek returned nil error, want non-nil", i, tc) } } else { got, err := f.Seek(int64(offset), whence) if err != nil { t.Fatalf("test case #%d %q: Seek: %v", i, tc, err) } want, err := strconv.Atoi(parts[3]) if err != nil { t.Fatalf("test case #%d %q: invalid want %q", i, tc, parts[3]) } if got != int64(want) { t.Fatalf("test case #%d %q: 
got %d, want %d", i, tc, got, want) } } case "write": n, err := f.Write([]byte(arg)) if err != nil { t.Fatalf("test case #%d %q: write: %v", i, tc, err) } if n != len(arg) { t.Fatalf("test case #%d %q: write returned %d bytes, want %d", i, tc, n, len(arg)) } case "wantData": g, err := fs.OpenFile(filename, os.O_RDONLY, 0666) if err != nil { t.Fatalf("test case #%d %q: OpenFile: %v", i, tc, err) } gotBytes, err := ioutil.ReadAll(g) if err != nil { t.Fatalf("test case #%d %q: ReadAll: %v", i, tc, err) } for i, c := range gotBytes { if c == '\x00' { gotBytes[i] = '.' } } got := string(gotBytes) if got != arg { t.Fatalf("test case #%d %q:\ngot %q\nwant %q", i, tc, got, arg) } if err := g.Close(); err != nil { t.Fatalf("test case #%d %q: Close: %v", i, tc, err) } case "wantSize": n, err := strconv.Atoi(arg) if err != nil { t.Fatalf("test case #%d %q: invalid size %q", i, tc, arg) } fi, err := fs.Stat(filename) if err != nil { t.Fatalf("test case #%d %q: Stat: %v", i, tc, err) } if got, want := fi.Size(), int64(n); got != want { t.Fatalf("test case #%d %q: got %d, want %d", i, tc, got, want) } } } } // TestMemFileWriteAllocs tests that writing N consecutive 1KiB chunks to a // memFile doesn't allocate a new buffer for each of those N times. Otherwise, // calling io.Copy(aMemFile, src) is likely to have quadratic complexity. func TestMemFileWriteAllocs(t *testing.T) { fs := NewMemFS() f, err := fs.OpenFile("/xxx", os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { t.Fatalf("OpenFile: %v", err) } defer f.Close() xxx := make([]byte, 1024) for i := range xxx { xxx[i] = 'x' } a := testing.AllocsPerRun(100, func() { f.Write(xxx) }) // AllocsPerRun returns an integral value, so we compare the rounded-down // number to zero. 
if a > 0 { t.Fatalf("%v allocs per run, want 0", a) } } func BenchmarkMemFileWrite(b *testing.B) { fs := NewMemFS() xxx := make([]byte, 1024) for i := range xxx { xxx[i] = 'x' } b.ResetTimer() for i := 0; i < b.N; i++ { f, err := fs.OpenFile("/xxx", os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { b.Fatalf("OpenFile: %v", err) } for j := 0; j < 100; j++ { f.Write(xxx) } if err := f.Close(); err != nil { b.Fatalf("Close: %v", err) } if err := fs.RemoveAll("/xxx"); err != nil { b.Fatalf("RemoveAll: %v", err) } } } func TestCopyMoveProps(t *testing.T) { fs := NewMemFS() create := func(name string) error { f, err := fs.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { return err } _, wErr := f.Write([]byte("contents")) cErr := f.Close() if wErr != nil { return wErr } return cErr } patch := func(name string, patches ...Proppatch) error { f, err := fs.OpenFile(name, os.O_RDWR, 0666) if err != nil { return err } _, pErr := f.(DeadPropsHolder).Patch(patches) cErr := f.Close() if pErr != nil { return pErr } return cErr } props := func(name string) (map[xml.Name]Property, error) { f, err := fs.OpenFile(name, os.O_RDWR, 0666) if err != nil { return nil, err } m, pErr := f.(DeadPropsHolder).DeadProps() cErr := f.Close() if pErr != nil { return nil, pErr } if cErr != nil { return nil, cErr } return m, nil } p0 := Property{ XMLName: xml.Name{Space: "x:", Local: "boat"}, InnerXML: []byte("pea-green"), } p1 := Property{ XMLName: xml.Name{Space: "x:", Local: "ring"}, InnerXML: []byte("1 shilling"), } p2 := Property{ XMLName: xml.Name{Space: "x:", Local: "spoon"}, InnerXML: []byte("runcible"), } p3 := Property{ XMLName: xml.Name{Space: "x:", Local: "moon"}, InnerXML: []byte("light"), } if err := create("/src"); err != nil { t.Fatalf("create /src: %v", err) } if err := patch("/src", Proppatch{Props: []Property{p0, p1}}); err != nil { t.Fatalf("patch /src +p0 +p1: %v", err) } if _, err := copyFiles(fs, "/src", "/tmp", true, infiniteDepth, 0); err != nil { 
t.Fatalf("copyFiles /src /tmp: %v", err) } if _, err := moveFiles(fs, "/tmp", "/dst", true); err != nil { t.Fatalf("moveFiles /tmp /dst: %v", err) } if err := patch("/src", Proppatch{Props: []Property{p0}, Remove: true}); err != nil { t.Fatalf("patch /src -p0: %v", err) } if err := patch("/src", Proppatch{Props: []Property{p2}}); err != nil { t.Fatalf("patch /src +p2: %v", err) } if err := patch("/dst", Proppatch{Props: []Property{p1}, Remove: true}); err != nil { t.Fatalf("patch /dst -p1: %v", err) } if err := patch("/dst", Proppatch{Props: []Property{p3}}); err != nil { t.Fatalf("patch /dst +p3: %v", err) } gotSrc, err := props("/src") if err != nil { t.Fatalf("props /src: %v", err) } wantSrc := map[xml.Name]Property{ p1.XMLName: p1, p2.XMLName: p2, } if !reflect.DeepEqual(gotSrc, wantSrc) { t.Fatalf("props /src:\ngot %v\nwant %v", gotSrc, wantSrc) } gotDst, err := props("/dst") if err != nil { t.Fatalf("props /dst: %v", err) } wantDst := map[xml.Name]Property{ p0.XMLName: p0, p3.XMLName: p3, } if !reflect.DeepEqual(gotDst, wantDst) { t.Fatalf("props /dst:\ngot %v\nwant %v", gotDst, wantDst) } } func TestWalkFS(t *testing.T) { testCases := []struct { desc string buildfs []string startAt string depth int walkFn filepath.WalkFunc want []string }{{ "just root", []string{}, "/", infiniteDepth, nil, []string{ "/", }, }, { "infinite walk from root", []string{ "mkdir /a", "mkdir /a/b", "touch /a/b/c", "mkdir /a/d", "mkdir /e", "touch /f", }, "/", infiniteDepth, nil, []string{ "/", "/a", "/a/b", "/a/b/c", "/a/d", "/e", "/f", }, }, { "infinite walk from subdir", []string{ "mkdir /a", "mkdir /a/b", "touch /a/b/c", "mkdir /a/d", "mkdir /e", "touch /f", }, "/a", infiniteDepth, nil, []string{ "/a", "/a/b", "/a/b/c", "/a/d", }, }, { "depth 1 walk from root", []string{ "mkdir /a", "mkdir /a/b", "touch /a/b/c", "mkdir /a/d", "mkdir /e", "touch /f", }, "/", 1, nil, []string{ "/", "/a", "/e", "/f", }, }, { "depth 1 walk from subdir", []string{ "mkdir /a", "mkdir /a/b", "touch 
/a/b/c", "mkdir /a/b/g", "mkdir /a/b/g/h", "touch /a/b/g/i", "touch /a/b/g/h/j", }, "/a/b", 1, nil, []string{ "/a/b", "/a/b/c", "/a/b/g", }, }, { "depth 0 walk from subdir", []string{ "mkdir /a", "mkdir /a/b", "touch /a/b/c", "mkdir /a/b/g", "mkdir /a/b/g/h", "touch /a/b/g/i", "touch /a/b/g/h/j", }, "/a/b", 0, nil, []string{ "/a/b", }, }, { "infinite walk from file", []string{ "mkdir /a", "touch /a/b", "touch /a/c", }, "/a/b", 0, nil, []string{ "/a/b", }, }, { "infinite walk with skipped subdir", []string{ "mkdir /a", "mkdir /a/b", "touch /a/b/c", "mkdir /a/b/g", "mkdir /a/b/g/h", "touch /a/b/g/i", "touch /a/b/g/h/j", "touch /a/b/z", }, "/", infiniteDepth, func(path string, info os.FileInfo, err error) error { if path == "/a/b/g" { return filepath.SkipDir } return nil }, []string{ "/", "/a", "/a/b", "/a/b/c", "/a/b/z", }, }} for _, tc := range testCases { fs, err := buildTestFS(tc.buildfs) if err != nil { t.Fatalf("%s: cannot create test filesystem: %v", tc.desc, err) } var got []string traceFn := func(path string, info os.FileInfo, err error) error { if tc.walkFn != nil { err = tc.walkFn(path, info, err) if err != nil { return err } } got = append(got, path) return nil } fi, err := fs.Stat(tc.startAt) if err != nil { t.Fatalf("%s: cannot stat: %v", tc.desc, err) } err = walkFS(fs, tc.depth, tc.startAt, fi, traceFn) if err != nil { t.Errorf("%s:\ngot error %v, want nil", tc.desc, err) continue } sort.Strings(got) sort.Strings(tc.want) if !reflect.DeepEqual(got, tc.want) { t.Errorf("%s:\ngot %q\nwant %q", tc.desc, got, tc.want) continue } } } func buildTestFS(buildfs []string) (FileSystem, error) { // TODO: Could this be merged with the build logic in TestFS? 
fs := NewMemFS() for _, b := range buildfs { op := strings.Split(b, " ") switch op[0] { case "mkdir": err := fs.Mkdir(op[1], os.ModeDir|0777) if err != nil { return nil, err } case "touch": f, err := fs.OpenFile(op[1], os.O_RDWR|os.O_CREATE, 0666) if err != nil { return nil, err } f.Close() case "write": f, err := fs.OpenFile(op[1], os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666) if err != nil { return nil, err } _, err = f.Write([]byte(op[2])) f.Close() if err != nil { return nil, err } default: return nil, fmt.Errorf("unknown file operation %q", op[0]) } } return fs, nil }
lxd-2.0.2/dist/src/golang.org/x/net/webdav/lock_test.go
// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package webdav import ( "fmt" "math/rand" "path" "reflect" "sort" "strconv" "strings" "testing" "time" ) func TestWalkToRoot(t *testing.T) { testCases := []struct { name string want []string }{{ "/a/b/c/d", []string{ "/a/b/c/d", "/a/b/c", "/a/b", "/a", "/", }, }, { "/a", []string{ "/a", "/", }, }, { "/", []string{ "/", }, }} for _, tc := range testCases { var got []string if !walkToRoot(tc.name, func(name0 string, first bool) bool { if first != (len(got) == 0) { t.Errorf("name=%q: first=%t but len(got)==%d", tc.name, first, len(got)) return false } got = append(got, name0) return true }) { continue } if !reflect.DeepEqual(got, tc.want) { t.Errorf("name=%q:\ngot %q\nwant %q", tc.name, got, tc.want) } } } var lockTestDurations = []time.Duration{ infiniteTimeout, // infiniteTimeout means to never expire. 0, // A zero duration means to expire immediately. 100 * time.Hour, // A very large duration will not expire in these tests. } // lockTestNames are the names of a set of mutually compatible locks. For each // name fragment: // - _ means no explicit lock.
// - i means an infinite-depth lock, // - z means a zero-depth lock. var lockTestNames = []string{ "/_/_/_/_/z", "/_/_/i", "/_/z", "/_/z/i", "/_/z/z", "/_/z/_/i", "/_/z/_/z", "/i", "/z", "/z/_/i", "/z/_/z", } func lockTestZeroDepth(name string) bool { switch name[len(name)-1] { case 'i': return false case 'z': return true } panic(fmt.Sprintf("lock name %q did not end with 'i' or 'z'", name)) } func TestMemLSCanCreate(t *testing.T) { now := time.Unix(0, 0) m := NewMemLS().(*memLS) for _, name := range lockTestNames { _, err := m.Create(now, LockDetails{ Root: name, Duration: infiniteTimeout, ZeroDepth: lockTestZeroDepth(name), }) if err != nil { t.Fatalf("creating lock for %q: %v", name, err) } } wantCanCreate := func(name string, zeroDepth bool) bool { for _, n := range lockTestNames { switch { case n == name: // An existing lock has the same name as the proposed lock. return false case strings.HasPrefix(n, name): // An existing lock would be a child of the proposed lock, // which conflicts if the proposed lock has infinite depth. if !zeroDepth { return false } case strings.HasPrefix(name, n): // An existing lock would be an ancestor of the proposed lock, // which conflicts if the ancestor has infinite depth.
if n[len(n)-1] == 'i' { return false } } } return true } var check func(int, string) check = func(recursion int, name string) { for _, zeroDepth := range []bool{false, true} { got := m.canCreate(name, zeroDepth) want := wantCanCreate(name, zeroDepth) if got != want { t.Errorf("canCreate name=%q zeroDepth=%t: got %t, want %t", name, zeroDepth, got, want) } } if recursion == 6 { return } if name != "/" { name += "/" } for _, c := range "_iz" { check(recursion+1, name+string(c)) } } check(0, "/") } func TestMemLSLookup(t *testing.T) { now := time.Unix(0, 0) m := NewMemLS().(*memLS) badToken := m.nextToken() t.Logf("badToken=%q", badToken) for _, name := range lockTestNames { token, err := m.Create(now, LockDetails{ Root: name, Duration: infiniteTimeout, ZeroDepth: lockTestZeroDepth(name), }) if err != nil { t.Fatalf("creating lock for %q: %v", name, err) } t.Logf("%-15q -> node=%p token=%q", name, m.byName[name], token) } baseNames := append([]string{"/a", "/b/c"}, lockTestNames...) for _, baseName := range baseNames { for _, suffix := range []string{"", "/0", "/1/2/3"} { name := baseName + suffix goodToken := "" base := m.byName[baseName] if base != nil && (suffix == "" || !lockTestZeroDepth(baseName)) { goodToken = base.token } for _, token := range []string{badToken, goodToken} { if token == "" { continue } got := m.lookup(name, Condition{Token: token}) want := base if token == badToken { want = nil } if got != want { t.Errorf("name=%-20q token=%q (bad=%t): got %p, want %p", name, token, token == badToken, got, want) } } } } } func TestMemLSConfirm(t *testing.T) { now := time.Unix(0, 0) m := NewMemLS().(*memLS) alice, err := m.Create(now, LockDetails{ Root: "/alice", Duration: infiniteTimeout, ZeroDepth: false, }) if err != nil { t.Fatalf("Create: %v", err) } tweedle, err := m.Create(now, LockDetails{ Root: "/tweedle", Duration: infiniteTimeout, ZeroDepth: false, }) if err != nil { t.Fatalf("Create: %v", err) } if err := m.consistent(); err != nil { t.Fatalf("Create: inconsistent state: %v", err) } // Test a
mismatch between name and condition. _, err = m.Confirm(now, "/tweedle/dee", "", Condition{Token: alice}) if err != ErrConfirmationFailed { t.Fatalf("Confirm (mismatch): got %v, want ErrConfirmationFailed", err) } if err := m.consistent(); err != nil { t.Fatalf("Confirm (mismatch): inconsistent state: %v", err) } // Test two names (that fall under the same lock) in the one Confirm call. release, err := m.Confirm(now, "/tweedle/dee", "/tweedle/dum", Condition{Token: tweedle}) if err != nil { t.Fatalf("Confirm (twins): %v", err) } if err := m.consistent(); err != nil { t.Fatalf("Confirm (twins): inconsistent state: %v", err) } release() if err := m.consistent(); err != nil { t.Fatalf("release (twins): inconsistent state: %v", err) } // Test the same two names in overlapping Confirm / release calls. releaseDee, err := m.Confirm(now, "/tweedle/dee", "", Condition{Token: tweedle}) if err != nil { t.Fatalf("Confirm (sequence #0): %v", err) } if err := m.consistent(); err != nil { t.Fatalf("Confirm (sequence #0): inconsistent state: %v", err) } _, err = m.Confirm(now, "/tweedle/dum", "", Condition{Token: tweedle}) if err != ErrConfirmationFailed { t.Fatalf("Confirm (sequence #1): got %v, want ErrConfirmationFailed", err) } if err := m.consistent(); err != nil { t.Fatalf("Confirm (sequence #1): inconsistent state: %v", err) } releaseDee() if err := m.consistent(); err != nil { t.Fatalf("release (sequence #2): inconsistent state: %v", err) } releaseDum, err := m.Confirm(now, "/tweedle/dum", "", Condition{Token: tweedle}) if err != nil { t.Fatalf("Confirm (sequence #3): %v", err) } if err := m.consistent(); err != nil { t.Fatalf("Confirm (sequence #3): inconsistent state: %v", err) } // Test that you can't unlock a held lock. 
err = m.Unlock(now, tweedle) if err != ErrLocked { t.Fatalf("Unlock (sequence #4): got %v, want ErrLocked", err) } releaseDum() if err := m.consistent(); err != nil { t.Fatalf("release (sequence #5): inconsistent state: %v", err) } err = m.Unlock(now, tweedle) if err != nil { t.Fatalf("Unlock (sequence #6): %v", err) } if err := m.consistent(); err != nil { t.Fatalf("Unlock (sequence #6): inconsistent state: %v", err) } } func TestMemLSNonCanonicalRoot(t *testing.T) { now := time.Unix(0, 0) m := NewMemLS().(*memLS) token, err := m.Create(now, LockDetails{ Root: "/foo/./bar//", Duration: 1 * time.Second, }) if err != nil { t.Fatalf("Create: %v", err) } if err := m.consistent(); err != nil { t.Fatalf("Create: inconsistent state: %v", err) } if err := m.Unlock(now, token); err != nil { t.Fatalf("Unlock: %v", err) } if err := m.consistent(); err != nil { t.Fatalf("Unlock: inconsistent state: %v", err) } } func TestMemLSExpiry(t *testing.T) { m := NewMemLS().(*memLS) testCases := []string{ "setNow 0", "create /a.5", "want /a.5", "create /c.6", "want /a.5 /c.6", "create /a/b.7", "want /a.5 /a/b.7 /c.6", "setNow 4", "want /a.5 /a/b.7 /c.6", "setNow 5", "want /a/b.7 /c.6", "setNow 6", "want /a/b.7", "setNow 7", "want ", "setNow 8", "want ", "create /a.12", "create /b.13", "create /c.15", "create /a/d.16", "want /a.12 /a/d.16 /b.13 /c.15", "refresh /a.14", "want /a.14 /a/d.16 /b.13 /c.15", "setNow 12", "want /a.14 /a/d.16 /b.13 /c.15", "setNow 13", "want /a.14 /a/d.16 /c.15", "setNow 14", "want /a/d.16 /c.15", "refresh /a/d.20", "refresh /c.20", "want /a/d.20 /c.20", "setNow 20", "want ", } tokens := map[string]string{} zTime := time.Unix(0, 0) now := zTime for i, tc := range testCases { j := strings.IndexByte(tc, ' ') if j < 0 { t.Fatalf("test case #%d %q: invalid command", i, tc) } op, arg := tc[:j], tc[j+1:] switch op { default: t.Fatalf("test case #%d %q: invalid operation %q", i, tc, op) case "create", "refresh": parts := strings.Split(arg, ".") if len(parts) != 2 { 
t.Fatalf("test case #%d %q: invalid create", i, tc) } root := parts[0] d, err := strconv.Atoi(parts[1]) if err != nil { t.Fatalf("test case #%d %q: invalid duration", i, tc) } dur := time.Unix(0, 0).Add(time.Duration(d) * time.Second).Sub(now) switch op { case "create": token, err := m.Create(now, LockDetails{ Root: root, Duration: dur, ZeroDepth: true, }) if err != nil { t.Fatalf("test case #%d %q: Create: %v", i, tc, err) } tokens[root] = token case "refresh": token := tokens[root] if token == "" { t.Fatalf("test case #%d %q: no token for %q", i, tc, root) } got, err := m.Refresh(now, token, dur) if err != nil { t.Fatalf("test case #%d %q: Refresh: %v", i, tc, err) } want := LockDetails{ Root: root, Duration: dur, ZeroDepth: true, } if got != want { t.Fatalf("test case #%d %q:\ngot %v\nwant %v", i, tc, got, want) } } case "setNow": d, err := strconv.Atoi(arg) if err != nil { t.Fatalf("test case #%d %q: invalid duration", i, tc) } now = time.Unix(0, 0).Add(time.Duration(d) * time.Second) case "want": m.mu.Lock() m.collectExpiredNodes(now) got := make([]string, 0, len(m.byToken)) for _, n := range m.byToken { got = append(got, fmt.Sprintf("%s.%d", n.details.Root, n.expiry.Sub(zTime)/time.Second)) } m.mu.Unlock() sort.Strings(got) want := []string{} if arg != "" { want = strings.Split(arg, " ") } if !reflect.DeepEqual(got, want) { t.Fatalf("test case #%d %q:\ngot %q\nwant %q", i, tc, got, want) } } if err := m.consistent(); err != nil { t.Fatalf("test case #%d %q: inconsistent state: %v", i, tc, err) } } } func TestMemLS(t *testing.T) { now := time.Unix(0, 0) m := NewMemLS().(*memLS) rng := rand.New(rand.NewSource(0)) tokens := map[string]string{} nConfirm, nCreate, nRefresh, nUnlock := 0, 0, 0, 0 const N = 2000 for i := 0; i < N; i++ { name := lockTestNames[rng.Intn(len(lockTestNames))] duration := lockTestDurations[rng.Intn(len(lockTestDurations))] confirmed, unlocked := false, false // If the name was already locked, we randomly confirm/release, refresh // or 
unlock it. Otherwise, we create a lock. token := tokens[name] if token != "" { switch rng.Intn(3) { case 0: confirmed = true nConfirm++ release, err := m.Confirm(now, name, "", Condition{Token: token}) if err != nil { t.Fatalf("iteration #%d: Confirm %q: %v", i, name, err) } if err := m.consistent(); err != nil { t.Fatalf("iteration #%d: inconsistent state: %v", i, err) } release() case 1: nRefresh++ if _, err := m.Refresh(now, token, duration); err != nil { t.Fatalf("iteration #%d: Refresh %q: %v", i, name, err) } case 2: unlocked = true nUnlock++ if err := m.Unlock(now, token); err != nil { t.Fatalf("iteration #%d: Unlock %q: %v", i, name, err) } } } else { nCreate++ var err error token, err = m.Create(now, LockDetails{ Root: name, Duration: duration, ZeroDepth: lockTestZeroDepth(name), }) if err != nil { t.Fatalf("iteration #%d: Create %q: %v", i, name, err) } } if !confirmed { if duration == 0 || unlocked { // A zero-duration lock should expire immediately and is // effectively equivalent to being unlocked. tokens[name] = "" } else { tokens[name] = token } } if err := m.consistent(); err != nil { t.Fatalf("iteration #%d: inconsistent state: %v", i, err) } } if nConfirm < N/10 { t.Fatalf("too few Confirm calls: got %d, want >= %d", nConfirm, N/10) } if nCreate < N/10 { t.Fatalf("too few Create calls: got %d, want >= %d", nCreate, N/10) } if nRefresh < N/10 { t.Fatalf("too few Refresh calls: got %d, want >= %d", nRefresh, N/10) } if nUnlock < N/10 { t.Fatalf("too few Unlock calls: got %d, want >= %d", nUnlock, N/10) } } func (m *memLS) consistent() error { m.mu.Lock() defer m.mu.Unlock() // If m.byName is non-empty, then it must contain an entry for the root "/", // and its refCount should equal the number of locked nodes. 
if len(m.byName) > 0 { n := m.byName["/"] if n == nil { return fmt.Errorf(`non-empty m.byName does not contain the root "/"`) } if n.refCount != len(m.byToken) { return fmt.Errorf("root node refCount=%d, differs from len(m.byToken)=%d", n.refCount, len(m.byToken)) } } for name, n := range m.byName { // The map keys should be consistent with the node's copy of the key. if n.details.Root != name { return fmt.Errorf("node name %q != byName map key %q", n.details.Root, name) } // A name must be clean, and start with a "/". if len(name) == 0 || name[0] != '/' { return fmt.Errorf(`node name %q does not start with "/"`, name) } if name != path.Clean(name) { return fmt.Errorf(`node name %q is not clean`, name) } // A node's refCount should be positive. if n.refCount <= 0 { return fmt.Errorf("non-positive refCount for node at name %q", name) } // A node's refCount should be the number of self-or-descendents that // are locked (i.e. have a non-empty token). var list []string for name0, n0 := range m.byName { // All of lockTestNames' name fragments are one byte long: '_', 'i' or 'z', // so strings.HasPrefix is equivalent to self-or-descendent name match. // We don't have to worry about "/foo/bar" being a false positive match // for "/foo/b". if strings.HasPrefix(name0, name) && n0.token != "" { list = append(list, name0) } } if n.refCount != len(list) { sort.Strings(list) return fmt.Errorf("node at name %q has refCount %d but locked self-or-descendents are %q (len=%d)", name, n.refCount, list, len(list)) } // A node n is in m.byToken if it has a non-empty token. if n.token != "" { if _, ok := m.byToken[n.token]; !ok { return fmt.Errorf("node at name %q has token %q but not in m.byToken", name, n.token) } } // A node n is in m.byExpiry if it has a non-negative byExpiryIndex. 
if n.byExpiryIndex >= 0 { if n.byExpiryIndex >= len(m.byExpiry) { return fmt.Errorf("node at name %q has byExpiryIndex %d but m.byExpiry has length %d", name, n.byExpiryIndex, len(m.byExpiry)) } if n != m.byExpiry[n.byExpiryIndex] { return fmt.Errorf("node at name %q has byExpiryIndex %d but that indexes a different node", name, n.byExpiryIndex) } } } for token, n := range m.byToken { // The map keys should be consistent with the node's copy of the key. if n.token != token { return fmt.Errorf("node token %q != byToken map key %q", n.token, token) } // Every node in m.byToken is in m.byName. if _, ok := m.byName[n.details.Root]; !ok { return fmt.Errorf("node at name %q in m.byToken but not in m.byName", n.details.Root) } } for i, n := range m.byExpiry { // The slice indices should be consistent with the node's copy of the index. if n.byExpiryIndex != i { return fmt.Errorf("node byExpiryIndex %d != byExpiry slice index %d", n.byExpiryIndex, i) } // Every node in m.byExpiry is in m.byName. if _, ok := m.byName[n.details.Root]; !ok { return fmt.Errorf("node at name %q in m.byExpiry but not in m.byName", n.details.Root) } // No node in m.byExpiry should be held. 
if n.held { return fmt.Errorf("node at name %q in m.byExpiry is held", n.details.Root) } } return nil } func TestParseTimeout(t *testing.T) { testCases := []struct { s string want time.Duration wantErr error }{{ "", infiniteTimeout, nil, }, { "Infinite", infiniteTimeout, nil, }, { "Infinitesimal", 0, errInvalidTimeout, }, { "infinite", 0, errInvalidTimeout, }, { "Second-0", 0 * time.Second, nil, }, { "Second-123", 123 * time.Second, nil, }, { " Second-456 ", 456 * time.Second, nil, }, { "Second-4100000000", 4100000000 * time.Second, nil, }, { "junk", 0, errInvalidTimeout, }, { "Second-", 0, errInvalidTimeout, }, { "Second--1", 0, errInvalidTimeout, }, { "Second--123", 0, errInvalidTimeout, }, { "Second-+123", 0, errInvalidTimeout, }, { "Second-0x123", 0, errInvalidTimeout, }, { "second-123", 0, errInvalidTimeout, }, { "Second-4294967295", 4294967295 * time.Second, nil, }, { // Section 10.7 says that "The timeout value for TimeType "Second" // must not be greater than 2^32-1." "Second-4294967296", 0, errInvalidTimeout, }, { // This test case comes from section 9.10.9 of the spec. It says, // // "In this request, the client has specified that it desires an // infinite-length lock, if available, otherwise a timeout of 4.1 // billion seconds, if available." // // The Go WebDAV package always supports infinite length locks, // and ignores the fallback after the comma. "Infinite, Second-4100000000", infiniteTimeout, nil, }} for _, tc := range testCases { got, gotErr := parseTimeout(tc.s) if got != tc.want || gotErr != tc.wantErr { t.Errorf("parsing %q:\ngot %v, %v\nwant %v, %v", tc.s, got, gotErr, tc.want, tc.wantErr) } } }
lxd-2.0.2/dist/src/golang.org/x/net/webdav/file.go
// Copyright 2014 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file.
package webdav import ( "encoding/xml" "io" "net/http" "os" "path" "path/filepath" "strings" "sync" "time" ) // slashClean is equivalent to but slightly more efficient than // path.Clean("/" + name). func slashClean(name string) string { if name == "" || name[0] != '/' { name = "/" + name } return path.Clean(name) } // A FileSystem implements access to a collection of named files. The elements // in a file path are separated by slash ('/', U+002F) characters, regardless // of host operating system convention. // // Each method has the same semantics as the os package's function of the same // name. // // Note that the os.Rename documentation says that "OS-specific restrictions // might apply". In particular, whether or not renaming a file or directory // overwriting another existing file or directory is an error is OS-dependent. type FileSystem interface { Mkdir(name string, perm os.FileMode) error OpenFile(name string, flag int, perm os.FileMode) (File, error) RemoveAll(name string) error Rename(oldName, newName string) error Stat(name string) (os.FileInfo, error) } // A File is returned by a FileSystem's OpenFile method and can be served by a // Handler. // // A File may optionally implement the DeadPropsHolder interface, if it can // load and save dead properties. type File interface { http.File io.Writer } // A Dir implements FileSystem using the native file system restricted to a // specific directory tree. // // While the FileSystem.OpenFile method takes '/'-separated paths, a Dir's // string value is a filename on the native file system, not a URL, so it is // separated by filepath.Separator, which isn't necessarily '/'. // // An empty Dir is treated as ".". type Dir string func (d Dir) resolve(name string) string { // This implementation is based on Dir.Open's code in the standard net/http package. 
if filepath.Separator != '/' && strings.IndexRune(name, filepath.Separator) >= 0 || strings.Contains(name, "\x00") { return "" } dir := string(d) if dir == "" { dir = "." } return filepath.Join(dir, filepath.FromSlash(slashClean(name))) } func (d Dir) Mkdir(name string, perm os.FileMode) error { if name = d.resolve(name); name == "" { return os.ErrNotExist } return os.Mkdir(name, perm) } func (d Dir) OpenFile(name string, flag int, perm os.FileMode) (File, error) { if name = d.resolve(name); name == "" { return nil, os.ErrNotExist } f, err := os.OpenFile(name, flag, perm) if err != nil { return nil, err } return f, nil } func (d Dir) RemoveAll(name string) error { if name = d.resolve(name); name == "" { return os.ErrNotExist } if name == filepath.Clean(string(d)) { // Prohibit removing the virtual root directory. return os.ErrInvalid } return os.RemoveAll(name) } func (d Dir) Rename(oldName, newName string) error { if oldName = d.resolve(oldName); oldName == "" { return os.ErrNotExist } if newName = d.resolve(newName); newName == "" { return os.ErrNotExist } if root := filepath.Clean(string(d)); root == oldName || root == newName { // Prohibit renaming from or to the virtual root directory. return os.ErrInvalid } return os.Rename(oldName, newName) } func (d Dir) Stat(name string) (os.FileInfo, error) { if name = d.resolve(name); name == "" { return nil, os.ErrNotExist } return os.Stat(name) } // NewMemFS returns a new in-memory FileSystem implementation. func NewMemFS() FileSystem { return &memFS{ root: memFSNode{ children: make(map[string]*memFSNode), mode: 0660 | os.ModeDir, modTime: time.Now(), }, } } // A memFS implements FileSystem, storing all metadata and actual file data // in-memory. No limits on filesystem size are used, so it is not recommended // this be used where the clients are untrusted. // // Concurrent access is permitted. The tree structure is protected by a mutex, // and each node's contents and metadata are protected by a per-node mutex. 
// // TODO: Enforce file permissions. type memFS struct { mu sync.Mutex root memFSNode } // TODO: clean up and rationalize the walk/find code. // walk walks the directory tree for the fullname, calling f at each step. If f // returns an error, the walk will be aborted and return that same error. // // dir is the directory at that step, frag is the name fragment, and final is // whether it is the final step. For example, walking "/foo/bar/x" will result // in 3 calls to f: // - "/", "foo", false // - "/foo/", "bar", false // - "/foo/bar/", "x", true // The frag argument will be empty only if dir is the root node and the walk // ends at that root node. func (fs *memFS) walk(op, fullname string, f func(dir *memFSNode, frag string, final bool) error) error { original := fullname fullname = slashClean(fullname) // Strip any leading "/"s to make fullname a relative path, as the walk // starts at fs.root. if fullname[0] == '/' { fullname = fullname[1:] } dir := &fs.root for { frag, remaining := fullname, "" i := strings.IndexRune(fullname, '/') final := i < 0 if !final { frag, remaining = fullname[:i], fullname[i+1:] } if frag == "" && dir != &fs.root { panic("webdav: empty path fragment for a clean path") } if err := f(dir, frag, final); err != nil { return &os.PathError{ Op: op, Path: original, Err: err, } } if final { break } child := dir.children[frag] if child == nil { return &os.PathError{ Op: op, Path: original, Err: os.ErrNotExist, } } if !child.mode.IsDir() { return &os.PathError{ Op: op, Path: original, Err: os.ErrInvalid, } } dir, fullname = child, remaining } return nil } // find returns the parent of the named node and the relative name fragment // from the parent to the child. For example, if finding "/foo/bar/baz" then // parent will be the node for "/foo/bar" and frag will be "baz". // // If the fullname names the root node, then parent, frag and err will be zero. 
// // find returns an error if the parent does not already exist or the parent // isn't a directory, but it will not return an error per se if the child does // not already exist. The error returned is either nil or an *os.PathError // whose Op is op. func (fs *memFS) find(op, fullname string) (parent *memFSNode, frag string, err error) { err = fs.walk(op, fullname, func(parent0 *memFSNode, frag0 string, final bool) error { if !final { return nil } if frag0 != "" { parent, frag = parent0, frag0 } return nil }) return parent, frag, err } func (fs *memFS) Mkdir(name string, perm os.FileMode) error { fs.mu.Lock() defer fs.mu.Unlock() dir, frag, err := fs.find("mkdir", name) if err != nil { return err } if dir == nil { // We can't create the root. return os.ErrInvalid } if _, ok := dir.children[frag]; ok { return os.ErrExist } dir.children[frag] = &memFSNode{ children: make(map[string]*memFSNode), mode: perm.Perm() | os.ModeDir, modTime: time.Now(), } return nil } func (fs *memFS) OpenFile(name string, flag int, perm os.FileMode) (File, error) { fs.mu.Lock() defer fs.mu.Unlock() dir, frag, err := fs.find("open", name) if err != nil { return nil, err } var n *memFSNode if dir == nil { // We're opening the root. if flag&(os.O_WRONLY|os.O_RDWR) != 0 { return nil, os.ErrPermission } n, frag = &fs.root, "/" } else { n = dir.children[frag] if flag&(os.O_SYNC|os.O_APPEND) != 0 { // memFile doesn't support these flags yet. 
return nil, os.ErrInvalid } if flag&os.O_CREATE != 0 { if flag&os.O_EXCL != 0 && n != nil { return nil, os.ErrExist } if n == nil { n = &memFSNode{ mode: perm.Perm(), } dir.children[frag] = n } } if n == nil { return nil, os.ErrNotExist } if flag&(os.O_WRONLY|os.O_RDWR) != 0 && flag&os.O_TRUNC != 0 { n.mu.Lock() n.data = nil n.mu.Unlock() } } children := make([]os.FileInfo, 0, len(n.children)) for cName, c := range n.children { children = append(children, c.stat(cName)) } return &memFile{ n: n, nameSnapshot: frag, childrenSnapshot: children, }, nil } func (fs *memFS) RemoveAll(name string) error { fs.mu.Lock() defer fs.mu.Unlock() dir, frag, err := fs.find("remove", name) if err != nil { return err } if dir == nil { // We can't remove the root. return os.ErrInvalid } delete(dir.children, frag) return nil } func (fs *memFS) Rename(oldName, newName string) error { fs.mu.Lock() defer fs.mu.Unlock() oldName = slashClean(oldName) newName = slashClean(newName) if oldName == newName { return nil } if strings.HasPrefix(newName, oldName+"/") { // We can't rename oldName to be a sub-directory of itself. return os.ErrInvalid } oDir, oFrag, err := fs.find("rename", oldName) if err != nil { return err } if oDir == nil { // We can't rename from the root. return os.ErrInvalid } nDir, nFrag, err := fs.find("rename", newName) if err != nil { return err } if nDir == nil { // We can't rename to the root. return os.ErrInvalid } oNode, ok := oDir.children[oFrag] if !ok { return os.ErrNotExist } if oNode.children != nil { if nNode, ok := nDir.children[nFrag]; ok { if nNode.children == nil { return errNotADirectory } if len(nNode.children) != 0 { return errDirectoryNotEmpty } } } delete(oDir.children, oFrag) nDir.children[nFrag] = oNode return nil } func (fs *memFS) Stat(name string) (os.FileInfo, error) { fs.mu.Lock() defer fs.mu.Unlock() dir, frag, err := fs.find("stat", name) if err != nil { return nil, err } if dir == nil { // We're stat'ting the root. 
return fs.root.stat("/"), nil } if n, ok := dir.children[frag]; ok { return n.stat(path.Base(name)), nil } return nil, os.ErrNotExist } // A memFSNode represents a single entry in the in-memory filesystem and also // implements os.FileInfo. type memFSNode struct { // children is protected by memFS.mu. children map[string]*memFSNode mu sync.Mutex data []byte mode os.FileMode modTime time.Time deadProps map[xml.Name]Property } func (n *memFSNode) stat(name string) *memFileInfo { n.mu.Lock() defer n.mu.Unlock() return &memFileInfo{ name: name, size: int64(len(n.data)), mode: n.mode, modTime: n.modTime, } } func (n *memFSNode) DeadProps() (map[xml.Name]Property, error) { n.mu.Lock() defer n.mu.Unlock() if len(n.deadProps) == 0 { return nil, nil } ret := make(map[xml.Name]Property, len(n.deadProps)) for k, v := range n.deadProps { ret[k] = v } return ret, nil } func (n *memFSNode) Patch(patches []Proppatch) ([]Propstat, error) { n.mu.Lock() defer n.mu.Unlock() pstat := Propstat{Status: http.StatusOK} for _, patch := range patches { for _, p := range patch.Props { pstat.Props = append(pstat.Props, Property{XMLName: p.XMLName}) if patch.Remove { delete(n.deadProps, p.XMLName) continue } if n.deadProps == nil { n.deadProps = map[xml.Name]Property{} } n.deadProps[p.XMLName] = p } } return []Propstat{pstat}, nil } type memFileInfo struct { name string size int64 mode os.FileMode modTime time.Time } func (f *memFileInfo) Name() string { return f.name } func (f *memFileInfo) Size() int64 { return f.size } func (f *memFileInfo) Mode() os.FileMode { return f.mode } func (f *memFileInfo) ModTime() time.Time { return f.modTime } func (f *memFileInfo) IsDir() bool { return f.mode.IsDir() } func (f *memFileInfo) Sys() interface{} { return nil } // A memFile is a File implementation for a memFSNode. It is a per-file (not // per-node) read/write position, and a snapshot of the memFS' tree structure // (a node's name and children) for that node. 
type memFile struct { n *memFSNode nameSnapshot string childrenSnapshot []os.FileInfo // pos is protected by n.mu. pos int } // A *memFile implements the optional DeadPropsHolder interface. var _ DeadPropsHolder = (*memFile)(nil) func (f *memFile) DeadProps() (map[xml.Name]Property, error) { return f.n.DeadProps() } func (f *memFile) Patch(patches []Proppatch) ([]Propstat, error) { return f.n.Patch(patches) } func (f *memFile) Close() error { return nil } func (f *memFile) Read(p []byte) (int, error) { f.n.mu.Lock() defer f.n.mu.Unlock() if f.n.mode.IsDir() { return 0, os.ErrInvalid } if f.pos >= len(f.n.data) { return 0, io.EOF } n := copy(p, f.n.data[f.pos:]) f.pos += n return n, nil } func (f *memFile) Readdir(count int) ([]os.FileInfo, error) { f.n.mu.Lock() defer f.n.mu.Unlock() if !f.n.mode.IsDir() { return nil, os.ErrInvalid } old := f.pos if old >= len(f.childrenSnapshot) { // The os.File Readdir docs say that at the end of a directory, // the error is io.EOF if count > 0 and nil if count <= 0. if count > 0 { return nil, io.EOF } return nil, nil } if count > 0 { f.pos += count if f.pos > len(f.childrenSnapshot) { f.pos = len(f.childrenSnapshot) } } else { f.pos = len(f.childrenSnapshot) old = 0 } return f.childrenSnapshot[old:f.pos], nil } func (f *memFile) Seek(offset int64, whence int) (int64, error) { f.n.mu.Lock() defer f.n.mu.Unlock() npos := f.pos // TODO: How to handle offsets greater than the size of system int? 
switch whence { case os.SEEK_SET: npos = int(offset) case os.SEEK_CUR: npos += int(offset) case os.SEEK_END: npos = len(f.n.data) + int(offset) default: npos = -1 } if npos < 0 { return 0, os.ErrInvalid } f.pos = npos return int64(f.pos), nil } func (f *memFile) Stat() (os.FileInfo, error) { return f.n.stat(f.nameSnapshot), nil } func (f *memFile) Write(p []byte) (int, error) { lenp := len(p) f.n.mu.Lock() defer f.n.mu.Unlock() if f.n.mode.IsDir() { return 0, os.ErrInvalid } if f.pos < len(f.n.data) { n := copy(f.n.data[f.pos:], p) f.pos += n p = p[n:] } else if f.pos > len(f.n.data) { // Write permits the creation of holes, if we've seek'ed past the // existing end of file. if f.pos <= cap(f.n.data) { oldLen := len(f.n.data) f.n.data = f.n.data[:f.pos] hole := f.n.data[oldLen:] for i := range hole { hole[i] = 0 } } else { d := make([]byte, f.pos, f.pos+len(p)) copy(d, f.n.data) f.n.data = d } } if len(p) > 0 { // We should only get here if f.pos == len(f.n.data). f.n.data = append(f.n.data, p...) f.pos = len(f.n.data) } f.n.modTime = time.Now() return lenp, nil } // moveFiles moves files and/or directories from src to dst. // // See section 9.9.4 for when various HTTP status codes apply. func moveFiles(fs FileSystem, src, dst string, overwrite bool) (status int, err error) { created := false if _, err := fs.Stat(dst); err != nil { if !os.IsNotExist(err) { return http.StatusForbidden, err } created = true } else if overwrite { // Section 9.9.3 says that "If a resource exists at the destination // and the Overwrite header is "T", then prior to performing the move, // the server must perform a DELETE with "Depth: infinity" on the // destination resource. 
if err := fs.RemoveAll(dst); err != nil { return http.StatusForbidden, err } } else { return http.StatusPreconditionFailed, os.ErrExist } if err := fs.Rename(src, dst); err != nil { return http.StatusForbidden, err } if created { return http.StatusCreated, nil } return http.StatusNoContent, nil } func copyProps(dst, src File) error { d, ok := dst.(DeadPropsHolder) if !ok { return nil } s, ok := src.(DeadPropsHolder) if !ok { return nil } m, err := s.DeadProps() if err != nil { return err } props := make([]Property, 0, len(m)) for _, prop := range m { props = append(props, prop) } _, err = d.Patch([]Proppatch{{Props: props}}) return err } // copyFiles copies files and/or directories from src to dst. // // See section 9.8.5 for when various HTTP status codes apply. func copyFiles(fs FileSystem, src, dst string, overwrite bool, depth int, recursion int) (status int, err error) { if recursion == 1000 { return http.StatusInternalServerError, errRecursionTooDeep } recursion++ // TODO: section 9.8.3 says that "Note that an infinite-depth COPY of /A/ // into /A/B/ could lead to infinite recursion if not handled correctly." 
srcFile, err := fs.OpenFile(src, os.O_RDONLY, 0) if err != nil { if os.IsNotExist(err) { return http.StatusNotFound, err } return http.StatusInternalServerError, err } defer srcFile.Close() srcStat, err := srcFile.Stat() if err != nil { if os.IsNotExist(err) { return http.StatusNotFound, err } return http.StatusInternalServerError, err } srcPerm := srcStat.Mode() & os.ModePerm created := false if _, err := fs.Stat(dst); err != nil { if os.IsNotExist(err) { created = true } else { return http.StatusForbidden, err } } else { if !overwrite { return http.StatusPreconditionFailed, os.ErrExist } if err := fs.RemoveAll(dst); err != nil && !os.IsNotExist(err) { return http.StatusForbidden, err } } if srcStat.IsDir() { if err := fs.Mkdir(dst, srcPerm); err != nil { return http.StatusForbidden, err } if depth == infiniteDepth { children, err := srcFile.Readdir(-1) if err != nil { return http.StatusForbidden, err } for _, c := range children { name := c.Name() s := path.Join(src, name) d := path.Join(dst, name) cStatus, cErr := copyFiles(fs, s, d, overwrite, depth, recursion) if cErr != nil { // TODO: MultiStatus. return cStatus, cErr } } } } else { dstFile, err := fs.OpenFile(dst, os.O_RDWR|os.O_CREATE|os.O_TRUNC, srcPerm) if err != nil { if os.IsNotExist(err) { return http.StatusConflict, err } return http.StatusForbidden, err } _, copyErr := io.Copy(dstFile, srcFile) propsErr := copyProps(dstFile, srcFile) closeErr := dstFile.Close() if copyErr != nil { return http.StatusInternalServerError, copyErr } if propsErr != nil { return http.StatusInternalServerError, propsErr } if closeErr != nil { return http.StatusInternalServerError, closeErr } } if created { return http.StatusCreated, nil } return http.StatusNoContent, nil } // walkFS traverses filesystem fs starting at name up to depth levels. // // Allowed values for depth are 0, 1 or infiniteDepth. For each visited node, // walkFS calls walkFn. 
// If a visited file system node is a directory and walkFn returns
// filepath.SkipDir, walkFS will skip traversal of this node.
func walkFS(fs FileSystem, depth int, name string, info os.FileInfo, walkFn filepath.WalkFunc) error {
	// This implementation is based on Walk's code in the standard path/filepath package.
	err := walkFn(name, info, nil)
	if err != nil {
		if info.IsDir() && err == filepath.SkipDir {
			return nil
		}
		return err
	}
	if !info.IsDir() || depth == 0 {
		return nil
	}
	if depth == 1 {
		depth = 0
	}

	// Read directory names.
	f, err := fs.OpenFile(name, os.O_RDONLY, 0)
	if err != nil {
		return walkFn(name, info, err)
	}
	fileInfos, err := f.Readdir(0)
	f.Close()
	if err != nil {
		return walkFn(name, info, err)
	}

	for _, fileInfo := range fileInfos {
		filename := path.Join(name, fileInfo.Name())
		fileInfo, err := fs.Stat(filename)
		if err != nil {
			if err := walkFn(filename, fileInfo, err); err != nil && err != filepath.SkipDir {
				return err
			}
		} else {
			err = walkFS(fs, depth, filename, fileInfo, walkFn)
			if err != nil {
				if !fileInfo.IsDir() || err != filepath.SkipDir {
					return err
				}
			}
		}
	}
	return nil
}

lxd-2.0.2/dist/src/golang.org/x/net/webdav/prop_test.go

// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package webdav

import (
	"encoding/xml"
	"fmt"
	"net/http"
	"os"
	"reflect"
	"sort"
	"testing"
)

func TestMemPS(t *testing.T) {
	// calcProps calculates the getlastmodified and getetag DAV: property
	// values in pstats for resource name in file-system fs.
calcProps := func(name string, fs FileSystem, ls LockSystem, pstats []Propstat) error { fi, err := fs.Stat(name) if err != nil { return err } for _, pst := range pstats { for i, p := range pst.Props { switch p.XMLName { case xml.Name{Space: "DAV:", Local: "getlastmodified"}: p.InnerXML = []byte(fi.ModTime().Format(http.TimeFormat)) pst.Props[i] = p case xml.Name{Space: "DAV:", Local: "getetag"}: if fi.IsDir() { continue } etag, err := findETag(fs, ls, name, fi) if err != nil { return err } p.InnerXML = []byte(etag) pst.Props[i] = p } } } return nil } const ( lockEntry = `` + `` + `` + `` + `` statForbiddenError = `` ) type propOp struct { op string name string pnames []xml.Name patches []Proppatch wantPnames []xml.Name wantPropstats []Propstat } testCases := []struct { desc string noDeadProps bool buildfs []string propOp []propOp }{{ desc: "propname", buildfs: []string{"mkdir /dir", "touch /file"}, propOp: []propOp{{ op: "propname", name: "/dir", wantPnames: []xml.Name{ {Space: "DAV:", Local: "resourcetype"}, {Space: "DAV:", Local: "displayname"}, {Space: "DAV:", Local: "supportedlock"}, }, }, { op: "propname", name: "/file", wantPnames: []xml.Name{ {Space: "DAV:", Local: "resourcetype"}, {Space: "DAV:", Local: "displayname"}, {Space: "DAV:", Local: "getcontentlength"}, {Space: "DAV:", Local: "getlastmodified"}, {Space: "DAV:", Local: "getcontenttype"}, {Space: "DAV:", Local: "getetag"}, {Space: "DAV:", Local: "supportedlock"}, }, }}, }, { desc: "allprop dir and file", buildfs: []string{"mkdir /dir", "write /file foobarbaz"}, propOp: []propOp{{ op: "allprop", name: "/dir", wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "resourcetype"}, InnerXML: []byte(``), }, { XMLName: xml.Name{Space: "DAV:", Local: "displayname"}, InnerXML: []byte("dir"), }, { XMLName: xml.Name{Space: "DAV:", Local: "supportedlock"}, InnerXML: []byte(lockEntry), }}, }}, }, { op: "allprop", name: "/file", wantPropstats: []Propstat{{ 
Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "resourcetype"}, InnerXML: []byte(""), }, { XMLName: xml.Name{Space: "DAV:", Local: "displayname"}, InnerXML: []byte("file"), }, { XMLName: xml.Name{Space: "DAV:", Local: "getcontentlength"}, InnerXML: []byte("9"), }, { XMLName: xml.Name{Space: "DAV:", Local: "getlastmodified"}, InnerXML: nil, // Calculated during test. }, { XMLName: xml.Name{Space: "DAV:", Local: "getcontenttype"}, InnerXML: []byte("text/plain; charset=utf-8"), }, { XMLName: xml.Name{Space: "DAV:", Local: "getetag"}, InnerXML: nil, // Calculated during test. }, { XMLName: xml.Name{Space: "DAV:", Local: "supportedlock"}, InnerXML: []byte(lockEntry), }}, }}, }, { op: "allprop", name: "/file", pnames: []xml.Name{ {"DAV:", "resourcetype"}, {"foo", "bar"}, }, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "resourcetype"}, InnerXML: []byte(""), }, { XMLName: xml.Name{Space: "DAV:", Local: "displayname"}, InnerXML: []byte("file"), }, { XMLName: xml.Name{Space: "DAV:", Local: "getcontentlength"}, InnerXML: []byte("9"), }, { XMLName: xml.Name{Space: "DAV:", Local: "getlastmodified"}, InnerXML: nil, // Calculated during test. }, { XMLName: xml.Name{Space: "DAV:", Local: "getcontenttype"}, InnerXML: []byte("text/plain; charset=utf-8"), }, { XMLName: xml.Name{Space: "DAV:", Local: "getetag"}, InnerXML: nil, // Calculated during test. 
}, { XMLName: xml.Name{Space: "DAV:", Local: "supportedlock"}, InnerXML: []byte(lockEntry), }}}, { Status: http.StatusNotFound, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}}, }, }}, }, { desc: "propfind DAV:resourcetype", buildfs: []string{"mkdir /dir", "touch /file"}, propOp: []propOp{{ op: "propfind", name: "/dir", pnames: []xml.Name{{"DAV:", "resourcetype"}}, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "resourcetype"}, InnerXML: []byte(``), }}, }}, }, { op: "propfind", name: "/file", pnames: []xml.Name{{"DAV:", "resourcetype"}}, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "resourcetype"}, InnerXML: []byte(""), }}, }}, }}, }, { desc: "propfind unsupported DAV properties", buildfs: []string{"mkdir /dir"}, propOp: []propOp{{ op: "propfind", name: "/dir", pnames: []xml.Name{{"DAV:", "getcontentlanguage"}}, wantPropstats: []Propstat{{ Status: http.StatusNotFound, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "getcontentlanguage"}, }}, }}, }, { op: "propfind", name: "/dir", pnames: []xml.Name{{"DAV:", "creationdate"}}, wantPropstats: []Propstat{{ Status: http.StatusNotFound, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "creationdate"}, }}, }}, }}, }, { desc: "propfind getetag for files but not for directories", buildfs: []string{"mkdir /dir", "touch /file"}, propOp: []propOp{{ op: "propfind", name: "/dir", pnames: []xml.Name{{"DAV:", "getetag"}}, wantPropstats: []Propstat{{ Status: http.StatusNotFound, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "getetag"}, }}, }}, }, { op: "propfind", name: "/file", pnames: []xml.Name{{"DAV:", "getetag"}}, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "getetag"}, InnerXML: nil, // Calculated during test. 
}}, }}, }}, }, { desc: "proppatch property on no-dead-properties file system", buildfs: []string{"mkdir /dir"}, noDeadProps: true, propOp: []propOp{{ op: "proppatch", name: "/dir", patches: []Proppatch{{ Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, wantPropstats: []Propstat{{ Status: http.StatusForbidden, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, }, { op: "proppatch", name: "/dir", patches: []Proppatch{{ Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "getetag"}, }}, }}, wantPropstats: []Propstat{{ Status: http.StatusForbidden, XMLError: statForbiddenError, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "getetag"}, }}, }}, }}, }, { desc: "proppatch dead property", buildfs: []string{"mkdir /dir"}, propOp: []propOp{{ op: "proppatch", name: "/dir", patches: []Proppatch{{ Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, InnerXML: []byte("baz"), }}, }}, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, }, { op: "propfind", name: "/dir", pnames: []xml.Name{{Space: "foo", Local: "bar"}}, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, InnerXML: []byte("baz"), }}, }}, }}, }, { desc: "proppatch dead property with failed dependency", buildfs: []string{"mkdir /dir"}, propOp: []propOp{{ op: "proppatch", name: "/dir", patches: []Proppatch{{ Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, InnerXML: []byte("baz"), }}, }, { Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "displayname"}, InnerXML: []byte("xxx"), }}, }}, wantPropstats: []Propstat{{ Status: http.StatusForbidden, XMLError: statForbiddenError, Props: []Property{{ XMLName: xml.Name{Space: "DAV:", Local: "displayname"}, }}, }, { Status: StatusFailedDependency, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, }, { op: 
"propfind", name: "/dir", pnames: []xml.Name{{Space: "foo", Local: "bar"}}, wantPropstats: []Propstat{{ Status: http.StatusNotFound, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, }}, }, { desc: "proppatch remove dead property", buildfs: []string{"mkdir /dir"}, propOp: []propOp{{ op: "proppatch", name: "/dir", patches: []Proppatch{{ Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, InnerXML: []byte("baz"), }, { XMLName: xml.Name{Space: "spam", Local: "ham"}, InnerXML: []byte("eggs"), }}, }}, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }, { XMLName: xml.Name{Space: "spam", Local: "ham"}, }}, }}, }, { op: "propfind", name: "/dir", pnames: []xml.Name{ {Space: "foo", Local: "bar"}, {Space: "spam", Local: "ham"}, }, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, InnerXML: []byte("baz"), }, { XMLName: xml.Name{Space: "spam", Local: "ham"}, InnerXML: []byte("eggs"), }}, }}, }, { op: "proppatch", name: "/dir", patches: []Proppatch{{ Remove: true, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, }, { op: "propfind", name: "/dir", pnames: []xml.Name{ {Space: "foo", Local: "bar"}, {Space: "spam", Local: "ham"}, }, wantPropstats: []Propstat{{ Status: http.StatusNotFound, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }, { Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "spam", Local: "ham"}, InnerXML: []byte("eggs"), }}, }}, }}, }, { desc: "propname with dead property", buildfs: []string{"touch /file"}, propOp: []propOp{{ op: "proppatch", name: "/file", patches: []Proppatch{{ Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, InnerXML: []byte("baz"), }}, }}, wantPropstats: 
[]Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, }, { op: "propname", name: "/file", wantPnames: []xml.Name{ {Space: "DAV:", Local: "resourcetype"}, {Space: "DAV:", Local: "displayname"}, {Space: "DAV:", Local: "getcontentlength"}, {Space: "DAV:", Local: "getlastmodified"}, {Space: "DAV:", Local: "getcontenttype"}, {Space: "DAV:", Local: "getetag"}, {Space: "DAV:", Local: "supportedlock"}, {Space: "foo", Local: "bar"}, }, }}, }, { desc: "proppatch remove unknown dead property", buildfs: []string{"mkdir /dir"}, propOp: []propOp{{ op: "proppatch", name: "/dir", patches: []Proppatch{{ Remove: true, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, wantPropstats: []Propstat{{ Status: http.StatusOK, Props: []Property{{ XMLName: xml.Name{Space: "foo", Local: "bar"}, }}, }}, }}, }, { desc: "bad: propfind unknown property", buildfs: []string{"mkdir /dir"}, propOp: []propOp{{ op: "propfind", name: "/dir", pnames: []xml.Name{{"foo:", "bar"}}, wantPropstats: []Propstat{{ Status: http.StatusNotFound, Props: []Property{{ XMLName: xml.Name{Space: "foo:", Local: "bar"}, }}, }}, }}, }} for _, tc := range testCases { fs, err := buildTestFS(tc.buildfs) if err != nil { t.Fatalf("%s: cannot create test filesystem: %v", tc.desc, err) } if tc.noDeadProps { fs = noDeadPropsFS{fs} } ls := NewMemLS() for _, op := range tc.propOp { desc := fmt.Sprintf("%s: %s %s", tc.desc, op.op, op.name) if err = calcProps(op.name, fs, ls, op.wantPropstats); err != nil { t.Fatalf("%s: calcProps: %v", desc, err) } // Call property system. 
var propstats []Propstat switch op.op { case "propname": pnames, err := propnames(fs, ls, op.name) if err != nil { t.Errorf("%s: got error %v, want nil", desc, err) continue } sort.Sort(byXMLName(pnames)) sort.Sort(byXMLName(op.wantPnames)) if !reflect.DeepEqual(pnames, op.wantPnames) { t.Errorf("%s: pnames\ngot %q\nwant %q", desc, pnames, op.wantPnames) } continue case "allprop": propstats, err = allprop(fs, ls, op.name, op.pnames) case "propfind": propstats, err = props(fs, ls, op.name, op.pnames) case "proppatch": propstats, err = patch(fs, ls, op.name, op.patches) default: t.Fatalf("%s: %s not implemented", desc, op.op) } if err != nil { t.Errorf("%s: got error %v, want nil", desc, err) continue } // Compare return values from allprop, propfind or proppatch. for _, pst := range propstats { sort.Sort(byPropname(pst.Props)) } for _, pst := range op.wantPropstats { sort.Sort(byPropname(pst.Props)) } sort.Sort(byStatus(propstats)) sort.Sort(byStatus(op.wantPropstats)) if !reflect.DeepEqual(propstats, op.wantPropstats) { t.Errorf("%s: propstat\ngot %q\nwant %q", desc, propstats, op.wantPropstats) } } } } func cmpXMLName(a, b xml.Name) bool { if a.Space != b.Space { return a.Space < b.Space } return a.Local < b.Local } type byXMLName []xml.Name func (b byXMLName) Len() int { return len(b) } func (b byXMLName) Swap(i, j int) { b[i], b[j] = b[j], b[i] } func (b byXMLName) Less(i, j int) bool { return cmpXMLName(b[i], b[j]) } type byPropname []Property func (b byPropname) Len() int { return len(b) } func (b byPropname) Swap(i, j int) { b[i], b[j] = b[j], b[i] } func (b byPropname) Less(i, j int) bool { return cmpXMLName(b[i].XMLName, b[j].XMLName) } type byStatus []Propstat func (b byStatus) Len() int { return len(b) } func (b byStatus) Swap(i, j int) { b[i], b[j] = b[j], b[i] } func (b byStatus) Less(i, j int) bool { return b[i].Status < b[j].Status } type noDeadPropsFS struct { FileSystem } func (fs noDeadPropsFS) OpenFile(name string, flag int, perm os.FileMode) 
(File, error) {
	f, err := fs.FileSystem.OpenFile(name, flag, perm)
	if err != nil {
		return nil, err
	}
	return noDeadPropsFile{f}, nil
}

// noDeadPropsFile wraps a File but strips any optional DeadPropsHolder methods
// provided by the underlying File implementation.
type noDeadPropsFile struct {
	f File
}

func (f noDeadPropsFile) Close() error                              { return f.f.Close() }
func (f noDeadPropsFile) Read(p []byte) (int, error)                { return f.f.Read(p) }
func (f noDeadPropsFile) Readdir(count int) ([]os.FileInfo, error)  { return f.f.Readdir(count) }
func (f noDeadPropsFile) Seek(off int64, whence int) (int64, error) { return f.f.Seek(off, whence) }
func (f noDeadPropsFile) Stat() (os.FileInfo, error)                { return f.f.Stat() }
func (f noDeadPropsFile) Write(p []byte) (int, error)               { return f.f.Write(p) }

lxd-2.0.2/dist/src/golang.org/x/net/webdav/if_test.go

// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package webdav import ( "reflect" "strings" "testing" ) func TestParseIfHeader(t *testing.T) { // The "section x.y.z" test cases come from section x.y.z of the spec at // http://www.webdav.org/specs/rfc4918.html testCases := []struct { desc string input string want ifHeader }{{ "bad: empty", ``, ifHeader{}, }, { "bad: no parens", `foobar`, ifHeader{}, }, { "bad: empty list #1", `()`, ifHeader{}, }, { "bad: empty list #2", `(a) (b c) () (d)`, ifHeader{}, }, { "bad: no list after resource #1", ``, ifHeader{}, }, { "bad: no list after resource #2", ` (a)`, ifHeader{}, }, { "bad: no list after resource #3", ` (a) (b) `, ifHeader{}, }, { "bad: no-tag-list followed by tagged-list", `(a) (b) (c)`, ifHeader{}, }, { "bad: unfinished list", `(a`, ifHeader{}, }, { "bad: unfinished ETag", `([b`, ifHeader{}, }, { "bad: unfinished Notted list", `(Not a`, ifHeader{}, }, { "bad: double Not", `(Not Not a)`, ifHeader{}, }, { "good: one list with a Token", `(a)`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Token: `a`, }}, }}, }, }, { "good: one list with an ETag", `([a])`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ ETag: `a`, }}, }}, }, }, { "good: one list with three Nots", `(Not a Not b Not [d])`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Not: true, Token: `a`, }, { Not: true, Token: `b`, }, { Not: true, ETag: `d`, }}, }}, }, }, { "good: two lists", `(a) (b)`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Token: `a`, }}, }, { conditions: []Condition{{ Token: `b`, }}, }}, }, }, { "good: two Notted lists", `(Not a) (Not b)`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Not: true, Token: `a`, }}, }, { conditions: []Condition{{ Not: true, Token: `b`, }}, }}, }, }, { "section 7.5.1", ` ()`, ifHeader{ lists: []ifList{{ resourceTag: `http://www.example.com/users/f/fielding/index.html`, conditions: []Condition{{ Token: `urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6`, }}, }}, }, }, { "section 7.5.2 #1", `()`, ifHeader{ lists: []ifList{{ 
conditions: []Condition{{ Token: `urn:uuid:150852e2-3847-42d5-8cbe-0f4f296f26cf`, }}, }}, }, }, { "section 7.5.2 #2", ` ()`, ifHeader{ lists: []ifList{{ resourceTag: `http://example.com/locked/`, conditions: []Condition{{ Token: `urn:uuid:150852e2-3847-42d5-8cbe-0f4f296f26cf`, }}, }}, }, }, { "section 7.5.2 #3", ` ()`, ifHeader{ lists: []ifList{{ resourceTag: `http://example.com/locked/member`, conditions: []Condition{{ Token: `urn:uuid:150852e2-3847-42d5-8cbe-0f4f296f26cf`, }}, }}, }, }, { "section 9.9.6", `() ()`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Token: `urn:uuid:fe184f2e-6eec-41d0-c765-01adc56e6bb4`, }}, }, { conditions: []Condition{{ Token: `urn:uuid:e454f3f3-acdc-452a-56c7-00a5c91e4b77`, }}, }}, }, }, { "section 9.10.8", `()`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Token: `urn:uuid:e71d4fae-5dec-22d6-fea5-00a0c91e6be4`, }}, }}, }, }, { "section 10.4.6", `( ["I am an ETag"]) (["I am another ETag"])`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Token: `urn:uuid:181d4fae-7d8c-11d0-a765-00a0c91e6bf2`, }, { ETag: `"I am an ETag"`, }}, }, { conditions: []Condition{{ ETag: `"I am another ETag"`, }}, }}, }, }, { "section 10.4.7", `(Not )`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Not: true, Token: `urn:uuid:181d4fae-7d8c-11d0-a765-00a0c91e6bf2`, }, { Token: `urn:uuid:58f202ac-22cf-11d1-b12d-002035b29092`, }}, }}, }, }, { "section 10.4.8", `() (Not )`, ifHeader{ lists: []ifList{{ conditions: []Condition{{ Token: `urn:uuid:181d4fae-7d8c-11d0-a765-00a0c91e6bf2`, }}, }, { conditions: []Condition{{ Not: true, Token: `DAV:no-lock`, }}, }}, }, }, { "section 10.4.9", ` ( [W/"A weak ETag"]) (["strong ETag"])`, ifHeader{ lists: []ifList{{ resourceTag: `/resource1`, conditions: []Condition{{ Token: `urn:uuid:181d4fae-7d8c-11d0-a765-00a0c91e6bf2`, }, { ETag: `W/"A weak ETag"`, }}, }, { resourceTag: `/resource1`, conditions: []Condition{{ ETag: `"strong ETag"`, }}, }}, }, }, { "section 10.4.10", ` ()`, ifHeader{ lists: 
[]ifList{{ resourceTag: `http://www.example.com/specs/`, conditions: []Condition{{ Token: `urn:uuid:181d4fae-7d8c-11d0-a765-00a0c91e6bf2`, }}, }}, }, }, { "section 10.4.11 #1", ` (["4217"])`, ifHeader{ lists: []ifList{{ resourceTag: `/specs/rfc2518.doc`, conditions: []Condition{{ ETag: `"4217"`, }}, }}, }, }, { "section 10.4.11 #2", ` (Not ["4217"])`, ifHeader{ lists: []ifList{{ resourceTag: `/specs/rfc2518.doc`, conditions: []Condition{{ Not: true, ETag: `"4217"`, }}, }}, }, }} for _, tc := range testCases { got, ok := parseIfHeader(strings.Replace(tc.input, "\n", "", -1)) if gotEmpty := reflect.DeepEqual(got, ifHeader{}); gotEmpty == ok { t.Errorf("%s: should be different: empty header == %t, ok == %t", tc.desc, gotEmpty, ok) continue } if !reflect.DeepEqual(got, tc.want) { t.Errorf("%s:\ngot %v\nwant %v", tc.desc, got, tc.want) continue } } } lxd-2.0.2/dist/src/golang.org/x/net/webdav/webdav_test.go0000644061062106075000000001327512721405224025651 0ustar00stgraberdomain admins00000000000000// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
package webdav import ( "errors" "fmt" "io" "io/ioutil" "net/http" "net/http/httptest" "net/url" "os" "reflect" "regexp" "sort" "strings" "testing" ) // TODO: add tests to check XML responses with the expected prefix path func TestPrefix(t *testing.T) { const dst, blah = "Destination", "blah blah blah" do := func(method, urlStr string, body io.Reader, wantStatusCode int, headers ...string) error { req, err := http.NewRequest(method, urlStr, body) if err != nil { return err } for len(headers) >= 2 { req.Header.Add(headers[0], headers[1]) headers = headers[2:] } res, err := http.DefaultClient.Do(req) if err != nil { return err } defer res.Body.Close() if res.StatusCode != wantStatusCode { return fmt.Errorf("got status code %d, want %d", res.StatusCode, wantStatusCode) } return nil } prefixes := []string{ "/", "/a/", "/a/b/", "/a/b/c/", } for _, prefix := range prefixes { fs := NewMemFS() h := &Handler{ FileSystem: fs, LockSystem: NewMemLS(), } mux := http.NewServeMux() if prefix != "/" { h.Prefix = prefix } mux.Handle(prefix, h) srv := httptest.NewServer(mux) defer srv.Close() // The script is: // MKCOL /a // MKCOL /a/b // PUT /a/b/c // COPY /a/b/c /a/b/d // MKCOL /a/b/e // MOVE /a/b/d /a/b/e/f // which should yield the (possibly stripped) filenames /a/b/c and // /a/b/e/f, plus their parent directories. 
wantA := map[string]int{ "/": http.StatusCreated, "/a/": http.StatusMovedPermanently, "/a/b/": http.StatusNotFound, "/a/b/c/": http.StatusNotFound, }[prefix] if err := do("MKCOL", srv.URL+"/a", nil, wantA); err != nil { t.Errorf("prefix=%-9q MKCOL /a: %v", prefix, err) continue } wantB := map[string]int{ "/": http.StatusCreated, "/a/": http.StatusCreated, "/a/b/": http.StatusMovedPermanently, "/a/b/c/": http.StatusNotFound, }[prefix] if err := do("MKCOL", srv.URL+"/a/b", nil, wantB); err != nil { t.Errorf("prefix=%-9q MKCOL /a/b: %v", prefix, err) continue } wantC := map[string]int{ "/": http.StatusCreated, "/a/": http.StatusCreated, "/a/b/": http.StatusCreated, "/a/b/c/": http.StatusMovedPermanently, }[prefix] if err := do("PUT", srv.URL+"/a/b/c", strings.NewReader(blah), wantC); err != nil { t.Errorf("prefix=%-9q PUT /a/b/c: %v", prefix, err) continue } wantD := map[string]int{ "/": http.StatusCreated, "/a/": http.StatusCreated, "/a/b/": http.StatusCreated, "/a/b/c/": http.StatusMovedPermanently, }[prefix] if err := do("COPY", srv.URL+"/a/b/c", nil, wantD, dst, srv.URL+"/a/b/d"); err != nil { t.Errorf("prefix=%-9q COPY /a/b/c /a/b/d: %v", prefix, err) continue } wantE := map[string]int{ "/": http.StatusCreated, "/a/": http.StatusCreated, "/a/b/": http.StatusCreated, "/a/b/c/": http.StatusNotFound, }[prefix] if err := do("MKCOL", srv.URL+"/a/b/e", nil, wantE); err != nil { t.Errorf("prefix=%-9q MKCOL /a/b/e: %v", prefix, err) continue } wantF := map[string]int{ "/": http.StatusCreated, "/a/": http.StatusCreated, "/a/b/": http.StatusCreated, "/a/b/c/": http.StatusNotFound, }[prefix] if err := do("MOVE", srv.URL+"/a/b/d", nil, wantF, dst, srv.URL+"/a/b/e/f"); err != nil { t.Errorf("prefix=%-9q MOVE /a/b/d /a/b/e/f: %v", prefix, err) continue } got, err := find(nil, fs, "/") if err != nil { t.Errorf("prefix=%-9q find: %v", prefix, err) continue } sort.Strings(got) want := map[string][]string{ "/": {"/", "/a", "/a/b", "/a/b/c", "/a/b/e", "/a/b/e/f"}, "/a/": {"/", 
"/b", "/b/c", "/b/e", "/b/e/f"}, "/a/b/": {"/", "/c", "/e", "/e/f"}, "/a/b/c/": {"/"}, }[prefix] if !reflect.DeepEqual(got, want) { t.Errorf("prefix=%-9q find:\ngot %v\nwant %v", prefix, got, want) continue } } } func TestFilenameEscape(t *testing.T) { re := regexp.MustCompile(`<D:href>([^<]*)</D:href>`) do := func(method, urlStr string) (string, error) { req, err := http.NewRequest(method, urlStr, nil) if err != nil { return "", err } res, err := http.DefaultClient.Do(req) if err != nil { return "", err } defer res.Body.Close() b, err := ioutil.ReadAll(res.Body) if err != nil { return "", err } m := re.FindStringSubmatch(string(b)) if len(m) != 2 { return "", errors.New("D:href not found") } return m[1], nil } testCases := []struct { name, want string }{{ name: `/foo%bar`, want: `/foo%25bar`, }, { name: `/こんにちわ世界`, want: `/%E3%81%93%E3%82%93%E3%81%AB%E3%81%A1%E3%82%8F%E4%B8%96%E7%95%8C`, }, { name: `/Program Files/`, want: `/Program%20Files`, }, { name: `/go+lang`, want: `/go+lang`, }, { name: `/go&lang`, want: `/go&amp;lang`, }} fs := NewMemFS() for _, tc := range testCases { if strings.HasSuffix(tc.name, "/") { if err := fs.Mkdir(tc.name, 0755); err != nil { t.Fatalf("name=%q: Mkdir: %v", tc.name, err) } } else { f, err := fs.OpenFile(tc.name, os.O_CREATE, 0644) if err != nil { t.Fatalf("name=%q: OpenFile: %v", tc.name, err) } f.Close() } } srv := httptest.NewServer(&Handler{ FileSystem: fs, LockSystem: NewMemLS(), }) defer srv.Close() u, err := url.Parse(srv.URL) if err != nil { t.Fatal(err) } for _, tc := range testCases { u.Path = tc.name got, err := do("PROPFIND", u.String()) if err != nil { t.Errorf("name=%q: PROPFIND: %v", tc.name, err) continue } if got != tc.want { t.Errorf("name=%q: got %q, want %q", tc.name, got, tc.want) } } }
0ustar00stgraberdomain admins00000000000000// Copyright 2012 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // Package idna implements IDNA2008 (Internationalized Domain Names for // Applications), defined in RFC 5890, RFC 5891, RFC 5892, RFC 5893 and // RFC 5894. package idna // import "golang.org/x/net/idna" import ( "strings" "unicode/utf8" ) // TODO(nigeltao): specify when errors occur. For example, is ToASCII(".") or // ToASCII("foo\x00") an error? See also http://www.unicode.org/faq/idn.html#11 // acePrefix is the ASCII Compatible Encoding prefix. const acePrefix = "xn--" // ToASCII converts a domain or domain label to its ASCII form. For example, // ToASCII("bรผcher.example.com") is "xn--bcher-kva.example.com", and // ToASCII("golang") is "golang". func ToASCII(s string) (string, error) { if ascii(s) { return s, nil } labels := strings.Split(s, ".") for i, label := range labels { if !ascii(label) { a, err := encode(acePrefix, label) if err != nil { return "", err } labels[i] = a } } return strings.Join(labels, "."), nil } // ToUnicode converts a domain or domain label to its Unicode form. For example, // ToUnicode("xn--bcher-kva.example.com") is "bรผcher.example.com", and // ToUnicode("golang") is "golang". func ToUnicode(s string) (string, error) { if !strings.Contains(s, acePrefix) { return s, nil } labels := strings.Split(s, ".") for i, label := range labels { if strings.HasPrefix(label, acePrefix) { u, err := decode(label[len(acePrefix):]) if err != nil { return "", err } labels[i] = u } } return strings.Join(labels, "."), nil } func ascii(s string) bool { for i := 0; i < len(s); i++ { if s[i] >= utf8.RuneSelf { return false } } return true } lxd-2.0.2/dist/src/golang.org/x/net/idna/punycode.go0000644061062106075000000001053712721405224024631 0ustar00stgraberdomain admins00000000000000// Copyright 2012 The Go Authors. All rights reserved. 
// Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package idna // This file implements the Punycode algorithm from RFC 3492. import ( "fmt" "math" "strings" "unicode/utf8" ) // These parameter values are specified in section 5. // // All computation is done with int32s, so that overflow behavior is identical // regardless of whether int is 32-bit or 64-bit. const ( base int32 = 36 damp int32 = 700 initialBias int32 = 72 initialN int32 = 128 skew int32 = 38 tmax int32 = 26 tmin int32 = 1 ) // decode decodes a string as specified in section 6.2. func decode(encoded string) (string, error) { if encoded == "" { return "", nil } pos := 1 + strings.LastIndex(encoded, "-") if pos == 1 { return "", fmt.Errorf("idna: invalid label %q", encoded) } if pos == len(encoded) { return encoded[:len(encoded)-1], nil } output := make([]rune, 0, len(encoded)) if pos != 0 { for _, r := range encoded[:pos-1] { output = append(output, r) } } i, n, bias := int32(0), initialN, initialBias for pos < len(encoded) { oldI, w := i, int32(1) for k := base; ; k += base { if pos == len(encoded) { return "", fmt.Errorf("idna: invalid label %q", encoded) } digit, ok := decodeDigit(encoded[pos]) if !ok { return "", fmt.Errorf("idna: invalid label %q", encoded) } pos++ i += digit * w if i < 0 { return "", fmt.Errorf("idna: invalid label %q", encoded) } t := k - bias if t < tmin { t = tmin } else if t > tmax { t = tmax } if digit < t { break } w *= base - t if w >= math.MaxInt32/base { return "", fmt.Errorf("idna: invalid label %q", encoded) } } x := int32(len(output) + 1) bias = adapt(i-oldI, x, oldI == 0) n += i / x i %= x if n > utf8.MaxRune || len(output) >= 1024 { return "", fmt.Errorf("idna: invalid label %q", encoded) } output = append(output, 0) copy(output[i+1:], output[i:]) output[i] = n i++ } return string(output), nil } // encode encodes a string as specified in section 6.3 and prepends prefix to // the result. 
// // The "while h < length(input)" line in the specification becomes "for // remaining != 0" in the Go code, because len(s) in Go is in bytes, not runes. func encode(prefix, s string) (string, error) { output := make([]byte, len(prefix), len(prefix)+1+2*len(s)) copy(output, prefix) delta, n, bias := int32(0), initialN, initialBias b, remaining := int32(0), int32(0) for _, r := range s { if r < 0x80 { b++ output = append(output, byte(r)) } else { remaining++ } } h := b if b > 0 { output = append(output, '-') } for remaining != 0 { m := int32(0x7fffffff) for _, r := range s { if m > r && r >= n { m = r } } delta += (m - n) * (h + 1) if delta < 0 { return "", fmt.Errorf("idna: invalid label %q", s) } n = m for _, r := range s { if r < n { delta++ if delta < 0 { return "", fmt.Errorf("idna: invalid label %q", s) } continue } if r > n { continue } q := delta for k := base; ; k += base { t := k - bias if t < tmin { t = tmin } else if t > tmax { t = tmax } if q < t { break } output = append(output, encodeDigit(t+(q-t)%(base-t))) q = (q - t) / (base - t) } output = append(output, encodeDigit(q)) bias = adapt(delta, h+1, h == b) delta = 0 h++ remaining-- } delta++ n++ } return string(output), nil } func decodeDigit(x byte) (digit int32, ok bool) { switch { case '0' <= x && x <= '9': return int32(x - ('0' - 26)), true case 'A' <= x && x <= 'Z': return int32(x - 'A'), true case 'a' <= x && x <= 'z': return int32(x - 'a'), true } return 0, false } func encodeDigit(digit int32) byte { switch { case 0 <= digit && digit < 26: return byte(digit + 'a') case 26 <= digit && digit < 36: return byte(digit + ('0' - 26)) } panic("idna: internal error in punycode encoding") } // adapt is the bias adaptation function specified in section 6.1. 
func adapt(delta, numPoints int32, firstTime bool) int32 { if firstTime { delta /= damp } else { delta /= 2 } delta += delta / numPoints k := int32(0) for delta > ((base-tmin)*tmax)/2 { delta /= base - tmin k += base } return k + (base-tmin+1)*delta/(delta+skew) } lxd-2.0.2/dist/src/golang.org/x/net/idna/punycode_test.go0000644061062106075000000001343312721405224025666 0ustar00stgraberdomain admins00000000000000// Copyright 2012 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package idna import ( "strings" "testing" ) var punycodeTestCases = [...]struct { s, encoded string }{ {"", ""}, {"-", "--"}, {"-a", "-a-"}, {"-a-", "-a--"}, {"a", "a-"}, {"a-", "a--"}, {"a-b", "a-b-"}, {"books", "books-"}, {"bรผcher", "bcher-kva"}, {"Helloไธ–็•Œ", "Hello-ck1hg65u"}, {"รผ", "tda"}, {"รผรฝ", "tdac"}, // The test cases below come from RFC 3492 section 7.1 with Errata 3026. { // (A) Arabic (Egyptian). "\u0644\u064A\u0647\u0645\u0627\u0628\u062A\u0643\u0644" + "\u0645\u0648\u0634\u0639\u0631\u0628\u064A\u061F", "egbpdaj6bu4bxfgehfvwxn", }, { // (B) Chinese (simplified). "\u4ED6\u4EEC\u4E3A\u4EC0\u4E48\u4E0D\u8BF4\u4E2D\u6587", "ihqwcrb4cv8a8dqg056pqjye", }, { // (C) Chinese (traditional). "\u4ED6\u5011\u7232\u4EC0\u9EBD\u4E0D\u8AAA\u4E2D\u6587", "ihqwctvzc91f659drss3x8bo0yb", }, { // (D) Czech. "\u0050\u0072\u006F\u010D\u0070\u0072\u006F\u0073\u0074" + "\u011B\u006E\u0065\u006D\u006C\u0075\u0076\u00ED\u010D" + "\u0065\u0073\u006B\u0079", "Proprostnemluvesky-uyb24dma41a", }, { // (E) Hebrew. "\u05DC\u05DE\u05D4\u05D4\u05DD\u05E4\u05E9\u05D5\u05D8" + "\u05DC\u05D0\u05DE\u05D3\u05D1\u05E8\u05D9\u05DD\u05E2" + "\u05D1\u05E8\u05D9\u05EA", "4dbcagdahymbxekheh6e0a7fei0b", }, { // (F) Hindi (Devanagari). 
"\u092F\u0939\u0932\u094B\u0917\u0939\u093F\u0928\u094D" + "\u0926\u0940\u0915\u094D\u092F\u094B\u0902\u0928\u0939" + "\u0940\u0902\u092C\u094B\u0932\u0938\u0915\u0924\u0947" + "\u0939\u0948\u0902", "i1baa7eci9glrd9b2ae1bj0hfcgg6iyaf8o0a1dig0cd", }, { // (G) Japanese (kanji and hiragana). "\u306A\u305C\u307F\u3093\u306A\u65E5\u672C\u8A9E\u3092" + "\u8A71\u3057\u3066\u304F\u308C\u306A\u3044\u306E\u304B", "n8jok5ay5dzabd5bym9f0cm5685rrjetr6pdxa", }, { // (H) Korean (Hangul syllables). "\uC138\uACC4\uC758\uBAA8\uB4E0\uC0AC\uB78C\uB4E4\uC774" + "\uD55C\uAD6D\uC5B4\uB97C\uC774\uD574\uD55C\uB2E4\uBA74" + "\uC5BC\uB9C8\uB098\uC88B\uC744\uAE4C", "989aomsvi5e83db1d2a355cv1e0vak1dwrv93d5xbh15a0dt30a5j" + "psd879ccm6fea98c", }, { // (I) Russian (Cyrillic). "\u043F\u043E\u0447\u0435\u043C\u0443\u0436\u0435\u043E" + "\u043D\u0438\u043D\u0435\u0433\u043E\u0432\u043E\u0440" + "\u044F\u0442\u043F\u043E\u0440\u0443\u0441\u0441\u043A" + "\u0438", "b1abfaaepdrnnbgefbadotcwatmq2g4l", }, { // (J) Spanish. "\u0050\u006F\u0072\u0071\u0075\u00E9\u006E\u006F\u0070" + "\u0075\u0065\u0064\u0065\u006E\u0073\u0069\u006D\u0070" + "\u006C\u0065\u006D\u0065\u006E\u0074\u0065\u0068\u0061" + "\u0062\u006C\u0061\u0072\u0065\u006E\u0045\u0073\u0070" + "\u0061\u00F1\u006F\u006C", "PorqunopuedensimplementehablarenEspaol-fmd56a", }, { // (K) Vietnamese. "\u0054\u1EA1\u0069\u0073\u0061\u006F\u0068\u1ECD\u006B" + "\u0068\u00F4\u006E\u0067\u0074\u0068\u1EC3\u0063\u0068" + "\u1EC9\u006E\u00F3\u0069\u0074\u0069\u1EBF\u006E\u0067" + "\u0056\u0069\u1EC7\u0074", "TisaohkhngthchnitingVit-kjcr8268qyxafd2f1b9g", }, { // (L) 3B. "\u0033\u5E74\u0042\u7D44\u91D1\u516B\u5148\u751F", "3B-ww4c5e180e575a65lsy2b", }, { // (M) -with-SUPER-MONKEYS. "\u5B89\u5BA4\u5948\u7F8E\u6075\u002D\u0077\u0069\u0074" + "\u0068\u002D\u0053\u0055\u0050\u0045\u0052\u002D\u004D" + "\u004F\u004E\u004B\u0045\u0059\u0053", "-with-SUPER-MONKEYS-pc58ag80a8qai00g7n9n", }, { // (N) Hello-Another-Way-. 
"\u0048\u0065\u006C\u006C\u006F\u002D\u0041\u006E\u006F" + "\u0074\u0068\u0065\u0072\u002D\u0057\u0061\u0079\u002D" + "\u305D\u308C\u305E\u308C\u306E\u5834\u6240", "Hello-Another-Way--fc4qua05auwb3674vfr0b", }, { // (O) 2. "\u3072\u3068\u3064\u5C4B\u6839\u306E\u4E0B\u0032", "2-u9tlzr9756bt3uc0v", }, { // (P) MajiKoi5 "\u004D\u0061\u006A\u0069\u3067\u004B\u006F\u0069\u3059" + "\u308B\u0035\u79D2\u524D", "MajiKoi5-783gue6qz075azm5e", }, { // (Q) de "\u30D1\u30D5\u30A3\u30FC\u0064\u0065\u30EB\u30F3\u30D0", "de-jg4avhby1noc0d", }, { // (R) "\u305D\u306E\u30B9\u30D4\u30FC\u30C9\u3067", "d9juau41awczczp", }, { // (S) -> $1.00 <- "\u002D\u003E\u0020\u0024\u0031\u002E\u0030\u0030\u0020" + "\u003C\u002D", "-> $1.00 <--", }, } func TestPunycode(t *testing.T) { for _, tc := range punycodeTestCases { if got, err := decode(tc.encoded); err != nil { t.Errorf("decode(%q): %v", tc.encoded, err) } else if got != tc.s { t.Errorf("decode(%q): got %q, want %q", tc.encoded, got, tc.s) } if got, err := encode("", tc.s); err != nil { t.Errorf(`encode("", %q): %v`, tc.s, err) } else if got != tc.encoded { t.Errorf(`encode("", %q): got %q, want %q`, tc.s, got, tc.encoded) } } } var punycodeErrorTestCases = [...]string{ "decode -", // A sole '-' is invalid. "decode foo\x00bar", // '\x00' is not in [0-9A-Za-z]. "decode foo#bar", // '#' is not in [0-9A-Za-z]. "decode foo\u00A3bar", // '\u00A3' is not in [0-9A-Za-z]. "decode 9", // "9a" decodes to codepoint \u00A3; "9" is truncated. "decode 99999a", // "99999a" decodes to codepoint \U0048A3C1, which is > \U0010FFFF. "decode 9999999999a", // "9999999999a" overflows the int32 calculation. "encode " + strings.Repeat("x", 65536) + "\uff00", // int32 overflow. 
} func TestPunycodeErrors(t *testing.T) { for _, tc := range punycodeErrorTestCases { var err error switch { case strings.HasPrefix(tc, "decode "): _, err = decode(tc[7:]) case strings.HasPrefix(tc, "encode "): _, err = encode("", tc[7:]) } if err == nil { if len(tc) > 256 { tc = tc[:100] + "..." + tc[len(tc)-100:] } t.Errorf("no error for %s", tc) } } } lxd-2.0.2/dist/src/golang.org/x/net/idna/idna_test.go0000644061062106075000000000211512721405224024746 0ustar00stgraberdomain admins00000000000000// Copyright 2012 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package idna import ( "testing" ) var idnaTestCases = [...]struct { ascii, unicode string }{ // Labels. {"books", "books"}, {"xn--bcher-kva", "bรผcher"}, // Domains. {"foo--xn--bar.org", "foo--xn--bar.org"}, {"golang.org", "golang.org"}, {"example.xn--p1ai", "example.ั€ั„"}, {"xn--czrw28b.tw", "ๅ•†ๆฅญ.tw"}, {"www.xn--mller-kva.de", "www.mรผller.de"}, } func TestIDNA(t *testing.T) { for _, tc := range idnaTestCases { if a, err := ToASCII(tc.unicode); err != nil { t.Errorf("ToASCII(%q): %v", tc.unicode, err) } else if a != tc.ascii { t.Errorf("ToASCII(%q): got %q, want %q", tc.unicode, a, tc.ascii) } if u, err := ToUnicode(tc.ascii); err != nil { t.Errorf("ToUnicode(%q): %v", tc.ascii, err) } else if u != tc.unicode { t.Errorf("ToUnicode(%q): got %q, want %q", tc.ascii, u, tc.unicode) } } } // TODO(nigeltao): test errors, once we've specified when ToASCII and ToUnicode // return errors. lxd-2.0.2/dist/src/golang.org/x/net/AUTHORS0000644061062106075000000000025512721405224022605 0ustar00stgraberdomain admins00000000000000# This source code refers to The Go Authors for copyright purposes. # The master list of authors is in the main Go distribution, # visible at http://tip.golang.org/AUTHORS. 
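The idna package above short-circuits whenever the input is already ASCII: ToASCII and ToUnicode both lean on a byte-wise check before doing any Punycode work. As a minimal, self-contained sketch (the private `ascii` helper is reproduced outside the package here so it runs on its own; its body matches idna.go above):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// ascii reports whether s contains only single-byte (ASCII) characters,
// mirroring the private helper in idna.go: any byte >= utf8.RuneSelf
// (0x80) means the label needs IDNA/Punycode processing.
func ascii(s string) bool {
	for i := 0; i < len(s); i++ {
		if s[i] >= utf8.RuneSelf {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(ascii("golang.org")) // already ASCII: ToASCII returns it unchanged
	fmt.Println(ascii("bücher"))     // non-ASCII: the label must be Punycode-encoded
}
```

Only labels that fail this check are passed to encode with the "xn--" ACE prefix, which is why ToASCII("golang") is a no-op.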
lxd-2.0.2/dist/src/golang.org/x/net/html/0000755061062106075000000000000012721405224022477 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/html/entity_test.go0000644061062106075000000000177612721405224025414 0ustar00stgraberdomain admins00000000000000// Copyright 2010 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package html import ( "testing" "unicode/utf8" ) func TestEntityLength(t *testing.T) { // We verify that the length of UTF-8 encoding of each value is <= 1 + len(key). // The +1 comes from the leading "&". This property implies that the length of // unescaped text is <= the length of escaped text. for k, v := range entity { if 1+len(k) < utf8.RuneLen(v) { t.Error("escaped entity &" + k + " is shorter than its UTF-8 encoding " + string(v)) } if len(k) > longestEntityWithoutSemicolon && k[len(k)-1] != ';' { t.Errorf("entity name %s is %d characters, but longestEntityWithoutSemicolon=%d", k, len(k), longestEntityWithoutSemicolon) } } for k, v := range entity2 { if 1+len(k) < utf8.RuneLen(v[0])+utf8.RuneLen(v[1]) { t.Error("escaped entity &" + k + " is shorter than its UTF-8 encoding " + string(v[0]) + string(v[1])) } } } lxd-2.0.2/dist/src/golang.org/x/net/html/foreign.go0000644061062106075000000001540112721405224024460 0ustar00stgraberdomain admins00000000000000// Copyright 2011 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
package html import ( "strings" ) func adjustAttributeNames(aa []Attribute, nameMap map[string]string) { for i := range aa { if newName, ok := nameMap[aa[i].Key]; ok { aa[i].Key = newName } } } func adjustForeignAttributes(aa []Attribute) { for i, a := range aa { if a.Key == "" || a.Key[0] != 'x' { continue } switch a.Key { case "xlink:actuate", "xlink:arcrole", "xlink:href", "xlink:role", "xlink:show", "xlink:title", "xlink:type", "xml:base", "xml:lang", "xml:space", "xmlns:xlink": j := strings.Index(a.Key, ":") aa[i].Namespace = a.Key[:j] aa[i].Key = a.Key[j+1:] } } } func htmlIntegrationPoint(n *Node) bool { if n.Type != ElementNode { return false } switch n.Namespace { case "math": if n.Data == "annotation-xml" { for _, a := range n.Attr { if a.Key == "encoding" { val := strings.ToLower(a.Val) if val == "text/html" || val == "application/xhtml+xml" { return true } } } } case "svg": switch n.Data { case "desc", "foreignObject", "title": return true } } return false } func mathMLTextIntegrationPoint(n *Node) bool { if n.Namespace != "math" { return false } switch n.Data { case "mi", "mo", "mn", "ms", "mtext": return true } return false } // Section 12.2.5.5. var breakout = map[string]bool{ "b": true, "big": true, "blockquote": true, "body": true, "br": true, "center": true, "code": true, "dd": true, "div": true, "dl": true, "dt": true, "em": true, "embed": true, "h1": true, "h2": true, "h3": true, "h4": true, "h5": true, "h6": true, "head": true, "hr": true, "i": true, "img": true, "li": true, "listing": true, "menu": true, "meta": true, "nobr": true, "ol": true, "p": true, "pre": true, "ruby": true, "s": true, "small": true, "span": true, "strong": true, "strike": true, "sub": true, "sup": true, "table": true, "tt": true, "u": true, "ul": true, "var": true, } // Section 12.2.5.5. 
var svgTagNameAdjustments = map[string]string{ "altglyph": "altGlyph", "altglyphdef": "altGlyphDef", "altglyphitem": "altGlyphItem", "animatecolor": "animateColor", "animatemotion": "animateMotion", "animatetransform": "animateTransform", "clippath": "clipPath", "feblend": "feBlend", "fecolormatrix": "feColorMatrix", "fecomponenttransfer": "feComponentTransfer", "fecomposite": "feComposite", "feconvolvematrix": "feConvolveMatrix", "fediffuselighting": "feDiffuseLighting", "fedisplacementmap": "feDisplacementMap", "fedistantlight": "feDistantLight", "feflood": "feFlood", "fefunca": "feFuncA", "fefuncb": "feFuncB", "fefuncg": "feFuncG", "fefuncr": "feFuncR", "fegaussianblur": "feGaussianBlur", "feimage": "feImage", "femerge": "feMerge", "femergenode": "feMergeNode", "femorphology": "feMorphology", "feoffset": "feOffset", "fepointlight": "fePointLight", "fespecularlighting": "feSpecularLighting", "fespotlight": "feSpotLight", "fetile": "feTile", "feturbulence": "feTurbulence", "foreignobject": "foreignObject", "glyphref": "glyphRef", "lineargradient": "linearGradient", "radialgradient": "radialGradient", "textpath": "textPath", } // Section 12.2.5.1 var mathMLAttributeAdjustments = map[string]string{ "definitionurl": "definitionURL", } var svgAttributeAdjustments = map[string]string{ "attributename": "attributeName", "attributetype": "attributeType", "basefrequency": "baseFrequency", "baseprofile": "baseProfile", "calcmode": "calcMode", "clippathunits": "clipPathUnits", "contentscripttype": "contentScriptType", "contentstyletype": "contentStyleType", "diffuseconstant": "diffuseConstant", "edgemode": "edgeMode", "externalresourcesrequired": "externalResourcesRequired", "filterres": "filterRes", "filterunits": "filterUnits", "glyphref": "glyphRef", "gradienttransform": "gradientTransform", "gradientunits": "gradientUnits", "kernelmatrix": "kernelMatrix", "kernelunitlength": "kernelUnitLength", "keypoints": "keyPoints", "keysplines": "keySplines", "keytimes": "keyTimes", 
"lengthadjust": "lengthAdjust", "limitingconeangle": "limitingConeAngle", "markerheight": "markerHeight", "markerunits": "markerUnits", "markerwidth": "markerWidth", "maskcontentunits": "maskContentUnits", "maskunits": "maskUnits", "numoctaves": "numOctaves", "pathlength": "pathLength", "patterncontentunits": "patternContentUnits", "patterntransform": "patternTransform", "patternunits": "patternUnits", "pointsatx": "pointsAtX", "pointsaty": "pointsAtY", "pointsatz": "pointsAtZ", "preservealpha": "preserveAlpha", "preserveaspectratio": "preserveAspectRatio", "primitiveunits": "primitiveUnits", "refx": "refX", "refy": "refY", "repeatcount": "repeatCount", "repeatdur": "repeatDur", "requiredextensions": "requiredExtensions", "requiredfeatures": "requiredFeatures", "specularconstant": "specularConstant", "specularexponent": "specularExponent", "spreadmethod": "spreadMethod", "startoffset": "startOffset", "stddeviation": "stdDeviation", "stitchtiles": "stitchTiles", "surfacescale": "surfaceScale", "systemlanguage": "systemLanguage", "tablevalues": "tableValues", "targetx": "targetX", "targety": "targetY", "textlength": "textLength", "viewbox": "viewBox", "viewtarget": "viewTarget", "xchannelselector": "xChannelSelector", "ychannelselector": "yChannelSelector", "zoomandpan": "zoomAndPan", } lxd-2.0.2/dist/src/golang.org/x/net/html/example_test.go0000644061062106075000000000150212721405224025516 0ustar00stgraberdomain admins00000000000000// Copyright 2012 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // This example demonstrates parsing HTML data and walking the resulting tree. package html_test import ( "fmt" "log" "strings" "golang.org/x/net/html" ) func ExampleParse() { s := `

<p>Links:</p><ul><li><a href="foo">Foo</a><li><a href="/bar/baz">BarBaz</a></ul>
` doc, err := html.Parse(strings.NewReader(s)) if err != nil { log.Fatal(err) } var f func(*html.Node) f = func(n *html.Node) { if n.Type == html.ElementNode && n.Data == "a" { for _, a := range n.Attr { if a.Key == "href" { fmt.Println(a.Val) break } } } for c := n.FirstChild; c != nil; c = c.NextSibling { f(c) } } f(doc) // Output: // foo // /bar/baz } lxd-2.0.2/dist/src/golang.org/x/net/html/doctype.go0000644061062106075000000001147512721405224024505 0ustar00stgraberdomain admins00000000000000// Copyright 2011 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package html import ( "strings" ) // parseDoctype parses the data from a DoctypeToken into a name, // public identifier, and system identifier. It returns a Node whose Type // is DoctypeNode, whose Data is the name, and which has attributes // named "system" and "public" for the two identifiers if they were present. // quirks is whether the document should be parsed in "quirks mode". func parseDoctype(s string) (n *Node, quirks bool) { n = &Node{Type: DoctypeNode} // Find the name. space := strings.IndexAny(s, whitespace) if space == -1 { space = len(s) } n.Data = s[:space] // The comparison to "html" is case-sensitive. if n.Data != "html" { quirks = true } n.Data = strings.ToLower(n.Data) s = strings.TrimLeft(s[space:], whitespace) if len(s) < 6 { // It can't start with "PUBLIC" or "SYSTEM". // Ignore the rest of the string. 
return n, quirks || s != "" } key := strings.ToLower(s[:6]) s = s[6:] for key == "public" || key == "system" { s = strings.TrimLeft(s, whitespace) if s == "" { break } quote := s[0] if quote != '"' && quote != '\'' { break } s = s[1:] q := strings.IndexRune(s, rune(quote)) var id string if q == -1 { id = s s = "" } else { id = s[:q] s = s[q+1:] } n.Attr = append(n.Attr, Attribute{Key: key, Val: id}) if key == "public" { key = "system" } else { key = "" } } if key != "" || s != "" { quirks = true } else if len(n.Attr) > 0 { if n.Attr[0].Key == "public" { public := strings.ToLower(n.Attr[0].Val) switch public { case "-//w3o//dtd w3 html strict 3.0//en//", "-/w3d/dtd html 4.0 transitional/en", "html": quirks = true default: for _, q := range quirkyIDs { if strings.HasPrefix(public, q) { quirks = true break } } } // The following two public IDs only cause quirks mode if there is no system ID. if len(n.Attr) == 1 && (strings.HasPrefix(public, "-//w3c//dtd html 4.01 frameset//") || strings.HasPrefix(public, "-//w3c//dtd html 4.01 transitional//")) { quirks = true } } if lastAttr := n.Attr[len(n.Attr)-1]; lastAttr.Key == "system" && strings.ToLower(lastAttr.Val) == "http://www.ibm.com/data/dtd/v11/ibmxhtml1-transitional.dtd" { quirks = true } } return n, quirks } // quirkyIDs is a list of public doctype identifiers that cause a document // to be interpreted in quirks mode. The identifiers should be in lower case. 
var quirkyIDs = []string{ "+//silmaril//dtd html pro v0r11 19970101//", "-//advasoft ltd//dtd html 3.0 aswedit + extensions//", "-//as//dtd html 3.0 aswedit + extensions//", "-//ietf//dtd html 2.0 level 1//", "-//ietf//dtd html 2.0 level 2//", "-//ietf//dtd html 2.0 strict level 1//", "-//ietf//dtd html 2.0 strict level 2//", "-//ietf//dtd html 2.0 strict//", "-//ietf//dtd html 2.0//", "-//ietf//dtd html 2.1e//", "-//ietf//dtd html 3.0//", "-//ietf//dtd html 3.2 final//", "-//ietf//dtd html 3.2//", "-//ietf//dtd html 3//", "-//ietf//dtd html level 0//", "-//ietf//dtd html level 1//", "-//ietf//dtd html level 2//", "-//ietf//dtd html level 3//", "-//ietf//dtd html strict level 0//", "-//ietf//dtd html strict level 1//", "-//ietf//dtd html strict level 2//", "-//ietf//dtd html strict level 3//", "-//ietf//dtd html strict//", "-//ietf//dtd html//", "-//metrius//dtd metrius presentational//", "-//microsoft//dtd internet explorer 2.0 html strict//", "-//microsoft//dtd internet explorer 2.0 html//", "-//microsoft//dtd internet explorer 2.0 tables//", "-//microsoft//dtd internet explorer 3.0 html strict//", "-//microsoft//dtd internet explorer 3.0 html//", "-//microsoft//dtd internet explorer 3.0 tables//", "-//netscape comm. corp.//dtd html//", "-//netscape comm. 
corp.//dtd strict html//", "-//o'reilly and associates//dtd html 2.0//", "-//o'reilly and associates//dtd html extended 1.0//", "-//o'reilly and associates//dtd html extended relaxed 1.0//", "-//softquad software//dtd hotmetal pro 6.0::19990601::extensions to html 4.0//", "-//softquad//dtd hotmetal pro 4.0::19971010::extensions to html 4.0//", "-//spyglass//dtd html 2.0 extended//", "-//sq//dtd html 2.0 hotmetal + extensions//", "-//sun microsystems corp.//dtd hotjava html//", "-//sun microsystems corp.//dtd hotjava strict html//", "-//w3c//dtd html 3 1995-03-24//", "-//w3c//dtd html 3.2 draft//", "-//w3c//dtd html 3.2 final//", "-//w3c//dtd html 3.2//", "-//w3c//dtd html 3.2s draft//", "-//w3c//dtd html 4.0 frameset//", "-//w3c//dtd html 4.0 transitional//", "-//w3c//dtd html experimental 19960712//", "-//w3c//dtd html experimental 970421//", "-//w3c//dtd w3 html//", "-//w3o//dtd w3 html 3.0//", "-//webtechs//dtd mozilla html 2.0//", "-//webtechs//dtd mozilla html//", } lxd-2.0.2/dist/src/golang.org/x/net/html/testdata/0000755061062106075000000000000012721405224024310 5ustar00stgraberdomain admins00000000000000lxd-2.0.2/dist/src/golang.org/x/net/html/testdata/go1.html0000644061062106075000000023051412721405224025671 0ustar00stgraberdomain admins00000000000000 Go 1 Release Notes - The Go Programming Language
Documents References Packages The Project Help
The Go Programming Language

Go 1 Release Notes

Introduction to Go 1

Go version 1, Go 1 for short, defines a language and a set of core libraries that provide a stable foundation for creating reliable products, projects, and publications.

The driving motivation for Go 1 is stability for its users. People should be able to write Go programs and expect that they will continue to compile and run without change, on a time scale of years, including in production environments such as Google App Engine. Similarly, people should be able to write books about Go, be able to say which version of Go the book is describing, and have that version number still be meaningful much later.

Code that compiles in Go 1 should, with few exceptions, continue to compile and run throughout the lifetime of that version, even as we issue updates and bug fixes such as Go version 1.1, 1.2, and so on. Other than critical fixes, changes made to the language and library for subsequent releases of Go 1 may add functionality but will not break existing Go 1 programs. The Go 1 compatibility document explains the compatibility guidelines in more detail.

Go 1 is a representation of Go as it is used today, not a wholesale rethinking of the language. We avoided designing new features and instead focused on cleaning up problems and inconsistencies and improving portability. There are a number of changes to the Go language and packages that we had considered for some time and prototyped but not released primarily because they are significant and backwards-incompatible. Go 1 was an opportunity to get them out, which is helpful for the long term, but also means that Go 1 introduces incompatibilities for old programs. Fortunately, the go fix tool can automate much of the work needed to bring programs up to the Go 1 standard.

This document outlines the major changes in Go 1 that will affect programmers updating existing code; its reference point is the prior release, r60 (tagged as r60.3). It also explains how to update code from r60 to run under Go 1.

Changes to the language

Append

The append predeclared variadic function makes it easy to grow a slice by adding elements to the end. A common use is to add bytes to the end of a byte slice when generating output. However, append did not provide a way to append a string to a []byte, which is another common case.

    greeting := []byte{}
    greeting = append(greeting, []byte("hello ")...)

By analogy with the similar property of copy, Go 1 permits a string to be appended (byte-wise) directly to a byte slice, reducing the friction between strings and byte slices. The conversion is no longer necessary:

    greeting = append(greeting, "world"...)

Updating: This is a new feature, so existing code needs no changes.

Close

The close predeclared function provides a mechanism for a sender to signal that no more values will be sent. It is important to the implementation of for range loops over channels and is helpful in other situations. Partly by design and partly because of race conditions that can occur otherwise, it is intended for use only by the goroutine sending on the channel, not by the goroutine receiving data. However, before Go 1 there was no compile-time checking that close was being used correctly.

To close this gap, at least in part, Go 1 disallows close on receive-only channels. Attempting to close such a channel is a compile-time error.

    var c chan int
    var csend chan<- int = c
    var crecv <-chan int = c
    close(c)     // legal
    close(csend) // legal
    close(crecv) // illegal

Updating: Existing code that attempts to close a receive-only channel was erroneous even before Go 1 and should be fixed. The compiler will now reject such code.

Composite literals

In Go 1, a composite literal of array, slice, or map type can elide the type specification for the elements' initializers if they are of pointer type. All four of the initializations in this example are legal; the last one was illegal before Go 1.

    type Date struct {
        month string
        day   int
    }
    // Struct values, fully qualified; always legal.
    holiday1 := []Date{
        Date{"Feb", 14},
        Date{"Nov", 11},
        Date{"Dec", 25},
    }
    // Struct values, type name elided; always legal.
    holiday2 := []Date{
        {"Feb", 14},
        {"Nov", 11},
        {"Dec", 25},
    }
    // Pointers, fully qualified, always legal.
    holiday3 := []*Date{
        &Date{"Feb", 14},
        &Date{"Nov", 11},
        &Date{"Dec", 25},
    }
    // Pointers, type name elided; legal in Go 1.
    holiday4 := []*Date{
        {"Feb", 14},
        {"Nov", 11},
        {"Dec", 25},
    }

Updating: This change has no effect on existing code, but the command gofmt -s applied to existing source will, among other things, elide explicit element types wherever permitted.

Goroutines during init

The old language defined that go statements executed during initialization created goroutines but that they did not begin to run until initialization of the entire program was complete. This introduced clumsiness in many places and, in effect, limited the utility of the init construct: if it was possible for another package to use the library during initialization, the library was forced to avoid goroutines. This design was done for reasons of simplicity and safety but, as our confidence in the language grew, it seemed unnecessary. Running goroutines during initialization is no more complex or unsafe than running them during normal execution.

In Go 1, code that uses goroutines can be called from init routines and global initialization expressions without introducing a deadlock.

    var PackageGlobal int

    func init() {
        c := make(chan int)
        go initializationFunction(c)
        PackageGlobal = <-c
    }

Updating: This is a new feature, so existing code needs no changes, although it's possible that code that depends on goroutines not starting before main will break. There was no such code in the standard repository.

The rune type

The language spec allows the int type to be 32 or 64 bits wide, but current implementations set int to 32 bits even on 64-bit platforms. It would be preferable to have int be 64 bits on 64-bit platforms. (There are important consequences for indexing large slices.) However, this change would waste space when processing Unicode characters with the old language because the int type was also used to hold Unicode code points: each code point would waste an extra 32 bits of storage if int grew from 32 bits to 64.

To make changing to 64-bit int feasible, Go 1 introduces a new basic type, rune, to represent individual Unicode code points. It is an alias for int32, analogous to byte as an alias for uint8.

Character literals such as 'a', '่ชž', and '\u0345' now have default type rune, analogous to 1.0 having default type float64. A variable initialized to a character constant will therefore have type rune unless otherwise specified.

Libraries have been updated to use rune rather than int when appropriate. For instance, the functions unicode.ToLower and relatives now take and return a rune.

    delta := 'ฮด' // delta has type rune.
    var DELTA rune
    DELTA = unicode.ToUpper(delta)
    epsilon := unicode.ToLower(DELTA + 1)
    if epsilon != 'ฮด'+1 {
        log.Fatal("inconsistent casing for Greek")
    }

Updating: Most source code will be unaffected by this because the type inference from := initializers introduces the new type silently, and it propagates from there. Some code may get type errors that a trivial conversion will resolve.

The error type

Go 1 introduces a new built-in type, error, which has the following definition:

    type error interface {
        Error() string
    }

Since the consequences of this type are all in the package library, it is discussed below.

Deleting from maps

In the old language, to delete the entry with key k from map m, one wrote the statement,

    m[k] = value, false

This syntax was a peculiar special case, the only two-to-one assignment. It required passing a value (usually ignored) that is evaluated but discarded, plus a boolean that was nearly always the constant false. It did the job but was odd and a point of contention.

In Go 1, that syntax has gone; instead there is a new built-in function, delete. The call

    delete(m, k)

will delete the map entry retrieved by the expression m[k]. There is no return value. Deleting a non-existent entry is a no-op.
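As an illustration (using a small, hypothetical map):

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1, "b": 2}
	delete(m, "a")
	delete(m, "missing") // deleting a non-existent entry is a no-op
	fmt.Println(len(m), m["b"]) // 1 2
}
```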

Updating: Running go fix will convert expressions of the form m[k] = value, false into delete(m, k) when it is clear that the ignored value can be safely discarded from the program and false refers to the predefined boolean constant. The fix tool will flag other uses of the syntax for inspection by the programmer.

Iterating in maps

The old language specification did not define the order of iteration for maps, and in practice it differed across hardware platforms. This caused tests that iterated over maps to be fragile and non-portable, with the unpleasant property that a test might always pass on one machine but break on another.

In Go 1, the order in which elements are visited when iterating over a map using a for range statement is defined to be unpredictable, even if the same loop is run multiple times with the same map. Code should not assume that the elements are visited in any particular order.

This change means that code that depends on iteration order is very likely to break early and be fixed long before it becomes a problem. Just as important, it allows the map implementation to ensure better map balancing even when programs are using range loops to select an element from a map.

    m := map[string]int{"Sunday": 0, "Monday": 1}
    for name, value := range m {
        // This loop should not assume Sunday will be visited first.
        f(name, value)
    }

Updating: This is one change where tools cannot help. Most existing code will be unaffected, but some programs may break or misbehave; we recommend manual checking of all range statements over maps to verify they do not depend on iteration order. There were a few such examples in the standard repository; they have been fixed. Note that it was already incorrect to depend on the iteration order, which was unspecified. This change codifies the unpredictability.

Multiple assignment

The language specification has long guaranteed that in assignments the right-hand-side expressions are all evaluated before any left-hand-side expressions are assigned. To guarantee predictable behavior, Go 1 refines the specification further.

If the left-hand side of the assignment statement contains expressions that require evaluation, such as function calls or array indexing operations, these will all be done using the usual left-to-right rule before any variables are assigned their value. Once everything is evaluated, the actual assignments proceed in left-to-right order.

These examples illustrate the behavior.

    sa := []int{1, 2, 3}
    i := 0
    i, sa[i] = 1, 2 // sets i = 1, sa[0] = 2

    sb := []int{1, 2, 3}
    j := 0
    sb[j], j = 2, 1 // sets sb[0] = 2, j = 1

    sc := []int{1, 2, 3}
    sc[0], sc[0] = 1, 2 // sets sc[0] = 1, then sc[0] = 2 (so sc[0] = 2 at end)

Updating: This is one change where tools cannot help, but breakage is unlikely. No code in the standard repository was broken by this change, and code that depended on the previous unspecified behavior was already incorrect.

Returns and shadowed variables

A common mistake is to use return (without arguments) after an assignment to a variable that has the same name as a result variable but is not the same variable. This situation is called shadowing: the result variable has been shadowed by another variable with the same name declared in an inner scope.

In functions with named return values, the Go 1 compilers disallow return statements without arguments if any of the named return values is shadowed at the point of the return statement. (It isn't part of the specification, because this is one area we are still exploring; the situation is analogous to the compilers rejecting functions that do not end with an explicit return statement.)

This function implicitly returns a shadowed return value and will be rejected by the compiler:

    func Bug() (i, j, k int) {
        for i = 0; i < 5; i++ {
            for j := 0; j < 5; j++ { // Redeclares j.
                k += i*j
                if k > 100 {
                    return // Rejected: j is shadowed here.
                }
            }
        }
        return // OK: j is not shadowed here.
    }

Updating: Code that shadows return values in this way will be rejected by the compiler and will need to be fixed by hand. The few cases that arose in the standard repository were mostly bugs.

Copying structs with unexported fields

The old language did not allow a package to make a copy of a struct value containing unexported fields belonging to a different package. There was, however, a required exception for a method receiver; also, the implementations of copy and append have never honored the restriction.

Go 1 will allow packages to copy struct values containing unexported fields from other packages. Besides resolving the inconsistency, this change admits a new kind of API: a package can return an opaque value without resorting to a pointer or interface. The new implementations of time.Time and reflect.Value are examples of types taking advantage of this new property.

As an example, if package p includes the definitions,

    type Struct struct {
        Public int
        secret int
    }
    func NewStruct(a int) Struct {  // Note: not a pointer.
        return Struct{a, f(a)}
    }
    func (s Struct) String() string {
        return fmt.Sprintf("{%d (secret %d)}", s.Public, s.secret)
    }

a package that imports p can assign and copy values of type p.Struct at will. Behind the scenes the unexported fields will be assigned and copied just as if they were exported, but the client code will never be aware of them. The code

    import "p"

    myStruct := p.NewStruct(23)
    copyOfMyStruct := myStruct
    fmt.Println(myStruct, copyOfMyStruct)

will show that the secret field of the struct has been copied to the new value.

Updating: This is a new feature, so existing code needs no changes.

Equality

Before Go 1, the language did not define equality on struct and array values. This meant, among other things, that structs and arrays could not be used as map keys. On the other hand, Go did define equality on function and map values. Function equality was problematic in the presence of closures (when are two closures equal?) while map equality compared pointers, not the maps' content, which was usually not what the user would want.

Go 1 addressed these issues. First, structs and arrays can be compared for equality and inequality (== and !=), and therefore be used as map keys, provided they are composed from elements for which equality is also defined, using element-wise comparison.

    type Day struct {
        long  string
        short string
    }
    Christmas := Day{"Christmas", "XMas"}
    Thanksgiving := Day{"Thanksgiving", "Turkey"}
    holiday := map[Day]bool{
        Christmas:    true,
        Thanksgiving: true,
    }
    fmt.Printf("Christmas is a holiday: %t\n", holiday[Christmas])

Second, Go 1 removes the definition of equality for function values, except for comparison with nil. Finally, map equality is gone too, also except for comparison with nil.

Note that equality is still undefined for slices, for which the calculation is in general infeasible. Also note that the ordered comparison operators (< <= > >=) are still undefined for structs and arrays.

Updating: Struct and array equality is a new feature, so existing code needs no changes. Existing code that depends on function or map equality will be rejected by the compiler and will need to be fixed by hand. Few programs will be affected, but the fix may require some redesign.

The package hierarchy

Go 1 addresses many deficiencies in the old standard library and cleans up a number of packages, making them more internally consistent and portable.

This section describes how the packages have been rearranged in Go 1. Some have moved, some have been renamed, some have been deleted. New packages are described in later sections.

The package hierarchy

Go 1 has a rearranged package hierarchy that groups related items into subdirectories. For instance, utf8 and utf16 now occupy subdirectories of unicode. Also, some packages have moved into subrepositories of code.google.com/p/go while others have been deleted outright.

Old path New path

asn1 encoding/asn1
csv encoding/csv
gob encoding/gob
json encoding/json
xml encoding/xml

exp/template/html html/template

big math/big
cmath math/cmplx
rand math/rand

http net/http
http/cgi net/http/cgi
http/fcgi net/http/fcgi
http/httptest net/http/httptest
http/pprof net/http/pprof
mail net/mail
rpc net/rpc
rpc/jsonrpc net/rpc/jsonrpc
smtp net/smtp
url net/url

exec os/exec

scanner text/scanner
tabwriter text/tabwriter
template text/template
template/parse text/template/parse

utf8 unicode/utf8
utf16 unicode/utf16

Note that the package names for the old cmath and exp/template/html packages have changed to cmplx and template.

Updating: Running go fix will update all imports and package renames for packages that remain inside the standard repository. Programs that import packages that are no longer in the standard repository will need to be edited by hand.

The package tree exp

Because they are not standardized, the packages under the exp directory will not be available in the standard Go 1 release distributions, although they will be available in source code form in the repository for developers who wish to use them.

Several packages have moved under exp at the time of Go 1's release:

  • ebnf
  • html†
  • go/types

(†The EscapeString and UnescapeString functions remain in package html.)

All these packages are available under the same names, with the prefix exp/: exp/ebnf etc.

Also, the utf8.String type has been moved to its own package, exp/utf8string.

Finally, the gotype command now resides in exp/gotype, while ebnflint is now in exp/ebnflint. If they are installed, they now reside in $GOROOT/bin/tool.

Updating: Code that uses packages in exp will need to be updated by hand, or else compiled from an installation that has exp available. The go fix tool or the compiler will complain about such uses.

The package tree old

Because they are deprecated, the packages under the old directory will not be available in the standard Go 1 release distributions, although they will be available in source code form for developers who wish to use them.

The packages in their new locations are:

  • old/netchan
  • old/regexp
  • old/template

Updating: Code that uses packages now in old will need to be updated by hand, or else compiled from an installation that has old available. The go fix tool will warn about such uses.

Deleted packages

Go 1 deletes several packages outright:

  • container/vector
  • exp/datafmt
  • go/typechecker
  • try

and also the command gotry.

Updating: Code that uses container/vector should be updated to use slices directly. See the Go Language Community Wiki for some suggestions. Code that uses the other packages (there should be almost zero) will need to be rethought.

Packages moving to subrepositories

Go 1 has moved a number of packages into other repositories, usually sub-repositories of the main Go repository. This table lists the old and new import paths:
Old New

crypto/bcrypt code.google.com/p/go.crypto/bcrypt
crypto/blowfish code.google.com/p/go.crypto/blowfish
crypto/cast5 code.google.com/p/go.crypto/cast5
crypto/md4 code.google.com/p/go.crypto/md4
crypto/ocsp code.google.com/p/go.crypto/ocsp
crypto/openpgp code.google.com/p/go.crypto/openpgp
crypto/openpgp/armor code.google.com/p/go.crypto/openpgp/armor
crypto/openpgp/elgamal code.google.com/p/go.crypto/openpgp/elgamal
crypto/openpgp/errors code.google.com/p/go.crypto/openpgp/errors
crypto/openpgp/packet code.google.com/p/go.crypto/openpgp/packet
crypto/openpgp/s2k code.google.com/p/go.crypto/openpgp/s2k
crypto/ripemd160 code.google.com/p/go.crypto/ripemd160
crypto/twofish code.google.com/p/go.crypto/twofish
crypto/xtea code.google.com/p/go.crypto/xtea
exp/ssh code.google.com/p/go.crypto/ssh

image/bmp code.google.com/p/go.image/bmp
image/tiff code.google.com/p/go.image/tiff

net/dict code.google.com/p/go.net/dict
net/websocket code.google.com/p/go.net/websocket
exp/spdy code.google.com/p/go.net/spdy

encoding/git85 code.google.com/p/go.codereview/git85
patch code.google.com/p/go.codereview/patch

exp/wingui code.google.com/p/gowingui

Updating: Running go fix will update imports of these packages to use the new import paths. Installations that depend on these packages will need to install them using a go get command.

Major changes to the library

This section describes significant changes to the core libraries, the ones that affect the most programs.

The error type and errors package

The placement of os.Error in package os is mostly historical: errors first came up when implementing package os, and they seemed system-related at the time. Since then it has become clear that errors are more fundamental than the operating system. For example, it would be nice to use errors in packages that os depends on, like syscall. Also, having Error in os introduces many dependencies on os that would otherwise not exist.

Go 1 solves these problems by introducing a built-in error interface type and a separate errors package (analogous to bytes and strings) that contains utility functions. It replaces os.NewError with errors.New, giving errors a more central place in the environment.

So that the widely used String method does not cause accidental satisfaction of the error interface, the error interface instead uses the name Error for that method:

    type error interface {
        Error() string
    }

The fmt library automatically invokes Error, as it already does for String, for easy printing of error values.

    type SyntaxError struct {
        File    string
        Line    int
        Message string
    }

    func (se *SyntaxError) Error() string {
        return fmt.Sprintf("%s:%d: %s", se.File, se.Line, se.Message)
    }

All standard packages have been updated to use the new interface; the old os.Error is gone.

A new package, errors, contains the function

    func New(text string) error

to turn a string into an error. It replaces the old os.NewError.

    var ErrSyntax = errors.New("syntax error")

Updating: Running go fix will update almost all code affected by the change. Code that defines error types with a String method will need to be updated by hand to rename the methods to Error.

System call errors

The old syscall package, which predated os.Error (and just about everything else), returned errors as int values. In turn, the os package forwarded many of these errors, such as EINVAL, but using a different set of errors on each platform. This behavior was unpleasant and unportable.

In Go 1, the syscall package instead returns an error for system call errors. On Unix, the implementation is done by a syscall.Errno type that satisfies error and replaces the old os.Errno.

The changes affecting os.EINVAL and relatives are described elsewhere.

Updating: Running go fix will update almost all code affected by the change. Regardless, most code should use the os package rather than syscall and so will be unaffected.

Time

Time is always a challenge to support well in a programming language. The old Go time package had int64 units, no real type safety, and no distinction between absolute times and durations.

One of the most sweeping changes in the Go 1 library is therefore a complete redesign of the time package. Instead of an integer number of nanoseconds as an int64, and a separate *time.Time type to deal with human units such as hours and years, there are now two fundamental types: time.Time (a value, so the * is gone), which represents a moment in time; and time.Duration, which represents an interval. Both have nanosecond resolution. A Time can represent any time into the ancient past and remote future, while a Duration can span plus or minus only about 290 years. There are methods on these types, plus a number of helpful predefined constant durations such as time.Second.

Among the new methods are things like Time.Add, which adds a Duration to a Time, and Time.Sub, which subtracts two Times to yield a Duration.

The most important semantic change is that the Unix epoch (Jan 1, 1970) is now relevant only for those functions and methods that mention Unix: time.Unix and the Unix and UnixNano methods of the Time type. In particular, time.Now returns a time.Time value rather than, in the old API, an integer nanosecond count since the Unix epoch.

    // sleepUntil sleeps until the specified time. It returns immediately if it's too late.
    func sleepUntil(wakeup time.Time) {
        now := time.Now() // A Time.
        if !wakeup.After(now) {
            return
        }
        delta := wakeup.Sub(now) // A Duration.
        fmt.Printf("Sleeping for %.3fs\n", delta.Seconds())
        time.Sleep(delta)
    }

The new types, methods, and constants have been propagated through all the standard packages that use time, such as os and its representation of file time stamps.

Updating: The go fix tool will update many uses of the old time package to use the new types and methods, although it does not replace values such as 1e9 representing nanoseconds per second. Also, because of type changes in some of the values that arise, some of the expressions rewritten by the fix tool may require further hand editing; in such cases the rewrite will include the correct function or method for the old functionality, but may have the wrong type or require further analysis.

Minor changes to the library

This section describes smaller changes, such as those to less commonly used packages or that affect few programs beyond the need to run go fix. This category includes packages that are new in Go 1. Collectively they improve portability, regularize behavior, and make the interfaces more modern and Go-like.

The archive/zip package

In Go 1, *zip.Writer no longer has a Write method. Its presence was a mistake.

Updating: What little code is affected will be caught by the compiler and must be updated by hand.

The bufio package

In Go 1, bufio.NewReaderSize and bufio.NewWriterSize functions no longer return an error for invalid sizes. If the argument size is too small or invalid, it is adjusted.

Updating: Running go fix will update calls that assign the error to _. Calls that aren't fixed will be caught by the compiler and must be updated by hand.

The compress/flate, compress/gzip and compress/zlib packages

In Go 1, the NewWriterXxx functions in compress/flate, compress/gzip and compress/zlib all return (*Writer, error) if they take a compression level, and *Writer otherwise. Package gzip's Compressor and Decompressor types have been renamed to Writer and Reader. Package flate's WrongValueError type has been removed.

Updating: Running go fix will update old names and calls that assign the error to _. Calls that aren't fixed will be caught by the compiler and must be updated by hand.

The crypto/aes and crypto/des packages

In Go 1, the Reset method has been removed. Go does not guarantee that memory is not copied and therefore this method was misleading.

The cipher-specific types *aes.Cipher, *des.Cipher, and *des.TripleDESCipher have been removed in favor of cipher.Block.

Updating: Remove the calls to Reset. Replace uses of the specific cipher types with cipher.Block.

The crypto/elliptic package

In Go 1, elliptic.Curve has been made an interface to permit alternative implementations. The curve parameters have been moved to the elliptic.CurveParams structure.

Updating: Existing users of *elliptic.Curve will need to change to simply elliptic.Curve. Calls to Marshal, Unmarshal and GenerateKey are now functions in crypto/elliptic that take an elliptic.Curve as their first argument.

The crypto/hmac package

In Go 1, the hash-specific functions, such as hmac.NewMD5, have been removed from crypto/hmac. Instead, hmac.New takes a function that returns a hash.Hash, such as md5.New.

Updating: Running go fix will perform the needed changes.

The crypto/x509 package

In Go 1, the CreateCertificate and CreateCRL functions in crypto/x509 have been altered to take an interface{} where they previously took a *rsa.PublicKey or *rsa.PrivateKey. This will allow other public key algorithms to be implemented in the future.

Updating: No changes will be needed.

The encoding/binary package

In Go 1, the binary.TotalSize function has been replaced by Size, which takes an interface{} argument rather than a reflect.Value.

Updating: What little code is affected will be caught by the compiler and must be updated by hand.

The encoding/xml package

In Go 1, the xml package has been brought closer in design to the other marshaling packages such as encoding/gob.

The old Parser type is renamed Decoder and has a new Decode method. An Encoder type was also introduced.

The functions Marshal and Unmarshal work with []byte values now. To work with streams, use the new Encoder and Decoder types.

When marshaling or unmarshaling values, the format of supported flags in field tags has changed to be closer to the json package (`xml:"name,flag"`). The matching done between field tags, field names, and the XML attribute and element names is now case-sensitive. The XMLName field tag, if present, must also match the name of the XML element being marshaled.

Updating: Running go fix will update most uses of the package except for some calls to Unmarshal. Special care must be taken with field tags, since the fix tool will not update them and if not fixed by hand they will misbehave silently in some cases. For example, the old "attr" is now written ",attr" while plain "attr" remains valid but with a different meaning.

The expvar package

In Go 1, the RemoveAll function has been removed. The Iter function and Iter method on *Map have been replaced by Do and (*Map).Do.

Updating: Most code using expvar will not need changing. The rare code that used Iter can be updated to pass a closure to Do to achieve the same effect.

The flag package

In Go 1, the interface flag.Value has changed slightly. The Set method now returns an error instead of a bool to indicate success or failure.

There is also a new kind of flag, Duration, to support argument values specifying time intervals. Values for such flags must be given units, just as time.Duration formats them: 10s, 1h30m, etc.

    var timeout = flag.Duration("timeout", 30*time.Second, "how long to wait for completion")

Updating: Programs that implement their own flags will need minor manual fixes to update their Set methods. The Duration flag is new and affects no existing code.

The go/* packages

Several packages under go have slightly revised APIs.

A concrete Mode type was introduced for configuration mode flags in the packages go/scanner, go/parser, go/printer, and go/doc.

The modes AllowIllegalChars and InsertSemis have been removed from the go/scanner package. They were mostly useful for scanning text other than Go source files. Instead, the text/scanner package should be used for that purpose.

The ErrorHandler provided to the scanner's Init method is now simply a function rather than an interface. The ErrorVector type has been removed in favor of the (existing) ErrorList type, and the ErrorVector methods have been migrated. Instead of embedding an ErrorVector in a client of the scanner, now a client should maintain an ErrorList.

The set of parse functions provided by the go/parser package has been reduced to the primary parse function ParseFile, and a couple of convenience functions ParseDir and ParseExpr.

The go/printer package supports an additional configuration mode SourcePos; if set, the printer will emit //line comments such that the generated output contains the original source code position information. The new type CommentedNode can be used to provide comments associated with an arbitrary ast.Node (until now only ast.File carried comment information).

The type names of the go/doc package have been streamlined by removing the Doc suffix: PackageDoc is now Package, ValueDoc is Value, etc. Also, all types now consistently have a Name field (or Names, in the case of type Value) and Type.Factories has become Type.Funcs. Instead of calling doc.NewPackageDoc(pkg, importpath), documentation for a package is created with:

    doc.New(pkg, importpath, mode)

where the new mode parameter specifies the operation mode: if set to AllDecls, all declarations (not just exported ones) are considered. The function NewFileDoc was removed, and the function CommentText has become the method Text of ast.CommentGroup.

In package go/token, the token.FileSet method Files (which originally returned a channel of *token.Files) has been replaced with the iterator Iterate that accepts a function argument instead.

In package go/build, the API has been nearly completely replaced. The package still computes Go package information but it does not run the build: the Cmd and Script types are gone. (To build code, use the new go command instead.) The DirInfo type is now named Package. FindTree and ScanDir are replaced by Import and ImportDir.

Updating: Code that uses packages in go will have to be updated by hand; the compiler will reject incorrect uses. Templates used in conjunction with any of the go/doc types may need manual fixes; the renamed fields will lead to run-time errors.

The hash package

In Go 1, the definition of hash.Hash includes a new method, BlockSize. This new method is used primarily in the cryptographic libraries.

The Sum method of the hash.Hash interface now takes a []byte argument, to which the hash value will be appended. The previous behavior can be recreated by adding a nil argument to the call.

Updating: Existing implementations of hash.Hash will need to add a BlockSize method. Hashes that process the input one byte at a time can implement BlockSize to return 1. Running go fix will update calls to the Sum methods of the various implementations of hash.Hash.

The http package

In Go 1 the http package is refactored, putting some of the utilities into a httputil subdirectory. These pieces are only rarely needed by HTTP clients. The affected items are:

  • ClientConn
  • DumpRequest
  • DumpRequestOut
  • DumpResponse
  • NewChunkedReader
  • NewChunkedWriter
  • NewClientConn
  • NewProxyClientConn
  • NewServerConn
  • NewSingleHostReverseProxy
  • ReverseProxy
  • ServerConn

The Request.RawURL field has been removed; it was a historical artifact.

The Handle and HandleFunc functions, and the similarly-named methods of ServeMux, now panic if an attempt is made to register the same pattern twice.

Updating: Running go fix will update the few programs that are affected except for uses of RawURL, which must be fixed by hand.

The image package

The image package has had a number of minor changes, rearrangements and renamings.

Most of the color handling code has been moved into its own package, image/color. For the elements that moved, a symmetry arises; for instance, each pixel of an image.RGBA is a color.RGBA.

The old image/ycbcr package has been folded, with some renamings, into the image and image/color packages.

The old image.ColorImage type is still in the image package but has been renamed image.Uniform, while image.Tiled has been removed.

This table lists the renamings.

Old New

image.Color color.Color
image.ColorModel color.Model
image.ColorModelFunc color.ModelFunc
image.PalettedColorModel color.Palette

image.RGBAColor color.RGBA
image.RGBA64Color color.RGBA64
image.NRGBAColor color.NRGBA
image.NRGBA64Color color.NRGBA64
image.AlphaColor color.Alpha
image.Alpha16Color color.Alpha16
image.GrayColor color.Gray
image.Gray16Color color.Gray16

image.RGBAColorModel color.RGBAModel
image.RGBA64ColorModel color.RGBA64Model
image.NRGBAColorModel color.NRGBAModel
image.NRGBA64ColorModel color.NRGBA64Model
image.AlphaColorModel color.AlphaModel
image.Alpha16ColorModel color.Alpha16Model
image.GrayColorModel color.GrayModel
image.Gray16ColorModel color.Gray16Model

ycbcr.RGBToYCbCr color.RGBToYCbCr
ycbcr.YCbCrToRGB color.YCbCrToRGB
ycbcr.YCbCrColorModel color.YCbCrModel
ycbcr.YCbCrColor color.YCbCr
ycbcr.YCbCr image.YCbCr

ycbcr.SubsampleRatio444 image.YCbCrSubsampleRatio444
ycbcr.SubsampleRatio422 image.YCbCrSubsampleRatio422
ycbcr.SubsampleRatio420 image.YCbCrSubsampleRatio420

image.ColorImage image.Uniform

The image package's New functions (NewRGBA, NewRGBA64, etc.) take an image.Rectangle as an argument instead of four integers.

Finally, there are new predefined color.Color variables color.Black, color.White, color.Opaque and color.Transparent.

Updating: Running go fix will update almost all code affected by the change.

The log/syslog package

In Go 1, the syslog.NewLogger function returns an error as well as a log.Logger.

Updating: What little code is affected will be caught by the compiler and must be updated by hand.

The mime package

In Go 1, the FormatMediaType function of the mime package has been simplified to make it consistent with ParseMediaType. It now takes "text/html" rather than "text" and "html".

Updating: What little code is affected will be caught by the compiler and must be updated by hand.

The net package

In Go 1, the various SetTimeout, SetReadTimeout, and SetWriteTimeout methods have been replaced with SetDeadline, SetReadDeadline, and SetWriteDeadline, respectively. Rather than taking a timeout value in nanoseconds that apply to any activity on the connection, the new methods set an absolute deadline (as a time.Time value) after which reads and writes will time out and no longer block.

There are also new functions net.DialTimeout to simplify timing out dialing a network address and net.ListenMulticastUDP to allow multicast UDP to listen concurrently across multiple listeners. The net.ListenMulticastUDP function replaces the old JoinGroup and LeaveGroup methods.

Updating: Code that uses the old methods will fail to compile and must be updated by hand. The semantic change makes it difficult for the fix tool to update automatically.

The os package

The Time function has been removed; callers should use the Time type from the time package.

The Exec function has been removed; callers should use Exec from the syscall package, where available.

The ShellExpand function has been renamed to ExpandEnv.

The NewFile function now takes a uintptr fd, instead of an int. The Fd method on files now also returns a uintptr.

There are no longer error constants such as EINVAL in the os package, since the set of values varied with the underlying operating system. There are new portable functions like IsPermission to test common error properties, plus a few new error values with more Go-like names, such as ErrPermission and ErrNoEnv.

The Getenverror function has been removed. To distinguish between a non-existent environment variable and an empty string, use os.Environ or syscall.Getenv.

The Process.Wait method has dropped its option argument and the associated constants are gone from the package. Also, the function Wait is gone; only the method of the Process type persists.

The Waitmsg type returned by Process.Wait has been replaced with a more portable ProcessState type with accessor methods to recover information about the process. Because of changes to Wait, the ProcessState value always describes an exited process. Portability concerns simplified the interface in other ways, but the values returned by the ProcessState.Sys and ProcessState.SysUsage methods can be type-asserted to underlying system-specific data structures such as syscall.WaitStatus and syscall.Rusage on Unix.

Updating: Running go fix will drop a zero argument to Process.Wait. All other changes will be caught by the compiler and must be updated by hand.

The os.FileInfo type

Go 1 redefines the os.FileInfo type, changing it from a struct to an interface:

    type FileInfo interface {
        Name() string       // base name of the file
        Size() int64        // length in bytes
        Mode() FileMode     // file mode bits
        ModTime() time.Time // modification time
        IsDir() bool        // abbreviation for Mode().IsDir()
        Sys() interface{}   // underlying data source (can return nil)
    }

The file mode information has been moved into a subtype called os.FileMode, a simple integer type with IsDir, Perm, and String methods.

The system-specific details of file modes and properties such as (on Unix) i-number have been removed from FileInfo altogether. Instead, each operating system's os package provides an implementation of the FileInfo interface, which has a Sys method that returns the system-specific representation of file metadata. For instance, to discover the i-number of a file on a Unix system, unpack the FileInfo like this:

    fi, err := os.Stat("hello.go")
    if err != nil {
        log.Fatal(err)
    }
    // Check that it's a Unix file.
    unixStat, ok := fi.Sys().(*syscall.Stat_t)
    if !ok {
        log.Fatal("hello.go: not a Unix file")
    }
    fmt.Printf("file i-number: %d\n", unixStat.Ino)

Assuming (which is unwise) that "hello.go" is a Unix file, the i-number expression could be contracted to

    fi.Sys().(*syscall.Stat_t).Ino

The vast majority of uses of FileInfo need only the methods of the standard interface.

The os package no longer contains wrappers for the POSIX errors such as ENOENT. For the few programs that need to verify particular error conditions, there are now the boolean functions IsExist, IsNotExist and IsPermission.

    f, err := os.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
    if os.IsExist(err) {
        log.Printf("%s already exists", name)
    }

Updating: Running go fix will update code that uses the old equivalent of the current os.FileInfo and os.FileMode API. Code that needs system-specific file details will need to be updated by hand. Code that uses the old POSIX error values from the os package will fail to compile and will also need to be updated by hand.

The os/signal package

The os/signal package in Go 1 replaces the Incoming function, which returned a channel that received all incoming signals, with the selective Notify function, which asks for delivery of specific signals on an existing channel.

Updating: Code must be updated by hand. A literal translation of

    c := signal.Incoming()

is

    c := make(chan os.Signal)
    signal.Notify(c) // ask for all signals

but most code should list the specific signals it wants to handle instead:

    c := make(chan os.Signal)
    signal.Notify(c, syscall.SIGHUP, syscall.SIGQUIT)

The path/filepath package

In Go 1, the Walk function of the path/filepath package has been changed to take a function value of type WalkFunc instead of a Visitor interface value. WalkFunc unifies the handling of both files and directories.

    type WalkFunc func(path string, info os.FileInfo, err error) error

The WalkFunc function will be called even for files or directories that could not be opened; in such cases the error argument will describe the failure. If a directory's contents are to be skipped, the function should return the value filepath.SkipDir.

    markFn := func(path string, info os.FileInfo, err error) error {
        if path == "pictures" { // Will skip walking of directory pictures and its contents.
            return filepath.SkipDir
        }
        if err != nil {
            return err
        }
        log.Println(path)
        return nil
    }
    err := filepath.Walk(".", markFn)
    if err != nil {
        log.Fatal(err)
    }

Updating: The change simplifies most code but has subtle consequences, so affected programs will need to be updated by hand. The compiler will catch code using the old interface.

The regexp package

The regexp package has been rewritten. It has the same interface but the specification of the regular expressions it supports has changed from the old "egrep" form to that of RE2.

Updating: Code that uses the package should have its regular expressions checked by hand.

The runtime package

In Go 1, much of the API exported by package runtime has been removed in favor of functionality provided by other packages. Code using the runtime.Type interface or its specific concrete type implementations should now use package reflect. Code using runtime.Semacquire or runtime.Semrelease should use channels or the abstractions in package sync. The runtime.Alloc, runtime.Free, and runtime.Lookup functions, an unsafe API created for debugging the memory allocator, have no replacement.

Before, runtime.MemStats was a global variable holding statistics about memory allocation, and calls to runtime.UpdateMemStats ensured that it was up to date. In Go 1, runtime.MemStats is a struct type, and code should use runtime.ReadMemStats to obtain the current statistics.

The package adds a new function, runtime.NumCPU, that returns the number of CPUs available for parallel execution, as reported by the operating system kernel. Its value can inform the setting of GOMAXPROCS. The runtime.Cgocalls and runtime.Goroutines functions have been renamed to runtime.NumCgoCall and runtime.NumGoroutine.

Updating: Running go fix will update code for the function renamings. Other code will need to be updated by hand.

The strconv package

In Go 1, the strconv package has been significantly reworked to make it more Go-like and less C-like, although Atoi lives on (it's similar to int(ParseInt(x, 10, 0))), as does Itoa(x) (similar to FormatInt(int64(x), 10)). There are also new variants of some of the functions that append to byte slices rather than return strings, to allow control over allocation.

This table summarizes the renamings; see the package documentation for full details.

Old call New call

Atob(x) ParseBool(x)

Atof32(x) ParseFloat(x, 32)§
Atof64(x) ParseFloat(x, 64)
AtofN(x, n) ParseFloat(x, n)

Atoi(x) Atoi(x)
Atoi(x) ParseInt(x, 10, 0)§
Atoi64(x) ParseInt(x, 10, 64)

Atoui(x) ParseUint(x, 10, 0)§
Atoui64(x) ParseUint(x, 10, 64)

Btoi64(x, b) ParseInt(x, b, 64)
Btoui64(x, b) ParseUint(x, b, 64)

Btoa(x) FormatBool(x)

Ftoa32(x, f, p) FormatFloat(float64(x), f, p, 32)
Ftoa64(x, f, p) FormatFloat(x, f, p, 64)
FtoaN(x, f, p, n) FormatFloat(x, f, p, n)

Itoa(x) Itoa(x)
Itoa(x) FormatInt(int64(x), 10)
Itoa64(x) FormatInt(x, 10)

Itob(x, b) FormatInt(int64(x), b)
Itob64(x, b) FormatInt(x, b)

Uitoa(x) FormatUint(uint64(x), 10)
Uitoa64(x) FormatUint(x, 10)

Uitob(x, b) FormatUint(uint64(x), b)
Uitob64(x, b) FormatUint(x, b)

Updating: Running go fix will update almost all code affected by the change.
§ Atoi persists but Atoui and Atof32 do not, so they may require a cast that must be added by hand; the go fix tool will warn about it.

The template packages

The template and exp/template/html packages have moved to text/template and html/template. More significant, the interface to these packages has been simplified. The template language is the same, but the concept of "template set" is gone and the functions and methods of the packages have changed accordingly, often by elimination.

Instead of sets, a Template object may contain multiple named template definitions, in effect constructing name spaces for template invocation. A template can invoke any other template associated with it, but only those templates associated with it. The simplest way to associate templates is to parse them together, something made easier with the new structure of the packages.

Updating: The imports will be updated by the fix tool. Single-template uses will otherwise be largely unaffected. Code that uses multiple templates in concert will need to be updated by hand. The examples in the documentation for text/template can provide guidance.

The testing package

The testing package has a type, B, passed as an argument to benchmark functions. In Go 1, B has new methods, analogous to those of T, enabling logging and failure reporting.

    func BenchmarkSprintf(b *testing.B) {
        // Verify correctness before running benchmark.
        b.StopTimer()
        got := fmt.Sprintf("%x", 23)
        const expect = "17"
        if expect != got {
            b.Fatalf("expected %q; got %q", expect, got)
        }
        b.StartTimer()
        for i := 0; i < b.N; i++ {
            fmt.Sprintf("%x", 23)
        }
    }

Updating: Existing code is unaffected, although benchmarks that use println or panic should be updated to use the new methods.

The testing/script package

The testing/script package has been deleted. It was a dreg.

Updating: No code is likely to be affected.

The unsafe package

In Go 1, the functions unsafe.Typeof, unsafe.Reflect, unsafe.Unreflect, unsafe.New, and unsafe.NewArray have been removed; they duplicated safer functionality provided by package reflect.

Updating: Code using these functions must be rewritten to use package reflect. The changes to encoding/gob and the protocol buffer library may be helpful as examples.

The url package

In Go 1 several fields from the url.URL type were removed or replaced.

The String method now predictably rebuilds an encoded URL string using all of URL's fields as necessary. The resulting string will also no longer have passwords escaped.

The Raw field has been removed. In most cases the String method may be used in its place.

The old RawUserinfo field is replaced by the User field, of type *url.Userinfo. Values of this type may be created using the new url.User and url.UserPassword functions. The EscapeUserinfo and UnescapeUserinfo functions are also gone.

The RawAuthority field has been removed. The same information is available in the Host and User fields.

The RawPath field and the EncodedPath method have been removed. The path information in rooted URLs (with a slash following the scheme) is now available only in decoded form in the Path field. Occasionally, the encoded data may be required to obtain information that was lost in the decoding process. These cases must be handled by accessing the data the URL was built from.

URLs with non-rooted paths, such as "mailto:dev@golang.org?subject=Hi", are also handled differently. The OpaquePath boolean field has been removed and a new Opaque string field introduced to hold the encoded path for such URLs. In Go 1, the cited URL parses as:

    URL{
        Scheme: "mailto",
        Opaque: "dev@golang.org",
        RawQuery: "subject=Hi",
    }

A new RequestURI method was added to URL.

The ParseWithReference function has been renamed to ParseWithFragment.

Updating: Code that uses the old fields will fail to compile and must be updated by hand. The semantic changes make it difficult for the fix tool to update automatically.

The go command

Go 1 introduces the go command, a tool for fetching, building, and installing Go packages and commands. The go command does away with makefiles, instead using Go source code to find dependencies and determine build conditions. Most existing Go programs will no longer require makefiles to be built.

See How to Write Go Code for a primer on the go command and the go command documentation for the full details.

Updating: Projects that depend on the Go project's old makefile-based build infrastructure (Make.pkg, Make.cmd, and so on) should switch to using the go command for building Go code and, if necessary, rewrite their makefiles to perform any auxiliary build tasks.

The cgo command

In Go 1, the cgo command uses a different _cgo_export.h file, which is generated for packages containing //export lines. The _cgo_export.h file now begins with the C preamble comment, so that exported function definitions can use types defined there. This has the effect of compiling the preamble multiple times, so a package using //export must not put function definitions or variable initializations in the C preamble.

Packaged releases

One of the most significant changes associated with Go 1 is the availability of prepackaged, downloadable distributions. They are available for many combinations of architecture and operating system (including Windows) and the list will grow. Installation details are described on the Getting Started page, while the distributions themselves are listed on the downloads page.

Build version go1.0.1.

#errors #document | | | | | | |
#data
#errors #document | | | | | | |
#data #errors #document | | | |
| | | foo="bar" #data
foo #errors #document | | | | "foo" |
| #data

foo #errors #document | | | | |

| "foo" #data

#errors #document | | | | | | |
#data
#errors #document | | | | #data
#errors #document | | | | |
#data
#errors #document | | | | #data
B
#errors #document | | | | | | |
| "B" #data
foo #errors #document | | | | | | |
| "foo" #data
A
B #errors #document | | | | | | |
| "A" | "B" #data
#errors #document | | | | | | |
#data
foo #errors #document | | | | | | |
| "foo" #data #errors #document | | | |
| | | #data
|
#errors #document | | | | | | |
| #data
#errors #document | | | | | | |
| | | ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������lxd-2.0.2/dist/src/golang.org/x/net/html/testdata/webkit/tests18.dat��������������������������������0000644�0610621�0607500�00000010056�12721405224�027604� 0����������������������������������������������������������������������������������������������������ustar�00stgraber������������������������domain admins�������������������0000000�0000000������������������������������������������������������������������������������������������������������������������������������������������������������������������������#data #errors #document | | | | |