==> golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/.gitignore <==

# Platform
.DS_Store

# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so

# Folders
/bin/
/pkg/

# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*

_testmain.go

*.exe
*.test

# Website
website/build/
website/.bundle
website/vendor

.vagrant

==> golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/.travis.yml <==

language: go

go:
  - 1.8

branches:
  only:
    - master

install:
  - make bin
  - make get-tools

script:
  - make test

==> golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/CHANGELOG.md <==

## 0.8.2 (UNRELEASED)

FEATURES:

IMPROVEMENTS:

* agent: Fixed a missing case where gossip would stop flowing to dead nodes
  for a short while. [GH-451]
* agent: Uses the go-sockaddr library to look for private IP addresses, which
  prefers non-loopback private addresses over loopback ones when trying to
  automatically determine the advertise address. [GH-451]
* agent: Properly seeds Go's random number generator using the seed library.
  [GH-451]
* agent: Serf is now built with Go 1.8. [GH-455]
* agent: Improved address comparison during conflict resolution. [GH-433]
* library: Moved close of shutdown channel until after network resources are
  released. [GH-453]
* library: Fixed several race conditions with QueryResponse. [GH-460]

BUG FIXES:

* agent: Added defenses against invalid network coordinates with NaN and Inf
  values. [GH-468]
* agent: Fixed an issue on Windows where "wsarecv" errors were logged when
  clients accessed the RPC interface.
  [GH-479]

## 0.8.1 (February 6, 2017)

IMPROVEMENTS:

* agent: Added support for relaying query responses through N other nodes for
  redundancy. [GH-439]
* agent: Added the ability to tune the broadcast timeout, which might be
  necessary in very large clusters that experience very large, simultaneous
  changes to the cluster. [GH-412]
* agent: Added a checksum to UDP gossip messages to guard against packet
  corruption. [GH-432]
* agent: Added a short window where gossip will still flow to dead nodes so
  that they can more quickly refute. [GH-440]
* build: Serf now builds with Go 1.7.5. [GH-443]

## 0.8 (September 14, 2016)

FEATURES:

* **Lifeguard Updates:** Implemented a new set of feedback controls for the
  gossip layer that help prevent degraded nodes that can't meet the soft
  real-time requirements from erroneously causing flapping in other, healthy
  nodes. This feature tunes itself automatically and requires no
  configuration. [GH-394]

IMPROVEMENTS:

* Modified management of intents to be per-node to avoid intent queue
  overflow errors in large clusters. [GH-402]
* Joins based on a DNS lookup will use TCP and attempt to join with the full
  list of returned addresses. [GH-387]
* Serf's Go dependencies are now vendored using govendor. [GH-383]
* Updated all of Serf's dependencies. [GH-387] [GH-401]
* Moved dist build into a Docker container. [GH-409]

BUG FIXES:

* Updated memberlist to pull in a fix for leaking goroutines when performing
  TCP fallback pings. This affected users with frequent UDP connectivity
  problems.
  [GH-381]

## 0.7 (December 21, 2015)

FEATURES:

* Added a new network tomography subsystem that computes network coordinates
  for nodes in the cluster, which can be used to estimate network round trip
  times between any two nodes; exposes a new `GetCoordinate` API as well as
  a new `serf rtt` command to query RTT interactively.

IMPROVEMENTS:

* Added support for configuring query request size and query response size.
  [GH-346]
* Syslog messages are now filtered by the configured log-level.
* New `statsd_addr` for sending metrics via UDP to statsd.
* Added support for sending telemetry to statsite.
* `serf info` command now displays event handlers. [GH-312]
* Added a `MemberLeave` message to the `EventCh` for a force-leave so
  higher-level applications can handle the leave event.
* Lots of documentation updates.

BUG FIXES:

* Fixed updating cached protocol version of a node when an update event
  fires. [GH-335]
* Fixed a bug where an empty remote state message would cause a crash in
  `MergeRemoteState`.

## 0.6.4 (February 12, 2015)

IMPROVEMENTS:

* Added merge delegate to Serf library to support application-specific logic
  in cluster merging.
* `SERF_RPC_AUTH` environment variable can be used in place of CLI flags.
* Display if encryption is enabled in Serf stats.
* Improved `join` behavior when using DNS resolution.

BUG FIXES:

* Fixed snapshot file compaction on Windows.
* Fixed device binding on Windows.
* Fixed bug with empty keyring.
* Fixed parsing of ports in some cases.
* Fixed stability issues under high churn.

MISC:

* Increased user event size limit to 512 bytes (previously 256).

## 0.6.3 (July 10, 2014)

IMPROVEMENTS:

* Added `statsite_addr` configuration to stream to statsite.

BUG FIXES:

* Fixed issue with mDNS flooding when using IPv4 and IPv6.
* Fixed issue with reloading event handlers.

MISC:

* Improved failure detection reliability under load.
* Reduced fsync() use in snapshot file.
* Improved snapshot file performance.
* Additional logging to help debug flapping.

## 0.6.2 (June 16, 2014)

IMPROVEMENTS:

* Added `syslog_facility` configuration to set facility.

BUG FIXES:

* Fixed memory leak in in-memory stats system.
* Fixed issue that would cause syslog to deadlock.

MISC:

* Fixed missing prefixes on some log messages.
* Docs fixes.

## 0.6.1 (May 29, 2014)

BUG FIXES:

* On Windows, a "failed to decode request header" error will no longer be
  shown on every RPC request.
* Avoided holding a lock which could cause monitor/stream commands to fail
  when an event handler is blocking.
* Fixed conflict response decoding errors.

IMPROVEMENTS:

* Improved agent CLI usage documentation.
* Warn if an event handler is slow, potentially blocking other events.

## 0.6.0 (May 8, 2014)

FEATURES:

* Support for key rotation when using encryption. This adds a new
  `serf keys` command, and a `-keyring-file` configuration. Thanks to
  @ryanuber.
* New `-tags-file` can be specified to persist changes to tags made via the
  RPC interface. Thanks to @ryanuber.
* New `serf info` command to provide operator debugging information, and to
  get info about the local node.
* Added `-retry-join` flag to agent which enables retrying the join until
  success or `-retry-max` attempts have been made.
IMPROVEMENTS:

* New `-rejoin` flag can be used along with a snapshot file to automatically
  rejoin a cluster.
* Agent uses circular buffer to invoke handlers, guards against unbounded
  output lengths.
* Added support for logging to syslog.
* The `SERF_RPC_ADDR` environment variable can be used instead of the
  `-rpc-addr` flag. Thanks to @lalyos. [GH-209]
* `serf query` can now output the results in a JSON format.
* Unknown configuration directives generate an error. [GH-186] Thanks to
  @vincentbernat.

BUG FIXES:

* Fixed environment variables with invalid characters. [GH-200] Thanks to
  @arschles.
* Fixed issue with tag changes with hard restart before failure detection.
* Fixed issue with reconnect when using dynamic ports.

MISC:

* Improved logging of various error messages.
* Improved Debian packaging. Thanks to @vincentbernat.

## 0.5.0 (March 12, 2014)

FEATURES:

* New `query` command provides a request/response mechanism to do real-time
  queries across the cluster. [GH-139]
* Automatic conflict resolution. Serf will detect name conflicts, and use an
  internal query to determine which node is in the minority and perform a
  shutdown. [GH-167] [GH-119]
* New `reachability` command can be used to help diagnose network and
  configuration issues.
* Added `member-reap` event to get notified of when Serf removes a failed or
  left node from the cluster. The reap interval is controlled by
  `reconnect_timeout` and `tombstone_timeout` respectively. [GH-172]

IMPROVEMENTS:

* New Recipes section on the site to share Serf tips. Thanks to @ryanuber.
  [GH-177]
* `members` command has new `-name` filter flag. Thanks to @ryanuber.
  [GH-164]
* New RPC command "members-filtered" to move filtering logic to the agent.
  Thanks to @ryanuber. [GH-149]
* `reconnect_interval` and `reconnect_timeout` can be provided to configure
  agent behavior for attempting to reconnect to failed nodes. [GH-155]
* `tombstone_interval` can be provided to configure the reap time for nodes
  that have gracefully left.
  [GH-172]
* Agent can be provided `rpc_auth` config to require that RPC is
  authenticated. All commands can take a `-rpc-auth` flag now. [GH-148]

BUG FIXES:

* Fixed config folder in Upstart script. Thanks to @llchen223. [GH-174]
* Event handlers are correctly invoked when BusyBox is the shell. [GH-156]
* Event handlers were not being invoked with the correct SERF_TAG_* values
  if tags were changed using the `tags` command. [GH-169]

MISC:

* Support for protocol version 1 (Serf 0.2) has been removed. Serf 0.5
  cannot join a cluster that has members running version 0.2.

## 0.4.5 (February 25, 2014)

FEATURES:

* New `tags` command is available to dynamically update tags without
  reloading the agent. Thanks to @ryanuber. [GH-126]

IMPROVEMENTS:

* Upstart recipe logs output. Thanks to @breerly. [GH-128]
* `members` can filter on any tag. Thanks to @hmrm. [GH-124]
* Added Vagrant demo to make a simple cluster.
* `members` now columnizes the output. Thanks to @ryanuber. [GH-138]
* Agent passes its own environment variables through. Thanks to @mcroydon.
  [GH-142]
* `-iface` flag can be used to bind to interfaces. [GH-145]

BUG FIXES:

* `-config-dir` would cause protocol to be set to 0 if there are no
  configuration files in the directory. [GH-129]
* Event handlers can filter on 'member-update'.
* User event handler appends new line; this was being omitted.

## 0.4.1 (February 3, 2014)

IMPROVEMENTS:

* mDNS service uses the advertise address instead of bind address.

## 0.4.0 (January 31, 2014)

FEATURES:

* Static `role` has been replaced with dynamic tags. Each agent can have
  multiple key/value tags associated using `-tag`. Tags can be updated using
  a SIGHUP and are advertised to the cluster, causing the `member-update`
  event to be triggered. [GH-111] [GH-98]
* Serf can automatically discover peers using mDNS when provided the
  `-discover` flag. In network environments supporting multicast, no
  explicit join is needed to find peers.
  [GH-53]
* Serf collects telemetry information and simple runtime profiling. Stats
  can be dumped to stderr by sending a `USR1` signal to Serf. Windows users
  must use the `BREAK` signal instead. [GH-103]
* `advertise` flag can be used to set an advertise address different from
  the bind address. Used for NAT traversal. Thanks to @benagricola. [GH-93]
* `members` command now takes `-format` flag to specify either text or JSON
  output. Fixed by @ryanuber. [GH-97]

IMPROVEMENTS:

* User payload always appends a newline when invoking a shell script.
* Severity of "Potential blocking operation" reduced to debug to prevent
  spurious messages on slow or busy machines.

BUG FIXES:

* If an agent is restarted with the same bind address but new name, it will
  not respond to the old name, causing the old name to enter the `failed`
  state, instead of having duplicate entries in the `alive` state.
* `leave_on_interrupt` set to false when not specified, if any config file
  is provided. This flag is deprecated in favor of
  `skip_leave_on_interrupt`. [GH-94]

MISC:

* `-role` configuration has been deprecated in favor of `-tag role=foo`. The
  flag is still supported but will generate warnings.
* Support for protocol version 0 (Serf 0.1) has been removed. Serf 0.4
  cannot join a cluster that has members running version 0.1.

## 0.3.0 (December 5, 2013)

FEATURES:

* Dynamic port support; cluster-wide consistent config not necessary.
* Snapshots to automatically rejoin cluster after failure and prevent
  replays. [GH-84] [GH-71]
* Added `profile` config to agent, to support WAN, LAN, and Local modes.
* MsgPack over TCP RPC protocol which can be used to control Serf, send
  events, and receive events with low latency.
* New `leave` CLI command and RPC endpoint to control graceful leaves.
* Signal handling is controllable; graceful leave behavior on SIGINT/SIGTERM
  can be specified.
* SIGHUP can be used to reload configuration.

IMPROVEMENTS:

* Event handler provides Lamport time of user events via SERF_USER_LTIME.
  [GH-68]
* Memberlist encryption overhead has been reduced.
* Filter output of `members` using regular expressions on role and status.
* `replay_on_join` parameter to control replay with `start_join`.
* `monitor` works even if the client is behind a NAT.
* Serf generates warning if binding to public IP without encryption.

BUG FIXES:

* Prevent unbounded transmit queues. [GH-78]
* IPv6 addresses can be bound to. [GH-72]
* Serf join won't hang on a slow/dead node. [GH-70]
* Serf Leave won't block Shutdown. [GH-1]

## 0.2.1 (November 6, 2013)

BUG FIXES:

* Member role and address not updated on re-join. [GH-58]

## 0.2.0 (November 1, 2013)

FEATURES:

* Protocol versioning features so that upgrades can be done safely. See the
  website on upgrading Serf for more info.
* Can now configure Serf with files or directories of files by specifying
  the `-config-file` and/or `-config-dir` flags to the agent.
* New command `serf force-leave` can be used to force a "failed" node to the
  "left" state.
* Serf now supports message encryption and verification so that it can be
  used on untrusted networks. [GH-25]
* The `-join` flag on `serf agent` can be used to join a cluster when
  starting an agent. [GH-42]

IMPROVEMENTS:

* Random staggering of periodic routines to avoid cluster-wide
  synchronization.
* Push/Pull timer automatically slows down as cluster grows to avoid
  congestion.
* Messages are compressed to reduce bandwidth utilization.
* `serf members` now provides node roles in output.
* Joining a cluster will no longer replay all the old events by default, but
  it can with the `-replay` flag.
* User events are coalesced by default, meaning duplicate events (by name)
  within a short period of time are merged.
  [GH-8]

BUG FIXES:

* Event handlers work on Windows now by executing commands through
  `cmd /C`. [GH-37]
* Nodes that previously left and rejoined won't get stuck in 'leaving'
  state. [GH-18]
* Fixed alignment issues on i386 for atomic operations. [GH-20]
* "trace" log level works. [GH-31]

## 0.1.1 (October 23, 2013)

BUG FIXES:

* Default node name is output when "serf agent" is called with no args.
* Remove node from reap list after join so a fast re-join doesn't lose the
  member.

## 0.1.0 (October 23, 2013)

* Initial release

==> golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/GNUmakefile <==

GOTOOLS = github.com/mitchellh/gox github.com/kardianos/govendor
VERSION = $(shell awk -F\" '/^const Version/ { print $$2; exit }' cmd/serf/version.go)
GITSHA:=$(shell git rev-parse HEAD)
GITBRANCH:=$(shell git symbolic-ref --short HEAD 2>/dev/null)

default:: test

# bin generates the releasable binaries
bin:: tools
	@sh -c "'$(CURDIR)/scripts/build.sh'"

# cov generates the coverage output
cov:: tools
	gocov test ./... | gocov-html > /tmp/coverage.html
	open /tmp/coverage.html

# dev creates binaries for testing locally - these are put into ./bin and
# $GOPATH
dev::
	@SERF_DEV=1 sh -c "'$(CURDIR)/scripts/build.sh'"

# dist creates the binaries for distribution
dist::
	@sh -c "'$(CURDIR)/scripts/dist.sh' $(VERSION)"

get-tools::
	go get -u -v $(GOTOOLS)

# subnet sets up the required subnet for testing on darwin (osx) - you must
# run this before running other tests if you are on osx.
subnet::
	@sh -c "'$(CURDIR)/scripts/setup_test_subnet.sh'"

# test runs the test suite
test:: subnet tools
	@go list ./... | grep -v -E '^github.com/hashicorp/serf/(vendor|cmd/serf/vendor)' | xargs -n1 go test $(TESTARGS)

# testrace runs the race checker
testrace:: subnet
	go test -race `govendor list -no-status +local` $(TESTARGS)

tools::
	@which gox 2>/dev/null ; if [ $$? -eq 1 ]; then \
		$(MAKE) get-tools; \
	fi

# updatedeps installs all the dependencies needed to test, build, and run
updatedeps:: tools
	govendor list -no-status +vendor | xargs -n1 go get -u
	govendor update +vendor

vet:: tools
	@echo "--> Running go tool vet $(VETARGS) ."
	@govendor list -no-status +local \
		| cut -d '/' -f 4- \
		| xargs -n1 \
			go tool vet $(VETARGS) ;\
	if [ $$? -ne 0 ]; then \
		echo ""; \
		echo "Vet found suspicious constructs. Please check the reported constructs"; \
		echo "and fix them if necessary before submitting the code for review."; \
	fi

.PHONY: default bin cov dev dist get-tools subnet test testrace tools updatedeps vet

==> golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/LICENSE <==

Mozilla Public License, version 2.0 1. Definitions 1.1. “Contributor” means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 1.2. “Contributor Version” means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor’s Contribution. 1.3. “Contribution” means Covered Software of a particular Contributor. 1.4. “Covered Software” means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 1.5. “Incompatible With Secondary Licenses” means a. that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or b. that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 1.6. “Executable Form” means any form of the work other than Source Code Form. 1.7. “Larger Work” means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 1.8.
“License” means this document. 1.9. “Licensable” means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 1.10. “Modifications” means any of the following: a. any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or b. any new file in Source Code Form that contains any Covered Software. 1.11. “Patent Claims” of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 1.12. “Secondary License” means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 1.13. “Source Code Form” means the form of the work preferred for making modifications. 1.14. “You” (or “Your”) means an individual or a legal entity exercising rights under this License. For legal entities, “You” includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, “control” means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. 2. License Grants and Conditions 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: a. 
under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and b. under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 2.2. Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: a. for any code that a Contributor has removed from Covered Software; or b. for infringements caused by: (i) Your and any other third party’s modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or c. under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 2.5. 
Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 3. Responsibilities 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients’ rights in the Source Code Form. 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: a. such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and b. You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients’ rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. 
If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). 3.4. Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. 4. Inability to Comply Due to Statute or Regulation If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. 
Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. 5. Termination 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. 6. 
Disclaimer of Warranty Covered Software is provided under this License on an “as is” basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer. 7. Limitation of Liability Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party’s negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. 8. Litigation Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. 
Nothing in this Section shall prevent a party’s ability to bring cross-claims or counter-claims. 9. Miscellaneous This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. 10. Versions of the License 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. Exhibit A - Source Code Form License Notice This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. Exhibit B - “Incompatible With Secondary Licenses” Notice This Source Code Form is “Incompatible With Secondary Licenses”, as defined by the Mozilla Public License, v. 2.0.

==> golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/README.md <==

# Serf

[![Build Status](https://travis-ci.org/hashicorp/serf.png)](https://travis-ci.org/hashicorp/serf)
[![Join the chat at https://gitter.im/hashicorp-serf/Lobby](https://badges.gitter.im/hashicorp-serf/Lobby.svg)](https://gitter.im/hashicorp-serf/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

* Website: https://www.serf.io
* Chat: [Gitter](https://gitter.im/hashicorp-serf/Lobby)
* Mailing list: [Google Groups](https://groups.google.com/group/serfdom/)

Serf is a decentralized solution for service discovery and orchestration
that is lightweight, highly available, and fault tolerant.

Serf runs on Linux, Mac OS X, and Windows. An efficient and lightweight
gossip protocol is used to communicate with other nodes. Serf can detect
node failures and notify the rest of the cluster. An event system is built
on top of Serf, letting you use Serf's gossip protocol to propagate events
such as deploys, configuration changes, etc. Serf is completely masterless
with no single point of failure.

Here are some example use cases of Serf, though there are many others:

* Discovering web servers and automatically adding them to a load balancer
* Organizing many memcached or redis nodes into a cluster, perhaps with
  something like [twemproxy](https://github.com/twitter/twemproxy) or maybe
  just configuring an application with the address of all the nodes
* Triggering web deploys using the event system built on top of Serf
* Propagating changes to configuration to relevant nodes
* Updating DNS records to reflect cluster changes as they occur
* Much, much more

## Quick Start

First, [download a pre-built Serf binary](https://www.serf.io/downloads.html)
for your operating system, [compile Serf yourself](#developing-serf), or
install using `go get -u github.com/hashicorp/serf/cmd/serf`.

Next, let's start a couple Serf agents. Agents run until they're told to
quit and handle the communication of maintenance tasks of Serf. In a real
Serf setup, each node in your system will run one or more Serf agents (it
can run multiple agents if you're running multiple cluster types, e.g. web
servers vs. memcached servers).

Start each Serf agent in a separate terminal session so that we can see the
output of each. Start the first agent:

```
$ serf agent -node=foo -bind=127.0.0.1:5000 -rpc-addr=127.0.0.1:7373
...
```

Start the second agent in another terminal session (while the first is still
running):

```
$ serf agent -node=bar -bind=127.0.0.1:5001 -rpc-addr=127.0.0.1:7374
...
```

At this point two Serf agents are running independently but are still
unaware of each other. Let's now tell the first agent to join an existing
cluster (the second agent). When starting a Serf agent, you must join an
existing cluster by specifying at least one existing member. After this,
Serf gossips and the remainder of the cluster becomes aware of the join.
Run the following commands in a third terminal session:

```
$ serf join 127.0.0.1:5001
...
```

If you're watching your terminals, you should see both Serf agents become
aware of the join. You can prove it by running `serf members` to see the
members of the Serf cluster:

```
$ serf members
foo    127.0.0.1:5000    alive
bar    127.0.0.1:5001    alive
...
```

At this point, you can ctrl-C or force kill either Serf agent, and they'll
update their membership lists appropriately. If you ctrl-C a Serf agent, it
will gracefully leave by notifying the cluster of its intent to leave. If
you force kill an agent, it will eventually (usually within seconds) be
detected by another member of the cluster, which will notify the cluster of
the node failure.

## Documentation

Full, comprehensive documentation is viewable on the Serf website:
https://www.serf.io/docs

## Developing Serf

If you wish to work on Serf itself, you'll first need
[Go](https://golang.org) installed (version 1.8+ is _required_). Make sure
you have Go properly [installed](https://golang.org/doc/install), including
setting up your [GOPATH](https://golang.org/doc/code.html#GOPATH).

Next, clone this repository into `$GOPATH/src/github.com/hashicorp/serf`
and then just type `make`. In a few moments, you'll have a working `serf`
executable:

```
$ make
...
$ bin/serf
...
```

*NOTE: `make` will also place a copy of the executable under
`$GOPATH/bin/`.*

Serf is first and foremost a library with a command-line interface, `serf`.
The Serf library is independent of the command-line agent, `serf`. The
`serf` binary is located under `cmd/serf` and can be installed standalone
by issuing the command `go get -u github.com/hashicorp/serf/cmd/serf`.
Applications using the Serf library should only need to include
`github.com/hashicorp/serf`.

Tests can be run by typing `make test`.

If you make any changes to the code, run `make format` in order to
automatically format the code according to Go
[standards](https://golang.org/doc/effective_go.html#formatting).
golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/client/000077500000000000000000000000001317277572200231015ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/client/README.md000066400000000000000000000005431317277572200243620ustar00rootroot00000000000000# Serf Client This repo provides the `client` package, which is used to interact with a Serf agent using the msgpack RPC system it supports. This is the official reference implementation, and is used inside the Serf CLI to support the various commands. Full documentation can be found on [godoc here](https://godoc.org/github.com/hashicorp/serf/client). golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/client/const.go000066400000000000000000000076501317277572200245660ustar00rootroot00000000000000package client import ( "github.com/hashicorp/serf/coordinate" "github.com/hashicorp/serf/serf" "net" "time" ) const ( maxIPCVersion = 1 ) const ( handshakeCommand = "handshake" eventCommand = "event" forceLeaveCommand = "force-leave" joinCommand = "join" membersCommand = "members" membersFilteredCommand = "members-filtered" streamCommand = "stream" stopCommand = "stop" monitorCommand = "monitor" leaveCommand = "leave" installKeyCommand = "install-key" useKeyCommand = "use-key" removeKeyCommand = "remove-key" listKeysCommand = "list-keys" tagsCommand = "tags" queryCommand = "query" respondCommand = "respond" authCommand = "auth" statsCommand = "stats" getCoordinateCommand = "get-coordinate" ) const ( unsupportedCommand = "Unsupported command" unsupportedIPCVersion = "Unsupported IPC version" duplicateHandshake = "Handshake already performed" handshakeRequired = "Handshake required" monitorExists = "Monitor already exists" invalidFilter = "Invalid event filter" streamExists = "Stream with given sequence exists" invalidQueryID = "No pending queries matching ID" authRequired = "Authentication required" invalidAuthToken = "Invalid authentication token" ) const ( queryRecordAck = "ack" 
queryRecordResponse = "response" queryRecordDone = "done" ) // Request header is sent before each request type requestHeader struct { Command string Seq uint64 } // Response header is sent before each response type responseHeader struct { Seq uint64 Error string } type handshakeRequest struct { Version int32 } type authRequest struct { AuthKey string } type coordinateRequest struct { Node string } type coordinateResponse struct { Coord coordinate.Coordinate Ok bool } type eventRequest struct { Name string Payload []byte Coalesce bool } type forceLeaveRequest struct { Node string } type joinRequest struct { Existing []string Replay bool } type joinResponse struct { Num int32 } type membersFilteredRequest struct { Tags map[string]string Status string Name string } type membersResponse struct { Members []Member } type keyRequest struct { Key string } type keyResponse struct { Messages map[string]string Keys map[string]int NumNodes int NumErr int NumResp int } type monitorRequest struct { LogLevel string } type streamRequest struct { Type string } type stopRequest struct { Stop uint64 } type tagsRequest struct { Tags map[string]string DeleteTags []string } type queryRequest struct { FilterNodes []string FilterTags map[string]string RequestAck bool RelayFactor uint8 Timeout time.Duration Name string Payload []byte } type respondRequest struct { ID uint64 Payload []byte } type queryRecord struct { Type string From string Payload []byte } // NodeResponse is used to return the response of a query type NodeResponse struct { From string Payload []byte } type logRecord struct { Log string } type userEventRecord struct { Event string LTime serf.LamportTime Name string Payload []byte Coalesce bool } // Member is used to represent a single member of the // Serf cluster type Member struct { Name string // Node name Addr net.IP // Address of the Serf node Port uint16 // Gossip port used by Serf Tags map[string]string Status string ProtocolMin uint8 // Minimum supported Memberlist 
protocol ProtocolMax uint8 // Maximum supported Memberlist protocol ProtocolCur uint8 // Currently set Memberlist protocol DelegateMin uint8 // Minimum supported Serf protocol DelegateMax uint8 // Maximum supported Serf protocol DelegateCur uint8 // Currently set Serf protocol } type memberEventRecord struct { Event string Members []Member } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/client/rpc_client.go000066400000000000000000000452741317277572200255660ustar00rootroot00000000000000package client import ( "bufio" "fmt" "github.com/hashicorp/go-msgpack/codec" "github.com/hashicorp/logutils" "github.com/hashicorp/serf/coordinate" "log" "net" "sync" "sync/atomic" "time" ) const ( // This is the default IO timeout for the client DefaultTimeout = 10 * time.Second ) var ( clientClosed = fmt.Errorf("client closed") ) type seqCallback struct { handler func(*responseHeader) } func (sc *seqCallback) Handle(resp *responseHeader) { sc.handler(resp) } func (sc *seqCallback) Cleanup() {} // seqHandler interface is used to handle responses type seqHandler interface { Handle(*responseHeader) Cleanup() } // Config is provided to ClientFromConfig to make // a new RPCClient from the given configuration type Config struct { // Addr must be the RPC address to contact Addr string // If provided, the client will perform key based auth AuthKey string // If provided, overrides the DefaultTimeout used for // IO deadlines Timeout time.Duration } // RPCClient is used to make requests to the Agent using an RPC mechanism. // Additionally, the client manages event streams and monitors, enabling a client // to easily receive event notifications instead of using the fork/exec mechanism. 
type RPCClient struct { seq uint64 timeout time.Duration conn *net.TCPConn reader *bufio.Reader writer *bufio.Writer dec *codec.Decoder enc *codec.Encoder writeLock sync.Mutex dispatch map[uint64]seqHandler dispatchLock sync.Mutex shutdown bool shutdownCh chan struct{} shutdownLock sync.Mutex } // send is used to send an object using the MsgPack encoding. send // is serialized to prevent write overlaps, while properly buffering. func (c *RPCClient) send(header *requestHeader, obj interface{}) error { c.writeLock.Lock() defer c.writeLock.Unlock() if c.shutdown { return clientClosed } // Setup an IO deadline, this way we won't wait indefinitely // if the client has hung. if err := c.conn.SetWriteDeadline(time.Now().Add(c.timeout)); err != nil { return err } if err := c.enc.Encode(header); err != nil { return err } if obj != nil { if err := c.enc.Encode(obj); err != nil { return err } } if err := c.writer.Flush(); err != nil { return err } return nil } // NewRPCClient is used to create a new RPC client given the // RPC address of the Serf agent. This will return a client, // or an error if the connection could not be established. // This will use the DefaultTimeout for the client. func NewRPCClient(addr string) (*RPCClient, error) { conf := Config{Addr: addr} return ClientFromConfig(&conf) } // ClientFromConfig is used to create a new RPC client given the // configuration object. This will return a client, or an error if // the connection could not be established. 
func ClientFromConfig(c *Config) (*RPCClient, error) { // Setup the defaults if c.Timeout == 0 { c.Timeout = DefaultTimeout } // Try to dial to serf conn, err := net.DialTimeout("tcp", c.Addr, c.Timeout) if err != nil { return nil, err } // Create the client client := &RPCClient{ seq: 0, timeout: c.Timeout, conn: conn.(*net.TCPConn), reader: bufio.NewReader(conn), writer: bufio.NewWriter(conn), dispatch: make(map[uint64]seqHandler), shutdownCh: make(chan struct{}), } client.dec = codec.NewDecoder(client.reader, &codec.MsgpackHandle{RawToString: true, WriteExt: true}) client.enc = codec.NewEncoder(client.writer, &codec.MsgpackHandle{RawToString: true, WriteExt: true}) go client.listen() // Do the initial handshake if err := client.handshake(); err != nil { client.Close() return nil, err } // Do the initial authentication if needed if c.AuthKey != "" { if err := client.auth(c.AuthKey); err != nil { client.Close() return nil, err } } return client, err } // StreamHandle is an opaque handle passed to stop to stop streaming type StreamHandle uint64 func (c *RPCClient) IsClosed() bool { return c.shutdown } // Close is used to free any resources associated with the client func (c *RPCClient) Close() error { c.shutdownLock.Lock() defer c.shutdownLock.Unlock() if !c.shutdown { c.shutdown = true close(c.shutdownCh) c.deregisterAll() return c.conn.Close() } return nil } // ForceLeave is used to ask the agent to issue a leave command for // a given node func (c *RPCClient) ForceLeave(node string) error { header := requestHeader{ Command: forceLeaveCommand, Seq: c.getSeq(), } req := forceLeaveRequest{ Node: node, } return c.genericRPC(&header, &req, nil) } // Join is used to instruct the agent to attempt a join func (c *RPCClient) Join(addrs []string, replay bool) (int, error) { header := requestHeader{ Command: joinCommand, Seq: c.getSeq(), } req := joinRequest{ Existing: addrs, Replay: replay, } var resp joinResponse err := c.genericRPC(&header, &req, &resp) return 
int(resp.Num), err } // Members is used to fetch a list of known members func (c *RPCClient) Members() ([]Member, error) { header := requestHeader{ Command: membersCommand, Seq: c.getSeq(), } var resp membersResponse err := c.genericRPC(&header, nil, &resp) return resp.Members, err } // MembersFiltered returns a subset of members func (c *RPCClient) MembersFiltered(tags map[string]string, status string, name string) ([]Member, error) { header := requestHeader{ Command: membersFilteredCommand, Seq: c.getSeq(), } req := membersFilteredRequest{ Tags: tags, Status: status, Name: name, } var resp membersResponse err := c.genericRPC(&header, &req, &resp) return resp.Members, err } // UserEvent is used to trigger sending an event func (c *RPCClient) UserEvent(name string, payload []byte, coalesce bool) error { header := requestHeader{ Command: eventCommand, Seq: c.getSeq(), } req := eventRequest{ Name: name, Payload: payload, Coalesce: coalesce, } return c.genericRPC(&header, &req, nil) } // Leave is used to trigger a graceful leave and shutdown of the agent func (c *RPCClient) Leave() error { header := requestHeader{ Command: leaveCommand, Seq: c.getSeq(), } return c.genericRPC(&header, nil, nil) } // UpdateTags will modify the tags on a running serf agent func (c *RPCClient) UpdateTags(tags map[string]string, delTags []string) error { header := requestHeader{ Command: tagsCommand, Seq: c.getSeq(), } req := tagsRequest{ Tags: tags, DeleteTags: delTags, } return c.genericRPC(&header, &req, nil) } // Respond allows a client to respond to a query event. The ID is the // ID of the Query to respond to, and the given payload is the response. 
func (c *RPCClient) Respond(id uint64, buf []byte) error { header := requestHeader{ Command: respondCommand, Seq: c.getSeq(), } req := respondRequest{ ID: id, Payload: buf, } return c.genericRPC(&header, &req, nil) } // InstallKey installs a new encryption key onto the keyring func (c *RPCClient) InstallKey(key string) (map[string]string, error) { header := requestHeader{ Command: installKeyCommand, Seq: c.getSeq(), } req := keyRequest{ Key: key, } resp := keyResponse{} err := c.genericRPC(&header, &req, &resp) return resp.Messages, err } // UseKey changes the primary encryption key on the keyring func (c *RPCClient) UseKey(key string) (map[string]string, error) { header := requestHeader{ Command: useKeyCommand, Seq: c.getSeq(), } req := keyRequest{ Key: key, } resp := keyResponse{} err := c.genericRPC(&header, &req, &resp) return resp.Messages, err } // RemoveKey removes an encryption key from the keyring func (c *RPCClient) RemoveKey(key string) (map[string]string, error) { header := requestHeader{ Command: removeKeyCommand, Seq: c.getSeq(), } req := keyRequest{ Key: key, } resp := keyResponse{} err := c.genericRPC(&header, &req, &resp) return resp.Messages, err } // ListKeys returns all of the active keys on each member of the cluster func (c *RPCClient) ListKeys() (map[string]int, int, map[string]string, error) { header := requestHeader{ Command: listKeysCommand, Seq: c.getSeq(), } resp := keyResponse{} err := c.genericRPC(&header, nil, &resp) return resp.Keys, resp.NumNodes, resp.Messages, err } // Stats is used to get debugging state information func (c *RPCClient) Stats() (map[string]map[string]string, error) { header := requestHeader{ Command: statsCommand, Seq: c.getSeq(), } var resp map[string]map[string]string err := c.genericRPC(&header, nil, &resp) return resp, err } // GetCoordinate is used to retrieve the cached coordinate of a node. 
func (c *RPCClient) GetCoordinate(node string) (*coordinate.Coordinate, error) { header := requestHeader{ Command: getCoordinateCommand, Seq: c.getSeq(), } req := coordinateRequest{ Node: node, } var resp coordinateResponse if err := c.genericRPC(&header, &req, &resp); err != nil { return nil, err } if resp.Ok { return &resp.Coord, nil } return nil, nil } type monitorHandler struct { client *RPCClient closed bool init bool initCh chan<- error logCh chan<- string seq uint64 } func (mh *monitorHandler) Handle(resp *responseHeader) { // Initialize on the first response if !mh.init { mh.init = true mh.initCh <- strToError(resp.Error) return } // Decode logs for all other responses var rec logRecord if err := mh.client.dec.Decode(&rec); err != nil { log.Printf("[ERR] Failed to decode log: %v", err) mh.client.deregisterHandler(mh.seq) return } select { case mh.logCh <- rec.Log: default: log.Printf("[ERR] Dropping log! Monitor channel full") } } func (mh *monitorHandler) Cleanup() { if !mh.closed { if !mh.init { mh.init = true mh.initCh <- fmt.Errorf("Stream closed") } if mh.logCh != nil { close(mh.logCh) } mh.closed = true } } // Monitor is used to subscribe to the logs of the agent func (c *RPCClient) Monitor(level logutils.LogLevel, ch chan<- string) (StreamHandle, error) { // Setup the request seq := c.getSeq() header := requestHeader{ Command: monitorCommand, Seq: seq, } req := monitorRequest{ LogLevel: string(level), } // Create a monitor handler initCh := make(chan error, 1) handler := &monitorHandler{ client: c, initCh: initCh, logCh: ch, seq: seq, } c.handleSeq(seq, handler) // Send the request if err := c.send(&header, &req); err != nil { c.deregisterHandler(seq) return 0, err } // Wait for a response select { case err := <-initCh: return StreamHandle(seq), err case <-c.shutdownCh: c.deregisterHandler(seq) return 0, clientClosed } } type streamHandler struct { client *RPCClient closed bool init bool initCh chan<- error eventCh chan<- map[string]interface{} seq 
uint64 } func (sh *streamHandler) Handle(resp *responseHeader) { // Initialize on the first response if !sh.init { sh.init = true sh.initCh <- strToError(resp.Error) return } // Decode logs for all other responses var rec map[string]interface{} if err := sh.client.dec.Decode(&rec); err != nil { log.Printf("[ERR] Failed to decode stream record: %v", err) sh.client.deregisterHandler(sh.seq) return } select { case sh.eventCh <- rec: default: log.Printf("[ERR] Dropping event! Stream channel full") } } func (sh *streamHandler) Cleanup() { if !sh.closed { if !sh.init { sh.init = true sh.initCh <- fmt.Errorf("Stream closed") } if sh.eventCh != nil { close(sh.eventCh) } sh.closed = true } } // Stream is used to subscribe to events func (c *RPCClient) Stream(filter string, ch chan<- map[string]interface{}) (StreamHandle, error) { // Setup the request seq := c.getSeq() header := requestHeader{ Command: streamCommand, Seq: seq, } req := streamRequest{ Type: filter, } // Create a monitor handler initCh := make(chan error, 1) handler := &streamHandler{ client: c, initCh: initCh, eventCh: ch, seq: seq, } c.handleSeq(seq, handler) // Send the request if err := c.send(&header, &req); err != nil { c.deregisterHandler(seq) return 0, err } // Wait for a response select { case err := <-initCh: return StreamHandle(seq), err case <-c.shutdownCh: c.deregisterHandler(seq) return 0, clientClosed } } type queryHandler struct { client *RPCClient closed bool init bool initCh chan<- error ackCh chan<- string respCh chan<- NodeResponse seq uint64 } func (qh *queryHandler) Handle(resp *responseHeader) { // Initialize on the first response if !qh.init { qh.init = true qh.initCh <- strToError(resp.Error) return } // Decode the query response var rec queryRecord if err := qh.client.dec.Decode(&rec); err != nil { log.Printf("[ERR] Failed to decode query response: %v", err) qh.client.deregisterHandler(qh.seq) return } switch rec.Type { case queryRecordAck: select { case qh.ackCh <- rec.From: default: 
log.Printf("[ERR] Dropping query ack, channel full") } case queryRecordResponse: select { case qh.respCh <- NodeResponse{rec.From, rec.Payload}: default: log.Printf("[ERR] Dropping query response, channel full") } case queryRecordDone: // No further records coming qh.client.deregisterHandler(qh.seq) default: log.Printf("[ERR] Unrecognized query record type: %s", rec.Type) } } func (qh *queryHandler) Cleanup() { if !qh.closed { if !qh.init { qh.init = true qh.initCh <- fmt.Errorf("Stream closed") } if qh.ackCh != nil { close(qh.ackCh) } if qh.respCh != nil { close(qh.respCh) } qh.closed = true } } // QueryParam is provided to query set various settings. type QueryParam struct { FilterNodes []string // A list of node names to restrict query to FilterTags map[string]string // A map of tag name to regex to filter on RequestAck bool // Should nodes ack the query receipt RelayFactor uint8 // Duplicate response count to be relayed back to sender for redundancy. Timeout time.Duration // Maximum query duration. Optional, will be set automatically. Name string // Opaque query name Payload []byte // Opaque query payload AckCh chan<- string // Channel to send Ack replies on RespCh chan<- NodeResponse // Channel to send responses on } // Query initiates a new query message using the given parameters, and streams // acks and responses over the given channels. The channels will not block on // sends and should be buffered. At the end of the query, the channels will be // closed. 
func (c *RPCClient) Query(params *QueryParam) error { // Setup the request seq := c.getSeq() header := requestHeader{ Command: queryCommand, Seq: seq, } req := queryRequest{ FilterNodes: params.FilterNodes, FilterTags: params.FilterTags, RequestAck: params.RequestAck, RelayFactor: params.RelayFactor, Timeout: params.Timeout, Name: params.Name, Payload: params.Payload, } // Create a query handler initCh := make(chan error, 1) handler := &queryHandler{ client: c, initCh: initCh, ackCh: params.AckCh, respCh: params.RespCh, seq: seq, } c.handleSeq(seq, handler) // Send the request if err := c.send(&header, &req); err != nil { c.deregisterHandler(seq) return err } // Wait for a response select { case err := <-initCh: return err case <-c.shutdownCh: c.deregisterHandler(seq) return clientClosed } } // Stop is used to unsubscribe from logs or event streams func (c *RPCClient) Stop(handle StreamHandle) error { // Deregister locally first to stop delivery c.deregisterHandler(uint64(handle)) header := requestHeader{ Command: stopCommand, Seq: c.getSeq(), } req := stopRequest{ Stop: uint64(handle), } return c.genericRPC(&header, &req, nil) } // handshake is used to perform the initial handshake on connect func (c *RPCClient) handshake() error { header := requestHeader{ Command: handshakeCommand, Seq: c.getSeq(), } req := handshakeRequest{ Version: maxIPCVersion, } return c.genericRPC(&header, &req, nil) } // auth is used to perform the initial authentication on connect func (c *RPCClient) auth(authKey string) error { header := requestHeader{ Command: authCommand, Seq: c.getSeq(), } req := authRequest{ AuthKey: authKey, } return c.genericRPC(&header, &req, nil) } // genericRPC is used to send a request and wait for an // errorSequenceResponse, potentially returning an error func (c *RPCClient) genericRPC(header *requestHeader, req interface{}, resp interface{}) error { // Setup a response handler errCh := make(chan error, 1) handler := func(respHeader *responseHeader) { // If 
we get an auth error, we should not wait for a request body if respHeader.Error == authRequired { goto SEND_ERR } if resp != nil { err := c.dec.Decode(resp) if err != nil { errCh <- err return } } SEND_ERR: errCh <- strToError(respHeader.Error) } c.handleSeq(header.Seq, &seqCallback{handler: handler}) defer c.deregisterHandler(header.Seq) // Send the request if err := c.send(header, req); err != nil { return err } // Wait for a response select { case err := <-errCh: return err case <-c.shutdownCh: return clientClosed } } // strToError converts a string to an error if not blank func strToError(s string) error { if s != "" { return fmt.Errorf(s) } return nil } // getSeq returns the next sequence number in a safe manner func (c *RPCClient) getSeq() uint64 { return atomic.AddUint64(&c.seq, 1) } // deregisterAll is used to deregister all handlers func (c *RPCClient) deregisterAll() { c.dispatchLock.Lock() defer c.dispatchLock.Unlock() for _, seqH := range c.dispatch { seqH.Cleanup() } c.dispatch = make(map[uint64]seqHandler) } // deregisterHandler is used to deregister a handler func (c *RPCClient) deregisterHandler(seq uint64) { c.dispatchLock.Lock() seqH, ok := c.dispatch[seq] delete(c.dispatch, seq) c.dispatchLock.Unlock() if ok { seqH.Cleanup() } } // handleSeq is used to set up a handler to wait on a response for // a given sequence number. 
func (c *RPCClient) handleSeq(seq uint64, handler seqHandler) { c.dispatchLock.Lock() defer c.dispatchLock.Unlock() c.dispatch[seq] = handler } // respondSeq is used to respond to a given sequence number func (c *RPCClient) respondSeq(seq uint64, respHeader *responseHeader) { c.dispatchLock.Lock() seqL, ok := c.dispatch[seq] c.dispatchLock.Unlock() // Get a registered listener, ignore if none if ok { seqL.Handle(respHeader) } } // listen is used to process data coming over the IPC channel, // and route it to the correct destination based on seq no func (c *RPCClient) listen() { defer c.Close() var respHeader responseHeader for { if err := c.dec.Decode(&respHeader); err != nil { if !c.shutdown { log.Printf("[ERR] agent.client: Failed to decode response header: %v", err) } break } c.respondSeq(respHeader.Seq, &respHeader) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/000077500000000000000000000000001317277572200223665ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/000077500000000000000000000000001317277572200233255ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/.gitignore000066400000000000000000000000061317277572200253110ustar00rootroot00000000000000/serf golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/000077500000000000000000000000001317277572200247435ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/000077500000000000000000000000001317277572200260415ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/agent.go000066400000000000000000000302531317277572200274710ustar00rootroot00000000000000package agent import ( "encoding/base64" "encoding/json" "fmt" "io" "io/ioutil" "log" "os" "strings" "sync" "github.com/hashicorp/memberlist" "github.com/hashicorp/serf/serf" ) // Agent starts and manages a Serf instance, adding some niceties 
// on top of Serf such as storing logs that you can later retrieve, // and invoking EventHandlers when events occur. type Agent struct { // Stores the serf configuration conf *serf.Config // Stores the agent configuration agentConf *Config // eventCh is used for Serf to deliver events on eventCh chan serf.Event // eventHandlers is the registered handlers for events eventHandlers map[EventHandler]struct{} eventHandlerList []EventHandler eventHandlersLock sync.Mutex // logger instance wraps the logOutput logger *log.Logger // This is the underlying Serf we are wrapping serf *serf.Serf // shutdownCh is used for shutdowns shutdown bool shutdownCh chan struct{} shutdownLock sync.Mutex } // Create creates a new agent, potentially returning an error func Create(agentConf *Config, conf *serf.Config, logOutput io.Writer) (*Agent, error) { // Ensure we have a log sink if logOutput == nil { logOutput = os.Stderr } // Setup the underlying loggers conf.MemberlistConfig.LogOutput = logOutput conf.LogOutput = logOutput // Create a channel to listen for events from Serf eventCh := make(chan serf.Event, 64) conf.EventCh = eventCh // Setup the agent agent := &Agent{ conf: conf, agentConf: agentConf, eventCh: eventCh, eventHandlers: make(map[EventHandler]struct{}), logger: log.New(logOutput, "", log.LstdFlags), shutdownCh: make(chan struct{}), } // Restore agent tags from a tags file if agentConf.TagsFile != "" { if err := agent.loadTagsFile(agentConf.TagsFile); err != nil { return nil, err } } // Load in a keyring file if provided if agentConf.KeyringFile != "" { if err := agent.loadKeyringFile(agentConf.KeyringFile); err != nil { return nil, err } } return agent, nil } // Start is used to initiate the event listeners. 
It is separate from // create so that there isn't a race condition between creating the // agent and registering handlers func (a *Agent) Start() error { a.logger.Printf("[INFO] agent: Serf agent starting") // Create serf first serf, err := serf.Create(a.conf) if err != nil { return fmt.Errorf("Error creating Serf: %s", err) } a.serf = serf // Start event loop go a.eventLoop() return nil } // Leave prepares for a graceful shutdown of the agent and its processes func (a *Agent) Leave() error { if a.serf == nil { return nil } a.logger.Println("[INFO] agent: requesting graceful leave from Serf") return a.serf.Leave() } // Shutdown closes this agent and all of its processes. Should be preceded // by a Leave for a graceful shutdown. func (a *Agent) Shutdown() error { a.shutdownLock.Lock() defer a.shutdownLock.Unlock() if a.shutdown { return nil } if a.serf == nil { goto EXIT } a.logger.Println("[INFO] agent: requesting serf shutdown") if err := a.serf.Shutdown(); err != nil { return err } EXIT: a.logger.Println("[INFO] agent: shutdown complete") a.shutdown = true close(a.shutdownCh) return nil } // ShutdownCh returns a channel that can be selected to wait // for the agent to perform a shutdown. func (a *Agent) ShutdownCh() <-chan struct{} { return a.shutdownCh } // Returns the Serf agent of the running Agent. func (a *Agent) Serf() *serf.Serf { return a.serf } // Returns the Serf config of the running Agent. func (a *Agent) SerfConfig() *serf.Config { return a.conf } // Join asks the Serf instance to join. See the Serf.Join function. 
func (a *Agent) Join(addrs []string, replay bool) (n int, err error) {
	a.logger.Printf("[INFO] agent: joining: %v replay: %v", addrs, replay)
	ignoreOld := !replay
	n, err = a.serf.Join(addrs, ignoreOld)
	if n > 0 {
		a.logger.Printf("[INFO] agent: joined: %d nodes", n)
	}
	if err != nil {
		a.logger.Printf("[WARN] agent: error joining: %v", err)
	}
	return
}

// ForceLeave is used to eject a failed node from the cluster
func (a *Agent) ForceLeave(node string) error {
	a.logger.Printf("[INFO] agent: Force leaving node: %s", node)
	err := a.serf.RemoveFailedNode(node)
	if err != nil {
		a.logger.Printf("[WARN] agent: failed to remove node: %v", err)
	}
	return err
}

// UserEvent sends a UserEvent on Serf, see Serf.UserEvent.
func (a *Agent) UserEvent(name string, payload []byte, coalesce bool) error {
	a.logger.Printf("[DEBUG] agent: Requesting user event send: %s. Coalesced: %#v. Payload: %#v",
		name, coalesce, string(payload))
	err := a.serf.UserEvent(name, payload, coalesce)
	if err != nil {
		a.logger.Printf("[WARN] agent: failed to send user event: %v", err)
	}
	return err
}

// Query sends a Query on Serf, see Serf.Query.
func (a *Agent) Query(name string, payload []byte, params *serf.QueryParam) (*serf.QueryResponse, error) {
	// Prevent the use of the internal prefix
	if strings.HasPrefix(name, serf.InternalQueryPrefix) {
		// Allow the special "ping" query
		if name != serf.InternalQueryPrefix+"ping" || payload != nil {
			return nil, fmt.Errorf("Queries cannot contain the '%s' prefix", serf.InternalQueryPrefix)
		}
	}
	a.logger.Printf("[DEBUG] agent: Requesting query send: %s. Payload: %#v",
		name, string(payload))
	resp, err := a.serf.Query(name, payload, params)
	if err != nil {
		a.logger.Printf("[WARN] agent: failed to start user query: %v", err)
	}
	return resp, err
}

// RegisterEventHandler adds an event handler to receive event notifications
func (a *Agent) RegisterEventHandler(eh EventHandler) {
	a.eventHandlersLock.Lock()
	defer a.eventHandlersLock.Unlock()

	a.eventHandlers[eh] = struct{}{}
	a.eventHandlerList = nil
	for eh := range a.eventHandlers {
		a.eventHandlerList = append(a.eventHandlerList, eh)
	}
}

// DeregisterEventHandler removes an EventHandler and prevents more invocations
func (a *Agent) DeregisterEventHandler(eh EventHandler) {
	a.eventHandlersLock.Lock()
	defer a.eventHandlersLock.Unlock()

	delete(a.eventHandlers, eh)
	a.eventHandlerList = nil
	for eh := range a.eventHandlers {
		a.eventHandlerList = append(a.eventHandlerList, eh)
	}
}

// eventLoop listens to events from Serf and fans out to event handlers
func (a *Agent) eventLoop() {
	serfShutdownCh := a.serf.ShutdownCh()
	for {
		select {
		case e := <-a.eventCh:
			a.logger.Printf("[INFO] agent: Received event: %s", e.String())
			a.eventHandlersLock.Lock()
			handlers := a.eventHandlerList
			a.eventHandlersLock.Unlock()
			for _, eh := range handlers {
				eh.HandleEvent(e)
			}

		case <-serfShutdownCh:
			a.logger.Printf("[WARN] agent: Serf shutdown detected, quitting")
			a.Shutdown()
			return

		case <-a.shutdownCh:
			return
		}
	}
}

// InstallKey initiates a query to install a new key on all members
func (a *Agent) InstallKey(key string) (*serf.KeyResponse, error) {
	a.logger.Print("[INFO] agent: Initiating key installation")
	manager := a.serf.KeyManager()
	return manager.InstallKey(key)
}

// UseKey sends a query instructing all members to switch primary keys
func (a *Agent) UseKey(key string) (*serf.KeyResponse, error) {
	a.logger.Print("[INFO] agent: Initiating primary key change")
	manager := a.serf.KeyManager()
	return manager.UseKey(key)
}

// RemoveKey sends a query to all members to remove a key from the keyring
func (a *Agent) RemoveKey(key string) (*serf.KeyResponse, error) {
	a.logger.Print("[INFO] agent: Initiating key removal")
	manager := a.serf.KeyManager()
	return manager.RemoveKey(key)
}

// ListKeys sends a query to all members to return a list of their keys
func (a *Agent) ListKeys() (*serf.KeyResponse, error) {
	a.logger.Print("[INFO] agent: Initiating key listing")
	manager := a.serf.KeyManager()
	return manager.ListKeys()
}

// SetTags is used to update the tags. The agent will make sure to
// persist tags if necessary before gossiping to the cluster.
func (a *Agent) SetTags(tags map[string]string) error {
	// Update the tags file if we have one
	if a.agentConf.TagsFile != "" {
		if err := a.writeTagsFile(tags); err != nil {
			a.logger.Printf("[ERR] agent: %s", err)
			return err
		}
	}

	// Set the tags in Serf, start gossiping out
	return a.serf.SetTags(tags)
}

// loadTagsFile will load agent tags out of a file and set them in the
// current serf configuration.
func (a *Agent) loadTagsFile(tagsFile string) error {
	// Avoid passing tags and using a tags file at the same time
	if len(a.agentConf.Tags) > 0 {
		return fmt.Errorf("Tags config not allowed while using tag files")
	}

	if _, err := os.Stat(tagsFile); err == nil {
		tagData, err := ioutil.ReadFile(tagsFile)
		if err != nil {
			return fmt.Errorf("Failed to read tags file: %s", err)
		}
		if err := json.Unmarshal(tagData, &a.conf.Tags); err != nil {
			return fmt.Errorf("Failed to decode tags file: %s", err)
		}
		a.logger.Printf("[INFO] agent: Restored %d tag(s) from %s",
			len(a.conf.Tags), tagsFile)
	}

	// Success!
	return nil
}

// writeTagsFile will write the current tags to the configured tags file.
func (a *Agent) writeTagsFile(tags map[string]string) error {
	encoded, err := json.MarshalIndent(tags, "", "  ")
	if err != nil {
		return fmt.Errorf("Failed to encode tags: %s", err)
	}

	// Use 0600 for permissions, in case tag data is sensitive
	if err = ioutil.WriteFile(a.agentConf.TagsFile, encoded, 0600); err != nil {
		return fmt.Errorf("Failed to write tags file: %s", err)
	}

	// Success!
	return nil
}

// MarshalTags is a utility function which takes a map of tag key/value pairs
// and returns the same tags as strings in 'key=value' format.
func MarshalTags(tags map[string]string) []string {
	var result []string
	for name, value := range tags {
		result = append(result, fmt.Sprintf("%s=%s", name, value))
	}
	return result
}

// UnmarshalTags is a utility function which takes a slice of strings in
// key=value format and returns them as a tag mapping.
func UnmarshalTags(tags []string) (map[string]string, error) {
	result := make(map[string]string)
	for _, tag := range tags {
		parts := strings.SplitN(tag, "=", 2)
		if len(parts) != 2 || len(parts[0]) == 0 {
			return nil, fmt.Errorf("Invalid tag: '%s'", tag)
		}
		result[parts[0]] = parts[1]
	}
	return result, nil
}

// loadKeyringFile will load a keyring out of a file
func (a *Agent) loadKeyringFile(keyringFile string) error {
	// Avoid passing an encryption key and a keyring file at the same time
	if len(a.agentConf.EncryptKey) > 0 {
		return fmt.Errorf("Encryption key not allowed while using a keyring")
	}

	if _, err := os.Stat(keyringFile); err != nil {
		return err
	}

	// Read in the keyring file data
	keyringData, err := ioutil.ReadFile(keyringFile)
	if err != nil {
		return fmt.Errorf("Failed to read keyring file: %s", err)
	}

	// Decode keyring JSON
	keys := make([]string, 0)
	if err := json.Unmarshal(keyringData, &keys); err != nil {
		return fmt.Errorf("Failed to decode keyring file: %s", err)
	}

	// Decode base64 values
	keysDecoded := make([][]byte, len(keys))
	for i, key := range keys {
		keyBytes, err := base64.StdEncoding.DecodeString(key)
		if err != nil {
			return fmt.Errorf("Failed to decode key from keyring: %s", err)
		}
		keysDecoded[i] = keyBytes
	}

	// Guard against empty keyring file
	if len(keysDecoded) == 0 {
		return fmt.Errorf("Keyring file contains no keys")
	}

	// Create the keyring
	keyring, err := memberlist.NewKeyring(keysDecoded, keysDecoded[0])
	if err != nil {
		return fmt.Errorf("Failed to restore keyring: %s", err)
	}
	a.conf.MemberlistConfig.Keyring = keyring
	a.logger.Printf("[INFO] agent: Restored keyring with %d keys from %s",
		len(keys), keyringFile)

	// Success!
	return nil
}

// Stats is used to get various runtime information and stats
func (a *Agent) Stats() map[string]map[string]string {
	local := a.serf.LocalMember()
	event_handlers := make(map[string]string)

	// Convert event handlers from a string slice to a string map
	for _, script := range a.agentConf.EventScripts() {
		script_filter := fmt.Sprintf("%s:%s", script.EventFilter.Event, script.EventFilter.Name)
		event_handlers[script_filter] = script.Script
	}

	output := map[string]map[string]string{
		"agent": map[string]string{
			"name": local.Name,
		},
		"runtime":        runtimeStats(),
		"serf":           a.serf.Stats(),
		"tags":           local.Tags,
		"event_handlers": event_handlers,
	}
	return output
}

golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/agent_test.go

package agent

import (
	"encoding/json"
	"github.com/hashicorp/serf/serf"
	"github.com/hashicorp/serf/testutil"
	"io/ioutil"
	"os"
	"path/filepath"
	"reflect"
	"strings"
	"testing"
)

func TestAgent_eventHandler(t *testing.T) {
	a1 := testAgent(nil)
	defer a1.Shutdown()
	defer a1.Leave()

	handler := new(MockEventHandler)
	a1.RegisterEventHandler(handler)

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	if len(handler.Events) != 1 {
		t.Fatalf("bad: %#v", handler.Events)
	}
	if handler.Events[0].EventType() != serf.EventMemberJoin {
		t.Fatalf("bad: %#v", handler.Events[0])
	}
}

func TestAgentShutdown_multiple(t *testing.T) {
	a := testAgent(nil)
	if err := a.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	for i := 0; i < 5; i++ {
		if err := a.Shutdown(); err != nil {
			t.Fatalf("err: %s", err)
		}
	}
}

func TestAgentUserEvent(t *testing.T) {
	a1 := testAgent(nil)
	defer a1.Shutdown()
	defer a1.Leave()

	handler := new(MockEventHandler)
	a1.RegisterEventHandler(handler)

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	if err := a1.UserEvent("deploy", []byte("foo"), false); err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	handler.Lock()
	defer handler.Unlock()

	if len(handler.Events) == 0 {
		t.Fatal("no events")
	}
	e, ok := handler.Events[len(handler.Events)-1].(serf.UserEvent)
	if !ok {
		t.Fatalf("bad: %#v", e)
	}
	if e.Name != "deploy" {
		t.Fatalf("bad: %#v", e)
	}
	if string(e.Payload) != "foo" {
		t.Fatalf("bad: %#v", e)
	}
}

func TestAgentQuery_BadPrefix(t *testing.T) {
	a1 := testAgent(nil)
	defer a1.Shutdown()
	defer a1.Leave()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	_, err := a1.Query("_serf_test", nil, nil)
	if err == nil || !strings.Contains(err.Error(), "cannot contain") {
		t.Fatalf("err: %s", err)
	}
}

func TestAgentTagsFile(t *testing.T) {
	tags := map[string]string{
		"role":       "webserver",
		"datacenter": "us-east",
	}

	td, err := ioutil.TempDir("", "serf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	defer os.RemoveAll(td)

	agentConfig := DefaultConfig()
	agentConfig.TagsFile = filepath.Join(td, "tags.json")

	a1 := testAgentWithConfig(agentConfig, serf.DefaultConfig(), nil)
	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	defer a1.Shutdown()
	defer a1.Leave()
	testutil.Yield()

	err = a1.SetTags(tags)
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	a2 := testAgentWithConfig(agentConfig, serf.DefaultConfig(), nil)
	if err := a2.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	defer a2.Shutdown()
	defer a2.Leave()
	testutil.Yield()

	m := a2.Serf().LocalMember()
	if !reflect.DeepEqual(m.Tags, tags) {
		t.Fatalf("tags not restored: %#v", m.Tags)
	}
}

func TestAgentTagsFile_BadOptions(t *testing.T) {
	agentConfig := DefaultConfig()
	agentConfig.TagsFile = "/some/path"
	agentConfig.Tags = map[string]string{
		"tag1": "val1",
	}

	_, err := Create(agentConfig, serf.DefaultConfig(), nil)
	if err == nil || !strings.Contains(err.Error(), "not allowed") {
		t.Fatalf("err: %s", err)
	}
}

func TestAgent_MarshalTags(t *testing.T) {
	tags := map[string]string{
		"tag1": "val1",
		"tag2": "val2",
	}

	tagPairs := MarshalTags(tags)

	if !containsKey(tagPairs, "tag1=val1") {
		t.Fatalf("bad: %v", tagPairs)
	}
	if !containsKey(tagPairs, "tag2=val2") {
		t.Fatalf("bad: %v", tagPairs)
	}
}

func TestAgent_UnmarshalTags(t *testing.T) {
	tagPairs := []string{
		"tag1=val1",
		"tag2=val2",
	}

	tags, err := UnmarshalTags(tagPairs)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if v, ok := tags["tag1"]; !ok || v != "val1" {
		t.Fatalf("bad: %v", tags)
	}
	if v, ok := tags["tag2"]; !ok || v != "val2" {
		t.Fatalf("bad: %v", tags)
	}
}

func TestAgent_UnmarshalTagsError(t *testing.T) {
	tagSets := [][]string{
		[]string{"="},
		[]string{"=x"},
		[]string{""},
		[]string{"x"},
	}
	for _, tagPairs := range tagSets {
		if _, err := UnmarshalTags(tagPairs); err == nil {
			t.Fatalf("Expected tag error: %s", tagPairs[0])
		}
	}
}

func TestAgentKeyringFile(t *testing.T) {
	keys := []string{
		"enjTwAFRe4IE71bOFhirzQ==",
		"csT9mxI7aTf9ap3HLBbdmA==",
		"noha2tVc0OyD/2LtCBoAOQ==",
	}

	td, err := ioutil.TempDir("", "serf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	defer os.RemoveAll(td)

	keyringFile := filepath.Join(td, "keyring.json")

	serfConfig := serf.DefaultConfig()
	agentConfig := DefaultConfig()
	agentConfig.KeyringFile = keyringFile

	encodedKeys, err := json.Marshal(keys)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if err := ioutil.WriteFile(keyringFile, encodedKeys, 0600); err != nil {
		t.Fatalf("err: %s", err)
	}

	a1 := testAgentWithConfig(agentConfig, serfConfig, nil)
	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	defer a1.Shutdown()
	testutil.Yield()

	totalLoadedKeys := len(serfConfig.MemberlistConfig.Keyring.GetKeys())
	if totalLoadedKeys != 3 {
		t.Fatalf("Expected to load 3 keys but got %d", totalLoadedKeys)
	}
}

func TestAgentKeyringFile_BadOptions(t *testing.T) {
	agentConfig := DefaultConfig()
	agentConfig.KeyringFile = "/some/path"
	agentConfig.EncryptKey = "pL4owv4IE1x+ZXCyd5vLLg=="

	_, err := Create(agentConfig, serf.DefaultConfig(), nil)
	if err == nil || !strings.Contains(err.Error(), "not allowed") {
		t.Fatalf("err: %s", err)
	}
}

func TestAgentKeyringFile_NoKeys(t *testing.T) {
	dir, err := ioutil.TempDir("", "serf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	defer os.RemoveAll(dir)

	keysFile := filepath.Join(dir, "keyring")
	if err := ioutil.WriteFile(keysFile, []byte("[]"), 0600); err != nil {
		t.Fatalf("err: %s", err)
	}

	agentConfig := DefaultConfig()
	agentConfig.KeyringFile = keysFile

	_, err = Create(agentConfig, serf.DefaultConfig(), nil)
	if err == nil {
		t.Fatalf("should have errored")
	}
	if !strings.Contains(err.Error(), "contains no keys") {
		t.Fatalf("bad: %s", err)
	}
}

golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/command.go

package agent

import (
	"flag"
	"fmt"
	"io"
	"log"
	"net"
	"os"
	"os/signal"
	"runtime"
	"strings"
	"syscall"
	"time"

	"github.com/armon/go-metrics"
	"github.com/hashicorp/go-syslog"
	"github.com/hashicorp/logutils"
	"github.com/hashicorp/memberlist"
	"github.com/hashicorp/serf/serf"
	"github.com/mitchellh/cli"
)

const (
	// gracefulTimeout controls how long we wait before forcefully terminating
	gracefulTimeout = 3 * time.Second

	// minRetryInterval applies a lower bound to the join retry interval
	minRetryInterval = time.Second

	// minBroadcastTimeout applies a lower bound to the broadcast timeout interval
	minBroadcastTimeout = time.Second
)

// Command is a Command implementation that runs a Serf agent.
// The command will not end unless a shutdown message is sent on the
// ShutdownCh. If two messages are sent on the ShutdownCh it will forcibly
// exit.
type Command struct {
	Ui            cli.Ui
	ShutdownCh    <-chan struct{}
	args          []string
	scriptHandler *ScriptEventHandler
	logFilter     *logutils.LevelFilter
	logger        *log.Logger
}

// readConfig is responsible for setup of our configuration using
// the command line and any file configs
func (c *Command) readConfig() *Config {
	var cmdConfig Config
	var configFiles []string
	var tags []string
	var retryInterval string
	var broadcastTimeout string
	cmdFlags := flag.NewFlagSet("agent", flag.ContinueOnError)
	cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
	cmdFlags.StringVar(&cmdConfig.BindAddr, "bind", "", "address to bind listeners to")
	cmdFlags.StringVar(&cmdConfig.AdvertiseAddr, "advertise", "", "address to advertise to cluster")
	cmdFlags.Var((*AppendSliceValue)(&configFiles), "config-file", "json file to read config from")
	cmdFlags.Var((*AppendSliceValue)(&configFiles), "config-dir", "directory of json files to read")
	cmdFlags.StringVar(&cmdConfig.EncryptKey, "encrypt", "", "encryption key")
	cmdFlags.StringVar(&cmdConfig.KeyringFile, "keyring-file", "", "path to the keyring file")
	cmdFlags.Var((*AppendSliceValue)(&cmdConfig.EventHandlers), "event-handler", "command to execute when events occur")
	cmdFlags.Var((*AppendSliceValue)(&cmdConfig.StartJoin), "join", "address of agent to join on startup")
	cmdFlags.BoolVar(&cmdConfig.ReplayOnJoin, "replay", false, "replay events for startup join")
	cmdFlags.StringVar(&cmdConfig.LogLevel, "log-level", "", "log level")
	cmdFlags.StringVar(&cmdConfig.NodeName, "node", "", "node name")
	cmdFlags.IntVar(&cmdConfig.Protocol, "protocol", -1, "protocol version")
	cmdFlags.StringVar(&cmdConfig.Role, "role", "", "role name")
	cmdFlags.StringVar(&cmdConfig.RPCAddr, "rpc-addr", "", "address to bind RPC listener to")
	cmdFlags.StringVar(&cmdConfig.Profile, "profile", "", "timing profile to use (lan, wan, local)")
	cmdFlags.StringVar(&cmdConfig.SnapshotPath, "snapshot", "", "path to the snapshot file")
	cmdFlags.Var((*AppendSliceValue)(&tags), "tag", "tag pair, specified as key=value")
	cmdFlags.StringVar(&cmdConfig.Discover, "discover", "", "mDNS discovery name")
	cmdFlags.StringVar(&cmdConfig.Interface, "iface", "", "interface to bind to")
	cmdFlags.StringVar(&cmdConfig.TagsFile, "tags-file", "", "tag persistence file")
	cmdFlags.BoolVar(&cmdConfig.EnableSyslog, "syslog", false, "enable logging to syslog facility")
	cmdFlags.Var((*AppendSliceValue)(&cmdConfig.RetryJoin), "retry-join", "address of agent to join on startup with retry")
	cmdFlags.IntVar(&cmdConfig.RetryMaxAttempts, "retry-max", 0, "maximum retry join attempts")
	cmdFlags.StringVar(&retryInterval, "retry-interval", "", "retry join interval")
	cmdFlags.BoolVar(&cmdConfig.RejoinAfterLeave, "rejoin", false, "enable re-joining after a previous leave")
	cmdFlags.StringVar(&broadcastTimeout, "broadcast-timeout", "", "timeout for broadcast messages")
	if err := cmdFlags.Parse(c.args); err != nil {
		return nil
	}

	// Parse any command line tag values
	tagValues, err := UnmarshalTags(tags)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error: %s", err))
		return nil
	}
	cmdConfig.Tags = tagValues

	// Decode the retry interval if given
	if retryInterval != "" {
		dur, err := time.ParseDuration(retryInterval)
		if err != nil {
			c.Ui.Error(fmt.Sprintf("Error: %s", err))
			return nil
		}
		cmdConfig.RetryInterval = dur
	}

	// Decode the broadcast timeout if given
	if broadcastTimeout != "" {
		dur, err := time.ParseDuration(broadcastTimeout)
		if err != nil {
			c.Ui.Error(fmt.Sprintf("Error: %s", err))
			return nil
		}
		cmdConfig.BroadcastTimeout = dur
	}

	config := DefaultConfig()
	if len(configFiles) > 0 {
		fileConfig, err := ReadConfigPaths(configFiles)
		if err != nil {
			c.Ui.Error(err.Error())
			return nil
		}
		config = MergeConfig(config, fileConfig)
	}

	config = MergeConfig(config, &cmdConfig)

	if config.NodeName == "" {
		hostname, err := os.Hostname()
		if err != nil {
			c.Ui.Error(fmt.Sprintf("Error determining hostname: %s", err))
			return nil
		}
		config.NodeName = hostname
	}

	eventScripts := config.EventScripts()
	for _, script := range eventScripts {
		if !script.Valid() {
			c.Ui.Error(fmt.Sprintf("Invalid event script: %s", script.String()))
			return nil
		}
	}

	// Check for a valid interface
	if _, err := config.NetworkInterface(); err != nil {
		c.Ui.Error(fmt.Sprintf("Invalid network interface: %s", err))
		return nil
	}

	// Backward compatibility hack for 'Role'
	if config.Role != "" {
		c.Ui.Output("Deprecation warning: 'Role' has been replaced with 'Tags'")
		config.Tags["role"] = config.Role
	}

	// Check for sane retry interval
	if config.RetryInterval < minRetryInterval {
		config.RetryInterval = minRetryInterval
		c.Ui.Output(fmt.Sprintf("Warning: 'RetryInterval' is too low. Setting to %v", config.RetryInterval))
	}

	// Check for sane broadcast timeout
	if config.BroadcastTimeout < minBroadcastTimeout {
		config.BroadcastTimeout = minBroadcastTimeout
		c.Ui.Output(fmt.Sprintf("Warning: 'BroadcastTimeout' is too low. Setting to %v", config.BroadcastTimeout))
	}

	// Check snapshot file is provided if we have RejoinAfterLeave
	if config.RejoinAfterLeave && config.SnapshotPath == "" {
		c.Ui.Output("Warning: 'RejoinAfterLeave' enabled without snapshot file")
	}

	return config
}

// setupAgent is used to create the agent we use
func (c *Command) setupAgent(config *Config, logOutput io.Writer) *Agent {
	bindIP, bindPort, err := config.AddrParts(config.BindAddr)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Invalid bind address: %s", err))
		return nil
	}

	// Check if we have an interface
	if iface, _ := config.NetworkInterface(); iface != nil {
		addrs, err := iface.Addrs()
		if err != nil {
			c.Ui.Error(fmt.Sprintf("Failed to get interface addresses: %s", err))
			return nil
		}
		if len(addrs) == 0 {
			c.Ui.Error(fmt.Sprintf("Interface '%s' has no addresses", config.Interface))
			return nil
		}

		// If there is no bind IP, pick an address
		if bindIP == "0.0.0.0" {
			found := false
			for _, a := range addrs {
				var addrIP net.IP
				if runtime.GOOS == "windows" {
					// Waiting for https://github.com/golang/go/issues/5395 to use IPNet only
					addr, ok := a.(*net.IPAddr)
					if !ok {
						continue
					}
					addrIP = addr.IP
				} else {
					addr, ok := a.(*net.IPNet)
					if !ok {
						continue
					}
					addrIP = addr.IP
				}

				// Skip self-assigned IPs
				if addrIP.IsLinkLocalUnicast() {
					continue
				}

				// Found an IP
				found = true
				bindIP = addrIP.String()
				c.Ui.Output(fmt.Sprintf("Using interface '%s' address '%s'",
					config.Interface, bindIP))

				// Update the configuration
				bindAddr := &net.TCPAddr{
					IP:   net.ParseIP(bindIP),
					Port: bindPort,
				}
				config.BindAddr = bindAddr.String()
				break
			}
			if !found {
				c.Ui.Error(fmt.Sprintf("Failed to find usable address for interface '%s'", config.Interface))
				return nil
			}
		} else {
			// If there is a bind IP, ensure it is available
			found := false
			for _, a := range addrs {
				addr, ok := a.(*net.IPNet)
				if !ok {
					continue
				}
				if addr.IP.String() == bindIP {
					found = true
					break
				}
			}
			if !found {
				c.Ui.Error(fmt.Sprintf("Interface '%s' has no '%s' address",
					config.Interface, bindIP))
				return nil
			}
		}
	}

	var advertiseIP string
	var advertisePort int
	if config.AdvertiseAddr != "" {
		advertiseIP, advertisePort, err = config.AddrParts(config.AdvertiseAddr)
		if err != nil {
			c.Ui.Error(fmt.Sprintf("Invalid advertise address: %s", err))
			return nil
		}
	}

	encryptKey, err := config.EncryptBytes()
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Invalid encryption key: %s", err))
		return nil
	}

	serfConfig := serf.DefaultConfig()
	switch config.Profile {
	case "lan":
		serfConfig.MemberlistConfig = memberlist.DefaultLANConfig()
	case "wan":
		serfConfig.MemberlistConfig = memberlist.DefaultWANConfig()
	case "local":
		serfConfig.MemberlistConfig = memberlist.DefaultLocalConfig()
	default:
		c.Ui.Error(fmt.Sprintf("Unknown profile: %s", config.Profile))
		return nil
	}

	serfConfig.MemberlistConfig.BindAddr = bindIP
	serfConfig.MemberlistConfig.BindPort = bindPort
	serfConfig.MemberlistConfig.AdvertiseAddr = advertiseIP
	serfConfig.MemberlistConfig.AdvertisePort = advertisePort
	serfConfig.MemberlistConfig.SecretKey = encryptKey
serfConfig.NodeName = config.NodeName serfConfig.Tags = config.Tags serfConfig.SnapshotPath = config.SnapshotPath serfConfig.ProtocolVersion = uint8(config.Protocol) serfConfig.CoalescePeriod = 3 * time.Second serfConfig.QuiescentPeriod = time.Second serfConfig.QueryResponseSizeLimit = config.QueryResponseSizeLimit serfConfig.QuerySizeLimit = config.QuerySizeLimit serfConfig.UserCoalescePeriod = 3 * time.Second serfConfig.UserQuiescentPeriod = time.Second if config.ReconnectInterval != 0 { serfConfig.ReconnectInterval = config.ReconnectInterval } if config.ReconnectTimeout != 0 { serfConfig.ReconnectTimeout = config.ReconnectTimeout } if config.TombstoneTimeout != 0 { serfConfig.TombstoneTimeout = config.TombstoneTimeout } serfConfig.EnableNameConflictResolution = !config.DisableNameResolution if config.KeyringFile != "" { serfConfig.KeyringFile = config.KeyringFile } serfConfig.RejoinAfterLeave = config.RejoinAfterLeave if config.BroadcastTimeout != 0 { serfConfig.BroadcastTimeout = config.BroadcastTimeout } // Start Serf c.Ui.Output("Starting Serf agent...") agent, err := Create(config, serfConfig, logOutput) if err != nil { c.Ui.Error(fmt.Sprintf("Failed to start the Serf agent: %v", err)) return nil } return agent } // setupLoggers is used to setup the logGate, logWriter, and our logOutput func (c *Command) setupLoggers(config *Config) (*GatedWriter, *logWriter, io.Writer) { // Setup logging. First create the gated log writer, which will // store logs until we're ready to show them. Then create the level // filter, filtering logs of the specified level. logGate := &GatedWriter{ Writer: &cli.UiWriter{Ui: c.Ui}, } c.logFilter = LevelFilter() c.logFilter.MinLevel = logutils.LogLevel(strings.ToUpper(config.LogLevel)) c.logFilter.Writer = logGate if !ValidateLevelFilter(c.logFilter.MinLevel, c.logFilter) { c.Ui.Error(fmt.Sprintf( "Invalid log level: %s. 
Valid log levels are: %v", c.logFilter.MinLevel, c.logFilter.Levels)) return nil, nil, nil } // Check if syslog is enabled var syslog io.Writer if config.EnableSyslog { l, err := gsyslog.NewLogger(gsyslog.LOG_NOTICE, config.SyslogFacility, "serf") if err != nil { c.Ui.Error(fmt.Sprintf("Syslog setup failed: %v", err)) return nil, nil, nil } syslog = &SyslogWrapper{l, c.logFilter} } // Create a log writer, and wrap a logOutput around it logWriter := NewLogWriter(512) var logOutput io.Writer if syslog != nil { logOutput = io.MultiWriter(c.logFilter, logWriter, syslog) } else { logOutput = io.MultiWriter(c.logFilter, logWriter) } // Create a logger c.logger = log.New(logOutput, "", log.LstdFlags) return logGate, logWriter, logOutput } // startAgent is used to start the agent and IPC func (c *Command) startAgent(config *Config, agent *Agent, logWriter *logWriter, logOutput io.Writer) *AgentIPC { // Add the script event handlers c.scriptHandler = &ScriptEventHandler{ SelfFunc: func() serf.Member { return agent.Serf().LocalMember() }, Scripts: config.EventScripts(), Logger: log.New(logOutput, "", log.LstdFlags), } agent.RegisterEventHandler(c.scriptHandler) // Start the agent after the handler is registered if err := agent.Start(); err != nil { c.Ui.Error(fmt.Sprintf("Failed to start the Serf agent: %v", err)) return nil } // Parse the bind address information bindIP, bindPort, err := config.AddrParts(config.BindAddr) bindAddr := &net.TCPAddr{IP: net.ParseIP(bindIP), Port: bindPort} // Start the discovery layer if config.Discover != "" { // Use the advertise addr and port local := agent.Serf().Memberlist().LocalNode() // Get the bind interface if any iface, _ := config.NetworkInterface() _, err := NewAgentMDNS(agent, logOutput, config.ReplayOnJoin, config.NodeName, config.Discover, iface, local.Addr, int(local.Port)) if err != nil { c.Ui.Error(fmt.Sprintf("Error starting mDNS listener: %s", err)) return nil } } // Setup the RPC listener rpcListener, err := 
net.Listen("tcp", config.RPCAddr) if err != nil { c.Ui.Error(fmt.Sprintf("Error starting RPC listener: %s", err)) return nil } // Start the IPC layer c.Ui.Output("Starting Serf agent RPC...") ipc := NewAgentIPC(agent, config.RPCAuthKey, rpcListener, logOutput, logWriter) c.Ui.Output("Serf agent running!") c.Ui.Info(fmt.Sprintf(" Node name: '%s'", config.NodeName)) c.Ui.Info(fmt.Sprintf(" Bind addr: '%s'", bindAddr.String())) if config.AdvertiseAddr != "" { advertiseIP, advertisePort, _ := config.AddrParts(config.AdvertiseAddr) advertiseAddr := (&net.TCPAddr{IP: net.ParseIP(advertiseIP), Port: advertisePort}).String() c.Ui.Info(fmt.Sprintf("Advertise addr: '%s'", advertiseAddr)) } c.Ui.Info(fmt.Sprintf(" RPC addr: '%s'", config.RPCAddr)) c.Ui.Info(fmt.Sprintf(" Encrypted: %#v", agent.serf.EncryptionEnabled())) c.Ui.Info(fmt.Sprintf(" Snapshot: %v", config.SnapshotPath != "")) c.Ui.Info(fmt.Sprintf(" Profile: %s", config.Profile)) if config.Discover != "" { c.Ui.Info(fmt.Sprintf(" mDNS cluster: %s", config.Discover)) } return ipc } // startupJoin is invoked to handle any joins specified to take place at start time func (c *Command) startupJoin(config *Config, agent *Agent) error { if len(config.StartJoin) == 0 { return nil } c.Ui.Output(fmt.Sprintf("Joining cluster...(replay: %v)", config.ReplayOnJoin)) n, err := agent.Join(config.StartJoin, config.ReplayOnJoin) if err != nil { return err } c.Ui.Info(fmt.Sprintf("Join completed. Synced with %d initial agents", n)) return nil } // retryJoin is invoked to handle joins with retries. 
This runs until at least a // single successful join or RetryMaxAttempts is reached func (c *Command) retryJoin(config *Config, agent *Agent, errCh chan struct{}) { // Quit fast if there is no nodes to join if len(config.RetryJoin) == 0 { return } // Track the number of join attempts attempt := 0 for { // Try to perform the join c.logger.Printf("[INFO] agent: Joining cluster...(replay: %v)", config.ReplayOnJoin) n, err := agent.Join(config.RetryJoin, config.ReplayOnJoin) if err == nil { c.logger.Printf("[INFO] agent: Join completed. Synced with %d initial agents", n) return } // Check if the maximum attempts has been exceeded attempt++ if config.RetryMaxAttempts > 0 && attempt > config.RetryMaxAttempts { c.logger.Printf("[ERR] agent: maximum retry join attempts made, exiting") close(errCh) return } // Log the failure and sleep c.logger.Printf("[WARN] agent: Join failed: %v, retrying in %v", err, config.RetryInterval) time.Sleep(config.RetryInterval) } } func (c *Command) Run(args []string) int { c.Ui = &cli.PrefixedUi{ OutputPrefix: "==> ", InfoPrefix: " ", ErrorPrefix: "==> ", Ui: c.Ui, } // Parse our configs c.args = args config := c.readConfig() if config == nil { return 1 } // Setup the log outputs logGate, logWriter, logOutput := c.setupLoggers(config) if logWriter == nil { return 1 } /* Setup telemetry Aggregate on 10 second intervals for 1 minute. Expose the metrics over stderr when there is a SIGUSR1 received. */ inm := metrics.NewInmemSink(10*time.Second, time.Minute) metrics.DefaultInmemSignal(inm) metricsConf := metrics.DefaultConfig("serf-agent") // Configure the statsite sink var fanout metrics.FanoutSink if config.StatsiteAddr != "" { sink, err := metrics.NewStatsiteSink(config.StatsiteAddr) if err != nil { c.Ui.Error(fmt.Sprintf("Failed to start statsite sink. 
Got: %s", err)) return 1 } fanout = append(fanout, sink) } // Configure the statsd sink if config.StatsdAddr != "" { sink, err := metrics.NewStatsdSink(config.StatsdAddr) if err != nil { c.Ui.Error(fmt.Sprintf("Failed to start statsd sink. Got: %s", err)) return 1 } fanout = append(fanout, sink) } // Initialize the global sink if len(fanout) > 0 { fanout = append(fanout, inm) metrics.NewGlobal(metricsConf, fanout) } else { metricsConf.EnableHostname = false metrics.NewGlobal(metricsConf, inm) } // Setup serf agent := c.setupAgent(config, logOutput) if agent == nil { return 1 } defer agent.Shutdown() // Start the agent ipc := c.startAgent(config, agent, logWriter, logOutput) if ipc == nil { return 1 } defer ipc.Shutdown() // Join startup nodes if specified if err := c.startupJoin(config, agent); err != nil { c.Ui.Error(err.Error()) return 1 } // Enable log streaming c.Ui.Info("") c.Ui.Output("Log data will now stream in as it occurs:\n") logGate.Flush() // Start the retry joins retryJoinCh := make(chan struct{}) go c.retryJoin(config, agent, retryJoinCh) // Wait for exit return c.handleSignals(config, agent, retryJoinCh) } // handleSignals blocks until we get an exit-causing signal func (c *Command) handleSignals(config *Config, agent *Agent, retryJoin chan struct{}) int { signalCh := make(chan os.Signal, 4) signal.Notify(signalCh, os.Interrupt, syscall.SIGTERM, syscall.SIGHUP) // Wait for a signal WAIT: var sig os.Signal select { case s := <-signalCh: sig = s case <-c.ShutdownCh: sig = os.Interrupt case <-retryJoin: // Retry join failed! return 1 case <-agent.ShutdownCh(): // Agent is already shutdown! 
return 0 } c.Ui.Output(fmt.Sprintf("Caught signal: %v", sig)) // Check if this is a SIGHUP if sig == syscall.SIGHUP { config = c.handleReload(config, agent) goto WAIT } // Check if we should do a graceful leave graceful := false if sig == os.Interrupt && !config.SkipLeaveOnInt { graceful = true } else if sig == syscall.SIGTERM && config.LeaveOnTerm { graceful = true } // Bail fast if not doing a graceful leave if !graceful { return 1 } // Attempt a graceful leave gracefulCh := make(chan struct{}) c.Ui.Output("Gracefully shutting down agent...") go func() { if err := agent.Leave(); err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return } close(gracefulCh) }() // Wait for leave or another signal select { case <-signalCh: return 1 case <-time.After(gracefulTimeout): return 1 case <-gracefulCh: return 0 } } // handleReload is invoked when we should reload our configs, e.g. SIGHUP func (c *Command) handleReload(config *Config, agent *Agent) *Config { c.Ui.Output("Reloading configuration...") newConf := c.readConfig() if newConf == nil { c.Ui.Error(fmt.Sprintf("Failed to reload configs")) return config } // Change the log level minLevel := logutils.LogLevel(strings.ToUpper(newConf.LogLevel)) if ValidateLevelFilter(minLevel, c.logFilter) { c.logFilter.SetMinLevel(minLevel) } else { c.Ui.Error(fmt.Sprintf( "Invalid log level: %s. Valid log levels are: %v", minLevel, c.logFilter.Levels)) // Keep the current log level newConf.LogLevel = config.LogLevel } // Change the event handlers c.scriptHandler.UpdateScripts(newConf.EventScripts()) // Update the tags in serf if err := agent.SetTags(newConf.Tags); err != nil { c.Ui.Error(fmt.Sprintf("Failed to update tags: %v", err)) return newConf } return newConf } func (c *Command) Synopsis() string { return "Runs a Serf agent" } func (c *Command) Help() string { helpText := ` Usage: serf agent [options] Starts the Serf agent and runs until an interrupt is received. The agent represents a single node in a cluster. 
Options: -bind=0.0.0.0:7946 Address to bind network listeners to. To use an IPv6 address, specify [::1] or [::1]:7946. -iface Network interface to bind to. Can be used instead of -bind if the interface is known but not the address. If both are provided, then Serf verifies that the interface has the bind address that is provided. This flag also sets the multicast device used for -discover. -advertise=0.0.0.0 Address to advertise to the other cluster members. -config-file=foo Path to a JSON file to read configuration from. This can be specified multiple times. -config-dir=foo Path to a directory to read configuration files from. This will read every file ending in ".json" as configuration in this directory in alphabetical order. -discover=cluster A cluster name used to discover peers. On networks that support multicast, this can be used to have peers join each other without an explicit join. -encrypt=foo Key for encrypting network traffic within Serf. Must be a base64-encoded 16-byte key. -keyring-file The keyring file is used to store encryption keys used by Serf. As encryption keys are changed, the content of this file is updated so that the same keys may be used during later agent starts. -event-handler=foo Script to execute when events occur. This can be specified multiple times. See the event scripts section below for more info. -join=addr An initial agent to join with. This flag can be specified multiple times. -log-level=info Log level of the agent. -node=hostname Name of this node. Must be unique in the cluster. -profile=[lan|wan|local] Profile is used to control the timing profiles used in Serf. The default if not provided is lan. -protocol=n Serf protocol version to use. This defaults to the latest version, but can be set back for upgrades. -rejoin Ignores a previous leave and attempts to rejoin the cluster. Only works if provided along with a snapshot file. -retry-join=addr An agent to join with. This flag can be specified multiple times. 
Unlike -join, it does not exit on failure; instead, joins are retried until they succeed. -retry-interval=30s Sets the interval on which a node will attempt to retry joining nodes provided by -retry-join. Defaults to 30s. -retry-max=0 Limits the number of retry events. Defaults to 0 for unlimited. -role=foo The role of this node, if any. This can be used by event scripts to differentiate different types of nodes that may be part of the same cluster. '-role' is deprecated in favor of '-tag role=foo'. -rpc-addr=127.0.0.1:7373 Address to bind the RPC listener. -snapshot=path/to/file The snapshot file is used to store alive nodes and event information so that Serf can rejoin a cluster and avoid event replay on restart. -tag key=value Tag can be specified multiple times to attach multiple key/value tag pairs to the given node. -tags-file=/path/to/file The tags file is used to persist tag data. As an agent's tags are changed, the tags file will be updated. Tags can be reloaded during later agent starts. This option is incompatible with the '-tag' option and requires there be no tags in the agent configuration file, if given. -syslog When provided, logs will also be sent to syslog. -broadcast-timeout=5s Sets the broadcast timeout, which is the max time allowed for responses to events including leave and force remove messages. Defaults to 5s. Event handlers: For more information on what event handlers are, please read the Serf documentation. This section will document how to configure them on the command-line. There are three methods of specifying an event handler: - The value can be a plain script, such as "event.sh". In this case, Serf will send all events to this script, and you'll be responsible for differentiating between them based on the SERF_EVENT. - The value can be in the format of "TYPE=SCRIPT", such as "member-join=join.sh". With this format, Serf will only send events of that type to that script. 
- The value can be in the format of "user:EVENT=SCRIPT", such as "user:deploy=deploy.sh". This means that Serf will only invoke this script in the case of user events named "deploy". ` return strings.TrimSpace(helpText) } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/command_test.go000066400000000000000000000142041317277572200310460ustar00rootroot00000000000000package agent import ( "bytes" "github.com/hashicorp/serf/client" "github.com/hashicorp/serf/testutil" "github.com/mitchellh/cli" "log" "os" "testing" "time" ) func TestCommand_implements(t *testing.T) { var _ cli.Command = new(Command) } func TestCommandRun(t *testing.T) { shutdownCh := make(chan struct{}) defer close(shutdownCh) ui := new(cli.MockUi) c := &Command{ ShutdownCh: shutdownCh, Ui: ui, } args := []string{ "-bind", testutil.GetBindAddr().String(), "-rpc-addr", getRPCAddr(), } resultCh := make(chan int) go func() { resultCh <- c.Run(args) }() testutil.Yield() // Verify it runs "forever" select { case <-resultCh: t.Fatalf("ended too soon, err: %s", ui.ErrorWriter.String()) case <-time.After(50 * time.Millisecond): } // Send a shutdown request shutdownCh <- struct{}{} select { case code := <-resultCh: if code != 0 { t.Fatalf("bad code: %d", code) } case <-time.After(50 * time.Millisecond): t.Fatalf("timeout") } } func TestCommandRun_rpc(t *testing.T) { doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } rpcAddr := getRPCAddr() args := []string{ "-bind", testutil.GetBindAddr().String(), "-rpc-addr", rpcAddr, } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() testutil.Yield() client, err := client.NewRPCClient(rpcAddr) if err != nil { t.Fatalf("err: %s", err) } defer client.Close() members, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(members) != 1 { t.Fatalf("bad: %#v", members) 
} } func TestCommandRun_join(t *testing.T) { a1 := testAgent(nil) if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } defer a1.Shutdown() doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-bind", testutil.GetBindAddr().String(), "-join", a1.conf.MemberlistConfig.BindAddr, "-replay", } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() testutil.Yield() if len(a1.Serf().Members()) != 2 { t.Fatalf("bad: %#v", a1.Serf().Members()) } } func TestCommandRun_joinFail(t *testing.T) { shutdownCh := make(chan struct{}) defer close(shutdownCh) c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-bind", testutil.GetBindAddr().String(), "-join", testutil.GetBindAddr().String(), } code := c.Run(args) if code == 0 { t.Fatal("should fail") } } func TestCommandRun_advertiseAddr(t *testing.T) { doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } rpcAddr := getRPCAddr() args := []string{ "-bind", testutil.GetBindAddr().String(), "-rpc-addr", rpcAddr, "-advertise", "127.0.0.10:12345", } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() testutil.Yield() client, err := client.NewRPCClient(rpcAddr) if err != nil { t.Fatalf("err: %s", err) } defer client.Close() members, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(members) != 1 { t.Fatalf("bad: %#v", members) } // Check the addr and port is as advertised! 
m := members[0] if bytes.Compare(m.Addr, []byte{127, 0, 0, 10}) != 0 { t.Fatalf("bad: %#v", m) } if m.Port != 12345 { t.Fatalf("bad: %#v", m) } } func TestCommandRun_mDNS(t *testing.T) { // mDNS does not work in travis if os.Getenv("TRAVIS") != "" { t.SkipNow() } // Start an agent doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-node", "foo", "-bind", testutil.GetBindAddr().String(), "-discover", "test", "-rpc-addr", getRPCAddr(), } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() // Start a second agent doneCh2 := make(chan struct{}) shutdownCh2 := make(chan struct{}) defer func() { close(shutdownCh2) <-doneCh2 }() c2 := &Command{ ShutdownCh: shutdownCh2, Ui: new(cli.MockUi), } addr2 := getRPCAddr() args2 := []string{ "-node", "bar", "-bind", testutil.GetBindAddr().String(), "-discover", "test", "-rpc-addr", addr2, } go func() { code := c2.Run(args2) if code != 0 { log.Printf("bad: %d", code) } close(doneCh2) }() time.Sleep(150 * time.Millisecond) client, err := client.NewRPCClient(addr2) if err != nil { t.Fatalf("err: %s", err) } defer client.Close() members, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(members) != 2 { t.Fatalf("bad: %#v", members) } } func TestCommandRun_retry_join(t *testing.T) { a1 := testAgent(nil) if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } defer a1.Shutdown() doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-bind", testutil.GetBindAddr().String(), "-retry-join", a1.conf.MemberlistConfig.BindAddr, "-replay", } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() testutil.Yield() if len(a1.Serf().Members()) != 2 { t.Fatalf("bad: 
%#v", a1.Serf().Members()) } } func TestCommandRun_retry_joinFail(t *testing.T) { shutdownCh := make(chan struct{}) defer close(shutdownCh) c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-bind", testutil.GetBindAddr().String(), "-retry-join", testutil.GetBindAddr().String(), "-retry-interval", "1s", "-retry-max", "1", } code := c.Run(args) if code == 0 { t.Fatal("should fail") } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/config.go000066400000000000000000000425321317277572200276430ustar00rootroot00000000000000package agent import ( "encoding/base64" "encoding/json" "fmt" "io" "net" "os" "path/filepath" "sort" "strings" "time" "github.com/hashicorp/serf/serf" "github.com/mitchellh/mapstructure" ) // This is the default port that we use for Serf communication const DefaultBindPort int = 7946 // DefaultConfig contains the defaults for configurations. func DefaultConfig() *Config { return &Config{ DisableCoordinates: false, Tags: make(map[string]string), BindAddr: "0.0.0.0", AdvertiseAddr: "", LogLevel: "INFO", RPCAddr: "127.0.0.1:7373", Protocol: serf.ProtocolVersionMax, ReplayOnJoin: false, Profile: "lan", RetryInterval: 30 * time.Second, SyslogFacility: "LOCAL0", QueryResponseSizeLimit: 1024, QuerySizeLimit: 1024, BroadcastTimeout: 5 * time.Second, } } type dirEnts []os.FileInfo // Config is the configuration that can be set for an Agent. Some of these // configurations are exposed as command-line flags to `serf agent`, whereas // many of the more advanced configurations can only be set by creating // a configuration file. type Config struct { // All the configurations in this section are identical to their // Serf counterparts. See the documentation for Serf.Config for // more info. NodeName string `mapstructure:"node_name"` Role string `mapstructure:"role"` DisableCoordinates bool `mapstructure:"disable_coordinates"` // Tags are used to attach key/value metadata to a node. 
They have // replaced 'Role' as a more flexible metadata mechanism. The 'role' tag key is // special, and is preserved for backwards compatibility. Tags map[string]string `mapstructure:"tags"` // TagsFile is the path to a file where Serf can store its tags. Tag // persistence is desirable since tags may be set or deleted while the // agent is running. Tags can be reloaded from this file on later starts. TagsFile string `mapstructure:"tags_file"` // BindAddr is the address that the Serf agent's communication ports // will bind to. Serf will use this address to bind to for both TCP // and UDP connections. If no port is present in the address, the default // port will be used. BindAddr string `mapstructure:"bind"` // AdvertiseAddr is the address that the Serf agent will advertise to // other members of the cluster. Can be used for basic NAT traversal // where both the internal ip:port and external ip:port are known. AdvertiseAddr string `mapstructure:"advertise"` // EncryptKey is the secret key to use for encrypting communication // traffic for Serf. The secret key must be exactly 16 bytes, base64 // encoded. The easiest way to do this on Unix machines is this command: // "head -c16 /dev/urandom | base64". If this is not specified, the // traffic will not be encrypted. EncryptKey string `mapstructure:"encrypt_key"` // KeyringFile is the path to a file containing a serialized keyring. // The keyring is used to facilitate encryption. If left blank, the // keyring will not be persisted to a file. KeyringFile string `mapstructure:"keyring_file"` // LogLevel is the level of the logs to output. // This can be updated during a reload. LogLevel string `mapstructure:"log_level"` // RPCAddr is the address and port to listen on for the agent's RPC // interface. RPCAddr string `mapstructure:"rpc_addr"` // RPCAuthKey is a key that can be set to optionally require that // RPCs provide an authentication key. 
This is meant to be // a very simple authentication control RPCAuthKey string `mapstructure:"rpc_auth"` // Protocol is the Serf protocol version to use. Protocol int `mapstructure:"protocol"` // ReplayOnJoin tells Serf to replay past user events // when joining based on a `StartJoin`. ReplayOnJoin bool `mapstructure:"replay_on_join"` // QueryResponseSizeLimit and QuerySizeLimit limit the inbound and // outbound payload sizes for queries, respectively. These must fit // in a UDP packet with some additional overhead, so tuning these // past the default values of 1024 will depend on your network // configuration. QueryResponseSizeLimit int `mapstructure:"query_response_size_limit"` QuerySizeLimit int `mapstructure:"query_size_limit"` // StartJoin is a list of addresses to attempt to join when the // agent starts. If Serf is unable to communicate with any of these // addresses, then the agent will error and exit. StartJoin []string `mapstructure:"start_join"` // EventHandlers is a list of event handlers that will be invoked. // These can be updated during a reload. EventHandlers []string `mapstructure:"event_handlers"` // Profile is used to select a timing profile for Serf. The supported choices // are "wan", "lan", and "local". The default is "lan" Profile string `mapstructure:"profile"` // SnapshotPath is used to allow Serf to snapshot important transactional // state to make a more graceful recovery possible. This enables auto // re-joining a cluster on failure and avoids old message replay. SnapshotPath string `mapstructure:"snapshot_path"` // LeaveOnTerm controls if Serf does a graceful leave when receiving // the TERM signal. Defaults false. This can be changed on reload. LeaveOnTerm bool `mapstructure:"leave_on_terminate"` // SkipLeaveOnInt controls if Serf skips a graceful leave when receiving // the INT signal. Defaults false. This can be changed on reload. 
SkipLeaveOnInt bool `mapstructure:"skip_leave_on_interrupt"` // Discover is used to setup an mDNS Discovery name. When this is set, the // agent will setup an mDNS responder and periodically run an mDNS query // to look for peers. For peers on a network that supports multicast, this // allows Serf agents to join each other with zero configuration. Discover string `mapstructure:"discover"` // Interface is used to provide a binding interface to use. It can be // used instead of providing a bind address, as Serf will discover the // address of the provided interface. It is also used to set the multicast // device used with `-discover`. Interface string `mapstructure:"interface"` // ReconnectIntervalRaw is the string reconnect interval time. This interval // controls how often we attempt to connect to a failed node. ReconnectIntervalRaw string `mapstructure:"reconnect_interval"` ReconnectInterval time.Duration `mapstructure:"-"` // ReconnectTimeoutRaw is the string reconnect timeout. This timeout controls // for how long we attempt to connect to a failed node before removing // it from the cluster. ReconnectTimeoutRaw string `mapstructure:"reconnect_timeout"` ReconnectTimeout time.Duration `mapstructure:"-"` // TombstoneTimeoutRaw is the string tombstone timeout. This timeout controls // for how long we remember a left node before removing it from the cluster. TombstoneTimeoutRaw string `mapstructure:"tombstone_timeout"` TombstoneTimeout time.Duration `mapstructure:"-"` // By default Serf will attempt to resolve name conflicts. This is done by // determining which node the majority believe to be the proper node, and // by having the minority node shutdown. If you want to disable this behavior, // then this flag can be set to true. DisableNameResolution bool `mapstructure:"disable_name_resolution"` // EnableSyslog is used to also tee all the logs over to syslog. Only supported // on linux and OSX. Other platforms will generate an error. 
EnableSyslog bool `mapstructure:"enable_syslog"` // SyslogFacility is used to control which syslog facility messages are // sent to. Defaults to LOCAL0. SyslogFacility string `mapstructure:"syslog_facility"` // RetryJoin is a list of addresses to attempt to join when the // agent starts. Serf will continue to retry the join until it // succeeds or RetryMaxAttempts is reached. RetryJoin []string `mapstructure:"retry_join"` // RetryMaxAttempts is used to limit the maximum attempts made // by RetryJoin to reach other nodes. If this is 0, then no limit // is imposed, and Serf will continue to try forever. Defaults to 0. RetryMaxAttempts int `mapstructure:"retry_max_attempts"` // RetryIntervalRaw is the string retry interval. This interval // controls how often we retry the join for RetryJoin. This defaults // to 30 seconds. RetryIntervalRaw string `mapstructure:"retry_interval"` RetryInterval time.Duration `mapstructure:"-"` // RejoinAfterLeave controls our interaction with the snapshot file. // When set to false (default), a leave causes Serf to not rejoin // the cluster until an explicit join is received. If this is set to // true, we ignore the leave, and rejoin the cluster on start. This // only has an effect if the snapshot file is enabled. RejoinAfterLeave bool `mapstructure:"rejoin_after_leave"` // StatsiteAddr is the address of a statsite instance. If provided, // metrics will be streamed to that instance. StatsiteAddr string `mapstructure:"statsite_addr"` // StatsdAddr is the address of a statsd instance. If provided, // metrics will be sent to that instance. StatsdAddr string `mapstructure:"statsd_addr"` // BroadcastTimeoutRaw is the string broadcast timeout. This timeout // controls how long broadcast events are allowed to take. This defaults to // 5 seconds. BroadcastTimeoutRaw string `mapstructure:"broadcast_timeout"` BroadcastTimeout time.Duration `mapstructure:"-"` } // AddrParts returns the host and port parts of the given address that should be // used to configure Serf. 
func (c *Config) AddrParts(address string) (string, int, error) { checkAddr := address START: _, _, err := net.SplitHostPort(checkAddr) if ae, ok := err.(*net.AddrError); ok && ae.Err == "missing port in address" { checkAddr = fmt.Sprintf("%s:%d", checkAddr, DefaultBindPort) goto START } if err != nil { return "", 0, err } // Get the address addr, err := net.ResolveTCPAddr("tcp", checkAddr) if err != nil { return "", 0, err } return addr.IP.String(), addr.Port, nil } // EncryptBytes returns the encryption key configured. func (c *Config) EncryptBytes() ([]byte, error) { return base64.StdEncoding.DecodeString(c.EncryptKey) } // EventScripts returns the list of EventScripts associated with this // configuration and specified by the "event_handlers" configuration. func (c *Config) EventScripts() []EventScript { result := make([]EventScript, 0, len(c.EventHandlers)) for _, v := range c.EventHandlers { part := ParseEventScript(v) result = append(result, part...) } return result } // NetworkInterface is used to get the associated network // interface from the configured value. func (c *Config) NetworkInterface() (*net.Interface, error) { if c.Interface == "" { return nil, nil } return net.InterfaceByName(c.Interface) } // DecodeConfig reads the configuration from the given reader in JSON // format and decodes it into a proper Config structure. 
func DecodeConfig(r io.Reader) (*Config, error) { var raw interface{} dec := json.NewDecoder(r) if err := dec.Decode(&raw); err != nil { return nil, err } // Decode var md mapstructure.Metadata var result Config msdec, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{ Metadata: &md, Result: &result, ErrorUnused: true, }) if err != nil { return nil, err } if err := msdec.Decode(raw); err != nil { return nil, err } // Decode the time values if result.ReconnectIntervalRaw != "" { dur, err := time.ParseDuration(result.ReconnectIntervalRaw) if err != nil { return nil, err } result.ReconnectInterval = dur } if result.ReconnectTimeoutRaw != "" { dur, err := time.ParseDuration(result.ReconnectTimeoutRaw) if err != nil { return nil, err } result.ReconnectTimeout = dur } if result.TombstoneTimeoutRaw != "" { dur, err := time.ParseDuration(result.TombstoneTimeoutRaw) if err != nil { return nil, err } result.TombstoneTimeout = dur } if result.RetryIntervalRaw != "" { dur, err := time.ParseDuration(result.RetryIntervalRaw) if err != nil { return nil, err } result.RetryInterval = dur } if result.BroadcastTimeoutRaw != "" { dur, err := time.ParseDuration(result.BroadcastTimeoutRaw) if err != nil { return nil, err } result.BroadcastTimeout = dur } return &result, nil } // containsKey is used to check if a slice of string keys contains // another key func containsKey(keys []string, key string) bool { for _, k := range keys { if k == key { return true } } return false } // MergeConfig merges two configurations together to make a single new // configuration. 
func MergeConfig(a, b *Config) *Config { var result Config = *a // Copy the strings if they're set if b.NodeName != "" { result.NodeName = b.NodeName } if b.Role != "" { result.Role = b.Role } if b.DisableCoordinates == true { result.DisableCoordinates = true } if b.Tags != nil { if result.Tags == nil { result.Tags = make(map[string]string) } for name, value := range b.Tags { result.Tags[name] = value } } if b.BindAddr != "" { result.BindAddr = b.BindAddr } if b.AdvertiseAddr != "" { result.AdvertiseAddr = b.AdvertiseAddr } if b.EncryptKey != "" { result.EncryptKey = b.EncryptKey } if b.LogLevel != "" { result.LogLevel = b.LogLevel } if b.Protocol > 0 { result.Protocol = b.Protocol } if b.RPCAddr != "" { result.RPCAddr = b.RPCAddr } if b.RPCAuthKey != "" { result.RPCAuthKey = b.RPCAuthKey } if b.ReplayOnJoin != false { result.ReplayOnJoin = b.ReplayOnJoin } if b.Profile != "" { result.Profile = b.Profile } if b.SnapshotPath != "" { result.SnapshotPath = b.SnapshotPath } if b.LeaveOnTerm == true { result.LeaveOnTerm = true } if b.SkipLeaveOnInt == true { result.SkipLeaveOnInt = true } if b.Discover != "" { result.Discover = b.Discover } if b.Interface != "" { result.Interface = b.Interface } if b.ReconnectInterval != 0 { result.ReconnectInterval = b.ReconnectInterval } if b.ReconnectTimeout != 0 { result.ReconnectTimeout = b.ReconnectTimeout } if b.TombstoneTimeout != 0 { result.TombstoneTimeout = b.TombstoneTimeout } if b.DisableNameResolution { result.DisableNameResolution = true } if b.TagsFile != "" { result.TagsFile = b.TagsFile } if b.KeyringFile != "" { result.KeyringFile = b.KeyringFile } if b.EnableSyslog { result.EnableSyslog = true } if b.RetryMaxAttempts != 0 { result.RetryMaxAttempts = b.RetryMaxAttempts } if b.RetryInterval != 0 { result.RetryInterval = b.RetryInterval } if b.RejoinAfterLeave { result.RejoinAfterLeave = true } if b.SyslogFacility != "" { result.SyslogFacility = b.SyslogFacility } if b.StatsiteAddr != "" { result.StatsiteAddr = 
b.StatsiteAddr } if b.StatsdAddr != "" { result.StatsdAddr = b.StatsdAddr } if b.QueryResponseSizeLimit != 0 { result.QueryResponseSizeLimit = b.QueryResponseSizeLimit } if b.QuerySizeLimit != 0 { result.QuerySizeLimit = b.QuerySizeLimit } if b.BroadcastTimeout != 0 { result.BroadcastTimeout = b.BroadcastTimeout } // Copy the event handlers result.EventHandlers = make([]string, 0, len(a.EventHandlers)+len(b.EventHandlers)) result.EventHandlers = append(result.EventHandlers, a.EventHandlers...) result.EventHandlers = append(result.EventHandlers, b.EventHandlers...) // Copy the start join addresses result.StartJoin = make([]string, 0, len(a.StartJoin)+len(b.StartJoin)) result.StartJoin = append(result.StartJoin, a.StartJoin...) result.StartJoin = append(result.StartJoin, b.StartJoin...) // Copy the retry join addresses result.RetryJoin = make([]string, 0, len(a.RetryJoin)+len(b.RetryJoin)) result.RetryJoin = append(result.RetryJoin, a.RetryJoin...) result.RetryJoin = append(result.RetryJoin, b.RetryJoin...) return &result } // ReadConfigPaths reads the paths in the given order to load configurations. // The paths can be to files or directories. If the path is a directory, // we read one directory deep and read any files ending in ".json" as // configuration files. 
func ReadConfigPaths(paths []string) (*Config, error) { result := new(Config) for _, path := range paths { f, err := os.Open(path) if err != nil { return nil, fmt.Errorf("Error reading '%s': %s", path, err) } fi, err := f.Stat() if err != nil { f.Close() return nil, fmt.Errorf("Error reading '%s': %s", path, err) } if !fi.IsDir() { config, err := DecodeConfig(f) f.Close() if err != nil { return nil, fmt.Errorf("Error decoding '%s': %s", path, err) } result = MergeConfig(result, config) continue } contents, err := f.Readdir(-1) f.Close() if err != nil { return nil, fmt.Errorf("Error reading '%s': %s", path, err) } // Sort the contents, ensures lexical order sort.Sort(dirEnts(contents)) for _, fi := range contents { // Don't recursively read contents if fi.IsDir() { continue } // If it isn't a JSON file, ignore it if !strings.HasSuffix(fi.Name(), ".json") { continue } subpath := filepath.Join(path, fi.Name()) f, err := os.Open(subpath) if err != nil { return nil, fmt.Errorf("Error reading '%s': %s", subpath, err) } config, err := DecodeConfig(f) f.Close() if err != nil { return nil, fmt.Errorf("Error decoding '%s': %s", subpath, err) } result = MergeConfig(result, config) } } return result, nil } // Implement the sort interface for dirEnts func (d dirEnts) Len() int { return len(d) } func (d dirEnts) Less(i, j int) bool { return d[i].Name() < d[j].Name() } func (d dirEnts) Swap(i, j int) { d[i], d[j] = d[j], d[i] } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/config_test.go000066400000000000000000000275541317277572200307110ustar00rootroot00000000000000package agent import ( "bytes" "encoding/base64" "io/ioutil" "os" "path/filepath" "reflect" "testing" "time" ) func TestConfigBindAddrParts(t *testing.T) { testCases := []struct { Value string IP string Port int Error bool }{ {"0.0.0.0", "0.0.0.0", DefaultBindPort, false}, {"0.0.0.0:1234", "0.0.0.0", 1234, false}, } for _, tc := range testCases { c := &Config{BindAddr: tc.Value} ip, 
port, err := c.AddrParts(c.BindAddr) if tc.Error != (err != nil) { t.Errorf("Bad error: %s", err) continue } if tc.IP != ip { t.Errorf("%s: Got IP %#v", tc.Value, ip) continue } if tc.Port != port { t.Errorf("%s: Got port %d", tc.Value, port) continue } } } func TestConfigEncryptBytes(t *testing.T) { // Test with some input src := []byte("abc") c := &Config{ EncryptKey: base64.StdEncoding.EncodeToString(src), } result, err := c.EncryptBytes() if err != nil { t.Fatalf("err: %s", err) } if !bytes.Equal(src, result) { t.Fatalf("bad: %#v", result) } // Test with no input c = &Config{} result, err = c.EncryptBytes() if err != nil { t.Fatalf("err: %s", err) } if len(result) > 0 { t.Fatalf("bad: %#v", result) } } func TestConfigEventScripts(t *testing.T) { c := &Config{ EventHandlers: []string{ "foo.sh", "bar=blah.sh", }, } result := c.EventScripts() if len(result) != 2 { t.Fatalf("bad: %#v", result) } expected := []EventScript{ {EventFilter{"*", ""}, "foo.sh"}, {EventFilter{"bar", ""}, "blah.sh"}, } if !reflect.DeepEqual(result, expected) { t.Fatalf("bad: %#v", result) } } func TestDecodeConfig(t *testing.T) { // Without a protocol input := `{"node_name": "foo"}` config, err := DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.NodeName != "foo" { t.Fatalf("bad: %#v", config) } if config.Protocol != 0 { t.Fatalf("bad: %#v", config) } if config.SkipLeaveOnInt != DefaultConfig().SkipLeaveOnInt { t.Fatalf("bad: %#v", config) } if config.LeaveOnTerm != DefaultConfig().LeaveOnTerm { t.Fatalf("bad: %#v", config) } // With a protocol input = `{"node_name": "foo", "protocol": 7}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.NodeName != "foo" { t.Fatalf("bad: %#v", config) } if config.Protocol != 7 { t.Fatalf("bad: %#v", config) } // A bind addr input = `{"bind": "127.0.0.2"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: 
%s", err) } if config.BindAddr != "127.0.0.2" { t.Fatalf("bad: %#v", config) } // replayOnJoin input = `{"replay_on_join": true}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.ReplayOnJoin != true { t.Fatalf("bad: %#v", config) } // leave_on_terminate input = `{"leave_on_terminate": true}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.LeaveOnTerm != true { t.Fatalf("bad: %#v", config) } // skip_leave_on_interrupt input = `{"skip_leave_on_interrupt": true}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.SkipLeaveOnInt != true { t.Fatalf("bad: %#v", config) } // tags input = `{"tags": {"foo": "bar", "role": "test"}}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.Tags["foo"] != "bar" { t.Fatalf("bad: %#v", config) } if config.Tags["role"] != "test" { t.Fatalf("bad: %#v", config) } // tags file input = `{"tags_file": "/some/path"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.TagsFile != "/some/path" { t.Fatalf("bad: %#v", config) } // Discover input = `{"discover": "foobar"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.Discover != "foobar" { t.Fatalf("bad: %#v", config) } // Interface input = `{"interface": "eth0"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.Interface != "eth0" { t.Fatalf("bad: %#v", config) } // Reconnect intervals input = `{"reconnect_interval": "15s", "reconnect_timeout": "48h"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.ReconnectInterval != 15*time.Second { t.Fatalf("bad: %#v", config) } if config.ReconnectTimeout != 
48*time.Hour { t.Fatalf("bad: %#v", config) } // RPC Auth input = `{"rpc_auth": "foobar"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.RPCAuthKey != "foobar" { t.Fatalf("bad: %#v", config) } // DisableNameResolution input = `{"disable_name_resolution": true}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if !config.DisableNameResolution { t.Fatalf("bad: %#v", config) } // Tombstone intervals input = `{"tombstone_timeout": "48h"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.TombstoneTimeout != 48*time.Hour { t.Fatalf("bad: %#v", config) } // Syslog input = `{"enable_syslog": true, "syslog_facility": "LOCAL4"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if !config.EnableSyslog { t.Fatalf("bad: %#v", config) } if config.SyslogFacility != "LOCAL4" { t.Fatalf("bad: %#v", config) } // Retry configs input = `{"retry_max_attempts": 5, "retry_interval": "60s"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.RetryMaxAttempts != 5 { t.Fatalf("bad: %#v", config) } if config.RetryInterval != 60*time.Second { t.Fatalf("bad: %#v", config) } // Broadcast timeout input = `{"broadcast_timeout": "10s"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.BroadcastTimeout != 10*time.Second { t.Fatalf("bad: %#v", config) } // Retry configs input = `{"retry_join": ["127.0.0.1", "127.0.0.2"]}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if len(config.RetryJoin) != 2 { t.Fatalf("bad: %#v", config) } if config.RetryJoin[0] != "127.0.0.1" { t.Fatalf("bad: %#v", config) } if config.RetryJoin[1] != "127.0.0.2" { t.Fatalf("bad: %#v", config) } // Rejoin configs input 
= `{"rejoin_after_leave": true}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if !config.RejoinAfterLeave { t.Fatalf("bad: %#v", config) } // Stats configs input = `{"statsite_addr": "127.0.0.1:8123", "statsd_addr": "127.0.0.1:8125"}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.StatsiteAddr != "127.0.0.1:8123" { t.Fatalf("bad: %#v", config) } if config.StatsdAddr != "127.0.0.1:8125" { t.Fatalf("bad: %#v", config) } // Query sizes input = `{"query_response_size_limit": 123, "query_size_limit": 456}` config, err = DecodeConfig(bytes.NewReader([]byte(input))) if err != nil { t.Fatalf("err: %s", err) } if config.QueryResponseSizeLimit != 123 || config.QuerySizeLimit != 456 { t.Fatalf("bad: %#v", config) } } func TestDecodeConfig_unknownDirective(t *testing.T) { input := `{"unknown_directive": "titi"}` _, err := DecodeConfig(bytes.NewReader([]byte(input))) if err == nil { t.Fatal("should have err") } } func TestMergeConfig(t *testing.T) { a := &Config{ NodeName: "foo", Role: "bar", Protocol: 7, EventHandlers: []string{"foo"}, StartJoin: []string{"foo"}, ReplayOnJoin: true, RetryJoin: []string{"zab"}, } b := &Config{ NodeName: "bname", DisableCoordinates: true, Protocol: -1, EncryptKey: "foo", EventHandlers: []string{"bar"}, StartJoin: []string{"bar"}, LeaveOnTerm: true, SkipLeaveOnInt: true, Discover: "tubez", Interface: "eth0", ReconnectInterval: 15 * time.Second, ReconnectTimeout: 48 * time.Hour, RPCAuthKey: "foobar", DisableNameResolution: true, TombstoneTimeout: 36 * time.Hour, EnableSyslog: true, RetryJoin: []string{"zip"}, RetryMaxAttempts: 10, RetryInterval: 120 * time.Second, RejoinAfterLeave: true, StatsiteAddr: "127.0.0.1:8125", QueryResponseSizeLimit: 123, QuerySizeLimit: 456, BroadcastTimeout: 20 * time.Second, } c := MergeConfig(a, b) if c.NodeName != "bname" { t.Fatalf("bad: %#v", c) } if c.Role != "bar" { t.Fatalf("bad: %#v", c) } 
if c.DisableCoordinates != true { t.Fatalf("bad: %#v", c) } if c.Protocol != 7 { t.Fatalf("bad: %#v", c) } if c.EncryptKey != "foo" { t.Fatalf("bad: %#v", c.EncryptKey) } if c.ReplayOnJoin != true { t.Fatalf("bad: %#v", c.ReplayOnJoin) } if !c.LeaveOnTerm { t.Fatalf("bad: %#v", c.LeaveOnTerm) } if !c.SkipLeaveOnInt { t.Fatalf("bad: %#v", c.SkipLeaveOnInt) } if c.Discover != "tubez" { t.Fatalf("Bad: %v", c.Discover) } if c.Interface != "eth0" { t.Fatalf("Bad: %v", c.Interface) } if c.ReconnectInterval != 15*time.Second { t.Fatalf("bad: %#v", c) } if c.ReconnectTimeout != 48*time.Hour { t.Fatalf("bad: %#v", c) } if c.TombstoneTimeout != 36*time.Hour { t.Fatalf("bad: %#v", c) } if c.RPCAuthKey != "foobar" { t.Fatalf("bad: %#v", c) } if !c.DisableNameResolution { t.Fatalf("bad: %#v", c) } if !c.EnableSyslog { t.Fatalf("bad: %#v", c) } if c.RetryMaxAttempts != 10 { t.Fatalf("bad: %#v", c) } if c.RetryInterval != 120*time.Second { t.Fatalf("bad: %#v", c) } if !c.RejoinAfterLeave { t.Fatalf("bad: %#v", c) } if c.StatsiteAddr != "127.0.0.1:8125" { t.Fatalf("bad: %#v", c) } expected := []string{"foo", "bar"} if !reflect.DeepEqual(c.EventHandlers, expected) { t.Fatalf("bad: %#v", c) } if !reflect.DeepEqual(c.StartJoin, expected) { t.Fatalf("bad: %#v", c) } expected = []string{"zab", "zip"} if !reflect.DeepEqual(c.RetryJoin, expected) { t.Fatalf("bad: %#v", c) } if c.QueryResponseSizeLimit != 123 || c.QuerySizeLimit != 456 { t.Fatalf("bad: %#v", c) } if c.BroadcastTimeout != 20*time.Second { t.Fatalf("bad: %#v", c) } } func TestReadConfigPaths_badPath(t *testing.T) { _, err := ReadConfigPaths([]string{"/i/shouldnt/exist/ever/rainbows"}) if err == nil { t.Fatal("should have err") } } func TestReadConfigPaths_file(t *testing.T) { tf, err := ioutil.TempFile("", "serf") if err != nil { t.Fatalf("err: %s", err) } tf.Write([]byte(`{"node_name":"bar"}`)) tf.Close() defer os.Remove(tf.Name()) config, err := ReadConfigPaths([]string{tf.Name()}) if err != nil { t.Fatalf("err: %s", err) 
	}
	if config.NodeName != "bar" {
		t.Fatalf("bad: %#v", config)
	}
}

func TestReadConfigPaths_dir(t *testing.T) {
	td, err := ioutil.TempDir("", "serf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	defer os.RemoveAll(td)

	err = ioutil.WriteFile(filepath.Join(td, "a.json"),
		[]byte(`{"node_name": "bar"}`), 0644)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	err = ioutil.WriteFile(filepath.Join(td, "b.json"),
		[]byte(`{"node_name": "baz"}`), 0644)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	// A non-json file, shouldn't be read
	err = ioutil.WriteFile(filepath.Join(td, "c"),
		[]byte(`{"node_name": "bad"}`), 0644)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	config, err := ReadConfigPaths([]string{td})
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.NodeName != "baz" {
		t.Fatalf("bad: %#v", config)
	}
}

golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/event_handler.go

package agent

import (
	"fmt"
	"github.com/hashicorp/serf/serf"
	"log"
	"os"
	"strings"
	"sync"
)

// EventHandler is a handler that does things when events happen.
type EventHandler interface {
	HandleEvent(serf.Event)
}

// ScriptEventHandler invokes scripts for the events that it receives.
type ScriptEventHandler struct { SelfFunc func() serf.Member Scripts []EventScript Logger *log.Logger scriptLock sync.Mutex newScripts []EventScript } func (h *ScriptEventHandler) HandleEvent(e serf.Event) { // Swap in the new scripts if any h.scriptLock.Lock() if h.newScripts != nil { h.Scripts = h.newScripts h.newScripts = nil } h.scriptLock.Unlock() if h.Logger == nil { h.Logger = log.New(os.Stderr, "", log.LstdFlags) } self := h.SelfFunc() for _, script := range h.Scripts { if !script.Invoke(e) { continue } err := invokeEventScript(h.Logger, script.Script, self, e) if err != nil { h.Logger.Printf("[ERR] agent: Error invoking script '%s': %s", script.Script, err) } } } // UpdateScripts is used to safely update the scripts we invoke in // a thread safe manner func (h *ScriptEventHandler) UpdateScripts(scripts []EventScript) { h.scriptLock.Lock() defer h.scriptLock.Unlock() h.newScripts = scripts } // EventFilter is used to filter which events are processed type EventFilter struct { Event string Name string } // Invoke tests whether or not this event script should be invoked // for the given Serf event. func (s *EventFilter) Invoke(e serf.Event) bool { if s.Event == "*" { return true } if e.EventType().String() != s.Event { return false } if s.Event == "user" && s.Name != "" { userE, ok := e.(serf.UserEvent) if !ok { return false } if userE.Name != s.Name { return false } } if s.Event == "query" && s.Name != "" { query, ok := e.(*serf.Query) if !ok { return false } if query.Name != s.Name { return false } } return true } // Valid checks if this is a valid agent event script. func (s *EventFilter) Valid() bool { switch s.Event { case "member-join": case "member-leave": case "member-failed": case "member-update": case "member-reap": case "user": case "query": case "*": default: return false } return true } // EventScript is a single event script that will be executed in the // case of an event, and is configured from the command-line or from // a configuration file. 
type EventScript struct {
	EventFilter
	Script string
}

func (s *EventScript) String() string {
	if s.Name != "" {
		return fmt.Sprintf("Event '%s:%s' invoking '%s'", s.Event, s.Name, s.Script)
	}
	return fmt.Sprintf("Event '%s' invoking '%s'", s.Event, s.Script)
}

// ParseEventScript takes a string in the format of "type=script" and
// parses it into an EventScript struct, if it can.
func ParseEventScript(v string) []EventScript {
	var filter, script string
	parts := strings.SplitN(v, "=", 2)
	if len(parts) == 1 {
		script = parts[0]
	} else {
		filter = parts[0]
		script = parts[1]
	}

	filters := ParseEventFilter(filter)
	results := make([]EventScript, 0, len(filters))
	for _, filt := range filters {
		result := EventScript{
			EventFilter: filt,
			Script:      script,
		}
		results = append(results, result)
	}
	return results
}

// ParseEventFilter takes a string with the event type filters and
// parses it into a series of EventFilters if it can.
func ParseEventFilter(v string) []EventFilter {
	// No filter translates to stream all
	if v == "" {
		v = "*"
	}

	events := strings.Split(v, ",")
	results := make([]EventFilter, 0, len(events))
	for _, event := range events {
		var result EventFilter
		var name string

		if strings.HasPrefix(event, "user:") {
			name = event[len("user:"):]
			event = "user"
		} else if strings.HasPrefix(event, "query:") {
			name = event[len("query:"):]
			event = "query"
		}

		result.Event = event
		result.Name = name
		results = append(results, result)
	}

	return results
}

golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/event_handler_mock.go

package agent

import (
	"github.com/hashicorp/serf/serf"
	"sync"
)

// MockEventHandler is an EventHandler implementation that can be used
// for tests.
type MockEventHandler struct {
	Events []serf.Event
	sync.Mutex
}

func (h *MockEventHandler) HandleEvent(e serf.Event) {
	h.Lock()
	defer h.Unlock()
	h.Events = append(h.Events, e)
}

// MockQueryHandler is an EventHandler implementation used for tests;
// it always responds to a query with a given response.
type MockQueryHandler struct {
	Response []byte
	Queries  []*serf.Query
	sync.Mutex
}

func (h *MockQueryHandler) HandleEvent(e serf.Event) {
	query, ok := e.(*serf.Query)
	if !ok {
		return
	}

	h.Lock()
	h.Queries = append(h.Queries, query)
	h.Unlock()

	query.Respond(h.Response)
}

golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/event_handler_test.go

package agent

import (
	"fmt"
	"github.com/hashicorp/serf/serf"
	"io/ioutil"
	"net"
	"os"
	"testing"
)

const eventScript = `#!/bin/sh
RESULT_FILE="%s"
echo $SERF_SELF_NAME $SERF_SELF_ROLE >>${RESULT_FILE}
echo $SERF_TAG_DC >> ${RESULT_FILE}
echo $SERF_TAG_BAD_TAG >> ${RESULT_FILE}
echo $SERF_EVENT $SERF_USER_EVENT "$@" >>${RESULT_FILE}
echo $os_env_var >> ${RESULT_FILE}
while read line; do
printf "${line}\n" >>${RESULT_FILE}
done
`

const userEventScript = `#!/bin/sh
RESULT_FILE="%s"
echo $SERF_SELF_NAME $SERF_SELF_ROLE >>${RESULT_FILE}
echo $SERF_TAG_DC >> ${RESULT_FILE}
echo $SERF_EVENT $SERF_USER_EVENT "$@" >>${RESULT_FILE}
echo $SERF_EVENT $SERF_USER_LTIME "$@" >>${RESULT_FILE}
while read line; do
printf "${line}\n" >>${RESULT_FILE}
done
`

const queryScript = `#!/bin/sh
RESULT_FILE="%s"
echo $SERF_SELF_NAME $SERF_SELF_ROLE >>${RESULT_FILE}
echo $SERF_TAG_DC >> ${RESULT_FILE}
echo $SERF_EVENT $SERF_QUERY_NAME "$@" >>${RESULT_FILE}
echo $SERF_EVENT $SERF_QUERY_LTIME "$@" >>${RESULT_FILE}
while read line; do
printf "${line}\n" >>${RESULT_FILE}
done
`

// testEventScript creates an event script that can be used with the
// agent.
It returns the path to the event script itself and a path to // the file that will contain the events that that script receives. func testEventScript(t *testing.T, script string) (string, string) { scriptFile, err := ioutil.TempFile("", "serf") if err != nil { t.Fatalf("err: %s", err) } defer scriptFile.Close() if err := scriptFile.Chmod(0755); err != nil { t.Fatalf("err: %s", err) } resultFile, err := ioutil.TempFile("", "serf-result") if err != nil { t.Fatalf("err: %s", err) } defer resultFile.Close() _, err = scriptFile.Write([]byte( fmt.Sprintf(script, resultFile.Name()))) if err != nil { t.Fatalf("err: %s", err) } return scriptFile.Name(), resultFile.Name() } func TestScriptEventHandler(t *testing.T) { os.Setenv("os_env_var", "os-env-foo") script, results := testEventScript(t, eventScript) h := &ScriptEventHandler{ SelfFunc: func() serf.Member { return serf.Member{ Name: "ourname", Tags: map[string]string{"role": "ourrole", "dc": "east-aws", "bad-tag": "bad"}, } }, Scripts: []EventScript{ { EventFilter: EventFilter{ Event: "*", }, Script: script, }, }, } event := serf.MemberEvent{ Type: serf.EventMemberJoin, Members: []serf.Member{ { Name: "foo", Addr: net.ParseIP("1.2.3.4"), Tags: map[string]string{"role": "bar", "foo": "bar"}, }, }, } h.HandleEvent(event) result, err := ioutil.ReadFile(results) if err != nil { t.Fatalf("err: %s", err) } expected1 := "ourname ourrole\neast-aws\nbad\nmember-join\nos-env-foo\nfoo\t1.2.3.4\tbar\trole=bar,foo=bar\n" expected2 := "ourname ourrole\neast-aws\nbad\nmember-join\nos-env-foo\nfoo\t1.2.3.4\tbar\tfoo=bar,role=bar\n" if string(result) != expected1 && string(result) != expected2 { t.Fatalf("bad: %#v. 
Expected: %#v or %v", string(result), expected1, expected2) } } func TestScriptUserEventHandler(t *testing.T) { script, results := testEventScript(t, userEventScript) h := &ScriptEventHandler{ SelfFunc: func() serf.Member { return serf.Member{ Name: "ourname", Tags: map[string]string{"role": "ourrole", "dc": "east-aws"}, } }, Scripts: []EventScript{ { EventFilter: EventFilter{ Event: "*", }, Script: script, }, }, } userEvent := serf.UserEvent{ LTime: 1, Name: "baz", Payload: []byte("foobar"), Coalesce: true, } h.HandleEvent(userEvent) result, err := ioutil.ReadFile(results) if err != nil { t.Fatalf("err: %s", err) } expected := "ourname ourrole\neast-aws\nuser baz\nuser 1\nfoobar\n" if string(result) != expected { t.Fatalf("bad: %#v. Expected: %#v", string(result), expected) } } func TestScriptQueryEventHandler(t *testing.T) { script, results := testEventScript(t, queryScript) h := &ScriptEventHandler{ SelfFunc: func() serf.Member { return serf.Member{ Name: "ourname", Tags: map[string]string{"role": "ourrole", "dc": "east-aws"}, } }, Scripts: []EventScript{ { EventFilter: EventFilter{ Event: "*", }, Script: script, }, }, } query := &serf.Query{ LTime: 42, Name: "uptime", Payload: []byte("load average"), } h.HandleEvent(query) result, err := ioutil.ReadFile(results) if err != nil { t.Fatalf("err: %s", err) } expected := "ourname ourrole\neast-aws\nquery uptime\nquery 42\nload average\n" if string(result) != expected { t.Fatalf("bad: %#v. 
Expected: %#v", string(result), expected) } } func TestEventScriptInvoke(t *testing.T) { testCases := []struct { script EventScript event serf.Event invoke bool }{ { EventScript{EventFilter{"*", ""}, "script.sh"}, serf.MemberEvent{}, true, }, { EventScript{EventFilter{"user", ""}, "script.sh"}, serf.MemberEvent{}, false, }, { EventScript{EventFilter{"user", "deploy"}, "script.sh"}, serf.UserEvent{Name: "deploy"}, true, }, { EventScript{EventFilter{"user", "deploy"}, "script.sh"}, serf.UserEvent{Name: "restart"}, false, }, { EventScript{EventFilter{"member-join", ""}, "script.sh"}, serf.MemberEvent{Type: serf.EventMemberJoin}, true, }, { EventScript{EventFilter{"member-join", ""}, "script.sh"}, serf.MemberEvent{Type: serf.EventMemberLeave}, false, }, { EventScript{EventFilter{"member-reap", ""}, "script.sh"}, serf.MemberEvent{Type: serf.EventMemberReap}, true, }, { EventScript{EventFilter{"query", "deploy"}, "script.sh"}, &serf.Query{Name: "deploy"}, true, }, { EventScript{EventFilter{"query", "uptime"}, "script.sh"}, &serf.Query{Name: "deploy"}, false, }, { EventScript{EventFilter{"query", ""}, "script.sh"}, &serf.Query{Name: "deploy"}, true, }, } for _, tc := range testCases { result := tc.script.Invoke(tc.event) if result != tc.invoke { t.Errorf("bad: %#v", tc) } } } func TestEventScriptValid(t *testing.T) { testCases := []struct { Event string Valid bool }{ {"member-join", true}, {"member-leave", true}, {"member-failed", true}, {"member-update", true}, {"member-reap", true}, {"user", true}, {"User", false}, {"member", false}, {"query", true}, {"Query", false}, {"*", true}, } for _, tc := range testCases { script := EventScript{EventFilter: EventFilter{Event: tc.Event}} if script.Valid() != tc.Valid { t.Errorf("bad: %#v", tc) } } } func TestParseEventScript(t *testing.T) { testCases := []struct { v string err bool results []EventScript }{ { "script.sh", false, []EventScript{{EventFilter{"*", ""}, "script.sh"}}, }, { "member-join=script.sh", false, 
[]EventScript{{EventFilter{"member-join", ""}, "script.sh"}}, }, { "foo,bar=script.sh", false, []EventScript{ {EventFilter{"foo", ""}, "script.sh"}, {EventFilter{"bar", ""}, "script.sh"}, }, }, { "user:deploy=script.sh", false, []EventScript{{EventFilter{"user", "deploy"}, "script.sh"}}, }, { "foo,user:blah,bar,query:tubez=script.sh", false, []EventScript{ {EventFilter{"foo", ""}, "script.sh"}, {EventFilter{"user", "blah"}, "script.sh"}, {EventFilter{"bar", ""}, "script.sh"}, {EventFilter{"query", "tubez"}, "script.sh"}, }, }, { "query:load=script.sh", false, []EventScript{{EventFilter{"query", "load"}, "script.sh"}}, }, { "query=script.sh", false, []EventScript{{EventFilter{"query", ""}, "script.sh"}}, }, } for _, tc := range testCases { results := ParseEventScript(tc.v) if results == nil { t.Errorf("result should not be nil") continue } if len(results) != len(tc.results) { t.Errorf("bad: %#v", results) continue } for i, r := range results { expected := tc.results[i] if r.Event != expected.Event { t.Errorf("Events not equal: %s %s", r.Event, expected.Event) } if r.Name != expected.Name { t.Errorf("User events not equal: %s %s", r.Name, expected.Name) } if r.Script != expected.Script { t.Errorf("Scripts not equal: %s %s", r.Script, expected.Script) } } } } func TestParseEventFilter(t *testing.T) { testCases := []struct { v string results []EventFilter }{ { "", []EventFilter{EventFilter{"*", ""}}, }, { "member-join", []EventFilter{EventFilter{"member-join", ""}}, }, { "member-reap", []EventFilter{EventFilter{"member-reap", ""}}, }, { "foo,bar", []EventFilter{ EventFilter{"foo", ""}, EventFilter{"bar", ""}, }, }, { "user:deploy", []EventFilter{EventFilter{"user", "deploy"}}, }, { "foo,user:blah,bar", []EventFilter{ EventFilter{"foo", ""}, EventFilter{"user", "blah"}, EventFilter{"bar", ""}, }, }, { "query:load", []EventFilter{EventFilter{"query", "load"}}, }, } for _, tc := range testCases { results := ParseEventFilter(tc.v) if results == nil { t.Errorf("result 
should not be nil") continue } if len(results) != len(tc.results) { t.Errorf("bad: %#v", results) continue } for i, r := range results { expected := tc.results[i] if r.Event != expected.Event { t.Errorf("Events not equal: %s %s", r.Event, expected.Event) } if r.Name != expected.Name { t.Errorf("User events not equal: %s %s", r.Name, expected.Name) } } } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/flag_slice_value.go000066400000000000000000000006261317277572200316600ustar00rootroot00000000000000package agent import "strings" // AppendSliceValue implements the flag.Value interface and allows multiple // calls to the same variable to append a list. type AppendSliceValue []string func (s *AppendSliceValue) String() string { return strings.Join(*s, ",") } func (s *AppendSliceValue) Set(value string) error { if *s == nil { *s = make([]string, 0, 1) } *s = append(*s, value) return nil } flag_slice_value_test.go000066400000000000000000000011071317277572200326330ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agentpackage agent import ( "flag" "reflect" "testing" ) func TestAppendSliceValue_implements(t *testing.T) { var raw interface{} raw = new(AppendSliceValue) if _, ok := raw.(flag.Value); !ok { t.Fatalf("AppendSliceValue should be a Value") } } func TestAppendSliceValueSet(t *testing.T) { sv := new(AppendSliceValue) err := sv.Set("foo") if err != nil { t.Fatalf("err: %s", err) } err = sv.Set("bar") if err != nil { t.Fatalf("err: %s", err) } expected := []string{"foo", "bar"} if !reflect.DeepEqual([]string(*sv), expected) { t.Fatalf("Bad: %#v", sv) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/gated_writer.go000066400000000000000000000013401317277572200310460ustar00rootroot00000000000000package agent import ( "io" "sync" ) // GatedWriter is an io.Writer implementation that buffers all of its // data into an internal buffer until it is told to let 
data through. type GatedWriter struct { Writer io.Writer buf [][]byte flush bool lock sync.RWMutex } // Flush tells the GatedWriter to flush any buffered data and to stop // buffering. func (w *GatedWriter) Flush() { w.lock.Lock() w.flush = true w.lock.Unlock() for _, p := range w.buf { w.Write(p) } w.buf = nil } func (w *GatedWriter) Write(p []byte) (n int, err error) { w.lock.RLock() defer w.lock.RUnlock() if w.flush { return w.Writer.Write(p) } p2 := make([]byte, len(p)) copy(p2, p) w.buf = append(w.buf, p2) return len(p), nil } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/gated_writer_test.go000066400000000000000000000010361317277572200321070ustar00rootroot00000000000000package agent import ( "bytes" "io" "testing" ) func TestGatedWriter_impl(t *testing.T) { var _ io.Writer = new(GatedWriter) } func TestGatedWriter(t *testing.T) { buf := new(bytes.Buffer) w := &GatedWriter{Writer: buf} w.Write([]byte("foo\n")) w.Write([]byte("bar\n")) if buf.String() != "" { t.Fatalf("bad: %s", buf.String()) } w.Flush() if buf.String() != "foo\nbar\n" { t.Fatalf("bad: %s", buf.String()) } w.Write([]byte("baz\n")) if buf.String() != "foo\nbar\nbaz\n" { t.Fatalf("bad: %s", buf.String()) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/invoke.go000066400000000000000000000127361317277572200276740ustar00rootroot00000000000000package agent import ( "fmt" "github.com/armon/circbuf" "github.com/armon/go-metrics" "github.com/hashicorp/serf/serf" "io" "log" "os" "os/exec" "regexp" "runtime" "strings" "time" ) const ( windows = "windows" // maxBufSize limits how much data we collect from a handler. // This is to prevent Serf's memory from growing to an enormous // amount due to a faulty handler. 
	maxBufSize = 8 * 1024

	// warnSlow is used to warn about a slow handler invocation
	warnSlow = time.Second
)

var sanitizeTagRegexp = regexp.MustCompile(`[^A-Z0-9_]`)

// invokeEventScript will execute the given event script with the given
// event. Depending on the event, the semantics of how data are passed
// are a bit different. For all events, the SERF_EVENT environmental
// variable is the type of the event. For user events, the SERF_USER_EVENT
// environmental variable is also set, containing the name of the user
// event that was fired.
//
// In all events, data is passed in via stdin to facilitate piping. See
// the various stdin functions below for more information.
func invokeEventScript(logger *log.Logger, script string, self serf.Member,
	event serf.Event) error {
	defer metrics.MeasureSince([]string{"agent", "invoke", script}, time.Now())
	output, _ := circbuf.NewBuffer(maxBufSize)

	// Determine the shell invocation based on OS
	var shell, flag string
	if runtime.GOOS == windows {
		shell = "cmd"
		flag = "/C"
	} else {
		shell = "/bin/sh"
		flag = "-c"
	}

	cmd := exec.Command(shell, flag, script)
	cmd.Env = append(os.Environ(),
		"SERF_EVENT="+event.EventType().String(),
		"SERF_SELF_NAME="+self.Name,
		"SERF_SELF_ROLE="+self.Tags["role"],
	)
	cmd.Stderr = output
	cmd.Stdout = output

	// Add all the tags
	for name, val := range self.Tags {
		// http://stackoverflow.com/questions/2821043/allowed-characters-in-linux-environment-variable-names
		// (http://pubs.opengroup.org/onlinepubs/000095399/basedefs/xbd_chap08.html for the long version)
		// says that env var names must be in [A-Z0-9_] and not start with [0-9].
		// We only care about the first part, so convert all chars not in
		// [A-Z0-9_] to _.
		sanitizedName := sanitizeTagRegexp.ReplaceAllString(strings.ToUpper(name), "_")
		tag_env := fmt.Sprintf("SERF_TAG_%s=%s", sanitizedName, val)
		cmd.Env = append(cmd.Env, tag_env)
	}

	stdin, err := cmd.StdinPipe()
	if err != nil {
		return err
	}

	switch e := event.(type) {
	case serf.MemberEvent:
		go memberEventStdin(logger, stdin, &e)
	case serf.UserEvent:
		cmd.Env = append(cmd.Env, "SERF_USER_EVENT="+e.Name)
		cmd.Env = append(cmd.Env, fmt.Sprintf("SERF_USER_LTIME=%d", e.LTime))
		go streamPayload(logger, stdin, e.Payload)
	case *serf.Query:
		cmd.Env = append(cmd.Env, "SERF_QUERY_NAME="+e.Name)
		cmd.Env = append(cmd.Env, fmt.Sprintf("SERF_QUERY_LTIME=%d", e.LTime))
		go streamPayload(logger, stdin, e.Payload)
	default:
		return fmt.Errorf("Unknown event type: %s", event.EventType().String())
	}

	// Start a timer to warn about slow handlers
	slowTimer := time.AfterFunc(warnSlow, func() {
		logger.Printf("[WARN] agent: Script '%s' slow, execution exceeding %v",
			script, warnSlow)
	})

	if err := cmd.Start(); err != nil {
		return err
	}

	// Warn if buffer is overwritten
	if output.TotalWritten() > output.Size() {
		logger.Printf("[WARN] agent: Script '%s' generated %d bytes of output, truncated to %d",
			script, output.TotalWritten(), output.Size())
	}

	err = cmd.Wait()
	slowTimer.Stop()
	logger.Printf("[DEBUG] agent: Event '%s' script output: %s",
		event.EventType().String(), output.String())
	if err != nil {
		return err
	}

	// If this is a query and we have output, respond
	if query, ok := event.(*serf.Query); ok && output.TotalWritten() > 0 {
		if err := query.Respond(output.Bytes()); err != nil {
			logger.Printf("[WARN] agent: Failed to respond to query '%s': %s",
				event.String(), err)
		}
	}

	return nil
}

// eventClean cleans a value to be a parameter in an event line.
func eventClean(v string) string {
	v = strings.Replace(v, "\t", "\\t", -1)
	v = strings.Replace(v, "\n", "\\n", -1)
	return v
}

// Sends data on stdin for a member event.
//
// The format for the data is unix tool friendly, separated by whitespace
// and newlines. The structure of each line for any member event is:
// "NAME ADDRESS ROLE TAGS" where the whitespace is actually tabs.
// The name and role are cleaned so that newlines and tabs are replaced
// with "\n" and "\t" respectively.
func memberEventStdin(logger *log.Logger, stdin io.WriteCloser, e *serf.MemberEvent) {
	defer stdin.Close()
	for _, member := range e.Members {
		// Format the tags as tag1=v1,tag2=v2,...
		var tagPairs []string
		for name, value := range member.Tags {
			tagPairs = append(tagPairs, fmt.Sprintf("%s=%s", name, value))
		}
		tags := strings.Join(tagPairs, ",")

		// Send the entire line
		_, err := stdin.Write([]byte(fmt.Sprintf(
			"%s\t%s\t%s\t%s\n",
			eventClean(member.Name),
			member.Addr.String(),
			eventClean(member.Tags["role"]),
			eventClean(tags))))
		if err != nil {
			return
		}
	}
}

// Sends data on stdin for an event. The stdin simply contains the
// payload (if any). Most shell read implementations need a trailing
// newline, so force one to be there.
func streamPayload(logger *log.Logger, stdin io.WriteCloser, buf []byte) {
	defer stdin.Close()

	// Append a newline to payload if missing
	payload := buf
	if len(payload) > 0 && payload[len(payload)-1] != '\n' {
		payload = append(payload, '\n')
	}

	if _, err := stdin.Write(payload); err != nil {
		logger.Printf("[ERR] Error writing payload: %s", err)
		return
	}
}

golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/ipc.go

package agent

/*
The agent exposes an IPC mechanism that is used for both controlling
Serf as well as providing a fast streaming mechanism for events. This
allows other applications to easily leverage Serf as the event layer.
We additionally make use of the IPC layer to also handle RPC calls from
the CLI to unify the code paths. This results in a split Request/Response
as well as streaming mode of operation.

The system is fairly simple: each client opens a TCP connection to the
agent. The connection is initialized with a handshake which establishes
the protocol version being used. This is to allow for future changes to
the protocol. Once initialized, clients send commands and wait for
responses. Certain commands will cause the client to subscribe to events,
and those will be pushed down the socket as they are received. This
provides a low-latency mechanism for applications to send and receive
events, while also providing a flexible control mechanism for Serf.
*/

import (
	"bufio"
	"fmt"
	"github.com/armon/go-metrics"
	"github.com/hashicorp/go-msgpack/codec"
	"github.com/hashicorp/logutils"
	"github.com/hashicorp/serf/coordinate"
	"github.com/hashicorp/serf/serf"
	"io"
	"log"
	"net"
	"os"
	"regexp"
	"strings"
	"sync"
	"sync/atomic"
	"time"
)

const (
	MinIPCVersion = 1
	MaxIPCVersion = 1
)

const (
	handshakeCommand       = "handshake"
	eventCommand           = "event"
	forceLeaveCommand      = "force-leave"
	joinCommand            = "join"
	membersCommand         = "members"
	membersFilteredCommand = "members-filtered"
	streamCommand          = "stream"
	stopCommand            = "stop"
	monitorCommand         = "monitor"
	leaveCommand           = "leave"
	installKeyCommand      = "install-key"
	useKeyCommand          = "use-key"
	removeKeyCommand       = "remove-key"
	listKeysCommand        = "list-keys"
	tagsCommand            = "tags"
	queryCommand           = "query"
	respondCommand         = "respond"
	authCommand            = "auth"
	statsCommand           = "stats"
	getCoordinateCommand   = "get-coordinate"
)

const (
	unsupportedCommand    = "Unsupported command"
	unsupportedIPCVersion = "Unsupported IPC version"
	duplicateHandshake    = "Handshake already performed"
	handshakeRequired     = "Handshake required"
	monitorExists         = "Monitor already exists"
	invalidFilter         = "Invalid event filter"
	streamExists          = "Stream with given sequence exists"
	invalidQueryID        = "No pending queries matching ID"
	authRequired          = "Authentication required"
	invalidAuthToken      = "Invalid authentication token"
)

const (
	queryRecordAck      = "ack"
	queryRecordResponse = "response"
	queryRecordDone     = "done"
)
// Request header is sent before each request type requestHeader struct { Command string Seq uint64 } // Response header is sent before each response type responseHeader struct { Seq uint64 Error string } type handshakeRequest struct { Version int32 } type authRequest struct { AuthKey string } type coordinateRequest struct { Node string } type coordinateResponse struct { Coord coordinate.Coordinate Ok bool } type eventRequest struct { Name string Payload []byte Coalesce bool } type forceLeaveRequest struct { Node string } type joinRequest struct { Existing []string Replay bool } type joinResponse struct { Num int32 } type membersFilteredRequest struct { Tags map[string]string Status string Name string } type membersResponse struct { Members []Member } type keyRequest struct { Key string } type keyResponse struct { Messages map[string]string Keys map[string]int NumNodes int NumErr int NumResp int } type monitorRequest struct { LogLevel string } type streamRequest struct { Type string } type stopRequest struct { Stop uint64 } type tagsRequest struct { Tags map[string]string DeleteTags []string } type queryRequest struct { FilterNodes []string FilterTags map[string]string RequestAck bool RelayFactor uint8 Timeout time.Duration Name string Payload []byte } type respondRequest struct { ID uint64 Payload []byte } type queryRecord struct { Type string From string Payload []byte } type logRecord struct { Log string } type userEventRecord struct { Event string LTime serf.LamportTime Name string Payload []byte Coalesce bool } type queryEventRecord struct { Event string ID uint64 // ID is opaque to client, used to respond LTime serf.LamportTime Name string Payload []byte } type Member struct { Name string Addr net.IP Port uint16 Tags map[string]string Status string ProtocolMin uint8 ProtocolMax uint8 ProtocolCur uint8 DelegateMin uint8 DelegateMax uint8 DelegateCur uint8 } type memberEventRecord struct { Event string Members []Member } type AgentIPC struct { sync.Mutex agent 
*Agent authKey string clients map[string]*IPCClient listener net.Listener logger *log.Logger logWriter *logWriter stop bool stopCh chan struct{} } type IPCClient struct { queryID uint64 // Used to increment query IDs name string conn net.Conn reader *bufio.Reader writer *bufio.Writer dec *codec.Decoder enc *codec.Encoder writeLock sync.Mutex version int32 // From the handshake, 0 before logStreamer *logStream eventStreams map[uint64]*eventStream pendingQueries map[uint64]*serf.Query queryLock sync.Mutex didAuth bool // Did we get an auth token yet? } // send is used to send an object using the MsgPack encoding. send // is serialized to prevent write overlaps, while properly buffering. func (c *IPCClient) Send(header *responseHeader, obj interface{}) error { c.writeLock.Lock() defer c.writeLock.Unlock() if err := c.enc.Encode(header); err != nil { return err } if obj != nil { if err := c.enc.Encode(obj); err != nil { return err } } if err := c.writer.Flush(); err != nil { return err } return nil } func (c *IPCClient) String() string { return fmt.Sprintf("ipc.client: %v", c.conn.RemoteAddr()) } // nextQueryID safely generates a new query ID func (c *IPCClient) nextQueryID() uint64 { return atomic.AddUint64(&c.queryID, 1) } // RegisterQuery is used to register a pending query that may // get a response. 
The ID of the query is returned func (c *IPCClient) RegisterQuery(q *serf.Query) uint64 { // Generate a unique-per-client ID id := c.nextQueryID() // Ensure the query deadline is in the future timeout := q.Deadline().Sub(time.Now()) if timeout < 0 { return id } // Register the query c.queryLock.Lock() c.pendingQueries[id] = q c.queryLock.Unlock() // Setup a timer to deregister after the timeout time.AfterFunc(timeout, func() { c.queryLock.Lock() delete(c.pendingQueries, id) c.queryLock.Unlock() }) return id } // NewAgentIPC is used to create a new Agent IPC handler func NewAgentIPC(agent *Agent, authKey string, listener net.Listener, logOutput io.Writer, logWriter *logWriter) *AgentIPC { if logOutput == nil { logOutput = os.Stderr } ipc := &AgentIPC{ agent: agent, authKey: authKey, clients: make(map[string]*IPCClient), listener: listener, logger: log.New(logOutput, "", log.LstdFlags), logWriter: logWriter, stopCh: make(chan struct{}), } go ipc.listen() return ipc } // Shutdown is used to shutdown the IPC layer func (i *AgentIPC) Shutdown() { i.Lock() defer i.Unlock() if i.stop { return } i.stop = true close(i.stopCh) i.listener.Close() // Close the existing connections for _, client := range i.clients { client.conn.Close() } } // listen is a long running routine that listens for new clients func (i *AgentIPC) listen() { for { conn, err := i.listener.Accept() if err != nil { if i.stop { return } i.logger.Printf("[ERR] agent.ipc: Failed to accept client: %v", err) continue } i.logger.Printf("[INFO] agent.ipc: Accepted client: %v", conn.RemoteAddr()) metrics.IncrCounter([]string{"agent", "ipc", "accept"}, 1) // Wrap the connection in a client client := &IPCClient{ name: conn.RemoteAddr().String(), conn: conn, reader: bufio.NewReader(conn), writer: bufio.NewWriter(conn), eventStreams: make(map[uint64]*eventStream), pendingQueries: make(map[uint64]*serf.Query), } client.dec = codec.NewDecoder(client.reader, &codec.MsgpackHandle{RawToString: true, WriteExt: true}) 
client.enc = codec.NewEncoder(client.writer, &codec.MsgpackHandle{RawToString: true, WriteExt: true}) // Register the client i.Lock() if !i.stop { i.clients[client.name] = client go i.handleClient(client) } else { conn.Close() } i.Unlock() } } // deregisterClient is called to cleanup after a client disconnects func (i *AgentIPC) deregisterClient(client *IPCClient) { // Close the socket client.conn.Close() // Remove from the clients list i.Lock() delete(i.clients, client.name) i.Unlock() // Remove from the log writer if client.logStreamer != nil { i.logWriter.DeregisterHandler(client.logStreamer) client.logStreamer.Stop() } // Remove from event handlers for _, es := range client.eventStreams { i.agent.DeregisterEventHandler(es) es.Stop() } } // handleClient is a long running routine that handles a single client func (i *AgentIPC) handleClient(client *IPCClient) { defer i.deregisterClient(client) var reqHeader requestHeader for { // Decode the header if err := client.dec.Decode(&reqHeader); err != nil { if !i.stop { // The second part of this if is to block socket // errors from Windows which appear to happen every // time there is an EOF. 
if err != io.EOF && !strings.Contains(strings.ToLower(err.Error()), "wsarecv") { i.logger.Printf("[ERR] agent.ipc: failed to decode request header: %v", err) } } return } // Evaluate the command if err := i.handleRequest(client, &reqHeader); err != nil { i.logger.Printf("[ERR] agent.ipc: Failed to evaluate request: %v", err) return } } } // handleRequest is used to evaluate a single client command func (i *AgentIPC) handleRequest(client *IPCClient, reqHeader *requestHeader) error { // Look for a command field command := reqHeader.Command seq := reqHeader.Seq // Ensure the handshake is performed before other commands if command != handshakeCommand && client.version == 0 { respHeader := responseHeader{Seq: seq, Error: handshakeRequired} client.Send(&respHeader, nil) return fmt.Errorf(handshakeRequired) } metrics.IncrCounter([]string{"agent", "ipc", "command"}, 1) // Ensure the client has authenticated after the handshake if necessary if i.authKey != "" && !client.didAuth && command != authCommand && command != handshakeCommand { i.logger.Printf("[WARN] agent.ipc: Client sending commands before auth") respHeader := responseHeader{Seq: seq, Error: authRequired} client.Send(&respHeader, nil) return nil } // Dispatch command specific handlers switch command { case handshakeCommand: return i.handleHandshake(client, seq) case authCommand: return i.handleAuth(client, seq) case eventCommand: return i.handleEvent(client, seq) case membersCommand, membersFilteredCommand: return i.handleMembers(client, command, seq) case streamCommand: return i.handleStream(client, seq) case monitorCommand: return i.handleMonitor(client, seq) case stopCommand: return i.handleStop(client, seq) case forceLeaveCommand: return i.handleForceLeave(client, seq) case joinCommand: return i.handleJoin(client, seq) case leaveCommand: return i.handleLeave(client, seq) case installKeyCommand: return i.handleInstallKey(client, seq) case useKeyCommand: return i.handleUseKey(client, seq) case removeKeyCommand: 
return i.handleRemoveKey(client, seq) case listKeysCommand: return i.handleListKeys(client, seq) case tagsCommand: return i.handleTags(client, seq) case queryCommand: return i.handleQuery(client, seq) case respondCommand: return i.handleRespond(client, seq) case statsCommand: return i.handleStats(client, seq) case getCoordinateCommand: return i.handleGetCoordinate(client, seq) default: respHeader := responseHeader{Seq: seq, Error: unsupportedCommand} client.Send(&respHeader, nil) return fmt.Errorf("command '%s' not recognized", command) } } func (i *AgentIPC) handleHandshake(client *IPCClient, seq uint64) error { var req handshakeRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } resp := responseHeader{ Seq: seq, Error: "", } // Check the version if req.Version < MinIPCVersion || req.Version > MaxIPCVersion { resp.Error = unsupportedIPCVersion } else if client.version != 0 { resp.Error = duplicateHandshake } else { client.version = req.Version } return client.Send(&resp, nil) } func (i *AgentIPC) handleAuth(client *IPCClient, seq uint64) error { var req authRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } resp := responseHeader{ Seq: seq, Error: "", } // Check the token matches if req.AuthKey == i.authKey { client.didAuth = true } else { resp.Error = invalidAuthToken } return client.Send(&resp, nil) } func (i *AgentIPC) handleEvent(client *IPCClient, seq uint64) error { var req eventRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Attempt the send err := i.agent.UserEvent(req.Name, req.Payload, req.Coalesce) // Respond resp := responseHeader{ Seq: seq, Error: errToString(err), } return client.Send(&resp, nil) } func (i *AgentIPC) handleForceLeave(client *IPCClient, seq uint64) error { var req forceLeaveRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // 
Attempt leave err := i.agent.ForceLeave(req.Node) // Respond resp := responseHeader{ Seq: seq, Error: errToString(err), } return client.Send(&resp, nil) } func (i *AgentIPC) handleJoin(client *IPCClient, seq uint64) error { var req joinRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Attempt the join num, err := i.agent.Join(req.Existing, req.Replay) // Respond header := responseHeader{ Seq: seq, Error: errToString(err), } resp := joinResponse{ Num: int32(num), } return client.Send(&header, &resp) } func (i *AgentIPC) handleMembers(client *IPCClient, command string, seq uint64) error { serf := i.agent.Serf() raw := serf.Members() members := make([]Member, 0, len(raw)) if command == membersFilteredCommand { var req membersFilteredRequest err := client.dec.Decode(&req) if err != nil { return fmt.Errorf("decode failed: %v", err) } raw, err = i.filterMembers(raw, req.Tags, req.Status, req.Name) if err != nil { return err } } for _, m := range raw { sm := Member{ Name: m.Name, Addr: m.Addr, Port: m.Port, Tags: m.Tags, Status: m.Status.String(), ProtocolMin: m.ProtocolMin, ProtocolMax: m.ProtocolMax, ProtocolCur: m.ProtocolCur, DelegateMin: m.DelegateMin, DelegateMax: m.DelegateMax, DelegateCur: m.DelegateCur, } members = append(members, sm) } header := responseHeader{ Seq: seq, Error: "", } resp := membersResponse{ Members: members, } return client.Send(&header, &resp) } func (i *AgentIPC) filterMembers(members []serf.Member, tags map[string]string, status string, name string) ([]serf.Member, error) { result := make([]serf.Member, 0, len(members)) // Pre-compile all the regular expressions tagsRe := make(map[string]*regexp.Regexp) for tag, expr := range tags { re, err := regexp.Compile(fmt.Sprintf("^%s$", expr)) if err != nil { return nil, fmt.Errorf("Failed to compile regex: %v", err) } tagsRe[tag] = re } statusRe, err := regexp.Compile(fmt.Sprintf("^%s$", status)) if err != nil { return nil, fmt.Errorf("Failed to 
compile regex: %v", err) } nameRe, err := regexp.Compile(fmt.Sprintf("^%s$", name)) if err != nil { return nil, fmt.Errorf("Failed to compile regex: %v", err) } OUTER: for _, m := range members { // Check if tags were passed, and if they match for tag := range tags { if !tagsRe[tag].MatchString(m.Tags[tag]) { continue OUTER } } // Check if status matches if status != "" && !statusRe.MatchString(m.Status.String()) { continue } // Check if node name matches if name != "" && !nameRe.MatchString(m.Name) { continue } // Made it past the filters! result = append(result, m) } return result, nil } func (i *AgentIPC) handleInstallKey(client *IPCClient, seq uint64) error { var req keyRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } queryResp, err := i.agent.InstallKey(req.Key) header := responseHeader{ Seq: seq, Error: errToString(err), } resp := keyResponse{ Messages: queryResp.Messages, NumNodes: queryResp.NumNodes, NumErr: queryResp.NumErr, NumResp: queryResp.NumResp, } return client.Send(&header, &resp) } func (i *AgentIPC) handleUseKey(client *IPCClient, seq uint64) error { var req keyRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } queryResp, err := i.agent.UseKey(req.Key) header := responseHeader{ Seq: seq, Error: errToString(err), } resp := keyResponse{ Messages: queryResp.Messages, NumNodes: queryResp.NumNodes, NumErr: queryResp.NumErr, NumResp: queryResp.NumResp, } return client.Send(&header, &resp) } func (i *AgentIPC) handleRemoveKey(client *IPCClient, seq uint64) error { var req keyRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } queryResp, err := i.agent.RemoveKey(req.Key) header := responseHeader{ Seq: seq, Error: errToString(err), } resp := keyResponse{ Messages: queryResp.Messages, NumNodes: queryResp.NumNodes, NumErr: queryResp.NumErr, NumResp: queryResp.NumResp, } return client.Send(&header, &resp) } 
func (i *AgentIPC) handleListKeys(client *IPCClient, seq uint64) error { queryResp, err := i.agent.ListKeys() header := responseHeader{ Seq: seq, Error: errToString(err), } resp := keyResponse{ Messages: queryResp.Messages, Keys: queryResp.Keys, NumNodes: queryResp.NumNodes, NumErr: queryResp.NumErr, NumResp: queryResp.NumResp, } return client.Send(&header, &resp) } func (i *AgentIPC) handleStream(client *IPCClient, seq uint64) error { var es *eventStream var req streamRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } resp := responseHeader{ Seq: seq, Error: "", } // Create the event filters filters := ParseEventFilter(req.Type) for _, f := range filters { if !f.Valid() { resp.Error = invalidFilter goto SEND } } // Check if there is an existing stream if _, ok := client.eventStreams[seq]; ok { resp.Error = streamExists goto SEND } // Create an event streamer es = newEventStream(client, filters, seq, i.logger) client.eventStreams[seq] = es // Register with the agent. Defer so that we can respond before // registration, avoids any possible race condition defer i.agent.RegisterEventHandler(es) SEND: return client.Send(&resp, nil) } func (i *AgentIPC) handleMonitor(client *IPCClient, seq uint64) error { var req monitorRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } resp := responseHeader{ Seq: seq, Error: "", } // Upper case the log level req.LogLevel = strings.ToUpper(req.LogLevel) // Create a level filter filter := LevelFilter() filter.MinLevel = logutils.LogLevel(req.LogLevel) if !ValidateLevelFilter(filter.MinLevel, filter) { resp.Error = fmt.Sprintf("Unknown log level: %s", filter.MinLevel) goto SEND } // Check if there is an existing monitor if client.logStreamer != nil { resp.Error = monitorExists goto SEND } // Create a log streamer client.logStreamer = newLogStream(client, filter, seq, i.logger) // Register with the log writer. 
Defer so that we can respond before // registration, avoids any possible race condition defer i.logWriter.RegisterHandler(client.logStreamer) SEND: return client.Send(&resp, nil) } func (i *AgentIPC) handleStop(client *IPCClient, seq uint64) error { var req stopRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Remove a log monitor if any if client.logStreamer != nil && client.logStreamer.seq == req.Stop { i.logWriter.DeregisterHandler(client.logStreamer) client.logStreamer.Stop() client.logStreamer = nil } // Remove an event stream if any if es, ok := client.eventStreams[req.Stop]; ok { i.agent.DeregisterEventHandler(es) es.Stop() delete(client.eventStreams, req.Stop) } // Always succeed resp := responseHeader{Seq: seq, Error: ""} return client.Send(&resp, nil) } func (i *AgentIPC) handleLeave(client *IPCClient, seq uint64) error { i.logger.Printf("[INFO] agent.ipc: Graceful leave triggered") // Do the leave err := i.agent.Leave() if err != nil { i.logger.Printf("[ERR] agent.ipc: leave failed: %v", err) } resp := responseHeader{Seq: seq, Error: errToString(err)} // Send and wait err = client.Send(&resp, nil) // Trigger a shutdown! 
if err := i.agent.Shutdown(); err != nil { i.logger.Printf("[ERR] agent.ipc: shutdown failed: %v", err) } return err } func (i *AgentIPC) handleTags(client *IPCClient, seq uint64) error { var req tagsRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } tags := make(map[string]string) for key, val := range i.agent.SerfConfig().Tags { var delTag bool for _, delkey := range req.DeleteTags { delTag = (delTag || delkey == key) } if !delTag { tags[key] = val } } for key, val := range req.Tags { tags[key] = val } err := i.agent.SetTags(tags) resp := responseHeader{Seq: seq, Error: errToString(err)} return client.Send(&resp, nil) } func (i *AgentIPC) handleQuery(client *IPCClient, seq uint64) error { var req queryRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Setup the query params := serf.QueryParam{ FilterNodes: req.FilterNodes, FilterTags: req.FilterTags, RequestAck: req.RequestAck, RelayFactor: req.RelayFactor, Timeout: req.Timeout, } // Start the query queryResp, err := i.agent.Query(req.Name, req.Payload, &params) // Stream the query responses if err == nil { qs := newQueryResponseStream(client, seq, i.logger) defer func() { go qs.Stream(queryResp) }() } // Respond resp := responseHeader{ Seq: seq, Error: errToString(err), } return client.Send(&resp, nil) } func (i *AgentIPC) handleRespond(client *IPCClient, seq uint64) error { var req respondRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Lookup the query client.queryLock.Lock() query, ok := client.pendingQueries[req.ID] client.queryLock.Unlock() // Respond if we have a pending query var err error if ok { err = query.Respond(req.Payload) } else { err = fmt.Errorf(invalidQueryID) } // Respond resp := responseHeader{ Seq: seq, Error: errToString(err), } return client.Send(&resp, nil) } // handleStats is used to get various statistics func (i *AgentIPC) 
handleStats(client *IPCClient, seq uint64) error { header := responseHeader{ Seq: seq, Error: "", } resp := i.agent.Stats() return client.Send(&header, resp) } // handleGetCoordinate is used to get the cached coordinate for a node. func (i *AgentIPC) handleGetCoordinate(client *IPCClient, seq uint64) error { var req coordinateRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Fetch the coordinate. var result coordinate.Coordinate coord, ok := i.agent.Serf().GetCachedCoordinate(req.Node) if ok { result = *coord } // Respond header := responseHeader{ Seq: seq, Error: errToString(nil), } resp := coordinateResponse{ Coord: result, Ok: ok, } return client.Send(&header, &resp) } // Used to convert an error to a string representation func errToString(err error) string { if err == nil { return "" } return err.Error() } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/ipc_event_stream.go000066400000000000000000000057071317277572200317300ustar00rootroot00000000000000package agent import ( "fmt" "github.com/hashicorp/serf/serf" "log" ) type streamClient interface { Send(*responseHeader, interface{}) error RegisterQuery(*serf.Query) uint64 } // eventStream is used to stream events to a client over IPC type eventStream struct { client streamClient eventCh chan serf.Event filters []EventFilter logger *log.Logger seq uint64 } func newEventStream(client streamClient, filters []EventFilter, seq uint64, logger *log.Logger) *eventStream { es := &eventStream{ client: client, eventCh: make(chan serf.Event, 512), filters: filters, logger: logger, seq: seq, } go es.stream() return es } func (es *eventStream) HandleEvent(e serf.Event) { // Check the event for _, f := range es.filters { if f.Invoke(e) { goto HANDLE } } return // Do a non-blocking send HANDLE: select { case es.eventCh <- e: default: es.logger.Printf("[WARN] agent.ipc: Dropping event to %v", es.client) } } func (es *eventStream) Stop() { 
close(es.eventCh) } func (es *eventStream) stream() { var err error for event := range es.eventCh { switch e := event.(type) { case serf.MemberEvent: err = es.sendMemberEvent(e) case serf.UserEvent: err = es.sendUserEvent(e) case *serf.Query: err = es.sendQuery(e) default: err = fmt.Errorf("Unknown event type: %s", event.EventType().String()) } if err != nil { es.logger.Printf("[ERR] agent.ipc: Failed to stream event to %v: %v", es.client, err) return } } } // sendMemberEvent is used to send a single member event func (es *eventStream) sendMemberEvent(me serf.MemberEvent) error { members := make([]Member, 0, len(me.Members)) for _, m := range me.Members { sm := Member{ Name: m.Name, Addr: m.Addr, Port: m.Port, Tags: m.Tags, Status: m.Status.String(), ProtocolMin: m.ProtocolMin, ProtocolMax: m.ProtocolMax, ProtocolCur: m.ProtocolCur, DelegateMin: m.DelegateMin, DelegateMax: m.DelegateMax, DelegateCur: m.DelegateCur, } members = append(members, sm) } header := responseHeader{ Seq: es.seq, Error: "", } rec := memberEventRecord{ Event: me.String(), Members: members, } return es.client.Send(&header, &rec) } // sendUserEvent is used to send a single user event func (es *eventStream) sendUserEvent(ue serf.UserEvent) error { header := responseHeader{ Seq: es.seq, Error: "", } rec := userEventRecord{ Event: ue.EventType().String(), LTime: ue.LTime, Name: ue.Name, Payload: ue.Payload, Coalesce: ue.Coalesce, } return es.client.Send(&header, &rec) } // sendQuery is used to send a single query event func (es *eventStream) sendQuery(q *serf.Query) error { id := es.client.RegisterQuery(q) header := responseHeader{ Seq: es.seq, Error: "", } rec := queryEventRecord{ Event: q.EventType().String(), ID: id, LTime: q.LTime, Name: q.Name, Payload: q.Payload, } return es.client.Send(&header, &rec) } ipc_event_stream_test.go000066400000000000000000000063221317277572200327020ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agentpackage 
agent import ( "bytes" "github.com/hashicorp/serf/serf" "log" "net" "os" "testing" "time" ) type MockStreamClient struct { headers []*responseHeader objs []interface{} err error } func (m *MockStreamClient) Send(h *responseHeader, o interface{}) error { m.headers = append(m.headers, h) m.objs = append(m.objs, o) return m.err } func (m *MockStreamClient) RegisterQuery(q *serf.Query) uint64 { return 42 } func TestIPCEventStream(t *testing.T) { sc := &MockStreamClient{} filters := ParseEventFilter("user:foobar,member-join,query:deploy") es := newEventStream(sc, filters, 42, log.New(os.Stderr, "", log.LstdFlags)) defer es.Stop() es.HandleEvent(serf.UserEvent{ LTime: 123, Name: "foobar", Payload: []byte("test"), Coalesce: true, }) es.HandleEvent(serf.UserEvent{ LTime: 124, Name: "ignore", Payload: []byte("test"), Coalesce: true, }) es.HandleEvent(serf.MemberEvent{ Type: serf.EventMemberJoin, Members: []serf.Member{ serf.Member{ Name: "TestNode", Addr: net.IP([]byte{127, 0, 0, 1}), Port: 12345, Tags: map[string]string{"role": "node"}, Status: serf.StatusAlive, ProtocolMin: 0, ProtocolMax: 0, ProtocolCur: 0, DelegateMin: 0, DelegateMax: 0, DelegateCur: 0, }, }, }) es.HandleEvent(&serf.Query{ LTime: 125, Name: "deploy", Payload: []byte("test"), }) time.Sleep(5 * time.Millisecond) if len(sc.headers) != 3 { t.Fatalf("expected 3 messages!") } for _, h := range sc.headers { if h.Seq != 42 { t.Fatalf("bad seq") } if h.Error != "" { t.Fatalf("bad err") } } obj1 := sc.objs[0].(*userEventRecord) if obj1.Event != "user" { t.Fatalf("bad event: %#v", obj1) } if obj1.LTime != 123 { t.Fatalf("bad event: %#v", obj1) } if obj1.Name != "foobar" { t.Fatalf("bad event: %#v", obj1) } if bytes.Compare(obj1.Payload, []byte("test")) != 0 { t.Fatalf("bad event: %#v", obj1) } if !obj1.Coalesce { t.Fatalf("bad event: %#v", obj1) } obj2 := sc.objs[1].(*memberEventRecord) if obj2.Event != "member-join" { t.Fatalf("bad event: %#v", obj2) } mem1 := obj2.Members[0] if mem1.Name != "TestNode" { 
t.Fatalf("bad member: %#v", mem1) } if bytes.Compare(mem1.Addr, []byte{127, 0, 0, 1}) != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.Port != 12345 { t.Fatalf("bad member: %#v", mem1) } if mem1.Status != "alive" { t.Fatalf("bad member: %#v", mem1) } if mem1.ProtocolMin != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.ProtocolMax != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.ProtocolCur != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.DelegateMin != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.DelegateMax != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.DelegateCur != 0 { t.Fatalf("bad member: %#v", mem1) } obj3 := sc.objs[2].(*queryEventRecord) if obj3.Event != "query" { t.Fatalf("bad query: %#v", obj3) } if obj3.ID != 42 { t.Fatalf("bad query: %#v", obj3) } if obj3.LTime != 125 { t.Fatalf("bad query: %#v", obj3) } if obj3.Name != "deploy" { t.Fatalf("bad query: %#v", obj3) } if bytes.Compare(obj3.Payload, []byte("test")) != 0 { t.Fatalf("bad query: %#v", obj3) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/ipc_log_stream.go000066400000000000000000000025551317277572200313660ustar00rootroot00000000000000package agent import ( "github.com/hashicorp/logutils" "log" ) // logStream is used to stream logs to a client over IPC type logStream struct { client streamClient filter *logutils.LevelFilter logCh chan string logger *log.Logger seq uint64 } func newLogStream(client streamClient, filter *logutils.LevelFilter, seq uint64, logger *log.Logger) *logStream { ls := &logStream{ client: client, filter: filter, logCh: make(chan string, 512), logger: logger, seq: seq, } go ls.stream() return ls } func (ls *logStream) HandleLog(l string) { // Check the log level if !ls.filter.Check([]byte(l)) { return } // Do a non-blocking send select { case ls.logCh <- l: default: // We can't log synchronously, since we are already being invoked // from the logWriter, and a log will need to invoke Write() which // already holds the lock. 
We must therefore do the log async, so // as to not deadlock go ls.logger.Printf("[WARN] agent.ipc: Dropping logs to %v", ls.client) } } func (ls *logStream) Stop() { close(ls.logCh) } func (ls *logStream) stream() { header := responseHeader{Seq: ls.seq, Error: ""} rec := logRecord{Log: ""} for line := range ls.logCh { rec.Log = line if err := ls.client.Send(&header, &rec); err != nil { ls.logger.Printf("[ERR] agent.ipc: Failed to stream log to %v: %v", ls.client, err) return } } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/ipc_log_stream_test.go000066400000000000000000000013661317277572200324240ustar00rootroot00000000000000package agent import ( "github.com/hashicorp/logutils" "log" "os" "testing" "time" ) func TestIPCLogStream(t *testing.T) { sc := &MockStreamClient{} filter := LevelFilter() filter.MinLevel = logutils.LogLevel("INFO") ls := newLogStream(sc, filter, 42, log.New(os.Stderr, "", log.LstdFlags)) defer ls.Stop() log := "[DEBUG] this is a test log" log2 := "[INFO] This should pass" ls.HandleLog(log) ls.HandleLog(log2) time.Sleep(5 * time.Millisecond) if len(sc.headers) != 1 { t.Fatalf("expected 1 message!") } for _, h := range sc.headers { if h.Seq != 42 { t.Fatalf("bad seq") } if h.Error != "" { t.Fatalf("bad err") } } obj1 := sc.objs[0].(*logRecord) if obj1.Log != log2 { t.Fatalf("bad event %#v", obj1) } } ipc_query_response_stream.go000066400000000000000000000041451317277572200336060ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agentpackage agent import ( "github.com/hashicorp/serf/serf" "log" "time" ) // queryResponseStream is used to stream the query results back to a client type queryResponseStream struct { client streamClient logger *log.Logger seq uint64 } func newQueryResponseStream(client streamClient, seq uint64, logger *log.Logger) *queryResponseStream { qs := &queryResponseStream{ client: client, logger: logger, seq: seq, } return qs } // Stream is a 
long running routine used to stream the results of a query back to a client func (qs *queryResponseStream) Stream(resp *serf.QueryResponse) { // Setup a timer for the query ending remaining := resp.Deadline().Sub(time.Now()) done := time.After(remaining) ackCh := resp.AckCh() respCh := resp.ResponseCh() for { select { case a := <-ackCh: if err := qs.sendAck(a); err != nil { qs.logger.Printf("[ERR] agent.ipc: Failed to stream ack to %v: %v", qs.client, err) return } case r := <-respCh: if err := qs.sendResponse(r.From, r.Payload); err != nil { qs.logger.Printf("[ERR] agent.ipc: Failed to stream response to %v: %v", qs.client, err) return } case <-done: if err := qs.sendDone(); err != nil { qs.logger.Printf("[ERR] agent.ipc: Failed to stream query end to %v: %v", qs.client, err) } return } } } // sendAck is used to send a single ack func (qs *queryResponseStream) sendAck(from string) error { header := responseHeader{ Seq: qs.seq, Error: "", } rec := queryRecord{ Type: queryRecordAck, From: from, } return qs.client.Send(&header, &rec) } // sendResponse is used to send a single response func (qs *queryResponseStream) sendResponse(from string, payload []byte) error { header := responseHeader{ Seq: qs.seq, Error: "", } rec := queryRecord{ Type: queryRecordResponse, From: from, Payload: payload, } return qs.client.Send(&header, &rec) } // sendDone is used to signal the end func (qs *queryResponseStream) sendDone() error { header := responseHeader{ Seq: qs.seq, Error: "", } rec := queryRecord{ Type: queryRecordDone, } return qs.client.Send(&header, &rec) } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/log_levels.go000066400000000000000000000012031317277572200305170ustar00rootroot00000000000000package agent import ( "github.com/hashicorp/logutils" "io/ioutil" ) // LevelFilter returns a LevelFilter that is configured with the log // levels that we use. 
func LevelFilter() *logutils.LevelFilter { return &logutils.LevelFilter{ Levels: []logutils.LogLevel{"TRACE", "DEBUG", "INFO", "WARN", "ERR"}, MinLevel: "INFO", Writer: ioutil.Discard, } } // ValidateLevelFilter verifies that the log levels within the filter // are valid. func ValidateLevelFilter(minLevel logutils.LogLevel, filter *logutils.LevelFilter) bool { for _, level := range filter.Levels { if level == minLevel { return true } } return false } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/log_writer.go000066400000000000000000000034411317277572200305470ustar00rootroot00000000000000package agent import ( "sync" ) // LogHandler interface is used for clients that want to subscribe // to logs, for example to stream them over an IPC mechanism type LogHandler interface { HandleLog(string) } // logWriter implements io.Writer so it can be used as a log sink. // It maintains a circular buffer of logs, and a set of handlers to // which it can stream the logs. 
type logWriter struct { sync.Mutex logs []string index int handlers map[LogHandler]struct{} } // NewLogWriter creates a logWriter with the given buffer capacity func NewLogWriter(buf int) *logWriter { return &logWriter{ logs: make([]string, buf), index: 0, handlers: make(map[LogHandler]struct{}), } } // RegisterHandler adds a log handler to receive logs, and sends // the last buffered logs to the handler func (l *logWriter) RegisterHandler(lh LogHandler) { l.Lock() defer l.Unlock() // Do nothing if already registered if _, ok := l.handlers[lh]; ok { return } // Register l.handlers[lh] = struct{}{} // Send the old logs if l.logs[l.index] != "" { for i := l.index; i < len(l.logs); i++ { lh.HandleLog(l.logs[i]) } } for i := 0; i < l.index; i++ { lh.HandleLog(l.logs[i]) } } // DeregisterHandler removes a LogHandler and prevents more invocations func (l *logWriter) DeregisterHandler(lh LogHandler) { l.Lock() defer l.Unlock() delete(l.handlers, lh) } // Write is used to accumulate new logs func (l *logWriter) Write(p []byte) (n int, err error) { l.Lock() defer l.Unlock() // Strip off newlines at the end if there are any since we store // individual log lines in the agent. n = len(p) if p[n-1] == '\n' { p = p[:n-1] } l.logs[l.index] = string(p) l.index = (l.index + 1) % len(l.logs) for lh, _ := range l.handlers { lh.HandleLog(string(p)) } return } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/agent/log_writer_test.go000066400000000000000000000014241317277572200316050ustar00rootroot00000000000000package agent import ( "testing" ) type MockLogHandler struct { logs []string } func (m *MockLogHandler) HandleLog(l string) { m.logs = append(m.logs, l) } func TestLogWriter(t *testing.T) { h := &MockLogHandler{} w := NewLogWriter(4) // Write some logs w.Write([]byte("one")) // Gets dropped! w.Write([]byte("two")) w.Write([]byte("three")) w.Write([]byte("four")) w.Write([]byte("five")) // Register a handler, sends old! 
	w.RegisterHandler(h)
	w.Write([]byte("six"))
	w.Write([]byte("seven"))

	// Deregister
	w.DeregisterHandler(h)
	w.Write([]byte("eight"))
	w.Write([]byte("nine"))

	out := []string{"two", "three", "four", "five", "six", "seven"}
	for idx := range out {
		if out[idx] != h.logs[idx] {
			t.Fatalf("mismatch %v", h.logs)
		}
	}
}

// ---- cmd/serf/command/agent/mdns.go ----

package agent

import (
	"fmt"
	"io"
	"log"
	"net"
	"time"

	"github.com/hashicorp/mdns"
)

const (
	mdnsPollInterval  = 60 * time.Second
	mdnsQuietInterval = 100 * time.Millisecond
)

// AgentMDNS is used to advertise ourself using mDNS and to
// attempt to join peers periodically using mDNS queries.
type AgentMDNS struct {
	agent    *Agent
	discover string
	logger   *log.Logger
	seen     map[string]struct{}
	server   *mdns.Server
	replay   bool
	iface    *net.Interface
}

// NewAgentMDNS is used to create a new AgentMDNS
func NewAgentMDNS(agent *Agent, logOutput io.Writer, replay bool,
	node, discover string, iface *net.Interface, bind net.IP, port int) (*AgentMDNS, error) {
	// Create the service
	service, err := mdns.NewMDNSService(
		node,
		mdnsName(discover),
		"",
		"",
		port,
		[]net.IP{bind},
		[]string{fmt.Sprintf("Serf '%s' cluster", discover)})
	if err != nil {
		return nil, err
	}

	// Configure mdns server
	conf := &mdns.Config{
		Zone:  service,
		Iface: iface,
	}

	// Create the server
	server, err := mdns.NewServer(conf)
	if err != nil {
		return nil, err
	}

	// Initialize the AgentMDNS
	m := &AgentMDNS{
		agent:    agent,
		discover: discover,
		logger:   log.New(logOutput, "", log.LstdFlags),
		seen:     make(map[string]struct{}),
		server:   server,
		replay:   replay,
		iface:    iface,
	}

	// Start the background workers
	go m.run()
	return m, nil
}

// run is a long running goroutine that scans for new hosts periodically
func (m *AgentMDNS) run() {
	hosts := make(chan *mdns.ServiceEntry, 32)
	poll := time.After(0)
	var quiet <-chan time.Time
	var join []string

	for {
		select {
		case h := <-hosts:
			// Format the host address
			addr := net.TCPAddr{IP: h.Addr, Port: h.Port}
			addrS := addr.String()

			// Skip if we've handled this host already
			if _, ok := m.seen[addrS]; ok {
				continue
			}

			// Queue for handling
			join = append(join, addrS)
			quiet = time.After(mdnsQuietInterval)

		case <-quiet:
			// Attempt the join
			n, err := m.agent.Join(join, m.replay)
			if err != nil {
				m.logger.Printf("[ERR] agent.mdns: Failed to join: %v", err)
			}
			if n > 0 {
				m.logger.Printf("[INFO] agent.mdns: Joined %d hosts", n)
			}

			// Mark all as seen
			for _, n := range join {
				m.seen[n] = struct{}{}
			}
			join = nil

		case <-poll:
			poll = time.After(mdnsPollInterval)
			go m.poll(hosts)
		}
	}
}

// poll is invoked periodically to check for new hosts
func (m *AgentMDNS) poll(hosts chan *mdns.ServiceEntry) {
	params := mdns.QueryParam{
		Service:   mdnsName(m.discover),
		Interface: m.iface,
		Entries:   hosts,
	}
	if err := mdns.Query(&params); err != nil {
		m.logger.Printf("[ERR] agent.mdns: Failed to poll for new hosts: %v", err)
	}
}

// mdnsName returns the service name to register and to lookup
func mdnsName(discover string) string {
	return fmt.Sprintf("_serf_%s._tcp", discover)
}

// ---- cmd/serf/command/agent/rpc_client_test.go ----

package agent

import (
	"bytes"
	"encoding/base64"
	"io"
	"net"
	"os"
	"strings"
	"testing"
	"time"

	"github.com/hashicorp/serf/client"
	"github.com/hashicorp/serf/serf"
	"github.com/hashicorp/serf/testutil"
)

func testRPCClient(t *testing.T) (*client.RPCClient, *Agent, *AgentIPC) {
	agentConf := DefaultConfig()
	serfConf := serf.DefaultConfig()

	return testRPCClientWithConfig(t, agentConf, serfConf)
}

// testRPCClientWithConfig returns an RPCClient connected to an RPC server
// that serves only this connection.
func testRPCClientWithConfig(t *testing.T, agentConf *Config, serfConf *serf.Config) (*client.RPCClient, *Agent, *AgentIPC) { l, err := net.Listen("tcp", "127.0.0.1:0") if err != nil { t.Fatalf("err: %s", err) } lw := NewLogWriter(512) mult := io.MultiWriter(os.Stderr, lw) agent := testAgentWithConfig(agentConf, serfConf, mult) ipc := NewAgentIPC(agent, "", l, mult, lw) rpcClient, err := client.NewRPCClient(l.Addr().String()) if err != nil { t.Fatalf("err: %s", err) } return rpcClient, agent, ipc } func findMember(t *testing.T, members []serf.Member, name string) serf.Member { for _, m := range members { if m.Name == name { return m } } t.Fatalf("%s not found", name) return serf.Member{} } func TestRPCClientForceLeave(t *testing.T) { client, a1, ipc := testRPCClient(t) a2 := testAgent(nil) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() s2Addr := a2.conf.MemberlistConfig.BindAddr if _, err := a1.Join([]string{s2Addr}, false); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() if err := a2.Shutdown(); err != nil { t.Fatalf("err: %s", err) } start := time.Now() WAIT: time.Sleep(a1.conf.MemberlistConfig.ProbeInterval * 3) m := a1.Serf().Members() if len(m) != 2 { t.Fatalf("should have 2 members: %#v", a1.Serf().Members()) } if findMember(t, m, a2.conf.NodeName).Status != serf.StatusFailed && time.Now().Sub(start) < 3*time.Second { goto WAIT } if err := client.ForceLeave(a2.conf.NodeName); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() m = a1.Serf().Members() if len(m) != 2 { t.Fatalf("should have 2 members: %#v", a1.Serf().Members()) } if findMember(t, m, a2.conf.NodeName).Status != serf.StatusLeft { t.Fatalf("should be left: %#v", m[1]) } } func TestRPCClientJoin(t *testing.T) { client, a1, ipc := testRPCClient(t) a2 := testAgent(nil) defer ipc.Shutdown() defer client.Close() 
defer a1.Shutdown() defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() n, err := client.Join([]string{a2.conf.MemberlistConfig.BindAddr}, false) if err != nil { t.Fatalf("err: %s", err) } if n != 1 { t.Fatalf("n != 1: %d", n) } } func TestRPCClientMembers(t *testing.T) { client, a1, ipc := testRPCClient(t) a2 := testAgent(nil) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() mem, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 1 { t.Fatalf("bad: %#v", mem) } _, err = client.Join([]string{a2.conf.MemberlistConfig.BindAddr}, false) if err != nil { t.Fatalf("err: %s", err) } testutil.Yield() mem, err = client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 2 { t.Fatalf("bad: %#v", mem) } } func TestRPCClientMembersFiltered(t *testing.T) { client, a1, ipc := testRPCClient(t) a2 := testAgent(nil) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() _, err := client.Join([]string{a2.conf.MemberlistConfig.BindAddr}, false) if err != nil { t.Fatalf("err: %s", err) } err = client.UpdateTags(map[string]string{ "tag1": "val1", "tag2": "val2", }, []string{}) if err != nil { t.Fatalf("bad: %s", err) } testutil.Yield() // Make sure that filters work on member names mem, err := client.MembersFiltered(map[string]string{}, "", ".*") if err != nil { t.Fatalf("bad: %s", err) } if len(mem) == 0 { t.Fatalf("should have matched more than 0 members") } mem, err = client.MembersFiltered(map[string]string{}, "", "bad") if err != nil { t.Fatalf("bad: %s", err) } if len(mem) != 0 { 
t.Fatalf("should have matched 0 members: %#v", mem)
	}

	// Make sure that filters work on member tags
	mem, err = client.MembersFiltered(map[string]string{"tag1": "val.*"}, "", "")
	if err != nil {
		t.Fatalf("bad: %s", err)
	}
	if len(mem) != 1 {
		t.Fatalf("should have matched 1 member: %#v", mem)
	}

	// Make sure tag filters work on multiple tags
	mem, err = client.MembersFiltered(map[string]string{
		"tag1": "val.*",
		"tag2": "val2",
	}, "", "")
	if err != nil {
		t.Fatalf("bad: %s", err)
	}
	if len(mem) != 1 {
		t.Fatalf("should have matched one member: %#v", mem)
	}

	// Make sure all tags match when multiple tags are passed
	mem, err = client.MembersFiltered(map[string]string{
		"tag1": "val1",
		"tag2": "bad",
	}, "", "")
	if err != nil {
		t.Fatalf("bad: %s", err)
	}
	if len(mem) != 0 {
		t.Fatalf("should have matched 0 members: %#v", mem)
	}

	// Make sure that filters work on member status
	if err := client.ForceLeave(a2.conf.NodeName); err != nil {
		t.Fatalf("bad: %s", err)
	}

	mem, err = client.MembersFiltered(map[string]string{}, "alive", "")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(mem) != 1 {
		t.Fatalf("should have matched 1 member: %#v", mem)
	}

	mem, err = client.MembersFiltered(map[string]string{}, "leaving", "")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(mem) != 1 {
		t.Fatalf("should have matched 1 member: %#v", mem)
	}
}

func TestRPCClientUserEvent(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	handler := new(MockEventHandler)
	a1.RegisterEventHandler(handler)

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	if err := client.UserEvent("deploy", []byte("foo"), false); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	handler.Lock()
	defer handler.Unlock()

	if len(handler.Events) == 0 {
		t.Fatal("no events")
	}

	serfEvent, ok := handler.Events[len(handler.Events)-1].(serf.UserEvent)
	if !ok {
		t.Fatalf("bad: %#v", serfEvent)
	}

	if serfEvent.Name != "deploy" {
		t.Fatalf("bad: %#v", serfEvent)
	}
	if string(serfEvent.Payload) != "foo" {
		t.Fatalf("bad: %#v", serfEvent)
	}
}

func TestRPCClientLeave(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	testutil.Yield()

	if err := client.Leave(); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	select {
	case <-a1.ShutdownCh():
	default:
		t.Fatalf("agent should be shutdown!")
	}
}

func TestRPCClientMonitor(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	eventCh := make(chan string, 64)
	if handle, err := client.Monitor("debug", eventCh); err != nil {
		t.Fatalf("err: %s", err)
	} else {
		defer client.Stop(handle)
	}

	testutil.Yield()

	select {
	case e := <-eventCh:
		if !strings.Contains(e, "Accepted client") {
			t.Fatalf("bad: %s", e)
		}
	default:
		t.Fatalf("should have backlog")
	}

	// Drain the rest of the messages as we know it
	drainEventCh(eventCh)

	// Join a bad thing to generate more events
	a1.Join(nil, false)

	testutil.Yield()

	select {
	case e := <-eventCh:
		if !strings.Contains(e, "joining") {
			t.Fatalf("bad: %s", e)
		}
	default:
		t.Fatalf("should have message")
	}
}

func TestRPCClientStream_User(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	eventCh := make(chan map[string]interface{}, 64)
	if handle, err := client.Stream("user", eventCh); err != nil {
		t.Fatalf("err: %s", err)
	} else {
		defer client.Stop(handle)
	}

	testutil.Yield()

	if err := client.UserEvent("deploy", []byte("foo"), false); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	select {
	case e := <-eventCh:
		if e["Event"].(string) != "user" {
			t.Fatalf("bad event: %#v", e)
		}
		if e["LTime"].(int64) != 1 {
			t.Fatalf("bad event: %#v", e)
		}
		if e["Name"].(string) != "deploy" {
			t.Fatalf("bad event: %#v", e)
		}
		if !bytes.Equal(e["Payload"].([]byte), []byte("foo")) {
			t.Fatalf("bad event: %#v", e)
		}
		if e["Coalesce"].(bool) != false {
			t.Fatalf("bad event: %#v", e)
		}
	default:
		t.Fatalf("should have event")
	}
}

func TestRPCClientStream_Member(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()
	a2 := testAgent(nil)
	defer a2.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	if err := a2.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	eventCh := make(chan map[string]interface{}, 64)
	if handle, err := client.Stream("*", eventCh); err != nil {
		t.Fatalf("err: %s", err)
	} else {
		defer client.Stop(handle)
	}

	testutil.Yield()

	s2Addr := a2.conf.MemberlistConfig.BindAddr
	if _, err := a1.Join([]string{s2Addr}, false); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	select {
	case e := <-eventCh:
		if e["Event"].(string) != "member-join" {
			t.Fatalf("bad event: %#v", e)
		}

		members := e["Members"].([]interface{})
		if len(members) != 1 {
			t.Fatalf("should have 1 member")
		}
		member := members[0].(map[interface{}]interface{})

		if _, ok := member["Name"].(string); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["Addr"].([]uint8); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["Port"].(uint64); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["Tags"].(map[interface{}]interface{}); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if stat, _ := member["Status"].(string); stat != "alive" {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["ProtocolMin"].(int64); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["ProtocolMax"].(int64); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["ProtocolCur"].(int64); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["DelegateMin"].(int64); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["DelegateMax"].(int64); !ok {
			t.Fatalf("bad event: %#v", e)
		}
		if _, ok := member["DelegateCur"].(int64); !ok {
			t.Fatalf("bad event: %#v", e)
		}
	default:
		t.Fatalf("should have event")
	}
}

func TestRPCClientUpdateTags(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	mem, err := client.Members()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(mem) != 1 {
		t.Fatalf("bad: %#v", mem)
	}

	m0 := mem[0]
	if _, ok := m0.Tags["testing"]; ok {
		t.Fatalf("have testing tag")
	}

	if err := client.UpdateTags(map[string]string{"testing": "1"}, nil); err != nil {
		t.Fatalf("err: %s", err)
	}

	mem, err = client.Members()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(mem) != 1 {
		t.Fatalf("bad: %#v", mem)
	}

	m0 = mem[0]
	if _, ok := m0.Tags["testing"]; !ok {
		t.Fatalf("missing testing tag")
	}
}

func TestRPCClientQuery(t *testing.T) {
	cl, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer cl.Close()
	defer a1.Shutdown()

	handler := new(MockQueryHandler)
	handler.Response = []byte("ok")
	a1.RegisterEventHandler(handler)

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	ackCh := make(chan string, 1)
	respCh := make(chan client.NodeResponse, 1)
	params := client.QueryParam{
		RequestAck: true,
		Timeout:    200 * time.Millisecond,
		Name:       "deploy",
		Payload:    []byte("foo"),
		AckCh:      ackCh,
		RespCh:     respCh,
	}
	if err := cl.Query(&params); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	handler.Lock()
	defer handler.Unlock()

	if len(handler.Queries) == 0 {
		t.Fatal("no queries")
	}

	query := handler.Queries[0]
	if query.Name != "deploy" {
		t.Fatalf("bad: %#v", query)
	}
	if string(query.Payload) != "foo" {
		t.Fatalf("bad: %#v", query)
	}

	select {
	case a := <-ackCh:
		if a != a1.conf.NodeName {
			t.Fatalf("Bad ack from: %v", a)
		}
	default:
		t.Fatalf("missing ack")
	}

	select {
	case r := <-respCh:
		if r.From != a1.conf.NodeName {
			t.Fatalf("Bad resp from: %v", r)
		}
		if string(r.Payload) != "ok" {
			t.Fatalf("Bad resp from: %v", r)
		}
	default:
		t.Fatalf("missing response")
	}
}

func TestRPCClientStream_Query(t *testing.T) {
	cl, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer cl.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	eventCh := make(chan map[string]interface{}, 64)
	if handle, err := cl.Stream("query", eventCh); err != nil {
		t.Fatalf("err: %s", err)
	} else {
		defer cl.Stop(handle)
	}

	testutil.Yield()

	params := client.QueryParam{
		Timeout: 200 * time.Millisecond,
		Name:    "deploy",
		Payload: []byte("foo"),
	}
	if err := cl.Query(&params); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	select {
	case e := <-eventCh:
		if e["Event"].(string) != "query" {
			t.Fatalf("bad query: %#v", e)
		}
		if e["ID"].(int64) != 1 {
			t.Fatalf("bad query: %#v", e)
		}
		if e["LTime"].(int64) != 1 {
			t.Fatalf("bad query: %#v", e)
		}
		if e["Name"].(string) != "deploy" {
			t.Fatalf("bad query: %#v", e)
		}
		if !bytes.Equal(e["Payload"].([]byte), []byte("foo")) {
			t.Fatalf("bad query: %#v", e)
		}
	default:
		t.Fatalf("should have query")
	}
}

func TestRPCClientStream_Query_Respond(t *testing.T) {
	cl, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer cl.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	eventCh := make(chan map[string]interface{}, 64)
	if handle, err := cl.Stream("query", eventCh); err != nil {
		t.Fatalf("err: %s", err)
	} else {
		defer cl.Stop(handle)
	}

	testutil.Yield()

	ackCh := make(chan string, 1)
	respCh := make(chan client.NodeResponse, 1)
	params := client.QueryParam{
		RequestAck: true,
		Timeout:    500 * time.Millisecond,
		Name:       "ping",
		AckCh:      ackCh,
		RespCh:     respCh,
	}
	if err := cl.Query(&params); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	select {
	case e := <-eventCh:
		if e["Event"].(string) != "query" {
			t.Fatalf("bad query: %#v", e)
		}
		if e["Name"].(string) != "ping" {
			t.Fatalf("bad query: %#v", e)
		}

		// Send a response
		id := e["ID"].(int64)
		if err := cl.Respond(uint64(id), []byte("pong")); err != nil {
			t.Fatalf("err: %v", err)
		}
	default:
		t.Fatalf("should have query")
	}
testutil.Yield() // Should have ack select { case a := <-ackCh: if a != a1.conf.NodeName { t.Fatalf("Bad ack from: %v", a) } default: t.Fatalf("missing ack") } // Should have response select { case r := <-respCh: if r.From != a1.conf.NodeName { t.Fatalf("Bad resp from: %v", r) } if string(r.Payload) != "pong" { t.Fatalf("Bad resp from: %v", r) } default: t.Fatalf("missing response") } } func TestRPCClientAuth(t *testing.T) { cl, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer cl.Close() defer a1.Shutdown() // Setup an auth key ipc.authKey = "foobar" if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() if err := cl.UserEvent("deploy", nil, false); err.Error() != authRequired { t.Fatalf("err: %s", err) } testutil.Yield() config := client.Config{Addr: ipc.listener.Addr().String(), AuthKey: "foobar"} rpcClient, err := client.ClientFromConfig(&config) if err != nil { t.Fatalf("err: %s", err) } defer rpcClient.Close() if err := rpcClient.UserEvent("deploy", nil, false); err != nil { t.Fatalf("err: %s", err) } } func TestRPCClient_Keys_EncryptionDisabledError(t *testing.T) { client, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } // Failed installing key failures, err := client.InstallKey("El/H8lEqX2WiUa36SxcpZw==") if err == nil { t.Fatalf("expected encryption disabled error") } if _, ok := failures[a1.conf.NodeName]; !ok { t.Fatalf("expected error from node %s", a1.conf.NodeName) } // Failed using key failures, err = client.UseKey("El/H8lEqX2WiUa36SxcpZw==") if err == nil { t.Fatalf("expected encryption disabled error") } if _, ok := failures[a1.conf.NodeName]; !ok { t.Fatalf("expected error from node %s", a1.conf.NodeName) } // Failed removing key failures, err = client.RemoveKey("El/H8lEqX2WiUa36SxcpZw==") if err == nil { t.Fatalf("expected encryption disabled error") } if _, ok := failures[a1.conf.NodeName]; !ok { t.Fatalf("expected 
error from node %s", a1.conf.NodeName) } // Failed listing keys _, _, failures, err = client.ListKeys() if err == nil { t.Fatalf("expected encryption disabled error") } if _, ok := failures[a1.conf.NodeName]; !ok { t.Fatalf("expected error from node %s", a1.conf.NodeName) } } func TestRPCClient_Keys(t *testing.T) { newKey := "El/H8lEqX2WiUa36SxcpZw==" existing := "A2xzjs0eq9PxSV2+dPi3sg==" existingBytes, err := base64.StdEncoding.DecodeString(existing) if err != nil { t.Fatalf("err: %s", err) } agentConf := DefaultConfig() serfConf := serf.DefaultConfig() serfConf.MemberlistConfig.SecretKey = existingBytes client, a1, ipc := testRPCClientWithConfig(t, agentConf, serfConf) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() keys, num, _, err := client.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if _, ok := keys[newKey]; ok { t.Fatalf("have new key: %s", newKey) } // Trying to use a key that doesn't exist errors if _, err := client.UseKey(newKey); err == nil { t.Fatalf("expected use-key error: %s", newKey) } // Keyring should not contain new key at this point keys, _, _, err = client.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if _, ok := keys[newKey]; ok { t.Fatalf("have new key: %s", newKey) } // Invalid key installation throws an error if _, err := client.InstallKey("badkey"); err == nil { t.Fatalf("expected bad key error") } // InstallKey should succeed if _, err := client.InstallKey(newKey); err != nil { t.Fatalf("err: %s", err) } // InstallKey is idempotent if _, err := client.InstallKey(newKey); err != nil { t.Fatalf("err: %s", err) } // New key should now appear in the list of keys keys, num, _, err = client.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if num != 1 { t.Fatalf("expected 1 member total, got %d", num) } if _, ok := keys[newKey]; !ok { t.Fatalf("key not found: %s", newKey) } // Counter of installed copies of new key should be 1 if 
keys[newKey] != 1 {
		t.Fatalf("expected 1 member with key %s, have %d", newKey, keys[newKey])
	}

	// Deleting the primary key should return an error
	if _, err := client.RemoveKey(existing); err == nil {
		t.Fatalf("expected error deleting primary key: %s", existing)
	}

	// UseKey succeeds when given a key that exists
	if _, err := client.UseKey(newKey); err != nil {
		t.Fatalf("err: %s", err)
	}

	// UseKey is idempotent
	if _, err := client.UseKey(newKey); err != nil {
		t.Fatalf("err: %s", err)
	}

	// Removing the new primary key should also return an error
	if _, err := client.RemoveKey(newKey); err == nil {
		t.Fatalf("expected error deleting primary key: %s", newKey)
	}

	// Removing a non-primary key should succeed
	if _, err := client.RemoveKey(existing); err != nil {
		t.Fatalf("err: %s", err)
	}
}

func TestRPCClientStats(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	stats, err := client.Stats()
	if err != nil {
		t.Fatalf("err: %v", err)
	}

	if stats["agent"]["name"] != a1.conf.NodeName {
		t.Fatalf("bad: %v", stats)
	}
}

func TestRPCClientGetCoordinate(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	coord, err := client.GetCoordinate(a1.conf.NodeName)
	if err != nil {
		t.Fatalf("err: %v", err)
	}
	if coord == nil {
		t.Fatalf("should have gotten a coordinate")
	}

	coord, err = client.GetCoordinate("nope")
	if err != nil {
		t.Fatalf("err: %v", err)
	}
	if coord != nil {
		t.Fatalf("should have not gotten a coordinate")
	}
}

// ---- cmd/serf/command/agent/syslog.go ----

package agent

import (
	"bytes"

	"github.com/hashicorp/go-syslog"
	"github.com/hashicorp/logutils"
)

// levelPriority is used to map a log level to a
// syslog priority level
var levelPriority = map[string]gsyslog.Priority{
	"TRACE": gsyslog.LOG_DEBUG,
	"DEBUG": gsyslog.LOG_INFO,
	"INFO":  gsyslog.LOG_NOTICE,
	"WARN":  gsyslog.LOG_WARNING,
	"ERR":   gsyslog.LOG_ERR,
	"CRIT":  gsyslog.LOG_CRIT,
}

// SyslogWrapper is used to clean up log messages before
// writing them to a Syslogger. Implements the io.Writer
// interface.
type SyslogWrapper struct {
	l    gsyslog.Syslogger
	filt *logutils.LevelFilter
}

// Write is used to implement io.Writer
func (s *SyslogWrapper) Write(p []byte) (int, error) {
	// Skip syslog if the log level doesn't apply
	if !s.filt.Check(p) {
		return 0, nil
	}

	// Extract log level
	var level string
	afterLevel := p
	x := bytes.IndexByte(p, '[')
	if x >= 0 {
		y := bytes.IndexByte(p[x:], ']')
		if y >= 0 {
			level = string(p[x+1 : x+y])
			afterLevel = p[x+y+2:]
		}
	}

	// Each log level will be handled by a specific syslog priority
	priority, ok := levelPriority[level]
	if !ok {
		priority = gsyslog.LOG_NOTICE
	}

	// Attempt the write
	err := s.l.WriteLevel(priority, afterLevel)
	return len(p), err
}

// ---- cmd/serf/command/agent/syslog_test.go ----

package agent

import (
	"runtime"
	"testing"

	"github.com/hashicorp/go-syslog"
	"github.com/hashicorp/logutils"
)

func TestSyslogFilter(t *testing.T) {
	if runtime.GOOS == "windows" {
		t.SkipNow()
	}

	l, err := gsyslog.NewLogger(gsyslog.LOG_NOTICE, "LOCAL0", "serf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	filt := LevelFilter()
	filt.MinLevel = logutils.LogLevel("INFO")

	s := &SyslogWrapper{l, filt}
	n, err := s.Write([]byte("[INFO] test"))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if n == 0 {
		t.Fatalf("should have logged")
	}

	n, err = s.Write([]byte("[DEBUG] test"))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if n != 0 {
		t.Fatalf("should not have logged")
	}
}
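The bracket scan in `SyslogWrapper.Write` above is easy to get wrong by one index, since `y` is relative to `p[x:]`. The following standalone sketch isolates that logic in a hypothetical `extractLevel` helper (not part of the agent package) so it can be exercised on its own:

```go
package main

import (
	"bytes"
	"fmt"
)

// extractLevel mirrors the bracket-scanning in SyslogWrapper.Write:
// it pulls the "[LEVEL]" token out of a log line and returns the level
// plus the message that follows. Hypothetical helper for illustration.
func extractLevel(p []byte) (level string, after []byte) {
	after = p
	x := bytes.IndexByte(p, '[')
	if x >= 0 {
		// y is relative to p[x:], so x+y is the absolute index of ']'
		y := bytes.IndexByte(p[x:], ']')
		if y >= 0 {
			level = string(p[x+1 : x+y])
			after = p[x+y+2:] // skip "] " (bracket plus the following space)
		}
	}
	return level, after
}

func main() {
	level, msg := extractLevel([]byte("2017/10/21 [WARN] agent: low disk"))
	fmt.Printf("%s|%s\n", level, msg) // prints "WARN|agent: low disk"
}
```

A line with no bracketed level falls back to the whole message, which is why `Write` then maps the empty level to `gsyslog.LOG_NOTICE`.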
// ---- cmd/serf/command/agent/util.go ----

package agent

import (
	"runtime"
	"strconv"
)

// runtimeStats is used to return various runtime information
func runtimeStats() map[string]string {
	return map[string]string{
		"os":         runtime.GOOS,
		"arch":       runtime.GOARCH,
		"version":    runtime.Version(),
		"max_procs":  strconv.FormatInt(int64(runtime.GOMAXPROCS(0)), 10),
		"goroutines": strconv.FormatInt(int64(runtime.NumGoroutine()), 10),
		"cpu_count":  strconv.FormatInt(int64(runtime.NumCPU()), 10),
	}
}

// ---- cmd/serf/command/agent/util_test.go ----

package agent

import (
	"fmt"
	"io"
	"math/rand"
	"net"
	"os"
	"time"

	"github.com/hashicorp/serf/serf"
	"github.com/hashicorp/serf/testutil"
)

func init() {
	// Seed the random number generator
	rand.Seed(time.Now().UnixNano())
}

func drainEventCh(ch <-chan string) {
	for {
		select {
		case <-ch:
		default:
			return
		}
	}
}

func getRPCAddr() string {
	for i := 0; i < 500; i++ {
		l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", rand.Int31n(25000)+1024))
		if err == nil {
			l.Close()
			return l.Addr().String()
		}
	}
	panic("no listener")
}

func testAgent(logOutput io.Writer) *Agent {
	return testAgentWithConfig(DefaultConfig(), serf.DefaultConfig(), logOutput)
}

func testAgentWithConfig(agentConfig *Config, serfConfig *serf.Config,
	logOutput io.Writer) *Agent {
	if logOutput == nil {
		logOutput = os.Stderr
	}
	serfConfig.MemberlistConfig.ProbeInterval = 100 * time.Millisecond
	serfConfig.MemberlistConfig.BindAddr = testutil.GetBindAddr().String()
	serfConfig.NodeName = serfConfig.MemberlistConfig.BindAddr

	agent, err := Create(agentConfig, serfConfig, logOutput)
	if err != nil {
		panic(err)
	}
	return agent
}
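`getRPCAddr` above probes up to 500 random ports until one binds. A common alternative, sketched here as a hypothetical `freeTCPAddr` helper (not part of this package), is to listen on port 0 and let the kernel assign an unused port; the same caveat applies in both approaches — the port is only likely to still be free after the listener is closed:

```go
package main

import (
	"fmt"
	"net"
)

// freeTCPAddr asks the kernel for an unused port by listening on port 0,
// instead of probing random ports the way getRPCAddr does.
func freeTCPAddr() (string, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer l.Close()
	// l.Addr() reports the concrete port the kernel picked
	return l.Addr().String(), nil
}

func main() {
	addr, err := freeTCPAddr()
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // e.g. 127.0.0.1:49731, port chosen by the OS
}
```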
// ---- cmd/serf/command/event.go ----

package command

import (
	"flag"
	"fmt"
	"strings"

	"github.com/mitchellh/cli"
)

// EventCommand is a Command implementation that dispatches a custom
// user event across the Serf cluster.
type EventCommand struct {
	Ui cli.Ui
}

func (c *EventCommand) Help() string {
	helpText := `
Usage: serf event [options] name payload

  Dispatches a custom event across the Serf cluster.

Options:

  -coalesce=true/false      Whether this event can be coalesced. This means
                            that repeated events of the same name within a
                            short period of time are ignored, except the last
                            one received. Default is true.

  -rpc-addr=127.0.0.1:7373  RPC address of the Serf agent.

  -rpc-auth=""              RPC auth token of the Serf agent.
`
	return strings.TrimSpace(helpText)
}

func (c *EventCommand) Run(args []string) int {
	var coalesce bool

	cmdFlags := flag.NewFlagSet("event", flag.ContinueOnError)
	cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
	cmdFlags.BoolVar(&coalesce, "coalesce", true, "coalesce")
	rpcAddr := RPCAddrFlag(cmdFlags)
	rpcAuth := RPCAuthFlag(cmdFlags)
	if err := cmdFlags.Parse(args); err != nil {
		return 1
	}

	args = cmdFlags.Args()
	if len(args) < 1 {
		c.Ui.Error("An event name must be specified.")
		c.Ui.Error("")
		c.Ui.Error(c.Help())
		return 1
	} else if len(args) > 2 {
		c.Ui.Error("Too many command line arguments. Only a name and payload must be specified.")
		c.Ui.Error("")
		c.Ui.Error(c.Help())
		return 1
	}

	event := args[0]
	var payload []byte
	if len(args) == 2 {
		payload = []byte(args[1])
	}

	client, err := RPCClient(*rpcAddr, *rpcAuth)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err))
		return 1
	}
	defer client.Close()

	if err := client.UserEvent(event, payload, coalesce); err != nil {
		c.Ui.Error(fmt.Sprintf("Error sending event: %s", err))
		return 1
	}

	c.Ui.Output(fmt.Sprintf("Event '%s' dispatched! Coalescing enabled: %#v",
		event, coalesce))
	return 0
}

func (c *EventCommand) Synopsis() string {
	return "Send a custom event through the Serf cluster"
}

// ---- cmd/serf/command/event_test.go ----

package command

import (
	"strings"
	"testing"

	"github.com/mitchellh/cli"
)

func TestEventCommand_implements(t *testing.T) {
	var _ cli.Command = &EventCommand{}
}

func TestEventCommandRun_noEvent(t *testing.T) {
	ui := new(cli.MockUi)
	c := &EventCommand{Ui: ui}
	args := []string{"-rpc-addr=foo"}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d", code)
	}

	if !strings.Contains(ui.ErrorWriter.String(), "event name") {
		t.Fatalf("bad: %#v", ui.ErrorWriter.String())
	}
}

func TestEventCommandRun_tooMany(t *testing.T) {
	ui := new(cli.MockUi)
	c := &EventCommand{Ui: ui}
	args := []string{"-rpc-addr=foo", "foo", "bar", "baz"}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d", code)
	}

	if !strings.Contains(ui.ErrorWriter.String(), "Too many") {
		t.Fatalf("bad: %#v", ui.ErrorWriter.String())
	}
}

// ---- cmd/serf/command/force_leave.go ----

package command

import (
	"flag"
	"fmt"
	"strings"

	"github.com/mitchellh/cli"
)

// ForceLeaveCommand is a Command implementation that tells a running Serf
// to force a member to enter the "left" state.
type ForceLeaveCommand struct {
	Ui cli.Ui
}

func (c *ForceLeaveCommand) Run(args []string) int {
	cmdFlags := flag.NewFlagSet("force-leave", flag.ContinueOnError)
	cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
	rpcAddr := RPCAddrFlag(cmdFlags)
	rpcAuth := RPCAuthFlag(cmdFlags)
	if err := cmdFlags.Parse(args); err != nil {
		return 1
	}

	nodes := cmdFlags.Args()
	if len(nodes) != 1 {
		c.Ui.Error("A node name must be specified to force leave.")
		c.Ui.Error("")
		c.Ui.Error(c.Help())
		return 1
	}

	client, err := RPCClient(*rpcAddr, *rpcAuth)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err))
		return 1
	}
	defer client.Close()

	err = client.ForceLeave(nodes[0])
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error force leaving: %s", err))
		return 1
	}

	return 0
}

func (c *ForceLeaveCommand) Synopsis() string {
	return "Forces a member of the cluster to enter the \"left\" state"
}

func (c *ForceLeaveCommand) Help() string {
	helpText := `
Usage: serf force-leave [options] name

  Forces a member of a Serf cluster to enter the "left" state. Note
  that if the member is still actually alive, it will eventually rejoin
  the cluster. This command is most useful for cleaning out "failed" nodes
  that are never coming back. If you do not force leave a failed node,
  Serf will attempt to reconnect to those failed nodes for some period of
  time before eventually reaping them.

Options:

  -rpc-addr=127.0.0.1:7373  RPC address of the Serf agent.

  -rpc-auth=""              RPC auth token of the Serf agent.
`
	return strings.TrimSpace(helpText)
}

// ---- cmd/serf/command/force_leave_test.go ----

package command

import (
	"strings"
	"testing"
	"time"

	"github.com/hashicorp/serf/serf"
	"github.com/hashicorp/serf/testutil"
	"github.com/mitchellh/cli"
)

func TestForceLeaveCommand_implements(t *testing.T) {
	var _ cli.Command = &ForceLeaveCommand{}
}

func TestForceLeaveCommandRun(t *testing.T) {
	a1 := testAgent(t)
	a2 := testAgent(t)
	defer a1.Shutdown()
	defer a2.Shutdown()

	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	_, err := a1.Join([]string{a2.SerfConfig().MemberlistConfig.BindAddr}, false)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	testutil.Yield()

	// Forcibly shutdown a2 so that it appears "failed" in a1
	if err := a2.Serf().Shutdown(); err != nil {
		t.Fatalf("err: %s", err)
	}

	start := time.Now()
WAIT:
	time.Sleep(a2.SerfConfig().MemberlistConfig.ProbeInterval * 3)
	m := a1.Serf().Members()
	if len(m) != 2 {
		t.Fatalf("should have 2 members: %#v", a1.Serf().Members())
	}
	if m[1].Status != serf.StatusFailed && time.Now().Sub(start) < 3*time.Second {
		goto WAIT
	}

	ui := new(cli.MockUi)
	c := &ForceLeaveCommand{Ui: ui}
	args := []string{
		"-rpc-addr=" + rpcAddr,
		a2.SerfConfig().NodeName,
	}

	code := c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}

	m = a1.Serf().Members()
	if len(m) != 2 {
		t.Fatalf("should have 2 members: %#v", a1.Serf().Members())
	}

	left := m[0]
	if m[1].Name == a2.SerfConfig().NodeName {
		left = m[1]
	}
	if left.Status != serf.StatusLeft {
		t.Fatalf("should be left: %#v", left)
	}
}

func TestForceLeaveCommandRun_noAddrs(t *testing.T) {
	ui := new(cli.MockUi)
	c := &ForceLeaveCommand{Ui: ui}
	args := []string{"-rpc-addr=foo"}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d", code)
	}

	if !strings.Contains(ui.ErrorWriter.String(), "node name") {
		t.Fatalf("bad: %#v", ui.ErrorWriter.String())
	}
}

// ---- cmd/serf/command/info.go ----

package command

import (
	"bytes"
	"flag"
	"fmt"
	"sort"
	"strings"

	"github.com/mitchellh/cli"
)

// InfoCommand is a Command implementation that queries a running
// Serf agent for various debugging statistics for operators
type InfoCommand struct {
	Ui cli.Ui
}

func (i *InfoCommand) Help() string {
	helpText := `
Usage: serf info [options]

  Provides debugging information for operators

Options:

  -format                   If provided, output is returned in the specified
                            format. Valid formats are 'json', and 'text' (default)

  -rpc-addr=127.0.0.1:7373  RPC address of the Serf agent.

  -rpc-auth=""              RPC auth token of the Serf agent.
` return strings.TrimSpace(helpText) } func (i *InfoCommand) Run(args []string) int { var format string cmdFlags := flag.NewFlagSet("info", flag.ContinueOnError) cmdFlags.Usage = func() { i.Ui.Output(i.Help()) } cmdFlags.StringVar(&format, "format", "text", "output format") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { i.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() stats, err := client.Stats() if err != nil { i.Ui.Error(fmt.Sprintf("Error querying agent: %s", err)) return 1 } output, err := formatOutput(StatsContainer(stats), format) if err != nil { i.Ui.Error(fmt.Sprintf("Encoding error: %s", err)) return 1 } i.Ui.Output(string(output)) return 0 } func (i *InfoCommand) Synopsis() string { return "Provides debugging information for operators" } type StatsContainer map[string]map[string]string func (s StatsContainer) String() string { var buf bytes.Buffer // Get the keys in sorted order keys := make([]string, 0, len(s)) for key := range s { keys = append(keys, key) } sort.Strings(keys) // Iterate over each top-level key for _, key := range keys { buf.WriteString(key + ":\n") // Sort the sub-keys subvals := s[key] subkeys := make([]string, 0, len(subvals)) for k := range subvals { subkeys = append(subkeys, k) } sort.Strings(subkeys) // Iterate over the subkeys for _, subkey := range subkeys { val := subvals[subkey] buf.WriteString(fmt.Sprintf("\t%s = %s\n", subkey, val)) } } return buf.String() } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/info_test.go000066400000000000000000000011411317277572200272610ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "strings" "testing" ) func TestInfoCommand_implements(t *testing.T) { var _ cli.Command = &InfoCommand{} } func TestInfoCommandRun(t *testing.T) { a1 := 
testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &InfoCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "runtime") { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/join.go000066400000000000000000000032241317277572200262320ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/mitchellh/cli" "strings" ) // JoinCommand is a Command implementation that tells a running Serf // agent to join another. type JoinCommand struct { Ui cli.Ui } func (c *JoinCommand) Help() string { helpText := ` Usage: serf join [options] address ... Tells a running Serf agent (with "serf agent") to join the cluster by specifying at least one existing member. Options: -replay Replay past user events. -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. 
` return strings.TrimSpace(helpText) } func (c *JoinCommand) Run(args []string) int { var replayEvents bool cmdFlags := flag.NewFlagSet("join", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.BoolVar(&replayEvents, "replay", false, "replay") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } addrs := cmdFlags.Args() if len(addrs) == 0 { c.Ui.Error("At least one address to join must be specified.") c.Ui.Error("") c.Ui.Error(c.Help()) return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() n, err := client.Join(addrs, replayEvents) if err != nil { c.Ui.Error(fmt.Sprintf("Error joining the cluster: %s", err)) return 1 } c.Ui.Output(fmt.Sprintf( "Successfully joined cluster by contacting %d nodes.", n)) return 0 } func (c *JoinCommand) Synopsis() string { return "Tell Serf agent to join cluster" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/join_test.go000066400000000000000000000017401317277572200272720ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "strings" "testing" ) func TestJoinCommand_implements(t *testing.T) { var _ cli.Command = &JoinCommand{} } func TestJoinCommandRun(t *testing.T) { a1 := testAgent(t) a2 := testAgent(t) defer a1.Shutdown() defer a2.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &JoinCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, a2.SerfConfig().MemberlistConfig.BindAddr, } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if len(a1.Serf().Members()) != 2 { t.Fatalf("bad: %#v", a1.Serf().Members()) } } func TestJoinCommandRun_noAddrs(t *testing.T) { ui := new(cli.MockUi) c := &JoinCommand{Ui: ui} args := []string{"-rpc-addr=foo"} code := c.Run(args) if code != 1 { t.Fatalf("bad: %d", code) } if !strings.Contains(ui.ErrorWriter.String(), "one address") { t.Fatalf("bad: %#v", ui.ErrorWriter.String()) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/keygen.go000066400000000000000000000017701317277572200265610ustar00rootroot00000000000000package command import ( "crypto/rand" "encoding/base64" "fmt" "github.com/mitchellh/cli" "strings" ) // KeygenCommand is a Command implementation that generates an encryption // key for use in `serf agent`. type KeygenCommand struct { Ui cli.Ui } func (c *KeygenCommand) Run(_ []string) int { key := make([]byte, 16) n, err := rand.Reader.Read(key) if err != nil { c.Ui.Error(fmt.Sprintf("Error reading random data: %s", err)) return 1 } if n != 16 { c.Ui.Error("Couldn't read enough entropy. Generate more entropy!") return 1 } c.Ui.Output(base64.StdEncoding.EncodeToString(key)) return 0 } func (c *KeygenCommand) Synopsis() string { return "Generates a new encryption key" } func (c *KeygenCommand) Help() string { helpText := ` Usage: serf keygen Generates a new encryption key that can be used to configure the agent to encrypt traffic. The output of this command is already in the proper format that the agent expects. 
` return strings.TrimSpace(helpText) } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/keygen_test.go000066400000000000000000000010311317277572200276060ustar00rootroot00000000000000package command import ( "encoding/base64" "github.com/mitchellh/cli" "testing" ) func TestKeygenCommand_implements(t *testing.T) { var _ cli.Command = &KeygenCommand{} } func TestKeygenCommand(t *testing.T) { ui := new(cli.MockUi) c := &KeygenCommand{Ui: ui} code := c.Run(nil) if code != 0 { t.Fatalf("bad: %d", code) } output := ui.OutputWriter.String() result, err := base64.StdEncoding.DecodeString(output) if err != nil { t.Fatalf("err: %s", err) } if len(result) != 16 { t.Fatalf("bad: %#v", result) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/keys.go000066400000000000000000000134251317277572200262520ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/mitchellh/cli" "github.com/ryanuber/columnize" "strings" ) type KeysCommand struct { Ui cli.Ui } func (c *KeysCommand) Help() string { helpText := ` Usage: serf keys [options]... Manage the internal encryption keyring used by Serf. Modifications made by this command will be broadcasted to all members in the cluster and applied locally on each member. Operations of this command are idempotent. To facilitate key rotation, Serf allows for multiple encryption keys to be in use simultaneously. Only one key, the "primary" key, will be used for encrypting messages. All other keys are used for decryption only. All variations of this command will return 0 if all nodes reply and report no errors. If any node fails to respond or reports failure, we return 1. WARNING: Running with multiple encryption keys enabled is recommended as a transition state only. Performance may be impacted by using multiple keys. Options: -install= Install a new key onto Serf's internal keyring. This will enable the key for decryption. 
The key will not be used to encrypt messages until the primary key is changed. -use= Change the primary key used for encrypting messages. All nodes in the cluster must already have this key installed if they are to continue communicating with each other. -remove= Remove a key from Serf's internal keyring. The key being removed may not be the current primary key. -list List all currently known keys in the cluster. This will ask all nodes in the cluster for a list of keys and dump a summary containing each key and the number of members it is installed on to the console. -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. ` return strings.TrimSpace(helpText) } func (c *KeysCommand) Run(args []string) int { var installKey, useKey, removeKey string var lines []string var listKeys bool cmdFlags := flag.NewFlagSet("key", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.StringVar(&installKey, "install", "", "install a new key") cmdFlags.StringVar(&useKey, "use", "", "change primary encryption key") cmdFlags.StringVar(&removeKey, "remove", "", "remove a key") cmdFlags.BoolVar(&listKeys, "list", false, "list cluster keys") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } c.Ui = &cli.PrefixedUi{ OutputPrefix: "", InfoPrefix: "==> ", ErrorPrefix: "", Ui: c.Ui, } // Make sure that we only have one actionable argument to avoid ambiguity found := listKeys for _, arg := range []string{installKey, useKey, removeKey} { if found && len(arg) > 0 { c.Ui.Error("Only one of -install, -use, -remove, or -list allowed") return 1 } found = found || len(arg) > 0 } // Fail fast if no actionable args were passed if !found { c.Ui.Error(c.Help()) return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() if listKeys { 
c.Ui.Info("Asking all members for installed keys...") keys, total, failures, err := client.ListKeys() if err != nil { if len(failures) > 0 { for node, message := range failures { lines = append(lines, fmt.Sprintf("failed: | %s | %s", node, message)) } out := columnize.SimpleFormat(lines) c.Ui.Error(out) } c.Ui.Error("") c.Ui.Error(fmt.Sprintf("Failed to gather member keys: %s", err)) return 1 } c.Ui.Info("Keys gathered, listing cluster keys...") c.Ui.Output("") for key, num := range keys { lines = append(lines, fmt.Sprintf("%s | [%d/%d]", key, num, total)) } out := columnize.SimpleFormat(lines) c.Ui.Output(out) return 0 } if installKey != "" { c.Ui.Info("Installing key on all members...") if failures, err := client.InstallKey(installKey); err != nil { if len(failures) > 0 { for node, message := range failures { lines = append(lines, fmt.Sprintf("failed: | %s | %s", node, message)) } out := columnize.SimpleFormat(lines) c.Ui.Error(out) } c.Ui.Error("") c.Ui.Error(fmt.Sprintf("Error installing key: %s", err)) return 1 } c.Ui.Info("Successfully installed key!") return 0 } if useKey != "" { c.Ui.Info("Changing primary key on all members...") if failures, err := client.UseKey(useKey); err != nil { if len(failures) > 0 { for node, message := range failures { lines = append(lines, fmt.Sprintf("failed: | %s | %s", node, message)) } out := columnize.SimpleFormat(lines) c.Ui.Error(out) } c.Ui.Error("") c.Ui.Error(fmt.Sprintf("Error changing primary key: %s", err)) return 1 } c.Ui.Info("Successfully changed primary key!") return 0 } if removeKey != "" { c.Ui.Info("Removing key on all members...") if failures, err := client.RemoveKey(removeKey); err != nil { if len(failures) > 0 { for node, message := range failures { lines = append(lines, fmt.Sprintf("failed: | %s | %s", node, message)) } out := columnize.SimpleFormat(lines) c.Ui.Error(out) } c.Ui.Error("") c.Ui.Error(fmt.Sprintf("Error removing key: %s", err)) return 1 } c.Ui.Info("Successfully removed key!") return 0 } // 
Should never reach this point return 0 } func (c *KeysCommand) Synopsis() string { return "Manipulate the internal encryption keyring used by Serf" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/keys_test.go000066400000000000000000000172511317277572200273120ustar00rootroot00000000000000package command import ( "encoding/base64" "strings" "testing" "github.com/hashicorp/memberlist" "github.com/hashicorp/serf/client" "github.com/hashicorp/serf/cmd/serf/command/agent" "github.com/hashicorp/serf/serf" "github.com/mitchellh/cli" ) func testKeysCommandAgent(t *testing.T) *agent.Agent { key1, err := base64.StdEncoding.DecodeString("SNCg1bQSoCdGVlEx+TgfBw==") if err != nil { t.Fatalf("err: %s", err) } key2, err := base64.StdEncoding.DecodeString("vbitCcJNwNP4aEWHgofjMg==") if err != nil { t.Fatalf("err: %s", err) } keyring, err := memberlist.NewKeyring([][]byte{key1, key2}, key1) if err != nil { t.Fatalf("err: %s", err) } agentConf := agent.DefaultConfig() serfConf := serf.DefaultConfig() serfConf.MemberlistConfig.Keyring = keyring a1 := testAgentWithConfig(t, agentConf, serfConf) return a1 } func TestKeysCommand_implements(t *testing.T) { var _ cli.Command = &KeysCommand{} } func TestKeysCommandRun_InstallKey(t *testing.T) { a1 := testKeysCommandAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} rpcClient, err := client.NewRPCClient(rpcAddr) if err != nil { t.Fatalf("err: %s", err) } keys, _, _, err := rpcClient.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if _, ok := keys["jbuQMI4gMUeh1PPmKOtiBg=="]; ok { t.Fatalf("have test key") } args := []string{ "-rpc-addr=" + rpcAddr, "-install", "jbuQMI4gMUeh1PPmKOtiBg==", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "Successfully installed key") { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } keys, _, _, err = rpcClient.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if _, ok := keys["jbuQMI4gMUeh1PPmKOtiBg=="]; !ok { t.Fatalf("new key not found") } } func TestKeysCommandRun_InstallKeyFailure(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} // Trying to install with encryption disabled returns 1 args := []string{ "-rpc-addr=" + rpcAddr, "-install", "jbuQMI4gMUeh1PPmKOtiBg==", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } // Node errors appear in stderr if !strings.Contains(ui.ErrorWriter.String(), "not enabled") { t.Fatalf("expected empty keyring error") } } func TestKeysCommandRun_UseKey(t *testing.T) { a1 := testKeysCommandAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} // Trying to use a non-existent key returns 1 args := []string{ "-rpc-addr=" + rpcAddr, "-use", "eodFZZjm7pPwIZ0Miy7boQ==", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } // Using an existing key returns 0 args = []string{ "-rpc-addr=" + rpcAddr, "-use", "vbitCcJNwNP4aEWHgofjMg==", } code = c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } } func TestKeysCommandRun_UseKeyFailure(t *testing.T) { a1 := testKeysCommandAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} // Trying to use a key that doesn't exist returns 1 args := []string{ "-rpc-addr=" + rpcAddr, "-use", "jbuQMI4gMUeh1PPmKOtiBg==", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } // Node errors appear in stderr if !strings.Contains(ui.ErrorWriter.String(), "not in the keyring") { t.Fatalf("expected absent key error") } } func TestKeysCommandRun_RemoveKey(t *testing.T) { a1 := testKeysCommandAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} rpcClient, err := client.NewRPCClient(rpcAddr) if err != nil { t.Fatalf("err: %s", err) } keys, _, _, err := rpcClient.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if len(keys) != 2 { t.Fatalf("expected 2 keys: %v", keys) } // Removing non-existing key still returns 0 (noop) args := []string{ "-rpc-addr=" + rpcAddr, "-remove", "eodFZZjm7pPwIZ0Miy7boQ==", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } // Number of keys unchanged after noop command keys, _, _, err = rpcClient.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if len(keys) != 2 { t.Fatalf("expected 2 keys: %v", keys) } // Removing a primary key returns 1 args = []string{ "-rpc-addr=" + rpcAddr, "-remove", "SNCg1bQSoCdGVlEx+TgfBw==", } ui.ErrorWriter.Reset() code = c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.ErrorWriter.String(), "Error removing key") { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } // Removing a non-primary, existing key returns 0 args = []string{ "-rpc-addr=" + rpcAddr, "-remove", "vbitCcJNwNP4aEWHgofjMg==", } code = c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } // Key removed after successful -remove command keys, _, _, err = rpcClient.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if len(keys) != 1 { t.Fatalf("expected 1 key: %v", keys) } } func TestKeysCommandRun_RemoveKeyFailure(t *testing.T) { a1 := testKeysCommandAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} // Trying to remove the primary key returns 1 args := []string{ "-rpc-addr=" + rpcAddr, "-remove", "SNCg1bQSoCdGVlEx+TgfBw==", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } // Node errors appear in stderr if !strings.Contains(ui.ErrorWriter.String(), "not allowed") { t.Fatalf("expected primary key removal error") } } func TestKeysCommandRun_ListKeys(t *testing.T) { a1 := testKeysCommandAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-list", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "SNCg1bQSoCdGVlEx+TgfBw==") { t.Fatalf("missing expected key") } if !strings.Contains(ui.OutputWriter.String(), "vbitCcJNwNP4aEWHgofjMg==") { t.Fatalf("missing expected key") } } func TestKeysCommandRun_ListKeysFailure(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} // Trying to list keys with encryption disabled returns 1 args := []string{ "-rpc-addr=" + rpcAddr, "-list", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.ErrorWriter.String(), "not enabled") { t.Fatalf("expected empty keyring error") } } func TestKeysCommandRun_BadOptions(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-install", "vbitCcJNwNP4aEWHgofjMg==", "-use", "vbitCcJNwNP4aEWHgofjMg==", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } args = []string{ "-rpc-addr=" + rpcAddr, "-list", "-remove", "SNCg1bQSoCdGVlEx+TgfBw==", } code = c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/leave.go000066400000000000000000000023671317277572200263760ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/mitchellh/cli" "strings" ) // LeaveCommand is a Command implementation that instructs // the Serf agent to gracefully leave the cluster type LeaveCommand struct { Ui cli.Ui } func (c *LeaveCommand) Help() string { helpText := ` Usage: serf leave Causes the agent to gracefully leave the Serf cluster and shutdown. Options: -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. 
` return strings.TrimSpace(helpText) } func (c *LeaveCommand) Run(args []string) int { cmdFlags := flag.NewFlagSet("leave", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() if err := client.Leave(); err != nil { c.Ui.Error(fmt.Sprintf("Error leaving: %s", err)) return 1 } c.Ui.Output("Graceful leave complete") return 0 } func (c *LeaveCommand) Synopsis() string { return "Gracefully leaves the Serf cluster and shuts down" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/leave_test.go000066400000000000000000000011541317277572200274260ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "strings" "testing" ) func TestLeaveCommand_implements(t *testing.T) { var _ cli.Command = &LeaveCommand{} } func TestLeaveCommandRun(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &LeaveCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "leave complete") { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/members.go000066400000000000000000000122361317277572200267300ustar00rootroot00000000000000package command import ( "flag" "fmt" "net" "strings" "github.com/hashicorp/serf/cmd/serf/command/agent" "github.com/mitchellh/cli" "github.com/ryanuber/columnize" ) // MembersCommand is a Command implementation that queries a running // Serf agent for the members currently in the cluster. 
type MembersCommand struct { Ui cli.Ui } // A container of member details. Maintaining a command-specific struct here // makes sense so that the agent.Member struct can evolve without changing the // keys in the output interface. type Member struct { detail bool Name string `json:"name"` Addr string `json:"addr"` Port uint16 `json:"port"` Tags map[string]string `json:"tags"` Status string `json:"status"` Proto map[string]uint8 `json:"protocol"` } type MemberContainer struct { Members []Member `json:"members"` } func (c MemberContainer) String() string { var result []string for _, member := range c.Members { tags := strings.Join(agent.MarshalTags(member.Tags), ",") line := fmt.Sprintf("%s|%s|%s|%s", member.Name, member.Addr, member.Status, tags) if member.detail { line += fmt.Sprintf( "|Protocol Version: %d|Available Protocol Range: [%d, %d]", member.Proto["version"], member.Proto["min"], member.Proto["max"]) } result = append(result, line) } return columnize.SimpleFormat(result) } func (c *MembersCommand) Help() string { helpText := ` Usage: serf members [options] Outputs the members of a running Serf agent. Options: -detailed Additional information such as protocol versions will be shown (only affects text output format). -format If provided, output is returned in the specified format. Valid formats are 'json', and 'text' (default) -name= If provided, only members matching the regexp are returned. The regexp is anchored at the start and end, and must be a full match. -role= If provided, output is filtered to only nodes matching the regular expression for role. '-role' is deprecated in favor of '-tag role=foo'. The regexp is anchored at the start and end, and must be a full match. -status= If provided, output is filtered to only nodes matching the regular expression for status -tag = If provided, output is filtered to only nodes with the tag with value matching the regular expression. tag can be specified multiple times to filter on multiple keys. 
The regexp is anchored at the start and end, and must be a full match. -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. ` return strings.TrimSpace(helpText) } func (c *MembersCommand) Run(args []string) int { var detailed bool var roleFilter, statusFilter, nameFilter, format string var tags []string cmdFlags := flag.NewFlagSet("members", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.BoolVar(&detailed, "detailed", false, "detailed output") cmdFlags.StringVar(&roleFilter, "role", "", "role filter") cmdFlags.StringVar(&statusFilter, "status", "", "status filter") cmdFlags.StringVar(&format, "format", "text", "output format") cmdFlags.Var((*agent.AppendSliceValue)(&tags), "tag", "tag filter") cmdFlags.StringVar(&nameFilter, "name", "", "name filter") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } // Deprecation warning for role if roleFilter != "" { c.Ui.Output("Deprecation warning: 'Role' has been replaced with 'Tags'") tags = append(tags, fmt.Sprintf("role=%s", roleFilter)) } reqtags, err := agent.UnmarshalTags(tags) if err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() members, err := client.MembersFiltered(reqtags, statusFilter, nameFilter) if err != nil { c.Ui.Error(fmt.Sprintf("Error retrieving members: %s", err)) return 1 } result := MemberContainer{} for _, member := range members { addr := net.TCPAddr{IP: member.Addr, Port: int(member.Port)} result.Members = append(result.Members, Member{ detail: detailed, Name: member.Name, Addr: addr.String(), Port: member.Port, Tags: member.Tags, Status: member.Status, Proto: map[string]uint8{ "min": member.DelegateMin, "max": member.DelegateMax, "version": member.DelegateCur, }, }) } 
output, err := formatOutput(result, format) if err != nil { c.Ui.Error(fmt.Sprintf("Encoding error: %s", err)) return 1 } c.Ui.Output(string(output)) return 0 } func (c *MembersCommand) Synopsis() string { return "Lists the members of a Serf cluster" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/members_test.go000066400000000000000000000111701317277572200277630ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "strings" "testing" ) func TestMembersCommand_implements(t *testing.T) { var _ cli.Command = &MembersCommand{} } func TestMembersCommandRun(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_statusFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-status=alive", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_statusFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-status=(failed|left)", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_roleFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-role=test", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_roleFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-role=primary", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_tagFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_tagFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=nomatch", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_multiTagFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=foo", "-tag=tag2=bar", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_multiTagFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=foo", "-tag=tag2=nomatch", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/monitor.go000066400000000000000000000054251317277572200267670ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/hashicorp/logutils" "github.com/mitchellh/cli" "strings" "sync" ) // MonitorCommand is a Command implementation that streams the logs // of a running Serf agent. type MonitorCommand struct { ShutdownCh <-chan struct{} Ui cli.Ui lock sync.Mutex quitting bool } func (c *MonitorCommand) Help() string { helpText := ` Usage: serf monitor [options] Shows recent log messages of a Serf agent, and attaches to the agent, outputting log messages as they occur in real time. The monitor lets you listen for log levels that may be filtered out of the Serf agent. 
For example, your agent may only be logging at the INFO level, but with the monitor you can see the DEBUG level logs. Options: -log-level=info Log level of the agent. -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. ` return strings.TrimSpace(helpText) } func (c *MonitorCommand) Run(args []string) int { var logLevel string cmdFlags := flag.NewFlagSet("monitor", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.StringVar(&logLevel, "log-level", "INFO", "log level") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() eventCh := make(chan map[string]interface{}, 1024) streamHandle, err := client.Stream("*", eventCh) if err != nil { c.Ui.Error(fmt.Sprintf("Error starting stream: %s", err)) return 1 } defer client.Stop(streamHandle) logCh := make(chan string, 1024) monHandle, err := client.Monitor(logutils.LogLevel(logLevel), logCh) if err != nil { c.Ui.Error(fmt.Sprintf("Error starting monitor: %s", err)) return 1 } defer client.Stop(monHandle) eventDoneCh := make(chan struct{}) go func() { defer close(eventDoneCh) OUTER: for { select { case log := <-logCh: if log == "" { break OUTER } c.Ui.Info(log) case event := <-eventCh: if event == nil { break OUTER } c.Ui.Info("Event Info:") for key, val := range event { c.Ui.Info(fmt.Sprintf("\t%s: %#v", key, val)) } } } c.lock.Lock() defer c.lock.Unlock() if !c.quitting { c.Ui.Info("") c.Ui.Output("Remote side ended the monitor! 
This usually means that the\n" + "remote side has exited or crashed.") } }() select { case <-eventDoneCh: return 1 case <-c.ShutdownCh: c.lock.Lock() c.quitting = true c.lock.Unlock() } return 0 } func (c *MonitorCommand) Synopsis() string { return "Stream logs from a Serf agent" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/output.go000066400000000000000000000015071317277572200266350ustar00rootroot00000000000000package command import ( "encoding/json" "fmt" "strings" ) // Format some raw data for output. For better or worse, this currently forces // the passed data object to implement fmt.Stringer, since it's pretty hard to // implement a canonical *-to-string function. func formatOutput(data interface{}, format string) ([]byte, error) { var out string switch format { case "json": jsonout, err := json.MarshalIndent(data, "", " ") if err != nil { return nil, err } out = string(jsonout) case "text": out = data.(fmt.Stringer).String() default: return nil, fmt.Errorf("Invalid output format \"%s\"", format) } return []byte(prepareOutput(out)), nil } // Apply some final formatting to make sure we don't end up with extra newlines func prepareOutput(in string) string { return strings.TrimSpace(string(in)) } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/output_test.go000066400000000000000000000026351317277572200276770ustar00rootroot00000000000000package command import ( "fmt" "testing" ) type OutputTest struct { XMLName string `json:"-"` TestString string `json:"test_string"` TestInt int `json:"test_int"` TestNil []byte `json:"test_nil"` TestNested OutputTestNested `json:"nested"` } type OutputTestNested struct { NestKey string `json:"nest_key"` } func (o OutputTest) String() string { return fmt.Sprintf("%s %d %s", o.TestString, o.TestInt, o.TestNil) } func TestCommandOutput(t *testing.T) { var formatted []byte result := OutputTest{ TestString: "woooo a string", TestInt: 77, TestNil: nil, TestNested: OutputTestNested{ 
NestKey: "nest_value", }, } json_expected := `{ "test_string": "woooo a string", "test_int": 77, "test_nil": null, "nested": { "nest_key": "nest_value" } }` formatted, _ = formatOutput(result, "json") if string(formatted) != json_expected { t.Fatalf("bad json:\n%s\n\nexpected:\n%s", formatted, json_expected) } text_expected := "woooo a string 77" formatted, _ = formatOutput(result, "text") if string(formatted) != text_expected { t.Fatalf("bad output:\n\"%s\"\n\nexpected:\n\"%s\"", formatted, text_expected) } error_expected := `Invalid output format "boo"` _, err := formatOutput(result, "boo") if err.Error() != error_expected { t.Fatalf("bad output:\n\"%s\"\n\nexpected:\n\"%s\"", err.Error(), error_expected) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/query.go000066400000000000000000000140621317277572200264420ustar00rootroot00000000000000package command import ( "flag" "fmt" "strings" "time" "github.com/hashicorp/serf/client" "github.com/hashicorp/serf/cmd/serf/command/agent" "github.com/mitchellh/cli" ) // QueryCommand is a Command implementation that is used to trigger a new // query and wait for responses and acks type QueryCommand struct { ShutdownCh <-chan struct{} Ui cli.Ui } func (c *QueryCommand) Help() string { helpText := ` Usage: serf query [options] name payload Dispatches a query to the Serf cluster. Options: -format If provided, output is returned in the specified format. Valid formats are 'json', and 'text' (default) -node=NAME This flag can be provided multiple times to filter responses to only named nodes. -tag key=regexp This flag can be provided multiple times to filter responses to only nodes matching the tags. -timeout="15s" Providing a timeout overrides the default timeout. -no-ack Setting this prevents nodes from sending an acknowledgement of the query. -relay-factor If provided, query responses will be relayed through this number of extra nodes for redundancy. 
-rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. ` return strings.TrimSpace(helpText) } func (c *QueryCommand) Run(args []string) int { var noAck bool var nodes []string var tags []string var timeout time.Duration var format string var relayFactor int cmdFlags := flag.NewFlagSet("query", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.Var((*agent.AppendSliceValue)(&nodes), "node", "node filter") cmdFlags.Var((*agent.AppendSliceValue)(&tags), "tag", "tag filter") cmdFlags.DurationVar(&timeout, "timeout", 0, "query timeout") cmdFlags.BoolVar(&noAck, "no-ack", false, "no-ack") cmdFlags.StringVar(&format, "format", "text", "output format") cmdFlags.IntVar(&relayFactor, "relay-factor", 0, "response relay count") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } // Set up the filter tags filterTags, err := agent.UnmarshalTags(tags) if err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return 1 } args = cmdFlags.Args() if len(args) < 1 { c.Ui.Error("A query name must be specified.") c.Ui.Error("") c.Ui.Error(c.Help()) return 1 } else if len(args) > 2 { c.Ui.Error("Too many command line arguments. 
Only a name and payload may be specified.") c.Ui.Error("") c.Ui.Error(c.Help()) return 1 } if relayFactor > 255 || relayFactor < 0 { c.Ui.Error("Relay factor must be between 0 and 255") return 1 } name := args[0] var payload []byte if len(args) == 2 { payload = []byte(args[1]) } cl, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer cl.Close() // Set up the response handler var handler queryRespFormat switch format { case "text": handler = &textQueryRespFormat{ ui: c.Ui, name: name, noAck: noAck, } case "json": handler = &jsonQueryRespFormat{ ui: c.Ui, Responses: make(map[string]string), } default: c.Ui.Error(fmt.Sprintf("Invalid format: %s", format)) return 1 } ackCh := make(chan string, 128) respCh := make(chan client.NodeResponse, 128) params := client.QueryParam{ FilterNodes: nodes, FilterTags: filterTags, RequestAck: !noAck, RelayFactor: uint8(relayFactor), Timeout: timeout, Name: name, Payload: payload, AckCh: ackCh, RespCh: respCh, } if err := cl.Query(&params); err != nil { c.Ui.Error(fmt.Sprintf("Error sending query: %s", err)) return 1 } handler.Started() OUTER: for { select { case a := <-ackCh: if a == "" { break OUTER } handler.AckReceived(a) case r := <-respCh: if r.From == "" { break OUTER } handler.ResponseReceived(r) case <-c.ShutdownCh: return 1 } } if err := handler.Finished(); err != nil { return 1 } return 0 } func (c *QueryCommand) Synopsis() string { return "Send a query to the Serf cluster" } // queryRespFormat is used to switch our handler based on the format type queryRespFormat interface { Started() AckReceived(from string) ResponseReceived(resp client.NodeResponse) Finished() error } // textQueryRespFormat is used to output the results in a human-readable // format that is streamed as results come in type textQueryRespFormat struct { ui cli.Ui name string noAck bool numAcks int numResp int } func (t *textQueryRespFormat) Started() { 
t.ui.Output(fmt.Sprintf("Query '%s' dispatched", t.name)) } func (t *textQueryRespFormat) AckReceived(from string) { t.numAcks++ t.ui.Info(fmt.Sprintf("Ack from '%s'", from)) } func (t *textQueryRespFormat) ResponseReceived(r client.NodeResponse) { t.numResp++ // Remove the trailing newline if there is one payload := r.Payload if n := len(payload); n > 0 && payload[n-1] == '\n' { payload = payload[:n-1] } t.ui.Info(fmt.Sprintf("Response from '%s': %s", r.From, payload)) } func (t *textQueryRespFormat) Finished() error { if !t.noAck { t.ui.Output(fmt.Sprintf("Total Acks: %d", t.numAcks)) } t.ui.Output(fmt.Sprintf("Total Responses: %d", t.numResp)) return nil } // jsonQueryRespFormat is used to output the results in a JSON format type jsonQueryRespFormat struct { ui cli.Ui Acks []string Responses map[string]string } func (j *jsonQueryRespFormat) Started() {} func (j *jsonQueryRespFormat) AckReceived(from string) { j.Acks = append(j.Acks, from) } func (j *jsonQueryRespFormat) ResponseReceived(r client.NodeResponse) { j.Responses[r.From] = string(r.Payload) } func (j *jsonQueryRespFormat) Finished() error { output, err := formatOutput(j, "json") if err != nil { j.ui.Error(fmt.Sprintf("Encoding error: %s", err)) return err } j.ui.Output(string(output)) return nil } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/query_test.go000066400000000000000000000113131317277572200274750ustar00rootroot00000000000000package command import ( "encoding/json" "github.com/mitchellh/cli" "strings" "testing" ) func TestQueryCommand_implements(t *testing.T) { var _ cli.Command = &QueryCommand{} } func TestQueryCommandRun_noName(t *testing.T) { ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=foo"} code := c.Run(args) if code != 1 { t.Fatalf("bad: %d", code) } if !strings.Contains(ui.ErrorWriter.String(), "query name") { t.Fatalf("bad: %#v", ui.ErrorWriter.String()) } } func TestQueryCommandRun_tooMany(t *testing.T) { ui := new(cli.MockUi) 
c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=foo", "foo", "bar", "baz"} code := c.Run(args) if code != 1 { t.Fatalf("bad: %d", code) } if !strings.Contains(ui.ErrorWriter.String(), "Too many") { t.Fatalf("bad: %#v", ui.ErrorWriter.String()) } } func TestQueryCommandRun(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr, "-timeout=500ms", "deploy", "abcd1234"} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_tagFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=foo", "foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_tagFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=nomatch", "foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_nodeFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-node", a1.SerfConfig().NodeName, "foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_nodeFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-node=whoisthis", "foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_formatJSON(t *testing.T) { type output struct { Acks []string Responses map[string]string } a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr, "-format=json", "-timeout=500ms", "deploy", "abcd1234"} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } // Decode the output dec := json.NewDecoder(ui.OutputWriter) var out output if err := dec.Decode(&out); err != nil { t.Fatalf("Decode err: %v", err) } if out.Acks[0] != a1.SerfConfig().NodeName { t.Fatalf("bad: %#v", out) } } func TestQueryCommandRun_invalidRelayFactor(t *testing.T) { ui := new(cli.MockUi) { c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=foo", "-relay-factor=9999", "foo"} code := c.Run(args) if code != 1 { t.Fatalf("bad: %d", code) } if !strings.Contains(ui.ErrorWriter.String(), "Relay factor must be") { t.Fatalf("bad: %#v", ui.ErrorWriter.String()) } } { c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=foo", "-relay-factor=-1", "foo"} code := c.Run(args) if code != 1 { t.Fatalf("bad: %d", code) } if !strings.Contains(ui.ErrorWriter.String(), "Relay factor must be") { t.Fatalf("bad: %#v", ui.ErrorWriter.String()) } } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/reachability.go000066400000000000000000000105511317277572200277340ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/hashicorp/serf/client" "github.com/hashicorp/serf/serf" "github.com/mitchellh/cli" "strings" "time" ) const ( tooManyAcks = `This could mean Serf is detecting false failures due to a misconfiguration or network issue.` tooFewAcks = `This could mean Serf gossip packets are being lost due to a misconfiguration or network issue.` duplicateResponses = `Duplicate responses mean there is a misconfiguration. 
Verify that node names are unique.` troubleshooting = ` Troubleshooting tips: * Ensure that the bind addr:port is accessible by all other nodes * If an advertise address is set, ensure it routes to the bind address * Check that no nodes are behind a NAT * If nodes are behind firewalls or iptables, check that Serf traffic is permitted (UDP and TCP) * Verify networking equipment is functional` ) // ReachabilityCommand is a Command implementation that is used to trigger // a new reachability test type ReachabilityCommand struct { ShutdownCh <-chan struct{} Ui cli.Ui } func (c *ReachabilityCommand) Help() string { helpText := ` Usage: serf reachability [options] Tests the network reachability of this node Options: -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. -verbose Verbose mode ` return strings.TrimSpace(helpText) } func (c *ReachabilityCommand) Run(args []string) int { var verbose bool cmdFlags := flag.NewFlagSet("reachability", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.BoolVar(&verbose, "verbose", false, "verbose mode") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } cl, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer cl.Close() ackCh := make(chan string, 128) // Get the list of members members, err := cl.Members() if err != nil { c.Ui.Error(fmt.Sprintf("Error getting members: %s", err)) return 1 } // Get only the live members liveMembers := make(map[string]struct{}) for _, m := range members { if m.Status == "alive" { liveMembers[m.Name] = struct{}{} } } c.Ui.Output(fmt.Sprintf("Total members: %d, live members: %d", len(members), len(liveMembers))) // Start the query params := client.QueryParam{ RequestAck: true, Name: serf.InternalQueryPrefix + "ping", AckCh: ackCh, } if err := cl.Query(&params); err != nil { 
c.Ui.Error(fmt.Sprintf("Error sending query: %s", err)) return 1 } c.Ui.Output("Starting reachability test...") start := time.Now() last := time.Now() // Track responses and acknowledgements exit := 0 dups := false numAcks := 0 acksFrom := make(map[string]struct{}, len(members)) OUTER: for { select { case a := <-ackCh: if a == "" { break OUTER } if verbose { c.Ui.Output(fmt.Sprintf("\tAck from '%s'", a)) } numAcks++ if _, ok := acksFrom[a]; ok { dups = true c.Ui.Output(fmt.Sprintf("Duplicate response from '%v'", a)) } acksFrom[a] = struct{}{} last = time.Now() case <-c.ShutdownCh: c.Ui.Error("Test interrupted") return 1 } } if verbose { total := float64(time.Now().Sub(start)) / float64(time.Second) timeToLast := float64(last.Sub(start)) / float64(time.Second) c.Ui.Output(fmt.Sprintf("Query time: %0.2f sec, time to last response: %0.2f sec", total, timeToLast)) } // Print troubleshooting info for duplicate responses if dups { c.Ui.Output(duplicateResponses) exit = 1 } n := len(liveMembers) if numAcks == n { c.Ui.Output("Successfully contacted all live nodes") } else if numAcks > n { c.Ui.Output("Received more acks than live nodes! Acks from non-live nodes:") for m := range acksFrom { if _, ok := liveMembers[m]; !ok { c.Ui.Output(fmt.Sprintf("\t%s", m)) } } c.Ui.Output(tooManyAcks) c.Ui.Output(troubleshooting) return 1 } else if numAcks < n { c.Ui.Output("Received fewer acks than live nodes! 
Missing acks from:") for m := range liveMembers { if _, ok := acksFrom[m]; !ok { c.Ui.Output(fmt.Sprintf("\t%s", m)) } } c.Ui.Output(tooFewAcks) c.Ui.Output(troubleshooting) return 1 } return exit } func (c *ReachabilityCommand) Synopsis() string { return "Test network reachability" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/reachability_test.go000066400000000000000000000014231317277572200307710ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "strings" "testing" ) func TestReachabilityCommand_implements(t *testing.T) { var _ cli.Command = &ReachabilityCommand{} } func TestReachabilityCommand_Run(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &ReachabilityCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr, "-verbose"} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "Successfully") { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/rpc.go000066400000000000000000000017311317277572200260600ustar00rootroot00000000000000package command import ( "flag" "github.com/hashicorp/serf/client" "os" ) // RPCAddrFlag returns a pointer to a string that will be populated // when the given flagset is parsed with the RPC address of the Serf. func RPCAddrFlag(f *flag.FlagSet) *string { defaultRpcAddr := os.Getenv("SERF_RPC_ADDR") if defaultRpcAddr == "" { defaultRpcAddr = "127.0.0.1:7373" } return f.String("rpc-addr", defaultRpcAddr, "RPC address of the Serf agent") } // RPCAuthFlag returns a pointer to a string that will be populated // when the given flagset is parsed with the RPC auth token of the Serf. 
func RPCAuthFlag(f *flag.FlagSet) *string { rpcAuth := os.Getenv("SERF_RPC_AUTH") return f.String("rpc-auth", rpcAuth, "RPC auth token of the Serf agent") } // RPCClient returns a new Serf RPC client with the given address. func RPCClient(addr, auth string) (*client.RPCClient, error) { config := client.Config{Addr: addr, AuthKey: auth} return client.ClientFromConfig(&config) } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/rtt.go000066400000000000000000000050161317277572200261050ustar00rootroot00000000000000package command import ( "flag" "fmt" "strings" "github.com/mitchellh/cli" ) // RTTCommand is a Command implementation that allows users to query the // estimated round trip time between nodes using network coordinates. type RTTCommand struct { Ui cli.Ui } func (c *RTTCommand) Help() string { helpText := ` Usage: serf rtt [options] node1 [node2] Estimates the round trip time between two nodes using Serf's network coordinate model of the cluster. At least one node name is required. If the second node name isn't given, it is set to the agent's node name. Note that these are node names as known to Serf as "serf members" would show, not IP addresses. Options: -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. ` return strings.TrimSpace(helpText) } func (c *RTTCommand) Run(args []string) int { cmdFlags := flag.NewFlagSet("rtt", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } // Create the RPC client. client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() // They must provide at least one node. 
nodes := cmdFlags.Args() if len(nodes) == 1 { stats, err := client.Stats() if err != nil { c.Ui.Error(fmt.Sprintf("Error querying agent: %s", err)) return 1 } nodes = append(nodes, stats["agent"]["name"]) } else if len(nodes) != 2 { c.Ui.Error("One or two node names must be specified") c.Ui.Error("") c.Ui.Error(c.Help()) return 1 } // Get the coordinates. coord1, err := client.GetCoordinate(nodes[0]) if err != nil { c.Ui.Error(fmt.Sprintf("Error getting coordinates: %s", err)) return 1 } if coord1 == nil { c.Ui.Error(fmt.Sprintf("Could not find a coordinate for node %q", nodes[0])) return 1 } coord2, err := client.GetCoordinate(nodes[1]) if err != nil { c.Ui.Error(fmt.Sprintf("Error getting coordinates: %s", err)) return 1 } if coord2 == nil { c.Ui.Error(fmt.Sprintf("Could not find a coordinate for node %q", nodes[1])) return 1 } // Report the round trip time. dist := fmt.Sprintf("%.3f ms", coord1.DistanceTo(coord2).Seconds()*1000.0) c.Ui.Output(fmt.Sprintf("Estimated %s <-> %s rtt: %s", nodes[0], nodes[1], dist)) return 0 } func (c *RTTCommand) Synopsis() string { return "Estimates network round trip time between nodes" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/rtt_test.go000066400000000000000000000037121317277572200271450ustar00rootroot00000000000000package command import ( "fmt" "github.com/mitchellh/cli" "strings" "testing" ) func TestRTTCommand_Implements(t *testing.T) { var _ cli.Command = &RTTCommand{} } func TestRTTCommand_Run_BadArgs(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() _, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &RTTCommand{Ui: ui} code := c.Run([]string{}) if code != 1 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } } func TestRTTCommand_Run(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() coord, ok := a1.Serf().GetCachedCoordinate(a1.SerfConfig().NodeName) if !ok { t.Fatalf("should have a coordinate for the agent") } dist_str := fmt.Sprintf("%.3f ms", coord.DistanceTo(coord).Seconds()*1000.0) // Try with the default of the agent's node. args := []string{"-rpc-addr=" + rpcAddr, a1.SerfConfig().NodeName} { ui := new(cli.MockUi) c := &RTTCommand{Ui: ui} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } // Make sure the proper RTT was reported in the output. expected := fmt.Sprintf("rtt: %s", dist_str) if !strings.Contains(ui.OutputWriter.String(), expected) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } // Explicitly set the agent's node twice. args = append(args, a1.SerfConfig().NodeName) { ui := new(cli.MockUi) c := &RTTCommand{Ui: ui} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } // Make sure the proper RTT was reported in the output. expected := fmt.Sprintf("rtt: %s", dist_str) if !strings.Contains(ui.OutputWriter.String(), expected) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } // Try an unknown node. args = []string{"nope"} { ui := new(cli.MockUi) c := &RTTCommand{Ui: ui} code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/tags.go000066400000000000000000000035121317277572200262310ustar00rootroot00000000000000package command import ( "flag" "fmt" "strings" "github.com/hashicorp/serf/cmd/serf/command/agent" "github.com/mitchellh/cli" ) // TagsCommand is an interface to dynamically add or otherwise modify a // running serf agent's tags. type TagsCommand struct { Ui cli.Ui } func (c *TagsCommand) Help() string { helpText := ` Usage: serf tags [options] ... 
Modifies tags on a running Serf agent. Options: -rpc-addr=127.0.0.1:7373 RPC Address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. -set key=value Creates or modifies the value of a tag -delete key Removes a tag, if present ` return strings.TrimSpace(helpText) } func (c *TagsCommand) Run(args []string) int { var tagPairs []string var delTags []string cmdFlags := flag.NewFlagSet("tags", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.Var((*agent.AppendSliceValue)(&tagPairs), "set", "tag pairs, specified as key=value") cmdFlags.Var((*agent.AppendSliceValue)(&delTags), "delete", "tag keys to unset") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } if len(tagPairs) == 0 && len(delTags) == 0 { c.Ui.Output(c.Help()) return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() tags, err := agent.UnmarshalTags(tagPairs) if err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return 1 } if err := client.UpdateTags(tags, delTags); err != nil { c.Ui.Error(fmt.Sprintf("Error setting tags: %s", err)) return 1 } c.Ui.Output("Successfully updated agent tags") return 0 } func (c *TagsCommand) Synopsis() string { return "Modify tags of a running Serf agent" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/tags_test.go000066400000000000000000000022301317277572200272640ustar00rootroot00000000000000package command import ( "github.com/hashicorp/serf/client" "github.com/mitchellh/cli" "strings" "testing" ) func TestTagsCommand_implements(t *testing.T) { var _ cli.Command = &TagsCommand{} } func TestTagsCommandRun(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &TagsCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-delete", "tag2", 
"-set", "a=1", "-set", "b=2", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "Successfully updated agent tags") { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } rpcClient, err := client.NewRPCClient(rpcAddr) if err != nil { t.Fatalf("err: %s", err) } mem, err := rpcClient.Members() if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 1 { t.Fatalf("bad: %v", mem) } m0 := mem[0] if _, ok := m0.Tags["tag2"]; ok { t.Fatalf("bad: %v", m0.Tags) } if _, ok := m0.Tags["a"]; !ok { t.Fatalf("bad: %v", m0.Tags) } if _, ok := m0.Tags["b"]; !ok { t.Fatalf("bad: %v", m0.Tags) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/util_test.go000066400000000000000000000032771317277572200273170ustar00rootroot00000000000000package command import ( "fmt" "io" "math/rand" "net" "os" "testing" "time" "github.com/hashicorp/serf/cmd/serf/command/agent" "github.com/hashicorp/serf/serf" "github.com/hashicorp/serf/testutil" ) func init() { // Seed the random number generator rand.Seed(time.Now().UnixNano()) } func testAgent(t *testing.T) *agent.Agent { agentConfig := agent.DefaultConfig() serfConfig := serf.DefaultConfig() return testAgentWithConfig(t, agentConfig, serfConfig) } func testAgentWithConfig(t *testing.T, agentConfig *agent.Config, serfConfig *serf.Config) *agent.Agent { serfConfig.MemberlistConfig.BindAddr = testutil.GetBindAddr().String() serfConfig.MemberlistConfig.ProbeInterval = 50 * time.Millisecond serfConfig.MemberlistConfig.ProbeTimeout = 25 * time.Millisecond serfConfig.MemberlistConfig.SuspicionMult = 1 serfConfig.NodeName = serfConfig.MemberlistConfig.BindAddr serfConfig.Tags = map[string]string{"role": "test", "tag1": "foo", "tag2": "bar"} agent, err := agent.Create(agentConfig, serfConfig, nil) if err != nil { t.Fatalf("err: %s", err) } if err := agent.Start(); err != nil { t.Fatalf("err: %s", err) } return agent } func getRPCAddr() string { 
for i := 0; i < 500; i++ { l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", rand.Int31n(25000)+1024)) if err == nil { l.Close() return l.Addr().String() } } panic("no listener") } func testIPC(t *testing.T, a *agent.Agent) (string, *agent.AgentIPC) { rpcAddr := getRPCAddr() l, err := net.Listen("tcp", rpcAddr) if err != nil { t.Fatalf("err: %s", err) } lw := agent.NewLogWriter(512) mult := io.MultiWriter(os.Stderr, lw) ipc := agent.NewAgentIPC(a, "", l, mult, lw) return rpcAddr, ipc } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/version.go000066400000000000000000000016371317277572200267660ustar00rootroot00000000000000package command import ( "bytes" "fmt" "github.com/hashicorp/serf/serf" "github.com/mitchellh/cli" ) // VersionCommand is a Command implementation prints the version. type VersionCommand struct { Revision string Version string VersionPrerelease string Ui cli.Ui } func (c *VersionCommand) Help() string { return "" } func (c *VersionCommand) Run(_ []string) int { var versionString bytes.Buffer fmt.Fprintf(&versionString, "Serf v%s", c.Version) if c.VersionPrerelease != "" { fmt.Fprintf(&versionString, ".%s", c.VersionPrerelease) if c.Revision != "" { fmt.Fprintf(&versionString, " (%s)", c.Revision) } } c.Ui.Output(versionString.String()) c.Ui.Output(fmt.Sprintf("Agent Protocol: %d (Understands back to: %d)", serf.ProtocolVersionMax, serf.ProtocolVersionMin)) return 0 } func (c *VersionCommand) Synopsis() string { return "Prints the Serf version" } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/command/version_test.go000066400000000000000000000002401317277572200300120ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "testing" ) func TestVersionCommand_implements(t *testing.T) { var _ cli.Command = &VersionCommand{} } 
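The string assembly in VersionCommand.Run above is easy to get subtly wrong: the prerelease tag is joined with a `.`, and the revision is only printed for prerelease builds. The standalone sketch below extracts that formatting into a helper so it can be exercised in isolation; the `formatVersion` name is ours for illustration, not part of the package.

```go
package main

import (
	"bytes"
	"fmt"
)

// formatVersion mirrors the formatting in VersionCommand.Run: append the
// prerelease tag after a '.', and show the revision only for prerelease
// builds. (Helper name is illustrative, not part of the serf codebase.)
func formatVersion(version, prerelease, revision string) string {
	var b bytes.Buffer
	fmt.Fprintf(&b, "Serf v%s", version)
	if prerelease != "" {
		fmt.Fprintf(&b, ".%s", prerelease)
		if revision != "" {
			fmt.Fprintf(&b, " (%s)", revision)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(formatVersion("0.8.2", "dev", "c20a0b1")) // Serf v0.8.2.dev (c20a0b1)
	fmt.Println(formatVersion("0.8.1", "", "c20a0b1"))    // Serf v0.8.1
}
```

Note how a final release (empty prerelease) suppresses the revision even when one was compiled in, which matches the nesting of the two `if` blocks in Run.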
golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/commands.go000066400000000000000000000051041317277572200254550ustar00rootroot00000000000000package main import ( "os" "os/signal" "github.com/hashicorp/serf/cmd/serf/command" "github.com/hashicorp/serf/cmd/serf/command/agent" "github.com/mitchellh/cli" ) // Commands is the mapping of all the available Serf commands. var Commands map[string]cli.CommandFactory func init() { ui := &cli.BasicUi{Writer: os.Stdout} Commands = map[string]cli.CommandFactory{ "agent": func() (cli.Command, error) { return &agent.Command{ Ui: ui, ShutdownCh: make(chan struct{}), }, nil }, "event": func() (cli.Command, error) { return &command.EventCommand{ Ui: ui, }, nil }, "query": func() (cli.Command, error) { return &command.QueryCommand{ ShutdownCh: makeShutdownCh(), Ui: ui, }, nil }, "force-leave": func() (cli.Command, error) { return &command.ForceLeaveCommand{ Ui: ui, }, nil }, "join": func() (cli.Command, error) { return &command.JoinCommand{ Ui: ui, }, nil }, "keygen": func() (cli.Command, error) { return &command.KeygenCommand{ Ui: ui, }, nil }, "keys": func() (cli.Command, error) { return &command.KeysCommand{ Ui: ui, }, nil }, "leave": func() (cli.Command, error) { return &command.LeaveCommand{ Ui: ui, }, nil }, "members": func() (cli.Command, error) { return &command.MembersCommand{ Ui: ui, }, nil }, "monitor": func() (cli.Command, error) { return &command.MonitorCommand{ ShutdownCh: makeShutdownCh(), Ui: ui, }, nil }, "tags": func() (cli.Command, error) { return &command.TagsCommand{ Ui: ui, }, nil }, "reachability": func() (cli.Command, error) { return &command.ReachabilityCommand{ ShutdownCh: makeShutdownCh(), Ui: ui, }, nil }, "rtt": func() (cli.Command, error) { return &command.RTTCommand{ Ui: ui, }, nil }, "info": func() (cli.Command, error) { return &command.InfoCommand{ Ui: ui, }, nil }, "version": func() (cli.Command, error) { return &command.VersionCommand{ Revision: GitCommit, Version: Version, 
VersionPrerelease: VersionPrerelease, Ui: ui, }, nil }, } } // makeShutdownCh returns a channel that can be used for shutdown // notifications for commands. This channel will send a message for every // interrupt received. func makeShutdownCh() <-chan struct{} { resultCh := make(chan struct{}) signalCh := make(chan os.Signal, 4) signal.Notify(signalCh, os.Interrupt) go func() { for { <-signalCh resultCh <- struct{}{} } }() return resultCh } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/main.go000066400000000000000000000013441317277572200246020ustar00rootroot00000000000000package main import ( "fmt" "github.com/mitchellh/cli" "io/ioutil" "log" "os" ) func main() { os.Exit(realMain()) } func realMain() int { log.SetOutput(ioutil.Discard) // Get the command line args. We shortcut "--version" and "-v" to // just show the version. args := os.Args[1:] for _, arg := range args { if arg == "-v" || arg == "--version" { newArgs := make([]string, len(args)+1) newArgs[0] = "version" copy(newArgs[1:], args) args = newArgs break } } cli := &cli.CLI{ Args: args, Commands: Commands, HelpFunc: cli.BasicHelpFunc("serf"), } exitCode, err := cli.Run() if err != nil { fmt.Fprintf(os.Stderr, "Error executing CLI: %s\n", err.Error()) return 1 } return exitCode } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/main_test.go000066400000000000000000000000151317277572200256330ustar00rootroot00000000000000package main golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/cmd/serf/version.go000066400000000000000000000006611317277572200253440ustar00rootroot00000000000000package main // The git commit that was compiled. This will be filled in by the compiler. var GitCommit string // The main version number that is being run at the moment. const Version = "0.8.2" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release // such as "dev" (in development), "beta", "rc1", etc. 
const VersionPrerelease = "dev" golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/000077500000000000000000000000001317277572200237525ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/client.go000066400000000000000000000156621317277572200255670ustar00rootroot00000000000000package coordinate import ( "fmt" "math" "sort" "sync" "time" ) // Client manages the estimated network coordinate for a given node, and adjusts // it as the node observes round trip times and estimated coordinates from other // nodes. The core algorithm is based on Vivaldi; see the documentation for Config // for more details. type Client struct { // coord is the current estimate of the client's network coordinate. coord *Coordinate // origin is a coordinate sitting at the origin. origin *Coordinate // config contains the tuning parameters that govern the performance of // the algorithm. config *Config // adjustmentIndex is the current index into the adjustmentSamples slice. adjustmentIndex uint // adjustment is used to store samples for the adjustment calculation. adjustmentSamples []float64 // latencyFilterSamples is used to store the last several RTT samples, // keyed by node name. We will use the config's LatencyFilterSize // value to determine how many samples we keep, per node. latencyFilterSamples map[string][]float64 // stats is used to record events that occur when updating coordinates. stats ClientStats // mutex enables safe concurrent access to the client. mutex sync.RWMutex } // ClientStats is used to record events that occur when updating coordinates. type ClientStats struct { // Resets is incremented any time we reset our local coordinate because // our calculations have resulted in an invalid state. Resets int } // NewClient creates a new Client and verifies the configuration is valid.
func NewClient(config *Config) (*Client, error) { if !(config.Dimensionality > 0) { return nil, fmt.Errorf("dimensionality must be >0") } return &Client{ coord: NewCoordinate(config), origin: NewCoordinate(config), config: config, adjustmentIndex: 0, adjustmentSamples: make([]float64, config.AdjustmentWindowSize), latencyFilterSamples: make(map[string][]float64), }, nil } // GetCoordinate returns a copy of the coordinate for this client. func (c *Client) GetCoordinate() *Coordinate { c.mutex.RLock() defer c.mutex.RUnlock() return c.coord.Clone() } // SetCoordinate forces the client's coordinate to a known state. func (c *Client) SetCoordinate(coord *Coordinate) error { c.mutex.Lock() defer c.mutex.Unlock() if err := c.checkCoordinate(coord); err != nil { return err } c.coord = coord.Clone() return nil } // ForgetNode removes any client state for the given node. func (c *Client) ForgetNode(node string) { c.mutex.Lock() defer c.mutex.Unlock() delete(c.latencyFilterSamples, node) } // Stats returns a copy of stats for the client. func (c *Client) Stats() ClientStats { c.mutex.Lock() defer c.mutex.Unlock() return c.stats } // checkCoordinate returns an error if the coordinate isn't compatible with // this client, or if the coordinate itself isn't valid. This assumes the mutex // has been locked already. func (c *Client) checkCoordinate(coord *Coordinate) error { if !c.coord.IsCompatibleWith(coord) { return fmt.Errorf("dimensions aren't compatible") } if !coord.IsValid() { return fmt.Errorf("coordinate is invalid") } return nil } // latencyFilter applies a simple moving median filter with a new sample for // a node. This assumes that the mutex has been locked already. func (c *Client) latencyFilter(node string, rttSeconds float64) float64 { samples, ok := c.latencyFilterSamples[node] if !ok { samples = make([]float64, 0, c.config.LatencyFilterSize) } // Add the new sample and trim the list, if needed. 
samples = append(samples, rttSeconds) if len(samples) > int(c.config.LatencyFilterSize) { samples = samples[1:] } c.latencyFilterSamples[node] = samples // Sort a copy of the samples and return the median. sorted := make([]float64, len(samples)) copy(sorted, samples) sort.Float64s(sorted) return sorted[len(sorted)/2] } // updateVivaldi updates the Vivaldi portion of the client's coordinate. This // assumes that the mutex has been locked already. func (c *Client) updateVivaldi(other *Coordinate, rttSeconds float64) { const zeroThreshold = 1.0e-6 dist := c.coord.DistanceTo(other).Seconds() if rttSeconds < zeroThreshold { rttSeconds = zeroThreshold } wrongness := math.Abs(dist-rttSeconds) / rttSeconds totalError := c.coord.Error + other.Error if totalError < zeroThreshold { totalError = zeroThreshold } weight := c.coord.Error / totalError c.coord.Error = c.config.VivaldiCE*weight*wrongness + c.coord.Error*(1.0-c.config.VivaldiCE*weight) if c.coord.Error > c.config.VivaldiErrorMax { c.coord.Error = c.config.VivaldiErrorMax } delta := c.config.VivaldiCC * weight force := delta * (rttSeconds - dist) c.coord = c.coord.ApplyForce(c.config, force, other) } // updateAdjustment updates the adjustment portion of the client's coordinate, if // the feature is enabled. This assumes that the mutex has been locked already. func (c *Client) updateAdjustment(other *Coordinate, rttSeconds float64) { if c.config.AdjustmentWindowSize == 0 { return } // Note that the existing adjustment factors don't figure into this // calculation so we use the raw distance here.
dist := c.coord.rawDistanceTo(other) c.adjustmentSamples[c.adjustmentIndex] = rttSeconds - dist c.adjustmentIndex = (c.adjustmentIndex + 1) % c.config.AdjustmentWindowSize sum := 0.0 for _, sample := range c.adjustmentSamples { sum += sample } c.coord.Adjustment = sum / (2.0 * float64(c.config.AdjustmentWindowSize)) } // updateGravity applies a small amount of gravity to pull coordinates towards // the center of the coordinate system to combat drift. This assumes that the // mutex is locked already. func (c *Client) updateGravity() { dist := c.origin.DistanceTo(c.coord).Seconds() force := -1.0 * math.Pow(dist/c.config.GravityRho, 2.0) c.coord = c.coord.ApplyForce(c.config, force, c.origin) } // Update takes other, a coordinate for another node, and rtt, a round trip // time observation for a ping to that node, and updates the estimated position of // the client's coordinate. Returns the updated coordinate. func (c *Client) Update(node string, other *Coordinate, rtt time.Duration) (*Coordinate, error) { c.mutex.Lock() defer c.mutex.Unlock() if err := c.checkCoordinate(other); err != nil { return nil, err } const maxRTT = 10 * time.Second if rtt <= 0 || rtt > maxRTT { return nil, fmt.Errorf("round trip time not in valid range, duration %v is not a positive value less than %v", rtt, maxRTT) } rttSeconds := c.latencyFilter(node, rtt.Seconds()) c.updateVivaldi(other, rttSeconds) c.updateAdjustment(other, rttSeconds) c.updateGravity() if !c.coord.IsValid() { c.stats.Resets++ c.coord = NewCoordinate(c.config) } return c.coord.Clone(), nil } // DistanceTo returns the estimated RTT from the client's coordinate to other, the // coordinate for another node.
func (c *Client) DistanceTo(other *Coordinate) time.Duration { c.mutex.RLock() defer c.mutex.RUnlock() return c.coord.DistanceTo(other) } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/client_test.go000066400000000000000000000133261317277572200266230ustar00rootroot00000000000000package coordinate import ( "fmt" "math" "reflect" "strings" "testing" "time" ) func TestClient_NewClient(t *testing.T) { config := DefaultConfig() config.Dimensionality = 0 client, err := NewClient(config) if err == nil || !strings.Contains(err.Error(), "dimensionality") { t.Fatal(err) } config.Dimensionality = 7 client, err = NewClient(config) if err != nil { t.Fatal(err) } origin := NewCoordinate(config) if !reflect.DeepEqual(client.GetCoordinate(), origin) { t.Fatalf("fresh client should be located at the origin") } } func TestClient_Update(t *testing.T) { config := DefaultConfig() config.Dimensionality = 3 client, err := NewClient(config) if err != nil { t.Fatal(err) } // Make sure the Euclidean part of our coordinate is what we expect. c := client.GetCoordinate() verifyEqualVectors(t, c.Vec, []float64{0.0, 0.0, 0.0}) // Place a node right above the client and observe an RTT longer than the // client expects, given its distance. other := NewCoordinate(config) other.Vec[2] = 0.001 rtt := time.Duration(2.0 * other.Vec[2] * secondsToNanoseconds) c, err = client.Update("node", other, rtt) if err != nil { t.Fatalf("err: %v", err) } // The client should have scooted down to get away from it. if !(c.Vec[2] < 0.0) { t.Fatalf("client z coordinate %9.6f should be < 0.0", c.Vec[2]) } // Set the coordinate to a known state. 
c.Vec[2] = 99.0 client.SetCoordinate(c) c = client.GetCoordinate() verifyEqualFloats(t, c.Vec[2], 99.0) } func TestClient_InvalidInPingValues(t *testing.T) { config := DefaultConfig() config.Dimensionality = 3 client, err := NewClient(config) if err != nil { t.Fatal(err) } // Place another node other := NewCoordinate(config) other.Vec[2] = 0.001 dist := client.DistanceTo(other) // Update with a series of invalid ping periods, should return an error and estimated rtt remains unchanged pings := []int{1<<63 - 1, -35, 11} for _, ping := range pings { expectedErr := fmt.Errorf("round trip time not in valid range, duration %v is not a positive value less than %v", ping, 10*time.Second) _, err = client.Update("node", other, time.Duration(ping*secondsToNanoseconds)) if err == nil { t.Fatalf("Unexpected error, wanted %v but got %v", expectedErr, err) } dist_new := client.DistanceTo(other) if dist_new != dist { t.Fatalf("distance estimate %v not equal to %v", dist_new, dist) } } } func TestClient_DistanceTo(t *testing.T) { config := DefaultConfig() config.Dimensionality = 3 config.HeightMin = 0 client, err := NewClient(config) if err != nil { t.Fatal(err) } // Fiddle a raw coordinate to put it a specific number of seconds away. other := NewCoordinate(config) other.Vec[2] = 12.345 expected := time.Duration(other.Vec[2] * secondsToNanoseconds) dist := client.DistanceTo(other) if dist != expected { t.Fatalf("distance doesn't match %9.6f != %9.6f", dist.Seconds(), expected.Seconds()) } } func TestClient_latencyFilter(t *testing.T) { config := DefaultConfig() config.LatencyFilterSize = 3 client, err := NewClient(config) if err != nil { t.Fatal(err) } // Make sure we get the median, and that things age properly. verifyEqualFloats(t, client.latencyFilter("alice", 0.201), 0.201) verifyEqualFloats(t, client.latencyFilter("alice", 0.200), 0.201) verifyEqualFloats(t, client.latencyFilter("alice", 0.207), 0.201) // This glitch will get median-ed out and never seen by Vivaldi. 
verifyEqualFloats(t, client.latencyFilter("alice", 1.9), 0.207) verifyEqualFloats(t, client.latencyFilter("alice", 0.203), 0.207) verifyEqualFloats(t, client.latencyFilter("alice", 0.199), 0.203) verifyEqualFloats(t, client.latencyFilter("alice", 0.211), 0.203) // Make sure different nodes are not coupled. verifyEqualFloats(t, client.latencyFilter("bob", 0.310), 0.310) // Make sure we don't leak coordinates for nodes that leave. client.ForgetNode("alice") verifyEqualFloats(t, client.latencyFilter("alice", 0.888), 0.888) } func TestClient_NaN_Defense(t *testing.T) { config := DefaultConfig() config.Dimensionality = 3 client, err := NewClient(config) if err != nil { t.Fatal(err) } // Block a bad coordinate from coming in. other := NewCoordinate(config) other.Vec[0] = math.NaN() if other.IsValid() { t.Fatalf("bad: %#v", *other) } rtt := 250 * time.Millisecond c, err := client.Update("node", other, rtt) if err == nil || !strings.Contains(err.Error(), "coordinate is invalid") { t.Fatalf("err: %v", err) } if c := client.GetCoordinate(); !c.IsValid() { t.Fatalf("bad: %#v", *c) } // Block setting an invalid coordinate directly. err = client.SetCoordinate(other) if err == nil || !strings.Contains(err.Error(), "coordinate is invalid") { t.Fatalf("err: %v", err) } if c := client.GetCoordinate(); !c.IsValid() { t.Fatalf("bad: %#v", *c) } // Block an incompatible coordinate. other.Vec = make([]float64, 2*len(other.Vec)) c, err = client.Update("node", other, rtt) if err == nil || !strings.Contains(err.Error(), "dimensions aren't compatible") { t.Fatalf("err: %v", err) } if c := client.GetCoordinate(); !c.IsValid() { t.Fatalf("bad: %#v", *c) } // Block setting an incompatible coordinate directly. err = client.SetCoordinate(other) if err == nil || !strings.Contains(err.Error(), "dimensions aren't compatible") { t.Fatalf("err: %v", err) } if c := client.GetCoordinate(); !c.IsValid() { t.Fatalf("bad: %#v", *c) } // Poison the internal state and make sure we reset on an update. 
client.coord.Vec[0] = math.NaN() other = NewCoordinate(config) c, err = client.Update("node", other, rtt) if err != nil { t.Fatalf("err: %v", err) } if !c.IsValid() { t.Fatalf("bad: %#v", *c) } if got, want := client.Stats().Resets, 1; got != want { t.Fatalf("got %d want %d", got, want) } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/config.go000066400000000000000000000055061317277572200255540ustar00rootroot00000000000000package coordinate // Config is used to set the parameters of the Vivaldi-based coordinate mapping // algorithm. // // The following references are called out at various points in the documentation // here: // // [1] Dabek, Frank, et al. "Vivaldi: A decentralized network coordinate system." // ACM SIGCOMM Computer Communication Review. Vol. 34. No. 4. ACM, 2004. // [2] Ledlie, Jonathan, Paul Gardner, and Margo I. Seltzer. "Network Coordinates // in the Wild." NSDI. Vol. 7. 2007. // [3] Lee, Sanghwan, et al. "On suitability of Euclidean embedding for // host-based network coordinate systems." Networking, IEEE/ACM Transactions // on 18.1 (2010): 27-40. type Config struct { // The dimensionality of the coordinate system. As discussed in [2], more // dimensions improves the accuracy of the estimates up to a point. Per [2] // we chose 8 dimensions plus a non-Euclidean height. Dimensionality uint // VivaldiErrorMax is the default error value when a node hasn't yet made // any observations. It also serves as an upper limit on the error value in // case observations cause the error value to increase without bound. VivaldiErrorMax float64 // VivaldiCE is a tuning factor that controls the maximum impact an // observation can have on a node's confidence. See [1] for more details. VivaldiCE float64 // VivaldiCC is a tuning factor that controls the maximum impact an // observation can have on a node's coordinate. See [1] for more details. 
VivaldiCC float64 // AdjustmentWindowSize is a tuning factor that determines how many samples // we retain to calculate the adjustment factor as discussed in [3]. Setting // this to zero disables this feature. AdjustmentWindowSize uint // HeightMin is the minimum value of the height parameter. Since this // always must be positive, it will introduce a small amount of error, so // the chosen value should be relatively small compared to "normal" // coordinates. HeightMin float64 // LatencyFilterSize is the maximum number of samples that are retained // per node, in order to compute a median. The intent is to ride out blips // but still keep the delay low, since our time to probe any given node is // pretty infrequent. See [2] for more details. LatencyFilterSize uint // GravityRho is a tuning factor that sets how much gravity has an effect // to try to re-center coordinates. See [2] for more details. GravityRho float64 } // DefaultConfig returns a Config that has some default values suitable for // basic testing of the algorithm, but not tuned to any particular type of cluster. func DefaultConfig() *Config { return &Config{ Dimensionality: 8, VivaldiErrorMax: 1.5, VivaldiCE: 0.25, VivaldiCC: 0.25, AdjustmentWindowSize: 20, HeightMin: 10.0e-6, LatencyFilterSize: 3, GravityRho: 150.0, } } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/coordinate.go000066400000000000000000000141561317277572200264370ustar00rootroot00000000000000package coordinate import ( "math" "math/rand" "time" ) // Coordinate is a specialized structure for holding network coordinates for the // Vivaldi-based coordinate mapping algorithm. All of the fields should be public // to enable this to be serialized. All values in here are in units of seconds. type Coordinate struct { // Vec is the Euclidean portion of the coordinate. This is used along // with the other fields to provide an overall distance estimate. The // units here are seconds.
Vec []float64 // Error reflects the confidence in the given coordinate and is updated // dynamically by the Vivaldi Client. This is dimensionless. Error float64 // Adjustment is a distance offset computed based on a calculation over // observations from all other nodes over a fixed window and is updated // dynamically by the Vivaldi Client. The units here are seconds. Adjustment float64 // Height is a distance offset that accounts for non-Euclidean effects // which model the access links from nodes to the core Internet. The access // links are usually set by bandwidth and congestion, and the core links // usually follow distance based on geography. Height float64 } const ( // secondsToNanoseconds is used to convert float seconds to nanoseconds. secondsToNanoseconds = 1.0e9 // zeroThreshold is used to decide if two coordinates are on top of each // other. zeroThreshold = 1.0e-6 ) // DimensionalityConflictError is panicked if you try to perform operations // with incompatible dimensions. type DimensionalityConflictError struct{} // Error implements the error interface. func (e DimensionalityConflictError) Error() string { return "coordinate dimensionality does not match" } // NewCoordinate creates a new coordinate at the origin, using the given config // to supply key initial values. func NewCoordinate(config *Config) *Coordinate { return &Coordinate{ Vec: make([]float64, config.Dimensionality), Error: config.VivaldiErrorMax, Adjustment: 0.0, Height: config.HeightMin, } } // Clone creates an independent copy of this coordinate. func (c *Coordinate) Clone() *Coordinate { vec := make([]float64, len(c.Vec)) copy(vec, c.Vec) return &Coordinate{ Vec: vec, Error: c.Error, Adjustment: c.Adjustment, Height: c.Height, } } // componentIsValid returns false if a floating point value is a NaN or an // infinity.
func componentIsValid(f float64) bool { return !math.IsInf(f, 0) && !math.IsNaN(f) } // IsValid returns false if any component of a coordinate isn't valid, per the // componentIsValid() helper above. func (c *Coordinate) IsValid() bool { for i := range c.Vec { if !componentIsValid(c.Vec[i]) { return false } } return componentIsValid(c.Error) && componentIsValid(c.Adjustment) && componentIsValid(c.Height) } // IsCompatibleWith checks to see if the two coordinates are compatible // dimensionally. If this returns true then you are guaranteed to not get // any runtime errors operating on them. func (c *Coordinate) IsCompatibleWith(other *Coordinate) bool { return len(c.Vec) == len(other.Vec) } // ApplyForce returns the result of applying the force from the direction of the // other coordinate. func (c *Coordinate) ApplyForce(config *Config, force float64, other *Coordinate) *Coordinate { if !c.IsCompatibleWith(other) { panic(DimensionalityConflictError{}) } ret := c.Clone() unit, mag := unitVectorAt(c.Vec, other.Vec) ret.Vec = add(ret.Vec, mul(unit, force)) if mag > zeroThreshold { ret.Height = (ret.Height+other.Height)*force/mag + ret.Height ret.Height = math.Max(ret.Height, config.HeightMin) } return ret } // DistanceTo returns the distance between this coordinate and the other // coordinate, including adjustments. func (c *Coordinate) DistanceTo(other *Coordinate) time.Duration { if !c.IsCompatibleWith(other) { panic(DimensionalityConflictError{}) } dist := c.rawDistanceTo(other) adjustedDist := dist + c.Adjustment + other.Adjustment if adjustedDist > 0.0 { dist = adjustedDist } return time.Duration(dist * secondsToNanoseconds) } // rawDistanceTo returns the Vivaldi distance between this coordinate and the // other coordinate in seconds, not including adjustments. This assumes the // dimensions have already been checked to be compatible. 
func (c *Coordinate) rawDistanceTo(other *Coordinate) float64 { return magnitude(diff(c.Vec, other.Vec)) + c.Height + other.Height } // add returns the sum of vec1 and vec2. This assumes the dimensions have // already been checked to be compatible. func add(vec1 []float64, vec2 []float64) []float64 { ret := make([]float64, len(vec1)) for i := range ret { ret[i] = vec1[i] + vec2[i] } return ret } // diff returns the difference between the vec1 and vec2. This assumes the // dimensions have already been checked to be compatible. func diff(vec1 []float64, vec2 []float64) []float64 { ret := make([]float64, len(vec1)) for i := range ret { ret[i] = vec1[i] - vec2[i] } return ret } // mul returns vec multiplied by a scalar factor. func mul(vec []float64, factor float64) []float64 { ret := make([]float64, len(vec)) for i := range vec { ret[i] = vec[i] * factor } return ret } // magnitude computes the magnitude of the vec. func magnitude(vec []float64) float64 { sum := 0.0 for i := range vec { sum += vec[i] * vec[i] } return math.Sqrt(sum) } // unitVectorAt returns a unit vector pointing at vec1 from vec2. If the two // positions are the same then a random unit vector is returned. We also return // the distance between the points for use in the later height calculation. func unitVectorAt(vec1 []float64, vec2 []float64) ([]float64, float64) { ret := diff(vec1, vec2) // If the coordinates aren't on top of each other we can normalize. if mag := magnitude(ret); mag > zeroThreshold { return mul(ret, 1.0/mag), mag } // Otherwise, just return a random unit vector. for i := range ret { ret[i] = rand.Float64() - 0.5 } if mag := magnitude(ret); mag > zeroThreshold { return mul(ret, 1.0/mag), 0.0 } // And finally just give up and make a unit vector along the first // dimension. This should be exceedingly rare. 
ret = make([]float64, len(ret)) ret[0] = 1.0 return ret, 0.0 } golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/coordinate_test.go000066400000000000000000000211171317277572200274710ustar00rootroot00000000000000package coordinate import ( "math" "reflect" "testing" "time" ) // verifyDimensionPanic will run the supplied func and make sure it panics with // the expected error type. func verifyDimensionPanic(t *testing.T, f func()) { defer func() { if r := recover(); r != nil { if _, ok := r.(DimensionalityConflictError); !ok { t.Fatalf("panic isn't the right type") } } else { t.Fatalf("didn't get expected panic") } }() f() } func TestCoordinate_NewCoordinate(t *testing.T) { config := DefaultConfig() c := NewCoordinate(config) if uint(len(c.Vec)) != config.Dimensionality { t.Fatalf("dimensionality not set correctly %d != %d", len(c.Vec), config.Dimensionality) } } func TestCoordinate_Clone(t *testing.T) { c := NewCoordinate(DefaultConfig()) c.Vec[0], c.Vec[1], c.Vec[2] = 1.0, 2.0, 3.0 c.Error = 5.0 c.Adjustment = 10.0 c.Height = 4.2 other := c.Clone() if !reflect.DeepEqual(c, other) { t.Fatalf("coordinate clone didn't make a proper copy") } other.Vec[0] = c.Vec[0] + 0.5 if reflect.DeepEqual(c, other) { t.Fatalf("cloned coordinate is still pointing at its ancestor") } } func TestCoordinate_IsValid(t *testing.T) { c := NewCoordinate(DefaultConfig()) var fields []*float64 for i := range c.Vec { fields = append(fields, &c.Vec[i]) } fields = append(fields, &c.Error) fields = append(fields, &c.Adjustment) fields = append(fields, &c.Height) for i, field := range fields { if !c.IsValid() { t.Fatalf("field %d should be valid", i) } *field = math.NaN() if c.IsValid() { t.Fatalf("field %d should not be valid (NaN)", i) } *field = 0.0 if !c.IsValid() { t.Fatalf("field %d should be valid", i) } *field = math.Inf(0) if c.IsValid() { t.Fatalf("field %d should not be valid (Inf)", i) } *field = 0.0 if !c.IsValid() { t.Fatalf("field %d should be valid", i) } } } func 
TestCoordinate_IsCompatibleWith(t *testing.T) { config := DefaultConfig() config.Dimensionality = 3 c1 := NewCoordinate(config) c2 := NewCoordinate(config) config.Dimensionality = 2 alien := NewCoordinate(config) if !c1.IsCompatibleWith(c1) || !c2.IsCompatibleWith(c2) || !alien.IsCompatibleWith(alien) { t.Fatalf("coordinates should be compatible with themselves") } if !c1.IsCompatibleWith(c2) || !c2.IsCompatibleWith(c1) { t.Fatalf("coordinates should be compatible with each other") } if c1.IsCompatibleWith(alien) || c2.IsCompatibleWith(alien) || alien.IsCompatibleWith(c1) || alien.IsCompatibleWith(c2) { t.Fatalf("alien should not be compatible with the other coordinates") } } func TestCoordinate_ApplyForce(t *testing.T) { config := DefaultConfig() config.Dimensionality = 3 config.HeightMin = 0 origin := NewCoordinate(config) // This proves that we normalize, get the direction right, and apply the // force multiplier correctly. above := NewCoordinate(config) above.Vec = []float64{0.0, 0.0, 2.9} c := origin.ApplyForce(config, 5.3, above) verifyEqualVectors(t, c.Vec, []float64{0.0, 0.0, -5.3}) // Scoot a point not starting at the origin to make sure there's nothing // special there. right := NewCoordinate(config) right.Vec = []float64{3.4, 0.0, -5.3} c = c.ApplyForce(config, 2.0, right) verifyEqualVectors(t, c.Vec, []float64{-2.0, 0.0, -5.3}) // If the points are right on top of each other, then we should end up // in a random direction, one unit away. This makes sure the unit vector // build up doesn't divide by zero. c = origin.ApplyForce(config, 1.0, origin) verifyEqualFloats(t, origin.DistanceTo(c).Seconds(), 1.0) // Enable a minimum height and make sure that gets factored in properly. config.HeightMin = 10.0e-6 origin = NewCoordinate(config) c = origin.ApplyForce(config, 5.3, above) verifyEqualVectors(t, c.Vec, []float64{0.0, 0.0, -5.3}) verifyEqualFloats(t, c.Height, config.HeightMin+5.3*config.HeightMin/2.9) // Make sure the height minimum is enforced. 
c = origin.ApplyForce(config, -5.3, above) verifyEqualVectors(t, c.Vec, []float64{0.0, 0.0, 5.3}) verifyEqualFloats(t, c.Height, config.HeightMin) // Shenanigans should get called if the dimensions don't match. bad := c.Clone() bad.Vec = make([]float64, len(bad.Vec)+1) verifyDimensionPanic(t, func() { c.ApplyForce(config, 1.0, bad) }) } func TestCoordinate_DistanceTo(t *testing.T) { config := DefaultConfig() config.Dimensionality = 3 config.HeightMin = 0 c1, c2 := NewCoordinate(config), NewCoordinate(config) c1.Vec = []float64{-0.5, 1.3, 2.4} c2.Vec = []float64{1.2, -2.3, 3.4} verifyEqualFloats(t, c1.DistanceTo(c1).Seconds(), 0.0) verifyEqualFloats(t, c1.DistanceTo(c2).Seconds(), c2.DistanceTo(c1).Seconds()) verifyEqualFloats(t, c1.DistanceTo(c2).Seconds(), 4.104875150354758) // Make sure negative adjustment factors are ignored. c1.Adjustment = -1.0e6 verifyEqualFloats(t, c1.DistanceTo(c2).Seconds(), 4.104875150354758) // Make sure positive adjustment factors affect the distance. c1.Adjustment = 0.1 c2.Adjustment = 0.2 verifyEqualFloats(t, c1.DistanceTo(c2).Seconds(), 4.104875150354758+0.3) // Make sure the heights affect the distance. c1.Height = 0.7 c2.Height = 0.1 verifyEqualFloats(t, c1.DistanceTo(c2).Seconds(), 4.104875150354758+0.3+0.8) // Shenanigans should get called if the dimensions don't match. bad := c1.Clone() bad.Vec = make([]float64, len(bad.Vec)+1) verifyDimensionPanic(t, func() { _ = c1.DistanceTo(bad) }) } // dist is a self-contained example that appears in documentation. func dist(a *Coordinate, b *Coordinate) time.Duration { // Coordinates will always have the same dimensionality, so this is // just a sanity check. if len(a.Vec) != len(b.Vec) { panic("dimensions aren't compatible") } // Calculate the Euclidean distance plus the heights. sumsq := 0.0 for i := 0; i < len(a.Vec); i++ { diff := a.Vec[i] - b.Vec[i] sumsq += diff * diff } rtt := math.Sqrt(sumsq) + a.Height + b.Height // Apply the adjustment components, guarding against negatives. 
	adjusted := rtt + a.Adjustment + b.Adjustment
	if adjusted > 0.0 {
		rtt = adjusted
	}

	// Go's times are natively nanoseconds, so we convert from seconds.
	const secondsToNanoseconds = 1.0e9
	return time.Duration(rtt * secondsToNanoseconds)
}

func TestCoordinate_dist_Example(t *testing.T) {
	config := DefaultConfig()
	c1, c2 := NewCoordinate(config), NewCoordinate(config)
	c1.Vec = []float64{-0.5, 1.3, 2.4}
	c2.Vec = []float64{1.2, -2.3, 3.4}
	c1.Adjustment = 0.1
	c2.Adjustment = 0.2
	c1.Height = 0.7
	c2.Height = 0.1
	verifyEqualFloats(t, c1.DistanceTo(c2).Seconds(), dist(c1, c2).Seconds())
}

func TestCoordinate_rawDistanceTo(t *testing.T) {
	config := DefaultConfig()
	config.Dimensionality = 3
	config.HeightMin = 0

	c1, c2 := NewCoordinate(config), NewCoordinate(config)
	c1.Vec = []float64{-0.5, 1.3, 2.4}
	c2.Vec = []float64{1.2, -2.3, 3.4}

	verifyEqualFloats(t, c1.rawDistanceTo(c1), 0.0)
	verifyEqualFloats(t, c1.rawDistanceTo(c2), c2.rawDistanceTo(c1))
	verifyEqualFloats(t, c1.rawDistanceTo(c2), 4.104875150354758)

	// Make sure that the adjustment doesn't factor into the raw
	// distance.
	c1.Adjustment = 1.0e6
	verifyEqualFloats(t, c1.rawDistanceTo(c2), 4.104875150354758)

	// Make sure the heights affect the distance.
	c1.Height = 0.7
	c2.Height = 0.1
	verifyEqualFloats(t, c1.rawDistanceTo(c2), 4.104875150354758+0.8)
}

func TestCoordinate_add(t *testing.T) {
	vec1 := []float64{1.0, -3.0, 3.0}
	vec2 := []float64{-4.0, 5.0, 6.0}
	verifyEqualVectors(t, add(vec1, vec2), []float64{-3.0, 2.0, 9.0})

	zero := []float64{0.0, 0.0, 0.0}
	verifyEqualVectors(t, add(vec1, zero), vec1)
}

func TestCoordinate_diff(t *testing.T) {
	vec1 := []float64{1.0, -3.0, 3.0}
	vec2 := []float64{-4.0, 5.0, 6.0}
	verifyEqualVectors(t, diff(vec1, vec2), []float64{5.0, -8.0, -3.0})

	zero := []float64{0.0, 0.0, 0.0}
	verifyEqualVectors(t, diff(vec1, zero), vec1)
}

func TestCoordinate_magnitude(t *testing.T) {
	zero := []float64{0.0, 0.0, 0.0}
	verifyEqualFloats(t, magnitude(zero), 0.0)

	vec := []float64{1.0, -2.0, 3.0}
	verifyEqualFloats(t, magnitude(vec), 3.7416573867739413)
}

func TestCoordinate_unitVectorAt(t *testing.T) {
	vec1 := []float64{1.0, 2.0, 3.0}
	vec2 := []float64{0.5, 0.6, 0.7}
	u, mag := unitVectorAt(vec1, vec2)
	verifyEqualVectors(t, u, []float64{0.18257418583505536, 0.511207720338155, 0.8398412548412546})
	verifyEqualFloats(t, magnitude(u), 1.0)
	verifyEqualFloats(t, mag, magnitude(diff(vec1, vec2)))

	// If we give positions that are equal we should get a random unit vector
	// returned to us, rather than a divide by zero.
	u, mag = unitVectorAt(vec1, vec1)
	verifyEqualFloats(t, magnitude(u), 1.0)
	verifyEqualFloats(t, mag, 0.0)

	// We can't hit the final clause without heroics so I manually forced it
	// there to verify it works.
}
golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/performance_test.go000066400000000000000000000121411317277572200276400ustar00rootroot00000000000000package coordinate

import (
	"math"
	"testing"
	"time"
)

func TestPerformance_Line(t *testing.T) {
	const spacing = 10 * time.Millisecond
	const nodes, cycles = 10, 1000
	config := DefaultConfig()
	clients, err := GenerateClients(nodes, config)
	if err != nil {
		t.Fatal(err)
	}
	truth := GenerateLine(nodes, spacing)
	Simulate(clients, truth, cycles)
	stats := Evaluate(clients, truth)
	if stats.ErrorAvg > 0.0018 || stats.ErrorMax > 0.0092 {
		t.Fatalf("performance stats are out of spec: %v", stats)
	}
}

func TestPerformance_Grid(t *testing.T) {
	const spacing = 10 * time.Millisecond
	const nodes, cycles = 25, 1000
	config := DefaultConfig()
	clients, err := GenerateClients(nodes, config)
	if err != nil {
		t.Fatal(err)
	}
	truth := GenerateGrid(nodes, spacing)
	Simulate(clients, truth, cycles)
	stats := Evaluate(clients, truth)
	if stats.ErrorAvg > 0.0015 || stats.ErrorMax > 0.022 {
		t.Fatalf("performance stats are out of spec: %v", stats)
	}
}

func TestPerformance_Split(t *testing.T) {
	const lan, wan = 1 * time.Millisecond, 10 * time.Millisecond
	const nodes, cycles = 25, 1000
	config := DefaultConfig()
	clients, err := GenerateClients(nodes, config)
	if err != nil {
		t.Fatal(err)
	}
	truth := GenerateSplit(nodes, lan, wan)
	Simulate(clients, truth, cycles)
	stats := Evaluate(clients, truth)
	if stats.ErrorAvg > 0.000060 || stats.ErrorMax > 0.00048 {
		t.Fatalf("performance stats are out of spec: %v", stats)
	}
}

func TestPerformance_Height(t *testing.T) {
	const radius = 100 * time.Millisecond
	const nodes, cycles = 25, 1000

	// Constrain us to two dimensions so that we can just exactly represent
	// the circle.
	config := DefaultConfig()
	config.Dimensionality = 2
	clients, err := GenerateClients(nodes, config)
	if err != nil {
		t.Fatal(err)
	}

	// Generate truth where the first coordinate is in the "middle" because
	// it's equidistant from all the nodes, but it will have an extra radius
	// added to the distance, so it should come out above all the others.
	truth := GenerateCircle(nodes, radius)
	Simulate(clients, truth, cycles)

	// Make sure the height looks reasonable with the regular nodes all in a
	// plane, and the center node up above.
	for i := range clients {
		coord := clients[i].GetCoordinate()
		if i == 0 {
			if coord.Height < 0.97*radius.Seconds() {
				t.Fatalf("height is out of spec: %9.6f", coord.Height)
			}
		} else {
			if coord.Height > 0.03*radius.Seconds() {
				t.Fatalf("height is out of spec: %9.6f", coord.Height)
			}
		}
	}

	stats := Evaluate(clients, truth)
	if stats.ErrorAvg > 0.0025 || stats.ErrorMax > 0.064 {
		t.Fatalf("performance stats are out of spec: %v", stats)
	}
}

func TestPerformance_Drift(t *testing.T) {
	const dist = 500 * time.Millisecond
	const nodes = 4

	config := DefaultConfig()
	config.Dimensionality = 2
	clients, err := GenerateClients(nodes, config)
	if err != nil {
		t.Fatal(err)
	}

	// Do some icky surgery on the clients to put them into a square, up in
	// the first quadrant.
	clients[0].coord.Vec = []float64{0.0, 0.0}
	clients[1].coord.Vec = []float64{0.0, dist.Seconds()}
	clients[2].coord.Vec = []float64{dist.Seconds(), dist.Seconds()}
	clients[3].coord.Vec = []float64{dist.Seconds(), 0.0}

	// Make a corresponding truth matrix.
The nodes are laid out like this
	// so the distances are all equal, except for the diagonal:
	//
	// (1) <- dist -> (2)
	//
	//  | <- dist        |
	//  |                |
	//  |        dist -> |
	//
	// (0) <- dist -> (3)
	//
	truth := make([][]time.Duration, nodes)
	for i := range truth {
		truth[i] = make([]time.Duration, nodes)
	}
	for i := 0; i < nodes; i++ {
		for j := i + 1; j < nodes; j++ {
			rtt := dist
			if (i%2 == 0) && (j%2 == 0) {
				rtt = time.Duration(math.Sqrt2 * float64(rtt))
			}
			truth[i][j], truth[j][i] = rtt, rtt
		}
	}

	calcCenterError := func() float64 {
		min, max := clients[0].GetCoordinate(), clients[0].GetCoordinate()
		for i := 1; i < nodes; i++ {
			coord := clients[i].GetCoordinate()
			for j, v := range coord.Vec {
				min.Vec[j] = math.Min(min.Vec[j], v)
				max.Vec[j] = math.Max(max.Vec[j], v)
			}
		}

		mid := make([]float64, config.Dimensionality)
		for i := range mid {
			mid[i] = min.Vec[i] + (max.Vec[i]-min.Vec[i])/2
		}
		return magnitude(mid)
	}

	// Let the simulation run for a while to stabilize, then snap a baseline
	// for the center error.
	Simulate(clients, truth, 1000)
	baseline := calcCenterError()

	// Now run for a bunch more cycles and see if gravity pulls the center
	// in the right direction.
	Simulate(clients, truth, 10000)
	if error := calcCenterError(); error > 0.8*baseline {
		t.Fatalf("drift performance out of spec: %9.6f -> %9.6f", baseline, error)
	}
}

func TestPerformance_Random(t *testing.T) {
	const mean, deviation = 100 * time.Millisecond, 10 * time.Millisecond
	const nodes, cycles = 25, 1000
	config := DefaultConfig()
	clients, err := GenerateClients(nodes, config)
	if err != nil {
		t.Fatal(err)
	}
	truth := GenerateRandom(nodes, mean, deviation)
	Simulate(clients, truth, cycles)
	stats := Evaluate(clients, truth)
	if stats.ErrorAvg > 0.075 || stats.ErrorMax > 0.33 {
		t.Fatalf("performance stats are out of spec: %v", stats)
	}
}
golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/phantom.go000066400000000000000000000130321317277572200257460ustar00rootroot00000000000000package coordinate

import (
	"fmt"
	"math"
	"math/rand"
	"time"
)

// GenerateClients returns a slice with nodes number of clients, all with the
// given config.
func GenerateClients(nodes int, config *Config) ([]*Client, error) {
	clients := make([]*Client, nodes)
	for i := range clients {
		client, err := NewClient(config)
		if err != nil {
			return nil, err
		}

		clients[i] = client
	}
	return clients, nil
}

// GenerateLine returns a truth matrix as if all the nodes are in a straight line
// with the given spacing between them.
func GenerateLine(nodes int, spacing time.Duration) [][]time.Duration {
	truth := make([][]time.Duration, nodes)
	for i := range truth {
		truth[i] = make([]time.Duration, nodes)
	}

	for i := 0; i < nodes; i++ {
		for j := i + 1; j < nodes; j++ {
			rtt := time.Duration(j-i) * spacing
			truth[i][j], truth[j][i] = rtt, rtt
		}
	}
	return truth
}

// GenerateGrid returns a truth matrix as if all the nodes are in a two dimensional
// grid with the given spacing between them.
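The truth-matrix generators above all follow the same pattern: fill only the i < j pairs and mirror each entry so the matrix stays symmetric. A minimal standalone sketch of the line case, using plain float64 seconds instead of time.Duration (`generateLineSeconds` is an illustrative name, not part of the package):

```go
package main

import "fmt"

// generateLineSeconds builds a symmetric RTT truth matrix for nodes placed on
// a line: neighbors are spacing apart, so nodes i and j are |j-i|*spacing apart.
func generateLineSeconds(nodes int, spacing float64) [][]float64 {
	truth := make([][]float64, nodes)
	for i := range truth {
		truth[i] = make([]float64, nodes)
	}
	for i := 0; i < nodes; i++ {
		for j := i + 1; j < nodes; j++ {
			rtt := float64(j-i) * spacing
			truth[i][j], truth[j][i] = rtt, rtt // mirror to keep the matrix symmetric
		}
	}
	return truth
}

func main() {
	truth := generateLineSeconds(4, 0.010) // 10ms spacing
	for _, row := range truth {
		fmt.Println(row)
	}
}
```

The diagonal stays zero because a node's RTT to itself is never filled in, which is exactly what the package's generators rely on as well.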
func GenerateGrid(nodes int, spacing time.Duration) [][]time.Duration {
	truth := make([][]time.Duration, nodes)
	for i := range truth {
		truth[i] = make([]time.Duration, nodes)
	}

	n := int(math.Sqrt(float64(nodes)))
	for i := 0; i < nodes; i++ {
		for j := i + 1; j < nodes; j++ {
			x1, y1 := float64(i%n), float64(i/n)
			x2, y2 := float64(j%n), float64(j/n)
			dx, dy := x2-x1, y2-y1
			dist := math.Sqrt(dx*dx + dy*dy)
			rtt := time.Duration(dist * float64(spacing))
			truth[i][j], truth[j][i] = rtt, rtt
		}
	}
	return truth
}

// GenerateSplit returns a truth matrix as if half the nodes are close together in
// one location and half the nodes are close together in another. The lan factor
// is used to separate the nodes locally and the wan factor represents the split
// between the two sides.
func GenerateSplit(nodes int, lan time.Duration, wan time.Duration) [][]time.Duration {
	truth := make([][]time.Duration, nodes)
	for i := range truth {
		truth[i] = make([]time.Duration, nodes)
	}

	split := nodes / 2
	for i := 0; i < nodes; i++ {
		for j := i + 1; j < nodes; j++ {
			rtt := lan
			if (i <= split && j > split) || (i > split && j <= split) {
				rtt += wan
			}
			truth[i][j], truth[j][i] = rtt, rtt
		}
	}
	return truth
}

// GenerateCircle returns a truth matrix for a set of nodes, evenly distributed
// around a circle with the given radius. The first node is at the "center" of the
// circle because it's equidistant from all the other nodes, but we place it at
// double the radius, so it should show up above all the other nodes in height.
func GenerateCircle(nodes int, radius time.Duration) [][]time.Duration {
	truth := make([][]time.Duration, nodes)
	for i := range truth {
		truth[i] = make([]time.Duration, nodes)
	}

	for i := 0; i < nodes; i++ {
		for j := i + 1; j < nodes; j++ {
			var rtt time.Duration
			if i == 0 {
				rtt = 2 * radius
			} else {
				t1 := 2.0 * math.Pi * float64(i) / float64(nodes)
				x1, y1 := math.Cos(t1), math.Sin(t1)
				t2 := 2.0 * math.Pi * float64(j) / float64(nodes)
				x2, y2 := math.Cos(t2), math.Sin(t2)
				dx, dy := x2-x1, y2-y1
				dist := math.Sqrt(dx*dx + dy*dy)
				rtt = time.Duration(dist * float64(radius))
			}
			truth[i][j], truth[j][i] = rtt, rtt
		}
	}
	return truth
}

// GenerateRandom returns a truth matrix for a set of nodes with normally
// distributed delays, with the given mean and deviation. The RNG is re-seeded
// so you always get the same matrix for a given size.
func GenerateRandom(nodes int, mean time.Duration, deviation time.Duration) [][]time.Duration {
	rand.Seed(1)

	truth := make([][]time.Duration, nodes)
	for i := range truth {
		truth[i] = make([]time.Duration, nodes)
	}

	for i := 0; i < nodes; i++ {
		for j := i + 1; j < nodes; j++ {
			rttSeconds := rand.NormFloat64()*deviation.Seconds() + mean.Seconds()
			rtt := time.Duration(rttSeconds * secondsToNanoseconds)
			truth[i][j], truth[j][i] = rtt, rtt
		}
	}
	return truth
}

// Simulate runs the given number of cycles using the given list of clients and
// truth matrix. On each cycle, each client will pick a random node and observe
// the truth RTT, updating its coordinate estimate. The RNG is re-seeded for
// each simulation run to get deterministic results (for this algorithm and the
// underlying algorithm which will use random numbers for position vectors when
// starting out with everything at the origin).
func Simulate(clients []*Client, truth [][]time.Duration, cycles int) {
	rand.Seed(1)

	nodes := len(clients)
	for cycle := 0; cycle < cycles; cycle++ {
		for i := range clients {
			if j := rand.Intn(nodes); j != i {
				c := clients[j].GetCoordinate()
				rtt := truth[i][j]
				node := fmt.Sprintf("node_%d", j)
				clients[i].Update(node, c, rtt)
			}
		}
	}
}

// Stats is returned from the Evaluate function with a summary of the algorithm
// performance.
type Stats struct {
	ErrorMax float64
	ErrorAvg float64
}

// Evaluate uses the coordinates of the given clients to calculate estimated
// distances and compares them with the given truth matrix, returning summary
// stats.
func Evaluate(clients []*Client, truth [][]time.Duration) (stats Stats) {
	nodes := len(clients)
	count := 0
	for i := 0; i < nodes; i++ {
		for j := i + 1; j < nodes; j++ {
			est := clients[i].DistanceTo(clients[j].GetCoordinate()).Seconds()
			actual := truth[i][j].Seconds()
			error := math.Abs(est-actual) / actual
			stats.ErrorMax = math.Max(stats.ErrorMax, error)
			stats.ErrorAvg += error
			count++
		}
	}

	stats.ErrorAvg /= float64(count)
	fmt.Printf("Error avg=%9.6f max=%9.6f\n", stats.ErrorAvg, stats.ErrorMax)
	return
}
golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/coordinate/util_test.go000066400000000000000000000012721317277572200263170ustar00rootroot00000000000000package coordinate

import (
	"math"
	"testing"
)

// verifyEqualFloats will compare f1 and f2 and fail if they are not
// "equal" within a threshold.
func verifyEqualFloats(t *testing.T, f1 float64, f2 float64) {
	const zeroThreshold = 1.0e-6
	if math.Abs(f1-f2) > zeroThreshold {
		t.Fatalf("equal assertion fail, %9.6f != %9.6f", f1, f2)
	}
}

// verifyEqualVectors will compare vec1 and vec2 and fail if they are not
// "equal" within a threshold.
func verifyEqualVectors(t *testing.T, vec1 []float64, vec2 []float64) {
	if len(vec1) != len(vec2) {
		t.Fatalf("vector length mismatch, %d != %d", len(vec1), len(vec2))
	}

	for i := range vec1 {
		verifyEqualFloats(t, vec1[i], vec2[i])
	}
}
golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/demo/000077500000000000000000000000001317277572200225475ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/demo/vagrant-cluster/000077500000000000000000000000001317277572200256705ustar00rootroot00000000000000golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/demo/vagrant-cluster/README.md000066400000000000000000000015301317277572200271460ustar00rootroot00000000000000# Vagrant Serf Demo

This demo provides a very simple Vagrantfile that creates two nodes, one at
"172.20.20.10" and another at "172.20.20.11". Both are running a standard
Ubuntu 12.04 distribution, with Serf pre-installed under `/usr/bin/serf`.

To get started, you can start the cluster by just doing:

    $ vagrant up

Once it is finished, you should be able to see the following:

    $ vagrant status
    Current machine states:

    n1            running (vmware_fusion)
    n2            running (vmware_fusion)

At this point the two nodes are running and you can SSH in to play with them:

    $ vagrant ssh n1
    ...
    $ vagrant ssh n2
    ...

To learn more about starting serf, joining nodes and interacting with the
agent, check out the [getting started guide](https://www.serf.io/intro/getting-started/install.html).
golang-github-hashicorp-serf-0.8.1+git20171021.c20a0b1/demo/vagrant-cluster/Vagrantfile000066400000000000000000000014241317277572200300560ustar00rootroot00000000000000# -*- mode: ruby -*-
# vi: set ft=ruby :

$script = <