pax_global_header00006660000000000000000000000064141170646370014523gustar00rootroot0000000000000052 comment=55708d5404e28c8c72acdf8615f64d6528dc8507
mirrorbits-0.5.1+git20210123.eeea0e0+ds1/000077500000000000000000000000001411706463700172025ustar00rootroot00000000000000
mirrorbits-0.5.1+git20210123.eeea0e0+ds1/.gitignore000066400000000000000000000000301411706463700211630ustar00rootroot00000000000000
bin
dist
tmp
*~
.vscode
mirrorbits-0.5.1+git20210123.eeea0e0+ds1/.travis.yml000066400000000000000000000012501411706463700213110ustar00rootroot00000000000000
language: go
sudo: required
# https://docs.travis-ci.com/user/languages/go/#go-import-path
go_import_path: github.com/etix/mirrorbits
go:
  - "1.11.x"
  - "1.12.x"
  - "1.13.x"
  - master
os:
  - linux
matrix:
  allow_failures:
    - go: master
before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y libgeoip-dev
  - curl -L https://github.com/google/protobuf/releases/download/v3.9.1/protoc-3.9.1-linux-x86_64.zip -o /tmp/protoc.zip
  - unzip /tmp/protoc.zip -d "$HOME"/protoc
install:
  - go version
  - export GOBIN="$GOPATH/bin"
  - export PROTOCBIN="$HOME/protoc/bin"
  - export PATH="$PATH:$PROTOCBIN:$GOBIN"
  - go env
script:
  - make && make test
mirrorbits-0.5.1+git20210123.eeea0e0+ds1/CHANGELOG.md000066400000000000000000000135321411706463700210170ustar00rootroot00000000000000
## master

### FEATURES
- Make per-mirror logs available on the CLI: `mirrorbits logs ` (#5)
- New option (see FixTimezoneOffsets) to detect and automatically fix timezone shifts on mirrors (mostly for those using FTP).

### ENHANCEMENTS
- Enforce checks on modtime based on FTP and rsync capabilities
- Use `type=notify` in the systemd service file to indicate readiness of the http server
- Make unauthorized redirect errors more visible

### BUGFIXES
- Fixed a race condition in automatic mirror scan
- Restore case-insensitive mirror name matching on the CLI

### Changes
- Use Go modules (Go 1.11+)

## v0.5.1

### ENHANCEMENTS
- Sort the mirrors by the last state date in the list command

### BUGFIXES
- Regression: mirrors were not able to transition between up and down states

## v0.5

### FEATURES
- Allow renaming a mirror directly from `mirrorbits edit`
- Option to exclude a country from being served by a mirror

### ENHANCEMENTS
- Use of GeoIP2 mmdb databases
- RPC between the CLI and the server
- Use SHA256 as new default hash
- General improvements on the web templates
- Google Maps replaced by OpenStreetMap (#74)
- Google Charts replaced by Flot (#76)
- Possibility to fetch and serve Javascript locally without relying on CDNs (#76)
- Dockerfile improvements
- Systemd service file with process isolation

### BUGFIXES
- Add the Redis database index in pubsub announcements (#75)
- Exclude partial directories from rsync (#64)

### Changes
- JSON API:
  - Name contains the name of the mirror (previously known as ID)
  - ID now contains the unique ID of the mirror

## v0.4

### FEATURES
- Allow negative scores to reduce the weight of a mirror
- Follow symbolic links within a repository
- Allow/Disallow per-mirror redirects configuration
- Display the sync offset between each mirror and the source on the mirrorstats page (requires a trace file on the repository)
- New cli option to force a rehashing of all files during a refresh
- Added a Dockerfile

### ENHANCEMENTS
- Support password protected rsync URLs
- Allow https URLs when adding a mirror
- Display location and score in the list output
- Display mirror status in the stats output
- Improvements in the selection algorithm
- Load OSM tiles using https
- Keep the list of mirrors sorted by score in the mirrorlist
- Set cache-control to disable caching
- Log unauthorized redirection from a mirror
- New option to set the maximum number of backup mirrors to return in link headers
- Support for Google Maps API keys
- Mirrorlist and Mirrorstats UI refresh
- Use UTC time on mirrorlist / mirrorstats page
- Improved error reporting
- Add dependency vendoring

### BUGFIXES
- Fix a possible crash while Redis is loading the dataset
- Fix a race condition when updating mirror state
- Fix a rare deadlock within the FTP client

## v0.3

### FEATURES
- Support for HA via Redis sentinel
- Clustering support (multiple Mirrorbits instances) [#6](https://github.com/etix/mirrorbits/issues/6)
- Support for Redis DB index
- SHA256 and MD5 hashing support (in addition to SHA1) [#4](https://github.com/etix/mirrorbits/issues/4)
- Configurable interval for sync and check
- CLI: get stats by matching regular expressions
- HTTP: get the checksum of any file by appending ?sha1, ?sha256 or ?md5 to any served file
- Added a Makefile to support different builds

### ENHANCEMENTS
- Improved systemd service file
- New mirrorlist template [#15](https://github.com/etix/mirrorbits/issues/15)
- Geoip databases are now updated (in memory) during a reload
- Reuse all Redis connections when possible
- Detect and wait until Redis has loaded the dataset into memory
- Improved handling of X-Forwarded-For IP addresses [#23](https://github.com/etix/mirrorbits/issues/23)
- Logging: enable the colored output only if supported by the terminal
- More configuration items can be applied with a simple reload
- Improved scan behavior for newly added mirrors (healthcheck only after a successful scan)
- Limit redis verbosity in CLI operations
- CLI: reduce the number of database requests required to fetch stats by time interval
- CLI: differentiate down vs disabled mirrors
- FTP: add a connection timeout
- Don't try to open download logs when using the cli
- process: ensure the file descriptor is valid before finalizing a seamless binary upgrade
- Mirrors with a weight less than 1% will show <1% instead
- Graceful exit is now faster
- General improvements on error reporting

### BUGFIXES
- Fix Redis password authentication
- Fix a crash in the weight randomization algorithm
- Fix a bug causing a rescan of all mirrors during startup
- Fix a bug causing some disabled mirrors to be health-checked
- Don't reload logs if outputting on stderr (journald is now happy)
- Fix a crash if no mirrors and no fallbacks are available
- CLI: fix matching of a mirror ID containing the same substring [#19](https://github.com/etix/mirrorbits/issues/19)
- scan: fix an issue causing a constant rehashing of all files [#18](https://github.com/etix/mirrorbits/issues/18)
- The geoip-lite-update script did not update the databases correctly

## v0.2

### FEATURES
- Request a scan using a specific protocol (rsync or ftp)
- Print basic download stats (mirrorbits stats )

### ENHANCEMENTS
- Improve parse errors in the configuration
- Don't log if logdir is unset

### BUGFIXES
- Fix a minor corner case when the client and server are in the exact same location

## v0.1.2

### BUGFIXES
- Fix a possible division by zero during mirror selection

## v0.1.1

### FEATURES
- CLI: a parse error in the mirror configuration can now be retried
- CLI: add support for taking notes / comments on a mirror
- CLI: add a command-line flag to auto-enable a mirror after a successful scan
- CLI: add a flag to scan all mirrors at once

### ENHANCEMENTS
- Improved
mirror selection algorithm ### BUGFIXES - Fix few corner cases in weight distribution ## v0.1.0 Initial release mirrorbits-0.5.1+git20210123.eeea0e0+ds1/Dockerfile000066400000000000000000000016261411706463700212010ustar00rootroot00000000000000FROM golang:latest LABEL maintainer="etix@l0cal.com" ADD . /go/mirrorbits RUN apt-get update -y && \ DEBIAN_FRONTEND=noninteractive apt-get install -y pkg-config zlib1g-dev protobuf-compiler libprotoc-dev rsync && \ apt-get clean RUN go get -u github.com/maxmind/geoipupdate2/cmd/geoipupdate && \ go install -ldflags "-X main.defaultConfigFile=/etc/GeoIP.conf -X main.defaultDatabaseDirectory=/usr/share/GeoIP" github.com/maxmind/geoipupdate2/cmd/geoipupdate && \ echo "AccountID 0\nLicenseKey 000000000000\nEditionIDs GeoLite2-City GeoLite2-Country GeoLite2-ASN" > /etc/GeoIP.conf && \ mkdir /usr/share/GeoIP && \ /go/bin/geoipupdate RUN mkdir /srv/repo /var/log/mirrorbits && \ cd /go/mirrorbits && \ make install PREFIX=/usr RUN cp /go/mirrorbits/contrib/docker/mirrorbits.conf /etc/mirrorbits.conf ENTRYPOINT /usr/bin/mirrorbits daemon -config /etc/mirrorbits.conf EXPOSE 8080 mirrorbits-0.5.1+git20210123.eeea0e0+ds1/LICENSE.txt000066400000000000000000000020751411706463700210310ustar00rootroot00000000000000The MIT License (MIT) Copyright (c) 2014-2019 Ludovic Fauvet Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.mirrorbits-0.5.1+git20210123.eeea0e0+ds1/Makefile000066400000000000000000000055421411706463700206500ustar00rootroot00000000000000.PHONY: all build dev clean release test installdirs install uninstall install-service uninstall-service service-systemd VERSION := $(shell git describe --always --dirty --tags) SHA := $(shell git rev-parse --short HEAD) BRANCH := $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD)) BUILD := $(SHA)-$(BRANCH) BINARY_NAME := mirrorbits BINARY := bin/$(BINARY_NAME) TARBALL := dist/mirrorbits-$(VERSION).tar.gz TEMPLATES := templates/ ifneq (${DESTDIR}$(PREFIX),) TEMPLATES = ${DESTDIR}$(PREFIX)/share/mirrorbits endif PREFIX ?= /usr/local PACKAGE = github.com/etix/mirrorbits LDFLAGS := -X $(PACKAGE)/core.VERSION=$(VERSION) -X $(PACKAGE)/core.BUILD=$(BUILD) -X $(PACKAGE)/config.TEMPLATES_PATH=${TEMPLATES} GOFLAGS := -ldflags "$(LDFLAGS)" GOFLAGSDEV := -race -ldflags "$(LDFLAGS) -X $(PACKAGE)/core.DEV=-dev" GOPATH ?= $(HOME)/go PROTOC_GEN_GO := $(GOPATH)/bin/protoc-gen-go export PATH := $(GOPATH)/bin:$(PATH) PKG_CONFIG ?= /usr/bin/pkg-config SERVICEDIR_SYSTEMD ?= $(shell $(PKG_CONFIG) systemd --variable=systemdsystemunitdir) all: build $(PROTOC_GEN_GO): go get -u github.com/golang/protobuf/protoc-gen-go rpc/rpc.pb.go: rpc/rpc.proto | $(PROTOC_GEN_GO) $(PROTOC) @ if ! which protoc > /dev/null; then \ echo "error: protoc not installed" >&2; \ exit 1; \ fi protoc -I rpc rpc/rpc.proto --go_out=plugins=grpc:rpc build: rpc/rpc.pb.go GO111MODULE=on go build $(GOFLAGS) -o $(BINARY) . dev: rpc/rpc.pb.go GO111MODULE=on go build $(GOFLAGSDEV) -o $(BINARY) . clean: @echo Cleaning workspace... @rm -f $(BINARY) @rm -f contrib/init/systemd/mirrorbits.service @rm -dRf dist release: $(TARBALL) test: rpc/rpc.pb.go GO111MODULE=on go test $(GOFLAGS) ./... installdirs: mkdir -p ${DESTDIR}${PREFIX}/{bin,share} ${DESTDIR}$(PREFIX)/share/mirrorbits install: build installdirs install-service # For the 'make install' to work with sudo it might be necessary to add # the Go binary path to the 'secure_path' and add 'GOPATH' to 'env_keep'. @cp -vf $(BINARY) ${DESTDIR}${PREFIX}/bin/ @cp -vf templates/* ${DESTDIR}$(PREFIX)/share/mirrorbits uninstall: uninstall-service @rm -vf ${DESTDIR}${PREFIX}/bin/$(BINARY_NAME) @rm -vfr ${DESTDIR}$(PREFIX)/share/mirrorbits ifeq (,${SERVICEDIR_SYSTEMD}) install-service: uninstall-service: else install-service: service-systemd install -Dm644 contrib/init/systemd/mirrorbits.service ${DESTDIR}${SERVICEDIR_SYSTEMD}/mirrorbits.service uninstall-service: @rm -vf ${DESTDIR}${SERVICEDIR_SYSTEMD}/mirrorbits.service service-systemd: @sed "s|##PREFIX##|$(PREFIX)|" contrib/init/systemd/mirrorbits.service.in > contrib/init/systemd/mirrorbits.service endif $(TARBALL): build @echo Packaging release... 
@mkdir -p tmp/mirrorbits @cp -f $(BINARY) tmp/mirrorbits/ @cp -r templates tmp/mirrorbits/ @cp mirrorbits.conf tmp/mirrorbits/ @mkdir -p dist/ @tar -czf $@ -C tmp mirrorbits && echo release tarball has been created: $@ @rm -rf tmp mirrorbits-0.5.1+git20210123.eeea0e0+ds1/README.md000066400000000000000000000162771411706463700204760ustar00rootroot00000000000000[![Build Status](https://travis-ci.org/etix/mirrorbits.svg?branch=master)](https://travis-ci.org/etix/mirrorbits) [![Go Report Card](https://goreportcard.com/badge/github.com/etix/mirrorbits)](https://goreportcard.com/report/github.com/etix/mirrorbits) Mirrorbits =========== Mirrorbits is a geographical download redirector written in [Go](https://golang.org) for distributing files efficiently across a set of mirrors. It offers a simple and economic way to create a Content Delivery Network layer using a pure software stack. It is primarily designed for the distribution of large-scale Open-Source projects with a lot of traffic. ![mirrorbits_screenshot](https://cloud.githubusercontent.com/assets/38853/3636687/ab6bba38-0fd8-11e4-9d69-01543ed2531a.png) ## Main Features * Blazing fast, can reach 8K QPS on a single laptop * Easy to deploy and maintain, everything is packed in a single binary * Automatic synchronization with the mirrors over **rsync** or **FTP** * Response can be either JSON or HTTP redirect * Support partial repositories * Complete checksum / size control * Realtime monitoring and reports * Disable misbehaving mirrors without human intervention * Realtime decision making based on location, AS number and defined rules * Smart load-balancing over multiple mirrors in the same area to avoid hotspots * Ability to adjust the weight of each mirror * Limit access to a country, region or ASN for any mirror * Clustering (multiple mirrorbits instances) * High-availability using redis-sentinel * Automatically fix timezone offsets for broken mirrors * Realtime statistics per file / mirror / date * Realtime reconfiguration * Seamless binary upgrade (aka zero downtime upgrade) * [Mirmon](http://www.staff.science.uu.nl/~penni101/mirmon/) support * Full **IPv6** support * more... ## Is it production ready? **Yes!** Mirrorbits has served **billions** of files already and is known to be running in production at * [VideoLAN](http://www.videolan.org) to distribute [VLC media player](http://www.videolan.org/vlc/) since April 2014 * [Popcorn Time](https://popcorntime.io) * [SuperRepo](https://superrepo.org) * [Kodi](http://kodi.tv) (aka XBMC) * [OSMC](https://osmc.tv) * [LineageOS](http://lineageos.org) (previously CyanogenMod) * [Chaos Computer Club](https://media.ccc.de) (media distribution) * [CarbonROM](https://carbonrom.org) * [Endless OS](https://endlessos.com/) * [Parrot OS](https://www.parrotsec.org/) Yet some things might change before the 1.0 release. If you intend to deploy Mirrorbits in a production system it is advised to notify the author first so we can help you to make any transition as seamless as possible! 
# Quick start

## Prerequisites

* Go 1.11 or later
* Protobuf (protoc)
* Redis 3.2 or later (with [persistence](https://redis.io/topics/persistence) enabled)
* GeoIP2 databases from [Maxmind](https://dev.maxmind.com/geoip/geoip2/geolite2/) (preferably updated regularly)

:warning: **GeoIP-legacy is not supported anymore, please use the new GeoIP2 mmdb databases!**

**Optional:**
* redis-sentinel (for high-availability support)

## Upgrading

Before upgrading to the latest version, please check [this guide](https://github.com/etix/mirrorbits/wiki/Upgrade-Guide).

## Installation

You can either get a [prebuilt version](https://github.com/etix/mirrorbits/releases) or choose to build it yourself.

### Docker

A Docker "quick start" can be found [on the wiki](https://github.com/etix/mirrorbits/wiki/Running-within-Docker).

### Manual build

Go >= 1.11:
```
$ git clone https://github.com/etix/mirrorbits.git
$ cd mirrorbits
$ sudo make install
```

Go < 1.11:
```
$ go get -u github.com/etix/mirrorbits
$ cd $GOPATH/src/github.com/etix/mirrorbits
$ sudo make install
```

The resulting executable should now live in your */usr/local/bin* directory. You can also specify a `PREFIX` or `DESTDIR` if necessary:
```
sudo make install PREFIX=/usr
```

## Configuration

A sample configuration file can be found [here](mirrorbits.conf).

## Running

Mirrorbits is a self-contained application and can act, at the same time, as the server and the CLI.

To run the server:
```
mirrorbits daemon
```
Additional options can be found with ```mirrorbits -help```.

To run the cli:
```
mirrorbits help
```

Add a mirror:
```
mirrorbits add -ftp="ftp://ftp.mirrors.example/myproject/" -http="http://ftp.mirrors.example/myproject/" mirrors.example
```

Enable the mirror:
```
mirrorbits enable mirrors.example
```

### Realtime file availability

By appending `?mirrorlist` to any file served by mirrorbits, you'll be able to get some useful realtime information about the given file. You can see a [live example here](https://get.videolan.org/vlc/2.2.4/win32/vlc-2.2.4-win32.exe?mirrorlist).

### Realtime mirror statistics

Mirror statistics are available by querying mirrorbits with the `?mirrorstats` argument. You can see a [live example here](https://get.videolan.org/?mirrorstats).

## Clustering / High availability

Multiple instances of mirrorbits can be started simultaneously on different servers; discovery of the other nodes should be automatic as long as all the instances are connected to the same redis server. In addition to the clustering, it is advised to use redis-sentinel to monitor the database and gracefully handle failover.

## Upgrading

Mirrorbits has a mode called *seamless binary upgrade* to upgrade the server executable at runtime without service disruption. Once the binary has been replaced on the filesystem, just issue the following command in the cli:
```
mirrorbits upgrade
```

## Considerations

* When configured in redirect mode, Mirrorbits can easily serve client requests directly, but it is usually recommended to set it behind a reverse proxy like nginx. In this case, take care to pass the IP address of the client within an X-Forwarded-For header:
```
proxy_set_header X-Forwarded-For $remote_addr;
```
* It is advised to never cache requests intended for Mirrorbits: each request is supposed to be unique, and caching the result might have unexpected consequences.

# We're social!

The best place to discuss mirrorbits is to join the #VideoLAN IRC channel on Freenode.
For the latest news, you can follow [@mirrorbits](http://twitter.com/mirrorbits) on Twitter. # License MIT > Permission is hereby granted, free of charge, to any person obtaining a copy > of this software and associated documentation files (the "Software"), to deal > in the Software without restriction, including without limitation the rights > to use, copy, modify, merge, publish, distribute, sublicense, and/or sell > copies of the Software, and to permit persons to whom the Software is > furnished to do so, subject to the following conditions: > > The above copyright notice and this permission notice shall be included in > all copies or substantial portions of the Software. > > THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR > IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, > FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE > AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER > LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, > OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN > THE SOFTWARE. mirrorbits-0.5.1+git20210123.eeea0e0+ds1/cli/000077500000000000000000000000001411706463700177515ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/cli/commands.go000066400000000000000000000621271411706463700221110ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package cli import ( "bufio" "bytes" "context" "flag" "fmt" "io/ioutil" "net/url" "os" "os/exec" "reflect" "sort" "strings" "sync" "text/tabwriter" "time" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/rpc" "github.com/etix/mirrorbits/utils" "github.com/golang/protobuf/ptypes" "github.com/golang/protobuf/ptypes/empty" "github.com/howeyc/gopass" "github.com/op/go-logging" "github.com/pkg/errors" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" "gopkg.in/yaml.v2" ) const ( commentSeparator = "##### Comments go below this line #####" defaultRPCTimeout = time.Second * 10 ) var ( log = logging.MustGetLogger("main") ) type cli struct { sync.Mutex rpcconn *grpc.ClientConn creds *loginCreds } // ParseCommands parses the command line and call the appropriate functions func ParseCommands(args ...string) error { c := &cli{ creds: &loginCreds{ Password: core.RPCPassword, }, } if len(args) > 0 && args[0] != "help" { method, exists := c.getMethod(args[0]) if !exists { fmt.Println("Error: Command not found:", args[0]) return c.CmdHelp() } if len(c.creds.Password) == 0 && core.RPCAskPass { fmt.Print("Password: ") passwd, err := gopass.GetPasswdMasked() if err != nil { return err } c.creds.Password = string(passwd) } ret := method.Func.CallSlice([]reflect.Value{ reflect.ValueOf(c), reflect.ValueOf(args[1:]), })[0].Interface() if c.rpcconn != nil { c.rpcconn.Close() } if ret == nil { return nil } return ret.(error) } return c.CmdHelp() } func (c *cli) getMethod(name string) (reflect.Method, bool) { methodName := "Cmd" + strings.ToUpper(name[:1]) + strings.ToLower(name[1:]) return reflect.TypeOf(c).MethodByName(methodName) } func (c *cli) CmdHelp() error { help := fmt.Sprintf("Usage: mirrorbits [OPTIONS] COMMAND [arg...]\n\nA smart download redirector.\n\n") help += fmt.Sprintf("Server commands:\n %-10.10s%s\n\n", "daemon", "Start the server") help += fmt.Sprintf("CLI commands:\n") for _, command := range 
[][]string{ {"add", "Add a new mirror"}, {"disable", "Disable a mirror"}, {"edit", "Edit a mirror"}, {"enable", "Enable a mirror"}, {"export", "Export the mirror database"}, {"list", "List all mirrors"}, {"logs", "Print logs of a mirror"}, {"refresh", "Refresh the local repository"}, {"reload", "Reload configuration"}, {"remove", "Remove a mirror"}, {"scan", "(Re-)Scan a mirror"}, {"show", "Print a mirror configuration"}, {"stats", "Show download stats"}, {"upgrade", "Seamless binary upgrade"}, {"version", "Print version information"}, } { help += fmt.Sprintf(" %-10.10s%s\n", command[0], command[1]) } fmt.Fprintf(os.Stderr, "%s\n", help) return nil } // SubCmd prints the usage of a subcommand func SubCmd(name, signature, description string) *flag.FlagSet { flags := flag.NewFlagSet(name, flag.ContinueOnError) flags.Usage = func() { fmt.Fprintf(os.Stderr, "\nUsage: mirrorbits %s %s\n\n%s\n\n", name, signature, description) flags.PrintDefaults() } return flags } type ByDate []*rpc.Mirror func (d ByDate) Len() int { return len(d) } func (d ByDate) Swap(i, j int) { d[i], d[j] = d[j], d[i] } func (d ByDate) Less(i, j int) bool { return d[i].StateSince.Seconds > d[j].StateSince.Seconds } func (c *cli) CmdList(args ...string) error { cmd := SubCmd("list", "", "Get the list of mirrors") http := cmd.Bool("http", false, "Print HTTP addresses") rsync := cmd.Bool("rsync", false, "Print rsync addresses") ftp := cmd.Bool("ftp", false, "Print FTP addresses") location := cmd.Bool("location", false, "Print the country and continent code") state := cmd.Bool("state", true, "Print the state of the mirror") score := cmd.Bool("score", false, "Print the score of the mirror") disabled := cmd.Bool("disabled", false, "List disabled mirrors only") enabled := cmd.Bool("enabled", false, "List enabled mirrors only") down := cmd.Bool("down", false, "List only mirrors currently down") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 0 { cmd.Usage() return nil } client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() list, err := client.List(ctx, &empty.Empty{}) if err != nil { log.Fatal("list error:", err) } sort.Sort(ByDate(list.Mirrors)) w := new(tabwriter.Writer) w.Init(os.Stdout, 0, 8, 0, '\t', 0) fmt.Fprint(w, "Identifier ") if *score == true { fmt.Fprint(w, "\tSCORE") } if *http == true { fmt.Fprint(w, "\tHTTP ") } if *rsync == true { fmt.Fprint(w, "\tRSYNC ") } if *ftp == true { fmt.Fprint(w, "\tFTP ") } if *location == true { fmt.Fprint(w, "\tLOCATION ") } if *state == true { fmt.Fprint(w, "\tSTATE\tSINCE") } fmt.Fprint(w, "\n") for _, mirror := range list.Mirrors { if *disabled == true { if mirror.Enabled == true { continue } } if *enabled == true { if mirror.Enabled == false { continue } } if *down == true { if mirror.Up == true { continue } } stateSince, err := ptypes.Timestamp(mirror.StateSince) if err != nil { log.Fatal("list error:", err) } fmt.Fprintf(w, "%s ", mirror.Name) if *score == true { fmt.Fprintf(w, "\t%d ", mirror.Score) } if *http == true { fmt.Fprintf(w, "\t%s ", mirror.HttpURL) } if *rsync == true { fmt.Fprintf(w, "\t%s ", mirror.RsyncURL) } if *ftp == true { fmt.Fprintf(w, "\t%s ", mirror.FtpURL) } if *location == true { countries := strings.Split(mirror.CountryCodes, " ") countryCode := "/" if len(countries) >= 1 { countryCode = countries[0] } fmt.Fprintf(w, "\t%s (%s) ", countryCode, mirror.ContinentCode) } if *state == true { if mirror.Enabled == false { fmt.Fprintf(w, "\tdisabled") } else if mirror.Up == true { 
fmt.Fprintf(w, "\tup") } else { fmt.Fprintf(w, "\tdown") } fmt.Fprintf(w, " \t(%s)", stateSince.Format(time.RFC1123)) } fmt.Fprint(w, "\n") } w.Flush() return nil } func (c *cli) CmdAdd(args ...string) error { cmd := SubCmd("add", "[OPTIONS] IDENTIFIER", "Add a new mirror") http := cmd.String("http", "", "HTTP base URL") rsync := cmd.String("rsync", "", "RSYNC base URL (for scanning only)") ftp := cmd.String("ftp", "", "FTP base URL (for scanning only)") sponsorName := cmd.String("sponsor-name", "", "Name of the sponsor") sponsorURL := cmd.String("sponsor-url", "", "URL of the sponsor") sponsorLogo := cmd.String("sponsor-logo", "", "URL of a logo to display for this mirror") adminName := cmd.String("admin-name", "", "Admin's name") adminEmail := cmd.String("admin-email", "", "Admin's email") customData := cmd.String("custom-data", "", "Associated data to return when the mirror is selected (i.e. json document)") continentOnly := cmd.Bool("continent-only", false, "The mirror should only handle its continent") countryOnly := cmd.Bool("country-only", false, "The mirror should only handle its country") asOnly := cmd.Bool("as-only", false, "The mirror should only handle clients in the same AS number") score := cmd.Int("score", 0, "Weight to give to the mirror during selection") comment := cmd.String("comment", "", "Comment") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() < 1 { cmd.Usage() return nil } if strings.Contains(cmd.Arg(0), " ") { fmt.Fprintf(os.Stderr, "The identifier cannot contain a space\n") os.Exit(-1) } if *http == "" { fmt.Fprintf(os.Stderr, "You *must* pass at least an HTTP URL\n") os.Exit(-1) } if !strings.HasPrefix(*http, "http://") && !strings.HasPrefix(*http, "https://") { *http = "http://" + *http } _, err := url.Parse(*http) if err != nil { fmt.Fprintf(os.Stderr, "Can't parse url\n") os.Exit(-1) } mirror := &mirrors.Mirror{ Name: cmd.Arg(0), HttpURL: *http, RsyncURL: *rsync, FtpURL: *ftp, SponsorName: *sponsorName, SponsorURL: *sponsorURL, SponsorLogoURL: *sponsorLogo, AdminName: *adminName, AdminEmail: *adminEmail, CustomData: *customData, ContinentOnly: *continentOnly, CountryOnly: *countryOnly, ASOnly: *asOnly, Score: *score, Comment: *comment, } client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() m, err := rpc.MirrorToRPC(mirror) if err != nil { log.Fatal("edit error:", err) } reply, err := client.AddMirror(ctx, m) if err != nil { if err.Error() == rpc.ErrNameAlreadyTaken.Error() { log.Fatalf("Mirror %s already exists!\n", mirror.Name) } log.Fatal("edit error:", err) } for i := 0; i < len(reply.Warnings); i++ { fmt.Println(reply.Warnings[i]) if i == len(reply.Warnings)-1 { fmt.Println("") } } if reply.Country != "" { fmt.Println("Mirror location:") fmt.Printf("Latitude: %.4f\n", reply.Latitude) fmt.Printf("Longitude: %.4f\n", reply.Longitude) fmt.Printf("Continent: %s\n", reply.Continent) fmt.Printf("Country: %s\n", reply.Country) fmt.Printf("ASN: %s\n", reply.ASN) fmt.Println("") } fmt.Printf("Mirror '%s' added successfully\n", mirror.Name) fmt.Printf("Enable this mirror using\n $ mirrorbits enable %s\n", mirror.Name) return nil } func (c *cli) CmdRemove(args ...string) error { cmd := SubCmd("remove", "IDENTIFIER", "Remove an existing mirror") force := cmd.Bool("f", false, "Never prompt for confirmation") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 1 { cmd.Usage() return nil } id, name := c.matchMirror(cmd.Arg(0)) if *force == false { fmt.Printf("Removing %s, are 
you sure? [y/N]", name) reader := bufio.NewReader(os.Stdin) s, _ := reader.ReadString('\n') switch s[0] { case 'y', 'Y': break default: return nil } } client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() _, err := client.RemoveMirror(ctx, &rpc.MirrorIDRequest{ ID: int32(id), }) if err != nil { log.Fatal("remove error:", err) } fmt.Printf("Mirror '%s' removed successfully\n", name) return nil } func (c *cli) CmdScan(args ...string) error { cmd := SubCmd("scan", "[IDENTIFIER]", "(Re-)Scan a mirror") enable := cmd.Bool("enable", false, "Enable the mirror automatically if the scan is successful") all := cmd.Bool("all", false, "Scan all mirrors at once") ftp := cmd.Bool("ftp", false, "Force a scan using FTP") rsync := cmd.Bool("rsync", false, "Force a scan using rsync") timeout := cmd.Uint("timeout", 0, "Timeout in seconds") if err := cmd.Parse(args); err != nil { return nil } if !*all && cmd.NArg() != 1 || *all && cmd.NArg() != 0 { cmd.Usage() return nil } client := c.GetRPC() ctx, cancel := context.WithCancel(context.Background()) defer cancel() list := make(map[int]string) // Get the list of mirrors to scan if *all == true { reply, err := client.MatchMirror(ctx, &rpc.MatchRequest{ Pattern: "", // Match all of them }) if err != nil { return errors.New("Cannot fetch the list of mirrors") } for _, m := range reply.Mirrors { list[int(m.ID)] = m.Name } } else { // Single mirror id, name := c.matchMirror(cmd.Arg(0)) list[id] = name } // Set the method of the scan (if not default) var method rpc.ScanMirrorRequest_Method if *ftp == false && *rsync == false { method = rpc.ScanMirrorRequest_ALL } else if *rsync == true { method = rpc.ScanMirrorRequest_RSYNC } else if *ftp == true { method = rpc.ScanMirrorRequest_FTP } for id, name := range list { if *timeout > 0 { ctx, cancel = context.WithTimeout(context.Background(), time.Duration(*timeout)*time.Second) defer cancel() } fmt.Printf("Scanning %s... ", name) reply, err := client.ScanMirror(ctx, &rpc.ScanMirrorRequest{ ID: int32(id), AutoEnable: *enable, Protocol: method, }) if err != nil { s := status.Convert(err) if s.Code() == codes.FailedPrecondition || len(list) == 1 { return errors.New("\nscan error: " + grpc.ErrorDesc(err)) } fmt.Println("scan error:", grpc.ErrorDesc(err)) continue } else { fmt.Printf("%d files indexed, %d known and %d removed\n", reply.FilesIndexed, reply.KnownIndexed, reply.Removed) if reply.GetTZOffsetMs() != 0 { fmt.Printf(" ∟ Timezone offset detected and corrected: %d milliseconds\n", reply.TZOffsetMs) } if reply.Enabled { fmt.Println(" ∟ Enabled") } } } return nil } func (c *cli) CmdRefresh(args ...string) error { cmd := SubCmd("refresh", "", "Scan the local repository") rehash := cmd.Bool("rehash", false, "Force a rehash of the files") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 0 { cmd.Usage() return nil } fmt.Print("Refreshing the local repository... 
") client := c.GetRPC() ctx, cancel := context.WithCancel(context.Background()) defer cancel() _, err := client.RefreshRepository(ctx, &rpc.RefreshRepositoryRequest{ Rehash: *rehash, }) if err != nil { fmt.Println("") log.Fatal(err) } fmt.Println("done") return nil } func (c *cli) matchMirror(pattern string) (id int, name string) { if len(pattern) == 0 { return -1, "" } client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() reply, err := client.MatchMirror(ctx, &rpc.MatchRequest{ Pattern: pattern, }) if err != nil { fmt.Fprintf(os.Stderr, "mirror matching: %s\n", err) os.Exit(1) } switch len(reply.Mirrors) { case 0: fmt.Fprintf(os.Stderr, "No match for '%s'\n", pattern) os.Exit(1) case 1: id, name, err := GetSingle(reply.Mirrors) if err != nil { log.Fatal("unexpected error:", err) } return id, name default: fmt.Fprintln(os.Stderr, "Multiple match:") for _, mirror := range reply.Mirrors { fmt.Fprintf(os.Stderr, " %s\n", mirror.Name) } os.Exit(1) } return } func GetSingle(list []*rpc.MirrorID) (int, string, error) { if len(list) == 0 { return -1, "", errors.New("list is empty") } else if len(list) > 1 { return -1, "", errors.New("too many results") } return int(list[0].ID), list[0].Name, nil } func (c *cli) CmdEdit(args ...string) error { cmd := SubCmd("edit", "[IDENTIFIER]", "Edit a mirror") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 1 { cmd.Usage() return nil } // Find the editor to use editor := os.Getenv("EDITOR") if editor == "" { log.Fatal("Environment variable $EDITOR not set") } id, _ := c.matchMirror(cmd.Arg(0)) client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() rpcm, err := client.MirrorInfo(ctx, &rpc.MirrorIDRequest{ ID: int32(id), }) if err != nil { log.Fatal("edit error:", err) } mirror, err := rpc.MirrorFromRPC(rpcm) if err != nil { log.Fatal("edit error:", err) } // Generate a yaml configuration string from the struct out, err := yaml.Marshal(mirror) // Open a temporary file f, err := ioutil.TempFile(os.TempDir(), "edit") if err != nil { log.Fatal("Cannot create temporary file:", err) } defer os.Remove(f.Name()) f.WriteString("# You can now edit this mirror configuration.\n" + "# Just save and quit when you're done.\n\n") f.WriteString(string(out)) f.WriteString(fmt.Sprintf("\n%s\n\n%s\n", commentSeparator, mirror.Comment)) f.Close() // Checksum the original file chk, _ := filesystem.Sha256sum(f.Name()) reopen: // Launch the editor with the filename as first parameter exe := exec.Command(editor, f.Name()) exe.Stdin = os.Stdin exe.Stdout = os.Stdout exe.Stderr = os.Stderr err = exe.Run() if err != nil { log.Fatal(err) } // Read the file back out, err = ioutil.ReadFile(f.Name()) if err != nil { log.Fatal("Cannot read file", f.Name()) } // Checksum the file back and compare chk2, _ := filesystem.Sha256sum(f.Name()) if bytes.Compare(chk, chk2) == 0 { fmt.Println("Aborted - settings are unmodified, so there is nothing to change.") return nil } var comment string yamlstr := string(out) commentIndex := strings.Index(yamlstr, commentSeparator) if commentIndex > 0 { comment = strings.TrimSpace(yamlstr[commentIndex+len(commentSeparator):]) yamlstr = yamlstr[:commentIndex] } reopen := func(err error) bool { eagain: fmt.Printf("%s\nRetry? 
[Y/n]", err.Error()) reader := bufio.NewReader(os.Stdin) s, _ := reader.ReadString('\n') switch s[0] { case 'y', 'Y', 10: return true case 'n', 'N': fmt.Println("Aborted") return false default: goto eagain } } // Fill the struct from the yaml err = yaml.Unmarshal([]byte(yamlstr), &mirror) if err != nil { switch reopen(err) { case true: goto reopen case false: return nil } } mirror.Comment = comment ctx, cancel = context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() m, err := rpc.MirrorToRPC(mirror) if err != nil { log.Fatal("edit error:", err) } reply, err := client.UpdateMirror(ctx, m) if err != nil { if err.Error() == rpc.ErrNameAlreadyTaken.Error() { switch reopen(errors.New("Name already taken")) { case true: goto reopen case false: return nil } } log.Fatal("edit error:", err) } if len(reply.Diff) > 0 { fmt.Println(reply.Diff) } fmt.Printf("Mirror '%s' edited successfully\n", mirror.Name) return nil } func (c *cli) CmdShow(args ...string) error { cmd := SubCmd("show", "[IDENTIFIER]", "Print a mirror configuration") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 1 { cmd.Usage() return nil } id, _ := c.matchMirror(cmd.Arg(0)) client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() rpcm, err := client.MirrorInfo(ctx, &rpc.MirrorIDRequest{ ID: int32(id), }) if err != nil { log.Fatal("edit error:", err) } mirror, err := rpc.MirrorFromRPC(rpcm) if err != nil { log.Fatal("edit error:", err) } // Generate a yaml configuration string from the struct out, err := yaml.Marshal(mirror) if err != nil { log.Fatal("show error:", err) } fmt.Printf("%s\nComment:\n%s\n", out, mirror.Comment) return nil } func (c *cli) CmdExport(args ...string) error { cmd := SubCmd("export", "[format]", "Export the mirror database.\n\nAvailable formats: mirmon") rsync := cmd.Bool("rsync", true, "Export rsync URLs") http := cmd.Bool("http", true, "Export http URLs") ftp := cmd.Bool("ftp", true, "Export ftp URLs") disabled := cmd.Bool("disabled", true, "Export disabled mirrors") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 1 { cmd.Usage() return nil } if cmd.Arg(0) != "mirmon" { fmt.Fprintf(os.Stderr, "Unsupported format\n") cmd.Usage() return nil } client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() list, err := client.List(ctx, &empty.Empty{}) if err != nil { log.Fatal("export error:", err) } w := new(tabwriter.Writer) w.Init(os.Stdout, 0, 8, 0, '\t', 0) for _, m := range list.Mirrors { if *disabled == false { if m.Enabled == false { continue } } ccodes := strings.Fields(m.CountryCodes) urls := make([]string, 0, 3) if *rsync == true && m.RsyncURL != "" { urls = append(urls, m.RsyncURL) } if *http == true && m.HttpURL != "" { urls = append(urls, m.HttpURL) } if *ftp == true && m.FtpURL != "" { urls = append(urls, m.FtpURL) } for _, u := range urls { fmt.Fprintf(w, "%s\t%s\t%s\n", ccodes[0], u, m.AdminEmail) } } w.Flush() return nil } func (c *cli) CmdEnable(args ...string) error { cmd := SubCmd("enable", "[IDENTIFIER]", "Enable a mirror") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 1 { cmd.Usage() return nil } c.changeStatus(cmd.Arg(0), true) return nil } func (c *cli) CmdDisable(args ...string) error { cmd := SubCmd("disable", "[IDENTIFIER]", "Disable a mirror") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 1 { cmd.Usage() return nil } c.changeStatus(cmd.Arg(0), false) return nil } func (c 
*cli) changeStatus(pattern string, enabled bool) { id, name := c.matchMirror(pattern) client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() _, err := client.ChangeStatus(ctx, &rpc.ChangeStatusRequest{ ID: int32(id), Enabled: enabled, }) if err != nil { if enabled { log.Fatalf("Couldn't enable mirror '%s': %s\n", name, err) } else { log.Fatalf("Couldn't disable mirror '%s': %s\n", name, err) } } if enabled { fmt.Printf("Mirror '%s' enabled successfully\n", name) } else { fmt.Printf("Mirror '%s' disabled successfully\n", name) } return } func (c *cli) CmdStats(args ...string) error { cmd := SubCmd("stats", "[OPTIONS] [mirror|file] [IDENTIFIER|PATTERN]", "Show download stats for a particular mirror or a file pattern") dateStart := cmd.String("start-date", "", "Starting date (format YYYY-MM-DD)") dateEnd := cmd.String("end-date", "", "Ending date (format YYYY-MM-DD)") human := cmd.Bool("h", true, "Human readable version") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 2 || (cmd.Arg(0) != "mirror" && cmd.Arg(0) != "file") { cmd.Usage() return nil } start, err := time.Parse("2006-1-2", *dateStart) if err != nil { start = time.Now() } startproto, _ := ptypes.TimestampProto(start) end, err := time.Parse("2006-1-2", *dateEnd) if err != nil { end = time.Now() } endproto, _ := ptypes.TimestampProto(end) client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() if cmd.Arg(0) == "file" { // File stats reply, err := client.StatsFile(ctx, &rpc.StatsFileRequest{ Pattern: cmd.Arg(1), DateStart: startproto, DateEnd: endproto, }) if err != nil { log.Fatal("file stats error:", err) } // Format the results w := new(tabwriter.Writer) w.Init(os.Stdout, 0, 8, 0, '\t', 0) // Sort keys and count requests var keys []string var requests int64 for k, req := range reply.Files { requests += req keys = append(keys, k) } sort.Strings(keys) for _, k := range keys { fmt.Fprintf(w, "%s:\t%d\n", k, reply.Files[k]) } if len(keys) > 0 { // Add a line separator fmt.Fprintf(w, "\t\n") } fmt.Fprintf(w, "Total download requests: \t%d\n", requests) w.Flush() } else if cmd.Arg(0) == "mirror" { // Mirror stats id, name := c.matchMirror(cmd.Arg(1)) reply, err := client.StatsMirror(ctx, &rpc.StatsMirrorRequest{ ID: int32(id), DateStart: startproto, DateEnd: endproto, }) if err != nil { log.Fatal("mirror stats error:", err) } // Format the results w := new(tabwriter.Writer) w.Init(os.Stdout, 0, 8, 0, '\t', 0) fmt.Fprintf(w, "Identifier:\t%s\n", name) if !reply.Mirror.Enabled { fmt.Fprintf(w, "Status:\tdisabled\n") } else if reply.Mirror.Up { fmt.Fprintf(w, "Status:\tup\n") } else { fmt.Fprintf(w, "Status:\tdown\n") } fmt.Fprintf(w, "Download requests:\t%d\n", reply.Requests) fmt.Fprint(w, "Bytes transferred:\t") if *human { fmt.Fprintln(w, utils.ReadableSize(reply.Bytes)) } else { fmt.Fprintln(w, reply.Bytes) } w.Flush() } return nil } func (c *cli) CmdLogs(args ...string) error { cmd := SubCmd("logs", "[IDENTIFIER]", "Print logs of a mirror") maxResults := cmd.Uint("l", 500, "Maximum number of logs to return") if err := cmd.Parse(args); err != nil { return nil } if cmd.NArg() != 1 { cmd.Usage() return nil } id, name := c.matchMirror(cmd.Arg(0)) client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() resp, err := client.GetMirrorLogs(ctx, &rpc.GetMirrorLogsRequest{ ID: int32(id), MaxResults: int32(*maxResults), }) if err != nil { log.Fatal("logs 
error:", err) } if len(resp.Line) == 0 { fmt.Printf("No logs for %s\n", name) return nil } fmt.Printf("Printing logs for %s:\n", name) for _, l := range resp.Line { fmt.Println(l) } return nil } func (c *cli) CmdReload(args ...string) error { client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() _, err := client.Reload(ctx, &empty.Empty{}) if err != nil { log.Fatal("upgrade error:", err) } return nil } func (c *cli) CmdUpgrade(args ...string) error { client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() _, err := client.Upgrade(ctx, &empty.Empty{}) if err != nil { log.Fatal("upgrade error:", err) } return nil } func (c *cli) CmdVersion(args ...string) error { fmt.Printf("Client:\n") core.PrintVersion(core.GetVersionInfo()) fmt.Println() client := c.GetRPC() ctx, cancel := context.WithTimeout(context.Background(), defaultRPCTimeout) defer cancel() reply, err := client.GetVersion(ctx, &empty.Empty{}) if err != nil { s := status.Convert(err) return errors.Wrap(s.Err(), "version error") } if reply.Version != "" { fmt.Printf("Server:\n") core.PrintVersion(core.VersionInfo{ Version: reply.Version, Build: reply.Build, GoVersion: reply.GoVersion, OS: reply.OS, Arch: reply.Arch, GoMaxProcs: int(reply.GoMaxProcs), }) } return nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/cli/rpc.go000066400000000000000000000026311411706463700210660ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package cli import ( "context" "fmt" "os" "strconv" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/rpc" "github.com/golang/protobuf/ptypes/empty" "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" ) func (c *cli) GetRPC() rpc.CLIClient { c.Lock() defer c.Unlock() if c.rpcconn == nil { conn, err := grpc.Dial(core.RPCHost+":"+strconv.FormatUint(uint64(core.RPCPort), 10), grpc.WithInsecure(), grpc.WithBlock(), grpc.FailOnNonTempDialError(true), grpc.WithPerRPCCredentials(c.creds)) if err != nil { fmt.Fprintf(os.Stderr, "rpc: %s\n", err) os.Exit(1) } c.rpcconn = conn client := rpc.NewCLIClient(c.rpcconn) _, err = client.Ping(context.Background(), &empty.Empty{}) s := status.Convert(err) if s.Code() == codes.Unauthenticated { if len(c.creds.Password) == 0 { fmt.Fprintf(os.Stderr, "Please set the server password with the -P option.\n") } else { fmt.Fprintf(os.Stderr, "Password refused\n") } os.Exit(1) } } return rpc.NewCLIClient(c.rpcconn) } type loginCreds struct { Password string } func (c *loginCreds) GetRequestMetadata(context.Context, ...string) (map[string]string, error) { return map[string]string{ "password": c.Password, }, nil } func (c *loginCreds) RequireTransportSecurity() bool { return false } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/config/000077500000000000000000000000001411706463700204475ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/config/config.go000066400000000000000000000147431411706463700222540ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package config import ( "fmt" "io/ioutil" "os" "path/filepath" "sync" "github.com/etix/mirrorbits/core" "github.com/op/go-logging" "gopkg.in/yaml.v2" ) var ( // TEMPLATES_PATH is set at compile time TEMPLATES_PATH = "" ) var ( log = logging.MustGetLogger("main") config *Configuration configMutex sync.RWMutex subscribers []chan bool subscribersLock sync.RWMutex ) func 
defaultConfig() Configuration { return Configuration{ Repository: "", Templates: TEMPLATES_PATH, LocalJSPath: "", OutputMode: "auto", ListenAddress: ":8080", Gzip: false, RedisAddress: "127.0.0.1:6379", RedisPassword: "", RedisDB: 0, LogDir: "", TraceFileLocation: "", GeoipDatabasePath: "/usr/share/GeoIP/", ConcurrentSync: 5, ScanInterval: 30, CheckInterval: 1, RepositoryScanInterval: 5, MaxLinkHeaders: 10, FixTimezoneOffsets: false, Hashes: hashing{ SHA1: false, SHA256: true, MD5: false, }, DisallowRedirects: false, WeightDistributionRange: 1.5, DisableOnMissingFile: false, RPCListenAddress: "localhost:3390", RPCPassword: "", } } // Configuration contains all the option available in the yaml file type Configuration struct { Repository string `yaml:"Repository"` Templates string `yaml:"Templates"` LocalJSPath string `yaml:"LocalJSPath"` OutputMode string `yaml:"OutputMode"` ListenAddress string `yaml:"ListenAddress"` Gzip bool `yaml:"Gzip"` RedisAddress string `yaml:"RedisAddress"` RedisPassword string `yaml:"RedisPassword"` RedisDB int `yaml:"RedisDB"` LogDir string `yaml:"LogDir"` TraceFileLocation string `yaml:"TraceFileLocation"` GeoipDatabasePath string `yaml:"GeoipDatabasePath"` ConcurrentSync int `yaml:"ConcurrentSync"` ScanInterval int `yaml:"ScanInterval"` CheckInterval int `yaml:"CheckInterval"` RepositoryScanInterval int `yaml:"RepositoryScanInterval"` MaxLinkHeaders int `yaml:"MaxLinkHeaders"` FixTimezoneOffsets bool `yaml:"FixTimezoneOffsets"` Hashes hashing `yaml:"Hashes"` DisallowRedirects bool `yaml:"DisallowRedirects"` WeightDistributionRange float32 `yaml:"WeightDistributionRange"` DisableOnMissingFile bool `yaml:"DisableOnMissingFile"` Fallbacks []fallback `yaml:"Fallbacks"` RedisSentinelMasterName string `yaml:"RedisSentinelMasterName"` RedisSentinels []sentinels `yaml:"RedisSentinels"` RPCListenAddress string `yaml:"RPCListenAddress"` RPCPassword string `yaml:"RPCPassword"` } type fallback struct { URL string `yaml:"URL"` CountryCode string `yaml:"CountryCode"` ContinentCode string `yaml:"ContinentCode"` } type sentinels struct { Host string `yaml:"Host"` } type hashing struct { SHA1 bool `yaml:"SHA1"` SHA256 bool `yaml:"SHA256"` MD5 bool `yaml:"MD5"` } // LoadConfig loads the configuration file if it has not yet been loaded func LoadConfig() { if config != nil { return } err := ReloadConfig() if err != nil { log.Fatal(err) } } // ReloadConfig reloads the configuration file and update it globally func ReloadConfig() error { if core.ConfigFile == "" { if fileExists("/etc/mirrorbits.conf") { core.ConfigFile = "/etc/mirrorbits.conf" } } content, err := ioutil.ReadFile(core.ConfigFile) if err != nil { fmt.Println("Configuration could not be found.\n\tUse -config ") os.Exit(1) } if os.Getenv("DEBUG") != "" { fmt.Println("Reading configuration from", core.ConfigFile) } c := defaultConfig() // Overload the default configuration with the user's one err = yaml.Unmarshal(content, &c) if err != nil { return fmt.Errorf("%s in %s", err, core.ConfigFile) } // Sanitize if c.WeightDistributionRange <= 0 { return fmt.Errorf("WeightDistributionRange must be > 0") } if !isInSlice(c.OutputMode, []string{"auto", "json", "redirect"}) { return fmt.Errorf("Config: outputMode can only be set to 'auto', 'json' or 'redirect'") } if c.Repository == "" { return fmt.Errorf("Path to local repository not configured (see mirrorbits.conf)") } c.Repository, err = filepath.Abs(c.Repository) if err != nil { return fmt.Errorf("Invalid local repository path: %s", err) } if c.RepositoryScanInterval < 0 { 
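// A negative scan interval is treated as zero rather than rejected as a configuration error.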
c.RepositoryScanInterval = 0 } if config != nil && (c.RedisAddress != config.RedisAddress || c.RedisPassword != config.RedisPassword || !testSentinelsEq(c.RedisSentinels, config.RedisSentinels)) { // TODO reload redis connections // Currently established connections will be updated only in case of disconnection } // Lock the pointer during the swap configMutex.Lock() config = &c configMutex.Unlock() // Notify all subscribers that the configuration has been reloaded notifySubscribers() return nil } // GetConfig returns a pointer to a configuration object // FIXME reading from the pointer could cause a race! func GetConfig() *Configuration { configMutex.RLock() defer configMutex.RUnlock() if config == nil { panic("Configuration not loaded") } return config } // SetConfiguration is only used for testing purpose func SetConfiguration(c *Configuration) { config = c } // SubscribeConfig allows subscribers to get notified when // the configuration is updated. func SubscribeConfig(subscriber chan bool) { subscribersLock.Lock() defer subscribersLock.Unlock() subscribers = append(subscribers, subscriber) } func notifySubscribers() { subscribersLock.RLock() defer subscribersLock.RUnlock() for _, subscriber := range subscribers { select { case subscriber <- true: default: // Don't block if the subscriber is unavailable // and discard the message. } } } func fileExists(filename string) bool { _, err := os.Stat(filename) return err == nil } func testSentinelsEq(a, b []sentinels) bool { if len(a) != len(b) { return false } for i := range a { if a[i].Host != b[i].Host { return false } } return true } //DUPLICATE func isInSlice(a string, list []string) bool { for _, b := range list { if b == a { return true } } return false } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/000077500000000000000000000000001411706463700206425ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/docker/000077500000000000000000000000001411706463700221115ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/docker/mirrorbits.conf000066400000000000000000000001301411706463700251460ustar00rootroot00000000000000# vim: set ft=yaml: Repository: /srv/repo ListenAddress: :8080 RedisAddress: redis:6379mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/geoip/000077500000000000000000000000001411706463700217455ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/geoip/geoip-lite-update000066400000000000000000000046361411706463700252170ustar00rootroot00000000000000#!/bin/bash # geoip-lite-update -- update geoip lite database(s). # (c) 2008,2009,2010,2011,2012,2013,2014 poeml@cmdline.net # Distribute under GPLv2 if it proves worthy. # With added support for: # - GeoLiteCityv6 # - GeoIPASNum # - GeoIPASNumv6 # by Ludovic Fauvet for i in curl wget ftp; do if which $i &>/dev/null; then prg=$i break fi done if [ -z "$prg" ]; then echo cannot find a tool to download, like curl or wget >&2 exit 1 fi case $prg in curl) prg="curl -s -O" ;; wget) prg="wget --quiet" ;; esac set -e # GeoIP data used to be in /usr/share/GeoIP in the openSUSE package, and was moved later. 
# try the old location first - if it's present, it means that the user had his own # updated database there cd /usr/share/GeoIP/ 2>/dev/null || cd /var/lib/GeoIP rm -f GeoIP.dat.gz $prg http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz gunzip -c GeoIP.dat.gz > GeoIP.dat.updated.new mv GeoIP.dat.updated.new GeoIP.dat.updated rm -f GeoLiteCity.dat.gz $prg http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz gunzip -c GeoLiteCity.dat.gz > GeoLiteCity.dat.updated.new mv GeoLiteCity.dat.updated.new GeoLiteCity.dat.updated rm -f GeoLiteCityv6.dat.gz $prg http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz gunzip -c GeoLiteCityv6.dat.gz > GeoLiteCityv6.dat.updated.new mv GeoLiteCityv6.dat.updated.new GeoLiteCityv6.dat.updated rm -f GeoIPv6.dat.gz $prg http://geolite.maxmind.com/download/geoip/database/GeoIPv6.dat.gz gunzip -c GeoIPv6.dat.gz > GeoIPv6.dat.updated.new mv GeoIPv6.dat.updated.new GeoIPv6.dat.updated rm -f GeoIPASNum.dat.gz $prg http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNum.dat.gz gunzip -c GeoIPASNum.dat.gz > GeoIPASNum.dat.updated.new mv GeoIPASNum.dat.updated.new GeoIPASNum.dat.updated rm -f GeoIPASNumv6.dat.gz $prg http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNumv6.dat.gz gunzip -c GeoIPASNumv6.dat.gz > GeoIPASNumv6.dat.updated.new mv GeoIPASNumv6.dat.updated.new GeoIPASNumv6.dat.updated set +e if [ "$1" = "--no-reload" ]; then exit 0 fi if [ -x /etc/init.d/apache2 ]; then /etc/init.d/apache2 reload elif [ -x /etc/init.d/httpd ]; then /etc/init.d/httpd reload elif [ -x /usr/bin/systemctl ]; then /usr/bin/systemctl reload httpd >/dev/null 2>&1 || : elif [ -x /bin/systemctl ]; then /bin/systemctl reload httpd >/dev/null 2>&1 || : fi mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/init/000077500000000000000000000000001411706463700216055ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/init/systemd/000077500000000000000000000000001411706463700232755ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/init/systemd/.gitignore000066400000000000000000000000231411706463700252600ustar00rootroot00000000000000mirrorbits.service mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/init/systemd/mirrorbits.service.in000066400000000000000000000007361411706463700274660ustar00rootroot00000000000000[Unit] Description=Mirrorbits redirector Documentation=https://github.com/etix/mirrorbits After=network.target [Service] Type=notify DynamicUser=yes LogsDirectory=mirrorbits RuntimeDirectory=mirrorbits PIDFile=/run/mirrorbits/mirrorbits.pid ExecStart=##PREFIX##/bin/mirrorbits daemon -p /run/mirrorbits/mirrorbits.pid ExecReload=/bin/kill -HUP $MAINPID ExecStop=-/bin/kill -QUIT $MAINPID TimeoutStopSec=5 KillMode=mixed Restart=on-failure [Install] WantedBy=multi-user.target mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/init/sysvinit-debian/000077500000000000000000000000001411706463700247155ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/init/sysvinit-debian/mirrorbits000066400000000000000000000111661411706463700270410ustar00rootroot00000000000000#! /bin/sh ### BEGIN INIT INFO # Provides: mirrorbits # Required-Start: redis-server $remote_fs $syslog # Required-Stop: redis-server $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Mirrorbits initscript # Description: Simple HTTP mirror redirector written in Go. 
### END INIT INFO # Author: Ludovic Fauvet # Do NOT "set -e" # PATH should only include /usr/* if it runs after the mountnfs.sh script PATH=/sbin:/usr/sbin:/bin:/usr/bin DESC="Mirrorbits is a geographic load-balancer for mirrors" NAME=mirrorbits CONFFILE=/etc/mirrorbits.conf RUNLOG=/var/log/mirrorbits/mirrorbits.log PIDFILE=/var/run/$NAME.pid DAEMON=/usr/bin/$NAME DAEMON_ARGS="daemon -p $PIDFILE -log $RUNLOG" SCRIPTNAME=/etc/init.d/$NAME # Exit if the package is not installed [ -x "$DAEMON" ] || exit 0 # Read configuration variable file if it is present [ -r /etc/default/$NAME ] && . /etc/default/$NAME # Load the VERBOSE setting and other rcS variables . /lib/init/vars.sh # Define LSB log_* functions. # Depend on lsb-base (>= 3.2-14) to ensure that this file is present # and status_of_proc is working. . /lib/lsb/init-functions # # Function that starts the daemon/service # do_start() { # Return # 0 if daemon has been started # 1 if daemon was already running # 2 if daemon could not be started start-stop-daemon --start --quiet --pidfile $PIDFILE -b --exec $DAEMON --test > /dev/null \ || return 1 start-stop-daemon --start --quiet --pidfile $PIDFILE -b --exec $DAEMON -- \ $DAEMON_ARGS \ || return 2 # Add code here, if necessary, that waits for the process to be ready # to handle requests from services started subsequently which depend # on this one. As a last resort, sleep for some time. } # # Function that stops the daemon/service # do_stop() { # Return # 0 if daemon has been stopped # 1 if daemon was already stopped # 2 if daemon could not be stopped # other if a failure occurred start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME RETVAL="$?" [ "$RETVAL" = 2 ] && return 2 # Wait for children to finish too if this is a daemon that forks # and if the daemon is only ever run from this initscript. # If the above conditions are not satisfied then add some other code # that waits for the process to drop all resources that could be # needed by services started subsequently. A last resort is to # sleep for some time. start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON [ "$?" = 2 ] && return 2 # Many daemons don't delete their pidfiles when they exit. rm -f $PIDFILE return "$RETVAL" } # # Function that sends a SIGHUP to the daemon/service # do_reload() { # # If the daemon can reload its configuration without # restarting (for example, when it is sent a SIGHUP), # then implement that here. # start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME return 0 } do_configtest() { return 0 # not supported yet if [ "$#" -ne 0 ]; then case "$1" in -q) FLAG=$1 ;; *) ;; esac shift fi $DAEMON -t $FLAG -c $CONFFILE RETVAL="$?" return $RETVAL } do_upgrade() { do_configtest -q || return 6 PID=$(cat $PIDFILE) if [ ! -x /proc/${PID} ]; then echo "$NAME is not running" exit 0 fi start-stop-daemon --stop --signal USR2 --quiet --pidfile $PIDFILE --name $NAME RETVAL="$?" echo "Upgrading..." return $RETVAL } case "$1" in start) [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME" do_start case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; stop) [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME" do_stop case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; status) status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $? 
;; configtest) do_configtest ;; upgrade) do_upgrade ;; reload|force-reload) log_daemon_msg "Reloading $DESC" "$NAME" do_reload log_end_msg $? ;; restart|force-reload) log_daemon_msg "Restarting $DESC" "$NAME" do_configtest -q || exit $RETVAL do_stop case "$?" in 0|1) do_start case "$?" in 0) log_end_msg 0 ;; 1) log_end_msg 1 ;; # Old process is still running *) log_end_msg 1 ;; # Failed to start esac ;; *) # Failed to stop log_end_msg 1 ;; esac ;; *) echo "Usage: $SCRIPTNAME {start|stop|status|restart|reload|force-reload|upgrade|configtest}" >&2 exit 3 ;; esac : mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/localjs/000077500000000000000000000000001411706463700222715ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/contrib/localjs/fetchfiles.sh000077500000000000000000000107041411706463700247460ustar00rootroot00000000000000#!/bin/bash # List of scripts to fetch and store locally whattofetch=( "https://cdnjs.cloudflare.com/ajax/libs/flot/0.8.3/excanvas.js" "https://cdnjs.cloudflare.com/ajax/libs/flot/0.8.3/excanvas.min.js" "https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js" "https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js" "https://cdnjs.cloudflare.com/ajax/libs/flot/0.8.3/jquery.flot.js" "https://cdnjs.cloudflare.com/ajax/libs/flot/0.8.3/jquery.flot.min.js" "https://cdnjs.cloudflare.com/ajax/libs/flot/0.8.3/jquery.flot.pie.js" "https://cdnjs.cloudflare.com/ajax/libs/flot/0.8.3/jquery.flot.pie.min.js" "https://cdnjs.cloudflare.com/ajax/libs/flot.tooltip/0.9.0/jquery.flot.tooltip.js" "https://cdnjs.cloudflare.com/ajax/libs/flot.tooltip/0.9.0/jquery.flot.tooltip.min.js" "https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.css" "https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.js" "https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/images/marker-icon.png" "https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/images/marker-shadow.png" "https://cdnjs.cloudflare.com/ajax/libs/leaflet.markercluster/1.4.1/MarkerCluster.css" "https://cdnjs.cloudflare.com/ajax/libs/leaflet.markercluster/1.4.1/leaflet.markercluster.js" ) showhelp() { echo "Syntax: $0 directory" echo "where directory is the directory in which you want to store the downloaded files." echo "" echo "This will download the Javascript- and Font-files used by the default" echo "templates in mirrorbits. You can then self-host that directory tree on" echo "your webserver instead of using external CDNs." echo "You then need to set the LocalJSPath option in your mirrorbits config to" echo "point at the web-accessible path to that directory." } getlocalfilename () { local sfn="$1" lfn=${sfn#https://cdnjs.cloudflare.com/ajax/libs} } downloadfile () { curl=`which curl` if [ ${#curl} -ge 4 ] ; then $curl --output "$2" "$1" return fi wget=`which wget` if [ ${#wget} -ge 4 ] ; then $wget --output-document="$2" "$1" return fi echo "ERROR: Neither curl nor wget were found in path. Please install either curl or wget." exit 1 } if [ "$#" -ne 1 ] ; then showhelp exit 1 fi if [ "$1" == "--help" -o "$1" == "-h" ] ; then showhelp exit 0 fi localdir="$1" if [ ! -d "$localdir" ] ; then echo "Target directory ${localdir} does not exist or is not a directory." showhelp exit 1 fi unzip=`which unzip` if [ ${#unzip} -lt 5 ] ; then echo "ERROR: unzip was not found in path. Please install unzip." exit 1 fi for sf in ${whattofetch[@]}; do lfn="/void/void/void/void/" getlocalfilename "$sf" # lfn is now filled. 
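# Build the local target path below the chosen directory; files that already exist are left untouched, everything else is fetched with curl or wget.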
tf="${localdir}${lfn}" if [ -e "$tf" ] ; then echo "No need to fetch $sf to $tf, it already exists." else tdn=`dirname "${tf}"` if [ ! -e "$tdn" ] ; then mkdir -p "$tdn" fi downloadfile "$sf" "$tf" fi done # The font stuff is a bit more messy. # For the Lato font, we rely on a service that's packing the mess into # a ZIP file for us. if [ ! -e "${localdir}/fonts" ] ; then mkdir -p "${localdir}/fonts" fi downloadfile "https://google-webfonts-helper.herokuapp.com/api/fonts/lato?download=zip&subsets=latin&variants=900,regular" "${localdir}/fonts/lato-font-900and400.zip" # Make sure the webserver cannot access the file for licensing reasons, as # that would count as distribution and require proper attribution which # we cannot guarantee. chmod go-rwx "${localdir}/fonts/lato-font-900and400.zip" unzip "${localdir}/fonts/lato-font-900and400.zip" -d "${localdir}/fonts/" # For fontawesome, we download and unpack the whole ZIP as well. # Downloading individual files will not work here, as depending on the # browser random other files will be included. if [ ! -e "${localdir}/font-awesome" ] ; then mkdir -p "${localdir}/font-awesome" fi if [ -e "${localdir}/font-awesome/4.7.0" ] ; then # Only download and extract if that directory isn't there yet, as we # have to use a 'mv' command here that would fail if the target already # existed. echo "Note: Skipping font-awesome download and extraction, it seems to be in place already." echo "To force redownload and extraction, remove '${localdir}/font-awesome/4.7.0'" else downloadfile "https://fontawesome.com/v4.7.0/assets/font-awesome-4.7.0.zip" "${localdir}/font-awesome-4.7.0.zip" chmod go-rwx "${localdir}/font-awesome-4.7.0.zip" unzip "${localdir}/font-awesome-4.7.0.zip" -d "${localdir}/font-awesome" mv "${localdir}/font-awesome/font-awesome-4.7.0" "${localdir}/font-awesome/4.7.0" fi mirrorbits-0.5.1+git20210123.eeea0e0+ds1/core/000077500000000000000000000000001411706463700201325ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/core/banner.go000066400000000000000000000006021411706463700217240ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package core // Banner is a piece of (ascii) art shown during startup var Banner = ` _______ __ __ __ __ | | |__|.----.----.-----.----.| |--.|__| |_.-----. 
| | || _| _| _ | _|| _ || | _|__ --| |__|_|__|__||__| |__| |_____|__| |_____||__|____|_____| %s` mirrorbits-0.5.1+git20210123.eeea0e0+ds1/core/context.go000066400000000000000000000006771411706463700221570ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package core // ContextKey reprensents a context key associated with a value type ContextKey int const ( // ContextAllowRedirects is the key for option: AllowRedirects ContextAllowRedirects ContextKey = iota // ContextMirrorID is the key for the variable: MirrorID ContextMirrorID // ContextMirrorName is the key for the variable: MirrorName ContextMirrorName ) mirrorbits-0.5.1+git20210123.eeea0e0+ds1/core/database.go000066400000000000000000000006351411706463700222310ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package core const ( // RedisMinimumVersion contains the minimum redis version required to run the application RedisMinimumVersion = "3.2.0" // DBVersion represents the current DB format version DBVersion = 1 // DBVersionKey contains the global redis key containing the DB version format DBVersionKey = "MIRRORBITS_DB_VERSION" ) mirrorbits-0.5.1+git20210123.eeea0e0+ds1/core/flags.go000066400000000000000000000025031411706463700215550ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package core import ( "flag" "os" ) var ( Daemon bool Debug bool Monitor bool ConfigFile string CpuProfile string PidFile string RunLog string RPCPort uint RPCHost string RPCPassword string RPCAskPass bool NArg int ) func Parseflags() { flag.BoolVar(&Debug, "debug", false, "Debug mode") flag.StringVar(&CpuProfile, "cpuprofile", "", "write cpu profile to file") flag.UintVar(&RPCPort, "p", 3390, "Server port") flag.StringVar(&RPCHost, "h", "localhost", "Server host") flag.StringVar(&RPCPassword, "P", "", "Server password") flag.BoolVar(&RPCAskPass, "a", false, "Ask for server password") flag.Parse() NArg = flag.NArg() daemon := flag.NewFlagSet("daemon", flag.ExitOnError) daemon.BoolVar(&Debug, "debug", false, "Debug mode") daemon.StringVar(&CpuProfile, "cpuprofile", "", "write cpu profile to file") daemon.StringVar(&ConfigFile, "config", "", "Path to the config file") daemon.BoolVar(&Monitor, "monitor", true, "Enable the background mirrors monitor") daemon.StringVar(&PidFile, "p", "", "Path to pid file") daemon.StringVar(&RunLog, "log", "", "File to output logs (default: stderr)") if len(os.Args) > 1 && os.Args[1] == "daemon" { Daemon = true daemon.Parse(os.Args[2:]) } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/core/scan.go000066400000000000000000000006211411706463700214040ustar00rootroot00000000000000package core import "time" // ScannerType holds the type of scanner in use type ScannerType int8 const ( // RSYNC represents an rsync scanner RSYNC ScannerType = iota // FTP represents an ftp scanner FTP ) // Precision is used to compute the precision of the mod time (millisecond, second) type Precision time.Duration func (p Precision) Duration() time.Duration { return time.Duration(p) } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/core/version.go000066400000000000000000000021311411706463700221430ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package core import ( "fmt" "runtime" ) var ( VERSION = "" BUILD = "" DEV = "" ) // VersionInfo is a struct containing version related informations type VersionInfo struct { Version string Build string 
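// The remaining fields describe the build environment: Go toolchain version, target OS and architecture, and the current GOMAXPROCS value.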
GoVersion string OS string Arch string GoMaxProcs int } // GetVersionInfo returns the details of the current build func GetVersionInfo() VersionInfo { return VersionInfo{ Version: VERSION, Build: BUILD + DEV, GoVersion: runtime.Version(), OS: runtime.GOOS, Arch: runtime.GOARCH, GoMaxProcs: runtime.GOMAXPROCS(0), } } // PrintVersion prints the versions contained in a VersionReply func PrintVersion(info VersionInfo) { fmt.Printf(" %-17s %s\n", "Version:", info.Version) fmt.Printf(" %-17s %s\n", "Build:", info.Build) fmt.Printf(" %-17s %s\n", "GoVersion:", info.GoVersion) fmt.Printf(" %-17s %s\n", "Operating System:", info.OS) fmt.Printf(" %-17s %s\n", "Architecture:", info.Arch) fmt.Printf(" %-17s %d\n", "Gomaxprocs:", info.GoMaxProcs) } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/daemon/000077500000000000000000000000001411706463700204455ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/daemon/cluster.go000066400000000000000000000120351411706463700224560ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package daemon import ( "fmt" "math/rand" "sort" "strconv" "strings" "sync" "time" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/utils" ) const ( clusterAnnouncePrefix = "HELLO" ) type cluster struct { redis *database.Redis nodeID string nodes []node nodeIndex int nodeTotal int nodesLock sync.RWMutex mirrorsIndex []int stop chan bool wg sync.WaitGroup running bool StartStopLock sync.Mutex announceText string } type node struct { ID string LastAnnounce int64 } type byNodeID []node func (n byNodeID) Len() int { return len(n) } func (n byNodeID) Swap(i, j int) { n[i], n[j] = n[j], n[i] } func (n byNodeID) Less(i, j int) bool { return n[i].ID < n[j].ID } // NewCluster creates a new instance of the cluster agent func NewCluster(r *database.Redis) *cluster { c := &cluster{ redis: r, nodes: make([]node, 0), stop: make(chan bool), } hostname := utils.Hostname() if len(hostname) == 0 { hostname = "unknown" } c.nodeID = fmt.Sprintf("%s-%05d", hostname, rand.Intn(32000)) c.announceText = clusterAnnouncePrefix + strconv.Itoa(GetConfig().RedisDB) return c } func (c *cluster) Start() { c.StartStopLock.Lock() defer c.StartStopLock.Unlock() if c.running == true { return } log.Debug("Cluster starting...") c.running = true c.wg.Add(1) c.stop = make(chan bool) go c.clusterLoop() } func (c *cluster) Stop() { c.StartStopLock.Lock() defer c.StartStopLock.Unlock() select { case _, _ = <-c.stop: return default: close(c.stop) c.wg.Wait() c.running = false log.Debug("Cluster stopped") } } func (c *cluster) clusterLoop() { clusterChan := make(chan string, 10) announceTicker := time.NewTicker(1 * time.Second) c.refreshNodeList(c.nodeID, c.nodeID) c.redis.Pubsub.SubscribeEvent(database.CLUSTER, clusterChan) for { select { case <-c.stop: c.wg.Done() return case <-announceTicker.C: c.announce() case data := <-clusterChan: if !strings.HasPrefix(data, c.announceText+" ") { // Garbage continue } c.refreshNodeList(data[len(c.announceText)+1:], c.nodeID) } } } func (c *cluster) announce() { r := c.redis.Get() database.Publish(r, database.CLUSTER, fmt.Sprintf("%s %s", c.announceText, c.nodeID)) r.Close() } func (c *cluster) refreshNodeList(nodeID, self string) { found := false c.nodesLock.Lock() // Expire unreachable nodes for i := 0; i < len(c.nodes); i++ { if utils.ElapsedSec(c.nodes[i].LastAnnounce, 5) && c.nodes[i].ID != nodeID && c.nodes[i].ID != self { 
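// No announce received for more than 5 seconds: consider the node gone and drop it from the list.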
log.Noticef("<- Node %s left the cluster", c.nodes[i].ID) c.nodes = append(c.nodes[:i], c.nodes[i+1:]...) i-- } else if c.nodes[i].ID == nodeID { found = true c.nodes[i].LastAnnounce = time.Now().UTC().Unix() } } // Join new node if !found { if nodeID != self { log.Noticef("-> Node %s joined the cluster", nodeID) } n := node{ ID: nodeID, LastAnnounce: time.Now().UTC().Unix(), } // TODO use binary search here // See https://golang.org/pkg/sort/#Search c.nodes = append(c.nodes, n) sort.Sort(byNodeID(c.nodes)) } c.nodeTotal = len(c.nodes) // TODO use binary search here // See https://golang.org/pkg/sort/#Search for i, n := range c.nodes { if n.ID == self { c.nodeIndex = i break } } c.nodesLock.Unlock() } func (c *cluster) AddMirror(mirror *mirrors.Mirror) { c.nodesLock.Lock() c.mirrorsIndex = addMirrorIDToSlice(c.mirrorsIndex, mirror.ID) c.nodesLock.Unlock() } func (c *cluster) RemoveMirror(mirror *mirrors.Mirror) { c.nodesLock.Lock() c.mirrorsIndex = removeMirrorIDFromSlice(c.mirrorsIndex, mirror.ID) c.nodesLock.Unlock() } func (c *cluster) RemoveMirrorID(id int) { c.nodesLock.Lock() c.mirrorsIndex = removeMirrorIDFromSlice(c.mirrorsIndex, id) c.nodesLock.Unlock() } func (c *cluster) IsHandled(mirrorID int) bool { c.nodesLock.RLock() defer c.nodesLock.RUnlock() index := sort.SearchInts(c.mirrorsIndex, mirrorID) mRange := int(float32(len(c.mirrorsIndex))/float32(c.nodeTotal) + 0.5) start := mRange * c.nodeIndex // Check bounding to see if this mirror must be handled by this node. // The distribution of the nodes should be balanced except for the last node // that could contain one more node. if index >= start && (index < start+mRange || c.nodeIndex == c.nodeTotal-1) { return true } return false } func removeMirrorIDFromSlice(slice []int, mirrorID int) []int { // See https://golang.org/pkg/sort/#SearchInts idx := sort.SearchInts(slice, mirrorID) if idx < len(slice) && slice[idx] == mirrorID { slice = append(slice[:idx], slice[idx+1:]...) } return slice } func addMirrorIDToSlice(slice []int, mirrorID int) []int { // See https://golang.org/pkg/sort/#SearchInts idx := sort.SearchInts(slice, mirrorID) if idx >= len(slice) || slice[idx] != mirrorID { slice = append(slice[:idx], append([]int{mirrorID}, slice[idx:]...)...) } return slice } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/daemon/cluster_test.go000066400000000000000000000124351411706463700235210ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package daemon import ( "fmt" "os" "reflect" "sort" "testing" "time" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/mirrors" . 
"github.com/etix/mirrorbits/testing" ) func TestMain(m *testing.M) { SetConfiguration(&Configuration{ RedisDB: 42, }) os.Exit(m.Run()) } func TestStart(t *testing.T) { _, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCluster(conn) c.Start() defer c.Stop() if c.running != true { t.Fatalf("Expected true, got false") } } func TestStop(t *testing.T) { _, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCluster(conn) c.Start() c.Stop() if c.running != false { t.Fatalf("Expected false, got true") } } func TestClusterLoop(t *testing.T) { mock, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCluster(conn) cmdPublish := mock.Command("PUBLISH", string(database.CLUSTER), fmt.Sprintf("%s %s", c.announceText, c.nodeID)).Expect("1") c.Start() defer c.Stop() n := time.Now() for { if time.Since(n) > 1500*time.Millisecond { t.Fatalf("Announce not made") } if mock.Stats(cmdPublish) > 0 { // Success break } time.Sleep(50 * time.Millisecond) } } func TestRefreshNodeList(t *testing.T) { _, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCluster(conn) n := node{ ID: "test-4242", LastAnnounce: time.Now().UTC().Unix(), } c.nodes = append(c.nodes, n) sort.Sort(byNodeID(c.nodes)) n = node{ ID: "meh-4242", LastAnnounce: time.Now().UTC().Add(time.Second * -6).Unix(), } c.nodes = append(c.nodes, n) sort.Sort(byNodeID(c.nodes)) c.Start() defer c.Stop() c.refreshNodeList("test-4242", "test-4242") if len(c.nodes) != 1 { t.Fatalf("Node meh-4242 should have left") } c.refreshNodeList("meh-4242", "test-4242") if len(c.nodes) != 2 { t.Fatalf("Node meh-4242 should have joined") } } func TestAddMirror(t *testing.T) { _, conn := PrepareRedisTest() c := NewCluster(conn) r := []int{2} c.AddMirror(&mirrors.Mirror{ ID: 2, Name: "bbb", }) if !reflect.DeepEqual(r, c.mirrorsIndex) { t.Fatalf("Expected %+v, got %+v", r, c.mirrorsIndex) } r = []int{1, 2} c.AddMirror(&mirrors.Mirror{ ID: 1, Name: "aaa", }) if !reflect.DeepEqual(r, c.mirrorsIndex) { t.Fatalf("Expected %+v, got %+v", r, c.mirrorsIndex) } r = []int{1, 2, 3} c.AddMirror(&mirrors.Mirror{ ID: 3, Name: "ccc", }) if !reflect.DeepEqual(r, c.mirrorsIndex) { t.Fatalf("Expected %+v, got %+v", r, c.mirrorsIndex) } } func TestRemoveMirror(t *testing.T) { _, conn := PrepareRedisTest() c := NewCluster(conn) c.AddMirror(&mirrors.Mirror{ ID: 1, Name: "aaa", }) c.AddMirror(&mirrors.Mirror{ ID: 2, Name: "bbb", }) c.AddMirror(&mirrors.Mirror{ ID: 3, Name: "ccc", }) c.RemoveMirror(&mirrors.Mirror{ID: 4}) r := []int{1, 2, 3} if !reflect.DeepEqual(r, c.mirrorsIndex) { t.Fatalf("Expected %+v, got %+v", r, c.mirrorsIndex) } c.RemoveMirror(&mirrors.Mirror{ID: 1}) r = []int{2, 3} if !reflect.DeepEqual(r, c.mirrorsIndex) { t.Fatalf("Expected %+v, got %+v", r, c.mirrorsIndex) } c.RemoveMirror(&mirrors.Mirror{ID: 3}) r = []int{2} if !reflect.DeepEqual(r, c.mirrorsIndex) { t.Fatalf("Expected %+v, got %+v", r, c.mirrorsIndex) } } func TestIsHandled(t *testing.T) { _, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCluster(conn) c.Start() defer c.Stop() c.AddMirror(&mirrors.Mirror{ ID: 1, Name: "aaa", }) c.AddMirror(&mirrors.Mirror{ ID: 2, Name: "bbb", }) c.AddMirror(&mirrors.Mirror{ ID: 3, Name: "ccc", }) c.AddMirror(&mirrors.Mirror{ ID: 4, Name: "ddd", }) c.nodeTotal = 1 if !c.IsHandled(1) || !c.IsHandled(2) || !c.IsHandled(3) || !c.IsHandled(4) { t.Fatalf("All mirrors should be handled") } c.nodeTotal = 2 handled := 0 if c.IsHandled(1) { handled++ } if c.IsHandled(2) { handled++ } if c.IsHandled(3) { handled++ } if c.IsHandled(4) { handled++ } if handled != 2 { 
t.Fatalf("Expected 2, got %d", handled) } } func TestRemoveMirrorIDFromSlice(t *testing.T) { s1 := []int{1, 2, 3, 4, 5} r1 := []int{1, 2, 4, 5} r := removeMirrorIDFromSlice(s1, 3) if !reflect.DeepEqual(r1, r) { t.Fatalf("Expected %+v, got %+v", r1, r) } s2 := []int{1, 2, 3, 4, 5} r2 := []int{2, 3, 4, 5} r = removeMirrorIDFromSlice(s2, 1) if !reflect.DeepEqual(r2, r) { t.Fatalf("Expected %+v, got %+v", r2, r) } s3 := []int{1, 2, 3, 4, 5} r3 := []int{1, 2, 3, 4} r = removeMirrorIDFromSlice(s3, 5) if !reflect.DeepEqual(r3, r) { t.Fatalf("Expected %+v, got %+v", r3, r) } s4 := []int{1, 2, 3, 4, 5} r4 := []int{1, 2, 3, 4, 5} r = removeMirrorIDFromSlice(s4, 6) if !reflect.DeepEqual(r4, r) { t.Fatalf("Expected %+v, got %+v", r4, r) } } func TestAddMirrorIDToSlice(t *testing.T) { s1 := []int{1, 3} r1 := []int{1, 2, 3} r := addMirrorIDToSlice(s1, 2) if !reflect.DeepEqual(r1, r) { t.Fatalf("Expected %+v, got %+v", r1, r) } s2 := []int{2, 3, 4} r2 := []int{1, 2, 3, 4} r = addMirrorIDToSlice(s2, 1) if !reflect.DeepEqual(r2, r) { t.Fatalf("Expected %+v, got %+v", r2, r) } s3 := []int{1, 2, 3} r3 := []int{1, 2, 3, 4} r = addMirrorIDToSlice(s3, 4) if !reflect.DeepEqual(r3, r) { t.Fatalf("Expected %+v, got %+v", r3, r) } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/daemon/monitor.go000066400000000000000000000357671411706463700225050ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package daemon import ( "context" "fmt" "math/rand" "net" "net/http" "strconv" "strings" "sync" "time" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/scan" "github.com/etix/mirrorbits/utils" "github.com/gomodule/redigo/redis" "github.com/op/go-logging" "github.com/pkg/errors" ) var ( healthCheckThreads = 10 userAgent = "Mirrorbits/" + core.VERSION + " PING CHECK" clientTimeout = time.Duration(20 * time.Second) clientDeadline = time.Duration(40 * time.Second) errRedirect = errors.New("Redirect not allowed") errMirrorNotScanned = errors.New("Mirror has not yet been scanned") log = logging.MustGetLogger("main") ) type monitor struct { redis *database.Redis cache *mirrors.Cache mirrors map[int]*mirror mapLock sync.Mutex httpClient http.Client httpTransport http.Transport healthCheckChan chan int syncChan chan int stop chan struct{} configNotifier chan bool wg sync.WaitGroup formatLongestID int cluster *cluster trace *scan.Trace } type mirror struct { mirrors.Mirror checking bool scanning bool lastCheck time.Time } func (m *mirror) NeedHealthCheck() bool { return time.Since(m.lastCheck) > time.Duration(GetConfig().CheckInterval)*time.Minute } func (m *mirror) NeedSync() bool { return time.Since(m.LastSync.Time) > time.Duration(GetConfig().ScanInterval)*time.Minute } func (m *mirror) IsScanning() bool { return m.scanning } func (m *mirror) IsChecking() bool { return m.checking } // NewMonitor returns a new instance of monitor func NewMonitor(r *database.Redis, c *mirrors.Cache) *monitor { m := new(monitor) m.redis = r m.cache = c m.cluster = NewCluster(r) m.mirrors = make(map[int]*mirror) m.healthCheckChan = make(chan int, healthCheckThreads*5) m.syncChan = make(chan int) m.stop = make(chan struct{}) m.configNotifier = make(chan bool, 1) m.trace = scan.NewTraceHandler(m.redis, m.stop) SubscribeConfig(m.configNotifier) rand.Seed(time.Now().UnixNano()) m.httpTransport = http.Transport{ DisableKeepAlives: true, MaxIdleConnsPerHost: 0, Dial: func(network, addr 
string) (net.Conn, error) { deadline := time.Now().Add(clientDeadline) c, err := net.DialTimeout(network, addr, clientTimeout) if err != nil { return nil, err } c.SetDeadline(deadline) return c, nil }, } m.httpClient = http.Client{ CheckRedirect: checkRedirect, Transport: &m.httpTransport, } return m } func (m *monitor) Stop() { select { case _, _ = <-m.stop: return default: m.cluster.Stop() close(m.stop) } } func (m *monitor) Wait() { m.wg.Wait() } // Return an error if the endpoint is an unauthorized redirect func checkRedirect(req *http.Request, via []*http.Request) error { redirects := req.Context().Value(core.ContextAllowRedirects).(mirrors.Redirects) if redirects.Allowed() { return nil } name := req.Context().Value(core.ContextMirrorName) for _, r := range via { if r.URL != nil { log.Warningf("Unauthorized redirection for %s: %s => %s", name, r.URL.String(), req.URL.String()) } } return errRedirect } // Main monitor loop func (m *monitor) MonitorLoop() { m.wg.Add(1) defer m.wg.Done() mirrorUpdateEvent := m.cache.GetMirrorInvalidationEvent() // Wait until the database is ready to be used for { r := m.redis.Get() if r.Err() != nil { if _, ok := r.Err().(database.NetReadyError); ok { time.Sleep(100 * time.Millisecond) continue } } break } // Scan the local repository m.retry(func(i uint) error { err := m.scanRepository() if err != nil { if i == 0 { log.Errorf("%+v", errors.Wrap(err, "unable to scan the local repository")) } return err } return nil }, 1*time.Second) // Synchronize the list of all known mirrors m.retry(func(i uint) error { ids, err := m.mirrorsID() if err != nil { if i == 0 { log.Errorf("%+v", errors.Wrap(err, "unable to retrieve the mirror list")) } return err } err = m.syncMirrorList(ids...) if err != nil { if i == 0 { log.Errorf("%+v", errors.Wrap(err, "unable to sync the list of mirrors")) } return err } return nil }, 500*time.Millisecond) if utils.IsStopped(m.stop) { return } // Start the cluster manager m.cluster.Start() // Start the health check routines for i := 0; i < healthCheckThreads; i++ { m.wg.Add(1) go m.healthCheckLoop() } // Start the mirror sync routines for i := 0; i < GetConfig().ConcurrentSync; i++ { m.wg.Add(1) go m.syncLoop() } // Setup recurrent tasks var repositoryScanTicker <-chan time.Time repositoryScanInterval := -1 mirrorCheckTicker := time.NewTicker(1 * time.Second) // Disable the mirror check while stopping to avoid spurious events go func() { select { case <-m.stop: mirrorCheckTicker.Stop() } }() // Force a first configuration reload to setup the timers select { case m.configNotifier <- true: default: } for { select { case <-m.stop: return case v := <-mirrorUpdateEvent: id, err := strconv.Atoi(v) if err == nil { m.syncMirrorList(id) } case <-m.configNotifier: if repositoryScanInterval != GetConfig().RepositoryScanInterval { repositoryScanInterval = GetConfig().RepositoryScanInterval if repositoryScanInterval == 0 { repositoryScanTicker = nil } else { repositoryScanTicker = time.Tick(time.Duration(repositoryScanInterval) * time.Minute) } } case <-repositoryScanTicker: m.scanRepository() case <-mirrorCheckTicker.C: if m.redis.Failure() { continue } m.mapLock.Lock() for id, v := range m.mirrors { if !v.Enabled { // Ignore disabled mirrors continue } if v.NeedHealthCheck() && !v.IsChecking() && m.cluster.IsHandled(id) { select { case m.healthCheckChan <- id: m.mirrors[id].checking = true default: } } if v.NeedSync() && !v.IsScanning() && m.cluster.IsHandled(id) { select { case m.syncChan <- id: m.mirrors[id].scanning = true default: } } } 
m.mapLock.Unlock() } } } // Returns a list of all mirrors ID func (m *monitor) mirrorsID() ([]int, error) { var ids []int list, err := m.redis.GetListOfMirrors() if err != nil { return nil, err } for id := range list { ids = append(ids, id) } return ids, nil } // Sync the remote mirror struct with the local dataset func (m *monitor) syncMirrorList(mirrorsIDs ...int) error { for _, id := range mirrorsIDs { mir, err := m.cache.GetMirror(id) if err != nil && err != redis.ErrNil { log.Errorf("Fetching mirror %s failed: %s", id, err.Error()) continue } else if err == redis.ErrNil { // Mirror has been deleted m.mapLock.Lock() delete(m.mirrors, id) m.mapLock.Unlock() m.cluster.RemoveMirrorID(id) continue } // Compute the space required to display the mirror names in the logs if len(mir.Name) > m.formatLongestID { m.formatLongestID = len(mir.Name) } m.cluster.AddMirror(&mir) m.mapLock.Lock() if _, ok := m.mirrors[mir.ID]; ok { // Update existing mirror tmp := m.mirrors[mir.ID] tmp.Mirror = mir m.mirrors[mir.ID] = tmp } else { // Add new mirror m.mirrors[mir.ID] = &mirror{ Mirror: mir, } } m.mapLock.Unlock() } log.Debugf("%d mirror%s updated", len(mirrorsIDs), utils.Plural(len(mirrorsIDs))) return nil } // Main health check loop // TODO merge with the monitorLoop? func (m *monitor) healthCheckLoop() { defer m.wg.Done() for { select { case <-m.stop: return case id := <-m.healthCheckChan: if utils.IsStopped(m.stop) { return } var mptr *mirror var mirror mirror var ok bool m.mapLock.Lock() if mptr, ok = m.mirrors[id]; !ok { m.mapLock.Unlock() continue } // Copy the mirror struct for read-only access mirror = *mptr m.mapLock.Unlock() err := m.healthCheck(mirror.Mirror) if err == errMirrorNotScanned { // Not removing the 'checking' lock is intended here so the mirror won't // be checked again until the rsync/ftp scan is finished. continue } m.mapLock.Lock() if mirror, ok := m.mirrors[id]; ok { if !database.RedisIsLoading(err) { mirror.lastCheck = time.Now().UTC() } mirror.checking = false } m.mapLock.Unlock() } } } // Main sync loop // TODO merge with the monitorLoop? 
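// syncLoop receives mirror IDs on syncChan, fetches the mirror trace file in the background, then scans the mirror over rsync first and falls back to FTP when rsync is unavailable or fails. A successful scan of an enabled mirror currently marked down triggers an immediate health check.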
func (m *monitor) syncLoop() { defer m.wg.Done() for { select { case <-m.stop: return case id := <-m.syncChan: var mir mirror var mirrorPtr *mirror var ok bool m.mapLock.Lock() if mirrorPtr, ok = m.mirrors[id]; !ok { m.mapLock.Unlock() continue } mir = *mirrorPtr m.mapLock.Unlock() conn := m.redis.Get() scanning, err := scan.IsScanning(conn, id) if err != nil { conn.Close() if !database.RedisIsLoading(err) { log.Warningf("syncloop: %s", err.Error()) } goto end } else if scanning { log.Debugf("[%s] scan already in progress on another node", mir.Name) conn.Close() goto end } conn.Close() log.Debugf("Scanning %s", mir.Name) // Start fetching the latest trace go func() { err := m.trace.GetLastUpdate(mir.Mirror) if err != nil && err != scan.ErrNoTrace { if numError, ok := err.(*strconv.NumError); ok { if numError.Err == strconv.ErrSyntax { log.Warningf("[%s] parsing trace file failed: %s is not a valid timestamp", mir.Name, strconv.Quote(numError.Num)) return } } else { log.Warningf("[%s] fetching trace file failed: %s", mir.Name, err) } } }() err = scan.ErrNoSyncMethod // First try to scan with rsync if mir.RsyncURL != "" { _, err = scan.Scan(core.RSYNC, m.redis, m.cache, mir.RsyncURL, id, m.stop) } // If it failed or rsync wasn't supported // fallback to FTP if err != nil && err != scan.ErrScanAborted && mir.FtpURL != "" { _, err = scan.Scan(core.FTP, m.redis, m.cache, mir.FtpURL, id, m.stop) } if err == scan.ErrScanInProgress { log.Warningf("%-30.30s Scan already in progress", mir.Name) goto end } if err == nil && mir.Enabled == true && mir.Up == false { m.healthCheckChan <- id } end: m.mapLock.Lock() if mirrorPtr, ok = m.mirrors[id]; ok { mirrorPtr.scanning = false } m.mapLock.Unlock() } } } // Do an actual health check against a given mirror func (m *monitor) healthCheck(mirror mirrors.Mirror) error { // Format log output format := "%-" + fmt.Sprintf("%d.%ds", m.formatLongestID+4, m.formatLongestID+4) // Get the URL to a random file available on this mirror file, size, err := m.getRandomFile(mirror.ID) if err != nil { if err == redis.ErrNil { return errMirrorNotScanned } else if !database.RedisIsLoading(err) { log.Warningf(format+"Error: Cannot obtain a random file: %s", mirror.Name, err) } return err } // Prepare the HTTP request req, err := http.NewRequest("HEAD", strings.TrimRight(mirror.HttpURL, "/")+file, nil) req.Header.Set("User-Agent", userAgent) req.Close = true ctx, cancel := context.WithTimeout(req.Context(), clientDeadline) ctx = context.WithValue(ctx, core.ContextMirrorID, mirror.ID) ctx = context.WithValue(ctx, core.ContextMirrorName, mirror.Name) ctx = context.WithValue(ctx, core.ContextAllowRedirects, mirror.AllowRedirects) req = req.WithContext(ctx) defer cancel() go func() { select { case <-m.stop: log.Debugf("Aborting health-check for %s", mirror.HttpURL) cancel() case <-ctx.Done(): } }() var contentLength string var statusCode int elapsed, err := m.httpDo(ctx, req, func(resp *http.Response, err error) error { if err != nil { return err } defer resp.Body.Close() statusCode = resp.StatusCode contentLength = resp.Header.Get("Content-Length") return nil }) if utils.IsStopped(m.stop) { return nil } if err != nil { if opErr, ok := err.(*net.OpError); ok { log.Debugf("Op: %s | Net: %s | Addr: %s | Err: %s | Temporary: %t", opErr.Op, opErr.Net, opErr.Addr, opErr.Error(), opErr.Temporary()) } if strings.Contains(err.Error(), errRedirect.Error()) { mirrors.MarkMirrorDown(m.redis, mirror.ID, "Unauthorized redirect") } else { mirrors.MarkMirrorDown(m.redis, mirror.ID, 
"Unreachable") } log.Errorf(format+"Error: %s (%dms)", mirror.Name, err.Error(), elapsed/time.Millisecond) return err } switch statusCode { case 200: err = mirrors.MarkMirrorUp(m.redis, mirror.ID) if err != nil { log.Errorf(format+"Unable to mark mirror as up: %s", mirror.Name, err) } rsize, err := strconv.ParseInt(contentLength, 10, 64) if err == nil && rsize != size { log.Warningf(format+"File size mismatch! [%s] (%dms)", mirror.Name, file, elapsed/time.Millisecond) } else { log.Noticef(format+"Up! (%dms)", mirror.Name, elapsed/time.Millisecond) } case 404: err = mirrors.MarkMirrorDown(m.redis, mirror.ID, fmt.Sprintf("File not found %s (error 404)", file)) if err != nil { log.Errorf(format+"Unable to mark mirror as down: %s", mirror.Name, err) } if GetConfig().DisableOnMissingFile { err = mirrors.DisableMirror(m.redis, mirror.ID) if err != nil { log.Errorf(format+"Unable to disable mirror: %s", mirror.Name, err) } } log.Errorf(format+"Error: File %s not found (error 404)", mirror.Name, file) default: err = mirrors.MarkMirrorDown(m.redis, mirror.ID, fmt.Sprintf("Got status code %d", statusCode)) if err != nil { log.Errorf(format+"Unable to mark mirror as down: %s", mirror.Name, err) } log.Warningf(format+"Down! Status: %d", mirror.Name, statusCode) } return nil } func (m *monitor) httpDo(ctx context.Context, req *http.Request, f func(*http.Response, error) error) (time.Duration, error) { var elapsed time.Duration c := make(chan error, 1) go func() { start := time.Now() err := f(m.httpClient.Do(req)) elapsed = time.Since(start) c <- err }() select { case <-ctx.Done(): m.httpTransport.CancelRequest(req) <-c // Wait for f to return. return elapsed, ctx.Err() case err := <-c: return elapsed, err } } // Get a random filename known to be served by the given mirror func (m *monitor) getRandomFile(id int) (file string, size int64, err error) { sinterKey := fmt.Sprintf("HANDLEDFILES_%d", id) rconn := m.redis.Get() defer rconn.Close() file, err = redis.String(rconn.Do("SRANDMEMBER", sinterKey)) if err != nil { return } size, err = redis.Int64(rconn.Do("HGET", fmt.Sprintf("FILE_%s", file), "size")) if err != nil { return } return } // Trigger a sync of the local repository func (m *monitor) scanRepository() error { err := scan.ScanSource(m.redis, false, m.stop) if err != nil { log.Errorf("Scanning source failed: %s", err.Error()) } return err } // Retry a function until no errors is returned while still allowing // the process to be stopped. 
func (m *monitor) retry(fn func(iteration uint) error, delay time.Duration) { var i uint for { err := fn(i) i++ if err == nil { break } select { case <-m.stop: return case <-time.After(delay): } } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/000077500000000000000000000000001411706463700207465ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/interfaces/000077500000000000000000000000001411706463700230715ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/interfaces/redis.go000066400000000000000000000003221411706463700245230ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package interfaces import "github.com/gomodule/redigo/redis" type Redis interface { Get() redis.Conn UnblockedGet() redis.Conn } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/lock.go000066400000000000000000000040601411706463700222250ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package database import ( "errors" "math/rand" "strconv" "sync" "time" "github.com/gomodule/redigo/redis" ) var ( ErrInvalidLockName = errors.New("invalid lock name") ErrAlreadyLocked = errors.New("lock already acquired") ) type Lock struct { sync.RWMutex redis *Redis name string value string held bool } func init() { rand.Seed(time.Now().UnixNano()) } func (r *Redis) AcquireLock(name string) (*Lock, error) { if len(name) == 0 { return nil, ErrInvalidLockName } l := &Lock{ redis: r, name: "LOCK_" + name, value: strconv.Itoa(rand.Int()), held: true, } conn := r.UnblockedGet() defer conn.Close() _, err := redis.String(conn.Do("SET", l.name, l.value, "NX", "PX", "5000")) if err == redis.ErrNil { return nil, ErrAlreadyLocked } else if err != nil { return nil, err } // Start the lock keepalive go l.keepalive() return l, nil } func (l *Lock) keepalive() { for { l.Lock() if l.held == false { l.Unlock() return } l.Unlock() valid, err := l.isValid() if err != nil { continue } if !valid { l.Lock() l.held = false l.Unlock() return } conn := l.redis.UnblockedGet() ok, err := redis.Bool(conn.Do("PEXPIRE", l.name, "5000")) conn.Close() if err != nil { continue } if !ok { l.held = false return } time.Sleep(1 * time.Second) } } func (l *Lock) isValid() (bool, error) { conn := l.redis.UnblockedGet() defer conn.Close() value, err := redis.String(conn.Do("GET", l.name)) if err != nil && err != redis.ErrNil { return false, err } if value != l.value { return false, nil } return true, nil } func (l *Lock) Release() { l.Lock() if l.held == false { l.Unlock() return } l.held = false l.Unlock() conn := l.redis.UnblockedGet() defer conn.Close() v, _ := redis.String(conn.Do("GET", l.name)) if v == l.value { // Delete the key only if we are still the owner conn.Do("DEL", l.name) } } func (l *Lock) Held() bool { l.RLock() defer l.RUnlock() return l.held } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/pubsub.go000066400000000000000000000103701411706463700225760ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package database import ( "sync" "time" "github.com/gomodule/redigo/redis" "github.com/op/go-logging" ) var ( log = logging.MustGetLogger("main") ) type pubsubEvent string const ( CLUSTER pubsubEvent = "_mirrorbits_cluster" FILE_UPDATE pubsubEvent = "_mirrorbits_file_update" MIRROR_UPDATE pubsubEvent = "_mirrorbits_mirror_update" MIRROR_FILE_UPDATE pubsubEvent = "_mirrorbits_mirror_file_update" PUBSUB_RECONNECTED pubsubEvent = 
"_mirrorbits_pubsub_reconnected" ) // Pubsub is the internal structure of the publish/subscribe handler type Pubsub struct { r *Redis rconn redis.Conn connlock sync.Mutex extSubscribers map[string][]chan string extSubscribersLock sync.RWMutex stop chan bool wg sync.WaitGroup } // NewPubsub returns a new instance of the publish/subscribe handler func NewPubsub(r *Redis) *Pubsub { pubsub := new(Pubsub) pubsub.r = r pubsub.stop = make(chan bool) pubsub.extSubscribers = make(map[string][]chan string) go pubsub.updateEvents() return pubsub } // Close all the connections to the pubsub server func (p *Pubsub) Close() { close(p.stop) p.connlock.Lock() if p.rconn != nil { // FIXME Calling p.rconn.Close() here will block indefinitely in redigo p.rconn.Send("UNSUBSCRIBE") p.rconn.Send("QUIT") p.rconn.Flush() } p.connlock.Unlock() p.wg.Wait() } // SubscribeEvent allows subscription to a particular kind of events and receive a // notification when an event is dispatched on the given channel. func (p *Pubsub) SubscribeEvent(event pubsubEvent, channel chan string) { p.extSubscribersLock.Lock() defer p.extSubscribersLock.Unlock() listeners := p.extSubscribers[string(event)] listeners = append(listeners, channel) p.extSubscribers[string(event)] = listeners } func (p *Pubsub) updateEvents() { p.wg.Add(1) defer p.wg.Done() disconnected := false connect: for { select { case <-p.stop: return default: } p.connlock.Lock() p.rconn = p.r.Get() if _, err := p.rconn.Do("PING"); err != nil { disconnected = true p.rconn.Close() p.rconn = nil p.connlock.Unlock() if RedisIsLoading(err) { // Doing a PING after (re-connection) prevents cases where redis // is currently loading the dataset and is still not ready. log.Warning("Redis is still loading the dataset in memory") } time.Sleep(500 * time.Millisecond) continue } p.connlock.Unlock() log.Debug("Subscribing pubsub") psc := redis.PubSubConn{Conn: p.rconn} psc.Subscribe(CLUSTER) psc.Subscribe(FILE_UPDATE) psc.Subscribe(MIRROR_UPDATE) psc.Subscribe(MIRROR_FILE_UPDATE) if disconnected == true { // This is a way to keep the cache active while disconnected // from redis but still clear the cache (possibly outdated) // after a successful reconnection. disconnected = false p.handleMessage(string(PUBSUB_RECONNECTED), nil) } for { switch v := psc.Receive().(type) { case redis.Message: //log.Debugf("Redis message on channel %s: message: %s", v.Channel, v.Data) p.handleMessage(v.Channel, v.Data) case redis.Subscription: log.Debugf("Redis subscription on channel %s: %s (%d)", v.Channel, v.Kind, v.Count) case error: select { case <-p.stop: return default: } log.Errorf("Pubsub disconnected: %s", v) psc.Close() p.rconn.Close() time.Sleep(50 * time.Millisecond) disconnected = true goto connect } } } } // Notify subscribers of the new message func (p *Pubsub) handleMessage(channel string, data []byte) { p.extSubscribersLock.RLock() defer p.extSubscribersLock.RUnlock() listeners := p.extSubscribers[channel] for _, listener := range listeners { select { case listener <- string(data): default: // Don't block if the listener is not available // and drop the message. 
} } } // Publish a message on the pubsub server func Publish(r redis.Conn, event pubsubEvent, message string) error { _, err := r.Do("PUBLISH", string(event), message) return err } // SendPublish add the message to a transaction func SendPublish(r redis.Conn, event pubsubEvent, message string) error { err := r.Send("PUBLISH", string(event), message) return err } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/redis.go000066400000000000000000000234321411706463700224070ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package database import ( "errors" "fmt" "strconv" "strings" "sync" "time" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/gomodule/redigo/redis" "github.com/rafaeljusto/redigomock" ) const ( redisConnectionTimeout = 200 * time.Millisecond redisReadWriteTimeout = 300 * time.Second ) var ( // ErrUnreachable is returned when the endpoint is not reachable ErrUnreachable = errors.New("redis endpoint unreachable") // ErrRedisUpgradeRequired is returned when the redis server is running an unsupported version ErrRedisUpgradeRequired = errors.New("unsupported Redis version") ) type redisPool interface { Get() redis.Conn Close() error } // Redis is the instance object of the redis database type Redis struct { pool redisPool Pubsub *Pubsub failure bool failureState sync.RWMutex knownMaster string knownMasterLock sync.Mutex stop chan bool ready chan struct{} } // NewRedis returns a new instance of the redis database func NewRedis() *Redis { r := NewRedisCustomPool(nil) // Asynchronous db update handler go r.dbUpdateHandler() return r } // NewRedisCustomPool returns a new instance of the redis database // using a custom pool func NewRedisCustomPool(pool redisPool) *Redis { r := &Redis{ stop: make(chan bool), ready: make(chan struct{}), } if pool != nil { // Check if we are running inside `go test` if _, ok := pool.Get().(*redigomock.Conn); ok { // Close ready since we are running a mock close(r.ready) } r.pool = pool } else { r.pool = &redis.Pool{ MaxIdle: 10, IdleTimeout: 240 * time.Second, Dial: func() (redis.Conn, error) { conn, err := r.Connect() switch err { case nil: r.setFailureState(false) default: r.setFailureState(true) } if r.checkVersion(conn) == ErrRedisUpgradeRequired { log.Fatalf("Unsupported Redis version, please upgrade to Redis >= %s", core.RedisMinimumVersion) } return conn, err }, TestOnBorrow: func(c redis.Conn, t time.Time) error { _, err := c.Do("PING") if RedisIsLoading(err) { return nil } return err }, } } go r.connRecover() return r } // Get returns a redis connection from the pool func (r *Redis) Get() redis.Conn { select { case <-r.ready: default: return &NotReadyError{} } return r.pool.Get() } // UnblockedGet returns a redis connection from the pool even // if the database checks and/or upgrade are not finished. 
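// It is used by the internal housekeeping tasks (version check, format upgrade, locking) that must complete before Get() starts handing out connections.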
func (r *Redis) UnblockedGet() redis.Conn { return r.pool.Get() } // Close closes all connections to the redis database func (r *Redis) Close() { select { case _, _ = <-r.stop: return default: log.Debug("Closing databases connections") r.Pubsub.Close() r.pool.Close() close(r.stop) } } // ConnectPubsub initiates the connection to the pubsub func (r *Redis) ConnectPubsub() { if r.Pubsub == nil { r.Pubsub = NewPubsub(r) } } // CheckVersion checks if the redis server version is supported func (r *Redis) CheckVersion() error { c := r.UnblockedGet() defer c.Close() return r.checkVersion(c) } func (r *Redis) checkVersion(conn redis.Conn) error { if conn == nil { return ErrUnreachable } info, err := parseInfo(conn.Do("INFO", "server")) if err == nil { if parseVersion(info["redis_version"]) < parseVersion(core.RedisMinimumVersion) { return ErrRedisUpgradeRequired } } return err } // Connect initiates a new connection to the redis server func (r *Redis) Connect() (redis.Conn, error) { sentinels := GetConfig().RedisSentinels if len(sentinels) > 0 { if len(GetConfig().RedisSentinelMasterName) == 0 { r.logError("Config: RedisSentinelMasterName cannot be empty!") goto single } for _, s := range sentinels { log.Debugf("Connecting to redis sentinel %s", s.Host) var master []string var masterhost string var cm redis.Conn c, err := r.connectTo(s.Host) if err != nil { r.logError("Sentinel: %s", err.Error()) continue } //AUTH? role, err := r.askRole(c) if err != nil { r.logError("Sentinel: %s", err.Error()) goto closeSentinel } if role != "sentinel" { r.logError("Sentinel: %s is not a sentinel but a %s", s.Host, role) goto closeSentinel } master, err = redis.Strings(c.Do("SENTINEL", "get-master-addr-by-name", GetConfig().RedisSentinelMasterName)) if err == redis.ErrNil { r.logError("Sentinel: %s doesn't know the master-name %s", s.Host, GetConfig().RedisSentinelMasterName) goto closeSentinel } else if err != nil { r.logError("Sentinel: %s", err.Error()) goto closeSentinel } masterhost = fmt.Sprintf("%s:%s", master[0], master[1]) cm, err = r.connectTo(masterhost) if err != nil { r.logError("Redis master: %s", err.Error()) goto closeSentinel } if r.auth(cm) != nil { r.logError("Redis master: auth failed") goto closeMaster } if err = r.selectDB(cm); err != nil { c.Close() return nil, err } role, err = r.askRole(cm) if err != nil { r.logError("Redis master: %s", err.Error()) goto closeMaster } if role != "master" { r.logError("Redis master: %s is not a master but a %s", masterhost, role) goto closeMaster } // Close the connection to the sentinel c.Close() r.printConnectedMaster(masterhost) return cm, nil closeMaster: cm.Close() closeSentinel: c.Close() } } single: if len(GetConfig().RedisAddress) == 0 { if len(sentinels) == 0 { log.Error("No redis master available") } return nil, ErrUnreachable } if len(sentinels) > 0 && r.Failure() == false { log.Warning("No redis master available, trying using the configured RedisAddress as fallback") } c, err := r.connectTo(GetConfig().RedisAddress) if err != nil { return nil, err } if err = r.auth(c); err != nil { c.Close() return nil, err } if err = r.selectDB(c); err != nil { c.Close() return nil, err } role, err := r.askRole(c) if err != nil { r.logError("Redis master: %s", err.Error()) return nil, ErrUnreachable } if role != "master" { r.logError("Redis master: %s is not a master but a %s", GetConfig().RedisAddress, role) return nil, ErrUnreachable } r.printConnectedMaster(GetConfig().RedisAddress) return c, err } func (r *Redis) connectTo(address string) (redis.Conn, 
error) { return redis.Dial("tcp", address, redis.DialConnectTimeout(redisConnectionTimeout), redis.DialReadTimeout(redisReadWriteTimeout), redis.DialWriteTimeout(redisReadWriteTimeout)) } func (r *Redis) askRole(c redis.Conn) (string, error) { roleReply, err := redis.Values(c.Do("ROLE")) if err != nil { return "", err } role, err := redis.String(roleReply[0], err) return role, err } func (r *Redis) auth(c redis.Conn) (err error) { if GetConfig().RedisPassword != "" { _, err = c.Do("AUTH", GetConfig().RedisPassword) } return } func (r *Redis) selectDB(c redis.Conn) (err error) { _, err = c.Do("SELECT", GetConfig().RedisDB) return } func (r *Redis) logError(format string, args ...interface{}) { if r.Failure() { log.Debugf(format, args...) } else { log.Errorf(format, args...) } } func (r *Redis) printConnectedMaster(address string) { r.knownMasterLock.Lock() defer r.knownMasterLock.Unlock() if address != r.knownMaster && core.Daemon { r.knownMaster = address log.Infof("Connected to redis master %s", address) } else { log.Debugf("Connected to redis master %s", address) } } func (r *Redis) setFailureState(failure bool) { r.failureState.Lock() r.failure = failure r.failureState.Unlock() } // Failure returns true if the connection is in a failure state func (r *Redis) Failure() bool { r.failureState.RLock() defer r.failureState.RUnlock() return r.failure } func (r *Redis) connRecover() { ticker := time.NewTicker(1 * time.Second) for { select { case <-r.stop: return case <-ticker.C: if r.Failure() { if conn := r.Get(); conn != nil { // A successful Get() request will automatically unlock // other services waiting for a working connection. // This is only a way to ensure they wont wait forever. if conn.Err() != nil { log.Warningf("Database is down: %s", conn.Err().Error()) } conn.Close() } } } } } func (r *Redis) dbUpdateHandler() { var logOnce sync.Once again: upneeded, err := r.UpgradeNeeded() if err != nil { time.Sleep(100 * time.Millisecond) goto again } if upneeded { t := time.Now() err = r.Upgrade() if err == ErrAlreadyLocked { logOnce.Do(func() { log.Warning("Database upgrade running. 
Waiting for completion...") }) time.Sleep(100 * time.Millisecond) goto again } else if err != nil { log.Fatalf("Upgrade failed: %+v", err) } log.Infof("Database upgrade successful (took %s), starting normally", time.Since(t).Round(time.Millisecond)) } close(r.ready) } // RedisIsLoading returns true if the error is of type LOADING func RedisIsLoading(err error) bool { // PARSING: "LOADING Redis is loading the dataset in memory" if err != nil && strings.HasPrefix(err.Error(), "LOADING") { return true } return false } func parseVersion(version string) int64 { s := strings.Split(version, ".") format := fmt.Sprintf("%%s%%0%ds", 2) var v string for _, value := range s { v = fmt.Sprintf(format, v, value) } var result int64 var err error if result, err = strconv.ParseInt(v, 10, 64); err != nil { return -1 } return result } func parseInfo(i interface{}, err error) (map[string]string, error) { v, err := redis.String(i, err) if err != nil { return nil, err } m := make(map[string]string) lines := strings.Split(v, "\r\n") for _, l := range lines { if strings.HasPrefix(l, "#") { continue } kv := strings.SplitN(l, ":", 2) if len(kv) < 2 { continue } m[kv[0]] = kv[1] } return m, nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/upgrade.go000066400000000000000000000035731411706463700227340ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package database import ( "errors" "time" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database/upgrader" "github.com/gomodule/redigo/redis" ) var ( ErrUnsupportedVersion = errors.New("unsupported database version, please upgrade mirrorbits") ) // UpgradeNeeded returns true if a database upgrade is needed func (r *Redis) UpgradeNeeded() (bool, error) { version, err := r.GetDBFormatVersion() if err != nil { return false, err } if version > core.DBVersion { return false, ErrUnsupportedVersion } return version != core.DBVersion, nil } // GetDBFormatVersion return the current database format version func (r *Redis) GetDBFormatVersion() (int, error) { conn := r.UnblockedGet() defer conn.Close() again: version, err := redis.Int(conn.Do("GET", core.DBVersionKey)) if RedisIsLoading(err) { time.Sleep(time.Millisecond * 100) goto again } else if err == redis.ErrNil { found, err := redis.Bool(conn.Do("EXISTS", "MIRRORS")) if err != nil { return -1, err } if found { return 0, nil } _, err = conn.Do("SET", core.DBVersionKey, core.DBVersion) return core.DBVersion, err } else if err != nil { return -1, err } return version, nil } // Upgrade starts the upgrade of the database format func (r *Redis) Upgrade() error { version, err := r.GetDBFormatVersion() if err != nil { return err } if version > core.DBVersion { return ErrUnsupportedVersion } else if version == core.DBVersion { return nil } lock, err := r.AcquireLock("upgrade") if err != nil { return err } defer lock.Release() for i := version + 1; i <= core.DBVersion; i++ { u := upgrader.GetUpgrader(r, i) if u != nil { log.Warningf("Upgrading database from version %d to version %d...", i-1, i) if err = u.Upgrade(); err != nil { return err } } } return nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/upgrader/000077500000000000000000000000001411706463700225575ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/upgrader/upgrader.go000066400000000000000000000010141411706463700247130ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package upgrader import ( 
"github.com/etix/mirrorbits/database/interfaces" v1 "github.com/etix/mirrorbits/database/v1" ) // Upgrader is an interface to implement a database upgrade strategy type Upgrader interface { Upgrade() error } // GetUpgrader returns the upgrader for the given target version func GetUpgrader(redis interfaces.Redis, version int) Upgrader { switch version { case 1: return v1.NewUpgraderV1(redis) } return nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/utils.go000066400000000000000000000033231411706463700224360ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package database import ( "errors" "strconv" "github.com/gomodule/redigo/redis" ) func (r *Redis) GetListOfMirrors() (map[int]string, error) { conn, err := r.Connect() if err != nil { return nil, err } defer conn.Close() values, err := redis.Values(conn.Do("HGETALL", "MIRRORS")) if err != nil { return nil, err } mirrors := make(map[int]string, len(values)/2) // Convert the mirror id to int for i := 0; i < len(values); i += 2 { key, okKey := values[i].([]byte) value, okValue := values[i+1].([]byte) if !okKey || !okValue { return nil, errors.New("invalid type for mirrors key") } id, err := strconv.Atoi(string(key)) if err != nil { return nil, errors.New("invalid type for mirrors ID") } mirrors[id] = string(value) } return mirrors, nil } type NetReadyError struct { error } func (n *NetReadyError) Timeout() bool { return false } func (n *NetReadyError) Temporary() bool { return true } func NewNetTemporaryError() NetReadyError { return NetReadyError{ error: errors.New("database not ready"), } } type NotReadyError struct{} func (e *NotReadyError) Close() error { return NewNetTemporaryError() } func (e *NotReadyError) Err() error { return NewNetTemporaryError() } func (e *NotReadyError) Do(commandName string, args ...interface{}) (reply interface{}, err error) { return nil, NewNetTemporaryError() } func (e *NotReadyError) Send(commandName string, args ...interface{}) error { return NewNetTemporaryError() } func (e *NotReadyError) Flush() error { return NewNetTemporaryError() } func (e *NotReadyError) Receive() (reply interface{}, err error) { return nil, NewNetTemporaryError() } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/v1/000077500000000000000000000000001411706463700212745ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/database/v1/version1.go000066400000000000000000000145641411706463700234030ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package v1 import ( "fmt" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database/interfaces" "github.com/gomodule/redigo/redis" "github.com/pkg/errors" ) // NewUpgraderV1 upgrades the database from version 0 to 1 func NewUpgraderV1(redis interfaces.Redis) *Version1 { return &Version1{ Redis: redis, } } type Version1 struct { Redis interfaces.Redis } type actions struct { delete []string rename map[string]string } func (v *Version1) Upgrade() error { a := &actions{ rename: make(map[string]string), } conn := v.Redis.UnblockedGet() defer conn.Close() // Erase previous work keys (previous failed upgrade?) 
_, err := conn.Do("EVAL", ` local keys = redis.call('keys', ARGV[1]) for i=1,#keys,5000 do redis.call('del', unpack(keys, i, math.min(i+4999, #keys))) end return keys`, 0, "V1_*") if err != nil { return err } m, err := v.CreateMirrorIndex(a) if err != nil { return err } err = v.RenameKeys(a, m) if err != nil { return err } err = v.FixMirrorID(a, m) if err != nil { return err } err = v.RenameStats(a, m) if err != nil { return err } // Start a transaction to atomically and irrevocably set the new version conn.Send("MULTI") for k, v := range a.rename { conn.Send("RENAME", k, v) } for _, d := range a.delete { do := true for _, v := range a.rename { if d == v { // Abort the operation since this would // delete the result of a rename do = false break } } if do { conn.Send("DEL", d) } } conn.Send("SET", core.DBVersionKey, 1) // Finalize the transaction _, err = conn.Do("EXEC") // <-- At this point, if any of the previous mutation failed, it is still // safe to run a previous version of mirrorbits. return err } func (v *Version1) CreateMirrorIndex(a *actions) (map[int]string, error) { m := make(map[int]string) conn := v.Redis.UnblockedGet() defer conn.Close() // Get the v0 list of mirrors mirrors, err := redis.Strings(conn.Do("LRANGE", "MIRRORS", "0", "-1")) if err != nil { return m, errors.WithStack(err) } for _, name := range mirrors { // Create a unique ID for the current mirror id, err := redis.Int(conn.Do("INCR", "LAST_MID")) if err != nil { return m, errors.WithStack(err) } // Assign the ID to the current mirror if _, err = conn.Do("HSET", "V1_MIRRORS", id, name); err != nil { return m, errors.WithStack(err) } m[id] = name } // Prepare for renaming a.rename["V1_MIRRORS"] = "MIRRORS" return m, nil } func (v *Version1) RenameKeys(a *actions, m map[int]string) error { conn := v.Redis.UnblockedGet() defer conn.Close() // Rename all keys to contain the ID instead of the name for id, name := range m { // Get the list of files known to this mirror files, err := redis.Strings(conn.Do("SMEMBERS", fmt.Sprintf("MIRROR_%s_FILES", name))) if err == redis.ErrNil || IsErrNoSuchKey(err) { continue } else if err != nil { return errors.WithStack(err) } // Rename the FILEINFO__ keys for _, file := range files { a.rename[fmt.Sprintf("FILEINFO_%s_%s", name, file)] = fmt.Sprintf("FILEINFO_%d_%s", id, file) } // Rename the remaing global keys a.rename[fmt.Sprintf("MIRROR_%s_FILES", name)] = fmt.Sprintf("MIRRORFILES_%d", id) a.rename[fmt.Sprintf("HANDLEDFILES_%s", name)] = fmt.Sprintf("HANDLEDFILES_%d", id) // MIRROR_%s -> MIRROR_%d is handled by FixMirrorID } // Get the list of files in the local repo files, err := redis.Strings(conn.Do("SMEMBERS", "FILES")) if err != nil && err != redis.ErrNil { return errors.WithStack(err) } // Rename the keys within FILEMIRRORS_* for _, file := range files { // Get the list of mirrors having each file names, err := redis.Strings(conn.Do("SMEMBERS", fmt.Sprintf("FILEMIRRORS_%s", file))) if err != nil { return errors.WithStack(err) } for _, name := range names { var id int for mid, mname := range m { if mname == name { id = mid break } } if id == 0 { continue } conn.Send("SADD", fmt.Sprintf("V1_FILEMIRRORS_%s", file), id) } if err := conn.Flush(); err != nil { return errors.WithStack(err) } // Mark the key for renaming a.rename[fmt.Sprintf("V1_FILEMIRRORS_%s", file)] = fmt.Sprintf("FILEMIRRORS_%s", file) } return nil } func (v *Version1) FixMirrorID(a *actions, m map[int]string) error { conn := v.Redis.UnblockedGet() defer conn.Close() // Replace ID by the new mirror id // Add a 
field 'name' containing the mirror name for id, name := range m { err := CopyKey(conn, fmt.Sprintf("MIRROR_%s", name), fmt.Sprintf("V1_MIRROR_%d", id)) if err != nil { return errors.WithStack(err) } conn.Send("HMSET", fmt.Sprintf("V1_MIRROR_%d", id), "ID", id, "name", name) a.rename[fmt.Sprintf("V1_MIRROR_%d", id)] = fmt.Sprintf("MIRROR_%d", id) a.delete = append(a.delete, fmt.Sprintf("MIRROR_%s", name)) } if err := conn.Flush(); err != nil { return errors.WithStack(err) } return nil } func (v *Version1) RenameStats(a *actions, m map[int]string) error { conn := v.Redis.UnblockedGet() defer conn.Close() keys, err := redis.Strings(conn.Do("KEYS", "STATS_MIRROR_*")) if err != nil && err != redis.ErrNil { return errors.WithStack(err) } for _, key := range keys { // Here we get two formats: // - STATS_MIRROR_* // - STATS_MIRROR_BYTES_* // and each of them with three different dates (year, year+month, year+month+day) stats, err := redis.StringMap(conn.Do("HGETALL", key)) if err != nil { return errors.WithStack(err) } for identifier, value := range stats { var id int for mid, mname := range m { if mname == identifier { id = mid break } } if id == 0 { // Mirror does not exist anymore // This is expected if mirrors were removed over time continue } conn.Send("HSET", "V1_"+key, id, value) a.rename["V1_"+key] = key } if err := conn.Flush(); err != nil { return errors.WithStack(err) } } return nil } func CopyKey(conn redis.Conn, src, dst string) error { dmp, err := redis.String(conn.Do("DUMP", src)) if err != nil { return err } _, err = conn.Do("RESTORE", dst, 0, dmp, "REPLACE") return err } // IsErrNoSuchKey returns true if the error is of type "no such key" func IsErrNoSuchKey(err error) bool { // PARSING: "ERR no such key" if err != nil && err.Error() == "ERR no such key" { return true } return false } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/doc.go000066400000000000000000000027641411706463700203060ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license // Mirrorbits is a geographic download redirector for distributing files efficiently across a set of mirrors. // // Prerequisites // // Before diving into the install section, ensure you have: // - Redis 2.8.12 (or later) // - libgeoip // - a recent geoip database (see contrib/geoip/) // // Installation // // You can now proceed to the installation by downloading a prebuilt release on // https://github.com/etix/mirrorbits/releases or by building it yourself: // go get github.com/etix/mirrorbits // go install -v github.com/etix/mirrorbits // If you plan to use the web UI, be sure to install the templates found on // https://github.com/etix/mirrorbits/tree/master/templates into your system (usually in /usr/share/mirrorbits). // // Configuration // // A sample configuration file can be found in the git repository: // https://github.com/etix/mirrorbits/blob/master/mirrorbits.conf // // Running // // Mirrorbits is a self-contained application and is, at the same time, the server and the cli. // // To run the server: // mirrorbits -D // To run the cli: // mirrorbits help // // Upgrading // // Mirrorbits has a mode called seamless binary upgrade to upgrade the server executable at runtime // without service disruption. 
Once the binary has been replaced just issue the following // command in the cli: // mirrorbits upgrade // // For more information visit the official page: // https://github.com/etix/mirrorbits/ package main mirrorbits-0.5.1+git20210123.eeea0e0+ds1/filesystem/000077500000000000000000000000001411706463700213665ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/filesystem/fileinfo.go000066400000000000000000000012051411706463700235060ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package filesystem import ( "time" ) // FileInfo is a struct embedding details about a file served by // the redirector. type FileInfo struct { Path string `redis:"-"` Size int64 `redis:"size" json:",omitempty"` ModTime time.Time `redis:"modTime" json:",omitempty"` Sha1 string `redis:"sha1" json:",omitempty"` Sha256 string `redis:"sha256" json:",omitempty"` Md5 string `redis:"md5" json:",omitempty"` } // NewFileInfo returns a new FileInfo object func NewFileInfo(path string) FileInfo { return FileInfo{ Path: path, } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/filesystem/fs.go000066400000000000000000000025211411706463700223250ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package filesystem import ( "errors" "path/filepath" "strings" ) var ( // ErrOutsideRepo is returned when the target file is outside of the repository ErrOutsideRepo = errors.New("target file outside repository") ) // EvaluateFilePath sanitize and validate the file against the local repository func EvaluateFilePath(repository, urlpath string) (string, error) { fpath := repository + urlpath // Get the absolute file path fpath, err := filepath.Abs(fpath) if err != nil { return "", err } // Check if absolute path is within the repository if !IsInRepository(repository, fpath) { return "", ErrOutsideRepo } // Evaluate symlinks targetPath, err := filepath.EvalSymlinks(fpath) if err != nil { return "", err } if targetPath != fpath { targetPath, err = filepath.Abs(targetPath) if err != nil { return "", err } if !IsInRepository(repository, targetPath) { return "", ErrOutsideRepo } return targetPath[len(repository):], nil } return fpath[len(repository):], nil } // IsInRepository ensures that the given file path is contained in the repository func IsInRepository(repository, filePath string) bool { if filePath == repository { return true } if strings.HasPrefix(filePath, repository+"/") { return true } return false } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/filesystem/hash.go000066400000000000000000000031221411706463700226360ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package filesystem import ( "bufio" "crypto/md5" "crypto/sha1" "crypto/sha256" "encoding/hex" "hash" "io" "os" . 
"github.com/etix/mirrorbits/config" ) // HashFile generates a human readable hash of the given file path func HashFile(path string) (hashes FileInfo, err error) { f, err := os.Open(path) if err != nil { return } defer f.Close() reader := bufio.NewReader(f) var writers []io.Writer if GetConfig().Hashes.SHA1 { hsha1 := newHasher(sha1.New(), &hashes.Sha1) defer hsha1.Close() writers = append(writers, hsha1) } if GetConfig().Hashes.SHA256 { hsha256 := newHasher(sha256.New(), &hashes.Sha256) defer hsha256.Close() writers = append(writers, hsha256) } if GetConfig().Hashes.MD5 { hmd5 := newHasher(md5.New(), &hashes.Md5) defer hmd5.Close() writers = append(writers, hmd5) } if len(writers) == 0 { return } w := io.MultiWriter(writers...) _, err = io.Copy(w, reader) if err != nil { return } return } type hasher struct { hash.Hash output *string } func newHasher(hash hash.Hash, output *string) hasher { return hasher{ Hash: hash, output: output, } } func (h hasher) Close() error { *h.output = hex.EncodeToString(h.Sum(nil)) return nil } // Sha256sum generates a human readable sha256 hash of the given file path func Sha256sum(path string) ([]byte, error) { f, err := os.Open(path) if err != nil { return nil, err } defer f.Close() h := sha256.New() if _, err := io.Copy(h, f); err != nil { return nil, err } return h.Sum(nil), nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/go.mod000066400000000000000000000022641411706463700203140ustar00rootroot00000000000000module github.com/etix/mirrorbits require ( github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f github.com/etix/goftp v0.0.0-20170217140226-0c13163a1028 github.com/golang/protobuf v1.3.2 github.com/gomodule/redigo v0.0.0-20181026001555-e8fc0692a7e2 github.com/howeyc/gopass v0.0.0-20190910152052-7cb4b85ec19c github.com/kr/pretty v0.1.0 // indirect github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 github.com/oschwald/maxminddb-golang v1.5.0 github.com/pkg/errors v0.8.1 github.com/rafaeljusto/redigomock v0.0.0-20190202135759-257e089e14a1 github.com/stretchr/testify v1.4.0 // indirect github.com/youtube/vitess v0.0.0-20181105031612-54855ec7b369 golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 // indirect golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2 golang.org/x/sys v0.0.0-20190913121621-c3b328c6e5a7 // indirect golang.org/x/text v0.3.2 // indirect google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51 // indirect google.golang.org/grpc v1.23.1 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect gopkg.in/tylerb/graceful.v1 v1.2.15 gopkg.in/yaml.v2 v2.2.2 vitess.io/vitess v2.1.1+incompatible // indirect ) go 1.13 mirrorbits-0.5.1+git20210123.eeea0e0+ds1/go.sum000066400000000000000000000216641411706463700203460ustar00rootroot00000000000000cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f h1:JOrtw2xFKzlg+cbHpyrpLDmnN1HqhBfnX7WDiW7eG2c= github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/etix/goftp 
v0.0.0-20170217140226-0c13163a1028 h1:hO2NDwWjaY+FjWoZdMLapjkxt9Gpnmjb4ZdfOaQi9nI= github.com/etix/goftp v0.0.0-20170217140226-0c13163a1028/go.mod h1:broujVOEKwPL7fT3Xa1mJzV2aIqT1keOqSVEPqEO61Y= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/gomodule/redigo v0.0.0-20181026001555-e8fc0692a7e2 h1:Gzyurvlb8eehpl7l2YLkMddyOXWkdQN7wU5x5l/xM9s= github.com/gomodule/redigo v0.0.0-20181026001555-e8fc0692a7e2/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/howeyc/gopass v0.0.0-20190910152052-7cb4b85ec19c h1:aY2hhxLhjEAbfXOx2nRJxCXezC6CO2V/yN+OCr1srtk= github.com/howeyc/gopass v0.0.0-20190910152052-7cb4b85ec19c/go.mod h1:lADxMC39cJJqL93Duh1xhAs4I2Zs8mKS89XWXFGp9cs= github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 h1:lDH9UUVJtmYCjyT0CI4q8xvlXPxeZ0gYCVvWbmPlp88= github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk= github.com/oschwald/maxminddb-golang v1.5.0 h1:rmyoIV6z2/s9TCJedUuDiKht2RN12LWJ1L7iRGtWY64= github.com/oschwald/maxminddb-golang v1.5.0/go.mod h1:3jhIUymTJ5VREKyIhWm66LJiQt04F0UCDdodShpjWsY= github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/rafaeljusto/redigomock v0.0.0-20190202135759-257e089e14a1 h1:+kGqA4dNN5hn7WwvKdzHl0rdN5AEkbNZd0VjRltAiZg= github.com/rafaeljusto/redigomock v0.0.0-20190202135759-257e089e14a1/go.mod h1:JaY6n2sDr+z2WTsXkOmNRUfDy6FN0L6Nk7x06ndm4tY= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/youtube/vitess v0.0.0-20181105031612-54855ec7b369 h1:Hg7gcIGpsMjVX63qXG6QYpin4kX5WrJ05VSAyxzgxIA= github.com/youtube/vitess v0.0.0-20181105031612-54855ec7b369/go.mod h1:hpMim5/30F1r+0P8GGtB29d0gWHr0IZ5unS+CG0zMx8= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 h1:0hQKqeLdqlt5iIwVOBErRisrHJAN57yOiPRQItI20fU= golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod 
h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2 h1:4dVFTC832rPn4pomLSz1vA+are2+dU19w1H8OngV7nc= golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190913121621-c3b328c6e5a7 h1:wYqz/tQaWUgGKyx+B/rssSE6wkIKdY5Ee6ryOmzarIg= golang.org/x/sys v0.0.0-20190913121621-c3b328c6e5a7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51 h1:Ex1mq5jaJof+kRnYi3SlYJ8KKa9Ao3NHyIT5XJ1gF6U= google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod 
h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.23.1 h1:q4XQuHFC6I28BKZpo6IYyb3mNO+l7lSOxRuYTCiDfXk= google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/tylerb/graceful.v1 v1.2.15 h1:1JmOyhKqAyX3BgTXMI84LwT6FOJ4tP2N9e2kwTCM0nQ= gopkg.in/tylerb/graceful.v1 v1.2.15/go.mod h1:yBhekWvR20ACXVObSSdD3u6S9DeSylanL2PAbAC/uJ8= gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= vitess.io/vitess v2.1.1+incompatible h1:nuuGHiWYWpudD3gOCLeGzol2EJ25e/u5Wer2wV1O130= vitess.io/vitess v2.1.1+incompatible/go.mod h1:h4qvkyNYTOC0xI+vcidSWoka0gQAZc9ZPHbkHo48gP0= mirrorbits-0.5.1+git20210123.eeea0e0+ds1/http/000077500000000000000000000000001411706463700201615ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/http/context.go000066400000000000000000000061771411706463700222070ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package http import ( "net/http" "net/url" "strings" ) // RequestType defines the type of the request type RequestType int // SecureOption is the type that defines TLS requirements type SecureOption int const ( STANDARD RequestType = iota MIRRORLIST FILESTATS MIRRORSTATS CHECKSUM UNDEFINED SecureOption = iota WITHTLS WITHOUTTLS ) // Context represents the context of a request type Context struct { r *http.Request w http.ResponseWriter t Templates v url.Values typ RequestType isMirrorList bool isMirrorStats bool isFileStats bool isChecksum bool isPretty bool secureOption SecureOption } // NewContext returns a new instance of Context func NewContext(w http.ResponseWriter, r *http.Request, t Templates) *Context { c := &Context{r: r, w: w, t: t, v: r.URL.Query()} if c.paramBool("mirrorlist") { c.typ = MIRRORLIST c.isMirrorList = true } else if c.paramBool("stats") { c.typ = FILESTATS c.isFileStats = true } else if c.paramBool("mirrorstats") { c.typ = MIRRORSTATS c.isMirrorStats = true } else if c.paramBool("md5") || c.paramBool("sha1") || c.paramBool("sha256") { c.typ = CHECKSUM c.isChecksum = true } else { c.typ = STANDARD } if c.paramBool("pretty") { c.isPretty = true } // Check for HTTPS requirements proto := r.Header.Get("X-Forwarded-Proto") if strings.ToLower(proto) == "https" { c.secureOption = WITHTLS } v, ok := c.v["https"] if ok { if v[0] == "1" { c.secureOption = WITHTLS } else if v[0] == "0" { c.secureOption = WITHOUTTLS } } return c } // Request returns the underlying http.Request of the current request func (c *Context) Request() *http.Request { return c.r } // ResponseWriter returns the underlying http.ResponseWriter of the current request func (c *Context) ResponseWriter() http.ResponseWriter { return c.w } // Templates returns the instance of precompiled templates func (c *Context) Templates() Templates { return c.t } // Type returns 
the type of the current request func (c *Context) Type() RequestType { return c.typ } // IsMirrorlist returns true if the mirror list has been requested func (c *Context) IsMirrorlist() bool { return c.isMirrorList } // IsFileStats returns true if the file stats has been requested func (c *Context) IsFileStats() bool { return c.isFileStats } // IsMirrorStats returns true if the mirror stats has been requested func (c *Context) IsMirrorStats() bool { return c.isMirrorStats } // IsChecksum returns true if a checksum has been requested func (c *Context) IsChecksum() bool { return c.isChecksum } // IsPretty returns true if the pretty json has been requested func (c *Context) IsPretty() bool { return c.isPretty } // QueryParam returns the value associated with the given query parameter func (c *Context) QueryParam(key string) string { return c.v.Get(key) } // SecureOption returns the selected secure option func (c *Context) SecureOption() SecureOption { return c.secureOption } func (c *Context) paramBool(key string) bool { _, ok := c.v[key] return ok } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/http/gzip.go000066400000000000000000000020241411706463700214570ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package http import ( "io" "net/http" "strings" . "github.com/etix/mirrorbits/config" "github.com/youtube/vitess/go/cgzip" ) type gzipResponseWriter struct { io.Writer http.ResponseWriter typeGuessed bool } func (w *gzipResponseWriter) Write(b []byte) (int, error) { if !w.typeGuessed { if w.Header().Get("Content-Type") == "" { w.Header().Set("Content-Type", http.DetectContentType(b)) } w.typeGuessed = true } return w.Writer.Write(b) } // NewGzipHandler is an HTTP handler used to compress responses if supported by the client func NewGzipHandler(fn http.HandlerFunc) http.HandlerFunc { return func(w http.ResponseWriter, r *http.Request) { if !GetConfig().Gzip || !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") { fn(w, r) return } w.Header().Set("Content-Encoding", "gzip") gz, _ := cgzip.NewWriterLevel(w, cgzip.Z_BEST_SPEED) defer gz.Close() fn(&gzipResponseWriter{Writer: gz, ResponseWriter: w}, r) } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/http/http.go000066400000000000000000000377551411706463700215100ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package http import ( "encoding/json" "fmt" "html/template" "math/rand" "net" "net/http" "os" "path/filepath" "sort" "strconv" "strings" "sync" "time" systemd "github.com/coreos/go-systemd/daemon" . 
"github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/logs" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/network" "github.com/etix/mirrorbits/utils" "github.com/gomodule/redigo/redis" "github.com/op/go-logging" "gopkg.in/tylerb/graceful.v1" ) var ( log = logging.MustGetLogger("main") ) // HTTP represents an instance of the HTTP webserver type HTTP struct { geoip *network.GeoIP redis *database.Redis templates Templates Listener *net.Listener server *graceful.Server serverStopChan <-chan struct{} stats *Stats cache *mirrors.Cache engine mirrorSelection Restarting bool stopped bool stoppedMutex sync.Mutex } // Templates is a struct embedding instances of the precompiled templates type Templates struct { *sync.RWMutex mirrorlist *template.Template mirrorstats *template.Template } // HTTPServer is the constructor of the HTTP server func HTTPServer(redis *database.Redis, cache *mirrors.Cache) *HTTP { h := new(HTTP) h.redis = redis h.geoip = network.NewGeoIP() h.templates.RWMutex = new(sync.RWMutex) h.templates.mirrorlist = template.Must(h.LoadTemplates("mirrorlist")) h.templates.mirrorstats = template.Must(h.LoadTemplates("mirrorstats")) h.cache = cache h.stats = NewStats(redis) h.engine = DefaultEngine{} http.Handle("/", NewGzipHandler(h.requestDispatcher)) // Load the GeoIP databases if err := h.geoip.LoadGeoIP(); err != nil { if gerr, ok := err.(network.GeoIPError); ok { for _, e := range gerr.Errors { log.Critical(e.Error()) } if gerr.IsFatal() { if len(GetConfig().Fallbacks) == 0 { log.Fatal("Can't load the GeoIP databases, please set a valid path in the mirrorbits configuration") } else { log.Critical("Can't load the GeoIP databases, all requests will be served by the fallback mirrors") } } else { log.Critical("One or more GeoIP database could not be loaded, service will run in degraded mode") } } } // Initialize the random number generator rand.Seed(time.Now().UnixNano()) return h } // SetListener can be used to set a different listener that should be used by the // HTTP server. This is primarily used during seamless binary upgrade. 
func (h *HTTP) SetListener(l net.Listener) { h.Listener = &l } // Stop gracefully stops the HTTP server with a timeout to let // the remaining connections finish func (h *HTTP) Stop(timeout time.Duration) { /* Close the server and process remaining connections */ h.stoppedMutex.Lock() defer h.stoppedMutex.Unlock() if h.stopped { return } h.stopped = true h.server.Stop(timeout) } // Terminate terminates the current HTTP server gracefully func (h *HTTP) Terminate() { /* Wait for the server to stop */ select { case <-h.serverStopChan: } /* Commit the latest recorded stats to the database */ h.stats.Terminate() } // StopChan returns a channel that notifies when the server is stopped func (h *HTTP) StopChan() <-chan struct{} { return h.serverStopChan } // Reload the configuration func (h *HTTP) Reload() { // Reload the GeoIP database h.geoip.LoadGeoIP() // Reload the templates h.templates.Lock() if t, err := h.LoadTemplates("mirrorlist"); err == nil { h.templates.mirrorlist = t } else { log.Errorf("could not reload templates 'mirrorlist': %s", err.Error()) } if t, err := h.LoadTemplates("mirrorstats"); err == nil { h.templates.mirrorstats = t } else { log.Errorf("could not reload templates 'mirrorstats': %s", err.Error()) } h.templates.Unlock() } // RunServer is the main function used to start the HTTP server func (h *HTTP) RunServer() (err error) { // If listener isn't nil that means that we're running a seamless // binary upgrade and we have recovered an already running listener if h.Listener == nil { proto := "tcp" address := GetConfig().ListenAddress if strings.HasPrefix(address, "unix:") { proto = "unix" address = strings.TrimPrefix(address, "unix:") } listener, err := net.Listen(proto, address) if err != nil { log.Fatal("Listen: ", err) } h.SetListener(listener) } h.server = &graceful.Server{ // http Server: &http.Server{ Handler: nil, ReadTimeout: 10 * time.Second, WriteTimeout: 10 * time.Second, MaxHeaderBytes: 1 << 20, }, // graceful Timeout: 10 * time.Second, NoSignalHandling: true, } h.serverStopChan = h.server.StopChan() log.Infof("Service listening on %s", GetConfig().ListenAddress) // Since main blocks here until completion, tell systemd we're ready. // This is a no-op if NOTIFY_SOCKET isn't set. 
if os.Getenv("NOTIFY_SOCKET") != "" { log.Debug("Notifying systemd of readiness") systemd.SdNotify(false, systemd.SdNotifyReady) } /* Serve until we receive a SIGTERM */ return h.server.Serve(*h.Listener) } func (h *HTTP) requestDispatcher(w http.ResponseWriter, r *http.Request) { h.templates.RLock() ctx := NewContext(w, r, h.templates) h.templates.RUnlock() w.Header().Set("Server", "Mirrorbits/"+core.VERSION) switch ctx.Type() { case MIRRORLIST: fallthrough case STANDARD: h.mirrorHandler(w, r, ctx) case MIRRORSTATS: h.mirrorStatsHandler(w, r, ctx) case FILESTATS: h.fileStatsHandler(w, r, ctx) case CHECKSUM: h.checksumHandler(w, r, ctx) } } func (h *HTTP) mirrorHandler(w http.ResponseWriter, r *http.Request, ctx *Context) { //XXX it would be safer to recover in case of panic // Sanitize path urlPath, err := filesystem.EvaluateFilePath(GetConfig().Repository, r.URL.Path) if err != nil { if err == filesystem.ErrOutsideRepo { http.Error(w, http.StatusText(http.StatusForbidden), http.StatusForbidden) return } http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) return } fileInfo := filesystem.NewFileInfo(urlPath) remoteIP := network.ExtractRemoteIP(r.Header.Get("X-Forwarded-For")) if len(remoteIP) == 0 { remoteIP = network.RemoteIPFromAddr(r.RemoteAddr) } if ctx.IsMirrorlist() { fromip := ctx.QueryParam("fromip") if net.ParseIP(fromip) != nil { remoteIP = fromip } } clientInfo := h.geoip.GetRecord(remoteIP) //TODO return a pointer? mlist, excluded, err := h.engine.Selection(ctx, h.cache, &fileInfo, clientInfo) /* Handle errors */ fallback := false if _, ok := err.(net.Error); ok || len(mlist) == 0 { /* Handle fallbacks */ fallbacks := GetConfig().Fallbacks if len(fallbacks) > 0 { fallback = true for i, f := range fallbacks { mlist = append(mlist, mirrors.Mirror{ ID: i * -1, Name: fmt.Sprintf("fallback%d", i), HttpURL: f.URL, CountryCodes: strings.ToUpper(f.CountryCode), CountryFields: []string{strings.ToUpper(f.CountryCode)}, ContinentCode: strings.ToUpper(f.ContinentCode)}) } sort.Sort(mirrors.ByRank{Mirrors: mlist, ClientInfo: clientInfo}) } else { // No fallback in stock, there's nothing else we can do http.Error(w, http.StatusText(http.StatusServiceUnavailable), http.StatusServiceUnavailable) return } } else if err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) return } results := &mirrors.Results{ FileInfo: fileInfo, MirrorList: mlist, ExcludedList: excluded, ClientInfo: clientInfo, IP: remoteIP, Fallback: fallback, LocalJSPath: GetConfig().LocalJSPath, } var resultRenderer resultsRenderer if ctx.IsMirrorlist() { resultRenderer = &MirrorListRenderer{} } else { switch GetConfig().OutputMode { case "json": resultRenderer = &JSONRenderer{} case "redirect": resultRenderer = &RedirectRenderer{} case "auto": accept := r.Header.Get("Accept") if strings.Index(accept, "application/json") >= 0 { resultRenderer = &JSONRenderer{} } else { resultRenderer = &RedirectRenderer{} } default: http.Error(w, "No page renderer", http.StatusInternalServerError) return } } w.Header().Set("Cache-Control", "private, no-cache") status, err := resultRenderer.Write(ctx, results) if err != nil { http.Error(w, err.Error(), status) } if !ctx.IsMirrorlist() { logs.LogDownload(resultRenderer.Type(), status, results, err) if len(mlist) > 0 { h.stats.CountDownload(mlist[0], fileInfo) } } return } // LoadTemplates pre-loads templates from the configured template directory func (h *HTTP) LoadTemplates(name string) (t *template.Template, err error) { t = template.New("t") 
t.Funcs(template.FuncMap{ "add": utils.Add, "sizeof": utils.ReadableSize, "version": utils.Version, "hostname": utils.Hostname, "concaturl": utils.ConcatURL, "dateutc": utils.FormattedDateUTC, }) t, err = t.ParseFiles( filepath.Clean(GetConfig().Templates+"/base.html"), filepath.Clean(fmt.Sprintf("%s/%s.html", GetConfig().Templates, name))) if err != nil { if e, ok := err.(*os.PathError); ok { log.Fatalf(fmt.Sprintf("Cannot load template %s: %s", e.Path, e.Err.Error())) } else { log.Fatal(err.Error()) } } return t, err } // StatsFileNow is the structure containing the latest stats of a file type StatsFileNow struct { Today int64 Month int64 Year int64 Total int64 } // StatsFilePeriod is the structure containing the stats for the given period type StatsFilePeriod struct { Period string Downloads int64 } // See stats.go header for the storage structure func (h *HTTP) fileStatsHandler(w http.ResponseWriter, r *http.Request, ctx *Context) { var output []byte rconn := h.redis.Get() defer rconn.Close() req := strings.SplitN(ctx.QueryParam("stats"), "-", 3) // Sanity check for _, e := range req { if e == "" { continue } if _, err := strconv.ParseInt(e, 10, 0); err != nil { http.Error(w, "Invalid period", http.StatusBadRequest) return } } if len(req) == 0 || req[0] == "" { fkey := fmt.Sprintf("STATS_FILE_%s", time.Now().Format("2006_01_02")) rconn.Send("MULTI") for i := 0; i < 4; i++ { rconn.Send("HGET", fkey, r.URL.Path) fkey = fkey[:strings.LastIndex(fkey, "_")] } res, err := redis.Values(rconn.Do("EXEC")) if err != nil && err != redis.ErrNil { http.Error(w, err.Error(), http.StatusInternalServerError) return } s := &StatsFileNow{} s.Today, _ = redis.Int64(res[0], err) s.Month, _ = redis.Int64(res[1], err) s.Year, _ = redis.Int64(res[2], err) s.Total, _ = redis.Int64(res[3], err) output, err = json.MarshalIndent(s, "", " ") } else { // Generate the redis key dkey := "STATS_FILE_" for _, e := range req { dkey += fmt.Sprintf("%s_", e) } dkey = dkey[:len(dkey)-1] v, err := redis.Int64(rconn.Do("HGET", dkey, r.URL.Path)) if err != nil && err != redis.ErrNil { http.Error(w, err.Error(), http.StatusInternalServerError) return } s := &StatsFilePeriod{Period: ctx.QueryParam("stats"), Downloads: v} output, err = json.MarshalIndent(s, "", " ") } w.Write(output) } func (h *HTTP) checksumHandler(w http.ResponseWriter, r *http.Request, ctx *Context) { // Sanitize path urlPath, err := filesystem.EvaluateFilePath(GetConfig().Repository, r.URL.Path) if err != nil { if err == filesystem.ErrOutsideRepo { http.Error(w, http.StatusText(http.StatusForbidden), http.StatusForbidden) return } http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound) return } fileInfo, err := h.cache.GetFileInfo(urlPath) if err == redis.ErrNil { http.NotFound(w, r) return } else if err != nil { log.Errorf("Error while fetching Fileinfo: %s", err.Error()) http.Error(w, http.StatusText(http.StatusServiceUnavailable), http.StatusServiceUnavailable) return } var hash string if ctx.paramBool("md5") { hash = fileInfo.Md5 } else if ctx.paramBool("sha1") { hash = fileInfo.Sha1 } else if ctx.paramBool("sha256") { hash = fileInfo.Sha256 } if len(hash) == 0 { http.Error(w, "Hash type not supported", http.StatusNotFound) return } w.Header().Set("Content-Type", "text/plain; charset=UTF-8") w.Write([]byte(fmt.Sprintf("%s %s", hash, filepath.Base(fileInfo.Path)))) return } // MirrorStats contains the stats of a given mirror type MirrorStats struct { ID int Name string Downloads int64 Bytes int64 PercentD float32 PercentB float32 
SyncOffset SyncOffset TZOffset time.Duration } // SyncOffset contains the time offset between the mirror and the local repository type SyncOffset struct { Valid bool Value int // in hours HumanReadable string } // MirrorStatsPage contains the values needed to generate the mirrorstats page type MirrorStatsPage struct { List []MirrorStats MirrorList []mirrors.Mirror LocalJSPath string HasTZAdjustement bool } // byDownloadNumbers is a sorting function type byDownloadNumbers struct { mirrorStatsSlice } func (b byDownloadNumbers) Less(i, j int) bool { if b.mirrorStatsSlice[i].Downloads > b.mirrorStatsSlice[j].Downloads { return true } return false } // mirrorStatsSlice is a slice of MirrorStats type mirrorStatsSlice []MirrorStats func (s mirrorStatsSlice) Len() int { return len(s) } func (s mirrorStatsSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (h *HTTP) mirrorStatsHandler(w http.ResponseWriter, r *http.Request, ctx *Context) { rconn := h.redis.Get() defer rconn.Close() // Get all mirrors ID mirrorsMap, err := h.redis.GetListOfMirrors() if err != nil { http.Error(w, "Cannot fetch the list of mirrors", http.StatusInternalServerError) return } var mirrorsIDs []int for id := range mirrorsMap { // We need a common order to iterate the // results from Redis. mirrorsIDs = append(mirrorsIDs, id) } rconn.Send("MULTI") // Get all mirrors stats for _, id := range mirrorsIDs { today := time.Now().UTC().Format("2006_01_02") rconn.Send("HGET", "STATS_MIRROR_"+today, id) rconn.Send("HGET", "STATS_MIRROR_BYTES_"+today, id) } stats, err := redis.Values(rconn.Do("EXEC")) if err != nil { http.Error(w, "Cannot fetch stats", http.StatusInternalServerError) return } var hasTZAdjustement bool var maxdownloads int64 var maxbytes int64 var results []MirrorStats var index int64 mlist := make([]mirrors.Mirror, 0, len(mirrorsIDs)) for _, id := range mirrorsIDs { mirror, err := h.cache.GetMirror(id) if err != nil { continue } mlist = append(mlist, mirror) var downloads int64 if v, _ := redis.String(stats[index], nil); v != "" { downloads, _ = strconv.ParseInt(v, 10, 64) } var bytes int64 if v, _ := redis.String(stats[index+1], nil); v != "" { bytes, _ = strconv.ParseInt(v, 10, 64) } if downloads > maxdownloads { maxdownloads = downloads } if bytes > maxbytes { maxbytes = bytes } var lastModTime time.Time if !mirror.LastModTime.IsZero() { lastModTime = mirror.LastModTime.Time } elapsed := time.Since(lastModTime) tzoffset, _ := time.ParseDuration(fmt.Sprintf("%dms", mirror.TZOffset)) if tzoffset != 0 { hasTZAdjustement = true } s := MirrorStats{ ID: id, Name: mirror.Name, Downloads: downloads, Bytes: bytes, SyncOffset: SyncOffset{ Valid: !lastModTime.IsZero(), Value: int(elapsed.Hours()), HumanReadable: utils.FuzzyTimeStr(elapsed), }, TZOffset: tzoffset, } results = append(results, s) index += 2 } sort.Sort(byDownloadNumbers{results}) for i := 0; i < len(results); i++ { results[i].PercentD = float32(results[i].Downloads) * 100 / float32(maxdownloads) results[i].PercentB = float32(results[i].Bytes) * 100 / float32(maxbytes) } w.Header().Set("Content-Type", "text/html; charset=utf-8") err = ctx.Templates().mirrorstats.ExecuteTemplate(w, "base", MirrorStatsPage{results, mlist, GetConfig().LocalJSPath, hasTZAdjustement}) if err != nil { log.Errorf("HTTP error: %s", err.Error()) http.Error(w, err.Error(), http.StatusInternalServerError) return } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/http/pagerenderer.go000066400000000000000000000077571411706463700231730ustar00rootroot00000000000000// Copyright (c) 2014-2019 
Ludovic Fauvet // Licensed under the MIT license package http import ( "bytes" "encoding/json" "errors" "fmt" "net/http" "sort" "strconv" "strings" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/mirrors" ) var ( // ErrTemplatesNotFound is returned when the templates cannot be loaded ErrTemplatesNotFound = errors.New("please set a valid path to the templates directory") ) // resultsRenderer is the interface for all result renderers type resultsRenderer interface { Write(ctx *Context, results *mirrors.Results) (int, error) Type() string } // JSONRenderer is used to render JSON formatted details about the current request type JSONRenderer struct{} // Type returns the type of renderer func (w *JSONRenderer) Type() string { return "JSON" } // Write is used to write the result to the ResponseWriter func (w *JSONRenderer) Write(ctx *Context, results *mirrors.Results) (statusCode int, err error) { if ctx.IsPretty() { output, err := json.MarshalIndent(results, "", " ") if err != nil { return http.StatusInternalServerError, err } ctx.ResponseWriter().Header().Set("Content-Type", "application/json; charset=utf-8") ctx.ResponseWriter().Header().Set("Content-Length", strconv.Itoa(len(output))) ctx.ResponseWriter().Write(output) } else { ctx.ResponseWriter().Header().Set("Content-Type", "application/json; charset=utf-8") err = json.NewEncoder(ctx.ResponseWriter()).Encode(results) if err != nil { return http.StatusInternalServerError, err } } return http.StatusOK, nil } // RedirectRenderer is a basic renderer that redirects the user to the first mirror in the list type RedirectRenderer struct{} // Type returns the type of renderer func (w *RedirectRenderer) Type() string { return "REDIRECT" } // Write is used to write the result to the ResponseWriter func (w *RedirectRenderer) Write(ctx *Context, results *mirrors.Results) (statusCode int, err error) { if len(results.MirrorList) > 0 { ctx.ResponseWriter().Header().Set("Content-Type", "text/html; charset=utf-8") path := strings.TrimPrefix(results.FileInfo.Path, "/") mh := len(results.MirrorList) maxheaders := GetConfig().MaxLinkHeaders if mh > maxheaders+1 { mh = maxheaders + 1 } if mh >= 1 { // Generate the header alternative links for i, m := range results.MirrorList[1:mh] { var countryCode string if len(m.CountryFields) > 0 { countryCode = strings.ToLower(m.CountryFields[0]) } ctx.ResponseWriter().Header().Add("Link", fmt.Sprintf("<%s>; rel=duplicate; pri=%d; geo=%s", m.HttpURL+path, i+1, countryCode)) } } // Finally issue the redirect http.Redirect(ctx.ResponseWriter(), ctx.Request(), results.MirrorList[0].HttpURL+path, http.StatusFound) return http.StatusFound, nil } // No mirror returned for this request http.NotFound(ctx.ResponseWriter(), ctx.Request()) return http.StatusNotFound, nil } // MirrorListRenderer is used to render the mirrorlist page using the HTML templates type MirrorListRenderer struct{} // Type returns the type of renderer func (w *MirrorListRenderer) Type() string { return "MIRRORLIST" } // Write is used to write the result to the ResponseWriter func (w *MirrorListRenderer) Write(ctx *Context, results *mirrors.Results) (statusCode int, err error) { if ctx.Templates().mirrorlist == nil { // No templates found for the mirrorlist return http.StatusInternalServerError, ErrTemplatesNotFound } // Sort the exclude reasons by message so they appear grouped sort.Sort(mirrors.ByExcludeReason{Mirrors: results.ExcludedList}) // Create a temporary output buffer to render the page var buf bytes.Buffer 
ctx.ResponseWriter().Header().Set("Content-Type", "text/html; charset=utf-8") // Render the page into the buffer err = ctx.Templates().mirrorlist.ExecuteTemplate(&buf, "base", results) if err != nil { // Something went wrong, discard the buffer return http.StatusInternalServerError, err } // Write the buffer to the socket buf.WriteTo(ctx.ResponseWriter()) return http.StatusOK, nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/http/selection.go000066400000000000000000000163531411706463700225050ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package http import ( "fmt" "math" "math/rand" "sort" "strings" "time" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/network" "github.com/etix/mirrorbits/utils" ) type mirrorSelection interface { // Selection must return an ordered list of selected mirror, // a list of rejected mirrors and and an error code. Selection(*Context, *mirrors.Cache, *filesystem.FileInfo, network.GeoIPRecord) (mirrors.Mirrors, mirrors.Mirrors, error) } // DefaultEngine is the default algorithm used for mirror selection type DefaultEngine struct{} // Selection returns an ordered list of selected mirror, a list of rejected mirrors and and an error code func (h DefaultEngine) Selection(ctx *Context, cache *mirrors.Cache, fileInfo *filesystem.FileInfo, clientInfo network.GeoIPRecord) (mlist mirrors.Mirrors, excluded mirrors.Mirrors, err error) { // Get details about the requested file *fileInfo, err = cache.GetFileInfo(fileInfo.Path) if err != nil { return } // Prepare and return the list of all potential mirrors mlist, err = cache.GetMirrors(fileInfo.Path, clientInfo) if err != nil { return } // Filter safeIndex := 0 excluded = make([]mirrors.Mirror, 0, len(mlist)) var closestMirror float32 var farthestMirror float32 for i, m := range mlist { // Does it support http? Is it well formated? if !strings.HasPrefix(m.HttpURL, "http://") && !strings.HasPrefix(m.HttpURL, "https://") { m.ExcludeReason = "Invalid URL" goto discard } // Is it enabled? if !m.Enabled { m.ExcludeReason = "Disabled" goto discard } // Is it up? if !m.Up { if m.ExcludeReason == "" { m.ExcludeReason = "Down" } goto discard } if ctx.SecureOption() == WITHTLS && !m.IsHTTPS() { m.ExcludeReason = "Not HTTPS" goto discard } if ctx.SecureOption() == WITHOUTTLS && m.IsHTTPS() { m.ExcludeReason = "Not HTTP" goto discard } // Is it the same size / modtime as source? if m.FileInfo != nil { if m.FileInfo.Size != fileInfo.Size { m.ExcludeReason = "File size mismatch" goto discard } if !m.FileInfo.ModTime.IsZero() { mModTime := m.FileInfo.ModTime if GetConfig().FixTimezoneOffsets { mModTime = mModTime.Add(time.Duration(m.TZOffset) * time.Millisecond) } mModTime = mModTime.Truncate(m.LastSuccessfulSyncPrecision.Duration()) lModTime := fileInfo.ModTime.Truncate(m.LastSuccessfulSyncPrecision.Duration()) if !mModTime.Equal(lModTime) { m.ExcludeReason = fmt.Sprintf("Mod time mismatch (diff: %s)", lModTime.Sub(mModTime)) goto discard } } } // Is it configured to serve its continent only? if m.ContinentOnly { if !clientInfo.IsValid() || clientInfo.ContinentCode != m.ContinentCode { m.ExcludeReason = "Continent only" goto discard } } // Is it configured to serve its country only? if m.CountryOnly { if !clientInfo.IsValid() || !utils.IsInSlice(clientInfo.CountryCode, m.CountryFields) { m.ExcludeReason = "Country only" goto discard } } // Is it in the same AS number? 
if m.ASOnly { if !clientInfo.IsValid() || clientInfo.ASNum != m.Asnum { m.ExcludeReason = "AS only" goto discard } } // Is the user's country code allowed on this mirror? if clientInfo.IsValid() && utils.IsInSlice(clientInfo.CountryCode, m.ExcludedCountryFields) { m.ExcludeReason = "User's country restriction" goto discard } if safeIndex == 0 { closestMirror = m.Distance } else if closestMirror > m.Distance { closestMirror = m.Distance } if m.Distance > farthestMirror { farthestMirror = m.Distance } mlist[safeIndex] = mlist[i] safeIndex++ continue discard: excluded = append(excluded, m) } // Reduce the slice to its new size mlist = mlist[:safeIndex] if !clientInfo.IsValid() { // Shuffle the list //XXX Should we use the fallbacks instead? for i := range mlist { j := rand.Intn(i + 1) mlist[i], mlist[j] = mlist[j], mlist[i] } // Shortcut if !ctx.IsMirrorlist() { // Reduce the number of mirrors to process mlist = mlist[:utils.Min(5, len(mlist))] } return } // We're not interested in divisions by zero if closestMirror == 0 { closestMirror = math.SmallestNonzeroFloat32 } /* Weight distribution for random selection [Probabilistic weight] */ // Compute score for each mirror and return the mirrors eligible for weight distribution. // This includes: // - mirrors found in a 1.5x (configurable) range from the closest mirror // - mirrors targeting the given country (as primary or secondary) // - mirrors being in the same AS number totalScore := 0 baseScore := int(farthestMirror) weights := map[int]int{} for i := 0; i < len(mlist); i++ { m := &mlist[i] m.ComputedScore = baseScore - int(m.Distance) + 1 if m.Distance <= closestMirror*GetConfig().WeightDistributionRange { score := (float32(baseScore) - m.Distance) if !utils.IsPrimaryCountry(clientInfo, m.CountryFields) { score /= 2 } m.ComputedScore += int(score) } else if utils.IsPrimaryCountry(clientInfo, m.CountryFields) { m.ComputedScore += int(float32(baseScore) - (m.Distance * 5)) } else if utils.IsAdditionalCountry(clientInfo, m.CountryFields) { m.ComputedScore += int(float32(baseScore) - closestMirror) } if m.Asnum == clientInfo.ASNum { m.ComputedScore += baseScore / 2 } floatingScore := float64(m.ComputedScore) + (float64(m.ComputedScore) * (float64(m.Score) / 100)) + 0.5 // The minimum allowed score is 1 m.ComputedScore = int(math.Max(floatingScore, 1)) if m.ComputedScore > baseScore { // The weight must always be > 0 to not break the randomization below totalScore += m.ComputedScore - baseScore weights[m.ID] = m.ComputedScore - baseScore } } // Get the final number of mirrors selected for weight distribution selected := len(weights) // Sort mirrors by computed score sort.Sort(mirrors.ByComputedScore{Mirrors: mlist}) if selected > 1 { if ctx.IsMirrorlist() { // Don't reorder the results, just set the percentage for i := 0; i < selected; i++ { id := mlist[i].ID for j := 0; j < len(mlist); j++ { if mlist[j].ID == id { mlist[j].Weight = float32(float64(weights[id]) * 100 / float64(totalScore)) break } } } } else { // Randomize the order of the selected mirrors considering their weights weightedMirrors := make([]mirrors.Mirror, selected) rest := totalScore for i := 0; i < selected; i++ { var id int rv := rand.Int31n(int32(rest)) s := 0 for k, v := range weights { s += v if int32(s) > rv { id = k break } } for _, m := range mlist { if m.ID == id { m.Weight = float32(float64(weights[id]) * 100 / float64(totalScore)) weightedMirrors[i] = m break } } rest -= weights[id] delete(weights, id) } // Replace the head of the list by its reordered counterpart 
mlist = append(weightedMirrors, mlist[selected:]...) // Reduce the number of mirrors to return v := math.Min(math.Min(5, float64(selected)), float64(len(mlist))) mlist = mlist[:int(v)] } } else if selected == 1 && len(mlist) > 0 { mlist[0].Weight = 100 } return } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/http/stats.go000066400000000000000000000101121411706463700216410ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package http import ( "errors" "fmt" "strconv" "strings" "sync" "time" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/mirrors" ) /* Total (all files, all mirrors): STATS_TOTAL List of hashes for a file: STATS_FILE = path -> value All time STATS_FILE_[year] = path -> value By year STATS_FILE_[year]_[month] = path -> value By month STATS_FILE_[year]_[month]_[day] = path -> value By day List of hashes for a mirror: STATS_MIRROR = mirror -> value All time STATS_MIRROR_[year] = mirror -> value By year STATS_MIRROR_[year]_[month] = mirror -> value By month STATS_MIRROR_[year]_[month]_[day] = mirror -> value By day */ var ( errEmptyFileError = errors.New("stats: file parameter is empty") errUnknownMirror = errors.New("stats: unknown mirror") ) // Stats is the internal structure for the download stats type Stats struct { r *database.Redis countChan chan countItem mapStats map[string]int64 stop chan bool wg sync.WaitGroup downgraded bool } type countItem struct { mirrorID int filepath string size int64 time time.Time } // NewStats returns an instance of the stats counter func NewStats(redis *database.Redis) *Stats { s := &Stats{ r: redis, countChan: make(chan countItem, 1000), mapStats: make(map[string]int64), stop: make(chan bool), } go s.processCountDownload() return s } // Terminate stops the stats handler and commit results to the database func (s *Stats) Terminate() { close(s.stop) log.Notice("Saving stats") s.wg.Wait() } // CountDownload is a lightweight method used to count a new download for a specific file and mirror func (s *Stats) CountDownload(m mirrors.Mirror, fileinfo filesystem.FileInfo) error { if m.Name == "" { return errUnknownMirror } if fileinfo.Path == "" { return errEmptyFileError } s.countChan <- countItem{m.ID, fileinfo.Path, fileinfo.Size, time.Now().UTC()} return nil } // Process all stacked download messages func (s *Stats) processCountDownload() { s.wg.Add(1) pushTicker := time.NewTicker(500 * time.Millisecond) for { select { case <-s.stop: s.pushStats() s.wg.Done() return case c := <-s.countChan: date := c.time.Format("2006_01_02|") // Includes separator s.mapStats["f"+date+c.filepath]++ s.mapStats["m"+date+strconv.Itoa(c.mirrorID)]++ s.mapStats["s"+date+strconv.Itoa(c.mirrorID)] += c.size case <-pushTicker.C: s.pushStats() } } } // Push the resulting stats on redis func (s *Stats) pushStats() { if len(s.mapStats) <= 0 { return } rconn := s.r.Get() defer rconn.Close() if rconn.Err() != nil { if s.downgraded == false { log.Warningf("Uncommited stats kept in-memory: %v", rconn.Err()) } s.downgraded = true return } rconn.Send("MULTI") for k, v := range s.mapStats { if v == 0 { continue } separator := strings.Index(k, "|") if separator <= 0 { log.Critical("Stats: separator not found") continue } typ := k[:1] date := k[1:separator] object := k[separator+1:] if typ == "f" { // File fkey := fmt.Sprintf("STATS_FILE_%s", date) for i := 0; i < 4; i++ { rconn.Send("HINCRBY", fkey, object, v) fkey = fkey[:strings.LastIndex(fkey, "_")] } // Increase the total 
too rconn.Send("INCRBY", "STATS_TOTAL", v) } else if typ == "m" { // Mirror mkey := fmt.Sprintf("STATS_MIRROR_%s", date) for i := 0; i < 4; i++ { rconn.Send("HINCRBY", mkey, object, v) mkey = mkey[:strings.LastIndex(mkey, "_")] } } else if typ == "s" { // Bytes mkey := fmt.Sprintf("STATS_MIRROR_BYTES_%s", date) for i := 0; i < 4; i++ { rconn.Send("HINCRBY", mkey, object, v) mkey = mkey[:strings.LastIndex(mkey, "_")] } } else { log.Warning("Stats: unknown type", typ) } } _, err := rconn.Do("EXEC") if err != nil { log.Errorf("Stats: could not save stats to redis: %s", err.Error()) return } s.downgraded = false // Clear the map s.mapStats = make(map[string]int64) } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/logs/000077500000000000000000000000001411706463700201465ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/logs/logs.go000066400000000000000000000111211411706463700214350ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package logs import ( "bytes" "fmt" "io" stdlog "log" "os" "runtime" "strconv" "strings" "sync" "time" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/mirrors" "github.com/op/go-logging" ) var ( log = logging.MustGetLogger("main") rlogger runtimeLogger dlogger downloadsLogger ) type runtimeLogger struct { f *os.File } type downloadsLogger struct { sync.RWMutex l *stdlog.Logger f io.WriteCloser } func (d *downloadsLogger) Close() { if d.f != nil { d.f.Close() d.f = nil } d.l = nil } // ReloadLogs will reopen the logs to allow rotations func ReloadLogs() { ReloadRuntimeLogs() if core.Daemon { ReloadDownloadLogs() } } func isTerminal(f *os.File) bool { stat, _ := f.Stat() if (stat.Mode() & os.ModeCharDevice) != 0 { return true } return false } // ReloadRuntimeLogs reopens the runtime logs for writing func ReloadRuntimeLogs() { if rlogger.f == os.Stderr && core.RunLog == "" { // Logger already set up and connected to the console. // Don't reload to avoid breaking journald. 
return } if rlogger.f != nil && rlogger.f != os.Stderr { rlogger.f.Close() } if core.RunLog != "" { var err error rlogger.f, _, err = openLogFile(core.RunLog) if err != nil { fmt.Fprintln(os.Stderr, "Cannot open log file for writing") rlogger.f = os.Stderr } } else { rlogger.f = os.Stderr } logBackend := logging.NewLogBackend(rlogger.f, "", 0) logBackend.Color = isTerminal(rlogger.f) //TODO make color optional logging.SetBackend(logBackend) if core.Debug { logging.SetFormatter(logging.MustStringFormatter("%{shortfile:-20s}%{time:2006/01/02 15:04:05.000 MST} %{message}")) logging.SetLevel(logging.DEBUG, "main") } else { logging.SetFormatter(logging.MustStringFormatter("%{time:2006/01/02 15:04:05.000 MST} %{message}")) logging.SetLevel(logging.INFO, "main") } } func openLogFile(logfile string) (*os.File, bool, error) { newfile := true s, _ := os.Stat(logfile) if s != nil && s.Size() > 0 { newfile = false } f, err := os.OpenFile(logfile, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0664) if err != nil { return nil, false, err } return f, newfile, nil } func setDownloadLogWriter(writer io.Writer, createHeader bool) { dlogger.l = stdlog.New(writer, "", stdlog.Ldate|stdlog.Lmicroseconds) if createHeader { var buf bytes.Buffer hostname, _ := os.Hostname() fmt.Fprintf(&buf, "# Log file created at: %s\n", time.Now().Format("2006/01/02 15:04:05")) fmt.Fprintf(&buf, "# Running on machine: %s\n", hostname) fmt.Fprintf(&buf, "# Binary: Built with %s %s for %s/%s\n", runtime.Compiler, runtime.Version(), runtime.GOOS, runtime.GOARCH) writer.Write(buf.Bytes()) } } // ReloadDownloadLogs reopens the download logs for writing func ReloadDownloadLogs() { dlogger.Lock() defer dlogger.Unlock() dlogger.Close() if GetConfig().LogDir == "" { return } logfile := GetConfig().LogDir + "/downloads.log" f, createHeader, err := openLogFile(logfile) if err != nil { log.Criticalf("Cannot open log file %s", logfile) return } setDownloadLogWriter(f, createHeader) } // LogDownload writes a download result to the logs func LogDownload(typ string, statuscode int, p *mirrors.Results, err error) { dlogger.RLock() defer dlogger.RUnlock() if dlogger.l == nil { // Logs are disabled return } errstr := "" if err != nil { errstr = err.Error() } if (statuscode == 302 || statuscode == 200) && p != nil && len(p.MirrorList) > 0 { var distance, countries string m := p.MirrorList[0] distance = strconv.FormatFloat(float64(m.Distance), 'f', 2, 32) countries = strings.Join(m.CountryFields, ",") fallback := "" if p.Fallback == true { fallback = " fallback:true" } sameASNum := "" if m.Asnum > 0 && m.Asnum == p.ClientInfo.ASNum { sameASNum = "same" } dlogger.l.Printf("%s %d \"%s\" ip:%s mirror:%s%s %sasn:%d distance:%skm countries:%s", typ, statuscode, p.FileInfo.Path, p.IP, m.Name, fallback, sameASNum, m.Asnum, distance, countries) } else if statuscode == 404 && p != nil { dlogger.l.Printf("%s 404 \"%s\" ip:%s", typ, p.FileInfo.Path, p.IP) } else if statuscode == 500 && p != nil { mirrorName := "unknown" if len(p.MirrorList) > 0 { mirrorName = p.MirrorList[0].Name } dlogger.l.Printf("%s 500 \"%s\" ip:%s mirror:%s error:%s", typ, p.FileInfo.Path, p.IP, mirrorName, errstr) } else { var path, ip string if p != nil { path = p.FileInfo.Path ip = p.IP } dlogger.l.Printf("%s %d \"%s\" ip:%s error:%s", typ, statuscode, path, ip, errstr) } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/logs/logs_test.go000066400000000000000000000161451411706463700225070ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package 
logs import ( "bytes" "errors" "io/ioutil" "os" "reflect" "strings" "testing" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/network" "github.com/op/go-logging" ) type CloseTester struct { closed bool } func (c *CloseTester) Write(p []byte) (n int, err error) { return 0, err } func (c *CloseTester) Close() error { c.closed = true return nil } func TestDownloadsLogger_Close(t *testing.T) { f := &CloseTester{} dlogger.f = f if f.closed == true { t.Fatalf("Precondition failed") } dlogger.Close() if f.closed == false { t.Fatalf("Should be closed") } if dlogger.l != nil || dlogger.f != nil { t.Fatalf("Should be nil") } } func TestIsTerminal(t *testing.T) { stat, _ := os.Stdout.Stat() if (stat.Mode() & os.ModeCharDevice) == 0 { t.Skip("Cannot test without a valid terminal") } if !isTerminal(os.Stdout) { t.Fatalf("The current terminal is supposed to support colors") } f, err := ioutil.TempFile("", "mirrorbits-tests") if err != nil { t.Errorf("Unable to create a temporary file: %s", err.Error()) return } defer os.Remove(f.Name()) if isTerminal(f) { t.Fatalf("The given file cannot be a terminal") } } func TestReloadRuntimeLogs(t *testing.T) { rlogger.f = nil ReloadRuntimeLogs() if rlogger.f == nil { t.Fatalf("The logger output must be setup") } if rlogger.f != os.Stderr { t.Fatalf("The logger output is expected to be Stderr") } if logging.GetLevel("main") != logging.INFO { t.Fatalf("Log level is supposed to be INFO by default") } ptr := reflect.ValueOf(rlogger.f).Pointer() ReloadRuntimeLogs() if reflect.ValueOf(rlogger.f).Pointer() != ptr { t.Fatalf("The logger must not be reloaded when writing on Stderr") } /* */ core.RunLog = "/" ReloadRuntimeLogs() if rlogger.f != os.Stderr { t.Fatalf("Opening an invalid file must fallback to Stderr") } /* */ f, err := ioutil.TempFile("", "mirrorbits-tests") if err != nil { t.Errorf("Unable to create a temporary file: %s", err.Error()) return } defer os.Remove(f.Name()) core.RunLog = f.Name() core.Debug = true ReloadRuntimeLogs() if logging.GetLevel("main") != logging.DEBUG { t.Fatalf("Log level is supposed to be DEBUG") } if rlogger.f == os.Stderr { t.Fatalf("The output is expected to be a file, not Stderr") } /* */ testString := "Testing42" log.Error(testString) buf, _ := ioutil.ReadAll(f) if !strings.Contains(string(buf), testString) { t.Fatalf("The log doesn't contain the string %s", testString) } /* */ core.RunLog = "" ReloadRuntimeLogs() if rlogger.f != os.Stderr { t.Fatalf("The output is expected to be Stderr") } } func TestOpenLogFile(t *testing.T) { path, err := ioutil.TempDir("", "mirrorbits-tests") if err != nil { t.Errorf("Unable to create temporary directory: %s", err.Error()) return } defer os.RemoveAll(path) f, newfile, err := openLogFile(path + "/test1.log") if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if newfile == false { t.Fatalf("Expected new file") } content := []byte("It works!") n, err := f.Write(content) if err != nil { t.Fatalf("Unexpected write error: %s", err.Error()) } if n != len(content) { t.Fatalf("Invalid number of bytes written") } f.Close() /* Reopen file to check newfile */ f, newfile, err = openLogFile(path + "/test1.log") if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if newfile == true { t.Fatalf("Expected newfile to be false") } f.Close() /* Open invalid file */ f, _, err = openLogFile("") if err == nil { t.Fatalf("Error expected while opening invalid file") } f.Close() } func 
TestSetDownloadLogWriter(t *testing.T) { if dlogger.l != nil || dlogger.f != nil { t.Fatalf("Precondition failed") } var buf bytes.Buffer setDownloadLogWriter(&buf, true) if dlogger.l == nil { t.Fatalf("Logger not created") } if buf.Len() == 0 { t.Fatalf("Buffer empty, expected header") } if !strings.HasPrefix(buf.String(), "#") { t.Fatalf("Header doesn't starts with '#'") } buf.Reset() /* */ setDownloadLogWriter(&buf, false) if buf.Len() != 0 { t.Fatalf("Expected no content") } } func TestReloadDownloadLogs(t *testing.T) { // Not implemented because of GetConfig() // TODO need abstraction for GetConfig() } //type xResults struct { // FileInfo filesystem.FileInfo // MapURL string `json:"-"` // IP string // ClientInfo network.GeoIPRecord // MirrorList Mirrors // ExcludedList Mirrors `json:",omitempty"` // Fallback bool `json:",omitempty"` //} func TestLogDownload(t *testing.T) { var buf bytes.Buffer dlogger.Close() // The next line isn't supposed to crash. LogDownload("", 500, nil, nil) setDownloadLogWriter(&buf, true) buf.Reset() // The next few lines arent't supposed to crash. LogDownload("", 200, nil, nil) LogDownload("", 302, nil, nil) LogDownload("", 404, nil, nil) LogDownload("", 500, nil, nil) LogDownload("", 501, nil, nil) if c := strings.Count(buf.String(), "\n"); c != 5 { t.Fatalf("Invalid number of lines, got %d, expected 5", c) } buf.Reset() /* */ p := &mirrors.Results{ FileInfo: filesystem.FileInfo{ Path: "/test/file.tgz", }, MirrorList: mirrors.Mirrors{ mirrors.Mirror{ ID: 1, Name: "m1", Asnum: 444, Distance: 99, CountryFields: []string{"FR", "UK", "DE"}, }, mirrors.Mirror{ ID: 2, Name: "m2", }, }, IP: "192.168.0.1", ClientInfo: network.GeoIPRecord{ ASNum: 444, }, Fallback: true, } LogDownload("JSON", 200, p, nil) expected := "JSON 200 \"/test/file.tgz\" ip:192.168.0.1 mirror:m1 fallback:true sameasn:444 distance:99.00km countries:FR,UK,DE\n" if !strings.HasSuffix(buf.String(), expected) { t.Fatalf("Invalid log line:\nGot:\n%#vs\nExpected:\n%#v", buf.String(), expected) } buf.Reset() /* */ p = &mirrors.Results{ FileInfo: filesystem.FileInfo{ Path: "/test/file.tgz", }, IP: "192.168.0.1", } LogDownload("JSON", 404, p, nil) expected = "JSON 404 \"/test/file.tgz\" ip:192.168.0.1\n" if !strings.HasSuffix(buf.String(), expected) { t.Fatalf("Invalid log line:\nGot:\n%#vs\nExpected:\n%#v", buf.String(), expected) } buf.Reset() /* */ p = &mirrors.Results{ MirrorList: mirrors.Mirrors{ mirrors.Mirror{ ID: 1, Name: "m1", }, mirrors.Mirror{ ID: 2, Name: "m2", }, }, } LogDownload("JSON", 500, p, errors.New("test error")) expected = "JSON 500 \"\" ip: mirror:m1 error:test error\n" if !strings.HasSuffix(buf.String(), expected) { t.Fatalf("Invalid log line:\nGot:\n%#vs\nExpected:\n%#v", buf.String(), expected) } buf.Reset() /* */ p = &mirrors.Results{ FileInfo: filesystem.FileInfo{ Path: "/test/file.tgz", }, IP: "192.168.0.1", } LogDownload("JSON", 501, p, errors.New("test error")) expected = "JSON 501 \"/test/file.tgz\" ip:192.168.0.1 error:test error\n" if !strings.HasSuffix(buf.String(), expected) { t.Fatalf("Invalid log line:\nGot:\n%#vs\nExpected:\n%#v", buf.String(), expected) } buf.Reset() } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/main.go000066400000000000000000000075711411706463700204670ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package main import ( "fmt" "os" "os/signal" "runtime/pprof" "strings" "syscall" "time" "github.com/etix/mirrorbits/cli" . 
"github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/daemon" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/http" "github.com/etix/mirrorbits/logs" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/process" "github.com/etix/mirrorbits/rpc" "github.com/op/go-logging" "github.com/pkg/errors" ) var ( log = logging.MustGetLogger("main") ) func main() { core.Parseflags() if core.CpuProfile != "" { f, err := os.Create(core.CpuProfile) if err != nil { log.Fatal(err) } defer f.Close() pprof.StartCPUProfile(f) defer pprof.StopCPUProfile() } if core.Daemon { LoadConfig() logs.ReloadLogs() process.WritePidFile() // Show our nice welcome logo fmt.Printf(core.Banner+"\n\n", core.VERSION) /* Setup RPC */ rpcs := new(rpc.CLI) if err := rpcs.Start(); err != nil { log.Fatal(errors.Wrap(err, "rpc error")) } /* Connect to the database */ r := database.NewRedis() r.ConnectPubsub() rpcs.SetDatabase(r) c := mirrors.NewCache(r) rpcs.SetCache(c) h := http.HTTPServer(r, c) /* Start the background monitor */ m := daemon.NewMonitor(r, c) if core.Monitor { go m.MonitorLoop() } /* Handle SIGNALS */ k := make(chan os.Signal, 1) rpcs.SetSignals(k) signal.Notify(k, syscall.SIGINT, // Terminate syscall.SIGTERM, // Terminate syscall.SIGQUIT, // Stop gracefully syscall.SIGHUP, // Reload config syscall.SIGUSR1, // Reopen log files syscall.SIGUSR2, // Seamless binary upgrade ) go func() { for { sig := <-k switch sig { case syscall.SIGINT: fallthrough case syscall.SIGTERM: process.RemovePidFile() os.Exit(0) case syscall.SIGQUIT: m.Stop() rpcs.Close() if h.Listener != nil { log.Notice("Waiting for running tasks to finish...") h.Stop(5 * time.Second) } else { process.RemovePidFile() os.Exit(0) } case syscall.SIGHUP: listenAddress := GetConfig().ListenAddress if err := ReloadConfig(); err != nil { log.Warningf("SIGHUP Received: %s\n", err) } else { log.Notice("SIGHUP Received: Reloading configuration...") } if GetConfig().ListenAddress != listenAddress { h.Restarting = true h.Stop(1 * time.Second) } h.Reload() case syscall.SIGUSR1: log.Notice("SIGUSR1 Received: Re-opening logs...") logs.ReloadLogs() case syscall.SIGUSR2: log.Notice("SIGUSR2 Received: Seamless binary upgrade...") rpcs.Close() err := process.Relaunch(*h.Listener) if err != nil { log.Errorf("Relaunch failed: %s\n", err) } } } }() // Recover an existing listener (see process.go) if l, ppid, err := process.Recover(); err == nil { h.SetListener(l) go func() { time.Sleep(100 * time.Millisecond) process.KillParent(ppid) }() } /* Finally start the HTTP server */ var err error for { err = h.RunServer() if h.Restarting { h.Restarting = false continue } // This check is ugly but there's still no way to detect this error by type if err != nil && strings.Contains(err.Error(), "use of closed network connection") { // This error is expected during a graceful shutdown err = nil } break } log.Debug("Waiting for monitor termination") m.Wait() log.Debug("Terminating server") h.Terminate() r.Close() process.RemovePidFile() if err != nil { log.Fatal(err) } else { log.Notice("Server stopped gracefully.") } } else { args := os.Args[len(os.Args)-core.NArg:] if err := cli.ParseCommands(args...); err != nil { fmt.Fprintf(os.Stderr, "%s\n", err) os.Exit(1) } } os.Exit(0) } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrorbits.conf000066400000000000000000000074351411706463700222560ustar00rootroot00000000000000# vim: set ft=yaml: ################### ##### GENERAL ##### ################### ## Path to the local 
repository # Repository: /srv/repo ## Path to the templates (default autodetect) # Templates: /usr/share/mirrorbits/ ## A local path or URL containing the JavaScript used by the templates. ## If this is not set (the default), the JavaScript will just be loaded ## from the usual CDNs. See also `contrib/localjs/fetchfiles.sh`. # LocalJSPath: ## Path where to store logs (comment to disable) # LogDir: /var/log/mirrorbits ## Path to the GeoIP2 mmdb databases # GeoipDatabasePath: /usr/share/GeoIP/ ## OutputMode can take on the three values: ## - redirect: HTTP redirect to the destination file on the selected mirror ## - json: return a json document for pre-treatment by an application ## - auto: based on the Accept HTTP header # OutputMode: auto ## Enable Gzip compression # Gzip: false ## Host an port to listen on # ListenAddress: :8080 ## Host and port to listen for the CLI RPC # RPCListenAddress: localhost:3390 ## Password for restricting access to the CLI (optional) # RPCPassword: #################### ##### DATABASE ##### #################### ## Redis host and port # RedisAddress: 10.0.0.1:6379 ## Redis password (if any) # RedisPassword: supersecure ## Redis database ID (if any) # RedisDB: 0 ## Redis sentinel name (only if using sentinel) # RedisSentinelMasterName: mirrorbits ## List of Redis sentinel hosts (only if using sentinel) # RedisSentinels: # - Host: 10.0.0.1:26379 # - Host: 10.0.0.2:26379 # - Host: 10.0.0.3:26379 ################### ##### MIRRORS ##### ################### ## Relative path to the trace file within the repository (optional). ## The file must contain the number of seconds since epoch and should ## be updated every minute (or so) with a cron on the master repository. # TraceFileLocation: /trace ## Interval between two scans of the local repository. ## The repository scan will index new and removed files and collect file ## sizes and checksums. ## This should, more or less, match the frequency where the local repo ## is updated. # RepositoryScanInterval: 5 ## Enable or disable specific hashing algorithms # Hashes: # SHA256: On # SHA1: Off # MD5: Off ################### ##### MIRRORS ##### ################### ## Maximum number of concurrent mirror synchronization to do (rsync/ftp) # ConcurrentSync: 5 ## Interval in minutes between mirror scan # ScanInterval: 30 ## Interval in minutes between mirrors HTTP health checks # CheckInterval: 1 ## Allow a mirror to issue an HTTP redirect. ## Setting this to true will disable the mirror if a redirect is detected. # DisallowRedirects: false ## Disable a mirror if an active file is missing (HTTP 404) # DisableOnMissingFile: false ## Adjust the weight/range of the geographic distribution # WeightDistributionRange: 1.5 ## Maximum number of alternative links to return in the HTTP header # MaxLinkHeaders: 10 ## Automatically fix timezone offsets. ## Enable this if one or more mirrors are always excluded because their ## last-modification-time mismatch. This option will try to guess the ## offset and adjust the mod time accordingly. ## Affected mirrors will need to be rescanned after enabling this feature. # FixTimezoneOffsets: false ## List of mirrors to use as fallback which will be used in case mirrorbits ## is unable to answer a request because the database is unreachable. ## Note: Mirrorbits will redirect to one of these mirrors based on the user ## location but won't be able to know if the mirror has the requested file. ## Therefore only put your most reliable and up-to-date mirrors here. 
# Fallbacks: # - URL: http://fallback1.mirror/repo/ # CountryCode: fr # ContinentCode: eu # - URL: http://fallback2.mirror/repo/ # CountryCode: us # ContinentCode: na mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrors/000077500000000000000000000000001411706463700206775ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrors/cache.go000066400000000000000000000171201411706463700222720ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package mirrors import ( "fmt" "strconv" "strings" "time" "unsafe" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/network" "github.com/etix/mirrorbits/utils" "github.com/gomodule/redigo/redis" ) // Cache implements a local caching mechanism of type LRU for content available in the // redis database that is automatically invalidated if the object is updated in Redis. type Cache struct { r *database.Redis fiCache *LRUCache fmCache *LRUCache mCache *LRUCache fimCache *LRUCache mirrorUpdateEvent chan string fileUpdateEvent chan string mirrorFileUpdateEvent chan string pubsubReconnectedEvent chan string invalidationEvent chan string } type fileInfoValue struct { value filesystem.FileInfo } func (f *fileInfoValue) Size() int { return int(unsafe.Sizeof(f.value)) } type fileMirrorValue struct { value []int } func (f *fileMirrorValue) Size() int { return cap(f.value) } type mirrorValue struct { value Mirror } func (f *mirrorValue) Size() int { return int(unsafe.Sizeof(f.value)) } // NewCache constructs a new instance of Cache func NewCache(r *database.Redis) *Cache { if r == nil || r.Pubsub == nil { return nil } c := &Cache{ r: r, } // Create the LRU c.fiCache = NewLRUCache(1024000) c.fmCache = NewLRUCache(2048000) c.mCache = NewLRUCache(1024000) c.fimCache = NewLRUCache(4096000) // Create event channels c.mirrorUpdateEvent = make(chan string, 10) c.fileUpdateEvent = make(chan string, 10) c.mirrorFileUpdateEvent = make(chan string, 10) c.pubsubReconnectedEvent = make(chan string) c.invalidationEvent = make(chan string, 10) // Subscribe to events c.r.Pubsub.SubscribeEvent(database.MIRROR_UPDATE, c.mirrorUpdateEvent) c.r.Pubsub.SubscribeEvent(database.FILE_UPDATE, c.fileUpdateEvent) c.r.Pubsub.SubscribeEvent(database.MIRROR_FILE_UPDATE, c.mirrorFileUpdateEvent) c.r.Pubsub.SubscribeEvent(database.PUBSUB_RECONNECTED, c.pubsubReconnectedEvent) go func() { for { //FIXME add a close channel select { case data := <-c.mirrorUpdateEvent: c.mCache.Delete(data) select { case c.invalidationEvent <- data: default: // Non-blocking } case data := <-c.fileUpdateEvent: c.fiCache.Delete(data) case data := <-c.mirrorFileUpdateEvent: s := strings.SplitN(data, " ", 2) c.fmCache.Delete(s[1]) c.fimCache.Delete(fmt.Sprintf("%s|%s", s[0], s[1])) case <-c.pubsubReconnectedEvent: c.Clear() } } }() return c } // Clear clears the local cache func (c *Cache) Clear() { c.fiCache.Clear() c.fmCache.Clear() c.mCache.Clear() c.fimCache.Clear() } // GetMirrorInvalidationEvent returns a channel that contains ID of mirrors // that have just been invalidated. This function is supposed to have only // ONE reader and is made to avoid a race for MIRROR_UPDATE events between // a mirror invalidation and a mirror being fetched from the cache. func (c *Cache) GetMirrorInvalidationEvent() <-chan string { return c.invalidationEvent } // GetFileInfo returns file information for a given file either from the cache // or directly from the database if the object is not yet stored in the cache. 
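// Results fetched from the database are stored in the local LRU cache, keyed by the file path.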
func (c *Cache) GetFileInfo(path string) (f filesystem.FileInfo, err error) { v, ok := c.fiCache.Get(path) if ok { f = v.(*fileInfoValue).value } else { f, err = c.fetchFileInfo(path) } return } func (c *Cache) fetchFileInfo(path string) (f filesystem.FileInfo, err error) { rconn := c.r.Get() defer rconn.Close() f.Path = path // Path is not stored in the object instance in redis reply, err := redis.Strings(rconn.Do("HMGET", fmt.Sprintf("FILE_%s", path), "size", "modTime", "sha1", "sha256", "md5")) if err != nil { return } f.Size, _ = strconv.ParseInt(reply[0], 10, 64) f.ModTime, _ = time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", reply[1]) f.Sha1 = reply[2] f.Sha256 = reply[3] f.Md5 = reply[4] c.fiCache.Set(path, &fileInfoValue{value: f}) return } // GetMirrors returns all the mirrors serving a given file either from the cache // or directly from the database if the object is not yet stored in the cache. func (c *Cache) GetMirrors(path string, clientInfo network.GeoIPRecord) (mirrors []Mirror, err error) { var mirrorsIDs []int v, ok := c.fmCache.Get(path) if ok { mirrorsIDs = v.(*fileMirrorValue).value } else { mirrorsIDs, err = c.fetchFileMirrors(path) if err != nil { return } } mirrors = make([]Mirror, 0, len(mirrorsIDs)) for _, id := range mirrorsIDs { var mirror Mirror var fileInfo filesystem.FileInfo v, ok := c.mCache.Get(strconv.Itoa(id)) if ok { mirror = v.(*mirrorValue).value } else { //TODO execute missing items in a MULTI query mirror, err = c.fetchMirror(id) if err != nil { return } } v, ok = c.fimCache.Get(fmt.Sprintf("%d|%s", id, path)) if ok { fileInfo = v.(*fileInfoValue).value } else { fileInfo, err = c.fetchFileInfoMirror(id, path) if err != nil { return } } if fileInfo.Size >= 0 { mirror.FileInfo = &fileInfo } // Add the path in the results so we can access it from the templates mirror.FileInfo.Path = path if clientInfo.IsValid() { mirror.Distance = utils.GetDistanceKm(clientInfo.Latitude, clientInfo.Longitude, mirror.Latitude, mirror.Longitude) } else { mirror.Distance = 0 } mirrors = append(mirrors, mirror) } return } func (c *Cache) fetchFileMirrors(path string) (ids []int, err error) { rconn := c.r.Get() defer rconn.Close() ids, err = redis.Ints(rconn.Do("SMEMBERS", fmt.Sprintf("FILEMIRRORS_%s", path))) if err != nil { return } c.fmCache.Set(path, &fileMirrorValue{value: ids}) return } func (c *Cache) fetchMirror(mirrorID int) (mirror Mirror, err error) { rconn := c.r.Get() defer rconn.Close() reply, err := redis.Values(rconn.Do("HGETALL", fmt.Sprintf("MIRROR_%d", mirrorID))) if err != nil { return } if len(reply) == 0 { err = redis.ErrNil return } err = redis.ScanStruct(reply, &mirror) if err != nil { return } mirror.Prepare() c.mCache.Set(strconv.Itoa(mirrorID), &mirrorValue{value: mirror}) return } func (c *Cache) GetFileInfoMirror(mirrorID int, path string) (f filesystem.FileInfo, err error) { var fileInfo filesystem.FileInfo v, ok := c.fimCache.Get(fmt.Sprintf("%d|%s", mirrorID, path)) if ok { fileInfo = v.(*fileInfoValue).value } else { fileInfo, err = c.fetchFileInfoMirror(mirrorID, path) if err != nil { return } } return fileInfo, nil } func (c *Cache) fetchFileInfoMirror(id int, path string) (f filesystem.FileInfo, err error) { rconn := c.r.Get() defer rconn.Close() f.Path = path // Path is not stored in the object instance in redis reply, err := redis.Strings(rconn.Do("HMGET", fmt.Sprintf("FILEINFO_%d_%s", id, path), "size", "modTime", "sha1", "sha256", "md5")) if err != nil { return } // Note: as of today, only the size is stored by the scanners // 
all other fields are left blank. f.Size, _ = strconv.ParseInt(reply[0], 10, 64) f.ModTime, _ = time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", reply[1]) f.Sha1 = reply[2] f.Sha256 = reply[3] f.Md5 = reply[4] c.fimCache.Set(fmt.Sprintf("%d|%s", id, path), &fileInfoValue{value: f}) return } // GetMirror returns all information about a given mirror either from the cache // or directly from the database if the object is not yet stored in the cache. func (c *Cache) GetMirror(id int) (mirror Mirror, err error) { v, ok := c.mCache.Get(strconv.Itoa(id)) if ok { mirror = v.(*mirrorValue).value } else { mirror, err = c.fetchMirror(id) if err != nil { return } } return } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrors/cache_test.go000066400000000000000000000277541411706463700233470ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package mirrors import ( "fmt" "reflect" "strconv" "testing" "time" "unsafe" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/network" . "github.com/etix/mirrorbits/testing" "github.com/gomodule/redigo/redis" _ "github.com/rafaeljusto/redigomock" ) func TestNewCache(t *testing.T) { _, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(nil) if c != nil { t.Fatalf("Expected invalid instance") } c = NewCache(conn) if c == nil { t.Fatalf("No valid instance returned") } } type TestValue struct { value string } func (f *TestValue) Size() int { return int(unsafe.Sizeof(f.value)) } func TestCache_Clear(t *testing.T) { _, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(conn) c.fiCache.Set("test", &TestValue{"42"}) c.fmCache.Set("test", &TestValue{"42"}) c.mCache.Set("test", &TestValue{"42"}) c.fimCache.Set("test", &TestValue{"42"}) c.Clear() if _, ok := c.fiCache.Get("test"); ok { t.Fatalf("Value shouldn't be present") } if _, ok := c.fmCache.Get("test"); ok { t.Fatalf("Value shouldn't be present") } if _, ok := c.mCache.Get("test"); ok { t.Fatalf("Value shouldn't be present") } if _, ok := c.fimCache.Get("test"); ok { t.Fatalf("Value shouldn't be present") } } func TestCache_fetchFileInfo(t *testing.T) { mock, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(conn) testfile := filesystem.FileInfo{ Path: "/test/file.tgz", Size: 43000, ModTime: time.Now(), Sha1: "3ce963aea2d6f23fe915063f8bba21888db0ddfa", Sha256: "1c8e38c7e03e4d117eba4f82afaf6631a9b79f4c1e9dec144d4faf1d109aacda", Md5: "2c98ec39f49da6ddd9cfa7b1d7342afe", } f, err := c.fetchFileInfo(testfile.Path) if err == nil { t.Fatalf("Error expected, mock command not yet registered") } cmdGetFileinfo := mock.Command("HMGET", "FILE_"+testfile.Path, "size", "modTime", "sha1", "sha256", "md5").Expect([]interface{}{ []byte(strconv.FormatInt(testfile.Size, 10)), []byte(testfile.ModTime.Format("2006-01-02 15:04:05.999999999 -0700 MST")), []byte(testfile.Sha1), []byte(testfile.Sha256), []byte(testfile.Md5), }) f, err = c.fetchFileInfo(testfile.Path) if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if mock.Stats(cmdGetFileinfo) < 1 { t.Fatalf("HMGET not executed") } if f.Path != testfile.Path { t.Fatalf("Path doesn't match, expected %#v got %#v", testfile.Path, f.Path) } if f.Size != testfile.Size { t.Fatalf("Size doesn't match, expected %#v got %#v", testfile.Size, f.Size) } if !f.ModTime.Equal(testfile.ModTime) { t.Fatalf("ModTime doesn't match, expected %s got %s", testfile.ModTime.String(), f.ModTime.String()) } if f.Sha1 != testfile.Sha1 { t.Fatalf("Sha1 doesn't match, expected %#v got %#v", 
testfile.Sha1, f.Sha1) } if f.Sha256 != testfile.Sha256 { t.Fatalf("Sha256 doesn't match, expected %#v got %#v", testfile.Sha256, f.Sha256) } if f.Md5 != testfile.Md5 { t.Fatalf("Md5 doesn't match, expected %#v got %#v", testfile.Md5, f.Md5) } _, ok := c.fiCache.Get(testfile.Path) if !ok { t.Fatalf("Not stored in cache") } } func TestCache_GetFileInfo(t *testing.T) { mock, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(conn) testfile := filesystem.FileInfo{ Path: "/test/file.tgz", Size: 43000, ModTime: time.Now(), Sha1: "3ce963aea2d6f23fe915063f8bba21888db0ddfa", Sha256: "1c8e38c7e03e4d117eba4f82afaf6631a9b79f4c1e9dec144d4faf1d109aacda", Md5: "2c98ec39f49da6ddd9cfa7b1d7342afe", } _, err := c.GetFileInfo(testfile.Path) if err == nil { t.Fatalf("Error expected, mock command not yet registered") } cmdGetFileinfo := mock.Command("HMGET", "FILE_"+testfile.Path, "size", "modTime", "sha1", "sha256", "md5").Expect([]interface{}{ []byte(strconv.FormatInt(testfile.Size, 10)), []byte(testfile.ModTime.Format("2006-01-02 15:04:05.999999999 -0700 MST")), []byte(testfile.Sha1), []byte(testfile.Sha256), []byte(testfile.Md5), }) f, err := c.GetFileInfo(testfile.Path) if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if mock.Stats(cmdGetFileinfo) < 1 { t.Fatalf("HMGET not executed") } // Results are already checked by TestCache_fetchFileInfo // We only need to check one of them if !f.ModTime.Equal(testfile.ModTime) { t.Fatalf("One or more values do not match") } _, err = c.GetFileInfo(testfile.Path) if err == redis.ErrNil { t.Fatalf("Cache not used, request expected to be done once") } else if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } } func TestCache_fetchFileMirrors(t *testing.T) { mock, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(conn) filename := "/test/file.tgz" _, err := c.fetchFileMirrors(filename) if err == nil { t.Fatalf("Error expected, mock command not yet registered") } cmdGetFilemirrors := mock.Command("SMEMBERS", "FILEMIRRORS_"+filename).Expect([]interface{}{ []byte("9"), []byte("2"), []byte("5"), }) ids, err := c.fetchFileMirrors(filename) if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if mock.Stats(cmdGetFilemirrors) < 1 { t.Fatalf("SMEMBERS not executed") } if len(ids) != 3 { t.Fatalf("Invalid number of items returned") } _, ok := c.fmCache.Get(filename) if !ok { t.Fatalf("Not stored in cache") } } func TestCache_fetchMirror(t *testing.T) { mock, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(conn) testmirror := Mirror{ ID: 1, Name: "m1", HttpURL: "http://m1.mirror", RsyncURL: "rsync://m1.mirror", FtpURL: "ftp://m1.mirror", SponsorName: "m1sponsor", SponsorURL: "m1sponsorurl", SponsorLogoURL: "m1sponsorlogourl", AdminName: "m1adminname", AdminEmail: "m1adminemail", CustomData: "m1customdata", ContinentOnly: true, CountryOnly: false, ASOnly: true, Score: 0, Latitude: -20.0, Longitude: 55.0, ContinentCode: "EU", CountryCodes: "FR UK", Asnum: 444, Comment: "m1comment", Enabled: true, Up: true, } _, err := c.fetchMirror(testmirror.ID) if err == nil { t.Fatalf("Error expected, mock command not yet registered") } cmdGetMirror := mock.Command("HGETALL", "MIRROR_1").ExpectMap(map[string]string{ "ID": strconv.Itoa(testmirror.ID), "name": testmirror.Name, "http": testmirror.HttpURL, "rsync": testmirror.RsyncURL, "ftp": testmirror.FtpURL, "sponsorName": testmirror.SponsorName, "sponsorURL": testmirror.SponsorURL, "sponsorLogo": testmirror.SponsorLogoURL, "adminName": testmirror.AdminName, "adminEmail": 
testmirror.AdminEmail, "customData": testmirror.CustomData, "continentOnly": strconv.FormatBool(testmirror.ContinentOnly), "countryOnly": strconv.FormatBool(testmirror.CountryOnly), "asOnly": strconv.FormatBool(testmirror.ASOnly), "score": strconv.FormatInt(int64(testmirror.Score), 10), "latitude": fmt.Sprintf("%f", testmirror.Latitude), "longitude": fmt.Sprintf("%f", testmirror.Longitude), "continentCode": testmirror.ContinentCode, "countryCodes": testmirror.CountryCodes, "asnum": strconv.FormatInt(int64(testmirror.Asnum), 10), "comment": testmirror.Comment, "enabled": strconv.FormatBool(testmirror.Enabled), "up": strconv.FormatBool(testmirror.Up), }) m, err := c.fetchMirror(testmirror.ID) if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if mock.Stats(cmdGetMirror) < 1 { t.Fatalf("HGETALL not executed") } // This is required to reach DeepEqual(ity) testmirror.Prepare() if !reflect.DeepEqual(testmirror, m) { t.Fatalf("Result is different") } _, ok := c.mCache.Get(strconv.Itoa(testmirror.ID)) if !ok { t.Fatalf("Not stored in cache") } } func TestCache_fetchFileInfoMirror(t *testing.T) { mock, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(conn) testfile := filesystem.FileInfo{ Path: "/test/file.tgz", Size: 44000, ModTime: time.Now(), Sha1: "3ce963aea2d6f23fe915063f8bba21888db0ddfa", Sha256: "1c8e38c7e03e4d117eba4f82afaf6631a9b79f4c1e9dec144d4faf1d109aacda", Md5: "2c98ec39f49da6ddd9cfa7b1d7342afe", } _, err := c.fetchFileInfoMirror(1, testfile.Path) if err == nil { t.Fatalf("Error expected, mock command not yet registered") } cmdGetFileinfomirror := mock.Command("HMGET", "FILEINFO_1_"+testfile.Path, "size", "modTime", "sha1", "sha256", "md5").ExpectMap(map[string]string{ "size": strconv.FormatInt(testfile.Size, 10), "modTime": testfile.ModTime.String(), "sha1": testfile.Sha1, "sha256": testfile.Sha256, "md5": testfile.Md5, }) _, err = c.fetchFileInfoMirror(1, testfile.Path) if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if mock.Stats(cmdGetFileinfomirror) < 1 { t.Fatalf("HGETALL not executed") } _, ok := c.fimCache.Get("1|" + testfile.Path) if !ok { t.Fatalf("Not stored in cache") } } func TestCache_GetMirror(t *testing.T) { mock, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(conn) testmirror := 1 _, err := c.GetMirror(testmirror) if err == nil { t.Fatalf("Error expected, mock command not yet registered") } cmdGetMirror := mock.Command("HGETALL", "MIRROR_1").ExpectMap(map[string]string{ "ID": strconv.Itoa(testmirror), }) m, err := c.GetMirror(testmirror) if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if mock.Stats(cmdGetMirror) < 1 { t.Fatalf("HGETALL not executed") } // Results are already checked by TestCache_fetchMirror // We only need to check one of them if m.ID != testmirror { t.Fatalf("Result is different") } _, ok := c.mCache.Get(strconv.Itoa(testmirror)) if !ok { t.Fatalf("Not stored in cache") } } func TestCache_GetMirrors(t *testing.T) { mock, conn := PrepareRedisTest() conn.ConnectPubsub() c := NewCache(conn) filename := "/test/file.tgz" clientInfo := network.GeoIPRecord{ CountryCode: "FR", Latitude: 48.8567, Longitude: 2.3508, } _, err := c.GetMirrors(filename, clientInfo) if err == nil { t.Fatalf("Error expected, mock command not yet registered") } cmdGetFilemirrors := mock.Command("SMEMBERS", "FILEMIRRORS_"+filename).Expect([]interface{}{ []byte("1"), []byte("2"), }) cmdGetMirrorM1 := mock.Command("HGETALL", "MIRROR_1").ExpectMap(map[string]string{ "ID": "1", "latitude": "52.5167", "longitude": 
"13.3833", }) cmdGetMirrorM2 := mock.Command("HGETALL", "MIRROR_2").ExpectMap(map[string]string{ "ID": "2", "latitude": "51.5072", "longitude": "0.1275", }) cmdGetFileinfomirrorM1 := mock.Command("HMGET", "FILEINFO_1_"+filename, "size", "modTime", "sha1", "sha256", "md5").ExpectMap(map[string]string{ "size": "44000", "modTime": "", "sha1": "", "sha256": "", "md5": "", }) cmdGetFileinfomirrorM2 := mock.Command("HMGET", "FILEINFO_2_"+filename, "size", "modTime", "sha1", "sha256", "md5").ExpectMap(map[string]string{ "size": "44000", "modTime": "", "sha1": "", "sha256": "", "md5": "", }) mirrors, err := c.GetMirrors(filename, clientInfo) if err != nil { t.Fatalf("Unexpected error: %s", err.Error()) } if mock.Stats(cmdGetFilemirrors) < 1 { t.Fatalf("cmd_get_filemirrors not called") } if mock.Stats(cmdGetMirrorM1) < 1 { t.Fatalf("cmdGetMirrorM1 not called") } if mock.Stats(cmdGetMirrorM2) < 1 { t.Fatalf("cmdGetMirrorM2 not called") } if mock.Stats(cmdGetFileinfomirrorM1) < 1 { t.Fatalf("cmd_get_fileinfomirror_m1 not called") } if mock.Stats(cmdGetFileinfomirrorM2) < 1 { t.Fatalf("cmd_get_fileinfomirror_m2 not called") } if len(mirrors) != 2 { t.Fatalf("Invalid number of mirrors returned") } if int(mirrors[0].Distance) != int(876) { t.Fatalf("Distance between user and m1 is wrong, got %d, expected 876", int(mirrors[0].Distance)) } if int(mirrors[1].Distance) != int(334) { t.Fatalf("Distance between user and m2 is wrong, got %d, expected 334", int(mirrors[1].Distance)) } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrors/logs.go000066400000000000000000000142051411706463700221740ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package mirrors import ( "encoding/json" "fmt" "time" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database" "github.com/gomodule/redigo/redis" "github.com/op/go-logging" ) var ( log = logging.MustGetLogger("main") ) type LogType uint const ( _ LogType = iota LOGTYPE_ERROR LOGTYPE_ADDED LOGTYPE_EDITED LOGTYPE_ENABLED LOGTYPE_DISABLED LOGTYPE_STATECHANGED LOGTYPE_SCANSTARTED LOGTYPE_SCANCOMPLETED ) func typeToInstance(typ LogType) LogAction { switch LogType(typ) { case LOGTYPE_ERROR: return &LogError{} case LOGTYPE_ADDED: return &LogAdded{} case LOGTYPE_EDITED: return &LogEdited{} case LOGTYPE_ENABLED: return &LogEnabled{} case LOGTYPE_DISABLED: return &LogDisabled{} case LOGTYPE_STATECHANGED: return &LogStateChanged{} case LOGTYPE_SCANSTARTED: return &LogScanStarted{} case LOGTYPE_SCANCOMPLETED: return &LogScanCompleted{} default: } return nil } type LogAction interface { GetType() LogType GetMirrorID() int GetTimestamp() time.Time GetOutput() string } type LogCommonAction struct { Type LogType MirrorID int Timestamp time.Time } func (l LogCommonAction) GetType() LogType { return l.Type } func (l LogCommonAction) GetMirrorID() int { return l.MirrorID } func (l LogCommonAction) GetTimestamp() time.Time { return l.Timestamp } type LogError struct { LogCommonAction Err string } func (l *LogError) GetOutput() string { return fmt.Sprintf("Error: %s", l.Err) } func NewLogError(id int, err error) LogAction { return &LogError{ LogCommonAction: LogCommonAction{ Type: LOGTYPE_ERROR, MirrorID: id, Timestamp: time.Now(), }, Err: err.Error(), } } type LogAdded struct { LogCommonAction } func (l *LogAdded) GetOutput() string { return "Mirror added" } func NewLogAdded(id int) LogAction { return &LogAdded{ LogCommonAction: LogCommonAction{ Type: LOGTYPE_ADDED, MirrorID: id, Timestamp: time.Now(), }, } } type LogEdited 
struct { LogCommonAction } func (l *LogEdited) GetOutput() string { return "Mirror edited" } func NewLogEdited(id int) LogAction { return &LogEdited{ LogCommonAction: LogCommonAction{ Type: LOGTYPE_EDITED, MirrorID: id, Timestamp: time.Now(), }, } } type LogEnabled struct { LogCommonAction } func (l *LogEnabled) GetOutput() string { return "Mirror enabled" } func NewLogEnabled(id int) LogAction { return &LogEnabled{ LogCommonAction: LogCommonAction{ Type: LOGTYPE_ENABLED, MirrorID: id, Timestamp: time.Now(), }, } } type LogDisabled struct { LogCommonAction } func (l *LogDisabled) GetOutput() string { return "Mirror disabled" } func NewLogDisabled(id int) LogAction { return &LogDisabled{ LogCommonAction: LogCommonAction{ Type: LOGTYPE_DISABLED, MirrorID: id, Timestamp: time.Now(), }, } } type LogStateChanged struct { LogCommonAction Up bool Reason string } func (l *LogStateChanged) GetOutput() string { if l.Up == false { if len(l.Reason) == 0 { return "Mirror is down" } return "Mirror is down: " + l.Reason } return "Mirror is up" } func NewLogStateChanged(id int, up bool, reason string) LogAction { return &LogStateChanged{ LogCommonAction: LogCommonAction{ Type: LOGTYPE_STATECHANGED, MirrorID: id, Timestamp: time.Now(), }, Up: up, Reason: reason, } } type LogScanStarted struct { LogCommonAction Typ core.ScannerType } func (l *LogScanStarted) GetOutput() string { switch l.Typ { case core.RSYNC: return "RSYNC scan started" case core.FTP: return "FTP scan started" default: return "Scan started using a unknown protocol" } } func NewLogScanStarted(id int, typ core.ScannerType) LogAction { return &LogScanStarted{ LogCommonAction: LogCommonAction{ Type: LOGTYPE_SCANSTARTED, MirrorID: id, Timestamp: time.Now(), }, Typ: typ, } } type LogScanCompleted struct { LogCommonAction FilesIndexed int64 KnownIndexed int64 Removed int64 TZOffset int64 } func (l *LogScanCompleted) GetOutput() string { output := fmt.Sprintf("Scan completed: %d files (%d known), %d removed", l.FilesIndexed, l.KnownIndexed, l.Removed) if l.TZOffset != 0 { offset, _ := time.ParseDuration(fmt.Sprintf("%dms", l.TZOffset)) output += fmt.Sprintf(" (corrected timezone offset: %s)", offset) } return output } func NewLogScanCompleted(id int, files, known, removed, tzoffset int64) LogAction { return &LogScanCompleted{ LogCommonAction: LogCommonAction{ Type: LOGTYPE_SCANCOMPLETED, MirrorID: id, Timestamp: time.Now(), }, FilesIndexed: files, KnownIndexed: known, Removed: removed, TZOffset: tzoffset, } } func PushLog(r *database.Redis, logAction LogAction) error { conn := r.Get() defer conn.Close() key := fmt.Sprintf("MIRRORLOGS_%d", logAction.GetMirrorID()) value, err := json.Marshal(logAction) if err != nil { return err } _, err = conn.Do("RPUSH", key, value) return err } func ReadLogs(r *database.Redis, mirrorid, max int) ([]string, error) { conn := r.Get() defer conn.Close() if max <= 0 { // Get the latest 500 events by default max = 500 } key := fmt.Sprintf("MIRRORLOGS_%d", mirrorid) lines, err := redis.Strings(conn.Do("LRANGE", key, max*-1, -1)) if err != nil { return nil, err } outputs := make([]string, 0, len(lines)) for _, line := range lines { var objmap map[string]interface{} err = json.Unmarshal([]byte(line), &objmap) if err != nil { log.Warningf("Unable to parse mirror log line: %s", err) continue } typf, ok := objmap["Type"].(float64) if !ok { log.Warning("Unable to parse mirror log line") continue } // Truncate the received float64 back to int typ := int(typf) action := typeToInstance(LogType(typ)) if action == nil { 
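// typeToInstance returned nil: this log type is unknown to this binary, so skip the entry.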
log.Warning("Unknown mirror log action") continue } err = json.Unmarshal([]byte(line), action) if err != nil { log.Warningf("Unable to unmarshal mirror log line: %s", err) continue } line := fmt.Sprintf("%s: %s", action.GetTimestamp().Format("2006-01-02 15:04:05 MST"), action.GetOutput()) outputs = append(outputs, line) } return outputs, nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrors/lru.go000066400000000000000000000140251411706463700220320ustar00rootroot00000000000000/* Copyright 2012, Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ package mirrors // Implementation of an LRU cache in golang import ( "container/list" "fmt" "sync" "time" ) // LRUCache is the internal structure of the cache type LRUCache struct { mu sync.Mutex // list & table of *entry objects list *list.List table map[string]*list.Element // Our current size, in bytes. Obviously a gross simplification and low-grade // approximation. size uint64 // How many bytes we are limiting the cache to. capacity uint64 } // Value that go into LRUCache need to satisfy this interface. 
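// Size reports an approximate in-memory footprint, in bytes, used by the cache for capacity accounting.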
type Value interface { Size() int } // Item contains the key and value that goes into the cache type Item struct { Key string Value Value } type entry struct { key string value Value size int timeAccessed time.Time } // NewLRUCache return a new instance of the cache func NewLRUCache(capacity uint64) *LRUCache { return &LRUCache{ list: list.New(), table: make(map[string]*list.Element), capacity: capacity, } } // Get a value from cache func (lru *LRUCache) Get(key string) (v Value, ok bool) { lru.mu.Lock() defer lru.mu.Unlock() element := lru.table[key] if element == nil { return nil, false } lru.moveToFront(element) return element.Value.(*entry).value, true } // Set a key and associated value into the cache func (lru *LRUCache) Set(key string, value Value) { lru.mu.Lock() defer lru.mu.Unlock() if element := lru.table[key]; element != nil { lru.updateInplace(element, value) } else { lru.addNew(key, value) } } // SetIfAbsent sets a key into the cache only if it doesn't exist yet func (lru *LRUCache) SetIfAbsent(key string, value Value) { lru.mu.Lock() defer lru.mu.Unlock() if element := lru.table[key]; element != nil { lru.moveToFront(element) } else { lru.addNew(key, value) } } // Delete the key and associated value from the cache func (lru *LRUCache) Delete(key string) bool { lru.mu.Lock() defer lru.mu.Unlock() element := lru.table[key] if element == nil { return false } lru.list.Remove(element) delete(lru.table, key) lru.size -= uint64(element.Value.(*entry).size) return true } // Clear the cache func (lru *LRUCache) Clear() { lru.mu.Lock() defer lru.mu.Unlock() lru.list.Init() lru.table = make(map[string]*list.Element) lru.size = 0 } // SetCapacity sets the capacity of the cache func (lru *LRUCache) SetCapacity(capacity uint64) { lru.mu.Lock() defer lru.mu.Unlock() lru.capacity = capacity lru.checkCapacity() } // Stats return stats about the caching structure func (lru *LRUCache) Stats() (length, size, capacity uint64, oldest time.Time) { lru.mu.Lock() defer lru.mu.Unlock() if lastElem := lru.list.Back(); lastElem != nil { oldest = lastElem.Value.(*entry).timeAccessed } return uint64(lru.list.Len()), lru.size, lru.capacity, oldest } // StatsJSON returns the stats as JSON func (lru *LRUCache) StatsJSON() string { if lru == nil { return "{}" } l, s, c, o := lru.Stats() return fmt.Sprintf("{\"Length\": %v, \"Size\": %v, \"Capacity\": %v, \"OldestAccess\": \"%v\"}", l, s, c, o) } // Keys returns all the keys available in the cache func (lru *LRUCache) Keys() []string { lru.mu.Lock() defer lru.mu.Unlock() keys := make([]string, 0, lru.list.Len()) for e := lru.list.Front(); e != nil; e = e.Next() { keys = append(keys, e.Value.(*entry).key) } return keys } // Items returns all the items available in the cache func (lru *LRUCache) Items() []Item { lru.mu.Lock() defer lru.mu.Unlock() items := make([]Item, 0, lru.list.Len()) for e := lru.list.Front(); e != nil; e = e.Next() { v := e.Value.(*entry) items = append(items, Item{Key: v.key, Value: v.value}) } return items } func (lru *LRUCache) updateInplace(element *list.Element, value Value) { valueSize := value.Size() sizeDiff := valueSize - element.Value.(*entry).size element.Value.(*entry).value = value element.Value.(*entry).size = valueSize lru.size += uint64(sizeDiff) lru.moveToFront(element) lru.checkCapacity() } func (lru *LRUCache) moveToFront(element *list.Element) { lru.list.MoveToFront(element) element.Value.(*entry).timeAccessed = time.Now() } func (lru *LRUCache) addNew(key string, value Value) { newEntry := &entry{key, value, 
value.Size(), time.Now()} element := lru.list.PushFront(newEntry) lru.table[key] = element lru.size += uint64(newEntry.size) lru.checkCapacity() } func (lru *LRUCache) checkCapacity() { // Partially duplicated from Delete for lru.size > lru.capacity { delElem := lru.list.Back() delValue := delElem.Value.(*entry) lru.list.Remove(delElem) delete(lru.table, delValue.key) lru.size -= uint64(delValue.size) } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrors/lru_test.go000066400000000000000000000106051411706463700230710ustar00rootroot00000000000000// Copyright 2012, Google Inc. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. package mirrors import ( "testing" ) type CacheValue struct { size int } func (cv *CacheValue) Size() int { return cv.size } func TestInitialState(t *testing.T) { cache := NewLRUCache(5) l, sz, c, _ := cache.Stats() if l != 0 { t.Errorf("length = %v, want 0", l) } if sz != 0 { t.Errorf("size = %v, want 0", sz) } if c != 5 { t.Errorf("capacity = %v, want 5", c) } } func TestSetInsertsValue(t *testing.T) { cache := NewLRUCache(100) data := &CacheValue{0} key := "key" cache.Set(key, data) v, ok := cache.Get(key) if !ok || v.(*CacheValue) != data { t.Errorf("Cache has incorrect value: %v != %v", data, v) } } func TestGetValueWithMultipleTypes(t *testing.T) { cache := NewLRUCache(100) data := &CacheValue{0} key := "key" cache.Set(key, data) v, ok := cache.Get("key") if !ok || v.(*CacheValue) != data { t.Errorf("Cache has incorrect value for \"key\": %v != %v", data, v) } v, ok = cache.Get(string([]byte{'k', 'e', 'y'})) if !ok || v.(*CacheValue) != data { t.Errorf("Cache has incorrect value for []byte {'k','e','y'}: %v != %v", data, v) } } func TestSetUpdatesSize(t *testing.T) { cache := NewLRUCache(100) emptyValue := &CacheValue{0} key := "key1" cache.Set(key, emptyValue) if _, sz, _, _ := cache.Stats(); sz != 0 { t.Errorf("cache.Size() = %v, expected 0", sz) } someValue := &CacheValue{20} key = "key2" cache.Set(key, someValue) if _, sz, _, _ := cache.Stats(); sz != 20 { t.Errorf("cache.Size() = %v, expected 20", sz) } } func TestSetWithOldKeyUpdatesValue(t *testing.T) { cache := NewLRUCache(100) emptyValue := &CacheValue{0} key := "key1" cache.Set(key, emptyValue) someValue := &CacheValue{20} cache.Set(key, someValue) v, ok := cache.Get(key) if !ok || v.(*CacheValue) != someValue { t.Errorf("Cache has incorrect value: %v != %v", someValue, v) } } func TestSetWithOldKeyUpdatesSize(t *testing.T) { cache := NewLRUCache(100) emptyValue := &CacheValue{0} key := "key1" cache.Set(key, emptyValue) if _, sz, _, _ := cache.Stats(); sz != 0 { t.Errorf("cache.Size() = %v, expected %v", sz, 0) } someValue := &CacheValue{20} cache.Set(key, someValue) expected := uint64(someValue.size) if _, sz, _, _ := cache.Stats(); sz != expected { t.Errorf("cache.Size() = %v, expected %v", sz, expected) } } func TestGetNonExistent(t *testing.T) { cache := NewLRUCache(100) if _, ok := cache.Get("crap"); ok { t.Error("Cache returned a crap value after no inserts.") } } func TestDelete(t *testing.T) { cache := NewLRUCache(100) value := &CacheValue{1} key := "key" if cache.Delete(key) { t.Error("Item unexpectedly already in cache.") } cache.Set(key, value) if !cache.Delete(key) { t.Error("Expected item to be in cache.") } if _, sz, _, _ := cache.Stats(); sz != 0 { t.Errorf("cache.Size() = %v, expected 0", sz) } if _, ok := cache.Get(key); ok { t.Error("Cache returned a value after deletion.") } } func TestClear(t *testing.T) { cache 
:= NewLRUCache(100) value := &CacheValue{1} key := "key" cache.Set(key, value) cache.Clear() if _, sz, _, _ := cache.Stats(); sz != 0 { t.Errorf("cache.Size() = %v, expected 0 after Clear()", sz) } } func TestCapacityIsObeyed(t *testing.T) { size := uint64(3) cache := NewLRUCache(size) value := &CacheValue{1} // Insert up to the cache's capacity. cache.Set("key1", value) cache.Set("key2", value) cache.Set("key3", value) if _, sz, _, _ := cache.Stats(); sz != size { t.Errorf("cache.Size() = %v, expected %v", sz, size) } // Insert one more; something should be evicted to make room. cache.Set("key4", value) if _, sz, _, _ := cache.Stats(); sz != size { t.Errorf("post-evict cache.Size() = %v, expected %v", sz, size) } } func TestLRUIsEvicted(t *testing.T) { size := uint64(3) cache := NewLRUCache(size) cache.Set("key1", &CacheValue{1}) cache.Set("key2", &CacheValue{1}) cache.Set("key3", &CacheValue{1}) // lru: [key3, key2, key1] // Look up the elements. This will rearrange the LRU ordering. cache.Get("key3") cache.Get("key2") cache.Get("key1") // lru: [key1, key2, key3] cache.Set("key0", &CacheValue{1}) // lru: [key0, key1, key2] // The least recently used one should have been evicted. if _, ok := cache.Get("key3"); ok { t.Error("Least recently used element was not evicted.") } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrors/mirrors.go000066400000000000000000000236061411706463700227320ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package mirrors import ( "fmt" "math/rand" "strconv" "strings" "time" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/network" "github.com/etix/mirrorbits/utils" "github.com/gomodule/redigo/redis" ) // Mirror is the structure representing all the information about a mirror type Mirror struct { ID int `redis:"ID" yaml:"-"` Name string `redis:"name" yaml:"Name"` HttpURL string `redis:"http" yaml:"HttpURL"` RsyncURL string `redis:"rsync" yaml:"RsyncURL"` FtpURL string `redis:"ftp" yaml:"FtpURL"` SponsorName string `redis:"sponsorName" yaml:"SponsorName"` SponsorURL string `redis:"sponsorURL" yaml:"SponsorURL"` SponsorLogoURL string `redis:"sponsorLogo" yaml:"SponsorLogoURL"` AdminName string `redis:"adminName" yaml:"AdminName"` AdminEmail string `redis:"adminEmail" yaml:"AdminEmail"` CustomData string `redis:"customData" yaml:"CustomData"` ContinentOnly bool `redis:"continentOnly" yaml:"ContinentOnly"` CountryOnly bool `redis:"countryOnly" yaml:"CountryOnly"` ASOnly bool `redis:"asOnly" yaml:"ASOnly"` Score int `redis:"score" yaml:"Score"` Latitude float32 `redis:"latitude" yaml:"Latitude"` Longitude float32 `redis:"longitude" yaml:"Longitude"` ContinentCode string `redis:"continentCode" yaml:"ContinentCode"` CountryCodes string `redis:"countryCodes" yaml:"CountryCodes"` ExcludedCountryCodes string `redis:"excludedCountryCodes" yaml:"ExcludedCountryCodes"` Asnum uint `redis:"asnum" yaml:"ASNum"` Comment string `redis:"comment" yaml:"-"` Enabled bool `redis:"enabled" yaml:"Enabled"` Up bool `redis:"up" json:"-" yaml:"-"` ExcludeReason string `redis:"excludeReason" json:",omitempty" yaml:"-"` StateSince Time `redis:"stateSince" json:",omitempty" yaml:"-"` AllowRedirects Redirects `redis:"allowredirects" json:",omitempty" yaml:"AllowRedirects"` TZOffset int64 `redis:"tzoffset" json:"-" yaml:"-"` // timezone offset in ms Distance float32 `redis:"-" yaml:"-"` CountryFields []string 
`redis:"-" json:"-" yaml:"-"` ExcludedCountryFields []string `redis:"-" json:"-" yaml:"-"` Filepath string `redis:"-" json:"-" yaml:"-"` Weight float32 `redis:"-" json:"-" yaml:"-"` ComputedScore int `redis:"-" yaml:"-"` LastSync Time `redis:"lastSync" yaml:"-"` LastSuccessfulSync Time `redis:"lastSuccessfulSync" yaml:"-"` LastSuccessfulSyncProtocol core.ScannerType `redis:"lastSuccessfulSyncProtocol" yaml:"-"` LastSuccessfulSyncPrecision core.Precision `redis:"lastSuccessfulSyncPrecision" yaml:"-"` LastModTime Time `redis:"lastModTime" yaml:"-"` FileInfo *filesystem.FileInfo `redis:"-" json:"-" yaml:"-"` // Details of the requested file on this specific mirror } // Prepare must be called after retrieval from the database to reformat some values func (m *Mirror) Prepare() { m.CountryFields = strings.Fields(m.CountryCodes) m.ExcludedCountryFields = strings.Fields(m.ExcludedCountryCodes) } // IsHTTPS returns true if the mirror has an HTTPS address func (m *Mirror) IsHTTPS() bool { return strings.HasPrefix(m.HttpURL, "https://") } // Mirrors represents a slice of Mirror type Mirrors []Mirror // Len return the number of Mirror in the slice func (s Mirrors) Len() int { return len(s) } // Swap swaps mirrors at index i and j func (s Mirrors) Swap(i, j int) { s[i], s[j] = s[j], s[i] } // ByRank is used to sort a slice of Mirror by their rank type ByRank struct { Mirrors ClientInfo network.GeoIPRecord } // Less compares two mirrors based on their rank func (m ByRank) Less(i, j int) bool { if m.ClientInfo.IsValid() { if m.ClientInfo.ASNum == m.Mirrors[i].Asnum { if m.Mirrors[i].Asnum != m.Mirrors[j].Asnum { return true } } else if m.ClientInfo.ASNum == m.Mirrors[j].Asnum { return false } //TODO Simplify me if m.ClientInfo.CountryCode != "" { if utils.IsInSlice(m.ClientInfo.CountryCode, m.Mirrors[i].CountryFields) { if !utils.IsInSlice(m.ClientInfo.CountryCode, m.Mirrors[j].CountryFields) { return true } } else if utils.IsInSlice(m.ClientInfo.CountryCode, m.Mirrors[j].CountryFields) { return false } } if m.ClientInfo.ContinentCode != "" { if m.ClientInfo.ContinentCode == m.Mirrors[i].ContinentCode { if m.ClientInfo.ContinentCode != m.Mirrors[j].ContinentCode { return true } } else if m.ClientInfo.ContinentCode == m.Mirrors[j].ContinentCode { return false } } return m.Mirrors[i].Distance < m.Mirrors[j].Distance } // Randomize the output if we miss client info return rand.Intn(2) == 0 } // ByComputedScore is used to sort a slice of Mirror by their score type ByComputedScore struct { Mirrors } // Less compares two mirrors based on their score func (b ByComputedScore) Less(i, j int) bool { return b.Mirrors[i].ComputedScore > b.Mirrors[j].ComputedScore } // ByExcludeReason is used to sort a slice of Mirror alphabetically by their exclude reason type ByExcludeReason struct { Mirrors } // Less compares two mirrors based on their exclude reason func (b ByExcludeReason) Less(i, j int) bool { if b.Mirrors[i].ExcludeReason < b.Mirrors[j].ExcludeReason { return true } return false } // EnableMirror enables the given mirror func EnableMirror(r *database.Redis, id int) error { return SetMirrorEnabled(r, id, true) } // DisableMirror disables the given mirror func DisableMirror(r *database.Redis, id int) error { return SetMirrorEnabled(r, id, false) } // SetMirrorEnabled marks a mirror as enabled or disabled func SetMirrorEnabled(r *database.Redis, id int, state bool) error { conn := r.Get() defer conn.Close() key := fmt.Sprintf("MIRROR_%d", id) _, err := conn.Do("HMSET", key, "enabled", state) // Publish update if 
err == nil { database.Publish(conn, database.MIRROR_UPDATE, strconv.Itoa(id)) if state == true { PushLog(r, NewLogEnabled(id)) } else { PushLog(r, NewLogDisabled(id)) } } return err } // MarkMirrorUp marks the given mirror as up func MarkMirrorUp(r *database.Redis, id int) error { return SetMirrorState(r, id, true, "") } // MarkMirrorDown marks the given mirror as down func MarkMirrorDown(r *database.Redis, id int, reason string) error { return SetMirrorState(r, id, false, reason) } // SetMirrorState sets the state of a mirror to up or down with an optional reason func SetMirrorState(r *database.Redis, id int, state bool, reason string) error { conn := r.Get() defer conn.Close() key := fmt.Sprintf("MIRROR_%d", id) previousState, err := redis.Bool(conn.Do("HGET", key, "up")) if err != nil && err != redis.ErrNil { return err } var args []interface{} args = append(args, key, "up", state, "excludeReason", reason) if state != previousState { args = append(args, "stateSince", time.Now().Unix()) } _, err = conn.Do("HMSET", args...) if err == nil { // Publish update database.Publish(conn, database.MIRROR_UPDATE, strconv.Itoa(id)) if state != previousState { PushLog(r, NewLogStateChanged(id, state, reason)) } } return err } // Results is the resulting struct of a request and is // used by the renderers to generate the final page. type Results struct { FileInfo filesystem.FileInfo IP string ClientInfo network.GeoIPRecord MirrorList Mirrors ExcludedList Mirrors `json:",omitempty"` Fallback bool `json:",omitempty"` LocalJSPath string } // Redirects is handling the per-mirror authorization of HTTP redirects type Redirects int // Allowed will return true if redirects are authorized for this mirror func (r *Redirects) Allowed() bool { switch *r { case 1: return true case 2: return false default: return GetConfig().DisallowRedirects == false } } // MarshalYAML converts internal values to YAML func (r Redirects) MarshalYAML() (interface{}, error) { var b *bool switch r { case 1: v := true b = &v case 2: v := false b = &v default: } return b, nil } // UnmarshalYAML converts YAML to internal values func (r *Redirects) UnmarshalYAML(unmarshal func(interface{}) error) error { var b *bool if err := unmarshal(&b); err != nil { return err } if b == nil { *r = 0 } else if *b == true { *r = 1 } else { *r = 2 } return nil } // Time is a structure holding a time.Time object. // It is used to serialize and deserialize a time // held in a redis database. type Time struct { time.Time } // RedisArg serialize the time.Time object func (t Time) RedisArg() interface{} { return t.UTC().Unix() } // RedisScan deserialize the time.Time object func (t *Time) RedisScan(src interface{}) (err error) { switch src := src.(type) { case int64: t.Time = time.Unix(src, 0) case []byte: var i int64 i, err = strconv.ParseInt(string(src), 10, 64) t.Time = time.Unix(i, 0) default: err = fmt.Errorf("cannot convert from %T to %T", src, t) } return err } // FromTime returns a Time from a time.Time func (t Time) FromTime(time time.Time) Time { return Time{ Time: time, } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/mirrors/mirrors_test.go000066400000000000000000000263251411706463700237720ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package mirrors import ( "fmt" "math/rand" "sort" "strings" "testing" "time" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/network" . 
"github.com/etix/mirrorbits/testing" "github.com/gomodule/redigo/redis" "github.com/rafaeljusto/redigomock" ) func generateSimpleMirrorList(number int) Mirrors { ret := Mirrors{} for i := 0; i < number; i++ { m := Mirror{ ID: i, Name: fmt.Sprintf("M%d", i), } ret = append(ret, m) } return ret } func formatMirrorOrder(mirrors Mirrors) string { buf := "" for _, m := range mirrors { buf += fmt.Sprintf("%s, ", m.Name) } return strings.TrimSuffix(buf, ", ") } func matchingMirrorOrder(m Mirrors, order []int) bool { if len(m) != len(order) { return false } for i, v := range order { if v != m[i].ID { return false } } return true } func TestMirrors_Len(t *testing.T) { m := Mirrors{} if m.Len() != 0 { t.Fatalf("Expected 0, got %d", m.Len()) } m = generateSimpleMirrorList(2) if m.Len() != len(m) { t.Fatalf("Expected %d, got %d", len(m), m.Len()) } } func TestMirrors_Swap(t *testing.T) { m := generateSimpleMirrorList(5) if !matchingMirrorOrder(m, []int{0, 1, 2, 3, 4}) { t.Fatalf("Expected M0 before M1, got %s", formatMirrorOrder(m)) } m.Swap(0, 1) if !matchingMirrorOrder(m, []int{1, 0, 2, 3, 4}) { t.Fatalf("Expected M1 before M0, got %s", formatMirrorOrder(m)) } m.Swap(2, 4) if !matchingMirrorOrder(m, []int{1, 0, 4, 3, 2}) { t.Fatal("Expected M4 at position 2 and M2 at position 4", m) } } func TestByRank_Less(t *testing.T) { rand.Seed(time.Now().UnixNano()) /* */ c := network.GeoIPRecord{} if c.IsValid() { t.Fatalf("GeoIPRecord is supposed to be invalid") } /* */ // Generate two identical slices m1 := generateSimpleMirrorList(50) m2 := generateSimpleMirrorList(50) // Mirrors are identical (besides name) so ByRank is expected // to randomize their order. sort.Sort(ByRank{m1, c}) differences := 0 for i, m := range m1 { if m.ID != m2[i].ID { differences++ } } if differences == 0 { t.Fatalf("Result is supposed to be randomized") } else if differences < 10 { t.Fatalf("Too many similarities, something's wrong?") } // Sort again, just to be sure the result is different m3 := generateSimpleMirrorList(50) sort.Sort(ByRank{m3, c}) differences = 0 for i, m := range m3 { if m.ID != m1[i].ID { differences++ } } if differences == 0 { t.Fatalf("Result is supposed to be different from previous run") } else if differences < 10 { t.Fatalf("Too many similarities, something's wrong?") } /* */ c = network.GeoIPRecord{ CountryCode: "FR", ContinentCode: "EU", ASNum: 4444, } if !c.IsValid() { t.Fatalf("GeoIPRecord is supposed to be valid") } /* asnum */ m := Mirrors{ Mirror{ ID: 1, Name: "M1", Asnum: 6666, }, Mirror{ ID: 2, Name: "M2", Asnum: 5555, }, Mirror{ ID: 3, Name: "M3", Asnum: 4444, }, Mirror{ ID: 4, Name: "M4", Asnum: 6666, }, } sort.Sort(ByRank{m, c}) if !matchingMirrorOrder(m, []int{3, 1, 2, 4}) { t.Fatalf("Order doesn't seem right: %s, expected M3, M1, M2, M4", formatMirrorOrder(m)) } /* distance */ m = Mirrors{ Mirror{ ID: 1, Name: "M1", Distance: 1000.0, }, Mirror{ ID: 2, Name: "M2", Distance: 999.0, }, Mirror{ ID: 3, Name: "M3", Distance: 1000.0, }, Mirror{ ID: 4, Name: "M4", Distance: 888.0, }, } sort.Sort(ByRank{m, c}) if !matchingMirrorOrder(m, []int{4, 2, 1, 3}) { t.Fatalf("Order doesn't seem right: %s, expected M4, M2, M1, M3", formatMirrorOrder(m)) } /* countrycode */ m = Mirrors{ Mirror{ ID: 1, Name: "M1", CountryFields: []string{"IT", "UK"}, }, Mirror{ ID: 2, Name: "M2", CountryFields: []string{"IT", "UK"}, }, Mirror{ ID: 3, Name: "M3", CountryFields: []string{"IT", "FR"}, }, Mirror{ ID: 4, Name: "M4", CountryFields: []string{"FR", "UK"}, }, } sort.Sort(ByRank{m, c}) if !matchingMirrorOrder(m, []int{3, 
4, 1, 2}) { t.Fatalf("Order doesn't seem right: %s, expected M3, M4, M1, M2", formatMirrorOrder(m)) } /* continentcode */ c = network.GeoIPRecord{ ContinentCode: "EU", ASNum: 4444, CountryCode: "XX", } m = Mirrors{ Mirror{ ID: 1, Name: "M1", ContinentCode: "NA", }, Mirror{ ID: 2, Name: "M2", ContinentCode: "NA", }, Mirror{ ID: 3, Name: "M3", ContinentCode: "EU", }, Mirror{ ID: 4, Name: "M4", ContinentCode: "NA", }, } sort.Sort(ByRank{m, c}) if !matchingMirrorOrder(m, []int{3, 1, 2, 4}) { t.Fatalf("Order doesn't seem right: %s, expected M3, M1, M2, M4", formatMirrorOrder(m)) } /* */ c = network.GeoIPRecord{ CountryCode: "FR", ContinentCode: "EU", ASNum: 4444, } m = Mirrors{ Mirror{ ID: 1, Name: "M1", Distance: 100.0, CountryFields: []string{"IT", "FR"}, ContinentCode: "EU", }, Mirror{ ID: 2, Name: "M2", Distance: 200.0, CountryFields: []string{"FR", "CH"}, ContinentCode: "EU", }, Mirror{ ID: 3, Name: "M3", Distance: 1000.0, CountryFields: []string{"UK", "DE"}, Asnum: 4444, }, } sort.Sort(ByRank{m, c}) if !matchingMirrorOrder(m, []int{3, 1, 2}) { t.Fatalf("Order doesn't seem right: %s, expected M3, M1, M2", formatMirrorOrder(m)) } } func TestByComputedScore_Less(t *testing.T) { m := Mirrors{ Mirror{ ID: 1, Name: "M1", ComputedScore: 50, }, Mirror{ ID: 2, Name: "M2", ComputedScore: 0, }, Mirror{ ID: 3, Name: "M3", ComputedScore: 2500, }, Mirror{ ID: 4, Name: "M4", ComputedScore: 21, }, } sort.Sort(ByComputedScore{m}) if !matchingMirrorOrder(m, []int{3, 1, 4, 2}) { t.Fatalf("Order doesn't seem right: %s, expected M3, M1, M4, M2", formatMirrorOrder(m)) } } func TestByExcludeReason_Less(t *testing.T) { m := Mirrors{ Mirror{ ID: 1, Name: "M1", ExcludeReason: "x42", }, Mirror{ ID: 2, Name: "M2", ExcludeReason: "x43", }, Mirror{ ID: 3, Name: "M3", ExcludeReason: "Test one", }, Mirror{ ID: 4, Name: "M4", ExcludeReason: "Test two", }, Mirror{ ID: 5, Name: "M5", ExcludeReason: "test three", }, } sort.Sort(ByExcludeReason{m}) if !matchingMirrorOrder(m, []int{3, 4, 5, 1, 2}) { t.Fatalf("Order doesn't seem right: %s, expected M3, M4, M5, M1, M2", formatMirrorOrder(m)) } } func TestEnableMirror(t *testing.T) { mock, conn := PrepareRedisTest() cmdEnable := mock.Command("HMSET", "MIRROR_1", "enabled", true).Expect("ok") EnableMirror(conn, 1) if mock.Stats(cmdEnable) != 1 { t.Fatalf("Mirror not enabled") } mock.Command("HMSET", "MIRROR_1", "enabled", true).ExpectError(redis.Error("blah")) if EnableMirror(conn, 1) == nil { t.Fatalf("Error expected") } } func TestDisableMirror(t *testing.T) { mock, conn := PrepareRedisTest() cmdDisable := mock.Command("HMSET", "MIRROR_1", "enabled", false).Expect("ok") DisableMirror(conn, 1) if mock.Stats(cmdDisable) != 1 { t.Fatalf("Mirror not enabled") } mock.Command("HMSET", "MIRROR_1", "enabled", false).ExpectError(redis.Error("blah")) if DisableMirror(conn, 1) == nil { t.Fatalf("Error expected") } } func TestSetMirrorEnabled(t *testing.T) { mock, conn := PrepareRedisTest() cmdPublish := mock.Command("PUBLISH", string(database.MIRROR_UPDATE), redigomock.NewAnyData()).Expect("ok") cmdEnable := mock.Command("HMSET", "MIRROR_1", "enabled", true).Expect("ok") SetMirrorEnabled(conn, 1, true) if mock.Stats(cmdEnable) < 1 { t.Fatalf("Mirror not enabled") } else if mock.Stats(cmdEnable) > 1 { t.Fatalf("Mirror enabled more than once") } if mock.Stats(cmdPublish) < 1 { t.Fatalf("Event MIRROR_UPDATE not published") } mock.Command("HMSET", "MIRROR_1", "enabled", true).ExpectError(redis.Error("blah")) if SetMirrorEnabled(conn, 1, true) == nil { t.Fatalf("Error expected") } cmdDisable 
:= mock.Command("HMSET", "MIRROR_1", "enabled", false).Expect("ok") SetMirrorEnabled(conn, 1, false) if mock.Stats(cmdDisable) != 1 { t.Fatalf("Mirror not disabled") } else if mock.Stats(cmdDisable) > 1 { t.Fatalf("Mirror disabled more than once") } if mock.Stats(cmdPublish) < 2 { t.Fatalf("Event MIRROR_UPDATE not published") } mock.Command("HMSET", "MIRROR_1", "enabled", false).ExpectError(redis.Error("blah")) if SetMirrorEnabled(conn, 1, false) == nil { t.Fatalf("Error expected") } } func TestMarkMirrorUp(t *testing.T) { _, conn := PrepareRedisTest() if err := MarkMirrorUp(conn, 1); err == nil { t.Fatalf("Error expected but nil returned") } } func TestMarkMirrorDown(t *testing.T) { _, conn := PrepareRedisTest() if err := MarkMirrorDown(conn, 1, "test1"); err == nil { t.Fatalf("Error expected but nil returned") } } func TestSetMirrorState(t *testing.T) { mock, conn := PrepareRedisTest() if err := SetMirrorState(conn, 1, true, "test1"); err == nil { t.Fatalf("Error expected but nil returned") } cmdPublish := mock.Command("PUBLISH", string(database.MIRROR_UPDATE), redigomock.NewAnyData()).Expect("ok") /* */ cmdPreviousState := mock.Command("HGET", "MIRROR_1", "up").Expect(int64(0)).Expect(int64(1)) cmdStateSince := mock.Command("HMSET", "MIRROR_1", "up", true, "excludeReason", "test1", "stateSince", redigomock.NewAnyInt()).Expect("ok") cmdState := mock.Command("HMSET", "MIRROR_1", "up", true, "excludeReason", "test2").Expect("ok") if err := SetMirrorState(conn, 1, true, "test1"); err != nil { t.Fatalf("Unexpected error: %s", err) } if mock.Stats(cmdPreviousState) < 1 { t.Fatalf("Previous state not tested") } if mock.Stats(cmdStateSince) < 1 { t.Fatalf("New state not set") } else if mock.Stats(cmdStateSince) > 1 { t.Fatalf("State set more than once") } if mock.Stats(cmdPublish) < 1 { t.Fatalf("Event MIRROR_UPDATE not published") } /* */ if err := SetMirrorState(conn, 1, true, "test2"); err != nil { t.Fatalf("Unexpected error: %s", err) } if mock.Stats(cmdStateSince) > 1 || mock.Stats(cmdState) < 1 { t.Fatalf("The value stateSince isn't supposed to be set") } if mock.Stats(cmdPublish) != 2 { t.Fatalf("Event MIRROR_UPDATE should be sent") } /* */ cmdPreviousState = mock.Command("HGET", "MIRROR_1", "up").Expect(int64(1)) cmdStateSince = mock.Command("HMSET", "MIRROR_1", "up", false, "excludeReason", "test3", "stateSince", redigomock.NewAnyInt()).Expect("ok") if err := SetMirrorState(conn, 1, false, "test3"); err != nil { t.Fatalf("Unexpected error: %s", err) } if mock.Stats(cmdPreviousState) < 1 { t.Fatalf("Previous state not tested") } if mock.Stats(cmdStateSince) < 1 { t.Fatalf("New state not set") } else if mock.Stats(cmdStateSince) > 1 { t.Fatalf("State set more than once") } if mock.Stats(cmdPublish) < 2 { t.Fatalf("Event MIRROR_UPDATE not published") } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/network/000077500000000000000000000000001411706463700206735ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/network/clusterlock.go000066400000000000000000000042711411706463700235600ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package network import ( "errors" "os" "time" "github.com/etix/mirrorbits/database" "github.com/gomodule/redigo/redis" ) const ( lockTTL = 10 // in seconds lockRefresh = 5 // in seconds ) // ClusterLock holds the internal structure of a ClusterLock type ClusterLock struct { redis *database.Redis key string identifier string done chan struct{} } // NewClusterLock returns a new instance of a 
ClusterLock. // A ClucterLock is used to maitain a lock on a mirror that is being // scanned. The lock is renewed every lockRefresh seconds and is // automatically released by the redis database every lockTTL seconds // allowing the lock to be released even if the application is killed. func NewClusterLock(redis *database.Redis, key, identifier string) *ClusterLock { return &ClusterLock{ redis: redis, key: key, identifier: identifier, } } // Get tries to obtain an exclusive lock, cluster wide, for the given mirror func (n *ClusterLock) Get() (<-chan struct{}, error) { if n.done != nil { return nil, errors.New("lock already in use") } conn := n.redis.Get() defer conn.Close() if conn.Err() != nil { return nil, conn.Err() } _, err := redis.String(conn.Do("SET", n.key, 1, "NX", "EX", lockTTL)) if err == redis.ErrNil { return nil, nil } else if err != nil { return nil, err } n.done = make(chan struct{}) // Maintain the lock active until release go func() { conn := n.redis.Get() defer conn.Close() for { select { case <-n.done: n.done = nil conn.Do("DEL", n.key) return case <-time.After(lockRefresh * time.Second): result, err := redis.Int(conn.Do("EXPIRE", n.key, lockTTL)) if err != nil { log.Errorf("Renewing lock for %s failed: %s", n.identifier, err) return } else if result == 0 { log.Errorf("Renewing lock for %s failed: lock disappeared", n.identifier) return } if os.Getenv("DEBUG") != "" { log.Debugf("[%s] Lock renewed", n.identifier) } } } }() return n.done, nil } // Release releases the exclusive lock on the mirror func (n *ClusterLock) Release() { close(n.done) } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/network/geoip.go000066400000000000000000000112121411706463700223220ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package network import ( "errors" "net" "os" "strings" "sync" "time" . 
"github.com/etix/mirrorbits/config" "github.com/op/go-logging" "github.com/oschwald/maxminddb-golang" ) var ( // ErrMultipleAddresses is returned when the mirror has more than one address ErrMultipleAddresses = errors.New("the mirror has more than one IP address") log = logging.MustGetLogger("main") ) const ( geoipUpdatedExt = ".updated" ) // GeoIP contains methods to query the GeoIP database type GeoIP struct { sync.RWMutex city *geoipDB asn *geoipDB } // GeoIPRecord defines a GeoIP record for a given IP address type GeoIPRecord struct { // City DB CountryCode string ContinentCode string City string Country string Latitude float32 Longitude float32 // ASN DB ASName string ASNum uint } // Geolocalizer is an interface representing a GeoIP library type Geolocalizer interface { Lookup(ipAddress net.IP, result interface{}) error } // NewGeoIP instanciates a new instance of GeoIP func NewGeoIP() *GeoIP { return &GeoIP{} } // Open the GeoIP database func (g *GeoIP) openDatabase(file string) (*maxminddb.Reader, error) { dbpath := GetConfig().GeoipDatabasePath if dbpath != "" && !strings.HasSuffix(dbpath, "/") { dbpath += "/" } filename := dbpath + file var err error if _, err = os.Stat(filename + geoipUpdatedExt); !os.IsNotExist(err) { filename += geoipUpdatedExt } return maxminddb.Open(filename) } type geoipDB struct { filename string modTime time.Time db Geolocalizer } func (g *GeoIP) loadDB(filename string, geodb **geoipDB, geoiperror *GeoIPError) error { // Increase the loaded counter geoiperror.loaded++ if *geodb == nil { *geodb = &geoipDB{ filename: filename, } } db, err := g.openDatabase(filename) if err != nil { geoiperror.Errors = append(geoiperror.Errors, err) return err } modTime := time.Unix(int64(db.Metadata.BuildEpoch), 0) if (*geodb).modTime.Equal(modTime) { return nil } (*geodb).db = db (*geodb).modTime = modTime log.Infof("Loading %s database (built on %s)", filename, (*geodb).modTime) return nil } // GeoIPError holds errors while loading the different databases type GeoIPError struct { Errors []error loaded int } func (e GeoIPError) Error() string { return "One or more GeoIP database could not be loaded" } // IsFatal returns true if the error is fatal func (e GeoIPError) IsFatal() bool { return e.loaded == len(e.Errors) } // LoadGeoIP loads the GeoIP databases into memory func (g *GeoIP) LoadGeoIP() error { var ret GeoIPError g.Lock() g.loadDB("GeoLite2-City.mmdb", &g.city, &ret) g.loadDB("GeoLite2-ASN.mmdb", &g.asn, &ret) g.Unlock() if len(ret.Errors) > 0 { return ret } return nil } // GetRecord return informations about the given ip address // (works in IPv4 and v6) func (g *GeoIP) GetRecord(ip string) (ret GeoIPRecord) { addr := net.ParseIP(ip) if addr == nil { return GeoIPRecord{} } type CityDb struct { City struct { Names struct { English string `maxminddb:"en"` } `maxminddb:"names"` } `maxminddb:"city"` Country struct { IsoCode string `maxminddb:"iso_code"` Names struct { English string `maxminddb:"en"` } `maxminddb:"names"` } `maxminddb:"country"` Continent struct { Code string `maxminddb:"code"` } `maxminddb:"continent"` Location struct { Latitude float64 `maxminddb:"latitude"` Longitude float64 `maxminddb:"longitude"` } `maxminddb:"location"` } type ASNDb struct { AutonomousSystemNumber uint `maxminddb:"autonomous_system_number"` AutonomousSystemOrg string `maxminddb:"autonomous_system_organization"` } var err error var cityDb CityDb var asnDb ASNDb g.RLock() defer g.RUnlock() if g.city != nil && g.city.db != nil { err = g.city.db.Lookup(addr, &cityDb) if err != nil { 
return GeoIPRecord{} } ret.CountryCode = cityDb.Country.IsoCode ret.ContinentCode = cityDb.Continent.Code ret.City = cityDb.City.Names.English ret.Country = cityDb.Country.Names.English ret.Latitude = float32(cityDb.Location.Latitude) ret.Longitude = float32(cityDb.Location.Longitude) } if g.asn != nil && g.asn.db != nil { err = g.asn.db.Lookup(addr, &asnDb) if err != nil { return GeoIPRecord{} } ret.ASName = asnDb.AutonomousSystemOrg ret.ASNum = asnDb.AutonomousSystemNumber } return ret } // IsIPv6 returns true if the given address is of version 6 func (g *GeoIP) IsIPv6(ip string) bool { return strings.Contains(ip, ":") } // IsValid returns true if the given address is valid func (g *GeoIPRecord) IsValid() bool { return len(g.CountryCode) > 0 } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/network/geoip_test.go000066400000000000000000000072261411706463700233730ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package network import ( "net" "reflect" "strings" "testing" "time" ) type CityDb struct { City struct { Names struct { En string } } Country struct { Iso_Code string Names struct { En string } } Continent struct { Code string } Location struct { Latitude float64 Longitude float64 } } type ASNDb struct { Autonomous_system_number uint Autonomous_system_organization string } func TestNewGeoIP(t *testing.T) { g := NewGeoIP() if g == nil { t.Fatalf("Expected valid pointer, got nil") } } func TestGeoIP_GetRecord(t *testing.T) { g := NewGeoIP() mockcity := &geoipDB{ filename: "city.mmdb", modTime: time.Now(), db: &GeoIPMockCity{}, } mockasn := &geoipDB{ filename: "asn.mmdb", modTime: time.Now(), db: &GeoIPMockASN{}, } g.city = mockcity g.asn = mockasn /* city */ r := g.GetRecord("127.0.0.1") if r.City != "test1" { t.Fatalf("Invalid response got %s, expected test1", r.City) } if r.CountryCode != "test2" { t.Fatalf("Invalid response got %s, expected test2", r.CountryCode) } if r.Country != "test3" { t.Fatalf("Invalid response got %s, expected test3", r.Country) } if r.ContinentCode != "test4" { t.Fatalf("Invalid response got %s, expected test4", r.ContinentCode) } if r.Latitude != 24 { t.Fatalf("Invalid response got %f, expected 24", r.Latitude) } if r.Longitude != 42 { t.Fatalf("Invalid response got %f, expected 42", r.Longitude) } if r.ASNum != 42 { t.Fatalf("Invalid response got %d, expected 42", r.ASNum) } if r.ASName != "forty two" { t.Fatalf("Invalid response got %s, expected forty two", r.ASName) } } func TestIsIPv6(t *testing.T) { g := NewGeoIP() if g.IsIPv6("192.168.0.1") == true { t.Fatalf("Expected ipv4, got ipv6") } if g.IsIPv6("::1") == false { t.Fatalf("Expected ipv6, got ipv4") } if g.IsIPv6("fe80::801a:2cff:fe80:315c") == false { t.Fatalf("Expected ipv6, got ipv4") } } func TestGeoIPRecord_IsValid(t *testing.T) { var r GeoIPRecord if r.IsValid() == true { t.Fatalf("Expected false, got true") } r = GeoIPRecord{ CountryCode: "FR", } if r.IsValid() == false { t.Fatalf("Expected true, got false") } } /* MOCK */ type GeoIPMockCity struct { } func (g *GeoIPMockCity) Lookup(ipAddress net.IP, result interface{}) error { var citydb CityDb citydb.City.Names.En = "test1" citydb.Country.Iso_Code = "test2" citydb.Country.Names.En = "test3" citydb.Continent.Code = "test4" citydb.Location.Latitude = 24 citydb.Location.Longitude = 42 CopyStruct(&citydb, result) return nil } type GeoIPMockASN struct { } func (g *GeoIPMockASN) Lookup(ipAddress net.IP, result interface{}) error { var asnDb ASNDb asnDb.Autonomous_system_number = 42 
asnDb.Autonomous_system_organization = "forty two" CopyStruct(&asnDb, result) return nil } func CopyStruct(src interface{}, dst interface{}) { s := reflect.Indirect(reflect.ValueOf(src)) d := reflect.Indirect(reflect.ValueOf(dst)) CopyStructRec(s, d) } func CopyStructRec(s, d reflect.Value) { st := s.Type() dt := d.Type() typeOft1 := s.Type() typeOft2 := d.Type() for i := 0; i < s.NumField(); i++ { sf := s.Field(i) if st.Field(i).Type.Kind() == reflect.Struct { for j := 0; j < d.NumField(); j++ { if typeOft1.Field(i).Name == typeOft2.Field(j).Name { CopyStructRec(s.Field(i), d.Field(j)) goto cont } } } for j := 0; j < d.NumField(); j++ { df := d.Field(j) dtf := dt.Field(j) dsttag := dtf.Tag.Get("maxminddb") if strings.ToLower(typeOft1.Field(i).Name) == strings.ToLower(dsttag) { df.Set(reflect.Value(sf)) break } } cont: } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/network/utils.go000066400000000000000000000024201411706463700223600ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package network import ( "net" "strings" ) // LookupMirrorIP returns the IP address of a mirror and returns an error // if the DNS has more than one address func LookupMirrorIP(host string) (string, error) { addrs, err := net.LookupIP(host) if err != nil { return "", err } // A mirror with multiple IP address is a problem // since we can't determine the exact position of // the server. if len(addrs) > 1 { err = ErrMultipleAddresses } return addrs[0].String(), err } // RemoteIPFromAddr removes the port from a remote address (x.x.x.x:yyyy) func RemoteIPFromAddr(remoteAddr string) string { return remoteAddr[:strings.LastIndex(remoteAddr, ":")] } // ExtractRemoteIP extracts the remote IP from an X-Forwarded-For header func ExtractRemoteIP(XForwardedFor string) string { addresses := strings.Split(XForwardedFor, ",") if len(addresses) > 0 { // The left-most address is supposed to be the original client address. // Each successive are added by proxies. In most cases we should probably // take the last address but in case of optimization services this will // probably not work. For now we'll always take the original one. 
return strings.TrimSpace(addresses[0]) } return "" } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/network/utils_test.go000066400000000000000000000014151411706463700234220ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package network import ( "testing" ) func TestRemoteIpFromAddr(t *testing.T) { r := RemoteIPFromAddr("127.0.0.1:8080") if r != "127.0.0.1" { t.Fatalf("Expected '127.0.0.1', got %s", r) } r = RemoteIPFromAddr("[::1]:8080") if r != "[::1]" { t.Fatalf("Expected '[::1]', got %s", r) } r = RemoteIPFromAddr(":8080") if r != "" { t.Fatalf("Expected '', got %s", r) } } func TestExtractRemoteIP(t *testing.T) { r := ExtractRemoteIP("192.168.0.1, 192.168.0.2, 192.168.0.3") if r != "192.168.0.1" { t.Fatalf("Expected '192.168.0.1', got %s", r) } r = ExtractRemoteIP("192.168.0.1,192.168.0.2,192.168.0.3") if r != "192.168.0.1" { t.Fatalf("Expected '192.168.0.1', got %s", r) } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/process/000077500000000000000000000000001411706463700206605ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/process/process.go000066400000000000000000000106611411706463700226710ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package process import ( "errors" "fmt" "io/ioutil" "net" "os" "os/exec" "path" "strconv" "syscall" "github.com/etix/mirrorbits/core" "github.com/op/go-logging" ) var ( // Compile time variable defaultPidFile string ) var ( // ErrInvalidfd is returned when the given file descriptor is invalid ErrInvalidfd = errors.New("invalid file descriptor") log = logging.MustGetLogger("main") ) // Relaunch launches {self} as a child process passing listener details // to provide a seamless binary upgrade. 
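// ---------------------------------------------------------------------------
// Editor's note (illustrative summary, not part of the original source):
// The seamless binary upgrade implemented below works in three steps:
//   1. Relaunch duplicates the listener's file descriptor, exports it to the
//      child through the OLD_FD, OLD_NAME and OLD_PPID environment variables,
//      and starts a new copy of the current binary with that descriptor among
//      its inherited files.
//   2. In the child, Recover rebuilds a net.Listener from OLD_FD/OLD_NAME via
//      net.FileListener and returns the parent's pid read from OLD_PPID.
//   3. Once the child is serving, KillParent sends SIGQUIT to the old process
//      so it can drain and exit gracefully.
// A hypothetical call sequence in the child would therefore look like:
//
//	if l, ppid, err := process.Recover(); err == nil {
//		// ... start serving on l ...
//		_ = process.KillParent(ppid)
//	}
// ---------------------------------------------------------------------------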
func Relaunch(l net.Listener) error { argv0, err := exec.LookPath(os.Args[0]) if err != nil { return err } if _, err := os.Stat(argv0); err != nil { return err } wd, err := os.Getwd() if err != nil { return err } var file *os.File switch t := l.(type) { case *net.TCPListener: file, err = t.File() case *net.UnixListener: file, err = t.File() default: return ErrInvalidfd } if err != nil { return err } fd := file.Fd() sysfile := file.Name() listener, ok := l.(*net.TCPListener) if ok { listenerFile, err := listener.File() if err != nil { return err } fd = listenerFile.Fd() sysfile = listenerFile.Name() } if fd < uintptr(syscall.Stderr) { return ErrInvalidfd } if err := os.Setenv("OLD_FD", fmt.Sprint(fd)); err != nil { return err } if err := os.Setenv("OLD_NAME", fmt.Sprintf("tcp:%s->", l.Addr().String())); err != nil { return err } if err := os.Setenv("OLD_PPID", fmt.Sprint(syscall.Getpid())); err != nil { return err } files := make([]*os.File, fd+1) files[syscall.Stdin] = os.Stdin files[syscall.Stdout] = os.Stdout files[syscall.Stderr] = os.Stderr files[fd] = os.NewFile(fd, sysfile) p, err := os.StartProcess(argv0, os.Args, &os.ProcAttr{ Dir: wd, Env: os.Environ(), Files: files, Sys: &syscall.SysProcAttr{}, }) if err != nil { return err } log.Infof("Spawned child %d\n", p.Pid) return nil } // Recover from a seamless binary upgrade and use an already // existing listener to take over the connections func Recover() (l net.Listener, ppid int, err error) { var fd uintptr _, err = fmt.Sscan(os.Getenv("OLD_FD"), &fd) if err != nil { return } var i net.Listener i, err = net.FileListener(os.NewFile(fd, os.Getenv("OLD_NAME"))) if err != nil { return } switch i.(type) { case *net.TCPListener: l = i.(*net.TCPListener) case *net.UnixListener: l = i.(*net.UnixListener) default: err = fmt.Errorf("file descriptor is %T not *net.TCPListener or *net.UnixListener", i) return } if err = syscall.Close(int(fd)); err != nil { return } _, err = fmt.Sscan(os.Getenv("OLD_PPID"), &ppid) if err != nil { return } return } // KillParent sends a signal to make the parent exit gracefully with SIGQUIT func KillParent(ppid int) error { log.Info("Asking parent to quit") return syscall.Kill(ppid, syscall.SIGQUIT) } // GetPidLocation finds the location to store our pid file // and fallback to /var/run if none found func GetPidLocation() string { if core.PidFile == "" { // Runtime rdir := os.Getenv("XDG_RUNTIME_DIR") if rdir == "" { if defaultPidFile == "" { // Compile time return "/run/mirrorbits/mirrorbits.pid" // Fallback } return defaultPidFile } return rdir + "/mirrorbits.pid" } return core.PidFile } // WritePidFile writes the current pid file to disk func WritePidFile() { // Get the pid destination p := GetPidLocation() // Create the whole directory path if err := os.MkdirAll(path.Dir(p), 0755); err != nil { log.Errorf("Unable to write pid file: %v", err) } // Get our own PID and write it pid := strconv.Itoa(os.Getpid()) if err := ioutil.WriteFile(p, []byte(pid), 0644); err != nil { log.Errorf("Unable to write pid file: %v", err) } } // RemovePidFile removes the current pid file func RemovePidFile() { pidFile := GetPidLocation() if _, err := os.Stat(pidFile); !os.IsNotExist(err) { // Ensures we don't remove our forked process pid file // This can happen during seamless binary upgrade if GetRemoteProcPid() == os.Getpid() { if err = os.Remove(pidFile); err != nil { log.Errorf("Unable to remove pid file: %v", err) } } } } // GetRemoteProcPid gets the pid as it appears in the pid file (maybe not ours) func GetRemoteProcPid() 
int { b, err := ioutil.ReadFile(GetPidLocation()) if err != nil { return -1 } i, err := strconv.ParseInt(string(b), 10, 0) if err != nil { return -1 } return int(i) } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/rpc/000077500000000000000000000000001411706463700177665ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/rpc/interceptors.go000066400000000000000000000017251411706463700230430ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package rpc import ( "context" . "github.com/etix/mirrorbits/config" grpc "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/metadata" "google.golang.org/grpc/status" ) func StreamInterceptor(srv interface{}, stream grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error { if err := authorize(stream.Context()); err != nil { return err } return handler(srv, stream) } func UnaryInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) { if err := authorize(ctx); err != nil { return nil, err } return handler(ctx, req) } func authorize(ctx context.Context) error { if md, ok := metadata.FromIncomingContext(ctx); ok { if md["password"][0] == GetConfig().RPCPassword { return nil } } return status.Error(codes.Unauthenticated, "access denied") } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/rpc/rpc.go000066400000000000000000000432371411706463700211120ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package rpc import ( "fmt" "log" "net" "net/url" "os" "regexp" "runtime" "strconv" "strings" "sync" "syscall" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/network" "github.com/etix/mirrorbits/scan" "github.com/etix/mirrorbits/utils" "github.com/golang/protobuf/ptypes" "github.com/golang/protobuf/ptypes/empty" "github.com/gomodule/redigo/redis" "github.com/pkg/errors" context "golang.org/x/net/context" grpc "google.golang.org/grpc" "google.golang.org/grpc/codes" "google.golang.org/grpc/reflection" "google.golang.org/grpc/status" "gopkg.in/yaml.v2" ) var ( // ErrNameAlreadyTaken is returned when the request name is already taken by another mirror ErrNameAlreadyTaken = errors.New("name already taken") ) // CLI object handles the server side RPC of the CLI type CLI struct { listener net.Listener server *grpc.Server sig chan<- os.Signal redis *database.Redis cache *mirrors.Cache } func (c *CLI) Start() error { var err error c.listener, err = net.Listen("tcp", GetConfig().RPCListenAddress) if err != nil { return err } c.server = grpc.NewServer( grpc.UnaryInterceptor(UnaryInterceptor), grpc.StreamInterceptor(StreamInterceptor), ) RegisterCLIServer(c.server, c) reflection.Register(c.server) go func() { if err := c.server.Serve(c.listener); err != nil { log.Fatalf("failed to serve rpc: %v", err) } }() return nil } func (c *CLI) Close() error { c.server.Stop() return c.listener.Close() } func (c *CLI) SetSignals(sig chan<- os.Signal) { c.sig = sig } func (c *CLI) SetDatabase(r *database.Redis) { c.redis = r } func (c *CLI) SetCache(cache *mirrors.Cache) { c.cache = cache } func (c *CLI) Ping(context.Context, *empty.Empty) (*empty.Empty, error) { return &empty.Empty{}, nil } func (c *CLI) GetVersion(context.Context, *empty.Empty) (*VersionReply, error) { return &VersionReply{ Version: core.VERSION, Build: core.BUILD + 
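// ---------------------------------------------------------------------------
// Editor's note (illustrative summary, not part of the original source):
// Authentication of the CLI RPCs is handled by the interceptors above: both
// StreamInterceptor and UnaryInterceptor call authorize(), which reads the
// "password" entry from the incoming gRPC metadata and compares it with
// GetConfig().RPCPassword, returning codes.Unauthenticated on mismatch. A
// client therefore has to attach that metadata to every call, for instance:
//
//	ctx := metadata.AppendToOutgoingContext(context.Background(),
//		"password", rpcPassword) // rpcPassword: hypothetical variable
//	client.Ping(ctx, &empty.Empty{})
// ---------------------------------------------------------------------------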
core.DEV, GoVersion: runtime.Version(), OS: runtime.GOOS, Arch: runtime.GOARCH, GoMaxProcs: int32(runtime.GOMAXPROCS(0)), }, nil } func (c *CLI) Upgrade(ctx context.Context, in *empty.Empty) (*empty.Empty, error) { select { case c.sig <- syscall.SIGUSR2: default: return nil, status.Error(codes.Internal, "signal handler not ready") } return &empty.Empty{}, nil } func (c *CLI) Reload(ctx context.Context, in *empty.Empty) (*empty.Empty, error) { select { case c.sig <- syscall.SIGHUP: default: return nil, status.Error(codes.Internal, "signal handler not ready") } return &empty.Empty{}, nil } func (c *CLI) MatchMirror(ctx context.Context, in *MatchRequest) (*MatchReply, error) { if c.redis == nil { return nil, status.Error(codes.Internal, "database not ready") } mirrors, err := c.redis.GetListOfMirrors() if err != nil { return nil, errors.Wrap(err, "can't fetch the list of mirrors") } reply := &MatchReply{} for id, name := range mirrors { if strings.Contains(strings.ToLower(name), strings.ToLower(in.Pattern)) { reply.Mirrors = append(reply.Mirrors, &MirrorID{ ID: int32(id), Name: name, }) } } return reply, nil } func (c *CLI) ChangeStatus(ctx context.Context, in *ChangeStatusRequest) (*empty.Empty, error) { if in.ID <= 0 { return nil, status.Error(codes.FailedPrecondition, "invalid mirror id") } var err error switch in.Enabled { case true: err = mirrors.EnableMirror(c.redis, int(in.ID)) case false: err = mirrors.DisableMirror(c.redis, int(in.ID)) } return &empty.Empty{}, err } func (c *CLI) List(ctx context.Context, in *empty.Empty) (*MirrorListReply, error) { conn, err := c.redis.Connect() if err != nil { return nil, err } defer conn.Close() mirrorsIDs, err := c.redis.GetListOfMirrors() if err != nil { return nil, errors.Wrap(err, "can't fetch the list of mirrors") } conn.Send("MULTI") for id := range mirrorsIDs { conn.Send("HGETALL", fmt.Sprintf("MIRROR_%d", id)) } res, err := redis.Values(conn.Do("EXEC")) if err != nil { return nil, errors.Wrap(err, "database error") } reply := &MirrorListReply{} for _, e := range res { var mirror mirrors.Mirror res, ok := e.([]interface{}) if !ok { return nil, errors.New("typecast failed") } err = redis.ScanStruct([]interface{}(res), &mirror) if err != nil { return nil, errors.Wrap(err, "scan struct failed") } m, err := MirrorToRPC(&mirror) if err != nil { return nil, err } reply.Mirrors = append(reply.Mirrors, m) } return reply, nil } func (c *CLI) MirrorInfo(ctx context.Context, in *MirrorIDRequest) (*Mirror, error) { if in.ID <= 0 { return nil, status.Error(codes.FailedPrecondition, "invalid mirror id") } conn, err := c.redis.Connect() if err != nil { return nil, err } defer conn.Close() m, err := redis.Values(conn.Do("HGETALL", fmt.Sprintf("MIRROR_%d", in.ID))) if err != nil { return nil, err } var mi mirrors.Mirror err = redis.ScanStruct(m, &mi) if err != nil { return nil, err } rpcm, err := MirrorToRPC(&mi) if err != nil { return nil, err } return rpcm, nil } func (c *CLI) AddMirror(ctx context.Context, in *Mirror) (*AddMirrorReply, error) { mirror, err := MirrorFromRPC(in) if err != nil { return nil, err } if mirror.ID != 0 { return nil, status.Error(codes.FailedPrecondition, "unexpected ID") } u, err := url.Parse(mirror.HttpURL) if err != nil { return nil, errors.Wrap(err, "can't parse http url") } reply := &AddMirrorReply{} ip, err := network.LookupMirrorIP(u.Host) if err == network.ErrMultipleAddresses { reply.Warnings = append(reply.Warnings, "Warning: the hostname returned more than one address. 
Assuming they're sharing the same location.") } else if err != nil { return nil, errors.Wrap(err, "IP lookup failed") } geo := network.NewGeoIP() if err := geo.LoadGeoIP(); err != nil { return nil, errors.WithStack(err) } geoRec := geo.GetRecord(ip) if geoRec.IsValid() { mirror.Latitude = geoRec.Latitude mirror.Longitude = geoRec.Longitude mirror.ContinentCode = geoRec.ContinentCode mirror.CountryCodes = geoRec.CountryCode mirror.Asnum = geoRec.ASNum reply.Latitude = geoRec.Latitude reply.Longitude = geoRec.Longitude reply.Continent = geoRec.ContinentCode reply.Country = geoRec.Country reply.ASN = fmt.Sprintf("%s (%d)", geoRec.ASName, geoRec.ASNum) } else { reply.Warnings = append(reply.Warnings, "Warning: unable to guess the geographic location of this mirror") } return reply, c.setMirror(mirror) } func (c *CLI) UpdateMirror(ctx context.Context, in *Mirror) (*UpdateMirrorReply, error) { mirror, err := MirrorFromRPC(in) if err != nil { return nil, err } if mirror.ID <= 0 { return nil, status.Error(codes.FailedPrecondition, "invalid mirror id") } conn, err := c.redis.Connect() if err != nil { return &UpdateMirrorReply{}, err } defer conn.Close() m, err := redis.Values(conn.Do("HGETALL", fmt.Sprintf("MIRROR_%d", mirror.ID))) if err != nil { return nil, err } var original mirrors.Mirror err = redis.ScanStruct(m, &original) if err != nil { return nil, err } diff := createDiff(&original, mirror) return &UpdateMirrorReply{ Diff: diff, }, c.setMirror(mirror) } func createDiff(mirror1, mirror2 *mirrors.Mirror) (out string) { yamlo, _ := yaml.Marshal(mirror1) yamln, _ := yaml.Marshal(mirror2) splito := strings.Split(string(yamlo), "\n") splitn := strings.Split(string(yamln), "\n") for i, l := range splito { if l != splitn[i] { out += fmt.Sprintf("- %s\n+ %s\n", l, splitn[i]) } } return } func (c *CLI) setMirror(mirror *mirrors.Mirror) error { conn, err := c.redis.Connect() if err != nil { return err } defer conn.Close() mirrorsIDs, err := c.redis.GetListOfMirrors() if err != nil { return errors.Wrap(err, "can't fetch the list of mirrors") } isUpdate := false for id, name := range mirrorsIDs { if id == mirror.ID { isUpdate = true } if mirror.ID != id && name == mirror.Name { return ErrNameAlreadyTaken } } if mirror.ID <= 0 { // Generate a new ID mirror.ID, err = redis.Int(conn.Do("INCR", "LAST_MID")) if err != nil { return errors.Wrap(err, "failed creating a new id") } } // Reformat contry codes mirror.CountryCodes = utils.SanitizeLocationCodes(mirror.CountryCodes) mirror.ExcludedCountryCodes = utils.SanitizeLocationCodes(mirror.ExcludedCountryCodes) // Reformat continent code mirror.ContinentCode = utils.SanitizeLocationCodes(mirror.ContinentCode) // Normalize URLs if mirror.HttpURL != "" { mirror.HttpURL = utils.NormalizeURL(mirror.HttpURL) } if mirror.RsyncURL != "" { mirror.RsyncURL = utils.NormalizeURL(mirror.RsyncURL) } if mirror.FtpURL != "" { mirror.FtpURL = utils.NormalizeURL(mirror.FtpURL) } // Save the values back into redis conn.Send("MULTI") conn.Send("HMSET", fmt.Sprintf("MIRROR_%d", mirror.ID), "ID", mirror.ID, "name", mirror.Name, "http", mirror.HttpURL, "rsync", mirror.RsyncURL, "ftp", mirror.FtpURL, "sponsorName", mirror.SponsorName, "sponsorURL", mirror.SponsorURL, "sponsorLogo", mirror.SponsorLogoURL, "adminName", mirror.AdminName, "adminEmail", mirror.AdminEmail, "customData", mirror.CustomData, "continentOnly", mirror.ContinentOnly, "countryOnly", mirror.CountryOnly, "asOnly", mirror.ASOnly, "score", mirror.Score, "latitude", mirror.Latitude, "longitude", mirror.Longitude, 
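// ---------------------------------------------------------------------------
// Editor's note (illustrative summary, not part of the original source):
// setMirror, the helper shared by AddMirror and UpdateMirror, performs the
// following steps: it rejects a name already used by another mirror, assigns
// a new ID from the LAST_MID counter (INCR) when the mirror has no ID yet,
// sanitizes the country/continent codes, normalizes the HTTP/rsync/FTP URLs,
// writes all fields back with HMSET MIRROR_<id> plus HSET MIRRORS <id> <name>
// in a MULTI/EXEC transaction, publishes a MIRROR_UPDATE event, and finally
// records an "added" or "edited" entry in the mirror's log.
// ---------------------------------------------------------------------------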
"continentCode", mirror.ContinentCode, "countryCodes", mirror.CountryCodes, "excludedCountryCodes", mirror.ExcludedCountryCodes, "asnum", mirror.Asnum, "comment", mirror.Comment, "allowredirects", mirror.AllowRedirects, "enabled", mirror.Enabled) // The name of the mirror has been changed. conn.Send("HSET", "MIRRORS", mirror.ID, mirror.Name) _, err = conn.Do("EXEC") if err != nil { return errors.Wrap(err, "couldn't save the mirror configuration") } // Publish update database.Publish(conn, database.MIRROR_UPDATE, strconv.Itoa(mirror.ID)) if isUpdate { // This was an update of an existing mirror mirrors.PushLog(c.redis, mirrors.NewLogEdited(mirror.ID)) } else { // We just added a new mirror mirrors.PushLog(c.redis, mirrors.NewLogAdded(mirror.ID)) } return nil } func (c *CLI) RemoveMirror(ctx context.Context, in *MirrorIDRequest) (*empty.Empty, error) { if in.ID <= 0 { return nil, status.Error(codes.FailedPrecondition, "invalid mirror id") } conn, err := c.redis.Connect() if err != nil { return nil, err } defer conn.Close() // First disable the mirror err = mirrors.DisableMirror(c.redis, int(in.ID)) if err != nil { return nil, errors.Wrap(err, "unable to disable the mirror") } // Get all files supported by the given mirror files, err := redis.Strings(conn.Do("SMEMBERS", fmt.Sprintf("MIRRORFILES_%d", in.ID))) if err != nil { return nil, errors.Wrap(err, "unable to fetch the file list") } conn.Send("MULTI") // Remove each FILEINFO / FILEMIRRORS for _, file := range files { conn.Send("DEL", fmt.Sprintf("FILEINFO_%d_%s", in.ID, file)) conn.Send("SREM", fmt.Sprintf("FILEMIRRORS_%s", file), in.ID) conn.Send("PUBLISH", database.MIRROR_FILE_UPDATE, fmt.Sprintf("%d %s", in.ID, file)) } // Remove all other keys conn.Send("DEL", fmt.Sprintf("MIRROR_%d", in.ID), fmt.Sprintf("MIRRORFILES_%d", in.ID), fmt.Sprintf("MIRRORFILESTMP_%d", in.ID), fmt.Sprintf("HANDLEDFILES_%d", in.ID), fmt.Sprintf("SCANNING_%d", in.ID), fmt.Sprintf("MIRRORLOGS_%d", in.ID)) // Remove the last reference conn.Send("HDEL", "MIRRORS", in.ID) _, err = conn.Do("EXEC") if err != nil { return nil, errors.Wrap(err, "operation failed") } // Publish update database.Publish(conn, database.MIRROR_UPDATE, strconv.Itoa(int(in.ID))) return &empty.Empty{}, nil } func (c *CLI) RefreshRepository(ctx context.Context, in *RefreshRepositoryRequest) (*empty.Empty, error) { return &empty.Empty{}, scan.ScanSource(c.redis, in.Rehash, nil) } func (c *CLI) ScanMirror(ctx context.Context, in *ScanMirrorRequest) (*ScanMirrorReply, error) { if in.ID <= 0 { return nil, status.Error(codes.FailedPrecondition, "invalid mirror id") } conn, err := c.redis.Connect() if err != nil { return nil, err } defer conn.Close() // Check if the local repository has been scanned already exists, err := redis.Bool(conn.Do("EXISTS", "FILES")) if err != nil { return nil, err } if !exists { return nil, status.Error(codes.FailedPrecondition, "local repository not yet indexed. 
You should run 'refresh' first!") } key := fmt.Sprintf("MIRROR_%d", in.ID) m, err := redis.Values(conn.Do("HGETALL", key)) if err != nil { return nil, err } var mirror mirrors.Mirror err = redis.ScanStruct(m, &mirror) if err != nil { return nil, err } var wg sync.WaitGroup trace := scan.NewTraceHandler(c.redis, make(<-chan struct{})) wg.Add(1) go func() { defer wg.Done() err := trace.GetLastUpdate(mirror) if err != nil && err != scan.ErrNoTrace { if numError, ok := err.(*strconv.NumError); ok { if numError.Err == strconv.ErrSyntax { //log.Warningf("[%s] parsing trace file failed: %s is not a valid timestamp", mirror.Name, strconv.Quote(numError.Num)) return } } else { //log.Warningf("[%s] fetching trace file failed: %s", mirror.Name, err) } } }() err = scan.ErrNoSyncMethod var res *scan.ScanResult if in.Protocol == ScanMirrorRequest_ALL { // Use rsync (if applicable) and fallback to FTP if mirror.RsyncURL != "" { res, err = scan.Scan(core.RSYNC, c.redis, c.cache, mirror.RsyncURL, mirror.ID, ctx.Done()) } if err != nil && mirror.FtpURL != "" { res, err = scan.Scan(core.FTP, c.redis, c.cache, mirror.FtpURL, mirror.ID, ctx.Done()) } } else { // Use the requested protocol if in.Protocol == ScanMirrorRequest_RSYNC && mirror.RsyncURL != "" { res, err = scan.Scan(core.RSYNC, c.redis, c.cache, mirror.RsyncURL, mirror.ID, ctx.Done()) } else if in.Protocol == ScanMirrorRequest_FTP && mirror.FtpURL != "" { res, err = scan.Scan(core.FTP, c.redis, c.cache, mirror.FtpURL, mirror.ID, ctx.Done()) } } if err != nil { return nil, errors.New(fmt.Sprintf("scanning %s failed: %s", mirror.Name, err)) } reply := &ScanMirrorReply{ FilesIndexed: res.FilesIndexed, KnownIndexed: res.KnownIndexed, Removed: res.Removed, TZOffsetMs: res.TZOffsetMs, } // Finally enable the mirror if requested if err == nil && in.AutoEnable == true { if err := mirrors.EnableMirror(c.redis, mirror.ID); err != nil { return nil, errors.Wrap(err, "couldn't enable the mirror") } reply.Enabled = true } wg.Wait() return reply, nil } func (c *CLI) StatsFile(ctx context.Context, in *StatsFileRequest) (*StatsFileReply, error) { conn, err := c.redis.Connect() if err != nil { return nil, err } defer conn.Close() // Convert the timestamps start, err := ptypes.Timestamp(in.DateStart) if err != nil { return nil, err } end, err := ptypes.Timestamp(in.DateEnd) if err != nil { return nil, err } // Compile the regex pattern re, err := regexp.Compile(in.Pattern) if err != nil { return nil, err } // Generate the list of redis key for the period tkcoverage := utils.TimeKeyCoverage(start, end) // Prepare the transaction conn.Send("MULTI") for _, k := range tkcoverage { conn.Send("HGETALL", "STATS_FILE_"+k) } stats, err := redis.Values(conn.Do("EXEC")) if err != nil { return nil, errors.Wrap(err, "can't fetch stats") } reply := &StatsFileReply{ Files: make(map[string]int64), } for _, res := range stats { line, ok := res.([]interface{}) if !ok { return nil, errors.Wrap(err, "typecast failed") } else { stats := []interface{}(line) for i := 0; i < len(stats); i += 2 { path, _ := redis.String(stats[i], nil) matched := re.MatchString(path) if matched { reqs, _ := redis.Int64(stats[i+1], nil) reply.Files[path] += reqs } } } } return reply, nil } func (c *CLI) StatsMirror(ctx context.Context, in *StatsMirrorRequest) (*StatsMirrorReply, error) { if in.ID <= 0 { return nil, status.Error(codes.FailedPrecondition, "invalid mirror id") } conn, err := c.redis.Connect() if err != nil { return nil, err } defer conn.Close() // Convert the timestamps start, err := 
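// ---------------------------------------------------------------------------
// Editor's note (illustrative summary, not part of the original source):
// ScanMirror, above, refuses to run until the local repository has been
// indexed (the FILES key must exist; otherwise it tells the operator to run
// 'refresh' first). When the requested protocol is ALL it tries rsync first
// and falls back to FTP if the rsync scan fails or no rsync URL is set;
// otherwise it uses only the protocol named in the request. On success it
// reports the number of files indexed, known and removed plus the detected
// timezone offset, and enables the mirror when AutoEnable was requested.
// ---------------------------------------------------------------------------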
ptypes.Timestamp(in.DateStart) if err != nil { return nil, err } end, err := ptypes.Timestamp(in.DateEnd) if err != nil { return nil, err } // Generate the list of redis key for the period tkcoverage := utils.TimeKeyCoverage(start, end) conn.Send("MULTI") // Fetch the stats for _, k := range tkcoverage { conn.Send("HGET", "STATS_MIRROR_"+k, in.ID) conn.Send("HGET", "STATS_MIRROR_BYTES_"+k, in.ID) } stats, err := redis.Strings(conn.Do("EXEC")) if err != nil { return nil, errors.Wrap(err, "can't fetch stats") } // Fetch the mirror struct m, err := redis.Values(conn.Do("HGETALL", fmt.Sprintf("MIRROR_%d", in.ID))) if err != nil { return nil, errors.WithMessage(err, "can't fetch mirror") } reply := &StatsMirrorReply{} var mirror mirrors.Mirror err = redis.ScanStruct(m, &mirror) if err != nil { return nil, errors.Wrap(err, "stats error") } reply.Mirror, err = MirrorToRPC(&mirror) if err != nil { return nil, errors.Wrap(err, "stats error") } for i := 0; i < len(stats); i += 2 { v1, _ := strconv.ParseInt(stats[i], 10, 64) v2, _ := strconv.ParseInt(stats[i+1], 10, 64) reply.Requests += v1 reply.Bytes += v2 } return reply, nil } func (c *CLI) GetMirrorLogs(ctx context.Context, in *GetMirrorLogsRequest) (*GetMirrorLogsReply, error) { if in.ID <= 0 { return nil, status.Error(codes.FailedPrecondition, "invalid mirror id") } lines, err := mirrors.ReadLogs(c.redis, int(in.ID), int(in.MaxResults)) if err != nil { return nil, errors.Wrap(err, "mirror logs error") } return &GetMirrorLogsReply{Line: lines}, nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/rpc/rpc.pb.go000066400000000000000000002110421411706463700215010ustar00rootroot00000000000000// Code generated by protoc-gen-go. DO NOT EDIT. // source: rpc.proto package rpc import ( context "context" fmt "fmt" proto "github.com/golang/protobuf/proto" empty "github.com/golang/protobuf/ptypes/empty" timestamp "github.com/golang/protobuf/ptypes/timestamp" grpc "google.golang.org/grpc" codes "google.golang.org/grpc/codes" status "google.golang.org/grpc/status" math "math" ) // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf // This is a compile-time assertion to ensure that this generated file // is compatible with the proto package it is being compiled against. // A compilation error at this line likely means your copy of the // proto package needs to be updated. 
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package type ScanMirrorRequest_Method int32 const ( ScanMirrorRequest_ALL ScanMirrorRequest_Method = 0 ScanMirrorRequest_FTP ScanMirrorRequest_Method = 1 ScanMirrorRequest_RSYNC ScanMirrorRequest_Method = 2 ) var ScanMirrorRequest_Method_name = map[int32]string{ 0: "ALL", 1: "FTP", 2: "RSYNC", } var ScanMirrorRequest_Method_value = map[string]int32{ "ALL": 0, "FTP": 1, "RSYNC": 2, } func (x ScanMirrorRequest_Method) String() string { return proto.EnumName(ScanMirrorRequest_Method_name, int32(x)) } func (ScanMirrorRequest_Method) EnumDescriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{11, 0} } type VersionReply struct { Version string `protobuf:"bytes,1,opt,name=Version,proto3" json:"Version,omitempty"` Build string `protobuf:"bytes,2,opt,name=Build,proto3" json:"Build,omitempty"` GoVersion string `protobuf:"bytes,3,opt,name=GoVersion,proto3" json:"GoVersion,omitempty"` OS string `protobuf:"bytes,4,opt,name=OS,proto3" json:"OS,omitempty"` Arch string `protobuf:"bytes,5,opt,name=Arch,proto3" json:"Arch,omitempty"` GoMaxProcs int32 `protobuf:"varint,6,opt,name=GoMaxProcs,proto3" json:"GoMaxProcs,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *VersionReply) Reset() { *m = VersionReply{} } func (m *VersionReply) String() string { return proto.CompactTextString(m) } func (*VersionReply) ProtoMessage() {} func (*VersionReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{0} } func (m *VersionReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_VersionReply.Unmarshal(m, b) } func (m *VersionReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_VersionReply.Marshal(b, m, deterministic) } func (m *VersionReply) XXX_Merge(src proto.Message) { xxx_messageInfo_VersionReply.Merge(m, src) } func (m *VersionReply) XXX_Size() int { return xxx_messageInfo_VersionReply.Size(m) } func (m *VersionReply) XXX_DiscardUnknown() { xxx_messageInfo_VersionReply.DiscardUnknown(m) } var xxx_messageInfo_VersionReply proto.InternalMessageInfo func (m *VersionReply) GetVersion() string { if m != nil { return m.Version } return "" } func (m *VersionReply) GetBuild() string { if m != nil { return m.Build } return "" } func (m *VersionReply) GetGoVersion() string { if m != nil { return m.GoVersion } return "" } func (m *VersionReply) GetOS() string { if m != nil { return m.OS } return "" } func (m *VersionReply) GetArch() string { if m != nil { return m.Arch } return "" } func (m *VersionReply) GetGoMaxProcs() int32 { if m != nil { return m.GoMaxProcs } return 0 } type MatchRequest struct { Pattern string `protobuf:"bytes,1,opt,name=Pattern,proto3" json:"Pattern,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MatchRequest) Reset() { *m = MatchRequest{} } func (m *MatchRequest) String() string { return proto.CompactTextString(m) } func (*MatchRequest) ProtoMessage() {} func (*MatchRequest) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{1} } func (m *MatchRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MatchRequest.Unmarshal(m, b) } func (m *MatchRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MatchRequest.Marshal(b, m, deterministic) } func (m *MatchRequest) XXX_Merge(src proto.Message) { 
xxx_messageInfo_MatchRequest.Merge(m, src) } func (m *MatchRequest) XXX_Size() int { return xxx_messageInfo_MatchRequest.Size(m) } func (m *MatchRequest) XXX_DiscardUnknown() { xxx_messageInfo_MatchRequest.DiscardUnknown(m) } var xxx_messageInfo_MatchRequest proto.InternalMessageInfo func (m *MatchRequest) GetPattern() string { if m != nil { return m.Pattern } return "" } type Mirror struct { ID int32 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"` Name string `protobuf:"bytes,2,opt,name=Name,proto3" json:"Name,omitempty"` HttpURL string `protobuf:"bytes,3,opt,name=HttpURL,proto3" json:"HttpURL,omitempty"` RsyncURL string `protobuf:"bytes,4,opt,name=RsyncURL,proto3" json:"RsyncURL,omitempty"` FtpURL string `protobuf:"bytes,5,opt,name=FtpURL,proto3" json:"FtpURL,omitempty"` SponsorName string `protobuf:"bytes,6,opt,name=SponsorName,proto3" json:"SponsorName,omitempty"` SponsorURL string `protobuf:"bytes,7,opt,name=SponsorURL,proto3" json:"SponsorURL,omitempty"` SponsorLogoURL string `protobuf:"bytes,8,opt,name=SponsorLogoURL,proto3" json:"SponsorLogoURL,omitempty"` AdminName string `protobuf:"bytes,9,opt,name=AdminName,proto3" json:"AdminName,omitempty"` AdminEmail string `protobuf:"bytes,10,opt,name=AdminEmail,proto3" json:"AdminEmail,omitempty"` CustomData string `protobuf:"bytes,11,opt,name=CustomData,proto3" json:"CustomData,omitempty"` ContinentOnly bool `protobuf:"varint,12,opt,name=ContinentOnly,proto3" json:"ContinentOnly,omitempty"` CountryOnly bool `protobuf:"varint,13,opt,name=CountryOnly,proto3" json:"CountryOnly,omitempty"` ASOnly bool `protobuf:"varint,14,opt,name=ASOnly,proto3" json:"ASOnly,omitempty"` Score int32 `protobuf:"varint,15,opt,name=Score,proto3" json:"Score,omitempty"` Latitude float32 `protobuf:"fixed32,16,opt,name=Latitude,proto3" json:"Latitude,omitempty"` Longitude float32 `protobuf:"fixed32,17,opt,name=Longitude,proto3" json:"Longitude,omitempty"` ContinentCode string `protobuf:"bytes,18,opt,name=ContinentCode,proto3" json:"ContinentCode,omitempty"` CountryCodes string `protobuf:"bytes,19,opt,name=CountryCodes,proto3" json:"CountryCodes,omitempty"` ExcludedCountryCodes string `protobuf:"bytes,20,opt,name=ExcludedCountryCodes,proto3" json:"ExcludedCountryCodes,omitempty"` Asnum uint32 `protobuf:"varint,21,opt,name=Asnum,proto3" json:"Asnum,omitempty"` Comment string `protobuf:"bytes,22,opt,name=Comment,proto3" json:"Comment,omitempty"` Enabled bool `protobuf:"varint,23,opt,name=Enabled,proto3" json:"Enabled,omitempty"` Up bool `protobuf:"varint,24,opt,name=Up,proto3" json:"Up,omitempty"` ExcludeReason string `protobuf:"bytes,25,opt,name=ExcludeReason,proto3" json:"ExcludeReason,omitempty"` StateSince *timestamp.Timestamp `protobuf:"bytes,26,opt,name=StateSince,proto3" json:"StateSince,omitempty"` AllowRedirects int32 `protobuf:"varint,27,opt,name=AllowRedirects,proto3" json:"AllowRedirects,omitempty"` LastSync *timestamp.Timestamp `protobuf:"bytes,28,opt,name=LastSync,proto3" json:"LastSync,omitempty"` LastSuccessfulSync *timestamp.Timestamp `protobuf:"bytes,29,opt,name=LastSuccessfulSync,proto3" json:"LastSuccessfulSync,omitempty"` LastModTime *timestamp.Timestamp `protobuf:"bytes,30,opt,name=LastModTime,proto3" json:"LastModTime,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *Mirror) Reset() { *m = Mirror{} } func (m *Mirror) String() string { return proto.CompactTextString(m) } func (*Mirror) ProtoMessage() {} func (*Mirror) Descriptor() ([]byte, []int) { return 
fileDescriptor_77a6da22d6a3feb1, []int{2} } func (m *Mirror) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_Mirror.Unmarshal(m, b) } func (m *Mirror) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_Mirror.Marshal(b, m, deterministic) } func (m *Mirror) XXX_Merge(src proto.Message) { xxx_messageInfo_Mirror.Merge(m, src) } func (m *Mirror) XXX_Size() int { return xxx_messageInfo_Mirror.Size(m) } func (m *Mirror) XXX_DiscardUnknown() { xxx_messageInfo_Mirror.DiscardUnknown(m) } var xxx_messageInfo_Mirror proto.InternalMessageInfo func (m *Mirror) GetID() int32 { if m != nil { return m.ID } return 0 } func (m *Mirror) GetName() string { if m != nil { return m.Name } return "" } func (m *Mirror) GetHttpURL() string { if m != nil { return m.HttpURL } return "" } func (m *Mirror) GetRsyncURL() string { if m != nil { return m.RsyncURL } return "" } func (m *Mirror) GetFtpURL() string { if m != nil { return m.FtpURL } return "" } func (m *Mirror) GetSponsorName() string { if m != nil { return m.SponsorName } return "" } func (m *Mirror) GetSponsorURL() string { if m != nil { return m.SponsorURL } return "" } func (m *Mirror) GetSponsorLogoURL() string { if m != nil { return m.SponsorLogoURL } return "" } func (m *Mirror) GetAdminName() string { if m != nil { return m.AdminName } return "" } func (m *Mirror) GetAdminEmail() string { if m != nil { return m.AdminEmail } return "" } func (m *Mirror) GetCustomData() string { if m != nil { return m.CustomData } return "" } func (m *Mirror) GetContinentOnly() bool { if m != nil { return m.ContinentOnly } return false } func (m *Mirror) GetCountryOnly() bool { if m != nil { return m.CountryOnly } return false } func (m *Mirror) GetASOnly() bool { if m != nil { return m.ASOnly } return false } func (m *Mirror) GetScore() int32 { if m != nil { return m.Score } return 0 } func (m *Mirror) GetLatitude() float32 { if m != nil { return m.Latitude } return 0 } func (m *Mirror) GetLongitude() float32 { if m != nil { return m.Longitude } return 0 } func (m *Mirror) GetContinentCode() string { if m != nil { return m.ContinentCode } return "" } func (m *Mirror) GetCountryCodes() string { if m != nil { return m.CountryCodes } return "" } func (m *Mirror) GetExcludedCountryCodes() string { if m != nil { return m.ExcludedCountryCodes } return "" } func (m *Mirror) GetAsnum() uint32 { if m != nil { return m.Asnum } return 0 } func (m *Mirror) GetComment() string { if m != nil { return m.Comment } return "" } func (m *Mirror) GetEnabled() bool { if m != nil { return m.Enabled } return false } func (m *Mirror) GetUp() bool { if m != nil { return m.Up } return false } func (m *Mirror) GetExcludeReason() string { if m != nil { return m.ExcludeReason } return "" } func (m *Mirror) GetStateSince() *timestamp.Timestamp { if m != nil { return m.StateSince } return nil } func (m *Mirror) GetAllowRedirects() int32 { if m != nil { return m.AllowRedirects } return 0 } func (m *Mirror) GetLastSync() *timestamp.Timestamp { if m != nil { return m.LastSync } return nil } func (m *Mirror) GetLastSuccessfulSync() *timestamp.Timestamp { if m != nil { return m.LastSuccessfulSync } return nil } func (m *Mirror) GetLastModTime() *timestamp.Timestamp { if m != nil { return m.LastModTime } return nil } type MirrorListReply struct { Mirrors []*Mirror `protobuf:"bytes,1,rep,name=Mirrors,proto3" json:"Mirrors,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MirrorListReply) 
Reset() { *m = MirrorListReply{} } func (m *MirrorListReply) String() string { return proto.CompactTextString(m) } func (*MirrorListReply) ProtoMessage() {} func (*MirrorListReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{3} } func (m *MirrorListReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MirrorListReply.Unmarshal(m, b) } func (m *MirrorListReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MirrorListReply.Marshal(b, m, deterministic) } func (m *MirrorListReply) XXX_Merge(src proto.Message) { xxx_messageInfo_MirrorListReply.Merge(m, src) } func (m *MirrorListReply) XXX_Size() int { return xxx_messageInfo_MirrorListReply.Size(m) } func (m *MirrorListReply) XXX_DiscardUnknown() { xxx_messageInfo_MirrorListReply.DiscardUnknown(m) } var xxx_messageInfo_MirrorListReply proto.InternalMessageInfo func (m *MirrorListReply) GetMirrors() []*Mirror { if m != nil { return m.Mirrors } return nil } type MirrorID struct { ID int32 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"` Name string `protobuf:"bytes,2,opt,name=Name,proto3" json:"Name,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MirrorID) Reset() { *m = MirrorID{} } func (m *MirrorID) String() string { return proto.CompactTextString(m) } func (*MirrorID) ProtoMessage() {} func (*MirrorID) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{4} } func (m *MirrorID) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MirrorID.Unmarshal(m, b) } func (m *MirrorID) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MirrorID.Marshal(b, m, deterministic) } func (m *MirrorID) XXX_Merge(src proto.Message) { xxx_messageInfo_MirrorID.Merge(m, src) } func (m *MirrorID) XXX_Size() int { return xxx_messageInfo_MirrorID.Size(m) } func (m *MirrorID) XXX_DiscardUnknown() { xxx_messageInfo_MirrorID.DiscardUnknown(m) } var xxx_messageInfo_MirrorID proto.InternalMessageInfo func (m *MirrorID) GetID() int32 { if m != nil { return m.ID } return 0 } func (m *MirrorID) GetName() string { if m != nil { return m.Name } return "" } type MatchReply struct { Mirrors []*MirrorID `protobuf:"bytes,1,rep,name=Mirrors,proto3" json:"Mirrors,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MatchReply) Reset() { *m = MatchReply{} } func (m *MatchReply) String() string { return proto.CompactTextString(m) } func (*MatchReply) ProtoMessage() {} func (*MatchReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{5} } func (m *MatchReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MatchReply.Unmarshal(m, b) } func (m *MatchReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MatchReply.Marshal(b, m, deterministic) } func (m *MatchReply) XXX_Merge(src proto.Message) { xxx_messageInfo_MatchReply.Merge(m, src) } func (m *MatchReply) XXX_Size() int { return xxx_messageInfo_MatchReply.Size(m) } func (m *MatchReply) XXX_DiscardUnknown() { xxx_messageInfo_MatchReply.DiscardUnknown(m) } var xxx_messageInfo_MatchReply proto.InternalMessageInfo func (m *MatchReply) GetMirrors() []*MirrorID { if m != nil { return m.Mirrors } return nil } type ChangeStatusRequest struct { ID int32 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"` Enabled bool `protobuf:"varint,2,opt,name=Enabled,proto3" 
json:"Enabled,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ChangeStatusRequest) Reset() { *m = ChangeStatusRequest{} } func (m *ChangeStatusRequest) String() string { return proto.CompactTextString(m) } func (*ChangeStatusRequest) ProtoMessage() {} func (*ChangeStatusRequest) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{6} } func (m *ChangeStatusRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ChangeStatusRequest.Unmarshal(m, b) } func (m *ChangeStatusRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ChangeStatusRequest.Marshal(b, m, deterministic) } func (m *ChangeStatusRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_ChangeStatusRequest.Merge(m, src) } func (m *ChangeStatusRequest) XXX_Size() int { return xxx_messageInfo_ChangeStatusRequest.Size(m) } func (m *ChangeStatusRequest) XXX_DiscardUnknown() { xxx_messageInfo_ChangeStatusRequest.DiscardUnknown(m) } var xxx_messageInfo_ChangeStatusRequest proto.InternalMessageInfo func (m *ChangeStatusRequest) GetID() int32 { if m != nil { return m.ID } return 0 } func (m *ChangeStatusRequest) GetEnabled() bool { if m != nil { return m.Enabled } return false } type MirrorIDRequest struct { ID int32 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *MirrorIDRequest) Reset() { *m = MirrorIDRequest{} } func (m *MirrorIDRequest) String() string { return proto.CompactTextString(m) } func (*MirrorIDRequest) ProtoMessage() {} func (*MirrorIDRequest) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{7} } func (m *MirrorIDRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_MirrorIDRequest.Unmarshal(m, b) } func (m *MirrorIDRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_MirrorIDRequest.Marshal(b, m, deterministic) } func (m *MirrorIDRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_MirrorIDRequest.Merge(m, src) } func (m *MirrorIDRequest) XXX_Size() int { return xxx_messageInfo_MirrorIDRequest.Size(m) } func (m *MirrorIDRequest) XXX_DiscardUnknown() { xxx_messageInfo_MirrorIDRequest.DiscardUnknown(m) } var xxx_messageInfo_MirrorIDRequest proto.InternalMessageInfo func (m *MirrorIDRequest) GetID() int32 { if m != nil { return m.ID } return 0 } type AddMirrorReply struct { Latitude float32 `protobuf:"fixed32,1,opt,name=Latitude,proto3" json:"Latitude,omitempty"` Longitude float32 `protobuf:"fixed32,2,opt,name=Longitude,proto3" json:"Longitude,omitempty"` Country string `protobuf:"bytes,3,opt,name=Country,proto3" json:"Country,omitempty"` Continent string `protobuf:"bytes,4,opt,name=Continent,proto3" json:"Continent,omitempty"` ASN string `protobuf:"bytes,5,opt,name=ASN,proto3" json:"ASN,omitempty"` Warnings []string `protobuf:"bytes,6,rep,name=Warnings,proto3" json:"Warnings,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *AddMirrorReply) Reset() { *m = AddMirrorReply{} } func (m *AddMirrorReply) String() string { return proto.CompactTextString(m) } func (*AddMirrorReply) ProtoMessage() {} func (*AddMirrorReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{8} } func (m *AddMirrorReply) XXX_Unmarshal(b []byte) error { return 
xxx_messageInfo_AddMirrorReply.Unmarshal(m, b) } func (m *AddMirrorReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_AddMirrorReply.Marshal(b, m, deterministic) } func (m *AddMirrorReply) XXX_Merge(src proto.Message) { xxx_messageInfo_AddMirrorReply.Merge(m, src) } func (m *AddMirrorReply) XXX_Size() int { return xxx_messageInfo_AddMirrorReply.Size(m) } func (m *AddMirrorReply) XXX_DiscardUnknown() { xxx_messageInfo_AddMirrorReply.DiscardUnknown(m) } var xxx_messageInfo_AddMirrorReply proto.InternalMessageInfo func (m *AddMirrorReply) GetLatitude() float32 { if m != nil { return m.Latitude } return 0 } func (m *AddMirrorReply) GetLongitude() float32 { if m != nil { return m.Longitude } return 0 } func (m *AddMirrorReply) GetCountry() string { if m != nil { return m.Country } return "" } func (m *AddMirrorReply) GetContinent() string { if m != nil { return m.Continent } return "" } func (m *AddMirrorReply) GetASN() string { if m != nil { return m.ASN } return "" } func (m *AddMirrorReply) GetWarnings() []string { if m != nil { return m.Warnings } return nil } type UpdateMirrorReply struct { Diff string `protobuf:"bytes,1,opt,name=Diff,proto3" json:"Diff,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *UpdateMirrorReply) Reset() { *m = UpdateMirrorReply{} } func (m *UpdateMirrorReply) String() string { return proto.CompactTextString(m) } func (*UpdateMirrorReply) ProtoMessage() {} func (*UpdateMirrorReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{9} } func (m *UpdateMirrorReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_UpdateMirrorReply.Unmarshal(m, b) } func (m *UpdateMirrorReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_UpdateMirrorReply.Marshal(b, m, deterministic) } func (m *UpdateMirrorReply) XXX_Merge(src proto.Message) { xxx_messageInfo_UpdateMirrorReply.Merge(m, src) } func (m *UpdateMirrorReply) XXX_Size() int { return xxx_messageInfo_UpdateMirrorReply.Size(m) } func (m *UpdateMirrorReply) XXX_DiscardUnknown() { xxx_messageInfo_UpdateMirrorReply.DiscardUnknown(m) } var xxx_messageInfo_UpdateMirrorReply proto.InternalMessageInfo func (m *UpdateMirrorReply) GetDiff() string { if m != nil { return m.Diff } return "" } type RefreshRepositoryRequest struct { Rehash bool `protobuf:"varint,1,opt,name=Rehash,proto3" json:"Rehash,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *RefreshRepositoryRequest) Reset() { *m = RefreshRepositoryRequest{} } func (m *RefreshRepositoryRequest) String() string { return proto.CompactTextString(m) } func (*RefreshRepositoryRequest) ProtoMessage() {} func (*RefreshRepositoryRequest) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{10} } func (m *RefreshRepositoryRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_RefreshRepositoryRequest.Unmarshal(m, b) } func (m *RefreshRepositoryRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_RefreshRepositoryRequest.Marshal(b, m, deterministic) } func (m *RefreshRepositoryRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_RefreshRepositoryRequest.Merge(m, src) } func (m *RefreshRepositoryRequest) XXX_Size() int { return xxx_messageInfo_RefreshRepositoryRequest.Size(m) } func (m *RefreshRepositoryRequest) XXX_DiscardUnknown() { 
xxx_messageInfo_RefreshRepositoryRequest.DiscardUnknown(m) } var xxx_messageInfo_RefreshRepositoryRequest proto.InternalMessageInfo func (m *RefreshRepositoryRequest) GetRehash() bool { if m != nil { return m.Rehash } return false } type ScanMirrorRequest struct { ID int32 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"` AutoEnable bool `protobuf:"varint,2,opt,name=AutoEnable,proto3" json:"AutoEnable,omitempty"` Protocol ScanMirrorRequest_Method `protobuf:"varint,3,opt,name=Protocol,proto3,enum=ScanMirrorRequest_Method" json:"Protocol,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ScanMirrorRequest) Reset() { *m = ScanMirrorRequest{} } func (m *ScanMirrorRequest) String() string { return proto.CompactTextString(m) } func (*ScanMirrorRequest) ProtoMessage() {} func (*ScanMirrorRequest) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{11} } func (m *ScanMirrorRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ScanMirrorRequest.Unmarshal(m, b) } func (m *ScanMirrorRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ScanMirrorRequest.Marshal(b, m, deterministic) } func (m *ScanMirrorRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_ScanMirrorRequest.Merge(m, src) } func (m *ScanMirrorRequest) XXX_Size() int { return xxx_messageInfo_ScanMirrorRequest.Size(m) } func (m *ScanMirrorRequest) XXX_DiscardUnknown() { xxx_messageInfo_ScanMirrorRequest.DiscardUnknown(m) } var xxx_messageInfo_ScanMirrorRequest proto.InternalMessageInfo func (m *ScanMirrorRequest) GetID() int32 { if m != nil { return m.ID } return 0 } func (m *ScanMirrorRequest) GetAutoEnable() bool { if m != nil { return m.AutoEnable } return false } func (m *ScanMirrorRequest) GetProtocol() ScanMirrorRequest_Method { if m != nil { return m.Protocol } return ScanMirrorRequest_ALL } type ScanMirrorReply struct { Enabled bool `protobuf:"varint,1,opt,name=Enabled,proto3" json:"Enabled,omitempty"` FilesIndexed int64 `protobuf:"varint,2,opt,name=FilesIndexed,proto3" json:"FilesIndexed,omitempty"` KnownIndexed int64 `protobuf:"varint,3,opt,name=KnownIndexed,proto3" json:"KnownIndexed,omitempty"` Removed int64 `protobuf:"varint,4,opt,name=Removed,proto3" json:"Removed,omitempty"` TZOffsetMs int64 `protobuf:"varint,5,opt,name=TZOffsetMs,proto3" json:"TZOffsetMs,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *ScanMirrorReply) Reset() { *m = ScanMirrorReply{} } func (m *ScanMirrorReply) String() string { return proto.CompactTextString(m) } func (*ScanMirrorReply) ProtoMessage() {} func (*ScanMirrorReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{12} } func (m *ScanMirrorReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_ScanMirrorReply.Unmarshal(m, b) } func (m *ScanMirrorReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_ScanMirrorReply.Marshal(b, m, deterministic) } func (m *ScanMirrorReply) XXX_Merge(src proto.Message) { xxx_messageInfo_ScanMirrorReply.Merge(m, src) } func (m *ScanMirrorReply) XXX_Size() int { return xxx_messageInfo_ScanMirrorReply.Size(m) } func (m *ScanMirrorReply) XXX_DiscardUnknown() { xxx_messageInfo_ScanMirrorReply.DiscardUnknown(m) } var xxx_messageInfo_ScanMirrorReply proto.InternalMessageInfo func (m *ScanMirrorReply) GetEnabled() bool { if m != nil { return m.Enabled } return 
false } func (m *ScanMirrorReply) GetFilesIndexed() int64 { if m != nil { return m.FilesIndexed } return 0 } func (m *ScanMirrorReply) GetKnownIndexed() int64 { if m != nil { return m.KnownIndexed } return 0 } func (m *ScanMirrorReply) GetRemoved() int64 { if m != nil { return m.Removed } return 0 } func (m *ScanMirrorReply) GetTZOffsetMs() int64 { if m != nil { return m.TZOffsetMs } return 0 } type StatsFileRequest struct { Pattern string `protobuf:"bytes,1,opt,name=Pattern,proto3" json:"Pattern,omitempty"` DateStart *timestamp.Timestamp `protobuf:"bytes,2,opt,name=DateStart,proto3" json:"DateStart,omitempty"` DateEnd *timestamp.Timestamp `protobuf:"bytes,3,opt,name=DateEnd,proto3" json:"DateEnd,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StatsFileRequest) Reset() { *m = StatsFileRequest{} } func (m *StatsFileRequest) String() string { return proto.CompactTextString(m) } func (*StatsFileRequest) ProtoMessage() {} func (*StatsFileRequest) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{13} } func (m *StatsFileRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StatsFileRequest.Unmarshal(m, b) } func (m *StatsFileRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StatsFileRequest.Marshal(b, m, deterministic) } func (m *StatsFileRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_StatsFileRequest.Merge(m, src) } func (m *StatsFileRequest) XXX_Size() int { return xxx_messageInfo_StatsFileRequest.Size(m) } func (m *StatsFileRequest) XXX_DiscardUnknown() { xxx_messageInfo_StatsFileRequest.DiscardUnknown(m) } var xxx_messageInfo_StatsFileRequest proto.InternalMessageInfo func (m *StatsFileRequest) GetPattern() string { if m != nil { return m.Pattern } return "" } func (m *StatsFileRequest) GetDateStart() *timestamp.Timestamp { if m != nil { return m.DateStart } return nil } func (m *StatsFileRequest) GetDateEnd() *timestamp.Timestamp { if m != nil { return m.DateEnd } return nil } type StatsFileReply struct { Files map[string]int64 `protobuf:"bytes,1,rep,name=files,proto3" json:"files,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StatsFileReply) Reset() { *m = StatsFileReply{} } func (m *StatsFileReply) String() string { return proto.CompactTextString(m) } func (*StatsFileReply) ProtoMessage() {} func (*StatsFileReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{14} } func (m *StatsFileReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StatsFileReply.Unmarshal(m, b) } func (m *StatsFileReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StatsFileReply.Marshal(b, m, deterministic) } func (m *StatsFileReply) XXX_Merge(src proto.Message) { xxx_messageInfo_StatsFileReply.Merge(m, src) } func (m *StatsFileReply) XXX_Size() int { return xxx_messageInfo_StatsFileReply.Size(m) } func (m *StatsFileReply) XXX_DiscardUnknown() { xxx_messageInfo_StatsFileReply.DiscardUnknown(m) } var xxx_messageInfo_StatsFileReply proto.InternalMessageInfo func (m *StatsFileReply) GetFiles() map[string]int64 { if m != nil { return m.Files } return nil } type StatsMirrorRequest struct { ID int32 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"` DateStart *timestamp.Timestamp 
`protobuf:"bytes,2,opt,name=DateStart,proto3" json:"DateStart,omitempty"` DateEnd *timestamp.Timestamp `protobuf:"bytes,3,opt,name=DateEnd,proto3" json:"DateEnd,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StatsMirrorRequest) Reset() { *m = StatsMirrorRequest{} } func (m *StatsMirrorRequest) String() string { return proto.CompactTextString(m) } func (*StatsMirrorRequest) ProtoMessage() {} func (*StatsMirrorRequest) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{15} } func (m *StatsMirrorRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StatsMirrorRequest.Unmarshal(m, b) } func (m *StatsMirrorRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StatsMirrorRequest.Marshal(b, m, deterministic) } func (m *StatsMirrorRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_StatsMirrorRequest.Merge(m, src) } func (m *StatsMirrorRequest) XXX_Size() int { return xxx_messageInfo_StatsMirrorRequest.Size(m) } func (m *StatsMirrorRequest) XXX_DiscardUnknown() { xxx_messageInfo_StatsMirrorRequest.DiscardUnknown(m) } var xxx_messageInfo_StatsMirrorRequest proto.InternalMessageInfo func (m *StatsMirrorRequest) GetID() int32 { if m != nil { return m.ID } return 0 } func (m *StatsMirrorRequest) GetDateStart() *timestamp.Timestamp { if m != nil { return m.DateStart } return nil } func (m *StatsMirrorRequest) GetDateEnd() *timestamp.Timestamp { if m != nil { return m.DateEnd } return nil } type StatsMirrorReply struct { Mirror *Mirror `protobuf:"bytes,1,opt,name=Mirror,proto3" json:"Mirror,omitempty"` Requests int64 `protobuf:"varint,2,opt,name=Requests,proto3" json:"Requests,omitempty"` Bytes int64 `protobuf:"varint,3,opt,name=Bytes,proto3" json:"Bytes,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *StatsMirrorReply) Reset() { *m = StatsMirrorReply{} } func (m *StatsMirrorReply) String() string { return proto.CompactTextString(m) } func (*StatsMirrorReply) ProtoMessage() {} func (*StatsMirrorReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{16} } func (m *StatsMirrorReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_StatsMirrorReply.Unmarshal(m, b) } func (m *StatsMirrorReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_StatsMirrorReply.Marshal(b, m, deterministic) } func (m *StatsMirrorReply) XXX_Merge(src proto.Message) { xxx_messageInfo_StatsMirrorReply.Merge(m, src) } func (m *StatsMirrorReply) XXX_Size() int { return xxx_messageInfo_StatsMirrorReply.Size(m) } func (m *StatsMirrorReply) XXX_DiscardUnknown() { xxx_messageInfo_StatsMirrorReply.DiscardUnknown(m) } var xxx_messageInfo_StatsMirrorReply proto.InternalMessageInfo func (m *StatsMirrorReply) GetMirror() *Mirror { if m != nil { return m.Mirror } return nil } func (m *StatsMirrorReply) GetRequests() int64 { if m != nil { return m.Requests } return 0 } func (m *StatsMirrorReply) GetBytes() int64 { if m != nil { return m.Bytes } return 0 } type GetMirrorLogsRequest struct { ID int32 `protobuf:"varint,1,opt,name=ID,proto3" json:"ID,omitempty"` MaxResults int32 `protobuf:"varint,2,opt,name=MaxResults,proto3" json:"MaxResults,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetMirrorLogsRequest) Reset() { *m = GetMirrorLogsRequest{} } func (m 
*GetMirrorLogsRequest) String() string { return proto.CompactTextString(m) } func (*GetMirrorLogsRequest) ProtoMessage() {} func (*GetMirrorLogsRequest) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{17} } func (m *GetMirrorLogsRequest) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetMirrorLogsRequest.Unmarshal(m, b) } func (m *GetMirrorLogsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetMirrorLogsRequest.Marshal(b, m, deterministic) } func (m *GetMirrorLogsRequest) XXX_Merge(src proto.Message) { xxx_messageInfo_GetMirrorLogsRequest.Merge(m, src) } func (m *GetMirrorLogsRequest) XXX_Size() int { return xxx_messageInfo_GetMirrorLogsRequest.Size(m) } func (m *GetMirrorLogsRequest) XXX_DiscardUnknown() { xxx_messageInfo_GetMirrorLogsRequest.DiscardUnknown(m) } var xxx_messageInfo_GetMirrorLogsRequest proto.InternalMessageInfo func (m *GetMirrorLogsRequest) GetID() int32 { if m != nil { return m.ID } return 0 } func (m *GetMirrorLogsRequest) GetMaxResults() int32 { if m != nil { return m.MaxResults } return 0 } type GetMirrorLogsReply struct { Line []string `protobuf:"bytes,1,rep,name=line,proto3" json:"line,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` } func (m *GetMirrorLogsReply) Reset() { *m = GetMirrorLogsReply{} } func (m *GetMirrorLogsReply) String() string { return proto.CompactTextString(m) } func (*GetMirrorLogsReply) ProtoMessage() {} func (*GetMirrorLogsReply) Descriptor() ([]byte, []int) { return fileDescriptor_77a6da22d6a3feb1, []int{18} } func (m *GetMirrorLogsReply) XXX_Unmarshal(b []byte) error { return xxx_messageInfo_GetMirrorLogsReply.Unmarshal(m, b) } func (m *GetMirrorLogsReply) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { return xxx_messageInfo_GetMirrorLogsReply.Marshal(b, m, deterministic) } func (m *GetMirrorLogsReply) XXX_Merge(src proto.Message) { xxx_messageInfo_GetMirrorLogsReply.Merge(m, src) } func (m *GetMirrorLogsReply) XXX_Size() int { return xxx_messageInfo_GetMirrorLogsReply.Size(m) } func (m *GetMirrorLogsReply) XXX_DiscardUnknown() { xxx_messageInfo_GetMirrorLogsReply.DiscardUnknown(m) } var xxx_messageInfo_GetMirrorLogsReply proto.InternalMessageInfo func (m *GetMirrorLogsReply) GetLine() []string { if m != nil { return m.Line } return nil } func init() { proto.RegisterEnum("ScanMirrorRequest_Method", ScanMirrorRequest_Method_name, ScanMirrorRequest_Method_value) proto.RegisterType((*VersionReply)(nil), "VersionReply") proto.RegisterType((*MatchRequest)(nil), "MatchRequest") proto.RegisterType((*Mirror)(nil), "Mirror") proto.RegisterType((*MirrorListReply)(nil), "MirrorListReply") proto.RegisterType((*MirrorID)(nil), "MirrorID") proto.RegisterType((*MatchReply)(nil), "MatchReply") proto.RegisterType((*ChangeStatusRequest)(nil), "ChangeStatusRequest") proto.RegisterType((*MirrorIDRequest)(nil), "MirrorIDRequest") proto.RegisterType((*AddMirrorReply)(nil), "AddMirrorReply") proto.RegisterType((*UpdateMirrorReply)(nil), "UpdateMirrorReply") proto.RegisterType((*RefreshRepositoryRequest)(nil), "RefreshRepositoryRequest") proto.RegisterType((*ScanMirrorRequest)(nil), "ScanMirrorRequest") proto.RegisterType((*ScanMirrorReply)(nil), "ScanMirrorReply") proto.RegisterType((*StatsFileRequest)(nil), "StatsFileRequest") proto.RegisterType((*StatsFileReply)(nil), "StatsFileReply") proto.RegisterMapType((map[string]int64)(nil), "StatsFileReply.FilesEntry") 
proto.RegisterType((*StatsMirrorRequest)(nil), "StatsMirrorRequest") proto.RegisterType((*StatsMirrorReply)(nil), "StatsMirrorReply") proto.RegisterType((*GetMirrorLogsRequest)(nil), "GetMirrorLogsRequest") proto.RegisterType((*GetMirrorLogsReply)(nil), "GetMirrorLogsReply") } func init() { proto.RegisterFile("rpc.proto", fileDescriptor_77a6da22d6a3feb1) } var fileDescriptor_77a6da22d6a3feb1 = []byte{ // 1413 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x57, 0x5d, 0x73, 0xdb, 0x44, 0x17, 0xb6, 0xec, 0xc4, 0xb1, 0x8f, 0x9d, 0xc4, 0xd9, 0xa4, 0x79, 0x55, 0xb5, 0x6f, 0xeb, 0xee, 0xfb, 0x51, 0x33, 0x0c, 0x2a, 0x35, 0x2d, 0x64, 0xca, 0xd7, 0x18, 0x3b, 0x49, 0x03, 0x76, 0x93, 0x91, 0x1b, 0x18, 0xb8, 0x53, 0xa5, 0xb5, 0xa3, 0x41, 0xd6, 0x1a, 0xed, 0xaa, 0x8d, 0x67, 0xf8, 0x19, 0x0c, 0x57, 0x5c, 0xc0, 0x0f, 0x60, 0x86, 0xdf, 0xc7, 0x15, 0xb3, 0x1f, 0xb2, 0x65, 0x3b, 0x71, 0x98, 0x5e, 0x70, 0xb7, 0xe7, 0x39, 0x67, 0xf7, 0x7c, 0xe8, 0x9c, 0x67, 0x57, 0x50, 0x8e, 0xc7, 0x9e, 0x3d, 0x8e, 0x29, 0xa7, 0xd6, 0x9d, 0x21, 0xa5, 0xc3, 0x90, 0x3c, 0x92, 0xd2, 0xab, 0x64, 0xf0, 0x88, 0x8c, 0xc6, 0x7c, 0xa2, 0x95, 0xf7, 0x17, 0x95, 0x3c, 0x18, 0x11, 0xc6, 0xdd, 0xd1, 0x58, 0x19, 0xe0, 0x5f, 0x0d, 0xa8, 0x7e, 0x4d, 0x62, 0x16, 0xd0, 0xc8, 0x21, 0xe3, 0x70, 0x82, 0x4c, 0xd8, 0xd0, 0xb2, 0x69, 0xd4, 0x8d, 0x46, 0xd9, 0x49, 0x45, 0xb4, 0x07, 0xeb, 0x5f, 0x24, 0x41, 0xe8, 0x9b, 0x79, 0x89, 0x2b, 0x01, 0xdd, 0x85, 0xf2, 0x31, 0x4d, 0x77, 0x14, 0xa4, 0x66, 0x06, 0xa0, 0x2d, 0xc8, 0x9f, 0xf6, 0xcd, 0x35, 0x09, 0xe7, 0x4f, 0xfb, 0x08, 0xc1, 0x5a, 0x2b, 0xf6, 0x2e, 0xcc, 0x75, 0x89, 0xc8, 0x35, 0xba, 0x07, 0x70, 0x4c, 0x7b, 0xee, 0xe5, 0x59, 0x4c, 0x3d, 0x66, 0x16, 0xeb, 0x46, 0x63, 0xdd, 0xc9, 0x20, 0xb8, 0x01, 0xd5, 0x9e, 0xcb, 0xbd, 0x0b, 0x87, 0xfc, 0x90, 0x10, 0xc6, 0x45, 0x84, 0x67, 0x2e, 0xe7, 0x24, 0x9e, 0x46, 0xa8, 0x45, 0xfc, 0x73, 0x09, 0x8a, 0xbd, 0x20, 0x8e, 0x69, 0x2c, 0x1c, 0x9f, 0x74, 0xa4, 0x7e, 0xdd, 0xc9, 0x9f, 0x74, 0x84, 0xe3, 0x17, 0xee, 0x88, 0xe8, 0xd8, 0xe5, 0x5a, 0x1c, 0xf4, 0x9c, 0xf3, 0xf1, 0xb9, 0xd3, 0xd5, 0x81, 0xa7, 0x22, 0xb2, 0xa0, 0xe4, 0xb0, 0x49, 0xe4, 0x09, 0x95, 0x0a, 0x7e, 0x2a, 0xa3, 0x7d, 0x28, 0x1e, 0xa9, 0x4d, 0x2a, 0x09, 0x2d, 0xa1, 0x3a, 0x54, 0xfa, 0x63, 0x1a, 0x31, 0x1a, 0x4b, 0x47, 0x45, 0xa9, 0xcc, 0x42, 0x22, 0x51, 0x2d, 0x8a, 0xdd, 0x1b, 0xd2, 0x20, 0x83, 0xa0, 0xff, 0xc3, 0x96, 0x96, 0xba, 0x74, 0x48, 0x85, 0x4d, 0x49, 0xda, 0x2c, 0xa0, 0xa2, 0xe4, 0x2d, 0x7f, 0x14, 0x44, 0xd2, 0x4f, 0x59, 0x95, 0x7c, 0x0a, 0x08, 0x2f, 0x52, 0x38, 0x1c, 0xb9, 0x41, 0x68, 0x82, 0xf2, 0x32, 0x43, 0x84, 0xbe, 0x9d, 0x30, 0x4e, 0x47, 0x1d, 0x97, 0xbb, 0x66, 0x45, 0xe9, 0x67, 0x08, 0xfa, 0x2f, 0x6c, 0xb6, 0x69, 0xc4, 0x83, 0x88, 0x44, 0xfc, 0x34, 0x0a, 0x27, 0x66, 0xb5, 0x6e, 0x34, 0x4a, 0xce, 0x3c, 0x28, 0xb2, 0x6d, 0xd3, 0x24, 0xe2, 0xf1, 0x44, 0xda, 0x6c, 0x4a, 0x9b, 0x2c, 0x24, 0xea, 0xd4, 0xea, 0x4b, 0xe5, 0x96, 0x54, 0x6a, 0x49, 0xb4, 0x51, 0xdf, 0xa3, 0x31, 0x31, 0xb7, 0xe5, 0xc7, 0x51, 0x82, 0xa8, 0x78, 0xd7, 0xe5, 0x01, 0x4f, 0x7c, 0x62, 0xd6, 0xea, 0x46, 0x23, 0xef, 0x4c, 0x65, 0x91, 0x6f, 0x97, 0x46, 0x43, 0xa5, 0xdc, 0x91, 0xca, 0x19, 0x30, 0x17, 0x6f, 0x9b, 0xfa, 0xc4, 0x44, 0x32, 0xa5, 0x79, 0x10, 0x61, 0xa8, 0xea, 0xe0, 0x84, 0xc8, 0xcc, 0x5d, 0x69, 0x34, 0x87, 0xa1, 0x26, 0xec, 0x1d, 0x5e, 0x7a, 0x61, 0xe2, 0x13, 0x7f, 0xce, 0x76, 0x4f, 0xda, 0x5e, 0xa9, 0x13, 0xd9, 0xb4, 0x58, 0x94, 0x8c, 0xcc, 0x5b, 0x75, 0xa3, 0xb1, 0xe9, 0x28, 0x41, 0x74, 0x56, 0x9b, 0x8e, 0x46, 0x24, 0xe2, 0xe6, 0xbe, 0xea, 0x2c, 0x2d, 0x0a, 
0xcd, 0x61, 0xe4, 0xbe, 0x0a, 0x89, 0x6f, 0xfe, 0x4b, 0x96, 0x25, 0x15, 0x45, 0xc7, 0x9e, 0x8f, 0x4d, 0x53, 0x82, 0xf9, 0xf3, 0xb1, 0xc8, 0x4b, 0x7b, 0x74, 0x88, 0xcb, 0x68, 0x64, 0xde, 0x56, 0x79, 0xcd, 0x81, 0xe8, 0x19, 0x40, 0x9f, 0xbb, 0x9c, 0xf4, 0x83, 0xc8, 0x23, 0xa6, 0x55, 0x37, 0x1a, 0x95, 0xa6, 0x65, 0xab, 0xa9, 0xb7, 0xd3, 0xa9, 0xb7, 0x5f, 0xa6, 0x53, 0xef, 0x64, 0xac, 0x45, 0xbf, 0xb5, 0xc2, 0x90, 0xbe, 0x71, 0x88, 0x1f, 0xc4, 0xc4, 0xe3, 0xcc, 0xbc, 0x23, 0x3f, 0xc9, 0x02, 0x8a, 0x3e, 0x14, 0xdf, 0x86, 0xf1, 0xfe, 0x24, 0xf2, 0xcc, 0xbb, 0x37, 0x7a, 0x98, 0xda, 0xa2, 0x2f, 0x01, 0xc9, 0x75, 0xe2, 0x79, 0x84, 0xb1, 0x41, 0x12, 0xca, 0x13, 0xfe, 0x7d, 0xe3, 0x09, 0x57, 0xec, 0x42, 0x9f, 0x40, 0x45, 0xa0, 0x3d, 0xea, 0x0b, 0x3b, 0xf3, 0xde, 0x8d, 0x87, 0x64, 0xcd, 0xf1, 0x13, 0xd8, 0x56, 0xbc, 0xd0, 0x0d, 0x18, 0x57, 0x3c, 0xf7, 0x00, 0x36, 0x14, 0xc4, 0x4c, 0xa3, 0x5e, 0x68, 0x54, 0x9a, 0x1b, 0xb6, 0x92, 0x9d, 0x14, 0xc7, 0x36, 0x94, 0xd4, 0xf2, 0xa4, 0xf3, 0x77, 0xf8, 0x04, 0x3f, 0x06, 0xd0, 0x44, 0x25, 0x1c, 0xfc, 0x67, 0xd1, 0x41, 0xd9, 0x4e, 0x4f, 0x9b, 0xb9, 0xf8, 0x1c, 0x76, 0xdb, 0x17, 0x6e, 0x34, 0x24, 0xe2, 0xb3, 0x24, 0x2c, 0xa5, 0xb8, 0x45, 0x6f, 0x99, 0xae, 0xc9, 0xcf, 0x75, 0x0d, 0x7e, 0x90, 0x66, 0x76, 0xd2, 0xb9, 0x66, 0x33, 0xfe, 0xc3, 0x80, 0xad, 0x96, 0xef, 0xeb, 0xec, 0x64, 0x6c, 0xd9, 0x69, 0x33, 0x56, 0x4d, 0x5b, 0x7e, 0x71, 0xda, 0x64, 0x67, 0xcb, 0xfe, 0x4f, 0x39, 0x53, 0x8b, 0x62, 0xdf, 0x74, 0xe4, 0x34, 0x69, 0xce, 0x00, 0x54, 0x83, 0x42, 0xab, 0xff, 0x42, 0x53, 0xa6, 0x58, 0x8a, 0x18, 0xbe, 0x71, 0xe3, 0x28, 0x88, 0x86, 0x82, 0xf4, 0x0b, 0x82, 0x63, 0x53, 0x19, 0x3f, 0x84, 0x9d, 0xf3, 0xb1, 0xef, 0x72, 0x92, 0x0d, 0x1a, 0xc1, 0x5a, 0x27, 0x18, 0x0c, 0x34, 0xe9, 0xcb, 0x35, 0x6e, 0x82, 0xe9, 0x90, 0x41, 0x4c, 0x98, 0x28, 0x3a, 0x65, 0x01, 0xa7, 0xf1, 0x24, 0xad, 0xc3, 0x3e, 0x14, 0x1d, 0x72, 0xe1, 0xb2, 0x0b, 0xb9, 0xa3, 0xe4, 0x68, 0x09, 0xff, 0x66, 0xc0, 0x4e, 0xdf, 0x73, 0xa3, 0xf4, 0xec, 0xab, 0x4b, 0x2e, 0x68, 0x34, 0xe1, 0x54, 0xd5, 0x59, 0x57, 0x3d, 0x83, 0xa0, 0xa7, 0x50, 0x3a, 0x13, 0x5d, 0xe7, 0xd1, 0x50, 0x56, 0x62, 0xab, 0x79, 0xdb, 0x5e, 0x3a, 0xd5, 0xee, 0x11, 0x7e, 0x41, 0x7d, 0x67, 0x6a, 0x8a, 0xff, 0x07, 0x45, 0x85, 0xa1, 0x0d, 0x28, 0xb4, 0xba, 0xdd, 0x5a, 0x4e, 0x2c, 0x8e, 0x5e, 0x9e, 0xd5, 0x0c, 0x54, 0x86, 0x75, 0xa7, 0xff, 0xed, 0x8b, 0x76, 0x2d, 0x8f, 0x7f, 0x37, 0x60, 0x3b, 0x7b, 0x9a, 0xbe, 0x99, 0xd3, 0x26, 0x30, 0xe6, 0xa9, 0x03, 0x43, 0xf5, 0x28, 0x08, 0x09, 0x3b, 0x89, 0x7c, 0x72, 0xa9, 0x7b, 0xa4, 0xe0, 0xcc, 0x61, 0xc2, 0xe6, 0xab, 0x88, 0xbe, 0x89, 0x52, 0x9b, 0x82, 0xb2, 0xc9, 0x62, 0xc2, 0x83, 0x43, 0x46, 0xf4, 0x35, 0xf1, 0xe5, 0x07, 0x2c, 0x38, 0xa9, 0x28, 0xaa, 0xf1, 0xf2, 0xbb, 0xd3, 0xc1, 0x80, 0x11, 0xde, 0x63, 0xf2, 0x2b, 0x16, 0x9c, 0x0c, 0x82, 0x7f, 0x31, 0xa0, 0x26, 0x5a, 0x98, 0x09, 0x9f, 0x37, 0x5e, 0xd4, 0xe8, 0x00, 0xca, 0x1d, 0x41, 0x43, 0xdc, 0x8d, 0xb9, 0x8c, 0x76, 0xf5, 0x2c, 0xcf, 0x8c, 0xd1, 0x13, 0xd8, 0x10, 0xc2, 0x61, 0xa4, 0x32, 0x58, 0xbd, 0x2f, 0x35, 0xc5, 0x3f, 0xc2, 0x56, 0x26, 0x3a, 0x51, 0xcc, 0xf7, 0x61, 0x7d, 0x20, 0xca, 0xa3, 0x67, 0xd3, 0xb2, 0xe7, 0xf5, 0xb6, 0xac, 0xdd, 0xa1, 0x68, 0x6c, 0x47, 0x19, 0x5a, 0x07, 0x00, 0x33, 0x50, 0xf4, 0xf3, 0xf7, 0x64, 0xa2, 0xf3, 0x12, 0x4b, 0x71, 0x13, 0xbc, 0x76, 0xc3, 0x84, 0xe8, 0xea, 0x2b, 0xe1, 0x59, 0xfe, 0xc0, 0xc0, 0x3f, 0x19, 0x80, 0xe4, 0xf1, 0xab, 0x3b, 0xee, 0x9f, 0x2e, 0x0a, 0xd1, 0x9f, 0x2c, 0xdb, 0x63, 0xf7, 0xd3, 0x07, 0x94, 0x8c, 0x2b, 0x43, 0x8a, 0xe9, 0xbb, 0x4a, 0xbc, 0x8c, 0x54, 0xfc, 0x4c, 0x27, 0x3a, 0x95, 0xe5, 0x03, 0x71, 
0xc2, 0x09, 0xd3, 0xbd, 0xa5, 0x04, 0x7c, 0x04, 0x7b, 0xc7, 0x84, 0x6b, 0xfa, 0xa5, 0x43, 0xb6, 0x62, 0xe0, 0x7a, 0xee, 0xa5, 0x43, 0x58, 0x12, 0xea, 0xb3, 0xd7, 0x9d, 0x0c, 0x82, 0x1b, 0x80, 0x16, 0xce, 0xd1, 0xa4, 0x10, 0x06, 0x11, 0x91, 0x9f, 0xb1, 0xec, 0xc8, 0x75, 0xf3, 0xcf, 0x22, 0x14, 0xda, 0xdd, 0x13, 0xf4, 0x14, 0xe0, 0x98, 0xf0, 0xf4, 0x29, 0xba, 0xbf, 0x54, 0x93, 0x43, 0xf1, 0x50, 0xb6, 0x36, 0xed, 0xec, 0xfb, 0x17, 0xe7, 0xd0, 0xc7, 0xb0, 0x71, 0x3e, 0x1e, 0xc6, 0xae, 0x4f, 0xae, 0xdd, 0x73, 0x0d, 0x8e, 0x73, 0xe8, 0x99, 0x20, 0x9d, 0x90, 0xba, 0xfe, 0x5b, 0xec, 0xfd, 0x0c, 0xaa, 0xd9, 0xcb, 0x00, 0xed, 0xd9, 0x57, 0xdc, 0x0d, 0x2b, 0xf6, 0x37, 0x61, 0x4d, 0xdc, 0x6f, 0xd7, 0x7a, 0xae, 0xd9, 0x0b, 0x97, 0x20, 0xce, 0xa1, 0x77, 0x00, 0xf4, 0xfd, 0x11, 0x0d, 0x28, 0xaa, 0xd9, 0x0b, 0x97, 0x89, 0x95, 0x36, 0x00, 0xce, 0xa1, 0x87, 0xe2, 0xd9, 0xa9, 0xaf, 0x11, 0x94, 0xe2, 0xd6, 0xb6, 0x3d, 0x7f, 0xb7, 0xe0, 0x1c, 0x7a, 0x0f, 0xaa, 0x59, 0xf6, 0x9e, 0xd9, 0x22, 0x7b, 0x89, 0xd5, 0x65, 0xc9, 0xaa, 0x8a, 0x66, 0xb4, 0xf9, 0x72, 0x10, 0xd7, 0xa7, 0xfc, 0x1c, 0x76, 0x96, 0xf8, 0x1f, 0xdd, 0xb6, 0xaf, 0xbb, 0x13, 0x56, 0x9c, 0xf4, 0x04, 0x60, 0x46, 0xb8, 0x08, 0x2d, 0x73, 0xb9, 0x55, 0xb3, 0x17, 0x18, 0x19, 0xe7, 0xd0, 0x63, 0x28, 0x4f, 0x89, 0x03, 0xed, 0xd8, 0x8b, 0x14, 0x68, 0x6d, 0x2f, 0xf0, 0x0a, 0xce, 0xa1, 0x8f, 0xa0, 0x92, 0x19, 0x3b, 0xb4, 0x6b, 0x2f, 0x53, 0x83, 0xb5, 0x63, 0x2f, 0x4e, 0x26, 0xce, 0xa1, 0x03, 0x58, 0x3b, 0x0b, 0xa2, 0xe1, 0x5b, 0x34, 0xd6, 0xa7, 0xb0, 0x39, 0x37, 0x3a, 0xe8, 0x96, 0x7d, 0xd5, 0x48, 0x5a, 0xbb, 0xf6, 0xf2, 0x84, 0xe1, 0x1c, 0x7a, 0x17, 0x2a, 0xf2, 0x5d, 0xa3, 0x23, 0xde, 0xb4, 0xb3, 0xbf, 0x63, 0x56, 0xc5, 0x9e, 0x3d, 0x7a, 0x70, 0xee, 0x55, 0x51, 0x7a, 0xff, 0xe0, 0xaf, 0x00, 0x00, 0x00, 0xff, 0xff, 0x30, 0xff, 0xff, 0x89, 0xa2, 0x0e, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. var _ context.Context var _ grpc.ClientConn // This is a compile-time assertion to ensure that this generated file // is compatible with the grpc package it is being compiled against. const _ = grpc.SupportPackageIsVersion4 // CLIClient is the client API for CLI service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. 
type CLIClient interface { GetVersion(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*VersionReply, error) Upgrade(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*empty.Empty, error) Reload(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*empty.Empty, error) ChangeStatus(ctx context.Context, in *ChangeStatusRequest, opts ...grpc.CallOption) (*empty.Empty, error) List(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*MirrorListReply, error) MirrorInfo(ctx context.Context, in *MirrorIDRequest, opts ...grpc.CallOption) (*Mirror, error) AddMirror(ctx context.Context, in *Mirror, opts ...grpc.CallOption) (*AddMirrorReply, error) UpdateMirror(ctx context.Context, in *Mirror, opts ...grpc.CallOption) (*UpdateMirrorReply, error) RemoveMirror(ctx context.Context, in *MirrorIDRequest, opts ...grpc.CallOption) (*empty.Empty, error) RefreshRepository(ctx context.Context, in *RefreshRepositoryRequest, opts ...grpc.CallOption) (*empty.Empty, error) ScanMirror(ctx context.Context, in *ScanMirrorRequest, opts ...grpc.CallOption) (*ScanMirrorReply, error) StatsFile(ctx context.Context, in *StatsFileRequest, opts ...grpc.CallOption) (*StatsFileReply, error) StatsMirror(ctx context.Context, in *StatsMirrorRequest, opts ...grpc.CallOption) (*StatsMirrorReply, error) Ping(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*empty.Empty, error) GetMirrorLogs(ctx context.Context, in *GetMirrorLogsRequest, opts ...grpc.CallOption) (*GetMirrorLogsReply, error) // Tools MatchMirror(ctx context.Context, in *MatchRequest, opts ...grpc.CallOption) (*MatchReply, error) } type cLIClient struct { cc *grpc.ClientConn } func NewCLIClient(cc *grpc.ClientConn) CLIClient { return &cLIClient{cc} } func (c *cLIClient) GetVersion(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*VersionReply, error) { out := new(VersionReply) err := c.cc.Invoke(ctx, "/CLI/GetVersion", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) Upgrade(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*empty.Empty, error) { out := new(empty.Empty) err := c.cc.Invoke(ctx, "/CLI/Upgrade", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) Reload(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*empty.Empty, error) { out := new(empty.Empty) err := c.cc.Invoke(ctx, "/CLI/Reload", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) ChangeStatus(ctx context.Context, in *ChangeStatusRequest, opts ...grpc.CallOption) (*empty.Empty, error) { out := new(empty.Empty) err := c.cc.Invoke(ctx, "/CLI/ChangeStatus", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) List(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*MirrorListReply, error) { out := new(MirrorListReply) err := c.cc.Invoke(ctx, "/CLI/List", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) MirrorInfo(ctx context.Context, in *MirrorIDRequest, opts ...grpc.CallOption) (*Mirror, error) { out := new(Mirror) err := c.cc.Invoke(ctx, "/CLI/MirrorInfo", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) AddMirror(ctx context.Context, in *Mirror, opts ...grpc.CallOption) (*AddMirrorReply, error) { out := new(AddMirrorReply) err := c.cc.Invoke(ctx, "/CLI/AddMirror", in, out, opts...) 
if err != nil { return nil, err } return out, nil } func (c *cLIClient) UpdateMirror(ctx context.Context, in *Mirror, opts ...grpc.CallOption) (*UpdateMirrorReply, error) { out := new(UpdateMirrorReply) err := c.cc.Invoke(ctx, "/CLI/UpdateMirror", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) RemoveMirror(ctx context.Context, in *MirrorIDRequest, opts ...grpc.CallOption) (*empty.Empty, error) { out := new(empty.Empty) err := c.cc.Invoke(ctx, "/CLI/RemoveMirror", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) RefreshRepository(ctx context.Context, in *RefreshRepositoryRequest, opts ...grpc.CallOption) (*empty.Empty, error) { out := new(empty.Empty) err := c.cc.Invoke(ctx, "/CLI/RefreshRepository", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) ScanMirror(ctx context.Context, in *ScanMirrorRequest, opts ...grpc.CallOption) (*ScanMirrorReply, error) { out := new(ScanMirrorReply) err := c.cc.Invoke(ctx, "/CLI/ScanMirror", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) StatsFile(ctx context.Context, in *StatsFileRequest, opts ...grpc.CallOption) (*StatsFileReply, error) { out := new(StatsFileReply) err := c.cc.Invoke(ctx, "/CLI/StatsFile", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) StatsMirror(ctx context.Context, in *StatsMirrorRequest, opts ...grpc.CallOption) (*StatsMirrorReply, error) { out := new(StatsMirrorReply) err := c.cc.Invoke(ctx, "/CLI/StatsMirror", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) Ping(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*empty.Empty, error) { out := new(empty.Empty) err := c.cc.Invoke(ctx, "/CLI/Ping", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) GetMirrorLogs(ctx context.Context, in *GetMirrorLogsRequest, opts ...grpc.CallOption) (*GetMirrorLogsReply, error) { out := new(GetMirrorLogsReply) err := c.cc.Invoke(ctx, "/CLI/GetMirrorLogs", in, out, opts...) if err != nil { return nil, err } return out, nil } func (c *cLIClient) MatchMirror(ctx context.Context, in *MatchRequest, opts ...grpc.CallOption) (*MatchReply, error) { out := new(MatchReply) err := c.cc.Invoke(ctx, "/CLI/MatchMirror", in, out, opts...) if err != nil { return nil, err } return out, nil } // CLIServer is the server API for CLI service. 
type CLIServer interface { GetVersion(context.Context, *empty.Empty) (*VersionReply, error) Upgrade(context.Context, *empty.Empty) (*empty.Empty, error) Reload(context.Context, *empty.Empty) (*empty.Empty, error) ChangeStatus(context.Context, *ChangeStatusRequest) (*empty.Empty, error) List(context.Context, *empty.Empty) (*MirrorListReply, error) MirrorInfo(context.Context, *MirrorIDRequest) (*Mirror, error) AddMirror(context.Context, *Mirror) (*AddMirrorReply, error) UpdateMirror(context.Context, *Mirror) (*UpdateMirrorReply, error) RemoveMirror(context.Context, *MirrorIDRequest) (*empty.Empty, error) RefreshRepository(context.Context, *RefreshRepositoryRequest) (*empty.Empty, error) ScanMirror(context.Context, *ScanMirrorRequest) (*ScanMirrorReply, error) StatsFile(context.Context, *StatsFileRequest) (*StatsFileReply, error) StatsMirror(context.Context, *StatsMirrorRequest) (*StatsMirrorReply, error) Ping(context.Context, *empty.Empty) (*empty.Empty, error) GetMirrorLogs(context.Context, *GetMirrorLogsRequest) (*GetMirrorLogsReply, error) // Tools MatchMirror(context.Context, *MatchRequest) (*MatchReply, error) } // UnimplementedCLIServer can be embedded to have forward compatible implementations. type UnimplementedCLIServer struct { } func (*UnimplementedCLIServer) GetVersion(ctx context.Context, req *empty.Empty) (*VersionReply, error) { return nil, status.Errorf(codes.Unimplemented, "method GetVersion not implemented") } func (*UnimplementedCLIServer) Upgrade(ctx context.Context, req *empty.Empty) (*empty.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method Upgrade not implemented") } func (*UnimplementedCLIServer) Reload(ctx context.Context, req *empty.Empty) (*empty.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method Reload not implemented") } func (*UnimplementedCLIServer) ChangeStatus(ctx context.Context, req *ChangeStatusRequest) (*empty.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method ChangeStatus not implemented") } func (*UnimplementedCLIServer) List(ctx context.Context, req *empty.Empty) (*MirrorListReply, error) { return nil, status.Errorf(codes.Unimplemented, "method List not implemented") } func (*UnimplementedCLIServer) MirrorInfo(ctx context.Context, req *MirrorIDRequest) (*Mirror, error) { return nil, status.Errorf(codes.Unimplemented, "method MirrorInfo not implemented") } func (*UnimplementedCLIServer) AddMirror(ctx context.Context, req *Mirror) (*AddMirrorReply, error) { return nil, status.Errorf(codes.Unimplemented, "method AddMirror not implemented") } func (*UnimplementedCLIServer) UpdateMirror(ctx context.Context, req *Mirror) (*UpdateMirrorReply, error) { return nil, status.Errorf(codes.Unimplemented, "method UpdateMirror not implemented") } func (*UnimplementedCLIServer) RemoveMirror(ctx context.Context, req *MirrorIDRequest) (*empty.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method RemoveMirror not implemented") } func (*UnimplementedCLIServer) RefreshRepository(ctx context.Context, req *RefreshRepositoryRequest) (*empty.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method RefreshRepository not implemented") } func (*UnimplementedCLIServer) ScanMirror(ctx context.Context, req *ScanMirrorRequest) (*ScanMirrorReply, error) { return nil, status.Errorf(codes.Unimplemented, "method ScanMirror not implemented") } func (*UnimplementedCLIServer) StatsFile(ctx context.Context, req *StatsFileRequest) (*StatsFileReply, error) { return nil, status.Errorf(codes.Unimplemented, 
"method StatsFile not implemented") } func (*UnimplementedCLIServer) StatsMirror(ctx context.Context, req *StatsMirrorRequest) (*StatsMirrorReply, error) { return nil, status.Errorf(codes.Unimplemented, "method StatsMirror not implemented") } func (*UnimplementedCLIServer) Ping(ctx context.Context, req *empty.Empty) (*empty.Empty, error) { return nil, status.Errorf(codes.Unimplemented, "method Ping not implemented") } func (*UnimplementedCLIServer) GetMirrorLogs(ctx context.Context, req *GetMirrorLogsRequest) (*GetMirrorLogsReply, error) { return nil, status.Errorf(codes.Unimplemented, "method GetMirrorLogs not implemented") } func (*UnimplementedCLIServer) MatchMirror(ctx context.Context, req *MatchRequest) (*MatchReply, error) { return nil, status.Errorf(codes.Unimplemented, "method MatchMirror not implemented") } func RegisterCLIServer(s *grpc.Server, srv CLIServer) { s.RegisterService(&_CLI_serviceDesc, srv) } func _CLI_GetVersion_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(empty.Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).GetVersion(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/GetVersion", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).GetVersion(ctx, req.(*empty.Empty)) } return interceptor(ctx, in, info, handler) } func _CLI_Upgrade_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(empty.Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).Upgrade(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/Upgrade", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).Upgrade(ctx, req.(*empty.Empty)) } return interceptor(ctx, in, info, handler) } func _CLI_Reload_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(empty.Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).Reload(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/Reload", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).Reload(ctx, req.(*empty.Empty)) } return interceptor(ctx, in, info, handler) } func _CLI_ChangeStatus_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(ChangeStatusRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).ChangeStatus(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/ChangeStatus", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).ChangeStatus(ctx, req.(*ChangeStatusRequest)) } return interceptor(ctx, in, info, handler) } func _CLI_List_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(empty.Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).List(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/List", } handler := func(ctx 
context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).List(ctx, req.(*empty.Empty)) } return interceptor(ctx, in, info, handler) } func _CLI_MirrorInfo_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(MirrorIDRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).MirrorInfo(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/MirrorInfo", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).MirrorInfo(ctx, req.(*MirrorIDRequest)) } return interceptor(ctx, in, info, handler) } func _CLI_AddMirror_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Mirror) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).AddMirror(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/AddMirror", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).AddMirror(ctx, req.(*Mirror)) } return interceptor(ctx, in, info, handler) } func _CLI_UpdateMirror_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(Mirror) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).UpdateMirror(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/UpdateMirror", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).UpdateMirror(ctx, req.(*Mirror)) } return interceptor(ctx, in, info, handler) } func _CLI_RemoveMirror_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(MirrorIDRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).RemoveMirror(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/RemoveMirror", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).RemoveMirror(ctx, req.(*MirrorIDRequest)) } return interceptor(ctx, in, info, handler) } func _CLI_RefreshRepository_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(RefreshRepositoryRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).RefreshRepository(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/RefreshRepository", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).RefreshRepository(ctx, req.(*RefreshRepositoryRequest)) } return interceptor(ctx, in, info, handler) } func _CLI_ScanMirror_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(ScanMirrorRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).ScanMirror(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/ScanMirror", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).ScanMirror(ctx, 
req.(*ScanMirrorRequest)) } return interceptor(ctx, in, info, handler) } func _CLI_StatsFile_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(StatsFileRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).StatsFile(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/StatsFile", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).StatsFile(ctx, req.(*StatsFileRequest)) } return interceptor(ctx, in, info, handler) } func _CLI_StatsMirror_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(StatsMirrorRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).StatsMirror(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/StatsMirror", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).StatsMirror(ctx, req.(*StatsMirrorRequest)) } return interceptor(ctx, in, info, handler) } func _CLI_Ping_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(empty.Empty) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).Ping(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/Ping", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).Ping(ctx, req.(*empty.Empty)) } return interceptor(ctx, in, info, handler) } func _CLI_GetMirrorLogs_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(GetMirrorLogsRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).GetMirrorLogs(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/GetMirrorLogs", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).GetMirrorLogs(ctx, req.(*GetMirrorLogsRequest)) } return interceptor(ctx, in, info, handler) } func _CLI_MatchMirror_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { in := new(MatchRequest) if err := dec(in); err != nil { return nil, err } if interceptor == nil { return srv.(CLIServer).MatchMirror(ctx, in) } info := &grpc.UnaryServerInfo{ Server: srv, FullMethod: "/CLI/MatchMirror", } handler := func(ctx context.Context, req interface{}) (interface{}, error) { return srv.(CLIServer).MatchMirror(ctx, req.(*MatchRequest)) } return interceptor(ctx, in, info, handler) } var _CLI_serviceDesc = grpc.ServiceDesc{ ServiceName: "CLI", HandlerType: (*CLIServer)(nil), Methods: []grpc.MethodDesc{ { MethodName: "GetVersion", Handler: _CLI_GetVersion_Handler, }, { MethodName: "Upgrade", Handler: _CLI_Upgrade_Handler, }, { MethodName: "Reload", Handler: _CLI_Reload_Handler, }, { MethodName: "ChangeStatus", Handler: _CLI_ChangeStatus_Handler, }, { MethodName: "List", Handler: _CLI_List_Handler, }, { MethodName: "MirrorInfo", Handler: _CLI_MirrorInfo_Handler, }, { MethodName: "AddMirror", Handler: _CLI_AddMirror_Handler, }, { MethodName: "UpdateMirror", Handler: _CLI_UpdateMirror_Handler, }, { MethodName: 
"RemoveMirror", Handler: _CLI_RemoveMirror_Handler, }, { MethodName: "RefreshRepository", Handler: _CLI_RefreshRepository_Handler, }, { MethodName: "ScanMirror", Handler: _CLI_ScanMirror_Handler, }, { MethodName: "StatsFile", Handler: _CLI_StatsFile_Handler, }, { MethodName: "StatsMirror", Handler: _CLI_StatsMirror_Handler, }, { MethodName: "Ping", Handler: _CLI_Ping_Handler, }, { MethodName: "GetMirrorLogs", Handler: _CLI_GetMirrorLogs_Handler, }, { MethodName: "MatchMirror", Handler: _CLI_MatchMirror_Handler, }, }, Streams: []grpc.StreamDesc{}, Metadata: "rpc.proto", } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/rpc/rpc.proto000066400000000000000000000074021411706463700216420ustar00rootroot00000000000000syntax = "proto3"; import "google/protobuf/empty.proto"; import "google/protobuf/timestamp.proto"; service CLI { rpc GetVersion (google.protobuf.Empty) returns (VersionReply) {} rpc Upgrade (google.protobuf.Empty) returns (google.protobuf.Empty) {} rpc Reload (google.protobuf.Empty) returns (google.protobuf.Empty) {} rpc ChangeStatus (ChangeStatusRequest) returns (google.protobuf.Empty) {} rpc List (google.protobuf.Empty) returns (MirrorListReply) {} rpc MirrorInfo (MirrorIDRequest) returns (Mirror) {} rpc AddMirror (Mirror) returns (AddMirrorReply) {} rpc UpdateMirror (Mirror) returns (UpdateMirrorReply) {} rpc RemoveMirror (MirrorIDRequest) returns (google.protobuf.Empty) {} rpc RefreshRepository (RefreshRepositoryRequest) returns (google.protobuf.Empty) {} rpc ScanMirror (ScanMirrorRequest) returns (ScanMirrorReply) {} rpc StatsFile (StatsFileRequest) returns (StatsFileReply) {} rpc StatsMirror (StatsMirrorRequest) returns (StatsMirrorReply) {} rpc Ping (google.protobuf.Empty) returns (google.protobuf.Empty) {} rpc GetMirrorLogs (GetMirrorLogsRequest) returns (GetMirrorLogsReply) {} // Tools rpc MatchMirror (MatchRequest) returns (MatchReply) {} } message VersionReply { string Version = 1; string Build = 2; string GoVersion = 3; string OS = 4; string Arch = 5; int32 GoMaxProcs = 6; } message MatchRequest { string Pattern = 1; } message Mirror { int32 ID = 1; string Name = 2; string HttpURL = 3; string RsyncURL = 4; string FtpURL = 5; string SponsorName = 6; string SponsorURL = 7; string SponsorLogoURL = 8; string AdminName = 9; string AdminEmail = 10; string CustomData = 11; bool ContinentOnly = 12; bool CountryOnly = 13; bool ASOnly = 14; int32 Score = 15; float Latitude = 16; float Longitude = 17; string ContinentCode = 18; string CountryCodes = 19; string ExcludedCountryCodes = 20; uint32 Asnum = 21; string Comment = 22; bool Enabled = 23; bool Up = 24; string ExcludeReason = 25; google.protobuf.Timestamp StateSince = 26; int32 AllowRedirects = 27; google.protobuf.Timestamp LastSync = 28; google.protobuf.Timestamp LastSuccessfulSync = 29; google.protobuf.Timestamp LastModTime = 30; } message MirrorListReply { repeated Mirror Mirrors = 1; } message MirrorID { int32 ID = 1; string Name = 2; } message MatchReply { repeated MirrorID Mirrors = 1; } message ChangeStatusRequest { int32 ID = 1; bool Enabled = 2; } message MirrorIDRequest { int32 ID = 1; } message AddMirrorReply { float Latitude = 1; float Longitude = 2; string Country = 3; string Continent = 4; string ASN = 5; repeated string Warnings = 6; } message UpdateMirrorReply { string Diff = 1; } message RefreshRepositoryRequest { bool Rehash = 1; } message ScanMirrorRequest { int32 ID = 1; bool AutoEnable = 2; enum Method { ALL = 0; FTP = 1; RSYNC = 2; } Method Protocol = 3; } message ScanMirrorReply { bool Enabled = 1; int64 
FilesIndexed = 2; int64 KnownIndexed = 3; int64 Removed = 4; int64 TZOffsetMs = 5; } message StatsFileRequest { string Pattern = 1; google.protobuf.Timestamp DateStart = 2; google.protobuf.Timestamp DateEnd = 3; } message StatsFileReply { map<string, int64> files = 1; } message StatsMirrorRequest { int32 ID = 1; google.protobuf.Timestamp DateStart = 2; google.protobuf.Timestamp DateEnd = 3; } message StatsMirrorReply { Mirror Mirror = 1; int64 Requests = 2; int64 Bytes = 3; } message GetMirrorLogsRequest { int32 ID = 1; int32 MaxResults = 2; } message GetMirrorLogsReply { repeated string line = 1; }mirrorbits-0.5.1+git20210123.eeea0e0+ds1/rpc/utils.go000066400000000000000000000067061411706463700214660ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package rpc import ( "github.com/etix/mirrorbits/mirrors" "github.com/golang/protobuf/ptypes" ) func MirrorToRPC(m *mirrors.Mirror) (*Mirror, error) { stateSince, err := ptypes.TimestampProto(m.StateSince.Time) if err != nil { return nil, err } lastSync, err := ptypes.TimestampProto(m.LastSync.Time) if err != nil { return nil, err } lastSuccessfulSync, err := ptypes.TimestampProto(m.LastSuccessfulSync.Time) if err != nil { return nil, err } lastModTime, err := ptypes.TimestampProto(m.LastModTime.Time) if err != nil { return nil, err } return &Mirror{ ID: int32(m.ID), Name: m.Name, HttpURL: m.HttpURL, RsyncURL: m.RsyncURL, FtpURL: m.FtpURL, SponsorName: m.SponsorName, SponsorURL: m.SponsorURL, SponsorLogoURL: m.SponsorLogoURL, AdminName: m.AdminName, AdminEmail: m.AdminEmail, CustomData: m.CustomData, ContinentOnly: m.ContinentOnly, CountryOnly: m.CountryOnly, ASOnly: m.ASOnly, Score: int32(m.Score), Latitude: m.Latitude, Longitude: m.Longitude, ContinentCode: m.ContinentCode, CountryCodes: m.CountryCodes, ExcludedCountryCodes: m.ExcludedCountryCodes, Asnum: uint32(m.Asnum), Comment: m.Comment, Enabled: m.Enabled, Up: m.Up, ExcludeReason: m.ExcludeReason, StateSince: stateSince, AllowRedirects: int32(m.AllowRedirects), LastSync: lastSync, LastSuccessfulSync: lastSuccessfulSync, LastModTime: lastModTime, }, nil } func MirrorFromRPC(m *Mirror) (*mirrors.Mirror, error) { stateSince, err := ptypes.Timestamp(m.StateSince) if err != nil { return nil, err } lastSync, err := ptypes.Timestamp(m.LastSync) if err != nil { return nil, err } lastSuccessfulSync, err := ptypes.Timestamp(m.LastSuccessfulSync) if err != nil { return nil, err } lastModTime, err := ptypes.Timestamp(m.LastModTime) if err != nil { return nil, err } return &mirrors.Mirror{ ID: int(m.ID), Name: m.Name, HttpURL: m.HttpURL, RsyncURL: m.RsyncURL, FtpURL: m.FtpURL, SponsorName: m.SponsorName, SponsorURL: m.SponsorURL, SponsorLogoURL: m.SponsorLogoURL, AdminName: m.AdminName, AdminEmail: m.AdminEmail, CustomData: m.CustomData, ContinentOnly: m.ContinentOnly, CountryOnly: m.CountryOnly, ASOnly: m.ASOnly, Score: int(m.Score), Latitude: m.Latitude, Longitude: m.Longitude, ContinentCode: m.ContinentCode, CountryCodes: m.CountryCodes, ExcludedCountryCodes: m.ExcludedCountryCodes, Asnum: uint(m.Asnum), Comment: m.Comment, Enabled: m.Enabled, Up: m.Up, ExcludeReason: m.ExcludeReason, StateSince: mirrors.Time{}.FromTime(stateSince), AllowRedirects: mirrors.Redirects(m.AllowRedirects), LastSync: mirrors.Time{}.FromTime(lastSync), LastSuccessfulSync: mirrors.Time{}.FromTime(lastSuccessfulSync), LastModTime: mirrors.Time{}.FromTime(lastModTime), }, nil } 
mirrorbits-0.5.1+git20210123.eeea0e0+ds1/scan/000077500000000000000000000000001411706463700201265ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/scan/ftp.go000066400000000000000000000073231411706463700212530ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package scan import ( "fmt" "net/url" "strings" "time" ftp "github.com/etix/goftp" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/utils" "github.com/gomodule/redigo/redis" ) const ( ftpConnTimeout = 5 * time.Second ftpRWTimeout = 30 * time.Second ) // FTPScanner is the implementation of an ftp scanner type FTPScanner struct { scan *scan featMLST bool featMDTM bool precision core.Precision // Used for truncating time for comparison } // Scan starts an ftp scan of the given mirror func (f *FTPScanner) Scan(scanurl, identifier string, conn redis.Conn, stop <-chan struct{}) (core.Precision, error) { if !strings.HasPrefix(scanurl, "ftp://") { return 0, fmt.Errorf("%s does not start with ftp://", scanurl) } ftpurl, err := url.Parse(scanurl) if err != nil { return 0, err } host := ftpurl.Host if !strings.Contains(host, ":") { host += ":21" } if utils.IsStopped(stop) { return 0, ErrScanAborted } c, err := ftp.DialTimeout(host, ftpConnTimeout, ftpRWTimeout) if err != nil { return 0, err } defer c.Quit() username, password := "anonymous", "anonymous" if ftpurl.User != nil { username = ftpurl.User.Username() pass, hasPassword := ftpurl.User.Password() if hasPassword { password = pass } } err = c.Login(username, password) if err != nil { return 0, err } _, f.featMLST = c.Feature("MLST") _, f.featMDTM = c.Feature("MDTM") if !f.featMLST || !f.featMDTM { log.Warning("This server does not support some of the RFC 3659 extensions, consider using rsync instead.") } log.Infof("[%s] Requesting file list via ftp...", identifier) files := make([]*filedata, 0, 1000) err = c.ChangeDir(ftpurl.Path) if err != nil { return 0, fmt.Errorf("ftp error %s", err.Error()) } _, err = c.CurrentDir() if err != nil { return 0, fmt.Errorf("ftp error %s", err.Error()) } // Remove the trailing slash prefix := strings.TrimRight(ftpurl.Path, "/") files, err = f.walkFtp(c, files, prefix+"/", stop) if err != nil { return 0, fmt.Errorf("ftp error %s", err.Error()) } count := 0 for _, fd := range files { fd.path = strings.TrimPrefix(fd.path, prefix) f.scan.ScannerAddFile(*fd) count++ } return f.precision, nil } // Walk inside an FTP repository func (f *FTPScanner) walkFtp(c *ftp.ServerConn, files []*filedata, path string, stop <-chan struct{}) ([]*filedata, error) { if utils.IsStopped(stop) { return nil, ErrScanAborted } flist, err := c.List(path) if err != nil { return nil, err } for _, e := range flist { if e.Type == ftp.EntryTypeFile { newf := &filedata{} newf.path = path + e.Name newf.size = int64(e.Size) if f.featMDTM { t, _ := c.LastModificationDate(path + e.Name) if !t.IsZero() { newf.modTime = t if f.precision != core.Precision(time.Millisecond) { // We are not yet sure that we can have millisecond precision if newf.modTime.Truncate(time.Second).Equal(newf.modTime) { // The mod time is precise up to the second (for this file) f.precision = core.Precision(time.Second) } else { // The mod time is precise up to the millisecond f.precision = core.Precision(time.Millisecond) } } } } if newf.modTime.IsZero() { if f.featMLST { newf.modTime = e.Time if f.precision == 0 { f.precision = core.Precision(time.Second) } } else { newf.modTime = time.Time{} } } files = append(files, newf) } else if 
e.Type == ftp.EntryTypeFolder { if e.Name == "." || e.Name == ".." { continue } files, err = f.walkFtp(c, files, path+e.Name+"/", stop) if err != nil { return files, err } } } return files, err } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/scan/rsync.go000066400000000000000000000107021411706463700216130ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package scan import ( "bufio" "errors" "fmt" "io" "net/url" "os/exec" "regexp" "strconv" "strings" "time" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/utils" "github.com/gomodule/redigo/redis" ) var ( rsyncOutputLine = regexp.MustCompile(`^.+\s+([0-9,]+)\s+([0-9/]+)\s+([0-9:]+)\s+(.*)$`) ) // RsyncScanner is the implementation of an rsync scanner type RsyncScanner struct { scan *scan } // Scan starts an rsync scan of the given mirror func (r *RsyncScanner) Scan(rsyncURL, identifier string, conn redis.Conn, stop <-chan struct{}) (core.Precision, error) { var env []string if !strings.HasPrefix(rsyncURL, "rsync://") { return 0, fmt.Errorf("%s does not start with rsync://", rsyncURL) } u, err := url.Parse(rsyncURL) if err != nil { return 0, err } // Extract the credentials if u.User != nil { if u.User.Username() != "" { env = append(env, fmt.Sprintf("USER=%s", u.User.Username())) } if password, ok := u.User.Password(); ok { env = append(env, fmt.Sprintf("RSYNC_PASSWORD=%s", password)) } // Remove the credentials from the URL as we pass them through the environnement u.User = nil } // Don't use the local timezone, use UTC env = append(env, "TZ=UTC") cmd := exec.Command("rsync", "-r", "--no-motd", "--timeout=30", "--contimeout=30", "--exclude=.~tmp~/", u.String()) // Setup the environnement cmd.Env = env stdout, err := cmd.StdoutPipe() if err != nil { return 0, err } stderr, err := cmd.StderrPipe() if err != nil { return 0, err } // Pipe stdout reader := bufio.NewReader(stdout) readerErr := bufio.NewReader(stderr) if utils.IsStopped(stop) { return 0, ErrScanAborted } // Start the process if err := cmd.Start(); err != nil { return 0, err } log.Infof("[%s] Requesting file list via rsync...", identifier) scanfinished := make(chan bool) go func() { select { case <-stop: cmd.Process.Kill() return case <-scanfinished: return } }() defer close(scanfinished) line, err := readln(reader) for err == nil { var size int64 var f filedata var modTime time.Time var modString string if utils.IsStopped(stop) { return 0, ErrScanAborted } // Parse one line returned by rsync ret := rsyncOutputLine.FindStringSubmatch(line) if ret[0][0] == 'd' || ret[0][0] == 'l' { // Skip directories and links goto cont } // Add the leading slash if ret[4][0] != '/' { ret[4] = "/" + ret[4] } // Parse the mod time modString = ret[2] + " " + ret[3] modTime, err = time.Parse("2006/01/02 15:04:05", modString) if err != nil { log.Errorf("[%s] ScanRsync: Invalid mod time: %s", identifier, modString) goto cont } // Remove the commas in the file size ret[1] = strings.Replace(ret[1], ",", "", -1) // Convert the size to int size, err = strconv.ParseInt(ret[1], 10, 64) if err != nil { log.Errorf("[%s] ScanRsync: Invalid size: %s", identifier, ret[1]) goto cont } // Fill the struct f.size = size f.modTime = modTime f.path = ret[4] r.scan.ScannerAddFile(f) cont: line, err = readln(reader) } rsyncErrors := []string{} for line, err = readln(readerErr); err == nil; line, err = readln(readerErr) { if strings.Contains(line, ": opendir ") { rsyncErrors = append(rsyncErrors, line) } } if err1 := cmd.Wait(); err1 != nil { switch 
err1.Error() { case "exit status 5": err1 = errors.New("rsync: Error starting client-server protocol") case "exit status 10": err1 = errors.New("rsync: Error in socket I/O") case "exit status 11": err1 = errors.New("rsync: Error in file I/O") case "exit status 23": for _, line := range rsyncErrors { log.Warningf("[%s] %s", identifier, line) } log.Warningf("[%s] rsync: Partial transfer due to error", identifier) err1 = nil case "exit status 30": err1 = errors.New("rsync: Timeout in data send/receive") case "exit status 35": err1 = errors.New("Timeout waiting for daemon connection") default: if utils.IsStopped(stop) { err1 = ErrScanAborted } else { err1 = errors.New("rsync: " + err1.Error()) } } return 0, err1 } if err != io.EOF { return 0, err } return core.Precision(time.Second), nil } func readln(r *bufio.Reader) (string, error) { var ( isPrefix = true err error line, ln []byte ) for isPrefix && err == nil { line, isPrefix, err = r.ReadLine() ln = append(ln, line...) } return string(ln), err } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/scan/scan.go000066400000000000000000000322261411706463700214060ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package scan import ( "errors" "fmt" "os" "path/filepath" "strconv" "time" . "github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/filesystem" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/network" "github.com/etix/mirrorbits/utils" "github.com/gomodule/redigo/redis" "github.com/op/go-logging" ) var ( // ErrScanAborted is returned when a scan is aborted by the user ErrScanAborted = errors.New("scan aborted") // ErrScanInProgress is returned when a scan is started while another is already in progress ErrScanInProgress = errors.New("scan already in progress") // ErrNoSyncMethod is returned when no sync protocol is available ErrNoSyncMethod = errors.New("no suitable URL for the scan") log = logging.MustGetLogger("main") ) // Scanner is the interface that all scanners must implement type Scanner interface { Scan(url, identifier string, conn redis.Conn, stop <-chan struct{}) (core.Precision, error) } type filedata struct { path string sha1 string sha256 string md5 string size int64 modTime time.Time } type scan struct { redis *database.Redis cache *mirrors.Cache conn redis.Conn mirrorid int filesTmpKey string count int64 } type ScanResult struct { MirrorID int MirrorName string FilesIndexed int64 KnownIndexed int64 Removed int64 TZOffsetMs int64 } // IsScanning returns true is a scan is already in progress for the given mirror func IsScanning(conn redis.Conn, id int) (bool, error) { return redis.Bool(conn.Do("EXISTS", fmt.Sprintf("SCANNING_%d", id))) } // Scan starts a scan of the given mirror func Scan(typ core.ScannerType, r *database.Redis, c *mirrors.Cache, url string, id int, stop <-chan struct{}) (*ScanResult, error) { // Connect to the database conn := r.Get() defer conn.Close() s := &scan{ redis: r, mirrorid: id, conn: conn, cache: c, } var scanner Scanner switch typ { case core.RSYNC: scanner = &RsyncScanner{ scan: s, } case core.FTP: scanner = &FTPScanner{ scan: s, } default: panic(fmt.Sprintf("Unknown scanner")) } // Get the mirror name name, err := redis.String(conn.Do("HGET", "MIRRORS", id)) if err != nil { return nil, err } // Try to acquire a lock so we don't have a scanning race // from different nodes. // Also make the key expire automatically in case our process // gets killed. 
lock := network.NewClusterLock(s.redis, fmt.Sprintf("SCANNING_%d", id), name) done, err := lock.Get() if err != nil { return nil, err } else if done == nil { return nil, ErrScanInProgress } defer lock.Release() s.setLastSync(conn, id, typ, 0, false) mirrors.PushLog(r, mirrors.NewLogScanStarted(id, typ)) defer func(err *error) { if err != nil && *err != nil { mirrors.PushLog(r, mirrors.NewLogError(id, *err)) } }(&err) conn.Send("MULTI") filesKey := fmt.Sprintf("MIRRORFILES_%d", id) s.filesTmpKey = fmt.Sprintf("MIRRORFILESTMP_%d", id) // Remove any left over conn.Send("DEL", s.filesTmpKey) var precision core.Precision precision, err = scanner.Scan(url, name, conn, stop) if err != nil { // Discard MULTI s.ScannerDiscard() // Remove the temporary key conn.Do("DEL", s.filesTmpKey) log.Errorf("[%s] %s", name, err.Error()) return nil, err } // Exec multi s.ScannerCommit() // Get the list of files no more present on this mirror var toremove []interface{} toremove, err = redis.Values(conn.Do("SDIFF", filesKey, s.filesTmpKey)) if err != nil { return nil, err } // Remove this mirror from the given file SET if len(toremove) > 0 { conn.Send("MULTI") for _, e := range toremove { log.Debugf("[%s] Removing %s from mirror", name, e) conn.Send("SREM", fmt.Sprintf("FILEMIRRORS_%s", e), id) conn.Send("DEL", fmt.Sprintf("FILEINFO_%d_%s", id, e)) // Publish update database.SendPublish(conn, database.MIRROR_FILE_UPDATE, fmt.Sprintf("%d %s", id, e)) } _, err = conn.Do("EXEC") if err != nil { return nil, err } } // Finally rename the temporary sets containing the list // of files for this mirror to the production key if s.count > 0 { _, err = conn.Do("RENAME", s.filesTmpKey, filesKey) if err != nil { return nil, err } } sinterKey := fmt.Sprintf("HANDLEDFILES_%d", id) // Count the number of files known on the remote end common, _ := redis.Int64(conn.Do("SINTERSTORE", sinterKey, "FILES", filesKey)) if err != nil { return nil, err } s.setLastSync(conn, id, typ, precision, true) var tzoffset int64 tzoffset, err = s.adjustTZOffset(name, precision) if err != nil { log.Warningf("Unable to check timezone shifts: %s", err) } log.Infof("[%s] Indexed %d files (%d known), %d removed", name, s.count, common, len(toremove)) res := &ScanResult{ MirrorID: id, MirrorName: name, FilesIndexed: s.count, KnownIndexed: common, Removed: int64(len(toremove)), TZOffsetMs: tzoffset, } mirrors.PushLog(r, mirrors.NewLogScanCompleted( res.MirrorID, res.FilesIndexed, res.KnownIndexed, res.Removed, res.TZOffsetMs)) return res, nil } func (s *scan) ScannerAddFile(f filedata) { s.count++ // Add all the files to a temporary key s.conn.Send("SADD", s.filesTmpKey, f.path) // Mark the file as being supported by this mirror rk := fmt.Sprintf("FILEMIRRORS_%s", f.path) s.conn.Send("SADD", rk, s.mirrorid) // Save the size of the current file found on this mirror ik := fmt.Sprintf("FILEINFO_%d_%s", s.mirrorid, f.path) s.conn.Send("HMSET", ik, "size", f.size, "modTime", f.modTime) // Publish update database.SendPublish(s.conn, database.MIRROR_FILE_UPDATE, fmt.Sprintf("%d %s", s.mirrorid, f.path)) } func (s *scan) ScannerDiscard() { s.conn.Do("DISCARD") } func (s *scan) ScannerCommit() error { _, err := s.conn.Do("EXEC") return err } func (s *scan) setLastSync(conn redis.Conn, id int, protocol core.ScannerType, precision core.Precision, successful bool) error { now := time.Now().UTC().Unix() conn.Send("MULTI") // Set the last sync time conn.Send("HSET", fmt.Sprintf("MIRROR_%d", id), "lastSync", now) // Set the last successful sync time if successful { if 
precision == 0 { precision = core.Precision(time.Second) } conn.Send("HMSET", fmt.Sprintf("MIRROR_%d", id), "lastSuccessfulSync", now, "lastSuccessfulSyncProtocol", protocol, "lastSuccessfulSyncPrecision", precision) } _, err := conn.Do("EXEC") // Publish an update on redis database.Publish(conn, database.MIRROR_UPDATE, strconv.Itoa(id)) return err } func (s *scan) adjustTZOffset(name string, precision core.Precision) (ms int64, err error) { type pair struct { local filesystem.FileInfo remote filesystem.FileInfo } var filepaths []string var pairs []pair var offsetmap map[int64]int var commonOffsetFound bool if s.cache == nil { log.Error("Skipping timezone check: missing cache in instance") return } if GetConfig().FixTimezoneOffsets == false { // We need to reset any previous value already // stored in the database. goto finish } // Get 100 random files from the mirror filepaths, err = redis.Strings(s.conn.Do("SRANDMEMBER", fmt.Sprintf("HANDLEDFILES_%d", s.mirrorid), 100)) if err != nil { return } pairs = make([]pair, 0, 100) // Get the metadata of each file for _, path := range filepaths { p := pair{} p.local, err = s.cache.GetFileInfo(path) if err != nil { return } p.remote, err = s.cache.GetFileInfoMirror(s.mirrorid, path) if err != nil { return } if p.remote.ModTime.IsZero() { // Invalid mod time continue } if p.local.Size != p.remote.Size { // File differ: comparing the modfile will fail continue } // Add the file to valid pairs pairs = append(pairs, p) } if len(pairs) < 10 || len(pairs) < len(filepaths)/2 { // Less than half the files we got have a size // match, this is very suspicious. Skip the // check and reset the offset in the db. goto warn } // Compute the diff between local and remote for those files offsetmap = make(map[int64]int) for _, p := range pairs { // Convert to millisecond since unix timestamp truncating to the available precision local := p.local.ModTime.Truncate(precision.Duration()).UnixNano() / int64(time.Millisecond) remote := p.remote.ModTime.Truncate(precision.Duration()).UnixNano() / int64(time.Millisecond) diff := local - remote offsetmap[diff]++ } for k, v := range offsetmap { // Find the common offset (if any) of at least 90% of our subset if v >= int(float64(len(pairs))/100*90) { ms = k commonOffsetFound = true break } } warn: if !commonOffsetFound { log.Warningf("[%s] Unable to guess the timezone offset", name) } finish: // Store the offset in the database key := fmt.Sprintf("MIRROR_%d", s.mirrorid) _, err = s.conn.Do("HMSET", key, "tzoffset", ms) if err != nil { return } // Publish update database.Publish(s.conn, database.MIRROR_UPDATE, strconv.Itoa(s.mirrorid)) if ms != 0 { log.Noticef("[%s] Timezone offset detected: applied correction of %dms", name, ms) } return } type sourcescanner struct { } // Walk inside the source/reference repository func (s *sourcescanner) walkSource(conn redis.Conn, path string, f os.FileInfo, rehash bool, err error) (*filedata, error) { if f == nil || f.IsDir() || f.Mode()&os.ModeSymlink != 0 { return nil, nil } d := new(filedata) d.path = path[len(GetConfig().Repository):] d.size = f.Size() d.modTime = f.ModTime() // Get the previous file properties properties, err := redis.Strings(conn.Do("HMGET", fmt.Sprintf("FILE_%s", d.path), "size", "modTime", "sha1", "sha256", "md5")) if err != nil && err != redis.ErrNil { return nil, err } else if len(properties) < 5 { // This will force a rehash properties = make([]string, 5) } size, _ := strconv.ParseInt(properties[0], 10, 64) modTime, _ := time.Parse("2006-01-02 15:04:05.999999999 
-0700 MST", properties[1]) sha1 := properties[2] sha256 := properties[3] md5 := properties[4] rehash = rehash || (GetConfig().Hashes.SHA1 && len(sha1) == 0) || (GetConfig().Hashes.SHA256 && len(sha256) == 0) || (GetConfig().Hashes.MD5 && len(md5) == 0) if rehash || size != d.size || !modTime.Equal(d.modTime) { h, err := filesystem.HashFile(GetConfig().Repository + d.path) if err != nil { log.Warningf("%s: hashing failed: %s", d.path, err.Error()) } else { d.sha1 = h.Sha1 d.sha256 = h.Sha256 d.md5 = h.Md5 if len(d.sha1) > 0 { log.Infof("%s: SHA1 %s", d.path, d.sha1) } if len(d.sha256) > 0 { log.Infof("%s: SHA256 %s", d.path, d.sha256) } if len(d.md5) > 0 { log.Infof("%s: MD5 %s", d.path, d.md5) } } } else { d.sha1 = sha1 d.sha256 = sha256 d.md5 = md5 } return d, nil } // ScanSource starts a scan of the local repository func ScanSource(r *database.Redis, forceRehash bool, stop <-chan struct{}) (err error) { s := &sourcescanner{} conn := r.Get() defer conn.Close() if conn.Err() != nil { return conn.Err() } sourceFiles := make([]*filedata, 0, 1000) //TODO lock atomically inside redis to avoid two simultaneous scan if _, err := os.Stat(GetConfig().Repository); os.IsNotExist(err) { return fmt.Errorf("%s: No such file or directory", GetConfig().Repository) } log.Info("[source] Scanning the filesystem...") err = filepath.Walk(GetConfig().Repository, func(path string, f os.FileInfo, err error) error { fd, err := s.walkSource(conn, path, f, forceRehash, err) if err != nil { return err } if fd != nil { sourceFiles = append(sourceFiles, fd) } return nil }) if utils.IsStopped(stop) { return ErrScanAborted } if err != nil { return err } log.Info("[source] Indexing the files...") lock := network.NewClusterLock(r, "SOURCE_REPO_SYNC", "source repository") retry := 10 for { if retry == 0 { return ErrScanInProgress } done, err := lock.Get() if err != nil { return err } else if done != nil { break } time.Sleep(1 * time.Second) retry-- } defer lock.Release() conn.Send("MULTI") // Remove any left over conn.Send("DEL", "FILES_TMP") // Add all the files to a temporary key count := 0 for _, e := range sourceFiles { conn.Send("SADD", "FILES_TMP", e.path) count++ } _, err = conn.Do("EXEC") if err != nil { return err } // Do a diff between the sets to get the removed files toremove, err := redis.Values(conn.Do("SDIFF", "FILES", "FILES_TMP")) // Create/Update the files' hash keys with the fresh infos conn.Send("MULTI") for _, e := range sourceFiles { conn.Send("HMSET", fmt.Sprintf("FILE_%s", e.path), "size", e.size, "modTime", e.modTime, "sha1", e.sha1, "sha256", e.sha256, "md5", e.md5) // Publish update database.SendPublish(conn, database.FILE_UPDATE, e.path) } // Remove old keys if len(toremove) > 0 { for _, e := range toremove { conn.Send("DEL", fmt.Sprintf("FILE_%s", e)) // Publish update database.SendPublish(conn, database.FILE_UPDATE, fmt.Sprintf("%s", e)) } } // Finally rename the temporary sets containing the list // of files to the production key conn.Send("RENAME", "FILES_TMP", "FILES") _, err = conn.Do("EXEC") if err != nil { return err } log.Infof("[source] Scanned %d files", count) return nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/scan/trace.go000066400000000000000000000057051411706463700215620ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package scan import ( "bufio" "context" "errors" "fmt" "net" "net/http" "strconv" "time" . 
"github.com/etix/mirrorbits/config" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/database" "github.com/etix/mirrorbits/mirrors" "github.com/etix/mirrorbits/utils" ) var ( userAgent = "Mirrorbits/" + core.VERSION + " TRACE" clientTimeout = time.Duration(20 * time.Second) clientDeadline = time.Duration(40 * time.Second) // ErrNoTrace is returned when no trace file is found ErrNoTrace = errors.New("No trace file") ) // Trace is the internal trace handler type Trace struct { redis *database.Redis transport http.Transport httpClient http.Client stop <-chan struct{} } // NewTraceHandler returns a new instance of the trace file handler. // Trace files are used to compute the time offset between a mirror // and the local repository. func NewTraceHandler(redis *database.Redis, stop <-chan struct{}) *Trace { t := &Trace{ redis: redis, stop: stop, } t.transport = http.Transport{ DisableKeepAlives: true, MaxIdleConnsPerHost: 0, Dial: func(network, addr string) (net.Conn, error) { deadline := time.Now().Add(clientDeadline) c, err := net.DialTimeout(network, addr, clientTimeout) if err != nil { return nil, err } c.SetDeadline(deadline) return c, nil }, } t.httpClient = http.Client{ Transport: &t.transport, } return t } // GetLastUpdate connects in HTTP to the mirror to get the latest // trace file and computes the offset of the mirror. func (t *Trace) GetLastUpdate(mirror mirrors.Mirror) error { traceFile := GetConfig().TraceFileLocation if len(traceFile) == 0 { return ErrNoTrace } log.Debugf("Getting latest trace file for %s...", mirror.Name) // Prepare the HTTP request req, err := http.NewRequest("GET", utils.ConcatURL(mirror.HttpURL, traceFile), nil) req.Header.Set("User-Agent", userAgent) req.Close = true // Prepare contexts ctx, cancel := context.WithTimeout(req.Context(), clientDeadline) ctx = context.WithValue(ctx, core.ContextMirrorID, mirror.ID) req = req.WithContext(ctx) defer cancel() go func() { select { case <-t.stop: cancel() case <-ctx.Done(): } }() resp, err := t.httpClient.Do(req) if err != nil { return err } defer resp.Body.Close() scanner := bufio.NewScanner(bufio.NewReader(resp.Body)) scanner.Split(bufio.ScanWords) scanner.Scan() if err := scanner.Err(); err != nil { return err } timestamp, err := strconv.ParseInt(scanner.Text(), 10, 64) if err != nil { return err } conn := t.redis.Get() defer conn.Close() _, err = conn.Do("HSET", fmt.Sprintf("MIRROR_%d", mirror.ID), "lastModTime", timestamp) if err != nil { return err } // Publish an update on redis database.Publish(conn, database.MIRROR_UPDATE, strconv.Itoa(mirror.ID)) log.Debugf("[%s] trace last sync: %s", mirror.Name, time.Unix(timestamp, 0)) return nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/templates/000077500000000000000000000000001411706463700212005ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/templates/base.html000066400000000000000000000103561411706463700230050ustar00rootroot00000000000000{{define "base"}} {{template "title" .}} {{if not .LocalJSPath}} {{else}} {{end}} {{template "head" .}}
{{template "body" .}}
{{end}}mirrorbits-0.5.1+git20210123.eeea0e0+ds1/templates/mirrorlist.html000066400000000000000000000254621411706463700243050ustar00rootroot00000000000000{{define "title"}}Mirrorlist {{.FileInfo.Path}}{{end}} {{define "headline"}}{{.FileInfo.Path}}{{end}} {{define "head"}} {{if not .LocalJSPath}} {{else}} {{end}} {{end}} {{define "body"}}

Client

You are connecting with IP address {{.IP}}, which belongs to autonomous system {{.ClientInfo.ASName}} (ASN{{.ClientInfo.ASNum}}).
{{if .ClientInfo.IsValid}}We believe you are {{if .ClientInfo.City}}near {{.ClientInfo.City}} in {{else}}somewhere in {{end}}{{.ClientInfo.Country}} and have selected mirrors based on this.{{else}}We were not able to use your IP to approximate your location, so have chosen the mirrors at random.{{end}}

File

The file {{.FileInfo.Path}} has a size of {{sizeof .FileInfo.Size}} ({{.FileInfo.Size}} bytes) and was last modified on {{dateutc .FileInfo.ModTime}}.

Known hashes:
MD5{{if .FileInfo.Md5}}{{.FileInfo.Md5}}{{else}}N/A{{end}}
SHA1{{if .FileInfo.Sha1}}{{.FileInfo.Sha1}}{{else}}N/A{{end}}
SHA256{{if .FileInfo.Sha256}}{{.FileInfo.Sha256}}{{else}}N/A{{end}}


Mirrors

{{if .Fallback}}

Warning: this file is not served by any mirror; falling back to the configured fallback mirrors.

{{end}} {{if .MirrorList}} {{range $i, $v := .MirrorList}} {{end}}
Rank Mirror Name URL Country Continent Distance Selection
{{add $i 1}}.{{if $v.SponsorName}}{{$v.SponsorName}}{{else}}{{$v.Name}}{{end}}{{$v.HttpURL}}{{$v.CountryCodes}}{{$v.ContinentCode}}{{printf "%.0f" $v.Distance}} Km{{if $v.Weight}}{{if ge $v.Weight 1.0}}{{printf "%.0f" $v.Weight}}{{else}}<1{{end}}%{{else}}n/a{{end}}
{{else}} No mirrors for this file {{end}} {{if .ExcludedList}}

Excluded Mirrors

{{range $i, $v := .ExcludedList}} {{end}}
Mirror Name URL Country Continent Distance Exclude Reason
{{if $v.SponsorName}}{{$v.SponsorName}}{{else}}{{$v.Name}}{{end}}{{$v.HttpURL}}{{$v.CountryCodes}}{{$v.ContinentCode}}{{printf "%.0f" $v.Distance}} Km{{$v.ExcludeReason}}
{{end}}
{{end}} mirrorbits-0.5.1+git20210123.eeea0e0+ds1/templates/mirrorstats.html000066400000000000000000000220141411706463700244560ustar00rootroot00000000000000{{define "title"}}Mirrorstats{{end}} {{define "headline"}}Mirrorstats{{end}} {{define "head"}} {{if not .LocalJSPath}} {{else}} {{end}} {{end}} {{define "body"}}
{{if .HasTZAdjustement}}{{end}} {{range $i, $v := .List}} {{if $.HasTZAdjustement}}{{end}} {{end}}
Mirror Since 00:00 UTC… Last update Adjusted TZ
{{$v.Name}}
{{$v.Downloads}}
downloads
{{if $v.SyncOffset.Valid}}{{$v.SyncOffset.HumanReadable}}{{else}}unknown{{end}}{{if ne $v.TZOffset 0}}{{$v.TZOffset}}{{end}}
{{sizeof $v.Bytes}}
transferred
{{end}} mirrorbits-0.5.1+git20210123.eeea0e0+ds1/testing/000077500000000000000000000000001411706463700206575ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/testing/redis.go000066400000000000000000000011701411706463700223130ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package testing import ( "github.com/etix/mirrorbits/database" "github.com/gomodule/redigo/redis" "github.com/rafaeljusto/redigomock" ) type redisPoolMock struct { Conn *redigomock.Conn } func (r *redisPoolMock) Get() redis.Conn { return r.Conn } func (r *redisPoolMock) Close() error { return nil } // PrepareRedisTest initialize redis tests func PrepareRedisTest() (*redigomock.Conn, *database.Redis) { mock := redigomock.NewConn() pool := &redisPoolMock{ Conn: mock, } conn := database.NewRedisCustomPool(pool) return mock, conn } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/utils/000077500000000000000000000000001411706463700203425ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/utils/utils.go000066400000000000000000000136241411706463700220370ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic Fauvet // Licensed under the MIT license package utils import ( "fmt" "math" "os" "strings" "time" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/network" ) const ( // DegToRad is a constant to convert degrees to radians DegToRad = 0.017453292519943295769236907684886127134428718885417 // N[Pi/180, 50] // RadToDeg is a constant to convert radians to degrees RadToDeg = 57.295779513082320876798154814105170332405472466564 // N[180/Pi, 50] ) // NormalizeURL adds a trailing slash to the URL func NormalizeURL(url string) string { if url != "" && !strings.HasSuffix(url, "/") { url += "/" } return url } // GetDistanceKm returns the distance in km between two coordinates func GetDistanceKm(lat1, lon1, lat2, lon2 float32) float32 { var R float32 = 6371 // radius of the earth in Km dLat := (lat2 - lat1) * float32(DegToRad) dLon := (lon2 - lon1) * float32(DegToRad) a := math.Sin(float64(dLat/2))*math.Sin(float64(dLat/2)) + math.Cos(float64(lat1*DegToRad))*math.Cos(float64(lat2*DegToRad))*math.Sin(float64(dLon/2))*math.Sin(float64(dLon/2)) c := 2 * math.Atan2(math.Sqrt(a), math.Sqrt(1-a)) return R * float32(c) } // Min returns the smallest of the two values func Min(v1, v2 int) int { if v1 < v2 { return v1 } return v2 } // Max returns the highest of the two values func Max(v1, v2 int) int { if v1 > v2 { return v1 } return v2 } // Add does a simple addition func Add(x, y int) int { return x + y } // Version returns the version as a string func Version() string { return core.VERSION } // Hostname return the host name as a string func Hostname() string { hostname, _ := os.Hostname() return hostname } // IsInSlice returns true is `a` is contained in `list` // Warning: this is slow, don't use it for long datasets func IsInSlice(a string, list []string) bool { for _, b := range list { if b == a { return true } } return false } // IsAdditionalCountry returns true if the clientInfo country is in list func IsAdditionalCountry(clientInfo network.GeoIPRecord, list []string) bool { if !clientInfo.IsValid() { return false } for i, b := range list { if i > 0 && b == clientInfo.CountryCode { return true } } return false } // IsPrimaryCountry returns true if the clientInfo country is the primary country func IsPrimaryCountry(clientInfo network.GeoIPRecord, list []string) bool { if !clientInfo.IsValid() { return false } if len(list) > 0 && list[0] 
== clientInfo.CountryCode { return true } return false } // IsStopped returns true if a stop has been requested func IsStopped(stop <-chan struct{}) bool { select { case <-stop: return true default: return false } } // ReadableSize returns a file size in a human readable form func ReadableSize(value int64) string { units := []string{"bytes", "KB", "MB", "GB", "TB"} v := float64(value) for _, u := range units { if v < 1024 || u == "TB" { return fmt.Sprintf("%3.1f %s", v, u) } v /= 1024 } return "" } // ElapsedSec returns true if lastTimestamp + elapsed time is in the past func ElapsedSec(lastTimestamp int64, elapsedTime int64) bool { if lastTimestamp+elapsedTime < time.Now().UTC().Unix() { return true } return false } // Plural returns a single 's' if there are more than one value func Plural(value interface{}) string { n, ok := value.(int) if ok && n > 1 || n < -1 { return "s" } return "" } // ConcatURL concatenate the url and path func ConcatURL(url, path string) string { if strings.HasSuffix(url, "/") && strings.HasPrefix(path, "/") { return url[:len(url)-1] + path } if !strings.HasSuffix(url, "/") && !strings.HasPrefix(path, "/") { return url + "/" + path } return url + path } // FormattedDateUTC returns the date formatted as RFC1123 func FormattedDateUTC(t time.Time) string { return t.UTC().Format(time.RFC1123) } // TimeKeyCoverage returns a slice of strings covering the date range // used in the redis backend. func TimeKeyCoverage(start, end time.Time) (dates []string) { if start.Day() == end.Day() && start.Month() == end.Month() && start.Year() == end.Year() { dates = append(dates, start.Format("2006_01_02")) return } if start.Day() != 1 { month := start.Month() for { if start.Month() != month || start.Equal(end) { break } dates = append(dates, start.Format("2006_01_02")) start = start.AddDate(0, 0, 1) } } for { tmpyear := time.Date(start.Year()+1, 1, 1, 0, 0, 0, 0, start.Location()) tmpmonth := time.Date(start.Year(), start.Month()+1, 1, 0, 0, 0, 0, start.Location()) if start.Day() == 1 && start.Month() == 1 && (tmpyear.Before(end) || tmpyear.Equal(end)) { dates = append(dates, start.Format("2006")) start = tmpyear } else if tmpmonth.Before(end) || tmpmonth.Equal(end) { dates = append(dates, start.Format("2006_01")) start = tmpmonth } else { break } } for { if start.AddDate(0, 0, 1).After(end) { break } dates = append(dates, start.Format("2006_01_02")) start = start.AddDate(0, 0, 1) } return } // FuzzyTimeStr returns the duration as fuzzy time func FuzzyTimeStr(duration time.Duration) string { hours := duration.Hours() minutes := duration.Minutes() if int(minutes) == 0 { return "up-to-date" } if minutes < 0 { return "in the future" } if hours < 1 { return fmt.Sprintf("%d minute%s ago", int(duration.Minutes()), Plural(int(duration.Minutes()))) } if hours/24 < 1 { return fmt.Sprintf("%d hour%s ago", int(hours), Plural(int(hours))) } if hours/24/365 > 1 { return fmt.Sprintf("%d year%s ago", int(hours/24/365), Plural(int(hours/24/365))) } return fmt.Sprintf("%d day%s ago", int(hours/24), Plural(int(hours/24))) } // SanitizeLocationCodes sanitizes the given location codes func SanitizeLocationCodes(input string) string { input = strings.Replace(input, ",", " ", -1) ccodes := strings.Fields(input) output := "" for _, c := range ccodes { output += strings.ToUpper(c) + " " } return strings.TrimRight(output, " ") } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/utils/utils_test.go000066400000000000000000000140731411706463700230750ustar00rootroot00000000000000// Copyright (c) 2014-2019 Ludovic 
Fauvet // Licensed under the MIT license package utils import ( "testing" "time" "github.com/etix/mirrorbits/core" "github.com/etix/mirrorbits/network" ) func TestNormalizeURL(t *testing.T) { s := []string{ "", "", "rsync://test.com", "rsync://test.com/", "rsync://test.com/", "rsync://test.com/", } if len(s)%2 != 0 { t.Fatal("not multiple of 2") } for i := 0; i < len(s); i += 2 { if r := NormalizeURL(s[i]); r != s[i+1] { t.Fatalf("%q: expected %q, got %q", s[i], s[i+1], r) } } } func TestGetDistanceKm(t *testing.T) { if r := GetDistanceKm(48.8567, 2.3508, 40.7127, 74.0059); int(r) != 5514 { t.Fatalf("Expected 5514, got %f", r) } if r := GetDistanceKm(48.8567, 2.3508, 48.8567, 2.3508); int(r) != 0 { t.Fatalf("Expected 0, got %f", r) } } func TestMin(t *testing.T) { if r := Min(-10, 5); r != -10 { t.Fatalf("Expected -10, got %d", r) } } func TestMax(t *testing.T) { if r := Max(-10, 5); r != 5 { t.Fatalf("Expected 5, got %d", r) } } func TestAdd(t *testing.T) { if r := Add(2, 40); r != 42 { t.Fatalf("Expected 42, got %d", r) } } func TestVersion(t *testing.T) { if r := Version(); len(r) == 0 || r != core.VERSION { t.Fatalf("Expected %s, got %s", core.VERSION, r) } } func TestHostname(t *testing.T) { if r := Hostname(); len(r) == 0 { t.Fatalf("Expected a valid hostname") } } func TestIsInSlice(t *testing.T) { var b bool list := []string{"aaa", "bbb", "ccc"} b = IsInSlice("ccc", list) if !b { t.Fatal("Expected true, got false") } b = IsInSlice("b", list) if b { t.Fatal("Expected false, got true") } b = IsInSlice("", list) if b { t.Fatal("Expected false, got true") } } func TestIsAdditionalCountry(t *testing.T) { var b bool list := []string{"FR", "DE", "GR"} clientInfo := network.GeoIPRecord{ CountryCode: "FR", } b = IsAdditionalCountry(clientInfo, list) if b { t.Fatal("Expected false, got true") } clientInfo = network.GeoIPRecord{ CountryCode: "GR", } b = IsAdditionalCountry(clientInfo, list) if !b { t.Fatal("Expected true, got false") } } func TestIsPrimaryCountry(t *testing.T) { var b bool list := []string{"FR", "DE", "GR"} clientInfo := network.GeoIPRecord{ CountryCode: "FR", } b = IsPrimaryCountry(clientInfo, list) if !b { t.Fatal("Expected true, got false") } clientInfo = network.GeoIPRecord{ CountryCode: "GR", } b = IsPrimaryCountry(clientInfo, list) if b { t.Fatal("Expected false, got true") } } func TestIsStopped(t *testing.T) { stop := make(chan struct{}, 1) if IsStopped(stop) { t.Fatal("Expected false, got true") } close(stop) if !IsStopped(stop) { t.Fatal("Expected true, got false") } } func TestReadableSize(t *testing.T) { ivalues := []int64{0, 1, 1024, 1000000} svalues := []string{"0.0 bytes", "1.0 bytes", "1.0 KB", "976.6 KB"} for i := range ivalues { if r := ReadableSize(ivalues[i]); r != svalues[i] { t.Fatalf("Expected %q, got %q", svalues[i], r) } } } func TestElapsedSec(t *testing.T) { now := time.Now().UTC().Unix() lastTimestamp := now - 1000 if ElapsedSec(lastTimestamp, 500) == false { t.Fatalf("Expected true, got false") } if ElapsedSec(lastTimestamp, 5000) == true { t.Fatalf("Expected false, got true") } } func TestPlural(t *testing.T) { if Plural(2) != "s" { t.Fatalf("Expected 's', got ''") } if Plural(10000000) != "s" { t.Fatalf("Expected 's', got ''") } if Plural(-2) != "s" { t.Fatalf("Expected 's', got ''") } if Plural(1) != "" { t.Fatalf("Expected '', got 's'") } if Plural(-1) != "" { t.Fatalf("Expected '', got 's'") } if Plural(0) != "" { t.Fatalf("Expected '', got 's'") } } func TestConcatURL(t *testing.T) { part1 := "http://test.example/somedir/" part2 := 
"/somefile.bin" result := "http://test.example/somedir/somefile.bin" if r := ConcatURL(part1, part2); r != result { t.Fatalf("Expected %s, got %s", result, r) } part1 = "http://test.example/somedir" part2 = "/somefile.bin" result = "http://test.example/somedir/somefile.bin" if r := ConcatURL(part1, part2); r != result { t.Fatalf("Expected %s, got %s", result, r) } part1 = "http://test.example/somedir" part2 = "somefile.bin" result = "http://test.example/somedir/somefile.bin" if r := ConcatURL(part1, part2); r != result { t.Fatalf("Expected %s, got %s", result, r) } } func TestTimeKeyCoverage(t *testing.T) { date1Start := time.Date(2015, 10, 30, 12, 42, 11, 0, time.UTC) date1End := time.Date(2015, 12, 2, 13, 42, 11, 0, time.UTC) result1 := []string{"2015_10_30", "2015_10_31", "2015_11", "2015_12_01"} result := TimeKeyCoverage(date1Start, date1End) if len(result) != len(result1) { t.Fatalf("Expect %d elements, got %d", len(result1), len(result)) } for i, r := range result { if r != result1[i] { t.Fatalf("Expect %#v, got %#v", result1, result) } } /* */ date2Start := time.Date(2015, 12, 2, 12, 42, 11, 0, time.UTC) date2End := time.Date(2015, 12, 2, 13, 42, 11, 0, time.UTC) result2 := []string{"2015_12_02"} result = TimeKeyCoverage(date2Start, date2End) if len(result) != len(result2) { t.Fatalf("Expect %d elements, got %d", len(result2), len(result)) } for i, r := range result { if r != result2[i] { t.Fatalf("Expect %#v, got %#v", result2, result) } } /* */ date3Start := time.Date(2015, 1, 1, 12, 42, 11, 0, time.UTC) date3End := time.Date(2017, 1, 1, 13, 42, 11, 0, time.UTC) result3 := []string{"2015", "2016"} result = TimeKeyCoverage(date3Start, date3End) if len(result) != len(result3) { t.Fatalf("Expect %d elements, got %d", len(result3), len(result)) } for i, r := range result { if r != result3[i] { t.Fatalf("Expect %#v, got %#v", result3, result) } } /* */ date4Start := time.Date(2015, 12, 31, 12, 42, 11, 0, time.UTC) date4End := time.Date(2016, 1, 2, 13, 42, 11, 0, time.UTC) result4 := []string{"2015_12_31", "2016_01_01"} result = TimeKeyCoverage(date4Start, date4End) if len(result) != len(result4) { t.Fatalf("Expect %d elements, got %d", len(result4), len(result)) } for i, r := range result { if r != result4[i] { t.Fatalf("Expect %#v, got %#v", result4, result) } } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/000077500000000000000000000000001411706463700204775ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/000077500000000000000000000000001411706463700225365ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/etix/000077500000000000000000000000001411706463700235075ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/etix/goftp/000077500000000000000000000000001411706463700246265ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/etix/goftp/.travis.yml000066400000000000000000000006261411706463700267430ustar00rootroot00000000000000language: go go: - 1.7.3 before_install: - sudo mkdir --mode 0777 -p /var/ftp/incoming - sudo apt-get update -qq - sudo apt-get install -qq vsftpd - sudo cp $TRAVIS_BUILD_DIR/.vsftpd.conf /etc/vsftpd.conf - sudo service vsftpd restart - sudo sysctl net.ipv6.conf.lo.disable_ipv6=0 - go get github.com/axw/gocov/gocov - go get github.com/mattn/goveralls script: - $GOPATH/bin/goveralls -service=travis-ci 
mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/etix/goftp/.vsftpd.conf000066400000000000000000000004061411706463700270610ustar00rootroot00000000000000# Used by Travis CI listen=NO listen_ipv6=YES write_enable=YES dirmessage_enable=YES secure_chroot_dir=/var/run/vsftpd/empty anonymous_enable=YES anon_root=/var/ftp anon_upload_enable=YES anon_mkdir_write_enable=YES anon_other_write_enable=YES anon_umask=022 mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/etix/goftp/LICENSE000066400000000000000000000013711411706463700256350ustar00rootroot00000000000000Copyright (c) 2011-2013, Julien Laffaye Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/etix/goftp/README.md000066400000000000000000000010261411706463700261040ustar00rootroot00000000000000# goftp # [![Build Status](https://travis-ci.org/jlaffaye/ftp.svg?branch=master)](https://travis-ci.org/jlaffaye/ftp) [![Coverage Status](https://coveralls.io/repos/jlaffaye/ftp/badge.svg?branch=master&service=github)](https://coveralls.io/github/jlaffaye/ftp?branch=master) [![Go ReportCard](http://goreportcard.com/badge/jlaffaye/ftp)](http://goreportcard.com/report/jlaffaye/ftp) A FTP client package for Go ## Install ## ``` go get -u github.com/jlaffaye/ftp ``` ## Documentation ## http://godoc.org/github.com/jlaffaye/ftp mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/etix/goftp/ftp.go000066400000000000000000000446451411706463700257630ustar00rootroot00000000000000// Package ftp implements a FTP client as described in RFC 959. package ftp import ( "bufio" "errors" "io" "net" "net/textproto" "strconv" "strings" "time" "unicode" ) // EntryType describes the different types of an Entry. type EntryType int // The differents types of an Entry const ( EntryTypeFile EntryType = iota EntryTypeFolder EntryTypeLink ) // ServerConn represents the connection to a remote FTP server. type ServerConn struct { conn *textproto.Conn host string timeout time.Duration rwtimeout time.Duration features map[string]string disableEPSV bool listall bool } // Entry describes a file and is returned by List(). 
type Entry struct { Name string Type EntryType Size uint64 Time time.Time } // response represent a data-connection type response struct { conn net.Conn c *ServerConn } type ttConn struct { net.Conn timeout time.Duration } func (t ttConn) Read(buf []byte) (int, error) { t.Conn.SetReadDeadline(time.Now().Add(t.timeout)) return t.Conn.Read(buf) } func (t ttConn) Write(buf []byte) (int, error) { t.Conn.SetWriteDeadline(time.Now().Add(t.timeout)) return t.Conn.Write(buf) } // Connect is an alias to Dial, for backward compatibility func Connect(addr string) (*ServerConn, error) { return Dial(addr) } // Dial is like DialTimeout with no timeout func Dial(addr string) (*ServerConn, error) { return DialTimeout(addr, 0, 0) } // DialTimeout initializes the connection to the specified ftp server address. // // It is generally followed by a call to Login() as most FTP commands require // an authenticated user. func DialTimeout(addr string, timeout time.Duration, rwtimeout time.Duration) (*ServerConn, error) { tconn, err := net.DialTimeout("tcp", addr, timeout) if err != nil { return nil, err } ttconn := ttConn{ Conn: tconn, timeout: rwtimeout, } // Use the resolved IP address in case addr contains a domain name // If we use the domain name, we might not resolve to the same IP. remoteAddr := ttconn.RemoteAddr().String() host, _, err := net.SplitHostPort(remoteAddr) if err != nil { return nil, err } conn := textproto.NewConn(ttconn) c := &ServerConn{ conn: conn, host: host, timeout: timeout, rwtimeout: rwtimeout, features: make(map[string]string), listall: true, } _, _, err = c.conn.ReadResponse(StatusReady) if err != nil { c.Quit() return nil, err } err = c.feat() if err != nil { c.Quit() return nil, err } return c, nil } // Login authenticates the client with specified user and password. // // "anonymous"/"anonymous" is a common user/password scheme for FTP servers // that allows anonymous read-only accounts. func (c *ServerConn) Login(user, password string) error { code, message, err := c.cmd(-1, "USER %s", user) if err != nil { return err } switch code { case StatusLoggedIn: case StatusUserOK: _, _, err = c.cmd(StatusLoggedIn, "PASS %s", password) if err != nil { return err } default: return errors.New(message) } // Switch to binary mode _, _, err = c.cmd(StatusCommandOK, "TYPE I") if err != nil { return err } return nil } // feat issues a FEAT FTP command to list the additional commands supported by // the remote FTP server. // FEAT is described in RFC 2389 func (c *ServerConn) feat() error { code, message, err := c.cmd(-1, "FEAT") if err != nil { return err } if code != StatusSystem { // The server does not support the FEAT command. This is not an // error: we consider that there is no additional feature. return nil } lines := strings.Split(message, "\n") for _, line := range lines { if !strings.HasPrefix(line, " ") { continue } line = strings.TrimSpace(line) featureElements := strings.SplitN(line, " ", 2) command := featureElements[0] var commandDesc string if len(featureElements) == 2 { commandDesc = featureElements[1] } c.features[command] = commandDesc } return nil } func (c *ServerConn) Feature(feat string) (string, bool) { v, ok := c.features[feat] return v, ok } // epsv issues an "EPSV" command to get a port number for a data connection. 
func (c *ServerConn) epsv() (port int, err error) { _, line, err := c.cmd(StatusExtendedPassiveMode, "EPSV") if err != nil { return } start := strings.Index(line, "|||") end := strings.LastIndex(line, "|") if start == -1 || end == -1 { err = errors.New("Invalid EPSV response format") return } port, err = strconv.Atoi(line[start+3 : end]) return } // pasv issues a "PASV" command to get a port number for a data connection. func (c *ServerConn) pasv() (port int, err error) { _, line, err := c.cmd(StatusPassiveMode, "PASV") if err != nil { return } // PASV response format : 227 Entering Passive Mode (h1,h2,h3,h4,p1,p2). start := strings.Index(line, "(") end := strings.LastIndex(line, ")") if start == -1 || end == -1 { return 0, errors.New("Invalid PASV response format") } // We have to split the response string pasvData := strings.Split(line[start+1:end], ",") if len(pasvData) < 6 { return 0, errors.New("Invalid PASV response format") } // Let's compute the port number portPart1, err1 := strconv.Atoi(pasvData[4]) if err1 != nil { err = err1 return } portPart2, err2 := strconv.Atoi(pasvData[5]) if err2 != nil { err = err2 return } // Recompose port port = portPart1*256 + portPart2 return } // getDataConnPort returns a port for a new data connection // it uses the best available method to do so func (c *ServerConn) getDataConnPort() (int, error) { if !c.disableEPSV { if port, err := c.epsv(); err == nil { return port, nil } // if there is an error, disable EPSV for the next attempts c.disableEPSV = true } return c.pasv() } // openDataConn creates a new FTP data connection. func (c *ServerConn) openDataConn() (net.Conn, error) { port, err := c.getDataConnPort() if err != nil { return nil, err } tconn, err := net.DialTimeout("tcp", net.JoinHostPort(c.host, strconv.Itoa(port)), c.timeout) if err != nil { return tconn, err } conn := ttConn{ Conn: tconn, timeout: c.rwtimeout, } return conn, nil } // cmd is a helper function to execute a command and check for the expected FTP // return code func (c *ServerConn) cmd(expected int, format string, args ...interface{}) (int, string, error) { _, err := c.conn.Cmd(format, args...) if err != nil { return 0, "", err } return c.conn.ReadResponse(expected) } // cmdDataConnFrom executes a command which require a FTP data connection. // Issues a REST FTP command to specify the number of bytes to skip for the transfer. func (c *ServerConn) cmdDataConnFrom(offset uint64, format string, args ...interface{}) (net.Conn, error) { conn, err := c.openDataConn() if err != nil { return nil, err } if offset != 0 { _, _, err := c.cmd(StatusRequestFilePending, "REST %d", offset) if err != nil { conn.Close() return nil, err } } _, err = c.conn.Cmd(format, args...) if err != nil { conn.Close() return nil, err } code, msg, err := c.conn.ReadResponse(-1) if err != nil { conn.Close() return nil, err } if code != StatusAlreadyOpen && code != StatusAboutToSend { conn.Close() return nil, &textproto.Error{Code: code, Msg: msg} } return conn, nil } var errUnsupportedListLine = errors.New("Unsupported LIST line") // parseRFC3659ListLine parses the style of directory line defined in RFC 3659. 
func parseRFC3659ListLine(line string) (*Entry, error) { iSemicolon := strings.Index(line, ";") iWhitespace := strings.Index(line, " ") if iSemicolon < 0 || iSemicolon > iWhitespace { return nil, errUnsupportedListLine } e := &Entry{ Name: line[iWhitespace+1:], } for _, field := range strings.Split(line[:iWhitespace-1], ";") { i := strings.Index(field, "=") if i < 1 { return nil, errUnsupportedListLine } key := field[:i] value := field[i+1:] switch key { case "modify": var err error e.Time, err = time.Parse("20060102150405", value) if err != nil { return nil, err } case "type": switch value { case "cdir", "pdir": // Discard current and parent dir return nil, nil case "dir": e.Type = EntryTypeFolder case "file": e.Type = EntryTypeFile case "OS.unix=symlink": e.Type = EntryTypeLink } case "size": e.setSize(value) } } return e, nil } // parse file or folder name with starting or containing multiple whitespaces func fieldsLsList(s string) []string { n := 8 fields := make([]string, 0, n+1) fieldStart := -1 nextbreak := false for i, c := range s { if unicode.IsSpace(c) { if fieldStart >= 0 { fields = append(fields, s[fieldStart:i]) fieldStart = -1 } if nextbreak { fields = append(fields, s[i+1:]) break } } else { if fieldStart == -1 { fieldStart = i if len(fields) == n-1 { nextbreak = true } } } } if fieldStart != -1 { fields = append(fields, s[fieldStart:]) } return fields } // parseLsListLine parses a directory line in a format based on the output of // the UNIX ls command. func parseLsListLine(line string) (*Entry, error) { fields := fieldsLsList(line) if len(fields) >= 7 && fields[1] == "folder" && fields[2] == "0" { e := &Entry{ Type: EntryTypeFolder, Name: strings.Join(fields[6:], " "), } if err := e.setTime(fields[3:6]); err != nil { return nil, err } return e, nil } if len(fields) < 8 { return nil, errUnsupportedListLine } if fields[1] == "0" { e := &Entry{ Type: EntryTypeFile, Name: strings.Join(fields[7:], " "), } if err := e.setSize(fields[2]); err != nil { return nil, err } if err := e.setTime(fields[4:7]); err != nil { return nil, err } return e, nil } if len(fields) < 9 { return nil, errUnsupportedListLine } e := &Entry{} switch fields[0][0] { case '-': e.Type = EntryTypeFile if err := e.setSize(fields[4]); err != nil { return nil, err } case 'd': e.Type = EntryTypeFolder case 'l': e.Type = EntryTypeLink default: return nil, errors.New("Unknown entry type") } if err := e.setTime(fields[5:8]); err != nil { return nil, err } e.Name = fields[8] return e, nil } var dirTimeFormats = []string{ "01-02-06 03:04PM", "2006-01-02 15:04", } // parseDirListLine parses a directory line in a format based on the output of // the MS-DOS DIR command. func parseDirListLine(line string) (*Entry, error) { e := &Entry{} var err error // Try various time formats that DIR might use, and stop when one works. for _, format := range dirTimeFormats { if len(line) > len(format) { e.Time, err = time.Parse(format, line[:len(format)]) if err == nil { line = line[len(format):] break } } } if err != nil { // None of the time formats worked. 
return nil, errUnsupportedListLine } line = strings.TrimLeft(line, " ") if strings.HasPrefix(line, "<DIR>") { e.Type = EntryTypeFolder line = strings.TrimPrefix(line, "<DIR>") } else { space := strings.Index(line, " ") if space == -1 { return nil, errUnsupportedListLine } e.Size, err = strconv.ParseUint(line[:space], 10, 64) if err != nil { return nil, errUnsupportedListLine } e.Type = EntryTypeFile line = line[space:] } e.Name = strings.TrimLeft(line, " ") return e, nil } var listLineParsers = []func(line string) (*Entry, error){ parseRFC3659ListLine, parseLsListLine, parseDirListLine, } // parseListLine parses the various non-standard format returned by the LIST // FTP command. func parseListLine(line string) (*Entry, error) { for _, f := range listLineParsers { e, err := f(line) if err == errUnsupportedListLine { // Try another format. continue } return e, err } return nil, errUnsupportedListLine } func (e *Entry) setSize(str string) (err error) { e.Size, err = strconv.ParseUint(str, 0, 64) return } func (e *Entry) setTime(fields []string) (err error) { var timeStr string if strings.Contains(fields[2], ":") { // this year thisYear, _, _ := time.Now().Date() timeStr = fields[1] + " " + fields[0] + " " + strconv.Itoa(thisYear)[2:4] + " " + fields[2] + " GMT" } else { // not this year if len(fields[2]) != 4 { return errors.New("Invalid year format in time string") } timeStr = fields[1] + " " + fields[0] + " " + fields[2][2:4] + " 00:00 GMT" } e.Time, err = time.Parse("_2 Jan 06 15:04 MST", timeStr) return } func (c *ServerConn) LastModificationDate(path string) (t time.Time, err error) { if _, mdtmSupported := c.features["MDTM"]; !mdtmSupported { return t, errors.New("MDTM is not supported on this server") } _, msg, err := c.cmd(StatusFile, "MDTM %s", path) if err != nil { return } // Line formats: // 213 20150413095032 // 213 20150413095032.999 if len(msg) < 14 { return t, errors.New("Command unsupported on this server") } gmtLoc, _ := time.LoadLocation("GMT") t, err = time.ParseInLocation("20060102150405.999", msg, gmtLoc) return } // NameList issues an NLST FTP command. func (c *ServerConn) NameList(path string) (entries []string, err error) { conn, err := c.cmdDataConnFrom(0, "NLST %s", path) if err != nil { return } r := &response{conn, c} defer r.Close() scanner := bufio.NewScanner(r) for scanner.Scan() { entries = append(entries, scanner.Text()) } if err = scanner.Err(); err != nil { return entries, err } return } // List issues a LIST FTP command. func (c *ServerConn) List(path string) (entries []*Entry, err error) { var conn net.Conn commands := []string{"MLSD", "LIST -a", "LIST"} if !c.listall { commands = append(commands[:1], commands[2:]...) } if _, mlstSupported := c.features["MLST"]; !mlstSupported { commands = commands[1:] } for _, cmd := range commands { conn, err = c.cmdDataConnFrom(0, "%s %s", cmd, path) if err == nil { break } if cmd == "LIST -a" { // "LIST -a" is not supported // Fall back to LIST c.listall = false } } if err != nil { return } r := &response{conn, c} defer r.Close() scanner := bufio.NewScanner(r) for scanner.Scan() { line := scanner.Text() entry, err := parseListLine(line) if err == nil && entry != nil { entries = append(entries, entry) } } if err := scanner.Err(); err != nil { return nil, err } return } // ChangeDir issues a CWD FTP command, which changes the current directory to // the specified path. 
func (c *ServerConn) ChangeDir(path string) error { _, _, err := c.cmd(StatusRequestedFileActionOK, "CWD %s", path) return err } // ChangeDirToParent issues a CDUP FTP command, which changes the current // directory to the parent directory. This is similar to a call to ChangeDir // with a path set to "..". func (c *ServerConn) ChangeDirToParent() error { _, _, err := c.cmd(StatusRequestedFileActionOK, "CDUP") return err } // CurrentDir issues a PWD FTP command, which Returns the path of the current // directory. func (c *ServerConn) CurrentDir() (string, error) { _, msg, err := c.cmd(StatusPathCreated, "PWD") if err != nil { return "", err } start := strings.Index(msg, "\"") end := strings.LastIndex(msg, "\"") if start == -1 || end == -1 { return "", errors.New("Unsuported PWD response format") } return msg[start+1 : end], nil } // Retr issues a RETR FTP command to fetch the specified file from the remote // FTP server. // // The returned ReadCloser must be closed to cleanup the FTP data connection. func (c *ServerConn) Retr(path string) (io.ReadCloser, error) { return c.RetrFrom(path, 0) } // RetrFrom issues a RETR FTP command to fetch the specified file from the remote // FTP server, the server will not send the offset first bytes of the file. // // The returned ReadCloser must be closed to cleanup the FTP data connection. func (c *ServerConn) RetrFrom(path string, offset uint64) (io.ReadCloser, error) { conn, err := c.cmdDataConnFrom(offset, "RETR %s", path) if err != nil { return nil, err } return &response{conn, c}, nil } // Stor issues a STOR FTP command to store a file to the remote FTP server. // Stor creates the specified file with the content of the io.Reader. // // Hint: io.Pipe() can be used if an io.Writer is required. func (c *ServerConn) Stor(path string, r io.Reader) error { return c.StorFrom(path, r, 0) } // StorFrom issues a STOR FTP command to store a file to the remote FTP server. // Stor creates the specified file with the content of the io.Reader, writing // on the server will start at the given file offset. // // Hint: io.Pipe() can be used if an io.Writer is required. func (c *ServerConn) StorFrom(path string, r io.Reader, offset uint64) error { conn, err := c.cmdDataConnFrom(offset, "STOR %s", path) if err != nil { return err } _, err = io.Copy(conn, r) conn.Close() if err != nil { return err } _, _, err = c.conn.ReadResponse(StatusClosingDataConnection) return err } // Rename renames a file on the remote FTP server. func (c *ServerConn) Rename(from, to string) error { _, _, err := c.cmd(StatusRequestFilePending, "RNFR %s", from) if err != nil { return err } _, _, err = c.cmd(StatusRequestedFileActionOK, "RNTO %s", to) return err } // Delete issues a DELE FTP command to delete the specified file from the // remote FTP server. func (c *ServerConn) Delete(path string) error { _, _, err := c.cmd(StatusRequestedFileActionOK, "DELE %s", path) return err } // MakeDir issues a MKD FTP command to create the specified directory on the // remote FTP server. func (c *ServerConn) MakeDir(path string) error { _, _, err := c.cmd(StatusPathCreated, "MKD %s", path) return err } // RemoveDir issues a RMD FTP command to remove the specified directory from // the remote FTP server. func (c *ServerConn) RemoveDir(path string) error { _, _, err := c.cmd(StatusRequestedFileActionOK, "RMD %s", path) return err } // NoOp issues a NOOP FTP command. // NOOP has no effects and is usually used to prevent the remote FTP server to // close the otherwise idle connection. 
func (c *ServerConn) NoOp() error { _, _, err := c.cmd(StatusCommandOK, "NOOP") return err } // Logout issues a REIN FTP command to logout the current user. func (c *ServerConn) Logout() error { _, _, err := c.cmd(StatusReady, "REIN") return err } // Quit issues a QUIT FTP command to properly close the connection from the // remote FTP server. func (c *ServerConn) Quit() error { c.conn.Cmd("QUIT") return c.conn.Close() } // Read implements the io.Reader interface on a FTP data connection. func (r *response) Read(buf []byte) (int, error) { return r.conn.Read(buf) } // Close implements the io.Closer interface on a FTP data connection. func (r *response) Close() error { err := r.conn.Close() _, _, err2 := r.c.conn.ReadResponse(StatusClosingDataConnection) if err2 != nil { err = err2 } return err } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/etix/goftp/status.go000066400000000000000000000104651411706463700265060ustar00rootroot00000000000000package ftp // FTP status codes, defined in RFC 959 const ( StatusInitiating = 100 StatusRestartMarker = 110 StatusReadyMinute = 120 StatusAlreadyOpen = 125 StatusAboutToSend = 150 StatusCommandOK = 200 StatusCommandNotImplemented = 202 StatusSystem = 211 StatusDirectory = 212 StatusFile = 213 StatusHelp = 214 StatusName = 215 StatusReady = 220 StatusClosing = 221 StatusDataConnectionOpen = 225 StatusClosingDataConnection = 226 StatusPassiveMode = 227 StatusLongPassiveMode = 228 StatusExtendedPassiveMode = 229 StatusLoggedIn = 230 StatusLoggedOut = 231 StatusLogoutAck = 232 StatusRequestedFileActionOK = 250 StatusPathCreated = 257 StatusUserOK = 331 StatusLoginNeedAccount = 332 StatusRequestFilePending = 350 StatusNotAvailable = 421 StatusCanNotOpenDataConnection = 425 StatusTransfertAborted = 426 StatusInvalidCredentials = 430 StatusHostUnavailable = 434 StatusFileActionIgnored = 450 StatusActionAborted = 451 Status452 = 452 StatusBadCommand = 500 StatusBadArguments = 501 StatusNotImplemented = 502 StatusBadSequence = 503 StatusNotImplementedParameter = 504 StatusNotLoggedIn = 530 StatusStorNeedAccount = 532 StatusFileUnavailable = 550 StatusPageTypeUnknown = 551 StatusExceededStorage = 552 StatusBadFileName = 553 ) var statusText = map[int]string{ // 200 StatusCommandOK: "Command okay.", StatusCommandNotImplemented: "Command not implemented, superfluous at this site.", StatusSystem: "System status, or system help reply.", StatusDirectory: "Directory status.", StatusFile: "File status.", StatusHelp: "Help message.", StatusName: "", StatusReady: "Service ready for new user.", StatusClosing: "Service closing control connection.", StatusDataConnectionOpen: "Data connection open; no transfer in progress.", StatusClosingDataConnection: "Closing data connection. 
Requested file action successful.", StatusPassiveMode: "Entering Passive Mode.", StatusLongPassiveMode: "Entering Long Passive Mode.", StatusExtendedPassiveMode: "Entering Extended Passive Mode.", StatusLoggedIn: "User logged in, proceed.", StatusLoggedOut: "User logged out; service terminated.", StatusLogoutAck: "Logout command noted, will complete when transfer done.", StatusRequestedFileActionOK: "Requested file action okay, completed.", StatusPathCreated: "Path created.", // 300 StatusUserOK: "User name okay, need password.", StatusLoginNeedAccount: "Need account for login.", StatusRequestFilePending: "Requested file action pending further information.", // 400 StatusNotAvailable: "Service not available, closing control connection.", StatusCanNotOpenDataConnection: "Can't open data connection.", StatusTransfertAborted: "Connection closed; transfer aborted.", StatusInvalidCredentials: "Invalid username or password.", StatusHostUnavailable: "Requested host unavailable.", StatusFileActionIgnored: "Requested file action not taken.", StatusActionAborted: "Requested action aborted. Local error in processing.", Status452: "Insufficient storage space in system.", // 500 StatusBadCommand: "Command unrecognized.", StatusBadArguments: "Syntax error in parameters or arguments.", StatusNotImplemented: "Command not implemented.", StatusBadSequence: "Bad sequence of commands.", StatusNotImplementedParameter: "Command not implemented for that parameter.", StatusNotLoggedIn: "Not logged in.", StatusStorNeedAccount: "Need account for storing files.", StatusFileUnavailable: "File unavailable.", StatusPageTypeUnknown: "Page type unknown.", StatusExceededStorage: "Exceeded storage allocation.", StatusBadFileName: "File name not allowed.", } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/000077500000000000000000000000001411706463700242325ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/000077500000000000000000000000001411706463700255475ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/LICENSE000066400000000000000000000261361411706463700265640ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/000077500000000000000000000000001411706463700261545ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/cgzip/000077500000000000000000000000001411706463700272705ustar00rootroot00000000000000mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/cgzip/adler32.go000066400000000000000000000031411411706463700310520ustar00rootroot00000000000000// +build cgo /* Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package cgzip /* #cgo CFLAGS: -Werror=implicit #cgo pkg-config: zlib #include "zlib.h" */ import "C" import ( "hash" "unsafe" ) type adler32Hash struct { adler C.uLong } // NewAdler32 creates an empty buffer which has an adler32 of '1'. The go // hash/adler32 does the same. func NewAdler32() hash.Hash32 { a := &adler32Hash{} a.Reset() return a } // io.Writer interface func (a *adler32Hash) Write(p []byte) (n int, err error) { if len(p) > 0 { a.adler = C.adler32(a.adler, (*C.Bytef)(unsafe.Pointer(&p[0])), (C.uInt)(len(p))) } return len(p), nil } // hash.Hash interface func (a *adler32Hash) Sum(b []byte) []byte { s := a.Sum32() b = append(b, byte(s>>24)) b = append(b, byte(s>>16)) b = append(b, byte(s>>8)) b = append(b, byte(s)) return b } func (a *adler32Hash) Reset() { a.adler = C.adler32(0, (*C.Bytef)(unsafe.Pointer(nil)), 0) } func (a *adler32Hash) Size() int { return 4 } func (a *adler32Hash) BlockSize() int { return 1 } // hash.Hash32 interface func (a *adler32Hash) Sum32() uint32 { return uint32(a.adler) } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/cgzip/crc32.go000066400000000000000000000030731411706463700305360ustar00rootroot00000000000000// +build cgo /* Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package cgzip /* #cgo CFLAGS: -Werror=implicit #cgo pkg-config: zlib #include "zlib.h" */ import "C" import ( "hash" "unsafe" ) type crc32Hash struct { crc C.uLong } // NewCrc32 creates an empty buffer which has an crc32 of '1'. The go // hash/crc32 does the same. func NewCrc32() hash.Hash32 { c := &crc32Hash{} c.Reset() return c } // io.Writer interface func (a *crc32Hash) Write(p []byte) (n int, err error) { if len(p) > 0 { a.crc = C.crc32(a.crc, (*C.Bytef)(unsafe.Pointer(&p[0])), (C.uInt)(len(p))) } return len(p), nil } // hash.Hash interface func (a *crc32Hash) Sum(b []byte) []byte { s := a.Sum32() b = append(b, byte(s>>24)) b = append(b, byte(s>>16)) b = append(b, byte(s>>8)) b = append(b, byte(s)) return b } func (a *crc32Hash) Reset() { a.crc = C.crc32(0, (*C.Bytef)(unsafe.Pointer(nil)), 0) } func (a *crc32Hash) Size() int { return 4 } func (a *crc32Hash) BlockSize() int { return 1 } // hash.Hash32 interface func (a *crc32Hash) Sum32() uint32 { return uint32(a.crc) } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/cgzip/doc.go000066400000000000000000000011541411706463700303650ustar00rootroot00000000000000/* Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Package cgzip wraps the C library for gzip. 
package cgzip mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/cgzip/pure.go000066400000000000000000000006611411706463700305750ustar00rootroot00000000000000// +build !cgo // A slower, pure go alternative to cgzip to allow for cross compilation. package cgzip import ( "compress/gzip" "hash/adler32" "hash/crc32" ) // Writer is an io.WriteCloser. Writes to a Writer are compressed. type Writer = gzip.Writer var ( Z_BEST_SPEED = gzip.BestSpeed NewWriterLevel = gzip.NewWriterLevel NewReader = gzip.NewReader NewCrc32 = crc32.NewIEEE NewAdler32 = adler32.New ) mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/cgzip/reader.go000066400000000000000000000056431411706463700310710ustar00rootroot00000000000000// +build cgo /* Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package cgzip import "io" // err starts out as nil // we will call inflateEnd when we set err to a value: // - whatever error is returned by the underlying reader // - io.EOF if Close was called type reader struct { r io.Reader in []byte strm zstream err error skipIn bool } // NewReader returns a new cgzip.reader for reading gzip files with the C gzip // library. func NewReader(r io.Reader) (io.ReadCloser, error) { return NewReaderBuffer(r, DEFAULT_COMPRESSED_BUFFER_SIZE) } // NewReaderBuffer returns a new cgzip.reader with a given buffer size for // reading gzip files with the C gzip library. func NewReaderBuffer(r io.Reader, bufferSize int) (io.ReadCloser, error) { z := &reader{r: r, in: make([]byte, bufferSize)} if err := z.strm.inflateInit(); err != nil { return nil, err } return z, nil } // Read reads from the gz stream. func (z *reader) Read(p []byte) (int, error) { if z.err != nil { return 0, z.err } if len(p) == 0 { return 0, nil } // read and deflate until the output buffer is full z.strm.setOutBuf(p, len(p)) for { // if we have no data to inflate, read more if !z.skipIn && z.strm.availIn() == 0 { var n int n, z.err = z.r.Read(z.in) // If we got data and EOF, pretend we didn't get the // EOF. That way we will return the right values // upstream. Note this will trigger another read // later on, that should return (0, EOF). if n > 0 && z.err == io.EOF { z.err = nil } // FIXME(alainjobart) this code is not compliant with // the Reader interface. We should process all the // data we got from the reader, and then return the // error, whatever it is. if (z.err != nil && z.err != io.EOF) || (n == 0 && z.err == io.EOF) { z.strm.inflateEnd() return 0, z.err } z.strm.setInBuf(z.in, n) } else { z.skipIn = false } // inflate some ret, err := z.strm.inflate(zNoFlush) if err != nil { z.err = err z.strm.inflateEnd() return 0, z.err } // if we read something, we're good have := len(p) - z.strm.availOut() if have > 0 { z.skipIn = ret == Z_OK && z.strm.availOut() == 0 return have, z.err } } } // Close closes the Reader. It does not close the underlying io.Reader. 
func (z *reader) Close() error { if z.err != nil { if z.err != io.EOF { return z.err } return nil } z.strm.inflateEnd() z.err = io.EOF return nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/cgzip/writer.go000066400000000000000000000067301411706463700311410ustar00rootroot00000000000000// +build cgo /* Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package cgzip import ( "fmt" "io" ) const ( // Allowed flush values Z_NO_FLUSH = 0 Z_PARTIAL_FLUSH = 1 Z_SYNC_FLUSH = 2 Z_FULL_FLUSH = 3 Z_FINISH = 4 Z_BLOCK = 5 Z_TREES = 6 // Return codes Z_OK = 0 Z_STREAM_END = 1 Z_NEED_DICT = 2 Z_ERRNO = -1 Z_STREAM_ERROR = -2 Z_DATA_ERROR = -3 Z_MEM_ERROR = -4 Z_BUF_ERROR = -5 Z_VERSION_ERROR = -6 // compression levels Z_NO_COMPRESSION = 0 Z_BEST_SPEED = 1 Z_BEST_COMPRESSION = 9 Z_DEFAULT_COMPRESSION = -1 // our default buffer size // most go io functions use 32KB as buffer size, so 32KB // works well here for compressed data buffer DEFAULT_COMPRESSED_BUFFER_SIZE = 32 * 1024 ) // err starts out as nil // we will call deflateEnd when we set err to a value: // - whatever error is returned by the underlying writer // - io.EOF if Close was called type Writer struct { w io.Writer out []byte strm zstream err error } func NewWriter(w io.Writer) *Writer { z, _ := NewWriterLevelBuffer(w, Z_DEFAULT_COMPRESSION, DEFAULT_COMPRESSED_BUFFER_SIZE) return z } func NewWriterLevel(w io.Writer, level int) (*Writer, error) { return NewWriterLevelBuffer(w, level, DEFAULT_COMPRESSED_BUFFER_SIZE) } func NewWriterLevelBuffer(w io.Writer, level, bufferSize int) (*Writer, error) { z := &Writer{w: w, out: make([]byte, bufferSize)} if err := z.strm.deflateInit(level); err != nil { return nil, err } return z, nil } // this is the main function: it advances the write with either // new data or something else to do, like a flush func (z *Writer) write(p []byte, flush int) int { if len(p) == 0 { z.strm.setInBuf(nil, 0) } else { z.strm.setInBuf(p, len(p)) } // we loop until we don't get a full output buffer // each loop completely writes the output buffer to the underlying // writer for { // deflate one buffer z.strm.setOutBuf(z.out, len(z.out)) z.strm.deflate(flush) // write everything from := 0 have := len(z.out) - int(z.strm.availOut()) for have > 0 { var n int n, z.err = z.w.Write(z.out[from:have]) if z.err != nil { z.strm.deflateEnd() return 0 } from += n have -= n } // we stop trying if we get a partial response if z.strm.availOut() != 0 { break } } // the library guarantees this if z.strm.availIn() != 0 { panic(fmt.Errorf("cgzip: Unexpected error (2)")) } return len(p) } func (z *Writer) Write(p []byte) (n int, err error) { if z.err != nil { return 0, z.err } n = z.write(p, Z_NO_FLUSH) return n, z.err } func (z *Writer) Flush() error { if z.err != nil { return z.err } z.write(nil, Z_SYNC_FLUSH) return z.err } // Calling Close does not close the wrapped io.Writer originally // passed to NewWriterX. 
func (z *Writer) Close() error { if z.err != nil { return z.err } z.write(nil, Z_FINISH) if z.err != nil { return z.err } z.strm.deflateEnd() z.err = io.EOF return nil } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/github.com/youtube/vitess/go/cgzip/zstream.go000066400000000000000000000112201411706463700313000ustar00rootroot00000000000000// +build cgo /* Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package cgzip // See http://www.zlib.net/zlib_how.html for more information on this /* #cgo CFLAGS: -Werror=implicit #cgo pkg-config: zlib #include "zlib.h" // inflateInit2 is a macro, so using a wrapper function int zstream_inflate_init(char *strm) { ((z_stream*)strm)->zalloc = Z_NULL; ((z_stream*)strm)->zfree = Z_NULL; ((z_stream*)strm)->opaque = Z_NULL; ((z_stream*)strm)->avail_in = 0; ((z_stream*)strm)->next_in = Z_NULL; return inflateInit2((z_stream*)strm, 16+15); // 16 makes it understand only gzip files } // deflateInit2 is a macro, so using a wrapper function // using deflateInit2 instead of deflateInit to be able to specify gzip format int zstream_deflate_init(char *strm, int level) { ((z_stream*)strm)->zalloc = Z_NULL; ((z_stream*)strm)->zfree = Z_NULL; ((z_stream*)strm)->opaque = Z_NULL; return deflateInit2((z_stream*)strm, level, Z_DEFLATED, 16+15, // 16 makes it a gzip file, 15 is default 8, Z_DEFAULT_STRATEGY); // default values } unsigned int zstream_avail_in(char *strm) { return ((z_stream*)strm)->avail_in; } unsigned int zstream_avail_out(char *strm) { return ((z_stream*)strm)->avail_out; } char* zstream_msg(char *strm) { return ((z_stream*)strm)->msg; } void zstream_set_in_buf(char *strm, void *buf, unsigned int len) { ((z_stream*)strm)->next_in = (Bytef*)buf; ((z_stream*)strm)->avail_in = len; } void zstream_set_out_buf(char *strm, void *buf, unsigned int len) { ((z_stream*)strm)->next_out = (Bytef*)buf; ((z_stream*)strm)->avail_out = len; } int zstream_inflate(char *strm, int flag) { return inflate((z_stream*)strm, flag); } int zstream_deflate(char *strm, int flag) { return deflate((z_stream*)strm, flag); } void zstream_inflate_end(char *strm) { inflateEnd((z_stream*)strm); } void zstream_deflate_end(char *strm) { deflateEnd((z_stream*)strm); } */ import "C" import ( "fmt" "unsafe" ) const ( zNoFlush = C.Z_NO_FLUSH ) // z_stream is a buffer that's big enough to fit a C.z_stream. // This lets us allocate a C.z_stream within Go, while keeping the contents // opaque to the Go GC. Otherwise, the GC would look inside and complain that // the pointers are invalid, since they point to objects allocated by C code. 
type zstream [unsafe.Sizeof(C.z_stream{})]C.char func (strm *zstream) inflateInit() error { result := C.zstream_inflate_init(&strm[0]) if result != Z_OK { return fmt.Errorf("cgzip: failed to initialize inflate (%v): %v", result, strm.msg()) } return nil } func (strm *zstream) deflateInit(level int) error { result := C.zstream_deflate_init(&strm[0], C.int(level)) if result != Z_OK { return fmt.Errorf("cgzip: failed to initialize deflate (%v): %v", result, strm.msg()) } return nil } func (strm *zstream) inflateEnd() { C.zstream_inflate_end(&strm[0]) } func (strm *zstream) deflateEnd() { C.zstream_deflate_end(&strm[0]) } func (strm *zstream) availIn() int { return int(C.zstream_avail_in(&strm[0])) } func (strm *zstream) availOut() int { return int(C.zstream_avail_out(&strm[0])) } func (strm *zstream) msg() string { return C.GoString(C.zstream_msg(&strm[0])) } func (strm *zstream) setInBuf(buf []byte, size int) { if buf == nil { C.zstream_set_in_buf(&strm[0], nil, C.uint(size)) } else { C.zstream_set_in_buf(&strm[0], unsafe.Pointer(&buf[0]), C.uint(size)) } } func (strm *zstream) setOutBuf(buf []byte, size int) { if buf == nil { C.zstream_set_out_buf(&strm[0], nil, C.uint(size)) } else { C.zstream_set_out_buf(&strm[0], unsafe.Pointer(&buf[0]), C.uint(size)) } } func (strm *zstream) inflate(flag int) (int, error) { ret := C.zstream_inflate(&strm[0], C.int(flag)) switch ret { case Z_NEED_DICT: ret = Z_DATA_ERROR fallthrough case Z_DATA_ERROR, Z_MEM_ERROR: return int(ret), fmt.Errorf("cgzip: failed to inflate (%v): %v", ret, strm.msg()) } return int(ret), nil } func (strm *zstream) deflate(flag int) { ret := C.zstream_deflate(&strm[0], C.int(flag)) if ret == Z_STREAM_ERROR { // all the other error cases are normal, // and this should never happen panic(fmt.Errorf("cgzip: Unexpected error (1)")) } } mirrorbits-0.5.1+git20210123.eeea0e0+ds1/vendor/modules.txt000066400000000000000000000060511411706463700227120ustar00rootroot00000000000000# github.com/etix/goftp v0.0.0-20170217140226-0c13163a1028 github.com/etix/goftp # github.com/golang/protobuf v1.2.0 github.com/golang/protobuf/ptypes github.com/golang/protobuf/ptypes/empty github.com/golang/protobuf/proto github.com/golang/protobuf/ptypes/timestamp github.com/golang/protobuf/ptypes/any github.com/golang/protobuf/ptypes/duration github.com/golang/protobuf/protoc-gen-go/descriptor # github.com/gomodule/redigo v0.0.0-20181026001555-e8fc0692a7e2 github.com/gomodule/redigo/redis # github.com/howeyc/gopass v0.0.0-20170109162249-bf9dde6d0d2c github.com/howeyc/gopass # github.com/op/go-logging v0.0.0-20160315200505-970db520ece7 github.com/op/go-logging # github.com/oschwald/maxminddb-golang v0.0.0-20181014221851-ed835b226061 github.com/oschwald/maxminddb-golang # github.com/pkg/errors v0.0.0-20181008045315-2233dee583dc github.com/pkg/errors # github.com/rafaeljusto/redigomock v0.0.0-20181020085750-2c62053f7724 github.com/rafaeljusto/redigomock # github.com/youtube/vitess v0.0.0-20181105031612-54855ec7b369 github.com/youtube/vitess/go/cgzip # golang.org/x/crypto v0.0.0-20181112202954-3d3f9f413869 golang.org/x/crypto/ssh/terminal # golang.org/x/net v0.0.0-20181114220301-adae6a3d119a golang.org/x/net/context golang.org/x/net/trace golang.org/x/net/internal/timeseries golang.org/x/net/http2 golang.org/x/net/http2/hpack golang.org/x/net/http/httpguts golang.org/x/net/idna # golang.org/x/sys v0.0.0-20181106135930-3a76605856fd golang.org/x/sys/unix golang.org/x/sys/windows # golang.org/x/text v0.3.0 golang.org/x/text/secure/bidirule 
golang.org/x/text/unicode/bidi golang.org/x/text/unicode/norm golang.org/x/text/transform # google.golang.org/genproto v0.0.0-20190111180523-db91494dd46c google.golang.org/genproto/googleapis/rpc/status # google.golang.org/grpc v1.18.0 google.golang.org/grpc google.golang.org/grpc/codes google.golang.org/grpc/status google.golang.org/grpc/metadata google.golang.org/grpc/reflection google.golang.org/grpc/balancer google.golang.org/grpc/balancer/roundrobin google.golang.org/grpc/connectivity google.golang.org/grpc/credentials google.golang.org/grpc/encoding google.golang.org/grpc/encoding/proto google.golang.org/grpc/grpclog google.golang.org/grpc/internal google.golang.org/grpc/internal/backoff google.golang.org/grpc/internal/binarylog google.golang.org/grpc/internal/channelz google.golang.org/grpc/internal/envconfig google.golang.org/grpc/internal/grpcrand google.golang.org/grpc/internal/grpcsync google.golang.org/grpc/internal/transport google.golang.org/grpc/keepalive google.golang.org/grpc/naming google.golang.org/grpc/peer google.golang.org/grpc/resolver google.golang.org/grpc/resolver/dns google.golang.org/grpc/resolver/passthrough google.golang.org/grpc/stats google.golang.org/grpc/tap google.golang.org/grpc/reflection/grpc_reflection_v1alpha google.golang.org/grpc/balancer/base google.golang.org/grpc/credentials/internal google.golang.org/grpc/binarylog/grpc_binarylog_v1 google.golang.org/grpc/internal/syscall # gopkg.in/tylerb/graceful.v1 v1.2.15 gopkg.in/tylerb/graceful.v1 # gopkg.in/yaml.v2 v2.2.1 gopkg.in/yaml.v2
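
The vendored etix/goftp client above exposes the ServerConn methods mirrorbits relies on when scanning FTP mirrors (List, LastModificationDate, Retr, Quit, and the LIST/MLSD parsers). As a rough illustration only — not code taken from mirrorbits — the sketch below walks a single directory using those methods. The helper name walkDir is hypothetical, and establishing/authenticating the connection is assumed to happen elsewhere.

// walkDir is a hypothetical helper showing the shape of the ServerConn API
// defined in the vendored goftp package: it lists one directory, prints
// folders and file sizes, and asks for each file's modification time when
// the server supports MDTM. The *ftp.ServerConn is assumed to be already
// dialed and logged in by code not shown here.
package main

import (
	"fmt"
	"log"

	ftp "github.com/etix/goftp"
)

func walkDir(conn *ftp.ServerConn, path string) error {
	// List issues MLSD or LIST; each line is parsed by parseListLine above.
	entries, err := conn.List(path)
	if err != nil {
		return err
	}
	for _, e := range entries {
		switch e.Type {
		case ftp.EntryTypeFolder:
			fmt.Printf("dir  %s/%s\n", path, e.Name)
		case ftp.EntryTypeFile:
			// LastModificationDate depends on the MDTM feature; servers
			// lacking it return an error, so the time is simply omitted.
			if t, err := conn.LastModificationDate(path + "/" + e.Name); err == nil {
				fmt.Printf("file %s/%s %d bytes (modified %s)\n", path, e.Name, e.Size, t)
			} else {
				fmt.Printf("file %s/%s %d bytes\n", path, e.Name, e.Size)
			}
		}
	}
	return nil
}

func main() {
	// The dial/login step is outside the scope of this sketch.
	var conn *ftp.ServerConn
	if conn == nil {
		log.Fatal("no connection; this sketch only illustrates the API shape")
	}
	defer conn.Quit()
	if err := walkDir(conn, "/pub"); err != nil {
		log.Fatal(err)
	}
}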
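
The vendored vitess cgzip package above keeps the same API surface whether the cgo-backed files (reader.go, writer.go, zstream.go) or the pure-Go fallback (pure.go, aliasing compress/gzip) are compiled in. As a minimal sketch under that assumption — the payload string is made up and the snippet is not taken from mirrorbits — this round-trips a buffer through NewWriterLevel and NewReader, both of which exist in both build variants.

// Round-trip a payload through the cgzip API shown above. With cgo enabled
// this goes through zlib via the C bindings; without cgo the pure.go aliases
// fall back to compress/gzip, so the same code compiles either way.
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/youtube/vitess/go/cgzip"
)

func main() {
	payload := []byte("hello, mirror world")

	// Compress with the fastest level; Z_BEST_SPEED is defined in both variants.
	var buf bytes.Buffer
	w, err := cgzip.NewWriterLevel(&buf, cgzip.Z_BEST_SPEED)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := w.Write(payload); err != nil {
		log.Fatal(err)
	}
	// Close flushes the stream and writes the gzip trailer; it does not
	// close the underlying bytes.Buffer.
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Decompress and verify the round trip.
	r, err := cgzip.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	out, err := ioutil.ReadAll(r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("round-tripped %d bytes, equal=%v\n", len(out), bytes.Equal(out, payload))
}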